The phone on my desk rings, and when I answer it a voice in hushed tones says: “I’ve got one.”
This isn't MI5. This is TQUK, and the voice belongs to Alex, our end-point assessment coordinator. We aren’t looking for fugitives; we’re looking for question writers. This is just one task in a long list of challenges we’ve faced as we prepare to deliver TQUK’s first end-point assessments for the new apprenticeship standards.
The new standards – employer-led, publicly funded programmes of learning designed to replace the old apprenticeship frameworks – will require the apprentice to pass an end-point assessment. This is the collective term for the series of assessments that take place at the end of the formative learning stage. The groups of employers leading the development process, called trailblazers, determine what knowledge and skills the apprentice has to demonstrate and what tests they have to undertake. Only organisations listed on the Register of Apprentice Assessment Organisations are able to deliver end-point assessment, and we are one.
Here, I’ll focus on just one element of assessment that we’re working with: the multiple-choice question paper (also known as the Situational Judgement Test, or the On Demand Test). It’s one of the most recognisable test formats there is and features in many of the new trailblazer assessment plans. TQUK’s been writing MCQs for years. So what, you may ask, is the problem?
In a nutshell, TQUK is not in control. And we like to be in control. The assessment plans created by the trailblazer groups are extremely detailed and rigid, and they specify how things must be put together, right down to the question paper specification. These paper specifications have not been designed by our assessment experts. Nor has the standard – which details the knowledge and skills the apprentice will develop – moved through our organisation’s 10-stage development process, during which we put a lot of thought into the proposed assessment methods. We must work with what we are given and, in some cases, what we are given is not what we would’ve made.
‘Apprenticeships are not qualifications’
Take, for example, the assessment plans which state that “the questions will cover the knowledge and skills identified on the standard”. Here we encounter our first problem: it is not possible to validly test skills with a multiple-choice question. A skills test looks for proof of competence, not proof of knowledge. This immediately renders any content relating to skills unsuitable as subject matter for the questions in this test.
What is left of the standard is, in most cases, not indexed or structured consistently enough to allow a question paper to be generated against defined learning outcomes. Apprenticeships are not qualifications – we all accept that – but the features of a qualification that are being deliberately avoided, such as a structure based around learning outcomes and assessment criteria, exist because of many years of experience: they have evolved into an accepted way of ensuring that the design of content does not hinder valid assessment.
Add to this that some of the standards are split into specialisms, and we begin to reveal the next layer of challenge: covering the vast range of knowledge in the specified number of questions in enough depth to allow for rich feedback. Assessment plans require rich feedback from the test in order to inform other elements of the end-point assessment. One standard that we are working with has left us no choice but to design a paper specification that gives us only 25 questions to assess 14 topics. This means that, at most, each topic will be tested with two questions – sometimes only one. Is it fair to say that an apprentice has failed to demonstrate knowledge of a topic because they have got one question wrong?
‘A whole new set of challenges’
Writing these questions is not straightforward either. Almost every standard we’ve come across that includes an MCQ test requires the questions to be “situational”, demanding the application of knowledge to real-life scenarios. Perhaps we’re a little over-zealous in our requirements for question writers. Or perhaps we recognise that to write these questions you need up-to-date and relevant workplace experience, experience of assessment, and an understanding of the intricacies of multiple-choice question development. People who meet all of these requirements are hard to find.
Just when you think you’ve got it cracked, you pick up the assessment plan for the next trailblazer and, due to the lack of standardisation, you are faced with a whole new set of challenges.
But we will continue to do what we do best: TQUK will develop the most robust and reliable assessments possible within the confines of these assessment plans. Ultimately, we understand and support the intentions behind the trailblazer apprenticeships and what the government is trying to do, and we will contribute wherever we are asked to the development or improvement of new or existing plans.
The Institute for Apprenticeships has an important role to play in providing some structure around the format of the standards and assessment plans. This is vital if Apprentice Assessment Organisations (AAOs) are to develop robust, valid and reliable assessments that do the apprentice justice and allow the principles of sound assessment to be applied. Without it, there is little chance of comparability across assessments delivered by different AAOs.
Katie Orr is head of awarding organisation at TQUK