Pisa responds to critics with new computer tests

Computer-based adaptive tests were used in the 2018 Pisa assessments being published this week in response to criticisms of the test

Catherine Lough

How reliable are the Pisa global rankings?

In the latest results for Pisa – the Programme for International Student Assessment – new computer-based assessments were developed in response to criticisms that the tests are “one-size-fits-all”.

For the first time, adaptive testing was used for the reading component of Pisa in 2018, which also saw the largest sample of students ever collected, with 600,000 fifteen-year-olds taking part.


In a presentation, Andreas Schleicher, coordinator of the OECD’s Pisa programme, said: “As Pisa has grown bigger and more and more countries participated it became more diverse. Countries join at the high end of the performance distribution but also at the lower end of the performance distribution.”

“This has often been one of the criticisms of Pisa – how can you give a one-size-fits-all assessment to students in so many different environments and abilities?”

Mr Schleicher said the 2018 assessments addressed the issue by using adaptive testing in the reading component, which would prevent students facing particular challenges from being frustrated by overly difficult questions.

“Students start the initial test and if they’ve got those questions fairly right, they will be given harder elements in the test up to the point that they reach their potential.

“In the same way, if students have difficulties with this very first component, they will be given an easier block, and afterwards maybe an even easier block – so in a way, the test responds to the students’ abilities in an interactive way and therefore captures more precisely their different learning abilities.

“It’s also fairer and more appropriate to do this. If you have a student with great difficulties – for example, a student who has recently migrated to the country of destination [and does] not yet speak the language well – and you get a very difficult and complex test, it may be hard for students; they may get very frustrated and not respond well to the test.”

He added that students who were “very smart” would face a different issue, in that they might be given tasks far too easy for them and not respond well either.
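The routing logic Mr Schleicher describes – harder blocks after strong performance, easier blocks after a difficult start – can be sketched in a few lines. The block labels, thresholds and number of stages below are illustrative assumptions, not the actual Pisa 2018 design.

```python
# Illustrative sketch of multi-stage adaptive routing, as described above.
# Thresholds and the block structure are assumptions for illustration only.

def next_block(correct, total, harder_cutoff=0.7, easier_cutoff=0.4):
    """Route a student to a harder, similar or easier block based on
    the proportion of questions answered correctly in the current block."""
    proportion = correct / total
    if proportion >= harder_cutoff:
        return "harder"
    if proportion < easier_cutoff:
        return "easier"
    return "similar"

def run_stages(correct_per_stage, items_per_stage=10):
    """Trace the route a student takes through successive test stages."""
    return [next_block(c, items_per_stage) for c in correct_per_stage]

# A student who starts strongly is routed to harder blocks;
# one who struggles is routed to easier ones.
print(run_stages([9, 8]))  # ['harder', 'harder']
print(run_stages([2, 3]))  # ['easier', 'easier']
```

In this toy version the test “responds” to the student simply by re-scoring after each block; a real design would also use the routing to estimate ability more precisely.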

Mr Schleicher said that open-ended reading questions were not assessed through adaptive testing and were still marked manually, although technology might be developed in future to assess extended answers to open questions.





Catherine Lough is a reporter at Tes.
