Most countries dictate when standardised assessments will be sat – which makes Scotland “almost unique” given that the country's teachers determine the timing of the tests, according to the body charged with designing and delivering Scottish National Standardised Assessments (SNSAs).
The advantage, says the Australian Council for Educational Research (Acer), is that teachers can use the tests diagnostically; the disadvantage is that it is harder to use the information gathered from the assessments to track children's performance over time or to draw comparisons.
This barrier to drawing comparisons between schools and councils will be welcomed by those who fear that test results could be used to create league tables. But for others, the warning will be confirmation that the tests cannot give a valid national picture and are a waste of time.
Acer delivered a workshop at the Association of Directors of Education in Scotland (Ades) annual conference in Cumbernauld last week. Juliette Mendelovits, a research director at Acer, spent a large chunk of the workshop highlighting the wide range of reports that could be generated from the test data by local authorities and schools. These reports allowed them to track the performance of children in the targeted year groups over time; compare schools or classes; and drill down to find out about the performance of “subgroups”, such as girls, disadvantaged pupils or pupils with English as an additional language.
However, because pupils in different classes will take the tests – which are sat in P1, P4, P7 and S3 – at different times of year, all of this data comes with a health warning: the point at which pupils sat the tests must be taken into account when drawing conclusions about how well they are performing.
Last year, when it published information about the results of the SNSAs, Acer warned that there was “clear evidence” that the time of year children sat the assessments could result in a “marked increase” in their performance, with children who sat them later in the year almost always performing better.
It therefore warned that comparisons between pupils or groups of pupils must be carefully drawn.
Ms Mendelovits told Tes Scotland: “The main reason other places don’t do that [allow children to sit standardised assessments at different times of the year] is they are more focused on the aggregated data and less on the diagnostic use of the assessment. And certainly if the focus is on monitoring trends over time – if that’s the key thing you want to get – having a particular time of year when the test is sat makes that a whole lot easier.
“But the focus in Scotland is on how you can help teachers to improve learning so the focus is on when the teacher thinks it would be most advantageous to sit the test and when the child is ready.
“The pro is that you are going to get maximum information from the children at the critical point; the con is it makes comparison more difficult.”
She added, however, that over time, Acer was expecting that patterns of test-sitting would “become more settled” and comparison would be “facilitated”.
At the moment, the point at which the tests are sat in Scotland is not fixed, although schools are increasingly moving towards administering them during the first half of the school year.
Acer UK chief executive Desmond Bermingham put the change down to schools becoming more confident in the use of the tests and beginning to use them for “different purposes”, including to identify pupils’ strengths and weaknesses.
He said: “One trend we are seeing in the numbers is an increasing number of schools using the assessment at the beginning or early on in the academic year. In the first year, fairly predictably, it was almost entirely at the end, but as teachers get more confident, people are beginning to use it for different purposes and our assumption is schools using it at the beginning of the academic year are using it as a diagnostic tool to identify strengths and weaknesses.”
According to delegates at the Ades conference workshop, the feedback they were receiving was that one of the most valuable reports generated by the tests was the breakdown of how children had performed either individually or as a class on each test item. This, they said, was allowing teachers to identify the gaps in their pupils’ learning.
The Scottish government has been at pains to stress that the new SNSAs are not about creating big data for government to judge the education system.
David Leng – the so-called “SNSA product owner” – said at last year’s Scottish Learning Festival that the point of the tests was to give teachers information that helped them to understand pupil progress and identify next steps.
At the same event, Allison Skerrett – an associate professor in language and literacy at the University of Texas, Austin, and one of the Scottish government’s international education advisers – said the tests should be rebranded because they were not truly standardised tests and were more about “benchmarking”.
She said: “Benchmarking is about seeing where someone is at a particular point in time. It carries the message of trying to understand each child’s pathway or learning, and what is needed in terms of support. ‘Standardised’ gives a different picture of ‘this is what everyone should know and be doing at this point in time’.”