Although it’s no longer a statutory requirement for colleges to write a self-assessment report (SAR) – and Ofsted no longer insists on seeing one – it remains vital that colleges self-assess, because effective self-assessment provides essential insight into what a college does well and what it needs to do better.
Education is not all about qualifications: it’s also about preparing students for life and work. But outcomes data is important and, as such, the self-assessment process should start with an analysis of the most recent exam performance. In practice, this means achievement rates, pass rates, attendance and retention. But of equal importance are high grades and value-added scores, as well as comparative data showing how various groups of students performed – including those eligible for free school meals, those who’ve been in care, those from ethnic minorities and those eligible for high-needs funding or with special educational needs or disabilities.
Each college is different and context is all: if a particular group of students underperformed the previous year, that group’s performance should also be reviewed – regardless of national priorities – to find out whether the gap has narrowed or closed.
For study programme students, access to enrichment and work experience, as well as outcomes in English and maths, should also be analysed. This is, in part, about compliance – but it’s also about the quality and breadth of the student experience.
Each set of data should be compared with the previous two years and with benchmarks such as national and provider averages.
No stone should be left unturned during the self-assessment process. As well as exams data, leaders should review quality assurance information, such as learning walk reports and the results of work scrutiny. In-year target-setting, assessment and tracking data should also be reviewed to determine whether teacher assessment was accurate, alongside professional-development and performance-management records.
Curriculum planning information should be pored over, too, to ascertain whether the curriculum is responsive to the needs of students, local and national employers – and LEP priorities. Progression and destinations data is crucial here: do a majority of students achieve a positive destination, and do those progressing from one level of qualification to the next do so in college? Progression data can demonstrate that a college has planned pathways leading to higher-level study and skilled employment, rather than a series of dead-end qualifications offered simply because that’s what the college has always done and has the staff to teach.
Student voice surveys should be included in the self-assessment process, because they provide a useful analysis of the consumer experience. The results should be compared with the previous year’s and – if available – benchmark data from similar colleges.
Once self-assessment reports have been written, they need to be validated. This usually involves a formal panel review with members of the senior team, governors and external consultants or advisers.
Curriculum leaders should prepare their reports – including data analysis – in advance of validation meetings and submit them to the panel for their consideration. The panellists should interrogate the reports before the meeting, highlighting key strengths and weaknesses – and annotating pertinent questions and concerns. If this work is done in advance, the meeting itself can focus on panellists’ questions rather than leaders’ presentations, in order to try to ascertain the reasons for certain outcomes and trends.
There is no doubt that curriculum leaders will be tempted to present their findings in a positive light, but what is needed is an honest account of the facts and an appropriate level of challenge.
The documentation given to panellists in advance of validation should not be long or descriptive. Rather, it should be succinct and evaluative. Ultimately, panellists need to know the following:
* What are the headline results per qualification, per level, per age range and per student group?
* What were the achievement and pass rates – and the high grades and value-added scores – versus what was predicted, what was targeted, and the benchmark?
* What interventions were put in place, when and for which students? What effect did each intervention have, and what was learned about its value? Are assessment and tracking effective?
The end result of the self-assessment process – once the report has been validated – should be a quality improvement plan (QIP) that articulates the priorities for the next year or so. SARs and quality improvement plans should talk to each other: self-assessment provides the roadmap for the year ahead, while improvement plans explain how to reach your destination.
QIPs should be succinct and focused on a small number of high-impact actions. If there are too many objectives, it’s likely they’ll be ignored. QIPs should be specific, measurable, achievable, realistic and timely (SMART), making it clear who is responsible for each action, the resources that are required, the agreed deadline for completion, what success will look like and how it will be measured.
QIPs should also be “live” documents, frequently reviewed and annotated, rather than corporate brochures, dragged from their dusty drawers to impress governors and inspectors.
The self-assessment and improvement planning process is cyclical and never-ending. Once the year is out, the QIP should be reviewed and the outcome of this review should feed into the next SAR. It’s a war of attrition and one you’ll never win. But that’s the beauty of working in education: you’re always learning and improving.
Matt Bromley is an education journalist and author. He tweets @mj_bromley