Winston Churchill once said ‘success is stumbling from failure to failure without losing enthusiasm.’
Looking back on assessment in our first year at Michaela, I can now see what I was blind to then: we stumbled and blundered. What mistakes did we make, and how did we stumble?
We spent hours marking. We spent ages inputting data. And we didn’t design assessments cumulatively.
First mistake: we spent exorbitant amounts of time in the first year marking, in particular English and History essays and paragraphs. We wrote comments, we set targets, we tried individualised icons, we corrected misspellings, we corrected grammatical errors, we judged and scored written accuracy, we wrote and shared rubrics with pupils. We spent hours every week on this. Over the year, we must have spent hundreds of hours on it.
The hidden pitfall of marking is opportunity cost. Every hour that a teacher spends marking is an hour they can’t spend on renewable resourcing: resourcing that endures for years. Marking a book is useful for one pupil, once; creating a knowledge organiser is useful for every pupil (and every teacher) that ever uses it. Hornets are high-effort, low-impact; butterflies are high-impact, low-effort. Knowledge organisers are a butterfly; marking is a hornet. We had been blind to just how badly the hornet’s nest of marking was stinging us. So we cut marking altogether.
Our second mistake: we spent far too much time in the first few years on data input. We typed in multiple scores for pupils that we didn’t use. Preoccupied by progress, we thought we needed as many numbers as we could get our hands on. But the simplistic equation of ‘more data, better progress’ didn’t hold up under scrutiny. Every teacher typed in multiple scores for each assessment, which were then collated so we could analyse the breakdowns. We were deluged with data, but thirsting for insight. There was far too much data to possibly act on. My muddled thinking left us mired in mediocrity, and we had invested hundreds of hours for little long-term impact.
What we realised is this: data must serve teachers, rather than teachers serving data. Our axiom now is that we must only collect data that we use. There’s no point in drowning in data, or killing ourselves to input data that we don’t use.
Our third mistake was this: we had forgotten about forgetting. We designed end-of-unit assessments that tested what pupils had only just learned, and then congratulated ourselves when they did well whilst it was still fresh in their memory. We had pupils write essays just after they had finished the unit. We coached them to superb performances – but they were performances that they would not be able to repeat on that text in English or that period of History even a few weeks later. Certainly, months later, they wouldn’t stand a chance. Just as I would flunk my Physics GCSE badly if you asked me to retake it tomorrow, so just one year on, our pupils would flunk the exact assessment that they had aced a year earlier.
With hindsight, these three mistakes – on marking, data and design – helped us realise our two great blind spots in assessment: workload and memory. We didn’t design our assessments with pupils’ memory and teachers’ workload in mind.
We were creating unnecessary and unhelpful workload for teachers that prevented them from focusing on what matters most. Marking and data were meant to improve teaching and assessment, but teaching and assessment had ended up being inhibited by them.
We were forgetting just how much our pupils were forgetting. Forgetting is a huge problem amongst pupils and a huge blind spot in teaching. If pupils have forgotten the Shakespeare play they were studying last year, can they really be said to have learned it properly? What if they can’t remember the causes or course of the war they studied last year in history? Learning is for nothing if it’s all forgotten.
The Battle of the Bridge
Assessment is the bridge between teaching and learning. There’s always a teaching-learning gap. Just because we’ve taught it, it doesn’t mean pupils have learned it. The best teachers close the teaching-learning gap so that their pupils learn – and remember rather than forget – what they are being taught. We’ve found the idea of assessment as a bridge to be a useful analogy for curriculum and exam design. Once you see assessment as a bridge, you can begin to ask new questions that generate new insights: what principles in teaching are equivalent to the laws of physics that underpin the engineering and construction of the bridge? How can we design and create a bridge that is built to endure? How can we create an assessment model that bridges the teaching-learning gap?
We’ve found three assessment solutions with exciting potential. Here’s why I’m excited about them:
- They have absolutely no cost.
- They are low-effort for staff to create.
- They have high impact on pupils’ learning.
- They are not tech-dependent at all.
- They are based on decades of scientific research.
- They can be implemented immediately by any teacher on Monday morning.
- They have stood the test of time at Michaela over the last three years.
- I anticipate we’ll still be using them in three, six and even ten years’ time, and beyond.
In short: no cost, low effort, high impact, research-based, long-term solutions.
Three of the most effective assessment tools we’ve found for closing the teaching-learning gap are daily recaps, weekly quizzes and knowledge exams.
Over 100 years of scientific research evidence suggests that the testing effect has a powerful impact on remembering and forgetting. If pupils are to remember and learn what we teach them in the subject curriculum, assessment must be cumulative and revisit curriculum content. The teaching-learning gap gets worse if pupils forget what they’ve learned. As cognitive science has shown, ‘if nothing has been retained in long-term memory, nothing has been learned’. Assessment, by ensuring pupils revisit what they’re learning, can help ensure they remember it.
Pupils forget very swiftly. We use daily recaps, weekly quizzes and biannual knowledge exams to boost pupils’ long-term memory retention and prevent forgetting.
- Daily recaps
Daily recaps are a butterfly: low-effort, high-impact. Departments create recap questions for every single lesson. Every single lesson starts with a recap. They are easy to resource. They consolidate pupils’ learning so they don’t forget. Pupils spend up to 20 minutes of each lesson applying what they’ve learned before. In English, for example, we spend those 20 minutes on grammar recaps, spelling recaps, vocabulary recaps, literature recaps (with questions on characters, themes, plots, devices and context). We do recaps on the unit they have been studying over the last few weeks. We do recaps on the previous unit and previous year’s units. This daily habit builds very strong retention and motivation: pupils feel motivated because they see how much they are remembering and how much more they are learning than ever before. All recaps are open questions, and weaker forms might be given clues. The recaps are always written; they are no-stakes, without any data being collected; they give instant feedback, as they are swiftly marked, corrected and improved by pupils themselves. We ask pupils afterwards: ‘hands up who got 4 out of 5? Hands up who got 5 out of 5, 100%?’ Pupils achieving 100% feel successful and motivated to work hard to revise.
- Weekly Quizzes
Weekly quizzes are a butterfly: low-effort on workload, high-impact on learning. Departments create quiz questions for every week in the school year. Every week there is a quiz in every subject. They are easy to resource. They challenge and test pupils’ understanding. They are mastery tests, where most pupils should be able to achieve a strong result.
We have dramatically, decisively simplified how teachers score them. Instead of marking every single question laboriously, teachers simply sort them into piles. They make swift judgement calls about whether each pupil’s quiz is a pass, excellent, or fail. Each judgement is a simple scan of the pupil’s quiz paper and a decision as to which of the three piles it should be in. Accuracy isn’t perfect, but nor does it need to be: there are diminishing returns to perfecting accuracy.
The data is then entered into a beautifully simple tracker in 30 seconds. Any pupil who fails often is red-flagged, so teachers can focus in lessons on pupils who are struggling. And that is the only data point our teachers have to keep in mind: which pupils are struggling most?
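For readers who like to see the mechanics, the tracker’s logic can be sketched in a few lines of code. This is purely a hypothetical illustration, not Michaela’s actual spreadsheet: the pupil names, the judgement labels and the threshold for ‘failing often’ are all assumptions.

```python
# Hypothetical sketch of a weekly-quiz tracker: each quiz is recorded as one
# of three judgements, and a pupil is red-flagged once they fail often.
from collections import Counter

FAIL_THRESHOLD = 3  # assumption: 'failing often' means three or more fails


def red_flagged(results: dict[str, list[str]]) -> list[str]:
    """Return pupils whose record contains FAIL_THRESHOLD or more fails.

    results maps pupil name -> list of judgements, each one of
    "excellent", "pass" or "fail".
    """
    flagged = []
    for pupil, judgements in results.items():
        counts = Counter(judgements)  # missing keys count as zero
        if counts["fail"] >= FAIL_THRESHOLD:
            flagged.append(pupil)
    return flagged


# Example term of quiz judgements (invented names and results)
term = {
    "Amy":   ["pass", "excellent", "pass", "pass"],
    "Ben":   ["fail", "fail", "pass", "fail"],
    "Chloe": ["excellent", "pass", "fail", "pass"],
}
print(red_flagged(term))  # -> ['Ben']
```

The point of the sketch is how little information survives: whole quizzes collapse into one of three labels, and the only output anyone acts on is the short list of struggling pupils.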
- Knowledge Exams
Knowledge exams are another butterfly – high impact, low effort. What I love about our knowledge exams is that they are cumulative, so that pupils revise and remember what they’ve learned. We have exam weeks twice yearly, in January and July (not half-termly). We set GCSE-style exams for depth, and we set knowledge exams to test a much fuller breadth of the knowledge pupils have learned. Knowledge exams are 35-question exams that take 60 minutes to complete. They are beautifully simple: each fits on one sheet of A4 paper, and pupils can answer it on one double-sided piece of A4. The breadth we can achieve with these exams is staggering. By Year 9, we have three knowledge exams in History, Religion, Science and English alone; they organise 35 questions on what pupils learned in Year 7 and 35 questions on what pupils learned in Year 8, centred on those years’ knowledge organisers. Twice a year, pupils are challenged to revise and remember what they’ve learned over all the years they have spent in secondary school. This means they answer 12 knowledge exams – over 400 questions in total across four subjects. I am willing to bet that many of our teachers could not beat even our Year 7 pupils on these exams across all subjects! Imagine more than 24 sides of A4 packed with answers from every pupil in the school. The humble knowledge exam is a great catcher of knowledge.
As for marking them? We simply sort them into three piles: excellent, pass and fail. We don’t even record the marks. Teachers just note the names of pupils who failed multiple knowledge exams so we know who’s struggled.
Knowledge exams solve the breadth-depth tradeoff in exams. They give pupils maximum practice with minimum marking burden on teachers.
Simplicity must cut through assessment complexity. We should practise what we preach on cognitive overload for teachers as well as pupils. Assessment resources must be renewable, replicable, sustainable, scalable, enduring, long-term.
And the impact of recaps, quizzes and knowledge exams? Well, it’s very early days yet, but we’ve had some (very weak) Y8 or Y9 pupils miss an entire term through unavoidable long-term illness, only to return fully remembering what they’d been taught the previous term and previous year. It’s an early indicator that the assessment strategy is bridging the teaching-learning gap and overcoming the savage forgetting curve. The real test of its impact will be GCSE results in 2019, A-level results in 2021 and university access and graduation beyond.
The two blind spots we’ve discovered – memory and workload – provide us with ways of interrogating our teaching and assessment practice:
- How much are pupils remembering?
- Where are they forgetting?
- Where are teachers overloaded?
And I still think that we at Michaela can do more and find better ways of creating assessments with memory and workload in mind. I’m sure our pupils are not yet remembering as much as we’d like them to. I had a conversation with Jonny Porter, our Head of Humanities, just this week, about ramping up the previous-unit daily recaps we do. In this sense, even at Michaela, memory remains a blind spot – pupils are still forgetting some of what we are teaching, and we want them to remember what they are learning for the very long term. Our ambition is that they have learned what we’ve taught for years to come: for five, ten, twenty years.
Every day, teachers and pupils at Michaela see Churchill’s words on the wall: ‘success is never final; failure never fatal; it’s the courage that counts.’ It takes courage to radically simplify assessment – and courage to continually confront our workload and memory blind spots.