…the blog that shall not be named
Sometimes I just get carried away a bit. I managed to get an early copy of Daisy Christodoulou’s new book on assessment, Making Good Progress. I read it, and I made notes. It seemed a shame to do nothing with them, so I decided to publish them as blogs (6 of them, as it was about 6000 words). They are only mildly annotated. I think they are fair and balanced, but you will only think so if you aren’t expecting an incredulous ‘oh, it’s the most important book ever’ or a dismissive ‘it is absolutely useless’. I’ve encountered both in Twitter discussions.
PART 1 – THE BEGINNING
This part addresses the foreword, the introduction and Chapter 1.
I have been following the English education blogosphere for some time now. Daisy Christodoulou might be best known for her book ‘7 myths about education’ (and for winning University Challenge with her team). ‘7 myths’ was a decent book with some nice and accessible writing, especially useful because it gave knowledge a bit more attention again. Points for improvement: in my view there weren’t really 7 myths (3 were variations of another myth), the empirical backing was a bit one-sided, and there was an error in quoting (revised) Bloom. But anyway: a fresh voice and some good ideas, so bring it on, now with a new book on assessment.
The foreword of the book is (again) by Dylan Wiliam, perhaps best known for his ‘formative assessment’ work with Paul Black. After all the government malarkey on assessment with ‘assessment after levels’, he rightly emphasises the timeliness of the book: schools can now make their own assessment systems. Of course it is telling that a book needs to address this; it could be argued (especially when a government is keen to point at top-performing PISA countries) that such an assessment system could be designed by a government. We now hear this more and more, but only after the old system was scrapped, opening the way to all kinds of empirically less grounded and tested practices. The foreword ends with a statement I am not convinced by, namely that formative and summative assessment might have to be kept apart. For instance, it is perfectly acceptable to use worked examples from old summative assessments in a formative way. One could argue that both summative and formative assessments draw from the same source. In fact, in one of the promoted types of assessment, comparative judgement, one piece of advice seems to be to use exemplars so students know what teachers are looking for: a summative and formative mix.
One thing that immediately strikes me is that I love the formatting. The book has a nice layout and a good structure, although throughout the book the polygon diagrams perhaps suggest more structure than there is (who hasn’t used triangles? ;-)). Contrary to ‘7 myths’, each chapter really seems to tackle a separate issue, rather than the same issue in a different guise. The reference lists in the beginning are quite extensive, though for people who know the blogosphere a bit one-sided (Oates, Hirsch etc.). Later chapters have fewer references, and that is a shame because the second half is far more constructive and less ‘this and this is bad’ (more on that later). I can agree with a lot of the criticisms in the first half, and even with the drawbacks of ‘levels’, but I am less convinced that some of the proposed alternatives will be an improvement. More evidence would have helped there.
The book starts with an introduction. Unfortunately the introduction immediately sets the tone, and in an un-evidenced way: “In the UK, teacher training courses and the Office for Standards in Education, Children’s Services and Skills (Ofsted) encouraged independent project-based learning, promoted the teaching of transferable skills, and made bold claims about how the Internet can replace memory.” I find that a gross generalisation. Of course I know about the Robinsons and Mitras of the world, and there probably *are* people in those organisations (and outside them) saying this, but is it rife? It is a pattern that was also apparent in ‘7 myths’. The sentences after that, with ‘pupils learn best with direct instruction’ (no, novice pupils; it can even backfire with better pupils, the so-called expertise reversal effect) and ‘independent projects overwhelm our limited working memories’ (no, this depends on the amount of germane load or, if you will, element interactivity), are in my view caricatures of the scientific evidence. In debates this has often been parried with the claim that it is reasonable to simplify in this way. I’m not sure; my feeling is that this is actually how new myths take hold. Luckily, what follows is a good explanation and problem statement for the book; I think it is good to tackle the topic of assessment.
Chapter 1 starts with a focus on Assessment for Learning (AfL). I think the analysis of why AfL failed, partly focussing on the role and types of feedback, is a good one. Black and Wiliam themselves emphasised the pivotal role of feedback, in that it needed to lead to a change in students’ behaviour. This did not seem to happen well enough. On page 21 it is ironic, given what follows in later chapters, that Christodoulou writes: “When governments get their hands on anything involving the word ‘assessment’, they want it to be about high-stakes monitoring and tracking, not low-stakes diagnostics.” I feel that when Nick Gibb embraces ‘comparative judgement’, this is exactly what is happening. The analysis then continues, on page 23, by sketching two broad approaches to developing skills: the ‘generic skills’ method and the ‘deliberate practice’ method. I had the well-known ‘false dichotomy’ feeling here. By adding words like ‘generic’, and by linking one approach to ‘project-based’ learning, I felt there clearly was an ‘agenda’ to let one approach be ‘wrong’ and the other ‘correct’. It even goes as far, on page 26, as to say that the ‘generic skills’ method leads to more focus on exam tasks, with no real support for this supposition. Actually, some deliberate practice methods focus on ‘worked examples’, where using exam tasks would be reasonable, and even on ‘working with exam tasks’ directly. I agree that approaches should be discussed, by the way, but, as with so many discussions on the web, not in a dichotomous way if the evidence points to more nuance.