How to educate a citizen

This blog is adapted from a long thread on Twitter. I thought it would be useful to test how swiftly I could turn it into a blog. You can probably notice this in the language.

Ok, some thoughts on E. D. Hirsch’s latest book. To be honest, I’ve seen/heard 4 or 5 interviews with him, so some of that might be mixed in. Let me begin by saying that it’s quite clear that a desire for social justice really drives Hirsch. He seems passionate in both audio and writing. Several people, including Hirsch himself, have called this (last) book his most pronounced. I can see that, but I do think some facts suffer because of it. This is why I thought it wasn’t as good as ‘Why Knowledge Matters’ (I wrote… about that book). There is overlap with that book. I wonder how many edutwitter books by now have reported on De Groot’s and Chase and Simon’s chess research… On the whole, I thought the themes were quite fragmented, in that the book connects a lot of seemingly disconnected topics.

Some points in chapter 1 are interesting (although I don’t know enough about ethnicity and culture to really comment), but I also thought there was an exaggeration of the US crisis (at least, based on PISA). “The nations that moved ahead of us had taken note from PISA of their shortcomings in schooling, and they improved their results. We continue downward.” (p. 23). Really? (From the US country notes)

And what about TIMSS and PIRLS? Sure, you might still be able to make a case that things are not going well, but if you rely so heavily on such scores (see chapter 6 as well) then this isn’t a very strong one, in my opinion.

By the way, in chapter 2 the PISA claim is repeated:
“On the international PISA test for fifteen-year-olds, the scores of US children and their relative ranking have both been sinking ever since 2000, when the scores started being recorded.” – so here it is again.

Chapter 2 consists of interviews/conversations with two teachers. They are interesting. I think they nicely juxtapose the teachers’ previous and current ideological beliefs. But they are anecdotal, and in my opinion a caricature of ‘child-centred classrooms’. Also very US-focussed (understandable). All in all, although I enjoyed reading the interviews, I wondered what support they actually gave for the central thesis of the book. Not much, I thought.

Chapter 3 focuses on evidence on shared-knowledge schools. A lot is made of a study by Grissmer, but I haven’t been able to locate it. If you do, please let me know. Hirsch says “These are decisive results.”, but then we need to be able to read it. A footnote says “These are decisive results: The results were reported by Professor Grissmer in a recent public lecture at a school conference in Denver. I eagerly await publication!” – unsatisfactory. It would have been good to include some more references on charters and knowledge curricula. To my knowledge the results are quite mixed. That’s why it’s so important, in my opinion, to look at the details. On charter schools, see Fryer’s work, or maybe Chabrier, or the older Datnow et al.…
An EEF trial:…

But sure, a book can’t mention everything. But to have the argument revolve around a yet-to-be-published study… Chapter 4 is about teacher training institutions. I thought that was quite one-sided and possibly very US-centric. But the classics are there: Dewey, constructivism, and then Kirschner, Sweller and Clark of course. I would have liked more discussion. Chapter 5 underlines what Hirsch, I think, has mentioned numerous times in the interviews as well: ‘according to brain science’. There’s quite some overlap, IIRC, with ‘Why Knowledge Matters’. There are some interesting references here, but for a claim ‘according to science’ I thought ‘blank slate’ was quite limited. Maybe ‘the nature of nurture’ would be appropriate. Ericsson comes up again, with a similar line as in his previous book. I’m not sure it is a completely correct representation of his work. This also is the chapter with the chess research again, and reading comprehension. Most of you might know that I think the ‘only domain-specific’ claim is not correct. Chi, Ohlsson, Abrami, Kaufman, and more would give a picture where skills are ‘practical knowledge’, rooted in domain knowledge but with generic elements.

Chapter 6 gives an international perspective, so potentially, given my focus on international large-scale assessment, the most interesting.
Unfortunately, it has several issues. Firstly, it is often just stated that a curriculum is or is not ‘knowledge-focussed’, seldom with a robust analysis or description of that curriculum (just like the previous book did for Sweden and France). Secondly, even if you could pin down the curriculum, you would also need to know other education system variables. And even then, most of the PISA data is tricky for causal claims. As a result, the inferences are shaky. This is the graph for Germany: “The path still continues upward” – seems legit for reading.

But the PISA 2018 results appeared in December 2019, and what about other subjects? Plus there is a lag between reforms and scores. And where is an analysis of *how* we know Germany “instituted what was in effect a shared-knowledge national curriculum in each grade of elementary school.” – has it now worn off?

We had already seen the, in my opinion, incorrect US PISA analysis. And then for Sweden there are again huge assumptions about effects. But for Hirsch these are just a prologue to “That prediction turns to a near certainty when we consider the example of France.”. Now, similar arguments hold for France. I had previously written about it, as it featured in the previous book as well.

It starts with “These national data are scientifically more compelling than any contrived experiment could ever be, because of the large number of subjects sampled in the PISA scores.” (p. 133) – I’m not sure what Hirsch means here, given the argument he had just built up. For this analysis, I think it was only a fairly limited verbal test. As far as I know, it’s not a lab experiment. But anyway, those scores do indeed show a decrease. I’m not sure how you can causally claim the Loi Jospin did that. But Hirsch’s claims have always been about inequality as well. Hirsch has evidenced this better this time, but the irony is that it uncovers an error in the previous diagram (in ‘Why Knowledge Matters’).

The new diagram is better because it is taken directly from French sources. However, this graph was published after his previous book.

But compare the previous diagram and this one. On the left, ‘laborers’ and ‘unemployed’ seem switched. In the blog I had already mentioned that not all categories were there.

And although 1997 is included now, 2015 isn’t. Of course you could argue that the curriculum effects Hirsch focussed on weren’t relevant any more for France, but as 2015 was relevant for the other countries, I would have included it. *Especially* because the book does quite a strange thing of *mirroring* the trajectory of the scores towards the future, and then saying ‘this is what you’d get if you do a shared-knowledge curriculum’. So all years are covered, except the existing 2015 data points.
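To make concrete what that *mirroring* move amounts to, here is a toy sketch with made-up numbers (these are illustrative values, not Hirsch’s or PISA’s actual data): reflect the observed years around the last observed year while keeping the scores, and the historical decline reappears as a projected future recovery.

```python
# Toy illustration of "mirroring" a declining score trajectory.
# Years and scores below are invented for illustration only.
years = [2000, 2003, 2006, 2009, 2012]
scores = [505, 500, 496, 493, 490]

last_year = years[-1]
# Reflect each earlier year around the last observed year, keeping its score:
# the past decline becomes a projected mirror-image "recovery".
projection = sorted((2 * last_year - y, s) for y, s in zip(years[:-1], scores[:-1]))
print(projection)  # [(2015, 493), (2018, 496), (2021, 500), (2024, 505)]
```

The sketch makes the problem visible: the “projection” contains no new information at all, only the old data played backwards, which is why presenting it as what a shared-knowledge curriculum *would* deliver is such a strange move.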

The final part is about commonality and patriotism. I recognised the community focus from the previous book. I like that. I’m not so convinced, though, that ‘patriotism’ is the best way to build such (speech) communities. Of course, you *can* go on the ‘patriotism is fine’ tour, but to be honest I don’t understand why you would really want to venture that way when you’ve already got a strong ‘community’ argument. Or ‘tribe’ arguments.

Ok, that’s enough, lest I get memes a la ‘lengthier than the book itself’. 😎


Presentation Metacognition for Teachmeet SEF

Here are the slides:


Presentations on geometry education

I did two presentations on geometry education this week. One at the BSRLM day conference and one seminar at Birkbeck, University of London.

Slides for Birkbeck:
The slides have more links.
The flowchart software from my Japanese colleagues is at
Slides for the BSRLM:

AERA 2019

I made a dedicated page for AERA 2019.


Item on BBC radio Solent

Dr Christian Bokhove appeared on BBC radio Solent on the 30th of August.
It was in reaction to a new report by the Education Policy Institute (EPI) that assessed the state of the teacher labour market in England and contained recommendations on how to get more maths and science teachers in particular.
You can hear the contribution from 37 minutes in via the BBC Solent website, but the key points (in a different order) were:

  1. It is good that recruitment and retention are emphasised as a big challenge. This has been denied for too long, especially by the government.
  2. Rather than recruitment, retention is probably more important. A record number of teachers are leaving, and we should try to find out why. Although money plays a part, workload (through bureaucracy, working hours and marking) is even more important.
  3. With current funding cuts it’s hard to justify monetary incentives for *some* new teachers. This is likely to just cause resentment.
  4. I am not aware of monetary incentives in the long run (retention) really being successful. It should not become just like bursaries and grants to train to teach; the National Audit Office’s findings were not positive:…
  5. We need good subject knowledge, and stringent requirements regarding degrees can perhaps deliver that. But it is not a given that those recruits are or become the best teachers and, more importantly, the last five or six years saw a lot of pressure on entry requirements *because of* teacher shortages. So when you raise the bar, expect larger shortages.



Thoughts on memorisation

I don’t feel I have the time to really write (or rather finish) a blog piece on memorisation. But this sequence of tweets, as a reaction to some memorisation claims in Scientific American, gives some support for why I think that image is a caricature. Memorisation, procedures and understanding all go hand in hand (see for example Rittle-Johnson, Siegler, Alibali, Star etc.). I leave it to others to write a real blog 😉



EEF: Core Knowledge

It is almost impossible to extensively discuss all the studies done by the EEF. In a previous blog I summarised the reports from July 2015 and in this Google spreadsheet I have tabulated all the EEF reports. One study I thought did not get much ‘airplay’ was the “Word and World reading programme” which:


“aimed to improve the reading comprehension and wider literacy skills of children aged 7–9 from low income families. The programme focused on improving the vocabulary and background knowledge (sometimes labelled ‘core knowledge’) of pupils, through the use of specially designed ‘knowledge rich’ reading material, vocabulary word lists, a read-aloud approach, and resources such as atlases and globes. The programme is based on the rationale that children need background knowledge to be able to comprehend what they read, and that improving background knowledge is an effective way to help struggling readers.”

I was interested in this project because, to be honest, I had heard a lot about Hirsch’s work in books by, for example, Daisy Christodoulou, but had not yet read much about actual ‘Core Knowledge’ inspired interventions (I had read some info, I think in the Durham University press release, that she was also involved in the delivery/training of the programme). I agree with many that knowledge has sometimes been undervalued. To become an expert, you need knowledge; one particularly poignant example is in educating mathematics teachers: they really need more maths knowledge than ‘just one step ahead’ of what they are teaching (at both GCSE and A level). But it also isn’t the case that when you have knowledge, everything else follows automatically. With that in mind, I was curious how this intervention would fare. The programme was developed and delivered by The Curriculum Centre, a charitable organisation which is part of Future Academies.

The first thing that struck me on the report page was that the study was classified as a ‘Pilot Study’, and further that “The study did not seek to assess impact on attainment in a robust way”. For almost £150k I would expect a bit more ambition? The three aims of the evaluation (pilot?) were: (i) to assess the feasibility of the approach and its reception by schools; (ii) to assess the promise of the approach and provide recommendations that could be used to improve it in the future; and (iii) to provide recommendations that could be used to design any future trial, including an assessment of the appropriate size of any future trial. Especially the third aim seems a bit premature, although, granted, in the report the answer to the question “Is the approach ready for a full trial without further development?” is no. This is justified because the results, as we will see, show there are some big challenges.

The report has some very interesting sections:

  • There is an overview of previous ‘Core Knowledge’ research. This overview shows very mixed results, with extremely positive but also extremely negative effects. There are numerous issues with potential bias as well, which makes the evaluators conclude: “Although widely implemented, the evidence base linking the CK approach to improved literacy is currently underdeveloped. Evaluations to date have commonly adopted matched designs and have been developer led or funded.” I think this is sufficient grounds for further study.
  • Having reiterated the aims on the start page, I was surprised to see the ‘likely magnitude of the effect’ listed as an objective in the report. Again, it seems set up to provide further funding for a large-scale effectiveness trial.
  • The sample comprised eight primary schools, with a further eight schools in the same areas acting as controls. There were two year groups in each school (Year 3 and Year 4). It was further assumed that there would be 90 pupils per school (1.5 classes in each year group, and an average of 30 pupils per class), yielding a total of 720 pupils (90 pupils in each of eight schools) in each group.
  • I am often a bit worried about control groups that use ‘regular practice’, because this might not be a homogeneous approach. I know it is suggested that randomisation partly ‘solves’ this, but nevertheless I would like to know more about these ‘regular practices’. Note that this is also important from an intervention point of view: schools that already use an approach similar to the intervention (it is suggested in the report this was not the case, but it was for the EEF growth mindset study) might not improve much.
  • There is a section on ‘significance’ in the report which, as one of the evaluators mentioned, originally also appeared in the ‘Philosophy report’ (note this can also be seen in some references in the reference list which are not in *that* report but are in this one).
    The last sentences seem rather dismissive of approaches used in most of the EEF reports.
  • The intervention was well received, but I wonder whether this was to be expected as, as far as I can see, Future Academies schools mainly featured. I could imagine that, although there was no ‘knowledge’ programme in place, there might already have been a certain culture. Of course, this is perfectly fine, but I think ‘teacher reception’ of a programme is only a small element of its total appeal.
  • The section on ‘lesson implementation’ also was very interesting. It seemed to show that the implementation was generally well conducted, which seems a bit contradictory with a later point. But the most fascinating point to me was:
    “It appeared that the highly prescriptive and structured lessons were both an advantage and a disadvantage. Most teachers said they liked the fact that the lessons were planned for them and there was minimal preparation on their part; some, however, adhered so closely to the prescribed programme that the lessons appeared contrived and there was little opportunity for open discussions. In contrast, where teachers attempted to initiate discussions, their lack of general knowledge and confidence in taking the discussions beyond the text was sometimes apparent.”

    One of the conclusions addresses this lack of subject knowledge:

    “In some lessons, teachers’ subject knowledge did not appear to be sufficient to support an in-depth discussion with pupils about some of the topics within the programme curriculum. This suggests that additional training or support materials may have been beneficial.”

    I think it’s a bit unfair to say that teachers’ subject knowledge did not appear to be sufficient (apart from the fact that we are dealing with a self-selected set of teachers from one specific academy chain), as (i) the intervention was quite prescriptive, and (ii) the recommendation shows that the design of the intervention might be missing some features. There were more of those ‘areas of improvement’, for example in the visuals and the quality of the workbooks.

  • In light of the first bullet, it is remarkable that The Curriculum Centre (TCC) designed the teacher survey themselves. And it could have been much better. The report shows:
  • There is quite a long list of factors supporting implementation, and an even longer list of barriers. The teacher turnover within participating schools was striking.
  • Finally, the effects, which the web page summarises as “did not indicate a large positive effect”, actually indicate a very small (probably non-significant 😉) negative effect. I think the conclusions presented on the webpage are a bit coloured.
    The picture for FSM and gender differences is slightly different but not very notable.
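For readers less used to how such trial results are expressed, effects like these are typically reported as a standardised mean difference. A minimal sketch of Hedges’ g (the usual small-sample-corrected variant) is below; the means and SDs are purely illustrative numbers I made up, not figures from the EEF report.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Hedges' g) with small-sample correction."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    d = (mean_t - mean_c) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' correction factor
    return d * correction

# Hypothetical groups of 720 pupils each (the study's assumed sample size),
# with the intervention group scoring one point lower on a scale with SD 15:
g = hedges_g(mean_t=99.0, sd_t=15.0, n_t=720, mean_c=100.0, sd_c=15.0, n_c=720)
print(round(g, 3))  # -0.067, i.e. a very small negative effect
```

With numbers of this order, the effect is small enough that it is easy to see why a report might describe it either as “not a large positive effect” or as a small negative one, which is exactly the framing issue above.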

Overall, I feel it can be said that the Core Knowledge intervention was not effective, although teachers felt it was and liked the intervention. There also seemed to be many things that could and should be improved in the intervention. In the meantime, it can hardly be said there is evidence to suggest Core Knowledge might be more effective than ‘regular practice’ (whatever that may be). Sure, teachers in the participating schools liked the intervention, but is this enough to warrant its implementation? The recent ‘evidence informed’ developments would suggest not; after all, many myths are widely accepted. The suggestion that teachers lack subject knowledge, if true, might result in recommendations regarding teachers’ subject knowledge, but I think it’s a bit ‘easy’ to suggest that this might have impeded the implementation of the intervention. Designers of an intervention need to take the teachers into account; after all, they need to deliver the programme. This should be a feature of the complete intervention. So the overall judgement at the moment is that it is not effective and that many aspects of the intervention should be improved. I think it would be strange if results like these culminated in a larger effectiveness trial.

In my book ‘knowledge’ remains a very important, maybe the most important, ingredient in developing both skills and understanding. There are good reasons to assume this, as I will try to elaborate on for mathematics education in a future post about ‘memorisation and understanding’. But even the way you organise such ‘knowledge’ through interventions needs empirical evidence. Unfortunately, this EEF report on the Hirsch inspired ‘Core Knowledge’ does not provide such evidence.


It’s been a while…

I’m updating the website and hope to make more time for blogging.

For a start I posted the materials for the Network meeting in Duisburg on October 21st, 2014 here.


Hopefully more to follow.



I’m at AERA 2014 and will be trying to liveblog on Twitter.


xyz-MOOC reflections

There’s so much written online about MOOCs. Disclaimer: I follow many MOOCs (see here) and would love to teach one. What I am getting tired of is the discussion of ‘this is too instructional’ versus ‘oh, it needs to be more open’. Education has to be varied; there is room for all approaches, as long as they fit the educational objectives and aims. Sometimes a lecture works well: for example, being told what a theory entails, then seeing an example, after which you try for yourself, is a perfectly reasonable way to learn things. Sometimes you can collaborate with students, in real life and online in forums. Peer review is great. We’ve got many, many useful tools for teaching and learning. However, it often seems as if ‘both extremes’ of the continuum just want to make the point that ‘instruction is the devil’ or ‘problem-based learning is too vague’. Stop it!

With regard to MOOCs, a recent discussion has been about success and fail factors. Some suggest (they even use their own fancy acronyms, like cMOOC and xMOOC) that some are ‘old pedagogy’ and others are ‘new’. I don’t think so. Classroom discussions, collaborations etc. have been in education for centuries, so don’t pretend there really is something new under the sun if you look at pedagogy. Of course, there are differences in technology.

How can we explain the success of certain MOOCs, and the failure of others (can we even call it failure, in this time with more media coverage than ever)? I would like to see an analysis that takes into account factors such as: difficulty level, feelings of entitlement, and the absence of obligations. Sometimes I get the idea that participants feel entitled to a course that is suited for every level. So a more open ‘here are some suggestions for reading, write down what you think’ fits more of the 30k-plus participants than a course with difficult maths in it. And people seem to expect that because it’s open and available, it should be suitable for them. I don’t agree. You can just un-enrol. So coming to this case, playing devil’s advocate a bit: the fact that thousands of students are happy with a course doesn’t per se mean it’s a good course, just that a lot of people found it enjoyable, could cope with it, etc. Does the fact that MOOCs make courses more accessible mean that everyone’s entitled to learn? Or is this up to the learner?