How to educate a citizen

This blog is adapted from a long thread on Twitter. I thought it would be useful to test how swiftly I could turn it into a blog. You can probably notice that in the language.

Ok, some thoughts on E. D. Hirsch’s latest book. To be honest, I’ve seen/heard 4 or 5 interviews with him, so some of that might be mixed in. Let me begin by saying that it’s quite clear that a desire for social justice really drives Hirsch. He seems passionate in both audio and writing. Several people, including Hirsch himself, have called this (last) book his most pronounced. I can see that, but I do think that because of this some facts suffer. This is why I thought it wasn’t as good as ‘Why Knowledge Matters’ (I wrote… about that book). There is overlap with that book. I wonder how many edutwitter books by now have reported on De Groot’s and Chase and Simon’s chess research… On the whole, I thought the themes were quite fragmented, in that the book connects a lot of seemingly disconnected topics.

Some points in chapter 1 were interesting (although I don’t know enough about ethnicity and culture to really comment), but I also thought there was an exaggeration of the US crisis (at least, based on PISA). “The nations that moved ahead of us had taken note from PISA of their shortcomings in schooling, and they improved their results. We continue downward.” (p. 23). Really? (From the US country notes)

And what about TIMSS and PIRLS? Sure, you might still be able to make a case things are not going well, but if you rely so much on such scores (see chapter 6 as well) then this isn’t very strong, in my opinion.

By the way, in chapter 2 the PISA claim is repeated:
“On the international PISA test for fifteen-year-olds, the scores of US children and their relative ranking have both been sinking ever since 2000, when the scores started being recorded.” – so here it is again.

Chapter 2 consists of interviews/conversations with two teachers. They are interesting. I think they nicely juxtapose the teachers’ previous and current beliefs. But they are anecdotal, and in my opinion a caricature of ‘child-centered classrooms’. Also very US-focussed (understandable). All in all, although I enjoyed reading the interviews, I was wondering what support they actually gave for the central thesis of the book. Not so much, I thought.

Chapter 3 focuses on evidence on shared-knowledge schools. A lot is made of a study by Grissmer, but I haven’t been able to locate it. If you do, please let me know. Hirsch says “These are decisive results.”, but then we need to be able to read it. A footnote says “These are decisive results: The results were reported by Professor Grissmer in a recent public lecture at a school conference in Denver. I eagerly await publication!” – unsatisfactory. It would have been good to include some more references re charters and knowledge curricula. To my knowledge the results are quite mixed. That’s why it’s so important, in my opinion, to look at the details. On charter schools: Fryer’s work, or maybe Chabrier, or the older Datnow et al.…
An EEF trial:…

Sure, a book can’t mention everything, but to have the argument revolve around a yet-to-be-published study… Chapter 4 is about teacher training institutions. I thought that was quite one-sided and possibly very US-centric. But the classics are there: Dewey, constructivism, and then Kirschner, Sweller & Clark of course. I would have liked more discussion. Chapter 5 underlines what Hirsch, I think, has mentioned numerous times in the interviews as well: ‘according to brain science’. There’s quite some overlap, IIRC, with ‘Why Knowledge Matters’. There are some interesting references here, but for a claim ‘according to science’ I thought ‘blank slate’ was quite limited. Maybe ‘the nature of nurture’ would be more appropriate. Ericsson comes up again with a similar line as in his previous book. I’m not sure it is a completely correct representation of his work. This also is the chapter with the chess research again, and reading comprehension. Most of you might know that I think the ‘only domain-specific’ claim is not correct. Chi, Ohlsson, Abrami, Kaufman, and more would give a picture where skills are ‘practical knowledge’, rooted in domain knowledge but with generic elements.

Chapter 6 gives an international perspective, so given my international large-scale assessment focus it is potentially the most interesting.
Unfortunately, it has several issues. Firstly, it is often simply stated that a curriculum is or is not ‘knowledge-focussed’, seldom with a robust analysis or description of that curriculum (just as in the previous book for Sweden and France). Secondly, even if you could pin down the curriculum, you would also need to know other education system variables. And even then, most of the PISA data is tricky for causal claims. As a result, the inferences are shaky. This is the graph for Germany: “The path still continues upward” – seems legit for reading.

But what about the PISA 2018 results released in December 2019, and what about other subjects? Plus there is a lag between reforms and scores. And an analysis of *how* we know Germany “instituted what was in effect a shared-knowledge national curriculum in each grade of elementary school.” – has it now worn off?

We had already seen the, in my opinion, incorrect US PISA analysis. And then for Sweden there are again huge assumptions about effects. But for Hirsch these are just a prologue to “That prediction turns to a near certainty when we consider the example of France.”. Now, similar arguments hold for France. I had previously written about it, as it was in the previous book as well.

It starts with “These national data are scientifically more compelling than any contrived experiment could ever be, because of the large number of subjects sampled in the PISA scores.” (p. 133) – I’m not sure what Hirsch means here re the argument he had just built up. For this analysis, I think it was only a fairly limited verbal test. As far as I know, it’s not a lab experiment. But anyway, those scores do indeed show a decrease. I’m not sure how you can causally claim the Loi Jospin did that. But Hirsch’s claims have always been about inequality as well. Hirsch has evidenced this better this time, but the irony is that it uncovers an error in the previous diagram (in ‘Why Knowledge Matters’).

The new diagram is better because it is taken directly from French sources. However, this graph was published after his previous book.

But compare the previous diagram and this one. On the left, ‘laborers’ and ‘unemployed’ seem switched. In the blog I had already mentioned that not all categories were there.

And although 1997 is included now, 2015 wasn’t. Of course you could argue that the curriculum effects Hirsch focussed on weren’t relevant any more for France, but as 2015 was relevant for the other countries, I would have included it. *Especially* because the book does quite a strange thing of *mirroring* the trajectory of the scores towards the future, and then saying ‘this is what you’d get if you do a shared-knowledge curriculum’. So all years are covered, except the existing 2015 data points.

The final chapters are about commonality and patriotism. I recognised the community focus from the previous book. I like that. I’m not so convinced, though, that ‘patriotism’ is the best way to build such (speech) communities. Of course, you *can* go on the ‘patriotism is fine’ tour, but to be honest I don’t understand why you would really want to venture that way when you’ve already got a strong ‘community’ argument. Or ‘tribe’ arguments.

Ok, that’s enough, lest I get memes a la ‘lengthier than the book itself’. 😎


Presentation Metacognition for Teachmeet SEF

Here are the slides:


Presentations on geometry education

I did two presentations on geometry education this week. One at the BSRLM day conference and one seminar at Birkbeck, University of London.

Slides for Birkbeck:
The slides have more links.
The flowchart software from my Japanese colleagues is at
Slides for the BSRLM:

AERA 2019

I made a dedicated page for AERA 2019.


Item on BBC radio Solent

Dr Christian Bokhove appeared on BBC Radio Solent on the 30th of August.
It was in reaction to a new report by the Education Policy Institute (EPI) that assessed the state of the teacher labour market in England, and contained recommendations on how to get more maths and science teachers in particular.
You can hear the contribution from 37 minutes in via the BBC Solent website, but the key points (in a different order) were:

  1. It is good that recruitment and retention are emphasised as a big challenge. This has been denied for too long, especially by the government.
  2. Rather than recruitment, retention is probably more important. A record number of teachers are leaving, and we should try to find out why. Although money plays a part, workload (through bureaucracy, working hours, marking) is probably even more important.
  3. With current funding cuts it’s hard to justify monetary incentives for *some* new teachers. This is likely to just cause resentment.
  4. I am not aware of monetary incentives in the long run (retention) really being successful. It should not become just like bursaries and grants to train to teach; the National Audit Office’s findings were not positive:…
  5. We need good subject knowledge, and stringent requirements regarding degrees can perhaps deliver that. But it is not a given that they are or become the best teachers and, more importantly, the last five or six years saw a lot of pressure on entry requirements *because of* teacher shortages. So when you raise the bar, expect larger shortages.


Education Education Research

Can research literacy help our schools?

This is the English text of a blog that appeared on a Swedish site (kind translation by Sara Hjelm).

In efforts to debunk education myths there is a real danger that research is oversimplified. This is wholly understandable from the perspective of a teacher. Finding and understanding research is a hard and laborious process. The ‘wisdom of the crowds’ might help with this, but it often remains a challenge for all involved to translate complex research findings into concrete recommendations for teachers. It is certainly not the case that teachers can simply adopt and integrate these ideas in their daily practice. Furthermore, you can shout as often as you want that ‘according to research X should work’, but if it’s not working during teaching, you will make adjustments.

Why is it such a challenge for teachers to interpret research findings? As Howard-Jones (2014) indicates, this might firstly be because of cultural conditions, for example with regard to differences in terminology and language (e.g. see Lilienfeld et al., 2015; 2017). An example of this can be seen in the use of the words ‘significance’ and ‘reliability’. Both have a typical ‘daily use’ but also a specific statistical and assessment meaning. A second reason Howard-Jones mentions is that counter-evidence might be difficult to access. A third element might be that claims simply are untestable, for example because they assume knowledge about cognitive processes, or even the brain, that is unknown to us (yet). Finally, an important factor we can’t rule out is bias. When we evaluate and scrutinise evidence, a range of emotional, developmental and cultural biases interact with emerging myths. One particularly important bias is ‘publication bias’, which might be one of the biggest challenges for academia in general. Publication bias is sometimes called the ‘file drawer problem’ and refers to the situation in which what you read in research articles often is just the positive outcomes. If a study does not yield a ‘special’ finding, then unfortunately it is less likely to be published.

Because of these challenges, navigating your way through the research landscape is very time-consuming and requires a lot of research knowledge, for example on research designs, prior literature, statistical methods, key variables used, and so forth. And even with this appropriate knowledge, understanding research will still take a lot of time. For a quick scan this might be 15 minutes or so, but for the full works you would have to look in detail at the instruments and the statistical methods, or you would have to follow up other articles referenced in a paper, often amounting to hours of work. This is time that busy practitioners haven’t got. Science is incremental, i.e. we build on an existing body of knowledge, and every new study provides a little bit more insight into the issue at hand. One study most likely is not enough to either confirm or disprove a set of existing studies. A body of knowledge can be more readily formed through triangulation and looking at the same phenomenon from different perspectives: ten experimental studies might sit next to ten qualitative studies, economic papers might sit next to classroom studies.

In my view, there are quite a lot of examples where there is a danger that simple conclusions might create new myths or misconceptions. Let me give two of them, which have been popular on social media. The first example is the work by E. D. Hirsch. I think his views can’t be seen as separate from the US context. Hirsch is passionate about educational fairness, but the so-called Gini coefficient seems to indicate that systemic inequality is much larger in the US. Hirsch in my view also tends to disregard different types of knowledge: he is positive about ‘knowledge’ but quite negative about ‘skills’, for example. However, ‘skills’ could simply be seen as ‘practical knowledge’ (e.g. see Ohlsson, 2011), emphasising the important role of knowledge but still acknowledging that you need more than ‘declarative knowledge’ to be ‘skilled’. In his last book, Hirsch also contends that a student-centred curriculum increased educational inequality in France, while more recent data and a more comprehensive analysis seem to indicate this is not the case (see …). A second example might be the currently very popular Cognitive Load Theory by Professor John Sweller. Not everyone seems to realise that this theory does not include a view on motivation. Sweller is open about this and that’s fine of course. That does not mean, however, that motivation is irrelevant. Research needs to indicate what its scope is, and what it does or does not include, and subsequent conclusions need to be commensurate with the research questions and scope. This precision in wording is important, but inevitably suffers from word count restrictions, whether in articles, blogs or 280-character tweets. There is a tension between brevity, clarity, and doing justice to the complex nature of the education context.

Ideally, I think, we can help each other out. We need practitioners, we need academics, we need senior leadership, we need exam boards, we need subject specialists, to all work together. We also need improved incentives to build these bridges. I am hopeful that, if we do that, we can genuinely make a positive contribution to our schools.

Dr. Christian Bokhove was a secondary maths and computer science teacher in the Netherlands from 1998 to 2012 and now is a lecturer in mathematics education at the University of Southampton. He tweets as @cbokhove and has a blog he should write more for at


Howard-Jones, P. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15(12), 817-824.

Lilienfeld, S.O., Pydych, A.L., Lynn, S.J., Latzman, R.D., & Waldman, I.D. (2017). 50 Differences that make a difference: A compendium of frequently confused term pairs in psychology. Frontiers in Psychology,

Lilienfeld, S.O., Sauvigné, K.C., Lynn, S.J., Cautin, R.L., Latzman, R.D., & Waldman, I.D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology,

Ohlsson, S. (2011). Deep Learning: How the Mind Overrides Experience. Cambridge University Press: New York.

Education Education Research

Presentation for HoDs mathematics of Trinity group

I gave a presentation about the spatial research I did recently with 85 year 7 pupils.

Education Education Research

researchEd presentation on myths about myths

Last weekend I gave a Dutch and an English version of my ‘This is the new myth’ talk. This talk did not come about in some vain attempt to take over the mythical status of some other excellent ‘mythbusters’, like Pedro De Bruyckere, Paul Kirschner and Casper Hulshof in their excellent book, but more out of frustration with how some facts, opposed to certain myths, became simplified beyond recognition, often distorting the original message. In other words, there is a danger that the debunking of myths creates new myths of its own. In this talk I go into how myths might come about, and give some examples, including one on iron in spinach. I then give some examples where I think facts are misrepresented on and in the (social) media. I have mainly chosen themes that are often highlighted by those who strive for a more evidence-informed approach to teaching, and who in that process purport to combat myths, but then, in my view, give an overly simplistic representation of some research findings. In the talk I cover sources that, for example, purportedly show ‘peer disruption costs students money’, ‘we believe research more quickly if there is a brain picture’, ‘less load is best and so there is no place for problem-based learning and inquiry in education’, and ‘student-centred policies cause inequality’. Maybe there are other robust studies that show this (although I would need to be convinced), but the sources I have observed on the web are almost always misrepresented, in my opinion. I realise that these descriptions *also* simplify these judgements, but the aim is not to focus on the errors per se, but on the fact that we need to be vigilant and aware of the mechanisms behind myth creation.

The slides for the talk are here:

A video of the talk is here:

I recently also saw an article (only in Dutch, I think) that nicely complements my talk and I might integrate some of the sources in a future version.

Education Research Research

Transcribing audio with less pain

Like so many people, I’ve never really liked transcribing audio, for example from interviews or focus groups. It is time-consuming and boring. Of course, you can outsource this, but that unfortunately costs money. So I thought: “how can I do this quicker with available services?”

Last year, with a colleague, I wrote an article on exactly this: using the YouTube auto-captioning feature to transcribe audio more quickly. The quality of YouTube’s voice recognition has improved considerably in the last decade. The paper gives three examples, from interview audio, a classroom recording, and a Chilcot inquiry interview, to show how useful this can be for transcribing audio ‘as a first transcript version’. I just posted the pre-publication.


To demonstrate the procedure, I applied it to my recent podcast with TES.

  1. You first need to get hold of an audio file. I assume you have it from your data collection. Sometimes you can obtain them using browser apps like DownThemAll! (that one is for Firefox),
  2. Before being able to upload to YouTube, you need to make a video file out of it. For Windows, I prefer Movie Maker. Unfortunately this has been discontinued, but you can still find it here. I make a video with an image and the audio as accompanying sound.
  3. Now this ‘movie’ (actually audio with one image) can be uploaded to YouTube. After a few hours YouTube should have created closed captions for the audio. Ensure that the privacy settings are set correctly.
  4. The captions can be downloaded as a text file via multiple tools like DIY captions or downsub. Some software is not web-browser based, and some can also work with private settings (just as long as you are the ‘owner’ of the file, of course). The result might be a subtitle file, which could be further edited with subtitle software.
  5. You can see that this version already is pretty good. I think it captures around 80%. It took maybe 15 minutes of actual labour, plus some time for the YouTube captioning to do its work, for a 40-minute audio file. This saves me a lot of time.
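The caption clean-up in step 4 can also be scripted. Below is a minimal sketch, not from the paper: it assumes the captions were downloaded in the common SRT subtitle format, and `srt_to_text` and the sample text are my own hypothetical names and data.

```python
def srt_to_text(srt: str) -> str:
    """Strip cue numbers and timestamp lines from an SRT subtitle file,
    leaving a plain-text first draft of the transcript."""
    kept = []
    for line in srt.splitlines():
        line = line.strip()
        if not line:
            continue            # blank separator between cues
        if line.isdigit():
            continue            # cue number (e.g. "1", "2", ...)
        if "-->" in line:
            continue            # timestamp line
        kept.append(line)
    return " ".join(kept)

# Tiny made-up example (not real interview data):
sample = """1
00:00:00,000 --> 00:00:02,000
hello and welcome

2
00:00:02,000 --> 00:00:04,000
to this interview"""
print(srt_to_text(sample))  # hello and welcome to this interview
```

The output would of course still need manual correction, but as a ‘first transcript version’ this kind of clean-up is essentially free.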
Education Education Research

Educational inequality: old paper by Hanushek

Probably one of the most influential people in OECD policy has been Hanushek. For someone from the Netherlands, the constant ‘bashing’ of selection and ‘early tracking’ has been particularly noteworthy. Mainly because, anecdotally, I feel that system equality is a big factor, and also because ‘despite’ early tracking the Netherlands tends to do reasonably well in large-scale assessments (except, for some years now, TIMSS year 4, which is worrying).

The most often cited paper is this paper by Hanushek and Woessmann. The important image is:

I have got some issues with the inference that ‘early tracking’ tends to increase inequality based on this data, certainly for the Netherlands.

  1. The data is based on the dispersion of achievement (standard deviation). The Netherlands has the lowest spread in both situations, but contributes to ‘early tracking is bad’ because the SD increases. Yet it is still the lowest of all included countries.
  2. PIRLS and PISA reading are two very different large-scale assessments. PIRLS is published by the IEA, and IEA studies tend to be more curriculum-focused, while PISA reading is less so. I don’t think you can compare them this way.
  3. This also is hard because, as far as I know, the cross-sectional sampling is different, with one looking at classrooms (PIRLS) and the other at schools (PISA). At least, that is the case now. There are several years of schooling between the two measurements, and the samples are also different.
  4. Achievement scores in large-scale assessments are typically standardised around a mean of 500 and a standard deviation of 100. Standardising this again to help a comparison of two completely different tests seems rather strange, especially if you then argue that the *slopes* denote an increase or decrease of inequality.
  5. Finally, of course, causation/correlation issues.
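Point 4 can be illustrated with a toy sketch. This is a hypothetical illustration with made-up numbers and my own `standardise` helper, not anything from the paper: once each test is rescaled to mean 500 and SD 100, any difference in raw dispersion between the two tests is erased by construction, which is one reason comparing ‘slopes’ across two re-standardised tests is questionable.

```python
import statistics

def standardise(scores, mean=500.0, sd=100.0):
    """Rescale raw scores to a target mean and standard deviation,
    as large-scale assessments like PISA typically report them."""
    m = statistics.mean(scores)
    s = statistics.pstdev(scores)
    return [mean + (x - m) / s * sd for x in scores]

# Two made-up tests with very different raw spreads:
pirls_raw = [40, 50, 60]       # hypothetical raw scores, small spread
pisa_raw = [100, 300, 500]     # hypothetical raw scores, large spread

# After standardising, both spreads become exactly 100,
# so the raw-scale difference in dispersion is no longer visible.
print(round(statistics.pstdev(standardise(pirls_raw)), 6))  # 100.0
print(round(statistics.pstdev(standardise(pisa_raw)), 6))   # 100.0
```

In other words, any within-test dispersion measure is only meaningful relative to that test’s own scale, which makes cross-test slope comparisons hard to interpret.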

In sum, I think it is an original study, but it is hard to draw conclusions from it.