ISEB project

I’m sorry I can’t be at the ISEB project kickoff meeting because of Covid. On this page I have collected some materials and insights (in raw form) that I was going to use for my talk.

Firstly, let me say that it is great that this project will be trying to push forward the quality of mathematics assessments. Although in my heart I would like such assessment items to be as open and wide-ranging as possible, so as to fully facilitate the development of both conceptual and procedural knowledge, the current technological state-of-the-art only manages to do that in some places. I was going to show some examples of innovative assessments ‘in real time’, but now I will just say that novel assessment might in time be possible through games (for example, see Dragonbox, but also see some of the challenges here), platforms like Numworx (you can log in with a guest account and see some of the open tasks), or assessment methods like ‘comparative judgement’.
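To make ‘comparative judgement’ a little more concrete: judges repeatedly decide which of two pieces of work is better, and the pairwise outcomes are turned into a quality score per item, typically with a Bradley–Terry model. Below is a minimal, illustrative sketch of that fitting step (the item names and data are invented, and real comparative-judgement tools use more robust estimation than this bare iteration):

```python
# Illustrative sketch: estimate item quality from pairwise 'which is better?'
# judgements using a simple Bradley-Terry fixed-point iteration.
# All names and data here are made up for demonstration.

from collections import defaultdict

def bradley_terry(comparisons, n_iter=200):
    """Estimate a strength score per item.

    comparisons: list of (winner, loser) pairs from pairwise judgements.
    Returns a dict mapping each item to its estimated strength.
    """
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1

    strengths = {i: 1.0 for i in items}  # start with equal strengths
    for _ in range(n_iter):
        new_strengths = {}
        for i in items:
            # Sum 1/(p_i + p_j) over every comparison item i took part in
            denom = 0.0
            for winner, loser in comparisons:
                if i in (winner, loser):
                    other = loser if i == winner else winner
                    denom += 1.0 / (strengths[i] + strengths[other])
            new_strengths[i] = wins[i] / denom if denom else strengths[i]
        # Rescale so strengths stay comparable between iterations
        total = sum(new_strengths.values())
        strengths = {i: v * len(items) / total for i, v in new_strengths.items()}
    return strengths

# Example: script A beats B twice and C once; B beats C once.
scores = bradley_terry([("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")])
ranking = sorted(scores, key=scores.get, reverse=True)  # A ranked above B above C
```

The appeal of the method is that judges only make relative decisions, which people tend to do more reliably than awarding absolute marks.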

However, these are not practical at scale. To improve assessment at scale, we need to improve both the technology AND the assessment items. In this project we work on improving assessment items and validating them, starting from the assessment practices and items we already have. In exploring these issues, I think the following themes and issues are relevant.

  1. Conceptual and procedural knowledge go hand-in-hand. Discussions about improving mathematics sometimes descend into binary debates, but mathematical proficiency consists of several ingredients. It is helpful to keep this in mind when designing mathematics questions. The slides below give an overview of some of the ideas.
  2. I have previously made some videos on this topic as well; see the YouTube video below for a relevant excerpt.
  3. One of the themes in the slides is ‘cognitive domains’. I like TIMSS’ distinction between items for ‘knowing’, ‘applying’ and ‘reasoning’. TIMSS does not publish many items for validity and copyright reasons, but you can get some insight into assessment items and cognitive domains via the TIMSS 2011 released items. You can read more about the cognitive domains in the assessment framework, here for the TIMSS 2019 edition.
  4. If this all reminds you of something, I wouldn’t be surprised, as there are overlaps with Bloom’s taxonomy. There is a lot to criticise about that taxonomy, but most of the criticisms extend to how it has been used. The ‘revised taxonomy’ is quite decent, and you can read how it overlaps quite nicely with many insights from cognitive science: impact version and an open access version. Although the term ‘higher order’ never aimed to say that one assessment type was more important than another, it can be said that some require more cognitive processing than others.

While designing mathematics assessment items, it is of course fine to have items of all stripes. Some items will address more factual, ‘knowing’ questions; others will be more ‘reasoning’ tasks. In the remainder of the day, you will aim to design assessment items that require more cognitive processing, initially as multiple choice questions.