Categories
Education Politics

Thoughts on the HE green paper

I was asked to give my thoughts on the HE green paper. Here they are.

  • I agree with a strong emphasis on teaching.
  • Teaching can be improved considerably, but it is a caricature to suggest that it is not valued.
  • A stronger emphasis on teaching should not mean that it simply becomes another extra obligation. It must be understood that if we want more emphasis on teaching, perhaps there must be less emphasis on research. A nightmare scenario would be the TEF adding to the bureaucracy of HEIs.
  • In other words, it should not become another REF, but then just for teaching. Certainly if we want a cross-over between research and teaching (p. 20), perhaps especially important for an Education School, then this also means appreciating what can’t be done.
  • Measuring ‘teaching quality’ is difficult. Using multi-modal approaches is better than simple metrics. I think chapter 3 on the TEF addresses this quite reasonably.
  • Teaching is a collaborative affair in which teachers (staff) are experts who ‘teach’ students: teachers together with students. An overly student-centred approach (aimed at what students want) is not desirable; students do not always know best. Metrics like the NSS are rather arbitrary and do not correlate strongly with teaching quality. Research shows that the variance in surveys like these lies more at the student level than at the institutional level; in other words, differences within HEIs are much larger than differences between HEIs (see the sketch at the end of this post). This means that comments like the one on p. 19 (point 7) are unwarranted, as hardly any variance in student experience and engagement can be explained at the HEI level. I appreciate that this can be mitigated by using multiple sources to determine ‘teaching quality’, but this must really be key. In addition, good teachers already listen to students.
  • The link between ‘teaching quality’ and raising fees is undesirable. The comment on p. 19 on ‘value for money’ is subjective, as ‘value for money’ can also mean that fees should be lowered (this would be a good idea), thus increasing the ‘value for money’. Yet in this context it is used to argue that fees should be raised. Given the high scores in the NSS, this seems strange. Further, the HEPI-HEA research also says (p. 9): “Unsurprisingly, when asked about their top three priorities for institutional expenditure, 48% of students chose ‘reducing fee levels’. However, four further clear priorities emerge, each chosen by over one-third of students: increasing teaching hours, decreasing class sizes, better training for lecturers and better learning facilities.” The current report seems, rather one-sidedly, to have chosen only a few of these items.
  • There is another inconsistency in all of this: if teaching quality is improved, judged partly on student evaluations I presume, students are ‘rewarded’ with higher fees. This seems very paradoxical, very ‘anti-market’, and could also set teachers against students.
  • GPA seems more appropriate than the current ‘banding’, but the bigger problem is that student achievement is conflated with ‘teaching quality’. Certainly with a strong contribution from student evaluations, I doubt a new system will do away with the tendency towards higher grades; the new system even seems to incentivise it. I would prefer actions that really address the root causes. Luckily the limitations are acknowledged on p. 26 (point 41).
  • On participation, the document does not convey any sense of the wider, systemic reasons for a lack of participation, for example socio-economic inequalities. Of course this document is about HE, but some acknowledgement of this would have been good.
  • The changes to the market do not acknowledge that there is no real market. Like many semi-public arrangements, this risks combining the worst of both worlds.
  • On the education structure: students should not be at the centre; students AND teachers should be at the centre. In point 4, staff are sorely missed. In addition, ‘market principles’ are very central. It also says ‘affordable’, which seems at odds with the fee developments of the last decade. The name ‘Office for Students’ fails to acknowledge that HE is a joint affair.
  • The changes in the architecture seem very reasonable, but the question needs to be asked: what does this restructuring really solve? It sounds like the rebranding exercises the private sector often undertakes: old wine in new bottles. The costs of such reforms are often underestimated (e.g. IT costs).
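
To make the variance point in the NSS bullet above concrete, here is a minimal illustrative sketch in Python. All numbers are invented for illustration (this is not NSS data, and a naive variance partition stands in for a proper multilevel model):

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical figures: 50 HEIs with 200 surveyed students each.
n_hei, n_students = 50, 200
hei_effects = rng.normal(0, 0.2, n_hei)   # small spread BETWEEN institutions
scores = np.concatenate(
    [mu + rng.normal(0, 1.0, n_students) for mu in hei_effects]
)                                          # large spread WITHIN institutions
groups = np.repeat(np.arange(n_hei), n_students)

# Naive variance partition coefficient (ICC): between-HEI variance / total.
group_means = np.array([scores[groups == g].mean() for g in range(n_hei)])
between = group_means.var(ddof=1)
within = np.mean([scores[groups == g].var(ddof=1) for g in range(n_hei)])
print(f"ICC = {between / (between + within):.2f}")   # roughly 0.04 here

With numbers like these, only a few percent of the total variance sits between institutions, which is why institution-level comparisons on such survey data say very little.
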
Categories
Education Research

Presentation at BSRLM day conference

These are the slides of the presentation “Opportunity to learn secondary maths: A curriculum approach with TIMSS 2011 data” I gave at the BSRLM day conference on 7 November 2015 at the University of Reading.

Categories
Education

[Dutch post] Maatwerk: the anglicisation of education

Last June I sent the response below to someone at the VO-raad. It sums up well how I think about the current developments towards a so-called ‘maatwerk’ (personalised) diploma: “My argument, however, is that there is a very real danger that the desire to deliver maatwerk, in my view prompted by a relatively small number of anecdotal horror stories, will lead to ‘English conditions’. In other words, that we put a major strength of the Dutch system at risk for an ideal that will never be achieved.”

Here I try to formulate some points that came to mind during all the discussions on Twitter. I have not paid too much attention to the precise wording and simply started writing, so it may be incoherent in places. Broadly speaking, it comes down to my fear that the Dutch system will become too ‘Anglo-Saxon’: more selection and more meritocracy. I think it is hard to deny that, economically, the UK and US are more unequal, and this also applies to education. With all the caveats that entails (particularly regarding the validity of the PISAs, PIAACs and TIMSSes of this world), one aspect of the Netherlands is that the spread of scores is smaller; in other words, the weaker pupils still do quite well in the Netherlands. In the UK this is not the case, even though ‘on paper’ it should have many of the advantages that are now being praised so highly. Let me list a few points:

  1. The idea that England (I say UK, but Scotland is quite different) offers more maatwerk stems, I think, from the ‘old system’ of O and A levels: the romantic image of the ten-year-old maths genius who can already go to university. Apart from the various other challenges this brings, in practice it is not so romantic. On paper there are ‘all kinds of opportunities’ to move up to higher levels; in practice this is used, when it does not work out, to shift the responsibility onto children. After all, you *can* move up to a higher level, but you just don’t make it. So in my view this actually creates more inequality (see also the following points).
  2. On paper, testing in the UK is postponed much longer. At the end of primary school there are the SATs, but these are not used for secondary school placement; they only serve to hold primary schools ‘to account’. Secondary school places are allocated on the basis of ‘catchment’, in short, where you live. This is of course strongly linked to socio-economic status: the better-off live in areas with generally better schools, partly because catchments also (indirectly) determine rents and house prices. This leads to streets where one side, with identical houses, is 200 pounds a month more expensive than the other. But it gets stranger. On paper everyone starts secondary school ‘equal’, but to provide ‘maatwerk’ for ‘the better pupil’ you can take some subjects at a ‘higher level’, in which you are offered more material. This then carries through into secondary school where, again on paper, ‘everyone is equal’, but children are placed into so-called ‘sets’ after a few months. This is really just ability grouping, with the idea that you can ‘move up’ per subject if you have a talent for it: on paper, maatwerk again. But this too fails in practice. You hardly ever get out of the lower sets, because the social-pedagogical climate there is not optimal (put simply, nobody pays attention, because everyone radiates ‘we are in the lowest set’). Moreover, not all the material is taught (Opportunity to Learn). So you fall behind and become demotivated; the lower sets remain condemned to lower, the higher become higher. Apparent maatwerk leads to inequality.
  3. It has been suggested more than once on Twitter, and the reply is then ‘but we don’t want the lower level to become accepted; we want it to be possible for someone who can do a bit more (I think of Rosenmuller’s example of taking maths at VWO level) to actually do so’. I think this will happen far less often than what, in my view, is much more the trend: the VWO pupil who is not good at languages and risks failing, and who then takes languages at a lower level. To me, the clearest system remains a uniform diploma, although admittedly it has long since stopped being uniform, with profiles, elective subjects and even existing maatwerk arrangements. That is already plenty of maatwerk, and I think the vast majority of pupils can get along with it just fine. I have therefore also suggested before that the size of ‘the problem’ should first be made clear. That takes more than a few anecdotes here and there. It also requires showing that the current system is genuinely crippling, which means more than the occasional anecdote about someone who could not do what he or she wanted (and moreover, there will be a tendency to blame ‘the system’). I think ‘the problem’ is actually quite limited.
  4. League tables and rankings. We are working towards a system of unequal diplomas (because maatwerk). This system will lead to many differences at the end of secondary school, as now in England with GCSEs. In England there are then even more differences at the end of A levels, and then there is higher education, where A-level grades are decisive. In spring, prospective students apply and receive so-called ‘offers’, for example ‘if you get three A*s (A*A*A*) we would love to have you at Oxford’. Sometimes these are even ‘unconditional’. Sometimes you also have to take an additional test or do an intake interview (usually at the better universities; at ours this is already the case for teacher training, for example). The prestigious universities and programmes higher up the rankings can set higher requirements. Take my university (so-called Russell Group; say, sub-top, top 20): engineering AAA, most programmes AAB, the somewhat less popular ones ABB, but no lower. Then you sit your A-level exams. If you make the grades, you can take up your ‘offer’ and go to the university in question. If you don’t, you still have to find a place in September (this is called ‘clearing’), where programmes try to fill their remaining places. The academic year starts on 1 October, partly for this reason. I see it as follows: because there are so many differences in the A-level ‘packages’ (everyone can take a different combination), there is maatwerk. But because of this, no uniform admission requirement can be set. Universities will select. Maatwerk leads to selection, and selection leads to competition. That is what awaits the Netherlands if ‘maatwerk’ arrives, and it is not a good thing.

Now, I realise that throughout this account I have said several times that it does not work ‘in practice’. The thought might therefore be: ‘but it is a nice idea, so we will simply do it differently and better, and then it will turn out fine’. My argument, however, is that there is a very real danger that the desire to deliver maatwerk, in my view prompted by a relatively small number of anecdotal horror stories, will lead to ‘English conditions’. In other words, that we put a major strength of the Dutch system at risk for an ideal that will never be achieved.

None of this means that (i) the better pupil could not be served better, or that (ii) maatwerk cannot sometimes be very useful. On (i): personally I think this is more a question of mentality. I have never quite understood how we can hold anyone other than ‘the better pupil’ in secondary education responsible for whether that potential actually comes out. If an adolescent, quite understandably, cannot bring themselves to act, then they must work on that, whether or not supported by parents, schools and so on. But they have to do it themselves; the actors around them do not. That is no denial of the role of education, which ‘merely’ has to (keep) doing what it should be good at: teaching. On (ii): here the law could perhaps be loosened somewhat, but that is not a system change. Making it easier again to move up and to stack qualifications, something the inspectorate has also observed (though, oddly, instead of saying ‘that is bad’ it too jumps on the maatwerk bandwagon), is a much faster and less risky way to keep education accessible while still delivering more maatwerk. It also places the responsibility with the pupil, not the education system.

Categories
Games

Avid gamer

As some might know I’m quite an avid gamer (though lately I have less time; I still have to finish Witcher 3). I wrote before about my best games of 2014 and also about games in maths education. This is an invited lecture I gave at Winchester School of Art about narratives in games.

Categories
Education Education Research Games ICT Math Education MathEd Tools

Games in maths education

This is a translation of a review that appeared a while back in Dutch in the journal of the Mathematical Society (KWG) in the Netherlands. I wasn’t always able to check the original English wording in the book.

Computer games for Maths

Christian Bokhove, University of Southampton, United Kingdom

Recently, Keith Devlin (Stanford University), known for his newsletter Devlin’s Angle and for popularising maths, released a computer game (an app for the iPad) called Wuzzit Trouble through his company Innertubegames (http://innertubegames.net/). The game purports to address linear Diophantine equations, without actually calling them that, and to build on principles from Devlin’s book on computer games and mathematics (Devlin, 2011), in which Devlin explains why computer games are an ‘ideal’ medium for teaching maths in secondary education. In twelve chapters the book discusses topics like street maths in Brazil, mathematical thinking, computer games and how these could contribute to the learning of maths, and concludes with some recommendations for successful educational computer games. The book has two aims: (1) to start a discussion in the world of maths education about the potential of games in education, and (2) to convince the reader that well-designed games will play an important role in the future of maths education, especially secondary education. In my opinion, Devlin succeeds in the first aim simply by writing a book about the topic. He is less successful in the second.

Firstly, Devlin uses a somewhat unclear definition of ‘mathematical thinking’: at first it’s ‘simplifying’, then ‘what a mathematician does’, and then something else yet again. Devlin remains quite tentative in his claims and undermines some of his initial statements later on in the book. Although this is appropriate, it does weaken some of the arguments. The book subsequently feels like a set of disjointed claims that mainly serve to support its central claim: computer games matter. A second point I noted is that the book seems very much aimed at the US. It describes many challenges in US education that, in my view, might be less relevant for Europe. The US emphasis might also explain the extensive use of superlatives like ‘ideal medium’. With such superlatives one would expect claims to be well supported by evidence. This is not always the case, for example when Devlin claims that “to young players who have grown up in era of multimedia multitasking, this is no problem at all” (p. 141) or “In fact, technology has now rendered obsolete much of what teachers used to do” (p. 181). Devlin’s experiences with World of Warcraft are interesting but anecdotal and one-sided, as there are many more types of games. They also show that the world of games changes quickly, a disadvantage of a paper book from 2011.

Devlin has written an original, but thinly evidenced, book on a topic that will only become more relevant over time. As an avid gamer myself I can see how computer games have conquered the world. It would be great if mathematics education could tap into a fraction of the motivation, resources and concentration they command. It’s clear to me this can only happen with careful and rigorous research.

Devlin, K. (2011). Mathematics Education for a New Era: Video Games as a Medium for Learning. A K Peters.

Categories
Education Research

ResearchED 2015

Today I went to researchED at South Hampstead High School in London. It was a very fruitful day. I can’t say I heard a lot of new things; I knew most of it already, either because of my ‘job’ or because I had already read or heard about it via the blogosphere. But it was also very important: speaking from my background as a former secondary maths and computer science teacher and now lecturer/researcher (with some involvement in teacher training as well), I think it’s vital that practitioners (practice) and researchers work together in partnership. There are many obstacles to this happening – I imagine I know about the obstacles on both sides, practitioners and researchers, because I have experienced and am experiencing both. These are two distinct cultures that need to bridge the gap between them. For this, government needs to ‘invest’ and not think – as with the maths hubs – that volunteering can cover it. But OK, enough about that. Here is a kaleidoscope of the sessions I visited (I also tweeted about some ‘abstracts’ I read but couldn’t attend).

I started off with Lucy Crehan, who reported on international comparisons and her experiences of visiting six countries to explore their education systems. I liked this session a lot because I work quite a lot with international comparative data, but also with more qualitative data from, for example, the TIMSS video study. I try to combine both in the international network project ‘enGasia‘, in which England, Hong Kong and Japan collaborate to (i) compare geometry education in those countries, (ii) design digital maths books for geometry, and (iii) test them both qualitatively through Lesson Study and more quantitatively through a quasi-experiment. Some of the work of John Jerrim was mentioned.

Then I finally got to see Pedro De Bruyckere (slides here), whom I already knew to be a very engaging and funny speaker. He went through many of the myths in the book he wrote with Casper Hulshof and Paul Kirschner: learning styles, learning pyramids, some TED talks (Sugata Mitra, who featured in previous blogposts here and here, and Ken Robinson). I can recommend the book as a quick way to get up to speed on myths (and almost-myths). I liked how Pedro described how the section on ‘grade retention’ became more nuanced between the Dutch and English editions of the book because of the results of a new study.

Then back to international comparisons with Tim Oates. I already knew his report on textbooks and I agree that textbooks have a lot to offer us. But then again, I would think that: in the Netherlands maths (my subject) textbooks are used a lot, and I edited the proceedings of the International Conference on Mathematics Textbook Research and Development 2014. Tim’s talk covered quotes on international comparisons and unpicked the problems with them (fallacies, faulty arguments). He had a measured conclusion.

After lunch I went to the #journalclub with Beth Greville-Giddings for some cookies. I had prepared by reading the article and making these annotations. The session first explained the process of starting a journal club, and then we discussed the paper. One interesting moment was when people discussed the statistics in the paper. I agreed with comments that, because we were dealing with a reputable journal, the statistics probably were correct. But in my view there is another problem: statistical literacy. In this paper two things stood out for me statistically (there were more, like the definition of engagement): the term ‘significant’ and the ‘variance explained’. With large-scale data the sample size is often quite high, so results reach significance very quickly; for this reason ‘effect size’ is probably more appropriate. Secondly, the statistics seemed to show that not much extra variance is explained by adding the engagement predictors. Anyway, journal clubs seem to me like a worthwhile venture; it might be good to forge partnerships with HE as well.
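
As a side note on that statistical-literacy point, here is a small sketch with made-up numbers (nothing to do with the paper discussed) of how a trivially small effect becomes ‘significant’ once the sample is large:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups that differ by a tiny amount (Cohen's d of about 0.03).
n = 100_000
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.03, 1.0, n)

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd
print(f"p = {p:.1e}, Cohen's d = {d:.3f}")
# p comes out minuscule ('highly significant') while d stays negligible.

With samples this large, almost any difference clears the significance bar, which is exactly why effect sizes are the more informative quantity.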

Crispin Weston then gave a lecture on how technology might revolutionise research. He framed this by first describing seven problems and then showing how technology might ‘improve’ or ‘address’ them. It was an interesting approach, which resulted in a matrix of ‘solved’ problems. Learning Analytics and standards (see my response to W3C priorities) had a prominent place. I’m a bit sceptical that it will all work. In the MC squared project we are implementing some Learning Analytics, including for creativity, and it’s bloody difficult.

Sri Pavar then talked about cognitive science. I think it’s good to summarise these principles. Principles like a memory model were presented, along with relevant books (although some of the books referenced were not really about cognitive science). Cognitive Load Theory (CLT) had a prominent place. I couldn’t help tweeting some critical comments about CLT (a good summary here; a newer interesting blog on the measure used here). For example, the ‘translation’ of the research into ‘a lower cognitive load is better’: of course not, then you wouldn’t learn anything. Or the often-used measurement instrument.

Or the role of schemas: germane load was an attempt at explaining schemas within the CLT framework, but apparently some have abandoned it because of its unfalsifiable nature. But then what? And what does it add to existing information-processing theories?

The final session was by Professor Rob Coe. He talked about several things pertaining to ‘what works’: Randomised Controlled Trials, logic, and he took us back to Dylan Wiliam’s talk at last year’s researchED.

I am with Coe here. Rob mentioned a little-cited paper that sounded very interesting.

Tom Bennett finished the day. It was great to meet a whole range of Twitterati in the North Star. It was an interesting day, and professionally I hope practitioners and researchers (primary, secondary and higher education) can grow towards each other.

Oh, and let me end on a contrarian note: some people have got to read up on all that cognitive psychology: most researchers are far more nuanced in their papers 😉


Categories
Education Education Research

Journalclub at ResearchED

I think it’s a great idea to study articles in a journal club setting. I read the article as well and made the following annotations:


File: 00220671%2E2013%2E807491 – annotated.pdf

Annotation summary:

— Page 2 —

Seems to me to be quite a limited view of Academic Performance, including only reading.

Must keep in mind: 2000 data. Unfortunately it is quite normal to use ‘older’ data. Some of the delay lies in publication mechanisms (PISA 2012 data were only released in December 2013, which makes quick use difficult), but there are other instances.

I think multilevel analysis is useful but not everyone would agree.

PISA, so contextual effects. Differences between countries?

— Page 4 —

Would be good to compare with TIMSS and PIRLS.

This is the ‘standard’ PISA sampling strategy.

I’m not sure if it was already the case for PISA 2000, but nowadays most large-scale assessments need to take the complex sampling design into account. This means sampling WEIGHTS (because of non-response) and so-called PLAUSIBLE VALUES (because not all students take all test items). There is no mention of this in the paper, so either PISA 2000 was done differently or they just ‘forgot’.
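
For what it’s worth, here is a minimal sketch of how analyses normally handle this, with invented data (the weights and the five plausible values below are made up): estimate the statistic once per plausible value, then combine with Rubin’s rules.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 1000 students, each with a sampling weight and five plausible values.
n = 1000
weights = rng.uniform(0.5, 2.0, n)
pvs = [rng.normal(500, 100, n) for _ in range(5)]   # PV1..PV5

# Estimate the weighted mean separately for each plausible value...
estimates = [np.average(pv, weights=weights) for pv in pvs]

# ...then combine: the point estimate is the average of the five estimates,
# and their spread gives the between-imputation variance (Rubin's rules).
point = np.mean(estimates)
imputation_var = (1 + 1 / len(estimates)) * np.var(estimates, ddof=1)
print(f"weighted mean = {point:.1f}, imputation variance = {imputation_var:.2f}")
# A full analysis would add the sampling variance, e.g. via replicate weights.

Using only PV1 and ignoring the weights, as the paper appears to do, biases the estimates and especially understates their standard errors.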

According to some (e.g. Willms) this is an appropriate substitute for the actual sampling design.

— Page 5 —

In general I would want more transparency regarding missing data and the ‘model building’, see Dedrick et al. (2009) for recommendations.

— Page 6 —

With large N, results will be significant very quickly; this is why ‘effect size’ is often mentioned instead.

But see how much more is explained over the models: not much more, it seems.

— Page 7 —

So only the first PLAUSIBLE VALUE was used.

Not much ‘variance explained’

— Page 8 —

Stop with ‘thus’

— Page 9 —

Cognitive Load anyone?

Important: cause and effect

So this ‘significant’ result is only marginally interesting, because of the large N.

Categories
Education Research

Some work presented in the last months

Some work was presented in the last few months.

At Sunbelt XXXV I presented this work on classroom interaction and Social Network Analysis.

At ICTMT and PME my colleague presented our work on c-books.

Categories
Education Research

Example e-mail from a predatory journal

I’m realising more and more that predatory (Open Access) journals will become more and more of a problem in the scientific world. A short while back I wrote about this and promoted Jeffrey Beall’s list. Since that post I’ve received several more requests from journals. With this post I want to demonstrate what I look at/for when judging an e-mail I receive. Let’s unpick the e-mail.

  1. First of all, although I’d like to think my work is brilliant, the simple fact that you get several requests to publish something should be a first warning sign. Sure, sometimes you might get genuine requests, and the further you progress in your academic career the more of these you will receive. But most of the time it is YOU who responds to calls, and it is YOU who has work you want in a journal.
  2. The from-address does not look very professional for a publisher. The mail was sent to my gmail account; it is not really clear why they didn’t use my institutional address. The title of this particular journal is ‘American Journal of Educational Research’. Dubious journals often use anagrams of the names of reputable journals; this one, for example, resembles the ‘American Educational Research Journal’ from AERA. Often the same fonts are used, or the journal websites look very slick and professional. I think they hope this will make you conclude it must be a good journal.
  3. I then took the journal title to Beall’s list and did two things. First I entered the title in the search box in quotation marks, so it would search for the whole phrase. This gave two hits. One post, from late 2012, clearly is about the publisher in question (SCIEP); another, strangely enough, points to the same publisher. It is very interesting to read the whole post and also the comments. Using the name of the publisher, “Science and Education Publishing”, in the search box also yields some interesting results, including a murdered doctor as editor-in-chief. This is another trick predatory journals often use: they will sometimes use the names of people who probably don’t even know their names are being used (and if they do know, they should probably withdraw from these fraudulent affairs as quickly as possible). Sometimes even content is simply plagiarised from other journals.
  4. The mail continues. The salutation seems to indicate that there was some form of automation behind this e-mail. The mail mentions an Impact Factor to ‘wow’ the reader; I think it is very, very unlikely that this journal has an Impact Factor. Such claims are often used, again, to instil a feeling of ‘wow, this is a good journal’ in the reader.
  5. The mail then mentions they were very interested in a paper I wrote for ICME-12; unfortunately that conference was not in 2015 but in 2012. Factual errors like these do not convey a positive image. The same certainly holds for the gmail addresses.
  6. The mail concludes with a section on what might be the whole reason for the e-mail: money. With the, in principle positive, advent of Open Access and its ‘Article Processing Charges’ (APCs) rather than university subscriptions, individuals are asked to pay APCs. With relatively low APCs, predatory journals hope you will take the bait. After all, it seems positive: a modest cost, a publication, etc.

But these predatory journals are problematic. First and foremost, they damage the reputation of (social) science. Peer review is often promised but almost non-existent. Of course it is nice to think you are a genius writer and academic, but in most cases *everyone* can improve their work. It does not make sense for an article to be accepted within days with no changes requested. Paying to be published is a form of ‘vanity press’. Is peer review perfect? Certainly not; I’ve had my fair share of reviews I thought were pretty dubious (both rejections and acceptances), but I think the alternative, namely *not* having peer review, would be even more disastrous. Within the realm of peer review we should experiment with variations like post-publication review and maybe value other modes (note: I, for example, have always been quite irritated that social science does not value peer-reviewed conference papers as highly as Computer Science does). But at its core peer review works, as long as you are critical about where you publish. I do see an additional tension between the big publishers and efforts to make ‘big business’ less influential in academia. I know several small, non-big-publisher journals that are of top quality, but within the current dynamic of OA journals it is becoming increasingly difficult to recognise them. So if you are in doubt, ask around. Let’s make sure we stay vigilant.


Categories
Uncategorized

EEF: Core Knowledge

It is almost impossible to extensively discuss all the studies done by the EEF. In a previous blog I summarised the reports from July 2015 and in this Google spreadsheet I have tabulated all the EEF reports. One study I thought did not get much ‘airplay’ was the “Word and World reading programme” which:


“aimed to improve the reading comprehension and wider literacy skills of children aged 7–9 from low income families. The programme focused on improving the vocabulary and background knowledge (sometimes labelled ‘core knowledge’) of pupils, through the use of specially designed ‘knowledge rich’ reading material, vocabulary word lists, a read-aloud approach, and resources such as atlases and globes. The programme is based on the rationale that children need background knowledge to be able to comprehend what they read, and that improving background knowledge is an effective way to help struggling readers.”

I was interested in this project because, to be honest, I had heard a lot about Hirsch’s work in books by, for example, Daisy Christodoulou, but had not yet read much about actual ‘Core Knowledge’-inspired interventions (I had read somewhere, I think in the Durham University press release, that she was also involved in the delivery/training of the programme). I agree with many that knowledge has sometimes been undervalued. To become an expert you need knowledge; one particularly poignant example is in educating mathematics teachers: they really need more maths knowledge than ‘just one step ahead’ of what they are teaching (at both GCSE and A level). But it also isn’t the case that once you have knowledge everything else follows automatically. With that in mind I was curious how this intervention would fare. The programme was developed and delivered by The Curriculum Centre, a charitable organisation which is part of Future Academies.

The first thing that struck me on the report page was that the study was classified as a ‘Pilot Study’, and further that “The study did not seek to assess impact on attainment in a robust way”. For almost £150k I would expect a bit more ambition. The three aims of the evaluation (pilot?) were (i) to assess the feasibility of the approach and its reception by schools; (ii) to assess the promise of the approach and provide recommendations that could be used to improve it in the future; and (iii) to provide recommendations that could be used to design any future trial, including an assessment of the appropriate size of such a trial. Especially the third aim seems a bit premature, although, granted, the report’s answer to the question “Is the approach ready for a full trial without further development?” is no. This is justified because, as the results will show, there are some big challenges.

The report has some very interesting sections:

  • There is an overview of previous ‘Core Knowledge’ research. This overview shows very mixed results, with extremely positive but also extremely negative effects. There are numerous issues with potential bias as well, which makes the evaluators conclude: “Although widely implemented, the evidence base linking the CK approach to improved literacy is currently underdeveloped. Evaluations to date have commonly adopted matched designs and have been developer led or funded.” I think this is sufficient grounds for further study.
  • After the aims reiterated on the start page, I was surprised to see the ‘likely magnitude of the effect’ listed as an objective in the report itself. Again, it seems set up to provide further funding for a large-scale effectiveness trial.
  • The sample concerned eight primary schools, with a further eight schools in the same areas acting as control. There were two year-groups in each school (Year 3 and Year 4). It was further assumed that there would be 90 pupils in each school (1.5 classes in each year group, and an average of 30 pupils per class), yielding a total of 720 pupils (90 pupils each in eight schools) in each group.
  • I am often a bit worried about control groups that use ‘regular practice’, because this might not be a homogeneous approach. I know it is suggested that randomisation partly ‘solves’ this, but I would nevertheless like to know more about these ‘regular practices’. Note that this is also important from an intervention point of view: schools that already have an approach similar to the intervention (the report suggests this was not the case here, but it was in the EEF growth mindset study) might not improve much.
  • There is a section on ‘significance’ in the report which, as one of the evaluators mentioned, originally also appeared in the ‘Philosophy report’ (this can also be seen in some references in the reference list which are not in *that* report but are in this one). The last sentences of that section seem rather dismissive of the approaches used in most of the EEF reports.
  • The intervention was well received, but I wonder whether this was to be expected since, as far as I can see, mainly Future Academies schools featured. I could imagine that, although there was no ‘knowledge’ programme in place, there might already have been a certain culture. Of course this is perfectly fine, but I think ‘teacher reception’ of a programme is only a small element of its total appeal.
  • The section on ‘lesson implementation’ was also very interesting. It seemed to show that the implementation was generally well conducted, which seems a bit contradictory with a later point. But the most fascinating point to me was:
    “It appeared that the highly prescriptive and structured lessons were both an advantage and a disadvantage. Most teachers said they liked the fact that the lessons were planned for them and there was minimal preparation on their part; some, however, adhered so closely to the prescribed programme that the lessons appeared contrived and there was little opportunity for open discussions. In contrast, where teachers attempted to initiate discussions, their lack of general knowledge and confidence in taking the discussions beyond the text was sometimes apparent.”

    One of the conclusions addresses this lack of subject knowledge:

    “In some lessons, teachers’ subject knowledge did not appear to be sufficient to support an in-depth discussion with pupils about some of the topics within the programme curriculum. This suggests that additional training or support materials may have been beneficial.”

    I think it’s a bit unfair to say that teachers’ subject knowledge did not appear to be sufficient (apart from the fact that we are dealing with a self-selected set of teachers from one specific academy chain), as (i) the intervention was quite prescriptive, and (ii) the recommendation shows that the design of the intervention might be missing some features. There were more of these ‘areas for improvement’, for example in the visuals and the quality of the workbooks.

  • In light of the first bullet it is remarkable that the Curriculum Centre (TCC) designed the teacher survey themselves. And, judging by the items reproduced in the report, it could have been much better.
  • There is quite a long list of factors supporting implementation and also a longer list with barriers. The teacher turnover within participating schools was striking.
  • Finally the effects, which the web page concludes “did not indicate a large positive effect”, actually indicate a very small (probably non-significant 😉) negative effect. I think the conclusions presented on the web page are a bit coloured. The pictures for FSM and gender differences are slightly different but not very notable.

Overall, I feel it can be said that the Core Knowledge intervention was not effective, although teachers felt it was and liked the intervention. There also seemed to be many things that could and should be improved in the intervention. In the meantime it can hardly be said that there is evidence to suggest Core Knowledge is more effective than ‘regular practice’ (whatever that may be). Sure, teachers in the participating schools liked the intervention, but is this enough to warrant its implementation? The recent ‘evidence-informed’ developments would suggest not; after all, many myths are also widely accepted. The suggestion that teachers lack subject knowledge, if true, might result in recommendations about teachers’ subject knowledge, but I think it’s a bit ‘easy’ to suggest that this impeded the implementation of the intervention. Designers of an intervention need to take teachers into account; after all, they need to deliver the programme. This should be a feature of the complete intervention. So the overall judgement at the moment is that the intervention is not effective and many aspects of it should be improved. I think it would be strange if results like these culminated in a larger effectiveness trial.

In my book, ‘knowledge’ remains a very important, maybe the most important, ingredient in developing both skills and understanding. There are good reasons to assume this, as I will try to elaborate for mathematics education in a future post about ‘memorisation and understanding’. But even the way you organise such ‘knowledge’ through interventions needs empirical evidence. Unfortunately, this EEF report on the Hirsch-inspired ‘Core Knowledge’ programme does not provide such evidence.