Categories
Education Research

Research on teaching in Higher Education

I have recently been doing a lot of thinking about the Teaching Excellence Framework, so any blog article that discusses the quality of teaching in HE interests me. Take, for example, the Times Higher Education article on a study which suggests that “students taught by academics with teaching qualifications are more likely to pass, but less likely to get first-class scores.” The underlying study is this one. I was very surprised when I read the article. The study compares teachers with only a PhD and teachers with a PhD AND a teaching certificate (all other types were disregarded). In my opinion there are several big problems with the study, some demonstrated by one of its central tables:

[Table from the study comparing outcomes for the PhD-only and PhD + teaching certificate groups]

What immediately is apparent:

  • There are no significant differences between PhD and PhD-TeacherCert, as the last column demonstrates (yes, I know the limitations of p-values, but the article itself relies on them).
  • Look at the numbers: with such small numbers you really cannot make these inferences anyway.
  • There is a particularly hard-to-understand paragraph arguing that we can’t just say that ‘2% versus 5% non-pass might not seem much’ (top row). Well, it isn’t much, so I can’t see how one can argue the opposite.
  • I like it even less that these highly contentious results are then used to suggest two types of teachers, ‘damage controllers’ and ‘perfection seekers’, not unlike the typologies glossy magazines produce.
  • The study calculates a ‘group GPA’ for every module, which is presented as a “metric to evaluate the quality of learning from Units”. What? Grades as a metric for quality, without considering other elements such as the assessments themselves?
  • The group GPA was calculated by converting scores to a Likert scale, in what seems to me a rather arbitrary manner (see the sketch after this list). Luckily the authors themselves deemed this a little bit ‘contentious’.
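To illustrate the arbitrariness, here is a small hypothetical sketch; the marks and cut-offs are mine, not the study’s. Two equally defensible sets of band boundaries give the same module two different ‘group GPAs’:

```python
# Hypothetical illustration: converting percentage marks to a 1-5 Likert-style
# band before averaging makes the 'group GPA' depend on the chosen cut-offs.
import numpy as np

marks = np.array([38, 52, 58, 64, 71, 79])  # invented marks for one module

def group_gpa(marks, cuts):
    # np.digitize maps each mark to a band 0..len(cuts); +1 gives bands 1..5
    return (np.digitize(marks, cuts) + 1).mean()

print(group_gpa(marks, [40, 50, 60, 70]))  # 3.5   (UK-style class boundaries)
print(group_gpa(marks, [35, 55, 65, 75]))  # ~3.17 (equally arbitrary bands)
```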

But there is more:

  • There is almost no development of a framework. After half a page of abstract, there is half a page of introduction, and then it’s straight into the research question and methodology.
  • The limitations section was two (two!) lines long, only reporting that qualitative data was not collected.
  • As a consequence the article is a mere 7 pages. I love concise articles, but this is a bit too extreme. On the upside, I assume this journal likes such short articles.
Categories
Education Research

Presentation at BSRLM day conference

These are the slides of the presentation “Opportunity to learn secondary maths: A curriculum approach with TIMSS 2011 data” I gave at the BSRLM day conference on 7 November 2015 at the University of Reading.

Categories
Education Education Research Games ICT Math Education MathEd Tools

Games in maths education

This is a translation of a review that appeared a while back, in Dutch, in the journal of the Mathematical Society (KWG) in the Netherlands. I was not always able to check the original English wording in the book.

Computer games for Maths

Christian Bokhove, University of Southampton, United Kingdom

Recently, Keith Devlin (Stanford University), known for his newsletter Devlin’s Angle and for popularising maths, released a computer game (an app for the iPad) with his company Innertubegames, called Wuzzit Trouble (http://innertubegames.net/). The game addresses, without actually calling them that, linear Diophantine equations, and builds on principles from Devlin’s book on computer games and mathematics (Devlin, 2011), in which Devlin explains why computer games are an ‘ideal’ medium for teaching maths in secondary education. In twelve chapters the book discusses topics like street maths in Brazil, mathematical thinking, computer games, and how these could contribute to the learning of maths, and it concludes with some recommendations for successful educational computer games. The book has two aims: (1) to start a discussion in the world of maths education about the potential of games in education, and (2) to convince the reader that well-designed games will play an important role in our future maths education, especially in secondary education. In my opinion, Devlin succeeds in the first aim simply by writing a book about the topic. The second aim is less successful.
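For readers unfamiliar with the term: the puzzles boil down to small linear Diophantine problems. A toy sketch (the move values and target are made up, not taken from the game):

```python
# Toy solver for the kind of linear Diophantine equation behind the puzzles:
# find non-negative integers x, y with a*x + b*y == target.
def diophantine_solutions(a, b, target, max_turns=20):
    return [(x, y)
            for x in range(max_turns + 1)
            for y in range(max_turns + 1)
            if a * x + b * y == target]

# Example: moves worth 5 and 7 points, target score 61
print(diophantine_solutions(5, 7, 61))  # [(1, 8), (8, 3)]
```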

Firstly, Devlin uses a somewhat unclear definition of ‘mathematical thinking’: at first it’s ‘simplifying’, then ‘what a mathematician does’, and then something else yet again. Devlin remains quite tentative in his claims and undermines some of his initial statements later on in the book. Although this is appropriate, it does weaken some of the arguments. The book consequently feels like a set of disjointed claims that mainly serve to support the main claim of the book: computer games matter. A second point I noted is that the book seems very much aimed at the US. The book describes many challenges in US education that, in my view, might be less relevant for Europe. The US emphasis might also explain the extensive use of superlatives like an ‘ideal medium’. With such superlatives one would expect claims to be well supported by evidence. This is not always the case, for example when Devlin claims that “to young players who have grown up in era of multimedia multitasking, this is no problem at all” (p. 141) or “In fact, technology has now rendered obsolete much of what teachers used to do” (p. 181). Devlin’s experiences with World of Warcraft are interesting but anecdotal and one-sided, as there are many more types of games. It also shows that the world of games changes quickly, a disadvantage of a paper book from 2011.

Devlin has written an original, but not very well-evidenced, book on a topic that will become more and more relevant over time. As an avid gamer myself I can see how computer games have conquered the world. It would be great if mathematics education could tap into a fraction of the motivation, resources and concentration that games command. It’s clear to me this can only happen with careful and rigorous research.

Devlin, K. (2011). Mathematics Education for a New Era: Video Games as a Medium for Learning. Natick, MA: A K Peters.

Categories
Education Research

ResearchED 2015

Today I went to researchED at South Hampstead High School in London. It was a very fruitful day. I can’t say I heard a lot of new things; I knew most of it already, either because of my ‘job’ or because I had read or heard about it via the blogosphere. But it was also very important: speaking from my background as a former secondary maths and computer science teacher and now lecturer/researcher (with some involvement in teacher training as well), I think it’s vital that practitioners (practice) and researchers work together in partnership. There are many obstacles for this to happen – I imagine I might know about the obstacles from both sides, practitioners and researchers, because I have experienced and am experiencing both. These are two distinct cultures that need to bridge the gap between them. For this, government needs to ‘invest’ and not think – as with the maths hubs – that volunteering can cover it. But OK, enough about that. Here is a kaleidoscope of the sessions I visited (although I did tweet about some ‘abstracts’ I read but couldn’t visit).

I started off with Lucy Crehan, who reported on international comparisons and her experiences visiting six countries to explore their education systems. I liked this session a lot because I work quite a lot with international comparative data, but also with more qualitative data from, for example, the TIMSS video study. I try to combine both in the international network project ‘enGasia‘, in which England, Hong Kong and Japan collaborate to (i) compare geometry education in those countries, (ii) design digital maths books for geometry, and (iii) test them both qualitatively through Lesson Study and more quantitatively through a quasi-experiment. Some of the work of John Jerrim was mentioned.

Then I finally got to see Pedro De Bruyckere (slides here), whom I already knew to be a very engaging and funny speaker. He went through many of the myths in the book he wrote with Casper Hulshof and Paul Kirschner: learning styles, learning pyramids, some TED talks (Sugata Mitra, who featured in previous blogposts here and here, and Ken Robinson). I can recommend the book as a quick way to get up to speed on myths (and near-myths). I liked how Pedro described how the section on ‘grade retention’ became more nuanced between the Dutch and English editions of the book because of the results of a new study.

Then back to international comparisons with Tim Oates. I already knew his report on textbooks and I agree that textbooks have a lot to offer us. But then again, I would think that: in the Netherlands maths (my subject) textbooks are used a lot, and I edited the proceedings of the International Conference on Mathematics Textbook Research and Development 2014. Tim’s talk covered quotes on international comparisons and unpicked the problems (fallacies, faulty arguments) with them. He had a measured conclusion:

After lunch I went to the #journalclub with Beth Greville-Giddings, for some cookies. I had prepared by reading the article and making these annotations. The session first explained the process of starting a journal club, and then we discussed the paper. One interesting moment was when people discussed the statistics in the paper. I agreed with comments that, because we were dealing with a reputable journal, the statistics probably were correct. But in my view there is another problem: statistical literacy. In this paper two things stood out for me statistically (there were more, like the definition of engagement): the term ‘significant’ and ‘variance explained’. With large-scale data the sample size often is quite high, which makes results reach significance more quickly; for this reason ‘effect size’ is probably more appropriate, as the sketch below illustrates. Secondly, the statistics seemed to show that not much extra variance is explained by adding engagement predictors. Anyway, journal clubs seem to me like a worthwhile venture; it might be good to forge partnerships with HE as well.
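A quick illustration of the significance point (a simulation of my own, not data from the paper): with a negligible true difference, the p-value drops below .05 once the samples get large, while the effect size stays tiny throughout.

```python
# Simulate two groups with a negligible true difference (Cohen's d = 0.05):
# significance appears at large N even though the effect remains trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d_true = 0.05  # negligible true effect

for n in (100, 1_000, 100_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d_true, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_hat = (b.mean() - a.mean()) / pooled_sd
    print(f"n per group = {n:>7}: p = {p:.4f}, estimated d = {d_hat:.3f}")
```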

Crispin Weston then gave a lecture on how technology might revolutionise research. He framed this by first describing seven problems and then showing how technology might ‘improve’ or ‘address’ them. It was an interesting approach which resulted in a matrix of ‘solved’ problems. Learning Analytics and standards (see my response to W3C priorities) had a prominent place. I’m a bit skeptical about whether it will all work. In the MC squared project we are implementing some Learning Analytics, including for creativity, and it’s bloody difficult.

Sri Pavar then talked about Cognitive Science. I think it’s good to summarise these principles. He presented principles like a memory model, along with relevant books (although some of the books referenced were not really about Cognitive Science). Cognitive Load Theory (CLT) had a prominent place. I couldn’t help tweeting some critical comments about CLT (a good summary here, a newer interesting blog on the measure used here). For example, the ‘translation’ of the research as ‘a lower cognitive load is better’: of course not, as then you wouldn’t learn anything. Or the often-used measurement instrument:

Or the role of schemas: germane load was an (unfalsifiable) attempt at explaining schemas within the CLT framework, but apparently some have abandoned it because of that unfalsifiable nature. But then what? And what does it add to existing information-processing theories?

The final session was by Professor Rob Coe. He talked about several things pertaining to ‘what works’. He talked about Randomised Controlled Trials and logic, and took us back to Dylan Wiliam’s talk at last year’s researchED.

I am with Coe here. Rob mentioned a little-cited paper that sounded very interesting.

Tom Bennett finished the day. In the North Star it was great to meet a whole range of Twitterati. It was an interesting day, and professionally I hope practitioners and researchers (in primary, secondary and higher education) can grow towards each other:

Oh, and let me end on a contrarian note: some people have got to read up on all that cognitive psychology: most researchers are far more nuanced in their papers 😉

 

Categories
Education Education Research

Journalclub at ResearchED

I think it’s a great idea to study articles in a journal club setting. I read the article as well and made the following annotations:

See file attached to this message

File: 00220671.2013.807491 – annotated.pdf

Annotation summary:

— Page 2 —

This seems to me quite a limited view of Academic Performance, as it includes only reading.

Must keep in mind: this is 2000 data. Unfortunately it is quite normal to use ‘older’ data. Some of the delay lies in publication mechanisms (PISA 2012 data was only released in December 2013, so using it quickly would have been difficult), but there are other instances.

I think multilevel analysis is useful but not everyone would agree.

PISA, so contextual effects. Differences between countries?

— Page 4 —

Would be good to compare with TIMSS and PIRLS.

This is the ‘standard’ PISA sampling strategy.

I’m not sure if it was already the case for PISA 2000, but nowadays most large-scale assessments need to take the complex sampling design into account. This means sampling WEIGHTS (because of unequal selection probabilities and non-response) and so-called PLAUSIBLE VALUES (because not all students take all test items). There is no mention of this in the paper, so either PISA 2000 was done differently or they just ‘forgot’.
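For what it is worth, a minimal sketch (the data and variable names are my own) of what respecting the design looks like: weight each student, estimate once per plausible value, then pool over the plausible values. A full Rubin-style combination would also add the within-estimate sampling variance, usually via replicate weights, which I omit here.

```python
# Toy pooling over PLAUSIBLE VALUES with sampling WEIGHTS. Real analyses
# would add the within-estimate variance (e.g. via replicate weights);
# this shows only the point estimate and the between-PV variance component.
import numpy as np

def pool_plausible_values(pvs, weights):
    estimates = [np.average(pv, weights=weights) for pv in pvs]
    return np.mean(estimates), np.var(estimates, ddof=1)

rng = np.random.default_rng(1)
weights = np.array([1.2, 0.8, 1.5, 1.0])              # invented student weights
pvs = [500 + rng.normal(0, 90, 4) for _ in range(5)]  # 5 PVs for 4 students
print(pool_plausible_values(pvs, weights))
```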

According to some (e.g. Willms) this is an appropriate substitute for the actual sampling design.

— Page 5 —

In general I would want more transparency regarding missing data and the ‘model building’, see Dedrick et al. (2009) for recommendations.

— Page 6 —

With a large N, results become significant very quickly. This is why ‘effect size’ is often mentioned as the more informative measure.

But see how much more is explained as you move across the models: not much more, it seems.

— Page 7 —

So only the first PLAUSIBLE VALUE was used.

Not much ‘variance explained’

— Page 8 —

Stop with ‘thus’

— Page 9 —

Cognitive Load anyone?

Important: cause and effect

So this ‘significant’ result is only marginally interesting, because of the large N.

Categories
Education Research

Some work presented in the last months

Some work was presented over the last months.

At Sunbelt XXXV I presented this work on classroom interaction and Social Network Analysis:

At ICTMT and PME my colleague presented our work on c-books.

Categories
Education Research

Example e-mail from a predatory journal

I’m realising more and more that (Open Access) predatory journals will become more and more problematic in the scientific world. A short while back I wrote about this and promoted Jeffrey Beall’s list. Since that post I’ve received several more requests from journals. With this post I want to demonstrate what I look at/for when judging an e-mail I receive. Let’s unpick the e-mail.

  1. First of all, although I’d like to think my work is brilliant, the simple fact that you get several requests to publish something should be a first warning sign. Sure, sometimes you might get genuine requests, and the further you progress in your academic career the more likely these become. But most of the time it is YOU who responds to calls, and it is YOU who has work you want in a journal.
  2. The from-address does not look very professional for a publisher. The mail was sent to my gmail account; it is not really clear why they didn’t use my institutional address. The title of this particular journal is ‘American Journal of Educational Research’. Dubious journals often use names that closely resemble those of reputable journals; this one, for example, resembles the ‘American Educational Research Journal’ from AERA. Often the same fonts are used, or the journal websites look very slick and professional. I think they hope you will then assume it must be a good journal.
  3. I then took the journal title to Beall’s list and did two things. First I entered the title in the search box between quotation marks, so it would look for the whole term. This gave two hits. One post is from the end of 2012 and is clearly about the publisher in question (SCIEP); another, strangely enough, also points to the same publisher. It is very interesting to read the whole post, including the comments. Using the name of the publisher in the search box, “Science and Education Publishing”, also yields some interesting results, including a murdered doctor listed as editor-in-chief. This is another trick predatory journals often use: they will sometimes use the names of people who probably don’t even know their names are being used (and if they do know, they should probably withdraw from these fraudulent affairs as quickly as possible). Sometimes content is even simply plagiarised from other journals.
  4. The mail continues; the salutation seems to indicate that there was some form of automation behind this e-mail. The mail mentions an Impact Factor to ‘wow’ the reader. I think it is very, very unlikely this journal has an Impact Factor. Such claims are often used to, again, instill a feeling of ‘wow, this is a good journal’ in the reader.
  5. The mail then mentions they were very interested in a paper I wrote for ICME-12, but unfortunately that conference was not in 2015 but in 2012. Factual errors like these do not convey a positive image. The same certainly holds for the gmail addresses.
  6. The mail concludes with a section on what might be the whole reason for the e-mail: money. With the, in principle positive, advent of Open Access, where ‘Article Processing Charges’ (APCs) rather than university subscriptions cover costs, individuals are asked to pay the APCs. With relatively low APCs, predatory journals hope you will take the bait. After all, it seems positive: a modest cost, a publication, etc.

But these predatory journals are problematic. First and foremost, they damage the reputation of (social) science. Peer review is often promised but almost non-existent. Of course it is nice to think you are a genius writer and academic, but in most cases *everyone* can improve their work. It does not make sense if an article is accepted within days with no changes requested. Paying to be published is a form of ‘vanity press’. Is peer review perfect? Certainly not; I’ve had my fair share of reviews I thought were pretty dubious (both rejections and acceptances), but I think the alternative, namely *not* to have peer review, would be even more disastrous. Within the realm of peer review we should experiment with variations like post-publication review, and maybe value other modes (note: I, for example, have always been quite irritated by the fact that social science does not value peer-reviewed conference papers as highly as Computer Science does). But at its core peer review works, as long as you are critical about where you publish. I do see an additional tension between the big publishers and efforts to make ‘big business’ less influential in Academia. I know several small, non-big-publisher journals that are of top quality, but within the current dynamic of OA journals it is becoming increasingly difficult to recognise them. So if you are in doubt, also ask around. Let’s make sure we stay vigilant.

 

 

Categories
Education Research

July 2015 round of EEF projects

Every now and then the Education Endowment Foundation (EEF) releases a series of reports. It’s interesting to see the sheer difference in the media attention they create, and also in the buzz on social media. I collated all the studies in a Google spreadsheet:

I tabulated the name of the project, the project lead, the money involved (to me it is unclear whether this includes the cost of the evaluation), whether the project was completed or in progress (at the time of writing), the evaluator, the type of project, and the number of schools, and I added an indicator of whether there were signs of ‘significance testing’. In this post I want to summarise the recently released reports. I make no claim to cover all aspects of the studies.

Philosophy for Children (EEF page)
This was an effectiveness trial with 40 schools which evaluated Philosophy for Children (P4C), an “approach to teaching in which students participate in group dialogues focused on philosophical issues.” A key conclusion in the report was that there was a positive impact on KS2 attainment, with “2 months progress”. This project caused the most discussion, mainly because of some crucial aspects of the design. There was a guest post by Inglis, and this post.

Word and World Reading Programme (EEF page)
This study was a pilot study of Core Knowledge (E.D. Hirsch-inspired) materials from The Curriculum Centre. I was surprised the blogosphere did not really pick up on this study. A suspicious mind might think this is because the results of this ‘Core Knowledge’ programme were quite underwhelming, which does not fit the ‘knowledge’ preferences. But of course, that is just as suggestive 🙂 I will write a separate post on this.

Affordable Individual and Small Group Tuition: Primary (EEF page)
Key conclusions that stand out: “Due to the study’s design and problems recruiting schools to receive tuition or participate in the evaluation, this evaluation has not provided a secure estimate of the impact of the project on pupil outcomes.” and also “Participating pupils made slightly less progress in both English and mathematics than those in the matched comparison group. However, this finding was not statistically significant, meaning that it could have occurred by chance.” Staff members were positive. The recommendations focus on improvements.

Affordable Individual and Small Group Tuition: Secondary (EEF page)
“Due to the limitations of the study design and the absence of a high-quality comparison group, this evaluation has not provided a secure estimate of the impact of the project on academic outcomes.” and “Participating pupils achieved slightly higher mathematics GCSE scores than pupils in the comparison group, and lower English GCSE scores than pupils in the comparison group. However, it is not possible to attribute either change to the tuition provided.” Staff members are positive.

Graduate Coaching Programme (EEF page)
This trial showed a positive effect with moderate security: “The programme had a positive impact on pupils’ attainment in reading, spelling and grammar, equivalent to approximately five additional months’ progress. The evaluation did not seek to prove that the approach would work in all schools, but did identify strong evidence of promise.” The cost was quite high, and the delivery was very varied.

Peer Tutoring in Secondary Schools (EEF page)
This study concluded: “This evaluation does not provide any evidence that the Paired Reading programme had an impact on overall reading ability, sentence completion and passage comprehension of participating pupils.” The security is high and the cost relatively low.

Shared Maths (EEF page)
This evaluation (of a 750k+ project) “does not provide any evidence that the Durham Shared Maths programme had an impact on attainment in maths, when used with Year 5 and 3 pupils”; the evidence strength was high and the cost low.

Talk for Writing (EEF page)
This project is “an approach to teaching writing that encompasses a three-stage pedagogy”. Teachers were enthusiastic about the implementation, which went quite smoothly. The evidence was mixed, although teachers reported it had an impact (this seems to be a theme: teachers thinking something has an impact while the evidence is not there).

Some observations from these studies
What strikes me in most of these studies is that:

  • Most studies report quite small or no effects.
  • Most studies report that staff are positive about the interventions, which suggests that effectiveness and teachers’ perceptions are only weakly related.
  • Effects are often worded positively even when they are small or non-significant (with a few reports by one evaluator even making a case against Null Hypothesis Significance Testing, which I understand but find strange given the majority of reports).
  • Some reports mention ‘redeeming factors’ for non-effects, for example low costs. As with the Maths Masters study, it seems that ‘low cost’ automatically makes an intervention worthwhile, even when effects are very small or absent.
  • Pilots mainly concluded that (i) yes, the approach was feasible, (ii) results were mixed, and (iii) interventions needed further development for a full trial.
  • There are many ‘arguments’ for further study along the lines of “more time is needed” or “larger samples are needed”, even when the initial studies spent significant amounts of money and had decent samples.

Why is this notable? Well, for me mainly because EEF reports have been proposed as telling us ‘what works’. I would be the first to acknowledge that we need a range of qualitative and quantitative research, and that means, in my book, that there *should* be space for smaller-scale studies as well. However, this does not seem to be the premise of most of the studies conducted. If £2.4 million is spent on 8 projects, I would hope that the conclusions would be a bit more informative than ‘we should try more and harder’. I think it would be good if the reports reported *only* on the results and did not make recommendations.

 

Categories
Education Research

Predatory journals

More and more I’m being confronted with questions about journal publications. I devote some words to this in a session for our MSc programme, in the module ‘Understanding Education Research’, and recently, in a panel discussion at our local PGR conference, there were questions about how to judge a journal’s reputation. Note that in answering this question I certainly don’t want to be a ‘snob’, i.e. suggest that only the conventional and traditional publication methods suffice. Actually, developments in blogging and Open Access are positive changes, in my view. Unfortunately there is also a darker side to all of this:

One place where I always look first when it comes to ‘vanity press’ and predatory journals is Beall’s List, which is “a list of questionable, scholarly open-access publishers”. What I like about this list is that its authors are rather sensible about how to use it: “We recommend that scholars read the available reviews, assessments and descriptions provided here, and then decide for themselves whether they want to submit articles, serve as editors or on editorial boards.” The list of criteria for determining predatory open access journals is clear as well. One thing you can do is use the search function to see if a journal or publisher gets a mention. This is exactly what I did recently with some high-profile research. I was surprised to find that articles were indeed published in such journals.

The first example is this high-profile article mentioned in the Times Educational Supplement. It references a press release from Mitra’s university:

 
The journal title did not ring a bell, so I checked Beall’s list, and yes, the journal and publisher are mentioned in this article on the list. Just a quick glance, including the comments, should make most scholars think twice about publishing there, certainly if it is ‘groundbreaking’ stuff. This is not to say that the articles are per se bad (although methodologically there is much to criticise as well, maybe later; this blog does a good job of concisely flagging up some issues), but I am worried that high-profile professors are publishing in journals like these (assuming it was done with the authors’ agreement; predatory journals sometimes just steal content to bump up their reputation). In the case of this person it has happened before, in 2012, when the ‘Center of Promoting Ideas’ (this name alone would be enough for me not to want to appear in their publications) published this article in a journal which is also on Beall’s list. It is poignant that an Icelandic scholar really got into problems because of this. Some other examples: this article; CIR World also features on Beall’s list (Council for Innovative Research, again a name which raises suspicion by itself).

  

These publications serve as examples that even high-end professors can fall victim to predatory journals. I do not mean that in a judgemental way; it shows that more education about the world of predatory journals is needed. Although I must admit there might be some naivety at play here: experienced scholars should know that ‘positive reviews only’, ‘dubious publishing fees’ and ‘unrealistic publication turnovers’ are very suspicious. Early Career Researchers are often targets of predatory journals, and it is therefore important to be aware of this ‘dark side’ of Open Access publishing. Beall’s list covers these, but recently there are also more and more ‘non open access’ journals that might be a bit dubious as well. In many cases it’s quite a challenge to judge the trustworthiness of publications. Certainly if in the social sciences we want to move away from the hegemony of the five big publishers, there is a lot to be gained in general skills for judging literature. Now, everyone has their own judgements to make when it comes to where they want to publish, but I would be very concerned about publishing in any journal (and with any publisher) on Beall’s list.

Categories
Education Research

Mindset #1 – measures

It’s fair to say I’m not a regular blogger. I always feel as if polishing lengthy posts is a waste of time. However, when I look at the zillions of tweets I manage to cram out, I feel that elaborating on some of those concise tweets would sometimes be a good thing. So here goes; let’s see if I can get a post out every two weeks or so. I’ll start with one on mindset, specifically about measurement. I do not contend I can do any better than the (up until now) four fabulous posts on Slate Star Codex, but I will try to add some thoughts about measurement.

The first thoughts were based on the following tweet:

I think it refers to this study (Claro & Paunesku, 2014). One of the authors is the person featured in the Slate Star Codex posts. On the measurement of mindset in students, it states the following:

[Screenshot: the two mindset items from Claro & Paunesku (2014)]

This seems to me a rather concise scale (in reports these two questions are even referred to as a ‘questionnaire’). The Education Endowment Foundation also recently published a report on mindset (Education Endowment Foundation, 2015). Disregarding the results for now (maybe a later post), the following questions were asked to determine mindset. The methodology section states:

[Screenshot: the three mindset items from the EEF report’s methodology section]

So this gives three items, the same number as in this sample ‘mindset meter’ at http://survey.perts.net/take/toi. The paper ‘Mindset Interventions Are a Scalable Treatment for Academic Underachievement’ also gives more information:

“we assessed this belief using two items: “You can learn new things, but you can’t really change your basic intelligence” and “You have a certain amount of intelligence and you really can’t do much to change it” (α = .84; see Blackwell et al., 2007).”

(Paunesku et al., 2015, p. 4)

So two items again. The Blackwell article is here; this article explored “the role of implicit theories of intelligence in adolescents’ mathematics achievement” using SEM. The 6-point Likert scale is consistent; that article still uses the six items, which apparently were later reduced to three and then even two. This feels like quite a limited number of items to base a construct on. Of course it’s not necessarily wrong, but the warnings in the paper by Eisinga et al. (2013) are there for a reason; a small sketch of their point follows below. I also wonder about the wording (knowing that, with language, asking unambiguous questions is notoriously difficult): what does ‘not really’ mean? One question says ‘do much’; what is ‘much’? The questions have two components, so when I score a question highly, do I agree with both aspects or with the relationship between them? Furthermore, they seem to be a variation on the ‘confidence’ theme, an aspect which has been widely researched. Do I misunderstand the concept of ‘mindset’ if I think that ‘self-confidence’ covers this? In the 2007 Blackwell article the questions now called ‘mindset’ questions are part of a ‘Theory of Intelligence’ scale, which is itself part of the ‘Motivational variables’. So there you have another element: motivation. I surely see how self-confidence and motivation would influence achievement; how is mindset different? These are further questions which I might explore in later posts.
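To make the Eisinga et al. (2013) point concrete, a small sketch with simulated data (not the actual mindset items): for a two-item scale the Spearman-Brown coefficient is the recommended reliability estimate, and Cronbach’s alpha can diverge from it when the two items have unequal variances.

```python
# Simulate two 6-point Likert items tapping one construct, then compare
# the Spearman-Brown coefficient with Cronbach's alpha for the pair.
import numpy as np

rng = np.random.default_rng(42)
latent = rng.normal(size=1000)
item1 = np.clip(np.round(3.5 + 1.0 * latent + rng.normal(0, 0.8, 1000)), 1, 6)
item2 = np.clip(np.round(3.5 + 0.6 * latent + rng.normal(0, 0.8, 1000)), 1, 6)

r = np.corrcoef(item1, item2)[0, 1]
spearman_brown = 2 * r / (1 + r)  # step-up formula for two items
total_var = np.var(item1 + item2, ddof=1)
alpha = 2 * (1 - (np.var(item1, ddof=1) + np.var(item2, ddof=1)) / total_var)

print(f"r = {r:.2f}, Spearman-Brown = {spearman_brown:.2f}, alpha = {alpha:.2f}")
```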

References
Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78(1), 246–263.
Claro, S., & Paunesku, D. (2014). Mindset Gap among SES Groups: The Case of Chile with Census Data. Paper presented at the SREE Fall 2014 Conference.
Dweck, C. S. (1999). Self-theories: Their role in motivation, personality, and development. Philadelphia: Psychology Press.
Education Endowment Foundation. (2015). Changing Mindsets: Evaluation report and Executive summary. Retrieved from https://educationendowmentfoundation.org.uk/modals/pdf_download/projects/56/3/857
Eisinga, R., Grotenhuis, M., & Pelzer, B. (2013). The reliability of a two-item scale: Pearson, Cronbach, or Spearman-Brown? International Journal of Public Health, 58(4), 637–642.
Paunesku, D., Walton, G.M., Romero, C.L., Smith, E.N., Yeager, D.S., & Dweck, C.S. (2015). Mindset Interventions are a Scalable Treatment for Academic Underachievement. Psychological Science.