I gave two paper presentations recently at the BSRLM day conference in Brighton. Abstracts and slides are below.
predictor for mathematics achievement. Nevertheless, findings are mixed whether this is more the case for other
This is a translation of a review that appeared a while back in Dutch in the journal of the Mathematical Society (KWG) in the Netherlands. I wasn't always able to check the original English wording in the book.
Christian Bokhove, University of Southampton, United Kingdom
Recently, Keith Devlin (Stanford University), known for his newsletter Devlin's Angle and his popularisation of maths, released a computer game (an app for the iPad) with his company Innertubegames called Wuzzit Trouble (http://innertubegames.net/). The game purports to address linear Diophantine equations, without actually calling them that, and builds on principles from Devlin's book on computer games and mathematics (Devlin, 2011), in which Devlin explains why computer games are an 'ideal' medium for teaching maths in secondary education. In twelve chapters the book discusses topics like street maths in Brazil, mathematical thinking, computer games and how these could contribute to the learning of maths, and it concludes with some recommendations for successful educational computer games. The book has two aims: 1. to start a discussion in the world of maths education about the potential for games in education; 2. to convince the reader that well-designed games will play an important role in our future maths education, especially in secondary education. In my opinion, Devlin succeeds in the first aim simply by writing a book about the topic. He is less successful in the second.
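For readers unfamiliar with the term: a linear Diophantine equation asks for integer solutions of ax + by = c, which is essentially what a Wuzzit Trouble puzzle encodes. As a rough sketch (this is my own illustration, not the game's code), the textbook extended-Euclidean approach looks like this:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None if none exists.

    A solution exists exactly when gcd(a, b) divides c.
    """
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return (x * k, y * k)
```

So, for a puzzle that amounts to 3x + 5y = 11, `solve_diophantine(3, 5, 11)` produces one valid pair of moves, and from any single solution the full family follows by adding multiples of (b/g, -a/g).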
Firstly, Devlin uses a somewhat unclear definition of 'mathematical thinking': at first it's 'simplifying', then 'what a mathematician does', and then something else yet again. Devlin remains quite tentative in his claims and undermines some of his initial statements later on in the book. Although this is appropriate, it does weaken some of the arguments. The book subsequently feels like a set of disjointed claims that mainly serve to support its central claim: computer games matter. A second point I noted is that the book seems very much aimed at the US. It describes many challenges in US education that, in my view, might be less relevant for Europe. The US emphasis might also explain the extensive use of superlatives like an 'ideal medium'. With these one would expect the claims to be well supported by evidence. This is not always the case, for example when Devlin claims that "to young players who have grown up in era of multimedia multitasking, this is no problem at all" (p. 141) or "In fact, technology has now rendered obsolete much of what teachers used to do" (p. 181). Devlin's experiences with World of Warcraft are interesting but anecdotal and one-sided, as there are many more types of games. This also shows that the world of games changes quickly, a disadvantage of a paper book from 2011.
Devlin has written an original, but not very well-evidenced, book on a topic that will become more and more relevant over time. As an avid gamer myself, I can see how computer games have conquered the world. It would be great if mathematics could tap into a fraction of the motivation, resources and concentration they attract. It's clear to me this can only happen with careful and rigorous research.
Devlin, K. (2011). Mathematics Education for a New Era: Video Games as a Medium for Learning. A K Peters.
Since the beginning of November there has been an Android version of Wuzzit Trouble in the Play Store. I assume it's the same as the iOS one. A blog post about the game is here.
Inspired by this blogpost: the CASIO graphical calculator FX-9860G SD emulator, still in use in some classrooms in the Netherlands, on the left for y=sin(1/x), and an online tool on the right. Both resized to a width of 263 px, with aspect ratios kept the same.
QED
(Of course TI would argue that you therefore need the TI-Nspire CX in full colour, with a whopping 320 by 240 pixels and other features comparable to an old Nokia phone. But hey, that's just me, it's all about the pedagogy!)
A week ago I attended a seminar at the School of Education with visitors from Japan. One of the visitors was Professor Mikio Miyazaki. He showcased some of his work on a flowchart tool for (geometric) proofs at Schoolmath. I loved it and would love to see this integrated as widgets in the Digital Mathematical Environment, for example. I will provide an overview in some screenshots.
1. This is the entry screen. The flowchart tool is part of a larger environment that stores student information.
2. The materials are presented in a nice overview with levels. The stars do NOT denote difficulty but how many ways there are to actually prove the theorem that is presented.
3. I will choose the section on congruency. Students are presented with a geometry task and are asked to prove the theorem presented (I have not yet managed to find out what the difference between elementary mode and advanced mode is). In this particular example there are four stars, so four possible ways to prove it with the help of congruency. Students have to fill in the flowchart by choosing a strategy/action and providing angles and sides. I love the fact that I can just drag and drop angles and sides into the answer boxes and they will appear there.
4. Having filled in the flowchart the answer can be checked. One of the four stars is coloured yellow.
5. Wrong answers are provided with feedback and an indication of where the mistake is:
6. Another final example:
It was interesting to hear that this project faces a challenge that many educational tools face: converting flash and java tools to HTML5 format. I’m still quite disappointed that the Apples, Adobes, Googles and Oracles of the world did not manage to provide a transition period.
I'm getting increasingly annoyed by the rhetoric surrounding Self Organized Learning. Steve Wheeler poses the black-and-white question about Sugata Mitra: is he a genius or a charlatan? A cunning way to provoke reactions! 🙂 His interview with Mitra is well worth watching. Of course, Mitra is neither a genius nor a charlatan. His ideas on self organized learning are interesting and familiar (self organization is a topic that has been studied before; the medium, technology, is the 'new' component), but 'genius'? I don't think so. Mitra has an established background as a scholar and has written a lot of articles, and although his work is quite rooted in context, he certainly isn't a charlatan. What, then, are the things that annoy me?
What I would like to see is a research landscape where there's room to explore big broad statements, but in such a way that we try to unpeel them: what works, what doesn't, when does it work, when doesn't it work? I know this post is quite critical, so in a sense one could say that I use the same method I criticized under point 2. A catch-22: should I criticize criticism? Ah well, I did, didn't I, but if we engage in discussion I will still listen to you. We can only do this if we work together. Amen. 😉
I seem to get involved in many #openbadges discussions on Twitter lately. A while back I wrote on my blog about this topic. I think it was quite well-balanced, acknowledging the positive points but also raising some questions. I sent the mail to one of the leads in openbadges as well and got a useful reply, albeit one referring to reactions to earlier critical posts here and here. Both sources raise similar points, which is comforting but doesn't get me closer to a possible answer. The rest, apparently, could go to the Google Group. Well, I didn't go there, as I had just written an extensive post. The final line in the reply (see below the first post) was: "intrinsic/extrinsic is *itself* merely a construct, and the recognition of which badges are valuable is an emergent property of the ecosystem." Later on I mailed again, and I think that ended with 'we agree to disagree' (well, I agreed ;-)).
The discussion came to the front again when I was included in this tweet:
This point was one of the points raised on Twitter and also in the aforementioned blogpost. I never really got an answer. Retracing the discussion on Twitter, it seemed to have started with a link to a post called "Let's ban the sticker, stamp and star" and then a comment that OpenBadges were much different because they were 'intrinsic' and stickers 'extrinsic'. I don't agree: both have both sides, if we can even see it that black and white. Badges are issued (http://openbadges.org/issue/); stickers are issued. Badges are earned; stickers are earned. My point is that I don't agree with presenting them as very different. Of course the scale differs. And badges are online in the cloud, so those are all positive points. But different with regard to motivation? I don't think so. Badges can be another tool in the vocabulary of teachers and students, but like any tool they can be used in good and bad ways. Potential? Sure. But stickers had potential too! 😉
The point on having 1000s of them and ‘control’ over them came up as well; it actually was the topic of the tweet ‘that started it all’. The answer would be ‘metadata’. Well, I wasn’t talking about how you are going to find the badge(s) you want, I was talking about the way the value of badges is determined.
(Note: it was pointed out that metadata is more than just information on location, but also a pointer to criteria and evidence:
Fair enough. But that wasn't the point; the point was that metadata, in my opinion, will not 'solve' the institutional issue. How can we evaluate these criteria and evidence between badges? What if there are 1001 Algebra 101 badges from different institutions? Or someone makes his/her own badge? It's nice that an individual has an overview of his/her badges, but how can this be useful in the workplace? I worry that it will be just as hard as before with CVs, only looking slightly different. Suggesting that OpenBadges will change this is wishful thinking.)
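To make the metadata point concrete, here is a rough sketch of what a badge assertion carries. The field names loosely follow the OpenBadges assertion format, but the values and URLs are invented for illustration:

```python
# Illustrative badge assertion, loosely modelled on the OpenBadges format.
# All values and URLs here are made up; only the general shape matters.
badge_assertion = {
    "recipient": "sha256$...",  # hashed identity of the earner
    "issuedOn": "2013-11-01",
    "badge": {
        "name": "Algebra 101",
        "description": "Completed an introductory algebra course",
        "criteria": "https://example.org/algebra101/criteria",    # what was required
        "evidence": "https://example.org/alice/algebra101-work",  # what was produced
        "issuer": "https://example.org/issuer.json",              # who vouches for it
    },
}
```

So the metadata does point at criteria and evidence, as was noted. But nothing in this structure tells an employer how one issuer's "Algebra 101" criteria compare to another's; a human still has to judge that, just as with a CV.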
It has also been suggested that this too is a matter of "the recognition of which badges are valuable is an emergent property of the ecosystem". To me, that sounds like market thinking, worded differently. Just like 'the market', it will depend on the user how much he/she values the badge. And just as this is pretty hard to do for cars, houses or insurance, it will, in my opinion, be even harder for educational goals. Does this mean I won't have anything to do with them? No. I've added a Justin Bieber badge to my developer blog, worked with them in Moodle (in combination with SCORM) and even added them as an experiment to a forthcoming European project (that I will hopefully get, not sure yet). I will keep on thinking about this, hopefully encountering more valid viewpoints than "do your homework" and "shakes head".
In research we often refer to 'convenience sampling' as sampling where subjects are selected because of their convenient accessibility and proximity to the researcher. The most obvious criticism is sample bias. In working with other researchers, reading articles and supervising PhD students, there is also a danger of something I would like to call 'convenience tooling': choosing the first tool you see, the tool you already know, or the one with a favorable image. Now, of course there could be many good reasons why a researcher chooses to do so. Maybe it's because he/she has worked with or developed a certain tool. Maybe the tool in question is 'the only tool' that has certain features. However, to have at least some reasoning behind the tool choice, a good researcher should, in my opinion, give arguments for why he/she chooses a certain tool. Preferably, if the need to use a tool arises from a certain research question or framework, a researcher should write down what features a tool needs in order to answer the research question. Then it should be argued how the chosen tool provides the features that are needed. You could compare this whole process with writing a requirements document. In the end, it could very well be that the choice of tool remains the same, but at least the researcher, just as with sampling, is 'forced' to make some of his/her tool choices explicit.
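The 'requirements document' idea can be made concrete with a trivial sketch. The feature and tool names below are entirely hypothetical; the point is only that writing the required features down first makes the choice checkable:

```python
# Features required by the (hypothetical) research question.
required = {"logs_keystrokes", "exports_csv", "randomises_items"}

# Features offered by (hypothetical) candidate tools.
tools = {
    "ToolA": {"logs_keystrokes", "exports_csv"},
    "ToolB": {"logs_keystrokes", "exports_csv", "randomises_items"},
}

# Keep only the tools that cover every required feature.
suitable = [name for name, feats in tools.items() if required <= feats]
```

Here only ToolB survives the check; whether the researcher then picks it is still a judgment call, but the reasoning is now on paper rather than implicit.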
OK, so I'm not the type of person who likes to keep long personal logs or elaborate mindmaps of my thoughts. I prefer short 140-character tweets. I have followed quite a few MOOCs already, finished half of them and 'cheated' on one because I wasn't going to make such a concept map anyway (#lak13). However, in this case I thought I'd make an exception. It's for ocTEL, an open course in technology enhanced learning. It could easily be that this is both the first and the last post and I will continue via Twitter, but then it will have been fun while it lasted. Will I learn new stuff? I don't know. What I do know is that TEL has had my interest for many years now.
For me, the main question about TEL is how to incorporate it into daily school practice without being evangelical about it. Of course, some tools are nice and interesting to use, but do they give much in return for the investment? Wouldn't a face-to-face classroom discussion be more efficient? When would TEL be beneficial? And would it be beneficial for everyone (social inclusion), not only the white upper class? How can we show teachers how to use TEL, again without being evangelical? And, finally, can we muster the patience that is needed to integrate TEL, or should we just wait and not do anything? Maybe change will come about anyway, but not just tomorrow.