ICT Math Education MathEd Research Tools

BSRLM conference part 2: Alnuset

The third session I attended was more of a discussion and critique session, led by Monaghan and Mason, on the topic of ‘cultural affordances’. The basis was the work of Chiappini, who, in the ReMath project, used the software program Alnuset (see here to download it) to look at its affordances. Monaghan described the work (a paper on the topic was available; there will be a publication in 2013) and then asked some questions. Chiappini distinguishes three layers of affordances: perceived, ergonomic and cultural. Engeström’s cycle of expansive learning is used, as I understood it, to make activities drivers for the transformation of ergonomic affordances into cultural affordances. Monaghan then asked some critical questions, among which whether Engeström’s theory was really necessary: wouldn’t, for example, Radford’s work on gestures be more appropriate? Another comment pondered whether the steps for expansive learning were prescriptive or descriptive. I think the former: as the author made the software with certain design elements in mind, it is pretty obvious that they have a preconceived notion of how student learning should take place. It was pretty hard to discuss these more philosophical issues in detail. I’m not really sure I even understand the work. Although this could be solely because I haven’t read enough about it, I also feel a bit as if ‘difficult words’ are used to state the obvious. I could only describe what I was thinking of. The article that I took home afterwards gave some more pointers.

To get a grasp of this I downloaded the software, which reminded me a bit of the Freudenthal Institute’s ‘Geometrische algebra’ applets, and tried it out. I liked the idea behind the software. In this example I’ve made three expressions, and I can manipulate x. The other two expressions change with x. Some comments:

  1. I like the way expressions are made and the look and feel, as well as the way dragging changes the expression. Also ‘dividing by zero’ causes expressions to disappear. However, why does x=0 disappear as well when I drag x to 0? (see figure)
  2. I don’t see how this solves the drawback of every tool that allows ‘dragging’: pointless dragging, in this case just to line up the different expressions. Maybe this isn’t the main goal of the software.
  3. I think that the number line should be used in conjunction with tables and graphs, thus forming an expression-table-graph triad. The addition of things like an algebraic manipulator and a Cartesian plane seems to indicate that the authors also value more than one representation.
  4. It has far too limited a scope for algebra. The 30-day trial is handy here, as in my opinion the software doesn’t do enough to warrant the price.
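The core interaction, expressions on a number line that update as x is dragged, can be sketched in a few lines of Python (a toy model of the idea, not Alnuset’s actual implementation; the ‘disappearing’ behaviour mirrors my observation in point 1):

```python
def evaluate(expressions, x):
    """Evaluate each expression at x; undefined ones 'disappear' (None)."""
    results = {}
    for name, f in expressions.items():
        try:
            results[name] = f(x)
        except ZeroDivisionError:
            results[name] = None  # e.g. 1/x has no position when x = 0
    return results

# Three expressions, of which two depend on the draggable x:
exprs = {
    "x": lambda x: x,
    "x + 2": lambda x: x + 2,
    "1 / x": lambda x: 1 / x,
}

print(evaluate(exprs, 2))  # all three expressions have a position
print(evaluate(exprs, 0))  # '1 / x' disappears, but 'x' itself stays at 0
```

In this toy model, ‘x’ keeps its position at 0 while only ‘1 / x’ vanishes, which is the behaviour I would have expected from the software itself.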

Will Open Badges work?

I recently tweeted about the Open Badges project from Mozilla. I already knew some things about the project, as Hans de Zwart had written about it. I had, however, not yet read the white paper. To me, it’s clear what the project is all about. What I miss in the white paper is some sort of assessment of the difficulties and/or risks involved.

Open badges website

The tweets that started this dialogue were:

(1) @ it’s just that ‘we’ don’t even know how to conform to open standards regarding h/w, why then learning goals?
(2) @ and as any #gamification course will point out: won’t #openbadges become a goal in itself?
(3) @ and then, if everyone starts to use #openbadges, how then will we prevent everyone using them?
(4) @ and if they are ‘controlled’ who will decide on them, and won’t this create a bureaucratic hazard?
@ So that’s why. Any documents addressing these issues?

Point (1) has a lot to do with all those open standards that fail for many reasons. Let’s take the recent endeavours around ePub3. I’ve been following what @fakebaldur has written and tweeted about this, and I can almost understand why Apple made their own iBooks. So this begs the question: if we already have difficulty agreeing on technical standards, can something ‘softer’ like learning goals be turned into an open standard that everyone agrees on, let alone its distribution?

This brings me to (3) and (4), as addressing (1) perhaps implies some sort of control over the development. Mozilla has taken up the glove for now, but how are they going to lead this? Some models, like the certified-user one, could perhaps help. Mozilla does seem to think about this (of course), as they talk about threat models to prevent badge spamming. User consent then comes into play too. Endorsements also seem to have a function here (as stated in the white paper). But the problem, as stated with (1), is that learning goals are less tangible than ‘just being who you say you are’. Open Badges wants to say much more, just as Foursquare says more than just whether someone has checked in. In addition, the rewards for having badges on Foursquare, for example (I’m not equating them; it just illustrates a further critical point), do not have a large monetary value. Learning goals could have, and this seems to be presented as a unique selling point, a large value. For this reason I think it will be very worthwhile for people and institutions, and I’m not even talking about fraud, to issue badges. I’m not sure how someone who aims to collect badges during his or her lifetime can cope with the thousands of badges that are bound to appear. At least, when the framework becomes successful (and if it doesn’t, the worth of the framework is diminished). This is a problem with all initiatives that require enough on-boarding to be worthwhile.
As we have seen from other open initiatives, this is a paradox: just saying something is worthwhile to use doesn’t make it worthwhile per se. Furthermore, the administrative load could be considerable. And who is going to decide who may endorse and who may not? If I have a ‘Programming 101’ badge, what does this mean? Do I get one when I did Java? Or one for Python as well? Or are there separate ones? Another paradox: to actually say something, you don’t want badges to be too generic; but having badges for every tiny specialism, issued by different issuers, some endorsed and some not, would mean an administration that may well not work.
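To make the endorsement problem concrete, here is a toy sketch (the field names and logic are my own invention, not the Open Badges specification): a badge only ‘says something’ if you already trust whoever endorsed its issuer, which is exactly the bureaucratic question raised above.

```python
# Hypothetical trust list: someone, somewhere, has to maintain this.
TRUSTED_ENDORSERS = {"University X", "Employer Y"}

def is_meaningful(badge):
    """A badge carries weight only if a trusted party endorses its issuer."""
    return bool(TRUSTED_ENDORSERS & set(badge.get("endorsed_by", [])))

java_badge = {
    "name": "Programming 101 (Java)",
    "issuer": "Some Site",
    "endorsed_by": ["University X"],
}
python_badge = {
    "name": "Programming 101 (Python)",
    "issuer": "Another Site",
    "endorsed_by": [],
}

print(is_meaningful(java_badge))    # True
print(is_meaningful(python_badge))  # False: who decides this issuer counts?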

When it comes to gamifying learning goals (tweet (2)): you want people to earn badges, and hopefully be motivated to store them. One point to address is whether people collect badges because they are representations of their knowledge and skills (more intrinsic: because he or she wants to), or because collecting them is ‘cool’ in itself. You don’t want the latter, as it makes collecting badges a goal in itself. This ties in with things I’ve also learned in the Gamification MOOC.

In general, I do not see much written about these risks, and I think there should be. So, to conclude: has this all been thought through? I’m not saying it couldn’t be worthwhile to pursue, but I do not see many critical reflections on the feasibility of it all. I think the case for (or against ;-)) Open Badges would be stronger if these issues were addressed.


MOOCing about

The MOOCs I did were from Coursera

In the last months I have followed several Massive Open Online Courses. Some I followed because I was interested in the topic (e.g. Machine Learning), some because I was curious how the lecturer would address the topic at hand (e.g. Mathematical Thinking). Let me first admit: I have not (yet) finished all of them. But because I followed several MOOCs, from varying institutions and on varying topics, I think I can give some kind of opinion. The four MOOCs (not counting the ones I just enrolled in to see what content there was) I followed or am following are:

  1. Machine Learning. Running August 20th for 10 weeks. Still on course.
  2. Gamification. Running August 27th to October 8th. Final scores still to be determined but already reached the pass rate of 70/100.
  3. Introduction to Mathematical Thinking. Running September 17th for 7 weeks. Started this, but personally did not learn that much new. Furthermore, as far as the ‘feel’ of the method goes, it resembled the gamification course.
  4. Web Intelligence and Big Data. August 27th for 10 weeks. Did not finish it for reasons that will become clear below.

Organisation and planning

There’s a vast difference in the amount of time needed for the various courses. According to the course syllabi, the Machine Learning course should have taken me far less time than, for example, the Mathematical Thinking course. Even if I take into account that I already know a lot about mathematics and not so much about Machine Learning, the simple fact that there are weekly programming assignments in the former makes it, in my opinion, a much more time-consuming course. The number of lectures and the duration of the separate videos matter as well. I think it would be good if, where possible, the actual amount of time spent by students was monitored and used for more accurate estimates, maybe even taking prior knowledge into account.

Quality of the materials

Issues like these can cause demotivation

One thing that struck me was the varying quality of the elements of the courses. Some had engaging speakers, others weren’t as good. Some managed to communicate difficult content, others had trouble doing this. Sometimes visuals added to the lecture, sometimes text was hardly readable. One of the most frustrating experiences was feeling that material tested in the quizzes and/or other assignments wasn’t covered in the lectures. The different elements of a course should fit together. If this lack of coherence is accompanied by many mistakes, demotivation sets in pretty quickly. For me this certainly was the case with the Big Data course. Mind you, I did manage to get reasonable scores, but for me that just wasn’t good enough. Sometimes it even seemed to be expected that I had already read up on the topic at hand. I think a MOOC should be a self-contained course; otherwise, although more costly, a MOOC is nothing more than a university course with a web presence. In general, most of the courses do not have a specific text- or course-book, only reading suggestions, which don’t always fit the scope of the topics covered, or even the difficulty. I think it would be a good idea if every course had a custom book in a digital format that could be downloaded.


Typical checkbox question (here from Machine Learning)

Assessment was done in various ways. The most common tests were multiple-choice quizzes, with some open (numerical) questions in the Machine Learning course. The inline lecture questions were good if an explanation was provided after ‘failing’ three times. If that explanation wasn’t there, it felt more like ‘multiple guess’. The way the graded quizzes were implemented varied considerably. One model is to allow infinite (in practice: 100) attempts per quiz. This corresponds mostly with formative assessment: the tests aren’t there to judge you but to let you practice. The quizzes did, however, always count towards the final score. So in a world where in the end we do have some sort of high-stakes test (pass or fail), it could lead to situations where 50 attempts yield 100 out of 100 points. As a teacher you can be perfectly happy with that; after all, in the end the students knew their answers. Still, I feel the case of ‘unlimited attempts’ was a bit too lenient. A second model gave students a limited number of attempts, five in some courses. This at least made sure that practicing and re-sitting a test was limited, while students could still revise. Revision could not be done by scrutinizing all the questions and answers, only your own answers. Because most quizzes randomized their answers (I did not see questions pooled randomly from a larger set), yielding a different order of answers and even different answers to the questions, it wasn’t always trivial to improve your score. The questions themselves were a mix of radio-button questions, permitting only one answer, and checkbox questions, permitting more than one. The latter often were true/false statements, but because they combined into one question with many answer possibilities, I actually found them harder than the radio-button questions. Just crossing off the unrealistic answers wasn’t possible in these cases.
Another model was a variation on limited attempts, imposing a penalty for extra attempts, sometimes from the second but also from the third or fourth attempt onwards. I preferred the ones where a deduction started from the third attempt: students could revise their work once without penalty (the second attempt), but could not just click around, because they would incur a penalty when they tried for real. Finally, some exams allowed only one attempt. Given that the questions often were closed and multiple choice, I think the questions then have to be awfully unambiguous. I wasn’t sure this was always the case.
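The attempt models described above can be captured in one small scoring function (a sketch with made-up parameters; the courses never published their exact formulas):

```python
def quiz_score(raw_scores, max_attempts=None, penalty_free=2, penalty=0.1):
    """Best permitted attempt, with a deduction per attempt beyond the free ones.

    Illustrative parameters only, not any course's actual formula.
    """
    attempts = raw_scores if max_attempts is None else raw_scores[:max_attempts]
    best = 0.0
    for i, score in enumerate(attempts, start=1):
        extra = max(0, i - penalty_free)           # attempts past the free ones
        best = max(best, score * (1 - penalty * extra))
    return best

attempts = [0.6, 0.8, 1.0]
print(quiz_score(attempts, penalty=0.0))      # unlimited model: 1.0
print(quiz_score(attempts, max_attempts=2))   # capped at two attempts: 0.8
print(quiz_score(attempts))                   # penalty from the 3rd attempt: 0.9
```

The third call shows why I preferred that model: one free revision, but repeated clicking around costs you ten percent per extra attempt.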

This was the process for peer review in the gamification course

Some courses involved written assignments, like the gamification course. They often were used in a peer-review setting. Apart from the fact that some browsers seemed to struggle with the Coursera module for peer assessment, I thought allowing open written assignments and peer-reviewing them was a great idea. Of course, it may be the only way to include these at all, as the number of students is just too big. What I did find daunting was the peer review itself. Not the comments; I found most of them fair (I’m not saying all were fair). I did think that the chosen evaluation method, the rubric, was too coarse to fully evaluate the assignments. How should we mark creativity? What if I could only choose between 0, 1, 2 and 3, and I thought 3 was too much and 2 not enough? How do we make sure that all students are treated equally? How do we make sure that grading is done objectively, and not by comparison with the solutions students themselves had given? I sometimes had the feeling that scores were arbitrary, resulting not so much in lower scores than deserved as in higher ones. I’ve read about one course, which I didn’t follow, where an ‘ideal’ solution was provided by the course leader to serve as an example. I think this part of the course has to be fine-tuned to actually give the grades one deserves.

A variation on the written assignments were the tasks for Mathematical Thinking. They were given in digital format, and the goal was to do the assignments and discuss them in a local study group. Although this is great pedagogy, I wonder whether it doesn’t allow too much freedom: if I had the resilience to do assignments on my own and discuss them, why would we need MOOCs or schools at all? I think some incentive beyond ‘curiosity’ is necessary to actually do this. It also matters for motivation: the fact that students could simply NOT do their work and get away with it wasn’t all that motivating for me.

A third type of assignment was the programming assignment. Both Big Data and Machine Learning had these. I was extremely impressed by the programming assignments in the latter course. Naturally, programming is more suited to automated grading than open essay questions: you ‘just’ evaluate the assignments, often functions you had to write, with arbitrary and random inputs and check whether they give the intended results. In the Machine Learning course this is implemented in a wonderful way: you program your work, you test it, you then submit it, and it is graded instantly, returning the score to the system.
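The grading idea, comparing a student’s function against a reference on random inputs, can be sketched as follows (my own reconstruction; the actual Coursera grader is of course more elaborate):

```python
import random

def autograde(student_fn, reference_fn, trials=100, tol=1e-9):
    """Grade by comparing outputs on random inputs; score out of 100."""
    passed = 0
    for _ in range(trials):
        x = random.uniform(-100, 100)
        if abs(student_fn(x) - reference_fn(x)) < tol:
            passed += 1
    return 100 * passed / trials

# The reference solution and two hypothetical submissions:
reference = lambda x: 3 * x + 1
correct   = lambda x: x + x + x + 1   # different code, same function
buggy     = lambda x: 3 * x - 1       # off by two everywhere

print(autograde(correct, reference))  # 100.0
print(autograde(buggy, reference))    # 0.0
```

Note that the grader never inspects the code itself: two very differently written but equivalent functions get the same score, which is exactly what makes this approach scale to thousands of students.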

At the end of most courses, a score of 70% would earn a ‘statement of completion’.

Intellectual and curiosity level

Programming assignment for Machine Learning

It’s hard to say anything sensible about the intellectual level of the various courses, because it depends so much on the background of the students. Personally I thought the Machine Learning course really was a challenge, and I felt a great sense of accomplishment when I managed to train a neural net to recognize handwriting. But then my background is mathematics and computer science, so I’m bound to like this. The same holds for Web Intelligence and Big Data, although I’ve already written enough about the other factors that came into play. Gamification was an interesting course that really made me think and enabled me to argue why I was for or against a certain point regarding gamification. This was a strong point of that course: it did not shy away from mentioning criticism of gamification, which made sure it wasn’t presented as ‘the next holy grail’ but subjected to a scientific critique of good and bad points. This in itself raised the intellectual level of the course, even though the course material itself was quite simple. As I only followed the Mathematical Thinking course because I was curious about the peer assessment, I did the first half but then concluded that, for me, the course did not offer enough. That is not to say that lots of people couldn’t do with a bit more mathematical thinking. For every course, and so for MOOCs as well, one should really consider the intellectual and curiosity level: is it something you want to know or not?


Of course MOOCs, in this form, are a new phenomenon. New initiatives will have mistakes in them. Sometimes you just have to start an initiative and work from there. But even then, what I find hard to defend is that course leaders aren’t really responsive. An alternative is to delegate this to a support community, something that worked well in the Mathematical Thinking course. When students have questions, or point out obvious mistakes, these have to be dealt with accordingly, or at least acknowledged. In this sense I was disappointed with the Big Data course. In one quiz the grading seemed wrong, and when this was finally acknowledged, it brought up a whole new series of mistakes, even with scores being deducted for correct answers. Again, this doesn’t mean mistakes are not permitted. Machine Learning is in its second run and the forums are full of small mistakes: indices and symbols that were misplaced on certain slides. But there were no big mistakes (especially in assessments), and mistakes were acknowledged and, where possible, addressed. I would think responsiveness of the course leader and/or community is another success factor.


I will finish the Machine Learning course and then stop for the moment, as research and teaching get my priority. I think MOOCs show great potential, but just as in real-life education, the quality of courses and lessons differs. MOOCs are therefore not the holy grail of teaching, and we should be wary of eager managers thinking costs can be cut because of MOOCs. To be good, MOOCs need time and money. And a good teacher. There is nothing new under the sun.

1. Try to set an accurate estimate of the time required, including the nature of the tasks involved.
2. Make sure that lectures are engaging and the quality of the visuals is up to par.
3. Balance out the course so that all the elements fit together: no surprises.
4. Make sure that there are no big mistakes in lectures, and especially in the assessments.
5. Provide a digital narrative covering all, but not more than, the course content (I would suggest making this freely available).
6. Use a limited number of attempts in quizzes, and limit the maximum score after a certain number of attempts.
7. Use a variety of question types in an online quiz.
8. Work out the criteria students have to use when peer-grading assignments, for example by providing worked-out examples and fine-tuning any rubrics.
9. Without peer assessment, written assignments discussed in study groups seem a weak point in the current MOOCs.
10. The mechanism for grading programming assignments is outstanding, and could be used for any programming course.
11. Try to maintain the same level in MOOCs as in ‘genuine’ academic courses.
12. Make sure that the course-leaders or the community respond swiftly to mistakes, discussions or inquiries by students.
13. Think up metrics that can be shown with course descriptions, so students can get an impression of the quality of the course.


Geogebra on the Web

Of course, most of us already know Geogebra. The latest incarnation, GeogebraWeb, is made in HTML5 and is a great next step from Geogebra’s origins as Java software towards software for various platforms, including tablets. In a Kickstarter project, Geogebra is now asking funds for an iPad app. I wonder why. Sure, I can think of some reasons, including the great demand for it, and maybe some native features can be used more efficiently than in HTML5. But it isn’t open. Also, the fact that other tablet users will just have to wait, even though it is jokingly stated that it will eventually become multi-platform for other tablets too, seems strange if the philosophy behind Geogebra is openness. Then why not stick with HTML5?! Or at least make sure that both Android and iPad apps are released on the same day?!

And there are more questions. One of the novel HTML5 features is a Google Drive connection (screenshots above and below).

This is the file it created

The advantage of providing open tools is that this could perhaps work with other online drives as well. How can we be sure that different platforms will communicate with different cloud services, knowing that, for example, Apple and Google do not always see eye to eye? If they don’t, that would be a shame: interoperability should work across all environments.


NOTE: In an earlier post I already mentioned that storing student tasks online would be beneficial, describing the DME (DWO in Dutch).


Dabbling in Sketchometry


After being alerted by a colleague, today I dabbled a bit in Sketchometry, and I like it. It recognizes finger gestures, especially useful on tablets, for geometric constructions. There is a calculus function but, frankly, this adds nothing to the program. Its strength lies in geometry, and in the fact that it works on almost everything, as it is web-based. Furthermore, it connects to Dropbox and other cloud systems. It took a while to get used to the cluttered user interface. Reading this file with the available gestures really helped a lot (although it contains a mistake: making a circle should naturally give a circle, not a straight line). The number of options and features is not very large.

Midpoint gesture

The quintessential construction is the Euler line. I tried to make it on a Nexus. The screen isn’t really big enough for the best experience, and my (fat) fingers did not work all that well. Certain gestures were hard to carry out, especially those involving selecting certain points, like drawing perpendiculars or designating intersections. But even so, it was great to be able to use gestures at all. After around 200 gestures (I had to undo many of them because the wrong ones were recognized) I had something that resembled Euler’s line. I loved the gestures for bisectors and midpoints, the latter being a stroke from point to point with a loop in the middle. With quite a few objects on screen the application did seem to slow down considerably, and some icons seemed to disappear.

This could be because it is a beta version. What bodes less well for the future is that the most recent post is from June 23rd, 2012. I hope this does not mean it is the end product of ‘yet another project’, with no more updates now that the project is finished. One of the strengths of, for example, Geogebra is that it managed to create a large user base and community, working on the software but also creating content.

Of course, the application would be even better if it provided character and formula recognition as in Windows 7, Samsung’s S Note, InftyReader or VisionObjects… 🙂 But overall it is a great concept!


Teaching kids real math with computers: a comment on Wolfram

I only recently read this blogpost on a TED talk by Conrad Wolfram.

Although I agree with most of the blogpost, I think Wolfram paints a caricature of mathematics. Let me make some comments.

I think Wolfram generalizes too much across different countries. I don’t know that much about the US situation, but I have the impression that procedural fluency and computation are valued much more there than in Asia or Europe, something that Michael Pershan also points out in this excellent video. In the Netherlands, conceptual understanding is deemed more important, as is the connection to the real world. In this respect Wolfram exaggerates the percentage involved in computation (80% computation by hand).

This brings me to another point. Wolfram is highly involved in the development of the Mathematica software (which his brother Stephen created). He even shows it off in his talk. Undoubtedly, Mathematica and Wolfram Alpha are great pieces of software that can perform awesome calculations. This, however, makes clear that using a tool to get rid of computation is what is central in his talk, not the other three points.

Mind you, these three steps are very important, and remind me of Polya on problem solving. I just don’t agree with Wolfram’s fixation on discarding the third point. Wolfram does see a place for teaching ‘computation’ and says we “only [should] do hand calculations where it makes sense”. He also talks about what ‘the basics’ are, and makes a comparison with technology and engineering in cars. Here it would have helped if Wolfram had acknowledged the difference between blackbox/whitebox systems (see Buchberger):

In the “white box” phase, algorithms must be studied thoroughly, i.e. the underlying theory must be treated completely and algorithmic examples must be studied in all details. In the black box phase, problem instances from the area can be solved by using symbolic computation software systems. This principle can be applied recursively.

The whole section where Wolfram addresses criticisms of his approach sounds far too defensive. He does not agree that mathematics is dumbed down, and that using computers is just ‘pushing the buttons’. This has to do with the traditional discussion between ‘use to learn’ and ‘learn to use’. Again, I think Wolfram’s whole argumentation is a bit shaky: first he attacks learning algorithms with pen and paper, but then he does see a fantastic use in understanding processes and procedures. This is where Wolfram applauds programming as a subject. Then he shows many applications with sliders and claims: feel the math! He shows an application for increasing the number of sides of a polygon and claims this is an “early step into limits”. By using a slider? I’m reminded of an applet I used in my math class when teaching the concept of slopes and differentiation. I thought it worked pretty well… until I found out students were just dragging the two points together. So what is actually learned?
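For the record, what dragging the two points together computes is simply a secant slope. A few lines of Python (a sketch of the underlying mathematics, not of the applet itself) show the numbers a student would see, and why watching them approach 2 is not the same as understanding a limit:

```python
def secant_slope(f, x, h):
    """Slope of the line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

# Secant slopes of f(x) = x^2 at x = 1; the derivative there is 2.
f = lambda x: x ** 2
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, secant_slope(f, 1.0, h))  # slopes equal 2 + h, approaching 2
```

Dragging the points together is just shrinking h; the conceptual step, why the limit of 2 + h as h goes to 0 deserves the name ‘slope at a point’, is exactly what the slider does not teach.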

As another example, there is a part where Wolfram substitutes the power 2 with a 4, uses Mathematica, and then says ‘same principles are applied’.

If he means that the same piece of software was used, then yes, this is the same principle. If his claim is that the mathematics behind solving a higher-order equation uses the same principles as solving a quadratic equation, then I wonder whether this really is the message you want to convey to students. Of course the outside world has much more difficult equations, and that brings me to a final point, concerning a ‘means to an end’. Wolfram does not define what the actual goal of mathematics is. If it is ‘getting the result’, then one could argue that using a computer over doing it by hand makes sense. However, if (and that was Wolfram’s claim) the goal is ‘teaching’, then I think mathematics brings more than just some results. Wolfram seems to see mathematics as a supporting science for other subjects, and does not seem to acknowledge a broader view of mathematics as a subject aimed at problem solving. Which is strange given Wolfram’s initial words on really teaching mathematics.

By no means am I claiming this is an extensive critique of Wolfram’s talk; these are just a few points that, to me, warrant the conclusion that Wolfram paints too much of a caricature of maths education. I’d rather stick with the conclusion of the blogpost this started with (translation): “Put aside those textbooks with tasks, and tell students what inspired you to learn your subject. Tell stories. Or get people into the classroom to show inspiring examples. Let students look at problems in a different way, and see how they can address these problems with the help of mathematics.” Amen.


Storing student work and checking geometry tasks

One of the, in my opinion, most impressive features I have seen in mathematics software is the recent fusion of the Freudenthal Institute’s (FI) DME (Digital Mathematical Environment), good content, and the ability to plug in components like Geogebra. Of course, it also is software that I know well, because my thesis also used the DME.

For the DPICT project of the FI, several lesson series were also translated into English. One of them concerned geometry. Articles and papers from the project will appear, but I think the material warrants an impression in screenshots.

Logging in as a student:

Accessing the Geometry module:

The geometry module consists of several activities:

One of the activities starts with a task not uncommon in Dutch textbooks:

The geometric construction on the right-hand side can be checked. There are open text boxes (which can’t be checked as correct or incorrect, but can be accessed by the teacher). Note that the [c] task, a drag-and-drop task, was answered incorrectly.

After correcting:

Another task where both constructions and answers are checked:

Teachers can see how students performed:

Teacher looks at the task shown earlier:


Rapid Miner datamining

I recently got into a discussion about data mining. I actually think we have only just started to scratch the surface of big data, learning analytics and educational data mining. As, for example, the Digital Mathematical Environment (DME, see here) can produce large logfiles of all students’ actions, it would be interesting to see whether we can find patterns with machine learning techniques. One thing I would like to find out is whether a collaboration with a computer science department is possible. I was just dabbling in RapidMiner, which works and looks great. Using PaREn’s Automatic System Construction, I trained a model on a sample Iris dataset. In the future I will see if I can use RapidMiner further. Another option is Weka.
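For readers without RapidMiner, the same kind of experiment can be sketched in plain Python with a 1-nearest-neighbour classifier on a few hand-typed, Iris-like measurements (sepal length and petal length here; the numbers are illustrative, not the real dataset):

```python
def nearest_neighbour(train, point):
    """Predict the label of the training example closest to the point."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

# (sepal length, petal length) -> species, typed in by hand:
train = [
    ((5.1, 1.4), "setosa"),
    ((4.9, 1.5), "setosa"),
    ((6.4, 4.5), "versicolor"),
    ((6.9, 4.9), "versicolor"),
    ((6.3, 6.0), "virginica"),
]

print(nearest_neighbour(train, (5.0, 1.4)))  # setosa
print(nearest_neighbour(train, (6.5, 4.6)))  # versicolor
```

Tools like RapidMiner and Weka wrap exactly this kind of model (and far more sophisticated ones) in a graphical workflow; the logfile-mining question is whether such patterns also emerge from students’ actions.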


Khan Academy

On the MathEd mailing list I’m on, there was an inquiry about Khan Academy. I gave my short description/opinion:

Khan Academy (KA) is often associated with a “pedagogy” called “flipping the classroom”, which denotes that instruction shifts to outside the classroom through the use of videos, freeing up classroom time for useful discussions and doing exercises in class. Personally I don’t see the novelty in that, as many (good) teachers already use many ways to motivate students. However, at least in the US people seem to take up the videos, especially in homeschooling settings, so perhaps this engagement could be seen as a positive thing. It also depends on the math ed culture in a country.

The videos vary greatly in quality, both mathematically and esthetically. Khan himself has said that the “ugly” videos were often the most successful. Recently -also see the 60 Minutes documentary- there have been some indications that the videos aren’t watched all that much. To improve the content, KA has teamed up with people like Vi Hart, whom we know from the great Pi & Shakespeare video. As mentioned before, Bill Gates has taken on Khan as his protégé, providing him with ample funds. Because of this backing I think KA will probably have a better chance of surviving the boom in digital mathematics tools.

A second part of the academy is the exercise section. It has good learning analytics, and a great visual map for presenting dependencies and progress in a curriculum. Still, this is the part I am underwhelmed by. A bit too “drill and practice” for my taste, and it checks only final answers. This interactive part should, in my opinion, be improved much more.

So, as with many things, a critical view is necessary, but not without acknowledging the positive aspects.


About theCrowdNL (originally in Dutch)

I have been following the discussion about the theCrowdNL initiative on Twitter for a while now. In my own words, I would describe the initiative as a network organisation that tries to bring professional development a bit closer to teachers themselves. They themselves write:

The Crowd is an opportunity for self-aware education professionals who:

1. Want to excel (mastery, learning from and with each other). Getting pupils to learn is every teacher’s great challenge. That requires not only solid subject knowledge but also mastery of classroom management, didactics and pedagogy. Above all, it requires a learning attitude on the teacher’s part.

2. Want to be free to make their own choices (autonomy, control over their own professional development). You don’t sit around waiting until your department colleagues get moving or until the school organises something. You want to find answers to your learning questions quickly and on your own terms. You seek contact with colleagues who…

3. Want to contribute to the education of the future (meaning, education 3.0). And who knows, together you might set something in motion that many schools and politicians fail to achieve: innovative education for the future.

TheCrowd was officially launched on 1 February, including the new website, which, by the way, looks very nice. I already had some reservations about the initiative, but after yesterday they have only grown stronger. I thought it would be good, however confronting, to write them all down. This opinion is not set in stone, but these are things that make me frown.

From 50 to 500
The ambition is to really launch theCrowd only once they grow, within six months, from 50 early adopters to 500 participants. I don’t quite understand that arbitrary threshold of 500. You can already network with two people; why wouldn’t this work with only a handful? Although you could then ask whether a new organisation is worth the effort. Does this mean we now have 50 ‘early adopters’ who will recruit other people who are also willing to invest 500 euros, as the site states? The assumption is that employees take professional development into their own hands. Fine, but aren’t the very people who respond to theCrowd, the ones who can do this, precisely the people who have already taken it into their own hands? A network of like-minded people is nice, but does it really add anything?

500 euros per year
It is also unclear to me why participation in theCrowd has to cost 500 euros at all. Judging by some reactions online, people think this is a small amount compared to the money schools set aside per employee for professional development. That is true. But you may then expect something in return, and what that is remains unclear to me. Moreover, if 500 members don’t sign up it won’t go ahead, so either theCrowd ceases to exist after all, or we gain a network organisation that is apparently in demand and that takes in 500 × 500 = 250,000 euros per year. I find it hard to imagine collecting and spending such an amount every year. I would have found it much nicer if that had not been necessary. I now understand that the ‘business model’ is still to be discussed. That seems good to me. Not because of the money itself, but because of the idea behind it.

I also find the system of experts opaque. Given the aforementioned “business model”, it looks like a large consultancy pool where non-early-adopters will soon be able to obtain expertise. No, actually “buy” it, because regardless of what you take or bring, you pay (invest) 500 euros. Again, it is no doubt true that professional development costs money and that the money is there, but I had expected more of the model behind it. I read quite a few tweets regretting that “it was about money again”. Well, that is what you get when you ask for an amount without making clear what that amount is for. It is then cheap to accuse critics of this amount of “talking about money”. No, what matters is how you set up your network organisation. Try doing that without a large sum, with only an administrative contribution, and shape it along the lines of LETS. An assumption that seems to underlie the current model is that you have to pay for the best people. I thought “the wisdom of crowds” was supposed to be central, or the idea that everyone has a talent and that if we exchange those talents, everyone develops professionally.

What I also consider a bad sign is that, as far as I can see, the ‘early adopters’ mainly consist of people who fulfil other roles “for their work”. I saw ADEs (Apple Distinguished Educators), SCTs (Smart Certified Trainers), SEEs (Smart Exemplary Educators), APS (consultancy), many self-employed professionals, T3 (Texas Instruments), and no doubt more. I would not consider it a good thing if theCrowd, with its quarter of a million per year, organised activities mainly by hiring experts from this group, whether or not at a rate more reasonable than the usual consultancy fees. And no, I am saying nothing at all about the noble motives of the initiators. You could say: “that is great, because expertise”, and perhaps that is true. I would then like to see that expertise shared free of charge. To put one more cat among the pigeons: theCrowd should not be the vehicle for landing consultants assignments. Hiring for money is what the other channel is for; at theCrowd, other ideals should play a role.

All in all, I don’t quite understand theCrowd. My initial impression was that it was old wine in new bottles: a network organisation. Now that the site is up, I see it is different wine in new bottles: a kind of consultancy pool. Of course I understand that people might say all of the above is very negative. I believe these are points a new initiative should have a good answer to. Maybe that answer will still come. What I am not very sensitive to are remarks that it is “still good that people take initiatives” and questions about “what my own contribution is”. In other words: “do something yourself then”. Fortunately, I do that daily. What is it called? Taking charge of my own professional development.