If I wrote a blog post every time I posted some positive or negative critique on Twitter, I would have a day job doing it. It’s just too time-consuming to write them all the time. But recently, after a dialogue on the ALT mailing list, I think I need to write something on….. MOOCs again. It was sparked by a useless discussion on acronyms like MOOCs and SPOCs and what their purported differences are. I responded:
We need a MOCUA: a Massive Online Course on Useless Acronyms.
Sorry. Got carried away there.
Then today I read a tweet on ‘research about MOOCs’. Yes, research would be great, but please don’t let it touch the -in my opinion- uninteresting field of xMOOC and cMOOC definitions (see this post where I wrote about it). And if you construct a new framework, then let it be a solid one with clear definitions. Needless to say, the paper that was referenced (here) could perhaps have been better. Some comments:
No-one would deny that quality and learning are important, but to me the paper feels like fragmented sources were glued together.
The section with the history and background on MOOCs is ok, but far from new.
Then the main part of the paper starts, aimed at classifying MOOCs and ending with a 12-dimension classification for MOOCs (note that there are typos in the paper: it says ten dimensions). The 12 dimensions are: the degree of openness, the scale of participation (massification), the amount of use of multimedia, the amount of communication, the extent to which collaboration is included, the type of learner pathway (from learner-centred to teacher-centred and highly structured), the level of quality assurance, the extent to which reflection is encouraged, the level of assessment, how informal or formal it is, autonomy, and diversity. The paper then continues to exemplify the framework by categorizing five MOOCs.
The categories are far from clear:
Openness. What is meant by this? Further on in the paper ‘open source’ and ‘creative commons’ are mentioned, but looking at the CCK MOOC I see ‘Second Life’, Elluminate and ‘uStream’, which aren’t open source as far as I know. Encouragement of sharing through creative commons is good, but is it open if you ‘just’ encourage? Another high scorer used Google Apps. And what is the role of Open Standards? Some courses score ‘medium’, but why? To me one of the courses (OE) seemed open, hosted on Canvas. (Another sloppy mistake here, with Audacity instead of Udacity.)
The massiveness dimension seems clearer. I suspect it’s based on the number of enrollments, but this is not explained clearly enough.
Use of multimedia: have instances been counted? What IS multimedia? An image? Movies? Video-conferencing? Do ten 1-minute movies count just as much as one 10-minute movie?
Degree of communication. Forum posts? Are tweets communication? Blogs? One-way communication is communication as well (it seems so, because reflective blogs count further down in the paper).
Degree of collaboration. When are people collaborating? When they react to someone else’s forum posts? Or is more communication needed? Group work? Group products?
etc….
I feel these criteria are fairly arbitrary, if not in their selection, then in the way they are operationalized.
The paper suddenly ends with a ‘7Cs of Learning Design framework’. Conveniently seven Cs, with no references and not rooted in evidence.
It also strikes me that many references are from blog posts. Now I fully understand that society is changing, and personally I welcome the fact that the web has so much to offer when it comes to (well written) blog posts. However, I am a bit skeptical when it comes to quoting numerous blog posts as ‘evidence’ for these developments.
In conclusion, I think ‘research’ is a bit too flattering a term for a framework that’s not well defined and not rooted in well-established literature. But then again, I’m not a professor of course.
In the last months I have followed several Massive Open Online Courses. Some I followed because I was interested in the topic (e.g. Machine Learning), some because I was curious how the lecturer would address the topic at hand (e.g. Mathematical Thinking). Let me first admit: I have not (yet) finished all of them. But because I followed several MOOCs, from varying institutions and on varying topics, I think I can give some kind of opinion. The 4 MOOCs I followed or am following (I’m not counting the ones I just enrolled in to see what content there was) are:
Machine Learning. Running August 20th for 10 weeks. Still on course.
Gamification. Running August 27th to October 8th. Final scores are still to be determined, but I have already reached the pass mark of 70/100.
Introduction to Mathematical Thinking. Running from September 17th for 7 weeks. Started this, but on a personal level did not learn that much new; furthermore, in terms of the ‘feel’ of the method, it resembled the gamification course.
Web Intelligence and Big Data. August 27th for 10 weeks. Did not finish it for reasons that will become clear below.
Organisation and planning
There’s a vast difference in the amount of time needed for the various courses. According to the course syllabus, the Machine Learning course should have taken me far less time than, for example, the Mathematical Thinking course. Even if I take into account the fact that I already know a lot about mathematics, and not so much about Machine Learning, just the simple fact that there are weekly programming assignments in the former makes it -in my opinion- a much more time-consuming course. Add to this the number of lectures and the duration of the individual videos. I think it would be good if, where possible, the actual amount of time spent by students was monitored and used for more accurate estimates, maybe even taking prior knowledge into account.
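As a rough illustration of what such monitoring could feed into (this is my own sketch, not something any of the platforms offers; the function and the numbers are invented), a platform that logs study time per week could publish a robust estimate like this:

```python
# Hypothetical sketch: deriving a weekly time estimate from logged study sessions.
# Field names and numbers are made up for illustration.
from statistics import median

def weekly_time_estimate(minutes_per_student):
    """Given, per student, the total minutes logged in a course week,
    return a robust (median-based) estimate plus a pessimistic bound."""
    totals = sorted(minutes_per_student)
    if not totals:
        return None
    return {
        "median_minutes": median(totals),
        "75th_percentile_minutes": totals[int(0.75 * (len(totals) - 1))],
    }

# Example: logged weekly totals (in minutes) for a handful of students.
print(weekly_time_estimate([240, 300, 180, 420, 360, 510, 270]))
```

Publishing something like the median and a pessimistic bound per week would already be far more informative than a single fixed syllabus estimate.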
Quality of the materials
Issues like these can cause demotivation
One thing that struck me was the varying quality of the elements of the courses. Some had engaging speakers, others weren’t as good. Some managed to communicate difficult content, others had trouble doing this. Sometimes visuals added to the lecture, sometimes text was hardly readable. One of the most frustrating experiences was feeling as if material tested in the quizzes and/or other assignments wasn’t covered in the lectures. The different elements of a course should fit together. If this lack of coherence is accompanied by many mistakes, demotivation sets in pretty quickly. For me this certainly was the case with the Big Data course. Mind you, I did manage to get reasonable scores, but for me that just wasn’t good enough. Sometimes it even seemed as if I was expected to have already read up on the topic at hand. I think a MOOC should be a self-contained course; otherwise -although more costly- a MOOC is nothing more than a university course with a web presence. In general, most of the courses do not have a specific text- or course-book, only reading suggestions, which don’t always fit the scope of topics covered, or even the difficulty. I think it would be a good idea if every course had a custom book in a digital format that could be downloaded.
Assessments
Typical checkbox question (here from Machine Learning)
Assessment was done in various ways in the courses. The most common tests were multiple choice quizzes, with some open (numerical) questions in the Machine Learning course. The in-lecture questions were good if an explanation was provided after ‘failing’ three times. If that explanation wasn’t there, I had the feeling it was more ‘multiple guess’. The way in which the quizzes that counted for a score were implemented varied considerably.

One model is to allow infinite (in practice: 100) attempts at a quiz. I think this corresponds mostly with formative assessment: the tests aren’t there to judge you but to let you practice. Quizzes did, however, always count towards the final score. So in a world where in the end we do have some sort of high-stakes test (pass or fail), it could lead to situations whereby 50 attempts yield 100 out of 100 points. Now you can be perfectly happy with that as a teacher; after all, in the end students knew their answers. I feel that ‘unlimited attempts’ was a bit too lenient.

A second model gave students a limited number of attempts; five was used in some courses. This at least made sure that practicing and re-sitting a test was limited, but students could still revise. Revision could not be done by scrutinizing all the questions and correct answers, only your own answers. Because most quizzes randomized their answers (I did not see questions pooled randomly from a larger set of questions), yielding a different order of answers and even different answers to the questions, it wasn’t always trivial to improve your score. The questions themselves were a mix of radio button questions, permitting only one answer, and checkbox questions, permitting more than one answer. The latter often were true/false questions, but because combined they made up one question with many answer possibilities, I actually found them harder than the radio button questions. Just crossing off the answers that weren’t realistic wasn’t possible in these cases.

Another model was a variation of limited attempts that imposed a penalty for extra attempts, sometimes from the second but also from the third or fourth attempt. I preferred the ones where a deduction was imposed from the third attempt: it allowed students to revise their work once without penalty (a second attempt), but made sure that students could not just click around, because then they would get a penalty when they tried for real. Finally, some exams allowed only one attempt. Given that the questions often were closed and multiple choice, I think the questions have to be awfully unambiguous; I wasn’t sure this was always the case.
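To make the attempt-penalty model concrete, here is a minimal sketch of how such a deduction could be computed. The numbers (two free attempts, 10% of the maximum score per extra attempt) are my own assumptions; the courses did not publish their exact formulas.

```python
# Sketch of the attempt-penalty model I preferred: the first two attempts are
# free, and each further attempt costs a fixed fraction of the maximum score.
# Penalty size and number of free attempts are assumptions, not a platform's formula.

def quiz_score(raw_score, attempt, max_score=100.0,
               free_attempts=2, penalty_per_attempt=0.10):
    """Return the recorded score for a given attempt number (1-based)."""
    extra = max(0, attempt - free_attempts)
    penalty = extra * penalty_per_attempt * max_score
    return max(0.0, raw_score - penalty)

# A perfect answer on the 2nd attempt keeps 100; on the 5th it drops to 70.
print(quiz_score(100, attempt=2))   # 100.0
print(quiz_score(100, attempt=5))   # 70.0
```

With a scheme like this, practising is still possible, but endlessly clicking through the answer options stops being a winning strategy.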
This was the process for peer review in the gamification course
Some courses involved written assignments, like the gamification course. They were often used in a peer review setting. Apart from the fact that some browsers seemed to struggle with the Coursera module for peer assessment, I thought allowing open written assignments and having peers review them was a great idea. Of course, it may be the only way in which you can add these, as the number of students is just too big. What I did find daunting was the peer review itself. Not the comments themselves; I found that most of them were fair (I’m not saying all of them). I did think that the chosen evaluation method, the rubric, was too coarse to fully evaluate the assignments. How should we mark creativity? What if I could only choose between 0, 1, 2 and 3, and I thought 3 was too much and 2 not enough? How do we make sure that all students are treated equally? How do we make sure that grading is done objectively and not by comparison with the solutions that students themselves had given? I sometimes had the feeling that scores were arbitrary, not resulting in a lower score than deserved but more often in a higher one. I’ve read about one course, which I didn’t follow, where an ‘ideal’ solution was provided by the course leader to serve as an example. I think this part of the course has to be fine-tuned to actually give the grades that one deserves.
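One small, purely illustrative idea for making peer scores less arbitrary: collect several peer reviews per rubric criterion and take the median rather than the mean, so a single overly generous or harsh reviewer carries less weight. The rubric criteria and scores below are invented for the sake of the example.

```python
# Sketch: aggregating several peer reviews per rubric criterion with the median,
# so one outlier reviewer has limited influence on the final score.
from statistics import median

def aggregate_peer_scores(reviews):
    """reviews: list of dicts mapping rubric criterion -> integer score (0-3)."""
    criteria = reviews[0].keys()
    return {c: median(r[c] for r in reviews) for c in criteria}

reviews = [
    {"creativity": 2, "feasibility": 3, "argumentation": 1},
    {"creativity": 3, "feasibility": 3, "argumentation": 2},
    {"creativity": 1, "feasibility": 2, "argumentation": 2},
]
print(aggregate_peer_scores(reviews))
# {'creativity': 2, 'feasibility': 3, 'argumentation': 2}
```

Combined with worked-out example solutions from the course leader, this kind of aggregation could take some of the arbitrariness out of the grades.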
A variation on the written assignments were the tasks for Mathematical Thinking. They were given in digital format, and the goal was to complete the assignments and discuss them in a local study group. Although this is great pedagogy, I wonder whether it doesn’t allow for too much freedom: if I had the self-discipline to do assignments on my own, and discuss them, why would we need MOOCs or schools at all? I think some incentive is necessary to actually do this, apart from ‘curiosity’. It also affects motivation: just the fact that students could simply NOT do their work and get away with it wasn’t all that motivating for me.
A third type of assignment was the programming assignment. Both Big Data and Machine Learning had these. I was extremely impressed by the programming assignments in the latter course. Naturally, programming is more suited to automated grading than open essay questions: you ‘just’ have to evaluate the assignments, often functions you had to write, with arbitrary and random numbers and see if they give the intended results. In the Machine Learning course this is implemented in a wonderful way: you program your work, you test it, you then submit it, and it is graded instantly, with the score returned to the system.
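For those unfamiliar with how such automated grading can work, here is a minimal sketch: the grader calls the submitted function and a hidden reference implementation on random inputs and compares the results. This is my own illustration, not the actual Coursera submission mechanism.

```python
# Minimal auto-grader sketch: compare a student's function against a hidden
# reference implementation on random inputs and return a percentage score.
import random

def reference_solution(x):
    return x * x + 1            # the staff's hidden implementation

def student_solution(x):
    return x ** 2 + 1           # what the student submitted

def grade(student_fn, reference_fn, trials=100, tol=1e-9):
    passed = 0
    for _ in range(trials):
        x = random.uniform(-100, 100)
        if abs(student_fn(x) - reference_fn(x)) < tol:
            passed += 1
    return 100.0 * passed / trials   # instant score, e.g. 100.0

print(grade(student_solution, reference_solution))
```

Because the inputs are random, simply hard-coding the expected outputs doesn’t work, which is what makes this kind of grading both instant and reasonably robust.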
At the end of most courses, a score of 70% would earn a ‘statement of completion’.
Intellectual and curiosity level
Programming assignment for Machine Learning
It’s hard to say anything sensible about the intellectual level of the various courses, because it depends so much on the background of the students. Personally I thought the Machine Learning course really was a challenge, and I felt a great sense of accomplishment when I managed to train a neural net to recognize handwriting. But then my background is mathematics and computer science, so I’m bound to like this. The same holds for Web Intelligence and Big Data, although I’ve already written enough about other factors that came into play. Gamification was an interesting course that really made me think and enabled me to argue why I was for or against a certain point regarding gamification. This was a strong point of that course: it did not shy away from mentioning criticism of gamification, which made sure it wasn’t presented as ‘the next holy grail’ but subjected to a scientific critique of its good and bad points. This in itself raised the intellectual level of the course, even though the course material itself was quite simple. As I only followed the Mathematical Thinking course because I was curious about the peer assessment, I did the first half but then concluded that, for me, the course did not offer enough. That is not to say that lots of people couldn’t do with a bit more mathematical thinking. For every course, and so for MOOCs as well, one should really consider the intellectual and curiosity level: is it something you want to know or not?
Responsiveness
Of course MOOCs, in this form, are a new phenomenon. New initiatives will have mistakes in them. Sometimes you just have to start an initiative and work from there. But even then, what I find hard to defend is that course leaders aren’t really responsive. When students have questions or point out obvious mistakes, these have to be dealt with accordingly, or at least acknowledged. Another option is to delegate this to a support community, something that was a positive of the Mathematical Thinking course. In this sense I was disappointed with the ‘Big Data’ course. In one quiz the grading seemed wrong, and when this finally was acknowledged, it brought up a whole new series of mistakes, even with scores being deducted when answers were correct. Again, this doesn’t mean that mistakes are not permitted. ‘Machine Learning’ is in its second run, and the forums are full of small mistakes: indexes and symbols that were misplaced on certain slides. But there weren’t big mistakes (especially in the assessments), and mistakes were acknowledged and, if possible, addressed. I would say responsiveness of the course leader and/or community is another success factor.
Conclusion
I will finish the Machine Learning course and then stop for the moment, as research and teaching get my priority. I think MOOCs show great potential, but just as in real-life education, the quality of courses and lessons differs. Therefore, MOOCs are not the holy grail of teaching, and we should be wary of eager managers who think costs can be cut because of MOOCs. To be good, MOOCs need time and money. And a good teacher. There is nothing new under the sun. To conclude, some recommendations:
1. Try to give an accurate estimate of the time required, including the nature of the tasks involved.
2. Make sure that lectures are engaging and the quality of the visuals is up to par.
3. Balance out the course so that all the elements fit together: no surprises.
4. Make sure that there are no big mistakes in lectures, and especially in the assessments.
5. Provide a digital narrative with all, but not more than, the course content (I would suggest making this freely available).
6. Use a limited number of attempts in quizzes, and limit the maximum score after a certain number of attempts.
7. Use a variety of question types in an online quiz.
8. Work out the criteria students have to use when peer grading assignments, for example by providing worked-out examples, and fine-tune any rubrics.
9. When peer assessment is not used, written assignments in study groups seem a weak point in current MOOCs.
10. The mechanism for grading programming assignments is outstanding, and could be used for any programming course.
11. Try to maintain the same level in MOOCs as in ‘genuine’ academic courses.
12. Make sure that the course-leaders or the community respond swiftly to mistakes, discussions or inquiries by students.
13. Think up metrics that can be shown with course descriptions, so students can get an impression of the quality of the course.