OK, so I’m not the type of person who likes to keep long personal logs or elaborate mindmaps of my thoughts. I prefer short 140-character tweets. I have followed quite a few MOOCs already, finished half of them and ‘cheated’ on one because I wasn’t going to make such a concept map anyway (#lak13). In this case, however, I thought I’d make an exception. It’s for Octel, an open course in technology enhanced learning. It could easily be that this is both the first and the last post and that I continue via Twitter, but then it will have been fun while it lasted. Will I learn new stuff? I don’t know. What I do know is that TEL has had my interest for many years now.
Starting as a computer science and mathematics teacher in 1998, I immediately used quite a lot of technology in the classroom: digital resources, a website, some dabblings with the first VLEs (the first versions of Moodle) and maths Java applets. At first this was all quite fragmented.
I then started to participate in projects for technology and maths in which we tried to design a genuine curriculum around these tools, so that they would not just be an add-on but would have a proper place in the curriculum.
In my later role as head of ICT I had to think about strategy and pedagogy: how are we going to use these tools? Can we entice teachers to use them? Why do they use them, or why don’t they want to? Although I think I was personally a frontrunner, I have always thought that change should be gradual and develop almost organically. In a sense, if a lot of teachers aren’t sympathetic towards a certain change, it is OUR job to show what could be gained by adopting it, not to grumble about teachers not wanting anything.
In my PhD and now in my work as a lecturer at the University of Southampton (mathematics education), the use and pedagogy of ICT tools plays a large role. I think I have some novel and explicit thoughts about this, but find that discussing them with others ‘sharpens the mind’. Thus Octel.
For me, the main question about TEL is how to incorporate it in daily school practice without being evangelical about it. Of course, some tools are nice and interesting to use, but do they give much in return for the investment? Wouldn’t a face-to-face classroom discussion be more efficient? When would TEL be beneficial? And would it be beneficial for everyone (social inclusion), not only the white upper class? How can we show teachers how they can use TEL, again without being evangelical? And, finally, do we have the patience that is needed to integrate TEL, or should we just wait and do nothing? Maybe change will come about anyway, but not just tomorrow.
In the last months I have followed several Massive Open Online Courses. Some I followed because I was interested in the topic (e.g. Machine Learning), some because I was curious how the lecturer would address the topic at hand (e.g. Mathematical Thinking). Let me first admit: I have not (yet) finished all of them. But because I followed several MOOCs, from varying institutions and on varying topics, I think I can give some kind of opinion. The four MOOCs I followed or am following (not counting the ones I just enrolled in to see what content there was) are:
Machine Learning. Running from August 20th for 10 weeks. Still on course.
Gamification. Running August 27th to October 8th. Final scores are still to be determined, but I have already reached the pass mark of 70/100.
Introduction to Mathematical Thinking. Running from September 17th for 7 weeks. I started this but, on a personal level, did not learn that much new. Moreover, as far as the ‘feel’ of the method goes, it resembled the gamification course.
Web Intelligence and Big Data. Running from August 27th for 10 weeks. I did not finish it, for reasons that will become clear below.
Organisation and planning
There’s a vast difference in the amount of time needed for the various courses. According to the course syllabus, the Machine Learning course should have taken me far less time than, for example, the Mathematical Thinking course. Even taking into account that I already know a lot about mathematics and not so much about Machine Learning, the simple fact that the former has weekly programming assignments makes it, in my opinion, a much more time-consuming course. Added to this are the number of lectures and the duration of the individual videos. I think it would be good if, where possible, the actual amount of time spent by students was monitored and used for more accurate estimates, maybe even taking prior knowledge into account.
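If the time students actually spend were logged per course week, deriving a more honest estimate is straightforward. A minimal sketch in Python; the log format and the numbers are purely my own illustration, not any platform’s real data:

```python
from statistics import median

# Hypothetical log: minutes each student actually spent, per course week.
minutes_spent = {
    "week 1": [95, 120, 80, 150, 110],
    "week 2": [200, 240, 180, 260, 210],  # week with a programming assignment
}

# Publish the median rather than the mean, so a few very slow
# (or very fast) students don't skew the estimate.
for week, minutes in minutes_spent.items():
    print(f"{week}: about {median(minutes) / 60:.1f} hours")
```

A platform could go further and bucket these medians by students’ self-reported prior knowledge.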
Quality of the materials
Issues like these can cause demotivation
One thing that struck me was the varying quality of the elements of the courses. Some had engaging speakers, others weren’t as good. Some managed to communicate difficult content, others had trouble doing this. Sometimes visuals added to the lecture, sometimes text was hardly readable. One of the most frustrating experiences was feeling as if material tested in the quizzes and/or other assignments wasn’t covered in the lectures. The different elements of any course should fit together. If this lack of coherence is accompanied by many mistakes, demotivation sets in pretty quickly. For me this certainly was the case with the Big Data course. Mind you, I did manage to get reasonable scores, but for me this just wasn’t good enough. Sometimes it even seemed to be expected that I had already read up on the topic at hand. I think a MOOC should be a self-contained course; otherwise, although more costly, a MOOC is nothing more than a university course with a web presence. In general, most of the courses do not have a specific text- or course-book, only reading suggestions, which don’t always fit the scope of the topics covered, or even the difficulty. I think it would be a good idea if every course had a custom book in a digital format that could be downloaded.
Assessments
Typical checkbox question (here from Machine Learning)
Assessment was done in various ways in the courses. The most common tests were multiple choice quizzes, with some open (numerical) questions in the Machine Learning course. The inline lecture questions were good if an explanation was provided after ‘failing’ three times; if that explanation wasn’t there, it felt more like ‘multiple guess’. The way the scored quizzes were implemented varied considerably.

One model is to allow infinite (in practice: 100) attempts for a quiz. This corresponds mostly with formative assessment: the tests aren’t there to judge you but to let you practise. Quizzes did, however, always count for the final score. So in a world where, in the end, we do have some sort of high-stakes test (pass or fail), it could lead to situations where 50 attempts yield 100 out of 100 points. Now you can be perfectly happy with that as a teacher; after all, in the end students knew their answers. I felt that ‘unlimited attempts’ was a bit too lenient.

A second model gave students a limited number of attempts, 5 in some courses. This at least made sure that practising and re-sitting a test was limited, but students could still revise. Revision could not be done by scrutinizing all the questions together with the correct answers, only your own answers. Because most quizzes randomized their answers (I did not see questions pooled at random from a larger set of questions), yielding a different order of answers and even different answers to the questions, it wasn’t always trivial to improve your score. The questions themselves were a mix of radio button questions, permitting only one answer, and checkbox questions, permitting more than one. The latter were often true/false statements, but because combined they made up one question with many answer possibilities, I actually found them harder than the radio button questions: simply crossing out the unrealistic answers wasn’t possible.

Another model was a variation of limited attempts that imposed a penalty for extra attempts, sometimes from the second but also from the third or fourth attempt. I preferred the ones where the deduction started from the third attempt: it allowed students to revise their work once without penalty, but made sure they couldn’t just click around, because they would be penalized when they tried for real. Finally, some exams allowed only one attempt. Given that the questions were often closed and multiple choice, I think the questions then have to be awfully unambiguous, and I wasn’t sure this was always the case.
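To make the penalty model I preferred concrete, here is a minimal sketch; the two free attempts and the 10% deduction per extra attempt are my own example numbers, not taken from any of these courses:

```python
def quiz_score(raw_score, attempt, free_attempts=2, penalty=0.10):
    """Attempt-penalty model: the first `free_attempts` attempts count in
    full; every attempt beyond that deducts `penalty` of the raw score."""
    extra = max(0, attempt - free_attempts)
    return max(0.0, raw_score * (1 - penalty * extra))

print(quiz_score(100, 2))  # second attempt, no penalty: 100.0
print(quiz_score(100, 5))  # fifth attempt, 30% deducted: 70.0
```

With these numbers a student can revise once for free, while a string of thoughtless attempts wipes out most of the score.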
This was the process for peer review in the gamification course
Some courses involved written assignments, like the gamification course, often in a peer review setting. Apart from the fact that some browsers seemed to struggle with the Coursera module for peer assessment, I thought allowing open written assignments and peer-reviewing them was a great idea. In fact, it is probably the only way to include them, as the number of students is just too big. What I did find daunting was the peer review itself. Not the comments themselves: I found most of them fair (I’m not saying all were). But I did think the chosen evaluation method, the rubric, was too coarse to fully evaluate the assignments. How should we mark creativity? What if I could only choose between 0, 1, 2 and 3, and I thought 3 was too much and 2 not enough? How do we make sure that all students are treated equally? How do we make sure that grading is done objectively and not by comparison with the solutions the grading students themselves had given? I sometimes had the feeling that scores were arbitrary, not resulting in a lower score than deserved but more often in a higher one. I’ve read about one course, which I didn’t follow, where an ‘ideal’ solution was provided by the course leader to serve as an example. I think this part of the course has to be fine-tuned to actually give the grades one deserves.
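One cheap safeguard against arbitrary individual reviewers is to aggregate several peers per rubric criterion. A sketch, assuming the 0–3 rubric scale mentioned above; the median rule and the criteria names are my suggestion, not how Coursera actually computed grades:

```python
from statistics import median

# Hypothetical rubric: scores (0-3) that four different peers gave
# on each criterion of one submission.
peer_scores = {
    "clarity":     [2, 3, 2, 3],
    "creativity":  [1, 3, 3, 2],
    "feasibility": [2, 2, 3, 2],
}

# The median per criterion blunts a single outlier reviewer,
# whether too generous or too harsh.
final = sum(median(scores) for scores in peer_scores.values())
print(f"final grade: {final} / {3 * len(peer_scores)}")  # 7.0 / 9
```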
A variation on the written assignments were the tasks for Mathematical Thinking. They were given in digital format, and the goal was to do the assignments and discuss them in a local study group. Although this is great pedagogy, I wonder whether it doesn’t allow too much freedom: if I had the self-discipline to do assignments on my own and discuss them, why would we need MOOCs, or schools, at all? I think some incentive beyond ‘curiosity’ is necessary to actually do this. It also matters for motivation: the fact that students could simply NOT do their work and get away with it wasn’t all that motivating for me.
A third type of assignment was the programming assignment. Both Big Data and Machine Learning had these, and I was extremely impressed by the ones in the latter course. Naturally, programming is more suited to automated grading than open essay questions: you ‘just’ evaluate the submissions, often functions students had to write, on arbitrary, random inputs and check whether they give the intended results. The Machine Learning course implemented this in a wonderful way: you program your work, you test it, you then submit it, and it is graded instantly, returning the score to the system.
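The principle is easy to sketch: run the submitted function and a hidden reference implementation on the same random inputs and count the matches. Everything below (the toy task, the function names, the scoring rule) is my own illustration, not how the actual Coursera grader works:

```python
import random

def reference(x, y):
    # The instructor's hidden reference solution for a toy task.
    return x * y + x

def autograde(student_fn, trials=100, seed=42):
    """Compare a submission against the reference on random inputs;
    the fraction of matching outputs becomes the score (0-100)."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        x, y = rng.randint(-100, 100), rng.randint(-100, 100)
        if student_fn(x, y) == reference(x, y):
            passed += 1
    return 100.0 * passed / trials

print(autograde(lambda x, y: x * (y + 1)))  # algebraically correct: 100.0
print(autograde(lambda x, y: x * y))        # buggy submission scores lower
```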
At the end of most courses, a score of 70% would earn a ‘statement of completion’.
Intellectual and curiosity level
Programming assignment for Machine Learning
It’s hard to say anything sensible about the intellectual level of the various courses, because it depends so much on the background of the students. Personally I thought the Machine Learning course really was a challenge, and I felt a great sense of accomplishment when I managed to train a neural net to recognize handwriting. But then my background is mathematics and computer science, so I’m bound to like this. The same holds for Web Intelligence and Big Data, although I’ve already written enough about the other factors that came into play there. Gamification was an interesting course that really made me think and enabled me to argue why I was for or against a certain point regarding gamification. This was a strong point of that course: it did not shy away from mentioning criticism of gamification, which made sure it wasn’t presented as ‘the next holy grail’ but given a scientific critique of its good and bad points. This in itself raised the intellectual level of the course, even though the course material itself was quite simple. As I only followed the Mathematical Thinking course because I was curious about the peer assessment, I did the first half but then concluded that, for me, the course did not offer enough. This is not to say that lots of people couldn’t do with a bit more mathematical thinking. For every course, and so for MOOCs as well, one should really consider the intellectual and curiosity level: is it something you want to know or not?
Responsiveness
Of course, MOOCs in this form are a new phenomenon. New initiatives will have mistakes in them; sometimes you’ll just have to start an initiative and work from there. But even then, what I find hard to defend is that some course leaders aren’t really responsive. When students have questions, or point out obvious mistakes, these have to be dealt with accordingly, or at least acknowledged. Another option is to delegate this to a support community, something that worked well in the Mathematical Thinking course. In this sense I was disappointed with the Big Data course: in one quiz the grading seemed wrong, and when this was finally acknowledged, it brought up a whole new series of mistakes, even with scores being deducted for answers that were correct. Again, this doesn’t mean that mistakes are not permitted. Machine Learning is in its second run and the forums are full of small mistakes: indexes and symbols that were misplaced on certain slides. But there were no big mistakes (especially in assessments), and mistakes were acknowledged and, where possible, addressed. I would say responsiveness of the course leader and/or community is another success factor.
Conclusion
I will finish the Machine Learning course and then stop for the moment, as research and teaching get priority. I think MOOCs show great potential, but just as in real-life education, the quality of courses and lessons differs. MOOCs are therefore not the holy grail of teaching, and we should be wary of eager managers who think costs can be cut because of MOOCs. To be good, MOOCs need time and money. And a good teacher. There is nothing new under the sun. Summing up, my recommendations:
1. Try to give an accurate estimate of the time involved, including the nature of the tasks.
2. Make sure that lectures are engaging and the quality of the visuals is up to par.
3. Balance out the course so that all the elements fit together: no surprises.
4. Make sure that there are no big mistakes in lectures, and especially in the assessments.
5. Provide a digital narrative with all, but not more than, the course content (I would suggest making this freely available).
6. Use a limited number of attempts in quizzes, and cap the maximum score after a certain number of attempts.
7. Use a variety of question types in an online quiz.
8. Work out the criteria students have to use when peer grading assignments, for example by providing worked-out examples, and fine-tune any rubrics.
9. When peer assessment is not used, written assignments in study groups seem a weak point in the current MOOCs.
10. The mechanism for grading programming assignments is outstanding, and could be used for any programming course.
11. Try to maintain the same level in MOOCs as ‘genuine’ academic courses.
12. Make sure that the course-leaders or the community respond swiftly to mistakes, discussions or inquiries by students.
13. Think up metrics that can be shown with course descriptions, so students can get an impression of the quality of the course (a sketch of what such metrics could look like follows below).
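As a closing illustration of that last point, here is what a set of published quality metrics might look like; the fields and numbers are entirely my own invention, not an existing standard:

```python
from dataclasses import dataclass

@dataclass
class CourseMetrics:
    """Hypothetical quality indicators to publish alongside a course
    description; the fields are suggestions, not an existing standard."""
    median_hours_per_week: float        # measured, not estimated (point 1)
    completion_rate: float              # fraction of active starters who pass
    median_forum_response_hours: float  # responsiveness of staff/community (point 12)

print(CourseMetrics(median_hours_per_week=8.5,
                    completion_rate=0.12,
                    median_forum_response_hours=6.0))
```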