One of the recent episodes on e-Literate TV showcases Michael Feldstein interviewing Simon Buckingham Shum of the Open University in the United Kingdom at the 2013 MOOC Research Initiative Conference. If you are interested in issues related to the use of technology in support of education, e.g., MOOCs, online learning, etc., be sure to check out this new channel and join the conversation. Simon is Professor of Learning Informatics at the UK Open University’s Knowledge Media Institute (KMi), an 80-strong lab at the convergence of learning sciences, web media, collaboration tools, and the social/semantic web.
In the video above, Michael opens by telling us that Simon does research on understanding and analyzing student writing and participation in online discussions. While this may sound like a niche area of research, participation through writing, especially in discussion forums, is the meat and potatoes of eLearning, and yet there isn’t much research done in this area. KMi’s main areas of interest in this instructional strategy are:
- Depth of Learning
- Dispositions for Learning
If you are reading this, then you probably teach or design eLearning experiences, and your LMS currently provides some sort of basic analytic data for these writing and participatory activities. For example, in my recent intro to web design online course, there was a total of 1,636 threads in the discussion area. This total includes students’ answers to my questions as well as their follow-up responses to each other’s posts. But what does that tell me about student learning? Are my students just chatty about web design, or did they learn something?
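Raw counts like that 1,636 are easy to pull from any forum export. Here is a minimal sketch (the export format and student names are made up purely for illustration) of what such an LMS-style tally amounts to:

```python
from collections import Counter

# Hypothetical export of a discussion area: (author, post_text) pairs.
posts = [
    ("ana", "My answer to question 1: use an external stylesheet..."),
    ("ben", "I agree with Ana, but what about CSS floats?"),
    ("ana", "Good point! This article on floats helped me."),
    ("cara", "ok"),
]

# Raw participation counts -- the kind of number an LMS already reports.
posts_per_student = Counter(author for author, _ in posts)
print(posts_per_student)
```

Notice that the tally counts a one-word “ok” the same as a substantive reply, which is exactly why raw counts alone say little about learning.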
What Simon is interested in (and I think all exceptional online instructors are as well) is to go beyond this raw data, analyze the quality of students’ contributions, and perhaps see whether students are applying other students’ diverse perspectives to their learning. He also shared that they are interested in understanding how willing a student is to being “stretched and challenged out of their comfort zone. Which is [a criterion] for learning because [otherwise] they won’t make any progress.”
In the interview Michael asks Simon, “So you are trying to do this with software. What kinds of cues is the software looking for to determine [depth and dispositions for learning]?”
Simon shared that they are trying to look at learner resilience and resourcefulness. He also reminded us that learners (especially online, I believe) are “risk averse.” Students (especially online) often panic when they can’t find the answer. But if a learner is resilient, they will keep trying to answer the question. And if a learner is resourceful, then even when stuck and not knowing what to do, they will still have strategies for moving forward, e.g., ask for help, search for answers on the internet, etc. These are crucial aspects of buzzwords like 21st Century Learning Skills and Digital Media Literacy (DML).
I couldn’t agree more with Simon. My personal teaching philosophy is one of facilitation and project-based learning. I see myself as a facilitator in that my goal is to help carry the conversation forward and make sure learning is happening. I never want my students to think that I am the one with all the answers, because 1) I am not, and after this class they may never get the opportunity to talk with me again, and 2) I don’t want to respond to emails all day 🙂 So my strategy is to curate lots of acceptable resources for their learning. If you do not do this for web design students, they may get the wrong answer from Google. So this is a must.
So when I can see that students are emailing me or mentioning resources in the discussion area, I know they are being resilient and not giving up. This is learning. Similarly, the objective of Simon’s software is to pick up on students’ DML, e.g., their resilience, resourcefulness, asking peers for help, curiosity, desire to dig deeper into the subject matter, etc. If it could pick up and analyze what Simon called the “traces that you leave behind as a learner,” this would be a truer picture of learning. Here are some examples he shared of the software in use:
Analyzing chat transcripts from a 2-hour webinar. This is daunting for us to review (unless there is a reason) because there is a lot of superficial conversation going on. If you are a fan of #LRNchat like me, you know that it is sometimes 50% zany, which could unfortunately be because of my own personal sarcasm and witticism. So going back and reviewing the transcripts found at http://lrnchat.com/ would be difficult, but there are learning connections going on there. What Simon wants to do is train a machine to see where some really meaningful exchanges have taken place. To take it further, what if these meaningful exchanges happened precisely when the instructor was prompting discussion and debate? And if meaningful exchanges didn’t follow the instructor’s prompts, the machine could throw up a warning sign.
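To make the idea concrete, here is a toy heuristic scorer, entirely my own invention and nothing like Simon’s actual software, that flags chat messages showing crude depth signals (asking a question, giving a reason, sharing a link, writing at length) instead of banter:

```python
# Toy heuristic for separating "meaningful exchange" candidates from banter.
# The signals and threshold below are invented for illustration only.
def exchange_score(message: str) -> int:
    """Count crude depth signals present in one chat message."""
    signals = [
        "?" in message,                # asks a question
        "because" in message.lower(),  # gives a reason
        "http" in message,             # shares a resource
        len(message.split()) > 15,     # substantive length
    ]
    return sum(signals)

transcript = [
    "lol totally",
    "I disagree because inline styles break the cascade; see http://example.com",
    "anyone else stuck on floats?",
]

# Keep messages that show at least two depth signals.
flagged = [m for m in transcript if exchange_score(m) >= 2]
```

A real system would of course learn these signals from labeled data rather than hard-code them, but even this crude version separates the “lol totally” chatter from the reasoned, resource-sharing reply.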
Determining where to put an instructor’s “scarce” attention. As we all know, teaching online takes more time than teaching face to face. I have only taught one or two sections of 16 students at a time. One online class is enough, let alone two, especially when I also have another full-time job separate from teaching. When I taught two, I practically went bald, and Starbucks had to tell me to find another place to shop because they ran out of coffee. I can’t imagine the professional adjuncts out there who teach at 5+ online universities. A close friend of mine holds the gold medal for teaching at 8! But what if these classes didn’t just have 15 students? What if they had 5 thousand? According to Wikipedia, Udacity holds the Guinness World Record for the largest MOOC (see Massive Open Online Course > North America). Their CS101 has had enrollments of over 300,000 students! Students need, value, and expect… even in a massive course. So what if the analytic data told the instructor or teams of instructors which students or groups needed the most attention? Or what if the machine could simulate that personalized feedback?
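As a sketch of what such triage might look like (the metrics and weights here are invented for illustration, not taken from the interview), suppose the analytics pipeline boils each student down to a couple of activity numbers, and the instructor then works through a queue sorted by need:

```python
# Hypothetical per-student activity summaries from an analytics pipeline.
students = {
    "ana":  {"posts": 12, "unanswered_questions": 0},
    "ben":  {"posts": 1,  "unanswered_questions": 3},
    "cara": {"posts": 4,  "unanswered_questions": 1},
}

def attention_score(stats: dict) -> float:
    # Invented weighting: unanswered questions count heavily,
    # and near-silence (few posts) also raises the score.
    return stats["unanswered_questions"] * 2 + 1 / (stats["posts"] + 1)

# The instructor's triage queue: neediest student first.
queue = sorted(students, key=lambda s: attention_score(students[s]), reverse=True)
print(queue)  # ['ben', 'cara', 'ana']
```

With 15 students this is overkill; with 300,000 it is the difference between attention going where it is needed and attention going nowhere.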
In the interview Michael then asks Simon, “What is different in a MOOC environment? What do we have to grapple with in this research environment?”
Simon shared that the risk MOOCs run is that they try to do eLearning at a massive scale while also trying to do some assessment. This could be either formative or summative feedback. But are these assessments truly authentic to what the students are expected to learn? Are students just taking multiple-choice quizzes, or are they solving real problems, doing real things, applying what they are learning, sharing what they are learning, etc.? How can Udacity have authentic assessment with 300,000 students? Simon mentioned some ideas, some of which have worked and some of which haven’t. For example, do we hire hundreds of mentors for the MOOC? Do the peers assess each other?
What excites Simon is that in a MOOC there are “massive amounts of data.” Forty years ago, our educational research normally consisted of small samples of learners. When I did my action research in graduate school, I was praised for having close to 60 participating students. But imagine having 300,000 students and 300,000 data sets!
Michael then asks Simon, “What is still hard about massive amounts of data? What new problems does having that massive data set have for you?”
Simon described the problems as both ethical and technical. We have all heard the saying that the data will speak for itself! Simon reminded us that the researcher has to make an ethical decision about what to research and what to ignore. Has the data been cleaned? How has the data been visualized? What has been left out? What does the researcher want us to see? What do they not want us to see? As with the DML I mentioned earlier, there is a new literacy here: faculty need to be able to read, and also capture, this data.
Simon thinks that in the future we won’t be capturing just “clicks” in our online courses, i.e., analytics, but also things he mentioned like depth of and dispositions for learning. The future may hold gathering biometric data, and he mentioned the Quantified Self Movement. But do we really want Big Brother watching? We heard this month from Wired about the program called PRISM, a clandestine mass electronic surveillance data mining program launched in 2007 by the National Security Agency (see How The NSA Almost Killed The Internet). Scary stuff…