
Coming up for air (and spouting off)

A real-time chronicle of a seasoned professor who has just completed giving his first massively open online course.

Almost a month has passed since I last posted to this blog. Keeping my MOOC running took up so much time that, once it was over, I was faced with a huge backlog of other tasks to complete. Taking a good look at the mass of data from the course is just one of several post-MOOC activities that will have to wait until the New Year. So readers looking for statistics, analyses, and conclusions about my MOOC will, I am afraid, have to wait a little bit longer. Like most others giving these early MOOCs, we are doing so on top of our existing duties; the time involved has yet to be figured into university workloads.

One issue came up recently when I put on my “NPR Math Guy” hat and talked with Weekend Edition host Scott Simon about my MOOC experience.

In the interview, I remarked that MOOCs owed more to Facebook than to YouTube. This observation has been questioned by some people, who believe Khan Academy’s use of YouTube was the major inspiration. In making this comment, they are echoing the statement made by former Stanford Computer Science professor Sebastian Thrun when he announced the formation of Udacity.

In fact, I made my comment to Scott with my own MOOC (and many like it) in mind. Though I have noted in earlier posts to this blog how I studied Sal Khan’s approach in designing my own course, having now completed my first MOOC I am even more convinced than before that the eventual (we hope) success of MOOCs will be a consequence of Facebook (or social media in general) rather than of Internet video streaming.

The reason I felt sure this would be the case is that, in most disciplines, the key to real learning has always been bi-directional human–human interaction (even better, in some cases, multi-directional, multi-person interaction), not unidirectional instruction.

What got the entire discussion about MOOCs off in the wrong direction – and with it the public perception of what they are – is the circumstance of their birth, or more accurately, of their hugely accelerated growth when a couple of American Ivy League universities (one of them mine) got in on the act.

But it’s important to note that the first major-league MOOCs all came out of Stanford’s Computer Science Department, as did the two spinoff MOOC platforms, Udacity and Coursera. When MIT teamed up with Harvard to launch their edX platform a few months later, it too came from their Computer Science Department.

And there’s the rub. Computer Science is an atypical case when it comes to online learning. Although many aspects of computer science involve qualitative judgments and conceptual reasoning, the core parts of the subject are highly procedural, and lend themselves to instruction-based learning and to machine evaluation and grading. (“Is that piece of code correct?” Let the computer run it and see if it performs as intended.)
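The “run it and see” grading loop I have in mind can be sketched in a few lines of Python. The function names here are mine, purely illustrative; this is not any particular platform’s autograder:

```python
# Minimal sketch of machine grading for procedural work: run a submitted
# function against instructor-written test cases and return the fraction
# of tests passed. A crash on any input simply counts as a failed test.

def grade_submission(submitted_fn, test_cases):
    """Score submitted_fn against a list of (args, expected) pairs."""
    passed = 0
    for args, expected in test_cases:
        try:
            if submitted_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # crashing code fails that test, but grading continues
    return passed / len(test_cases)

# Example: auto-grading a student's "absolute value" exercise.
def student_abs(x):
    return x if x >= 0 else -x

tests = [((3,), 3), ((-5,), 5), ((0,), 0)]
score = grade_submission(student_abs, tests)  # 1.0: all tests pass
```

The point is that the entire judgment of correctness reduces to executing code, which is exactly why this kind of course was the natural first candidate for a MOOC.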

Instructional courses that teach students how to carry out various procedures, and that can be assessed to a large degree by automatic grading (often multiple-choice questions), are the low-hanging fruit for online education. But what about the Humanities, the Arts, and much of Science, where instruction is only a small part of the learning process, and a decidedly unimportant part at that, and where machine assessment of student work is at best a goal in the far distant future, if indeed it is achievable at all?

In the case of my MOOC, “Introduction to Mathematical Thinking,” the focus was the creative/analytic mathematical thinking process and the notion of proof. But you can’t learn how to think a certain way or how to prove something by being told or shown how to do it any more than you can learn how to ride a bike by being told or shown. You have to try for yourself, and keep trying, and falling, until it finally clicks. Moreover, apart from some very special, and atypical, simple cases, neither thinking nor proofs can be machine graded. Proofs are more like essays than calculations. Indeed, one of the things I told my students in my MOOC was that a good proof is a story that explains why something is the case.

For the vast majority of students, discussion with (and getting feedback from) professors, TAs, and other students struggling to acquire problem solving ability and master abstract concepts and proofs, is an essential part of learning. For those purposes, the online version does not find its inspiration in Khan Academy as it did for Thrun, but in Facebook, which showed how social interaction could live on the Internet.

When the online version of Thrun’s Stanford AI class attracted 160,000 students, he did not start a potential revolution in global higher education, but two revolutions, only the first of which he was directly involved in. The first one is relatively easy to recognize and understand, especially for Americans, who for the most part have never experienced anything other than instruction-based education.

For courses where the goal is for the student to achieve mastery of a set of procedures (which is true of many courses in computer science and in mathematics), MOOCs almost certainly will change the face of higher education. Existing institutions that provide little more than basic, how-to instruction have a great deal to fear from MOOCs. They will have to adapt (and there is a clear way to do so) or go out of business.

If I want to learn about AI, I would prefer to do so from an expert such as Sebastian Thrun. (In fact, when I have time, I plan on taking his Udacity course on the subject!) So too will most students. Why pay money to attend a local college and be taught by a (hopefully competent) instructor of less stature when you can learn from Thrun for free?

True, Computer Science courses are not just about mastery of procedures. There is a lot to be learned from the emphases and nuances provided by a true expert, and that’s why, finances aside, I would choose Thrun’s course. But at the end of the day, it’s the procedural mastery that is the main goal. And that’s why that first collection of Computer Science MOOCs has created the popular public image of the MOOC student as someone watching canned instructional videos (generally of short duration and broken up by quizzes), typing in answers to questions to be evaluated by the system.

But this kind of course occupies the space in the overall educational landscape that McDonalds does in the restaurant business. (As someone who makes regular use of fast food restaurants, this is most emphatically not intended as a denigratory observation. But seeing utility and value in fast food does not mean I confuse a Big Mac with quality nutrition.)

Things are very, very different in the Humanities, Arts, and most of Science (and some parts of Computer Science), including all of mathematics beyond basic skills mastery – something that many people erroneously think is an essential prerequisite for learning how to do math, all evidence from people who really do learn how to do math to the contrary.

[Ask the expert. We don’t master the basic skills; we don’t need them because, early on in our mathematical learning, we acquired one – yes, just one – fundamental ability: mathematical thinking. That’s why the one or two kids in the class who seem to find math easy seem so different. In general, they don’t find math easy, but they are doing something very different from everyone else. Not because they are born with a “math gene”. Rather, instead of wasting their time mastering basic skills, they spent that time learning how to think a certain way. It’s just a matter of how you devote your learning time. It doesn’t help matters that some people managed to become qualified math teachers and professors seemingly without figuring out that far more efficient path, and hence add their own voice to those who keep calling for “more emphasis on basic skills” as an essential prerequisite to mathematical power.]

But I digress. To get back to my point: while the popular image of a MOOC centers on lecture videos and multiple-choice quizzes, what Humanities, Arts, and Science MOOCs (including mine) are about is community building and social interaction. For the instructor (and the very word “instructor” is hopelessly off target in this context), the goal in such a course is to create a learning community: an online experience in which thousands of self-motivated individuals from around the world can come together for a predetermined period of intense, human–human interaction, focused on a clearly stated common goal.

We know that this can be done at scale, without the requirement that the participants are physically co-located or even that they know one another. NASA used this approach to put a man on the moon. MMOs (massively multiplayer online games – from which acronym MOOCs got their name) showed that the system works when the shared goal is success in a fantasy game world.

Whether the same approach works for higher education remains an open question. And, for those of us in higher education, what a question! A question that, in my case at least, has proved irresistible.

This, then, is the second MOOC revolution: the social MOOC. Its outcome is far less evident than that of the first.

The evidence I have gathered from my first attempt at one of these second kinds of MOOC is encouraging, or at least, I find it so. But there is a long way to go to make my course work in a fashion that even begins to approach what can be achieved in a traditional classroom.

I’ll pursue these thoughts in future posts to this blog — and in future versions of my Mathematical Thinking MOOC, of which I hope to offer two variants in 2013.

Meanwhile, let me direct you to a recent article that speaks to some of the issues I raised above. It is by my legendary colleague in Stanford’s Graduate School of Education, Larry Cuban, who expresses his skepticism that MOOCs will prove to be an acceptable replacement for much of higher education.

To be continued …


Peer grading: inventing the light bulb


With the deadline for submitting the final exam in my MOOC having now passed, the students are engaging in the Peer Evaluation process. I know of just two cases where this has been tried in a genuine MOOC (where the M means what it says), one in Computer Science, the other in the Humanities; both encountered enormous difficulties and, as a result, a lot of student frustration. My case was no different.

Anticipating problems, I had given the class a much simplified version of the process – with no grade points at stake – at the end of Week 4, so they could familiarize themselves with the process and the platform mechanics before they had to do it for real. That might have helped, but the real difficulties only emerged when 1,520 exam scripts started to make their way through the system.

By then the instructional part of the course was over. The class had seen and worked through all the material in the curriculum, and had completed five machine-graded problem sets. Consequently, there were enough data in the system to award certificates fairly if we had to abandon the peer evaluation process as a grading device, as happened for that humanities MOOC I mentioned, where the professor decided on the fly to make that part of the exam optional. So I was able to sleep at night. But only just.

With over 1,000 of the students now engaged in the peer review process, and three days left to the deadline for completing grading, I am inclined to see the whole thing through to the (bitter) end. We need the data that this first trial will produce so we can figure out how to make it work better next time.

Long before the course launched, I felt sure that there were two things we would need to accomplish, and accomplish well, in order to make a (conceptual, proof-oriented) advanced math MOOC work: the establishment of (and data gathering from) small study groups in which students could help one another, and the provision of a crowd-sourced evaluation and grading system.

When I put my course together, the Coursera platform supported neither. They were working on a calibrated peer review module, but implementing the group interaction side was still in the future. (The user-base growth of Coursera has been so phenomenal, it’s a wonder they can keep the system running at all!)

Thus, when my course launched, there was no grouping system, nor indeed any social media functionality other than the common discussion forums. So the students had to form their own groups using whatever media they could: Facebook, Skype, Google Groups, Google Docs, or even the local pub, bar, or coffee shop for co-located groups. Those probably worked out fine, but since they were outside our platform, we had no way to monitor the activity – an essential functionality if we are to turn this initial, experimental phase of MOOCs into something robust and useful in the long term.

Coursera had built a beta-release, peer evaluation system for a course on Human Computer Interaction, given by a Stanford colleague of mine. But his needs were different from mine, so the platform module needed more work – more work than there was really time for! In my last post, I described some of the things I had to cope with to get my exam up and running. (To be honest, I like the atmosphere of working in startup mode, but even in Silicon Valley there are still only 24 hours in a day.)

It’s important to remember that the first wave of MOOCs in the current, explosive, growth period all came out of computer science departments, first at Stanford, then at MIT. But CS is an atypical case when it comes to online learning. Although many aspects of computer science involve qualitative judgments and conceptual reasoning, the core parts of the subject are highly procedural, and lend themselves to instruction-based learning and to machine evaluation and grading. (“Is that piece of code correct?” Just see if it runs as intended.)

The core notion in university level mathematics, however, is the proof. But you can’t learn how to prove something by being told or shown how to do it any more than you can learn how to ride a bike by being told or shown. You have to try for yourself, and keep trying, and falling, until it finally clicks. Moreover, apart from some very special, and atypical, simple cases, proofs cannot be machine graded. In that regard, they are more like essays than calculations. Indeed, one of the things I told my students was that a good proof is a story that explains why something is the case.

Feedback from others struggling to master abstract concepts and proofs can help enormously. Study groups can provide that, along with the psychological stimulus of knowing that others are having just as much difficulty as you are. Since companies like Facebook have shown us how to build platforms that support the creation of groups, that part can be provided online. And when Coursera is able to devote resources to doing it, I know it will work just fine. (If they want to, they can simply hire some engineers from Facebook, which is little more than a mile away. I gather that, like Google before it, the fun period there has long since passed and fully vested employees are looking to move.)

The other issue, that of evaluation and grading, is more tricky. The traditional solution is for the professor to evaluate and grade the class, perhaps assisted by one or more TAs (Teaching Assistants). But for classes that number in the tens of thousands, that is clearly out of the question. Though it’s tempting to dream about building a Wikipedia-like community of dedicated, math-PhD-bearing volunteers, who will participate in a mathematical MOOC whenever it is offered – indeed I do dream about it – it would take time to build up such a community, and what’s more, it’s hard to see there being enough qualified volunteers to handle the many different math MOOCs that will soon be offered by different instructors. (In contrast, there is just one Wikipedia, of course.)

That leaves just one solution: peer grading, where all the students in the class, or at least a significant portion thereof, are given the task of grading the work of their peers. In other words, we have to make this work. And to do that, we have to take the first step. I just did.
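I won’t describe Coursera’s calibrated peer review module in detail here, but the underlying idea is easy to sketch: calibrate each grader against a script the instructor has also marked, then combine weighted peer scores. The following Python is purely my own illustration, with made-up names and numbers, not Coursera’s actual algorithm:

```python
# Illustrative sketch of calibrated peer grading (NOT any platform's real
# algorithm): each grader first marks a calibration script the instructor
# has also graded; a grader's weight falls off with their error on that
# script, and a submission's final mark is the weighted average of its
# peer scores.

def grader_weight(grader_score, instructor_score, max_error=10):
    """Return a weight in (0, 1]; 1 means perfect agreement on calibration."""
    error = min(abs(grader_score - instructor_score), max_error)
    return 1 - error / (max_error + 1)

def combine_scores(peer_scores, weights):
    """Weighted average of the peer scores for one submission."""
    return sum(s * w for s, w in zip(peer_scores, weights)) / sum(weights)

# Three peers graded the same exam question out of 10. On the calibration
# script (instructor's mark: 7) they gave 7, 5, and 2, so the third
# grader's opinion counts for much less in the final mark.
weights = [grader_weight(g, 7) for g in (7, 5, 2)]
final = combine_scores([8, 9, 4], weights)  # roughly 7.4 out of 10
```

The appeal of some such scheme is that it needs no experts beyond the instructor’s handful of calibration gradings, which is the only way evaluation can scale to tens of thousands of exam scripts.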

Knowing just how many unknowns we were dealing with, my expectations were not high, and I tried to prepare the students for what could well turn out to be chaos. (It did.) The website description of the exam grading system was littered with my cautions and references to being “live beta”. On October 15, when the test run without the grading part was about to launch, I posted yet one more cautionary note on the main course announcements page:

… using the Calibrated Peer Review System for a course like this is, I believe, new. (It’s certainly new to me and my assistants!) So this is all very much experimental. Please approach it in that spirit!

Even so, many of the students were taken aback by just how clunky and buggy the thing was, and the forums sprang to life with exasperated flames. I took solace in the recent release of Apple Maps on the iPhone, which showed that even with the resources and expert personnel available to one of the world’s wealthiest companies, product launches can go badly wrong – and we were just one guy and two part-time, volunteer student assistants, working on a platform being built under us by a small startup company sustained on free Coke and stock options. (I’m guessing the part about the Coke and the options, but that is the prevalent Silicon Valley model.)

At which point, one of those oh-so-timely events occurred that are often described as “Acts of God.” Just when I worried that I was about to witness, and be responsible for starting, the first global, massive open online riot (MOOR) in a math class, Hurricane Sandy struck the Eastern Seaboard, reminding everyone that a clunky system for grading math exams is not the worst thing in the world. Calm, reasoned, steadying, constructive posts started to appear on the forum. I was getting my feedback after all. The world was a good place once again.

Failure (meaning things don’t go smoothly, or maybe don’t work at all) doesn’t bother me. If it did, I’d never have become a mathematician, a profession in which the failure rate in first attempts to solve a problem is somewhere north of 95%. The important thing is to get enough data to increase the chances of getting it right – or far more likely, just getting it better – the second time round. Give me enough feedback, and I count that “failure” as a success.

As Edison is said to have replied to a young reporter about his many failed attempts to construct a light bulb, “Why would I ever give up? I now know definitively over 9,000 ways that an electric light bulb will not work. Success is almost in my grasp.” (Edison supposedly failed a further 1,000 times before he got it right. Please don’t tell my students that. We are just at failure 1.)

If there were one piece of advice I’d give to anyone about to give their first MOOC, it’s this: remember Edison.

To be continued …


I'm Dr. Keith Devlin, a mathematician at Stanford University. I gave my first free, open, online math course in fall 2012, and have been offering it twice a year since then. This blog chronicles my experiences as they happen.
