How is it going this time?

My Mathematical Thinking MOOC is now starting its ninth week out of a possible ten. (The last two weeks are optional, for those wanting to get more heavily involved in the mathematics.)

At the start of the week, registrations were at 38,221, of whom 24,342 had visited the site at least once, with 2,818 logging on in the previous week. But none of those numbers is significant – by which I mean significant in terms of the course I am offering. (People drop in on MOOCs for a variety of reasons besides taking the course.)

The figure of most interest to me is the number of students who completed and submitted the weekly Problem Set. In my sense, those are the real course students. As of last week, they numbered 1,013, and all of them will almost certainly complete the course. That is a big class. The undergraduate class I taught at Princeton this past spring (using my MOOC as one of several resources) had just 9 students.

My MOOC has two main themes: understanding how mathematicians abstract formal counterparts to everyday notions, and how they make use of those abstractions to extend our cognitive understanding of our world.

For much of the time the focus is on language, since that is the mechanism used to formulate and define abstract concepts and prove results about them.

The heavy focus on language and its use in reasoning gives the course appeal to two different kinds of students: those looking to investigate some issues of language use and sharpen their reasoning skills, and those wanting to develop their analytic problem solving skills for mathematics, science, or engineering. (The latter are the ones who typically do the optional final two weeks of the course.)

The pedagogy underlying the course is Inquiry-Based Learning.

To make that approach work in a MOOC, where many students have no opportunity to interact directly with a mathematics expert, I have to design the course in a way that encourages interaction with other students, either on the course Discussion Forum on the course website or using social media or local meetings.

Early in the course, I identify a few students whose Forum posts indicate good metacognitive skills and appoint them “Community Teaching Assistants”. A badge against their name then tells other students that it is worthwhile paying attention to their posts. The CTAs – there are currently thirteen of them – and I also share a back-channel discussion forum, where we can discuss any problematic issues before posting on the public channel.

It seems to work acceptably well. To date, there have been over 3,700 original posts (from 957 students) and 3,639 response comments on the course Discussion Forum.

Since the only practical form of regular performance evaluation in a MOOC involves machine grading – which boils down to some form of multiple choice questions – it’s not possible to ask students to construct mathematical proofs. The process is far too creative.

Instead, I ask them to evaluate proofs (more precisely, purported proofs). To help them do this, I provide a five-point rubric that requires them to view each argument from different perspectives, assigning a “grade” on a five-point numerical scale. See here for the current version of the evaluation rubric.

Notice that the rubric has a sixth category, where they have to summarize their five individual-category evaluations into a single, overall “grade” on the same five-point scale. How they perform the aggregation is up to them. The overall goal is to help the students come to appreciate the different features of proofs, as used in present-day mathematics: first examine the argument from five different perspectives, then integrate those assessments into a single evaluation.
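The two-stage structure – five category scores, then one overall grade – can be sketched in code. This is purely illustrative: the category names below are placeholders (the post does not list them), and the rounded-mean aggregation is just one possible choice, since the course deliberately leaves the aggregation method up to each student.

```python
# Hypothetical sketch of the rubric's two-stage structure. The category
# names and the averaging rule are illustrative assumptions, not the
# course's actual rubric.

CATEGORIES = ["category 1", "category 2", "category 3",
              "category 4", "category 5"]  # placeholder names

def overall_grade(scores):
    """Aggregate five category scores (each on a 0-4 scale) into a
    single overall grade on the same scale.

    A rounded mean is one simple aggregation; students may weight the
    categories however they see fit.
    """
    if len(scores) != len(CATEGORIES):
        raise ValueError("expected one score per rubric category")
    if any(not 0 <= s <= 4 for s in scores):
        raise ValueError("scores must lie on the five-point (0-4) scale")
    return round(sum(scores) / len(scores))

print(overall_grade([4, 3, 3, 2, 4]))  # mean 3.2 rounds to 3
```

A student who preferred to emphasize logical correctness could, for instance, replace the plain mean with a weighted one – the exercise is about forming a defensible judgment, not applying a fixed formula.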

After the students have completed an evaluation of a purported proof, their (numerical) evaluations are machine graded (more about this in a moment), after which they view a video of me evaluating the same proof so they can compare their assessment to that of one expert.

The goal in comparing their evaluation to mine is not to learn to assign numerical evaluation marks the way I do. For one thing, evaluation of proofs is a very subjective, holistic thing. For another, having been evaluating proofs by both students and experts for many decades, I have achieved a level of expertise that no beginner could hope to match. Moreover, I almost never evaluate using a rubric.

Rather, the point of the exercise is to help the students come to understand what makes an argument (1) a proof, and (2) a good proof, by examining it from different perspectives. (For a discussion of the approach to proofs I take, see my most recent post on my other blog, profkeithdevlin.org.)

To facilitate this, the entire process is set up as a game with rules. (Of course, that is true for any organized educational process, but in the case of my MOOC the course design is strongly influenced by video games – see many of the previous posts in that blog for more on game-based learning, starting here.)

In particular, the points they are awarded (by machine grading) for how close they get to my numerical proof-evaluation score are, like all the points the Coursera platform gives out in my course, very much like the points awarded in a typical video game. They are important in the moment, but have no external significance. In particular, success in the course and the award of a certificate does not depend on a student’s points total. My course offers a learning experience, not a paper qualification. (The certificate attests that they had that experience.)
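A proximity-based award of this kind might look like the following sketch. To be clear, this is an assumption for illustration only – the post does not describe Coursera's actual scoring rule, and the point values and decay rate here are invented.

```python
# Illustrative sketch (NOT the Coursera platform's actual algorithm) of
# awarding points by proximity to the instructor's evaluation score:
# full points for an exact match, fewer as the scores diverge.

def proximity_points(student_score, instructor_score, max_points=10):
    """Award points based on how close a student's numerical proof
    evaluation is to the instructor's, on the five-point scale.

    The linear decay (2 points lost per step of divergence) is an
    arbitrary choice for this sketch.
    """
    distance = abs(student_score - instructor_score)
    return max(0, max_points - 2 * distance)

print(proximity_points(3, 3))  # exact match: full 10 points
print(proximity_points(1, 4))  # three steps away: 4 points
```

Whatever the actual rule, the key design point stands: like points in a video game, these scores give in-the-moment feedback but carry no external weight.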

Overall, I’ve been pleased with the results of this way of handling mathematical argumentation in a MOOC. But it is not without difficulties. I’ll say more in my next post, where I will describe some of the observations I have made so far.

Stay tuned…