
Do all Americans have the same age?

With a special Bonus Feature: A correct proof of Fermat’s Last Theorem that fits in a Tweet.

When the students in my Introduction to Mathematical Thinking MOOC encounter a difficulty with an assignment problem, many of them take to the course Discussion Forum to discuss it. By far the longest single thread in the course was for Problem Set 6, Question 5, a couple of weeks ago; it rapidly grew to 193 original student posts and garnered 1,051 views.

The mathematical topic was proofs by mathematical induction. I had given an example in the video-lecture, and then presented the students with a number of purported induction proofs to evaluate according to the course rubric. (See the previous post in this blog for background on the course structure and its rationale, together with a link to the rubric.)

PS6, Q5 presented them with a purported induction proof that in any finite group of Americans, everyone has the same age (and hence all Americans have the same age). Clearly, this is a ludicrously false claim.

The argument I gave in support of the statement was 19 lines long. Each line comprised a single, fairly simple statement. The lines were numbered. The students’ task was to locate the first line where the proof broke down.

The question had a clear and unambiguous correct answer. The logical chain held up for a certain number of steps, and then the logic failed. But I had constructed the argument with the deliberate intent of making the identification of that failure line a tricky task. (You will find variants of this problem all over the Web. I made it particularly fiendish.)
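
For readers who have not met the puzzle, the generic Web variant runs roughly as follows. (This is the standard version, not Devlin's fiendish 19-line rendition, which is not reproduced here.)

```latex
\textbf{Claim.} In any finite group of $n$ Americans, everyone has the same age.

\textbf{``Proof'' by induction on $n$.} For $n = 1$, a lone American
trivially has the same age as herself. Assume the claim holds for $n$, and
take any $n+1$ Americans $a_1, \dots, a_{n+1}$. By the induction hypothesis,
$a_1, \dots, a_n$ all share one age, and so do $a_2, \dots, a_{n+1}$.
Since $a_2$ lies in both groups, all $n+1$ share the same age. $\square$

% The flaw in this generic version: the inductive step silently assumes
% the two groups overlap, which fails in the passage from $n = 1$ to
% $n = 2$, where $\{a_1\}$ and $\{a_2\}$ have no common member.
```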

And fiendish is how the students found it. In fact, only 1 in 5 (exactly 20%) got it right. One other (incorrect) line was chosen by slightly more students (23%), while the remaining selections ranged widely over the argument. Indeed, only two of the 19 lines were chosen by no one.

Many interesting points were raised and debated – in many cases in heated fashion – in the ensuing forum discussion. For an online course focused on group discussion, this was easily one of the most successful problems I gave them, with learning taking place on many levels.

One of the meta lessons I wanted this particular exercise to provide was the realization that there is a lot more to proofs than whether they are right or wrong. (See the companion post to this in my blog for a lot more on what role proofs play in mathematics.) The argument I had constructed was, apart from one subtly positioned logical slip, entirely correct. 18 of the 19 lines are fine. Yet the claim purportedly being proved is so absurd that, in a very real sense, the entire argument must be nonsense from the get-go. And so it is.

The widespread belief that proofs are primarily about right and wrong is the argumentation analog of the equally widely held belief that mathematics is about “answer getting” that I discussed in my recent post on Devlin’s Angle for the Mathematical Association of America. (Yes, that makes three Devlin blogs. Everybody has a blog these days. If you want to stand out from the crowd, you need two or more.)

Both beliefs – math is all about answer getting and proofs are all about truth – are, I believe, a consequence of the way mathematics and proofs are presented in our K-12 system. What is taught is so unrepresentative of mathematics as practiced by professional mathematicians that there surely has to be an explanation.

Presumably, the perception that mathematics is about answer getting came about in the days before we had calculators and computers, when (accurate) answer getting was an important part of a useful mathematics education. Its continued survival well into the digital age can probably be ascribed to systemic inertia (of which there is no lack in the world of education), with the additional incentive that right/wrong questions are extremely easy to grade (by machine, if you are an administrator who prefers to buy equipment rather than pay teachers)!

In contrast, evaluating mathematical thinking and problem solving is much more difficult and requires a lot of time on the part of a skilled teacher.

Similarly, for the simple kinds of proofs encountered in high school, determining whether an argument is correct or not is usually easy, but evaluating it as a proof is much more difficult and requires a lot more skill and experience – as the students in my MOOC have been discovering to their continued great frustration.

The idea that proofs are primarily about truth and correctness is very ingrained. When presented with an argument that is extremely well crafted but has an obvious flaw (so this clearly does not include my Americans’ age example), many students find it hard to evaluate the overall structure of the argument. Yet proofs are all about structure. As I keep emphasizing, to my MOOC students and anyone else who is willing to listen, in effect, proofs are stories mathematicians tell to convince the intended recipient that a certain statement is true.

If you forget that, and focus entirely, or even almost entirely, on logical validity, you end up with absurdities like my example of a logically correct proof of Fermat’s Last Theorem so small it will fit into a Tweet, let alone the margin of a book:


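The tweet itself appeared as an embedded image. Its logical skeleton – a reconstruction consistent with the description that follows, not necessarily the exact wording – runs like this:

```latex
% A reconstruction of the tweet-length argument (not the exact wording).
Suppose $x^n + y^n = z^n$ for positive integers $x, y, z$ and $n > 2$.
Then the associated Frey elliptic curve is semistable but, by Ribet's
theorem, not modular. Yet Wiles and Taylor--Wiles proved that every
semistable elliptic curve over $\mathbb{Q}$ is modular. Contradiction.
Hence no such $x, y, z, n$ exist. $\blacksquare$
```
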
Thanks to some work by Andrew Wiles and Richard Taylor, that tweeted argument is logically correct. Every statement follows logically from the preceding part of the argument. If you want to fault it, you have to examine the structure, pointing out that there are some steps missed out that the intended reader may not be able to reconstruct, especially as there are no reasons given. (See here and here for the missing bits.)

The point is not that logical correctness is unimportant. It's that its importance arises only in the context of the many other features proofs need to have in order to function as intended.

What features? Well, for starters, how about the features of proofs I list in the rubric for my MOOC?

I’ll tell you one thing. Andrew Wiles would not have had his paper accepted for publication if he had not addressed all the points on that rubric!

No, Wiles did not take my course before proving his famous result. The flow is the other way round. I formulated the rubric to try to identify some of the factors professional mathematicians like Wiles make tacit use of all the time when writing up proofs for publication. You would not believe the objections many people have to a rubric that tries to make that skill set available.

And I’m not talking about the strange folks who post “it’s the end of civilized life as we know it” commentaries on the Drexel Math Forum (cc-ing me directly, because they suspect, rightly, that I don’t frequent the site). Many of the good folks who voluntarily spend ten weeks struggling through my MOOC object as well. And not a few of them indicate in Forum posts where they learned to put so much emphasis on logical correctness. A fictional composite of a fair number of posts I’ve seen over the five runs of my MOOC runs thus: “When I was at university, if there was a logical error in my proof, the professor would award zero points.”

As a mathematician who knows how f-ing hard it can be to prove an original result, reading those kinds of comments fills me with more dismay than you can possibly imagine.

To end on a positive note, at least you have now seen a concise but correct proof of Fermat’s Last Theorem.

How is it going this time?

My Mathematical Thinking MOOC is now starting its ninth week out of a possible ten. (The last two weeks are optional, for those wanting to get more heavily involved in the mathematics.)

At the start of the week, registrations were at 38,221, of whom 24,342 had visited the site at least once, with 2,818 logging on in the previous week. But none of those numbers is significant – by which I mean significant in terms of the course I am offering. (People drop in on MOOCs for a variety of reasons besides taking the course.)

The figure of most interest to me is the number of students who completed and submitted the weekly Problem Set. To my mind, those are the real course students. As of last week, they numbered 1,013, and all of them will almost certainly complete the course. That is a big class. The undergraduate class I taught at Princeton this past spring (using my MOOC as one of several resources) had just 9 students.

My MOOC has two main themes: understanding how mathematicians abstract formal counterparts to everyday notions, and how they make use of those abstractions to extend our cognitive understanding of our world.

For much of the time the focus is on language, since that is the mechanism used to formulate and define abstract concepts and prove results about them.

The heavy focus on language and its use in reasoning gives the course appeal to two different kinds of students: those looking to investigate some issues of language use and sharpen their reasoning skills, and those wanting to develop their analytic problem solving skills for mathematics, science, or engineering. (The latter are the ones who typically do the optional final two weeks of the course.)

The pedagogy underlying the course is Inquiry-Based Learning.

To make that approach work in a MOOC, where many students have no opportunity to interact directly with a mathematics expert, I have to design the course in a way that encourages interaction with other students, either on the course Discussion Forum on the course website or using social media or local meetings.

Early in the course, I identify a few students whose Forum posts indicate good metacognitive skills and appoint them “Community Teaching Assistants”. A badge against their name then tells other students that it is worthwhile paying attention to their posts. The CTAs (there are currently thirteen of them) and I also have a back-channel discussion forum, where we discuss any problematic issues before posting on the public channel.

It seems to work acceptably well. To date, there have been over 3,700 original posts (from 957 students) and 3,639 response comments on the course Discussion Forum.

Since the only practical form of regular performance evaluation in a MOOC involves machine grading – which boils down to some form of multiple choice questions – it’s not possible to ask students to construct mathematical proofs. The process is far too creative.

Instead, I ask them to evaluate proofs (more precisely, purported proofs). To help them do this, I provide a five-point rubric that requires them to view each argument from different perspectives, assigning a “grade” on a five-point numerical scale. See here for the current version of the evaluation rubric.

Notice that the rubric has a sixth category, where they have to summarize their five individual-category evaluations into a single, overall “grade” on the same five-point scale. How they perform the aggregation is up to them. The overall goal is to help the students come to appreciate the different features of proofs as used in present-day mathematics: first look at the proof from five different perspectives, then integrate those assessments into a single evaluation.
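
The course deliberately leaves the aggregation method to the student, but to make the step concrete, here is one simple scheme a student might adopt: the rounded mean of the five category scores. (The 0-4 scale is an assumption for illustration; the post specifies only that the scale has five points.)

```python
def aggregate(category_scores):
    """One possible aggregation of the five rubric category scores into a
    single overall grade: the rounded mean. Each score is assumed to lie
    on a 0-4 scale (an illustrative assumption, not the course's rule)."""
    assert len(category_scores) == 5, "the rubric has five categories"
    return round(sum(category_scores) / len(category_scores))
```

Other students might instead take the minimum score, or weight logical correctness more heavily; the point of leaving the choice open is that the weighing itself is part of learning to judge proofs.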

After the students have completed an evaluation of a purported proof, their (numerical) evaluations are machine graded (more about this in a moment), after which they view a video of me evaluating the same proof so they can compare their assessment to that of one expert.

The goal in comparing their evaluation to mine is not to learn to assign numerical evaluation marks the way I do. For one thing, evaluation of proofs is a very subjective, holistic thing. For another, having been evaluating proofs by both students and experts for many decades, I have achieved a level of expertise that no beginner could hope to match. Moreover, I almost never evaluate using a rubric.

Rather, the point of the exercise is to help the students come to understand what makes an argument (1) a proof, and (2) a good proof, by examining it from different perspectives. (For a discussion of the approach to proofs I take, see my most recent post on my other blog.)

To facilitate this, the entire process is set up as a game with rules. (Of course, that is true for any organized educational process, but in the case of my MOOC the course design is strongly influenced by video games – see many of the previous posts in that blog for more on game-based learning, starting here.)

In particular, the points they are awarded (by machine grading) for how close they get to my numerical proof-evaluation score are, like all the points the Coursera platform gives out in my course, very much like the points awarded in a typical video game. They are important in the moment, but have no external significance. In particular, success in the course and the award of a certificate does not depend on a student’s points total. My course offers a learning experience, not a paper qualification. (The certificate attests that they had that experience.)
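
The post does not spell out how the platform converts closeness to my scores into points, so the following is a hypothetical sketch of one such closeness-based scheme, not the actual Coursera implementation.

```python
def closeness_points(student_scores, instructor_scores):
    """Hypothetical closeness-based scoring for a proof evaluation
    (an illustrative scheme, not Coursera's actual one): for each of the
    six rubric entries (five categories plus the overall grade), exact
    agreement with the instructor's score earns 2 points, a difference
    of one earns 1 point, and anything further apart earns 0."""
    points = 0
    for s, i in zip(student_scores, instructor_scores):
        diff = abs(s - i)
        if diff == 0:
            points += 2
        elif diff == 1:
            points += 1
    return points
```

Whatever the exact formula, the video-game analogy holds: the number produced matters in the moment as feedback, but carries no weight in the final certificate.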

Overall, I’ve been pleased with the results of this way to handle mathematical argumentation in a MOOC. But it is not without difficulties. I’ll say more in my next post, where I will describe some of the observations I have made so far.

Stay tuned…


I'm Dr. Keith Devlin, a mathematician at Stanford University. I gave my first free, open, online math course in fall 2012, and have been offering it twice a year since then. This blog chronicles my experiences as they happen.
