I’ve recently gotten involved in a number of online “debates” over the mathematical proof that .999… = 1. On the one hand, I don’t much see the point in debating this. People who refuse to believe that it is true typically seem to not understand basic mathematical concepts, and, on top of that, harbor a strangely deep mistrust of mathematicians. (Really, you don’t think any mathematician ever has thought of your very clever objection that it is just “really really close” to 1?)
But at the same time, I think that some objections stem from a genuine misunderstanding of how decimal notation works (and, let’s admit it, the tricky ideas of infinity and forever). And that’s worth at least some effort to explain.
The proof that is most often shared goes like this:
(1) Let x = .999…
(2) Then 10x = 9.999…
(3) Then 10x – x = 9.999… – .999…
(4) Then 9x = 9
(5) Then x = 9/9 = 1
(6) Since x = .999… and x = 1, .999… = 1
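The algebra above can be mirrored exactly for finite truncations of .999… using Python’s `fractions` module (a sketch of my own, not part of the proof): if x_n is 0.999…9 with n nines, then 10x_n − x_n = 9x_n = 9 − 9·10⁻ⁿ, and the leftover 9·10⁻ⁿ shrinks toward zero as n grows. The infinite case is the limit, where the leftover vanishes entirely.

```python
from fractions import Fraction

def nines(n):
    """0.999...9 with n nines, as an exact fraction: the sum of 9/10**k."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    x = nines(n)
    # For a finite truncation, 10x - x = 9x = 9 - 9/10**n exactly;
    # the gap 9/10**n shrinks toward 0, and the infinite case gives 9.
    assert 10 * x - x == 9 * x == 9 - Fraction(9, 10**n)
    print(n, float(9 - 9 * x))  # the leftover gap, shrinking toward 0
```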
Line (1) rarely seems to cause any problems. All we’re doing is letting an arbitrary variable, x, be .999… . The second line occasionally causes problems, with the objection that we’re multiplying by 10 on the left, but adding 9 on the right. This objection misunderstands decimal notation: multiplying by ten moves the decimal point over one place. (You can try this out on any calculator you like: do .99*10 and you will get 9.9. If you disbelieve the calculator, consider that 1 = 1.000. Then 1.000*10 = 10.00, and 10.00*10 = 100.0, and 100.0*10 = 1000. You can see how multiplying by ten simply shifts the decimal point one place over.)
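The shift-the-decimal-point fact is easy to check exactly with Python’s `decimal` module, which avoids binary floating-point rounding (a quick sketch):

```python
from decimal import Decimal

# Multiplying by ten shifts the decimal point one place to the right.
assert Decimal("0.99") * 10 == Decimal("9.9")
assert Decimal("1.000") * 10 == Decimal("10.00")
assert Decimal("10.00") * 10 == Decimal("100.0")
assert Decimal("100.0") * 10 == Decimal("1000")
print(Decimal("0.99") * 10)  # 9.90
```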
The objection that usually follows is that the .999… after the 9 now has one less 9 than the original .999… did. But this ignores that the 9’s after the decimal go on forever: removing one doesn’t stop them from going on forever. After all, suppose it did. Then if we divided by ten and moved that 9 back to the other side of the decimal, we’d have a finite list of 9’s plus another 9. But that’s still finite, when restoring the original should give us a list that goes on forever again. So despite multiplying by ten, we still have an infinite, unending list of 9’s after the decimal.
The real objections usually turn up with (3) and (4). Here we subtract x, or .999… (which, you’ll recall, is x) from both sides. The idea that 10x – x = 9x is usually uncontested, but the fact that 9.999… – .999… = 9 sometimes causes problems. But if you think about what decimal notation really means, it shouldn’t. Decimal notation is just shorthand for a sum. 9.999… is really shorthand for an infinite sum: 9 + 9/10 + 9/100 + 9/1000 + … And .999… is likewise shorthand for an infinite sum: 0 + 9/10 + 9/100 + 9/1000 + …
Notice that .999… expanded is exactly the same as 9.999… expanded, just without the leading 9. Suppose we subtracted these expanded sums from each other, instead of the decimal shorthand. Then we’d have:
(9 + 9/10 + 9/100 + …) – (9/10 + 9/100 + …)
For each term after the 9 in 9.999… we have its negative; that is, we subtract it away and it cancels out. We are left, then, with just 9.
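The term-by-term cancellation can be mirrored with exact fractions for any finite number of terms (a sketch using the standard library; the infinite case is the limit of doing this for every term):

```python
from fractions import Fraction

n = 12  # any finite cutoff; each 9/10**k term cancels its negative
terms = [Fraction(9, 10**k) for k in range(1, n + 1)]
# (9 + 9/10 + 9/100 + ...) - (9/10 + 9/100 + ...): every paired term cancels
difference = (9 + sum(terms)) - sum(terms)
assert difference == 9
print(difference)  # 9
```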
This leads us, finally, to the statement in (4) that 9x = 9. I hope it is uncontroversial that it follows from (4) that x = 1.
Since two things equal to the same thing are equal to each other, and x = 1 while x = .999…, we have 1 = .999… I’ve seen objections here that we just “assume” that x = .999… and that this creates a circularity. I honestly don’t understand that objection well enough to refute it, except to say that, since we originally chose x arbitrarily to equal .999…, and x has no value of its own, we’ve really proven that *anything* equal to .999… is also equal to 1. So start with any number or representation equal to .999… and it will, necessarily, be equal to 1.
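For the “really really close” crowd, it may help to see that the finite truncations of .999… differ from 1 by exactly 10⁻ⁿ, and that no positive gap, however tiny, survives every truncation (a sketch of my own, again using exact fractions):

```python
from fractions import Fraction

for n in (1, 3, 10, 50):
    x_n = 1 - Fraction(1, 10**n)   # 0.999...9 with n nines
    gap = 1 - x_n                  # exactly 10**-n
    assert gap == Fraction(1, 10**n)

# Any proposed positive gap is eventually beaten by a longer truncation,
# so the distance between .999... (the limit) and 1 must be 0.
eps = Fraction(1, 10**30)
assert Fraction(1, 10**31) < eps
```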
Commenter Thaumas Themelios posted another excellent exposition; read the whole comment below:
One variation I’ve used is to ask the person “What is 1/3 in decimal notation?” They say 0.333…. Next, “If 1/3 is one third, i.e. 1 thing divided into 3 parts, and taking only 1 of those parts as being ‘a third’, then what do you get if you put the three thirds back together again?” Obviously, 1. “So, 1/3 multiplied by 3 is equal to 1, correct?” Yes.
“Okay, now take 1/3 written as a decimal and multiply it by 3, so 3 x 0.3333…. = 0.9999…., correct?” Yep. “So, in the first case, you took 1/3, and multiplied it by 3 and got 1. And in the second, you took the same 1/3 and multiplied by 3 and got 0.9999… If you accept the first is true, that 1/3 in decimal is *equal* to 0.3333…, then you must also logically accept that the second is true, that 1 is equal to 0.9999….”
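The 1/3 route can be checked exactly with rationals, and the truncated decimals track it digit for digit (a sketch; the 30-digit precision is an arbitrary choice for illustration):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

assert Fraction(1, 3) * 3 == 1  # three exact thirds make one

getcontext().prec = 30
third = Decimal(1) / Decimal(3)  # 0.333...3, truncated to 30 digits
print(third * 3)                 # 0.999...9: the truncation made visible
```

The point the commenter makes survives the truncation: every finite cutoff of 0.333… times 3 gives the matching cutoff of 0.999…, and in the limit the two arguments agree that 0.999… = 1.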