Non-mathematicians are quick to balk when told that 0.999… is the same as 1, and mathematicians are equally quick to laugh at them and mock them for it, but the skeptics really do have a point. Real numbers are used constantly but never really defined, at least not until the 3rd or 4th year of a four-year pure math program, if then. Left with no true definition of the real numbers, and after a lifetime of non-mathematician’s math education, it’s entirely reasonable that a person will implicitly identify the real numbers with formal infinite strings of decimal digits, carrying the obvious order and arithmetic. We cannot blame them when all their lives we’ve sold them one thing and then, suddenly, in a 2nd-semester calculus course, we pull a bait-and-switch and declare 0.9999…=1. Sure, we never explicitly said that the reals are their decimal strings, but we never said what exactly they were in any other way either. The real question is not so much “why is 0.9999…=1?”, but rather, “why do mathematicians pick the definition they do?”

There *is* a set of number-like things, call them “pseudo-reals”, with the property that 0.999…≠1. Namely, the set of formal strings satisfying certain rules. A pseudo-real must contain a decimal point, and it must contain infinitely many digits to the right of that point. It must contain at least one digit to the left, and if it contains more than one digit to the left, the leftmost digit must be nonzero. It may or may not be preceded by a minus sign. The “pseudo-zero” is the string 0.000…, the “pseudo-one” is the string 1.000…, and so on. Now, left in a wilderness without any formal definition to the contrary, I believe most people (implicitly) take these pseudo-reals as their model of the reals. After all, we never give them any alternative! So people are entirely justified, after K-12 math, in thinking that 0.999…≠1. But this system of formal decimal strings is *not* the real numbers.
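To make the rules concrete, here is a minimal Python sketch of a pseudo-real, restricted to the eventually periodic case (the names `PseudoReal` and `digit`, and the encoding as integer digits plus a finite prefix and a repeating block, are my own; general pseudo-reals allow arbitrary non-periodic digit sequences, which no finite encoding can capture):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PseudoReal:
    """An eventually periodic pseudo-real: sign * int_part . prefix (repeat)*"""
    sign: int        # +1 or -1
    int_part: str    # digits left of the point; no leading zero unless "0"
    prefix: str      # finite non-repeating digits after the point
    repeat: str      # repeating block; nonempty, so there are infinitely
                     # many digits to the right of the point

    def __post_init__(self):
        assert self.sign in (1, -1)
        assert self.int_part.isdigit()
        assert self.int_part == "0" or not self.int_part.startswith("0")
        assert self.prefix == "" or self.prefix.isdigit()
        assert self.repeat.isdigit()

    def digit(self, n):
        """The n-th digit after the decimal point (n = 1, 2, 3, ...)."""
        if n <= len(self.prefix):
            return int(self.prefix[n - 1])
        return int(self.repeat[(n - 1 - len(self.prefix)) % len(self.repeat)])

one = PseudoReal(1, "1", "", "0")     # the string "1.000..."
nines = PseudoReal(1, "0", "", "9")   # the string "0.999..."
print(one == nines)                   # False: they are distinct formal strings
```

As strings, `one` and `nines` really are different objects, which is exactly the point of the pseudo-reals.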

The reason we don’t use formal strings of digits as our definition of the reals is that these strings have the very bad property that two non-equal numbers can have difference zero. For example, the difference 1-0.999… is zero: if you formally carry out the subtraction digit by digit, with the borrow propagating forever down the string, every digit of the difference comes out zero. This is a rather nasty property, and it prevents all sorts of nice behavior that we’d like our number system to have. This is why mathematicians don’t take these formal strings as their definition of the reals, and thus why 0.999… can end up being equal to 1.
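One way to watch the digits of 1-0.999… all coming out zero is to truncate both strings after n digits and subtract exactly: the difference is 10⁻ⁿ, so any fixed digit position reads 0 once n is large enough. A small sketch using exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction

def truncated_difference(n):
    """1.00...0 minus 0.99...9, each string cut off after n decimal digits."""
    one = Fraction(1)
    nines = Fraction(10**n - 1, 10**n)   # 0.99...9 with n nines
    return one - nines

# The difference is always 1/10**n: its single nonzero digit is pushed one
# place further right each time we keep one more digit of 0.999...
for n in (1, 5, 50):
    assert truncated_difference(n) == Fraction(1, 10**n)
```

No fixed decimal place ever holds a nonzero digit in the limit, which is the formal-subtraction claim in the paragraph above.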

So what *are* the real numbers? There are many different ways to define them. There are Dedekind cuts, and there are equivalence classes of Cauchy sequences, but both are somewhat unnatural. One way of defining the reals is to take the pseudo-reals defined above, those formal strings of decimal digits, and forcefully declare that the ones we want equal are equal: thus, we declare that 0.9999… is the same as 1.000…, that -783.65999… is equal to -783.66000…, and so on. Again, the reason we do this is that we abhor the situation of two different numbers having zero difference, so we basically cheat: whenever two strings would have zero difference, we declare them equal.
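The declaration can be phrased as rewriting: every string whose repeating tail is all 9s gets rewritten as its “rounded-up” partner. A hedged Python sketch for unsigned, eventually periodic strings, encoded as integer digits, a finite prefix after the point, and a repeating block (the function name `canonical` and the encoding are mine):

```python
def canonical(int_part, prefix, repeat):
    """Rewrite a string whose repeating block is all 9s as its partner:
    add 1 in the last non-repeating position (carrying leftward as needed)
    and repeat 0 instead.  Other strings are left unchanged."""
    if set(repeat) != {"9"}:
        return int_part, prefix, repeat
    bumped = str(int(int_part + prefix) + 1)
    k = len(prefix)
    bumped = bumped.rjust(k + 1, "0")    # keep at least one integer digit
    head, tail = (bumped[:-k], bumped[-k:]) if k else (bumped, "")
    return head, tail, "0"

print(canonical("0", "", "9"))       # 0.999...     -> ('1', '', '0')
print(canonical("783", "65", "9"))   # 783.65999... -> ('783', '66', '0')
print(canonical("0", "55", "5"))     # 0.555...     -> unchanged
```

Two strings are declared equal exactly when `canonical` sends them to the same output, which is a compact way of stating the cheat.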

The construction is made formal using equivalence classes. Basically, the string “1.0000…” is not a number; instead, the set {“1.0000…”, “0.9999…”} is a number, and when we write “1.0000…” (*or* when we write “0.9999…”), we really mean this set. Some numbers, then, are sets of two strings, and others, like {“0.555…”}, are sets of one string, because there is no other way to write 0.555…. Formally defining the arithmetic and the order on these sets is a real pain. Most mathematicians brush everything under the rug and act as if we were using the more intuitive pseudo-reals, except that whenever something like 0.999… pops up, we remind ourselves that it’s the same as 1.
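One shortcut for comparing these sets (a sketch, and admittedly itself a cheat, since it routes through the rationals instead of working on the strings directly) is to assign each eventually periodic string its exact rational value; two strings then land in the same equivalence class precisely when their values agree. The function name `value` and the encoding are mine:

```python
from fractions import Fraction

def value(int_part, prefix, repeat):
    """Exact value of the eventually periodic string int_part.prefix(repeat),
    e.g. value("0", "", "9") is the value of 0.999..."""
    p, r = len(prefix), len(repeat)
    head = Fraction(int(int_part + prefix), 10**p)
    tail = Fraction(int(repeat), 10**p * (10**r - 1))
    return head + tail

# The two spellings in each "declared equal" pair get the same value:
assert value("1", "", "0") == value("0", "", "9")    # 1.000... = 0.999...
assert value("783", "66", "0") == value("783", "65", "9")
assert value("0", "", "5") == Fraction(5, 9)         # 0.555... = 5/9
```

Equality, order, and arithmetic on the classes can then be borrowed from the rationals, which is one reason mathematicians feel entitled to brush the string-level definitions under the rug.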

The stock answer to the question “Why is 0.999… equal to 1?” usually involves the fact that the infinite sum 9/10+9/100+… converges to 1. This is entirely unsatisfactory. The formal definition of what it means for an infinite sum to converge involves differences: namely, for every ε>0, there exists an N so big that whenever M>N, if we add up the first M terms of the sum 9/10+9/100+…, the result differs from 1 by less than ε. But the whole problem with the pseudo-real numbers, that is, with formal strings of digits, is that differences misbehave. Using the official definition of convergence while working with pseudo-reals, we get the unsavory fact that 9/10+9/100+… converges to *both* 1 and 0.999…, even though these are different (as pseudo-reals). Thus, if a student is (implicitly or explicitly) using formal strings of digits as their model of the reals, the infinite-sum explanation only creates further confusion.
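The ε–N bookkeeping itself is easy to verify with exact arithmetic; the trouble is only in what “differs by less than ε” means in the pseudo-reals. A quick Python sketch of the convergence computation (the function name is mine), showing that for ε = 10⁻⁶ the choice N = 6 works:

```python
from fractions import Fraction

def partial_sum(m):
    """S_m = 9/10 + 9/100 + ... + 9/10**m, computed exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, m + 1))

# Each partial sum equals 1 - 10**-m, so for eps = 10**-6 the choice N = 6
# works: every partial sum past the N-th differs from 1 by less than eps.
eps = Fraction(1, 10**6)
assert all(abs(1 - partial_sum(m)) < eps for m in range(7, 30))
```

The same partial sums also sit within ε of 0.999… under digit-wise comparison, which is exactly the two-limits problem described above.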

Another stock answer goes like this. If X=0.999…, then 10X=9.999…, and so 10X-X=9. By the distributive law, (10-1)X=9, so 9X=9. And this means X=1, since 1 is the only number that gives 9 when multiplied by 9. But hold on a second. If we formally use the multiplication algorithm to multiply 9 by 0.999…, the result comes out to 8.999…, which, in the pseudo-reals, is not the same as 9. What went wrong? The culprit is the distributive law. Subtraction (and hence all arithmetic) is so badly behaved in the pseudo-reals that the distributive law breaks. It’s true that 10X-X=9, but this does not imply that (10-1)X=9; instead, (10-1)X=9X=8.999…. If only 8.999… were equal to 9 (as it is in the real numbers), the argument would go through; but in the pseudo-reals it isn’t, and assuming that it is would be circular reasoning.
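The claim that the multiplication algorithm yields 8.999… can be checked by truncation: 9 times 0.99…9 (n nines) is 8.99…91, so as n grows every fixed digit position settles on a digit of 8.999…, never of 9.000…. A small exact-arithmetic sketch (the function name is mine):

```python
from fractions import Fraction

def kth_digit(q, k):
    """k-th decimal digit after the point of a nonnegative rational q."""
    return int(q * 10**k) % 10

n = 20
product = 9 * Fraction(10**n - 1, 10**n)   # 9 times 0.99...9 (20 nines)
assert int(product) == 8                   # integer part is 8, not 9
assert [kth_digit(product, k) for k in range(1, 6)] == [9, 9, 9, 9, 9]
```

The stray 1 in the last place is pushed off to infinity, leaving the string 8.999… as the formal product.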

To summarize: the real question is not *why* 0.9999…=1, but rather, *what on earth* are the real numbers in the first place? There is one tempting answer, according to which 0.9999… is actually not equal to 1. But that answer is nonstandard, because it produces a number system with very badly behaved subtraction. To avoid the problems with bad subtraction, mathematicians basically throw up their hands and *declare* that 0.9999…=1, though they cloak this declaration in fancy-sounding words like “equivalence class” and “equivalence relation” (or they avoid it altogether by using more sophisticated constructions of the reals).

**FURTHER READING**

Plus-or-Minus Square Root

A Truth-Knowledge Paradox

Meaningful Names of Mathematicians