The best math is the kind that makes you do a double-take, when you read a theorem and can’t believe it’s true. That’s how Goodstein Sequences struck me when I was introduced to them. These number-sequences have the property that they grow for a while but eventually stop growing and shrink down to zero. The way they’re defined, at first glance you’d think, “no way, they’ll grow forever”.

## Number Systems with Different Bases

We use decimal, which is the base 10 number system. Other popular systems are binary (base 2) and hexadecimal (base 16). In general, for any base b>1, there’s a base-b number system. You can write every number in “base b representation” as a string of digits where each digit ranges from 0 to b-1 (for example, the decimal digits range from 0 to 9). What’s actually going on is this: when a number n has the representation, say, “5761” in base 10, that really means n=5*10^{3}+7*10^{2}+6*10^{1}+1*10^{0}.

For the (weak) Goodstein sequences, we’ll start with a number written in one base, and then increase the base, *without* changing the digits. For example, start with the binary number 1101_{2}, which is 2^{3}+2^{2}+2^{0}=13. Now, leaving the digits the same, 1101, change the base to 3. The number becomes 1101_{3}, which is 3^{3}+3^{2}+3^{0}=37. As you can see, performing this operation usually makes the number increase. Intuitively, if you don’t pick some stupid number like 1 at the beginning, one expects that “raising the base” should raise the number rather dramatically. Raise the base repeatedly, and the number should explode toward infinity. But what if every time we raise the base, we subtract 1? At first glance, subtracting 1 should be a drop in the bucket, compared to the huge growth which comes from changing the base. But things are not how they seem…
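In code, “raising the base” is just reading off the digits in one base and re-evaluating them in another. Here’s a minimal Python sketch (the helper names are mine, not standard):

```python
def digits(n, b):
    """Return the base-b digits of n, most significant first."""
    ds = []
    while n > 0:
        ds.append(n % b)
        n //= b
    return ds[::-1] or [0]

def raise_base(n, old_base, new_base):
    """Reinterpret the base-old_base digits of n as base-new_base digits."""
    result = 0
    for d in digits(n, old_base):
        result = result * new_base + d
    return result

print(raise_base(13, 2, 3))  # 1101 in base 2, reread in base 3: 37
```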

## Weak Goodstein Sequences

The “(weak) Goodstein Sequence starting with n” is defined as follows. The first number in the sequence is n itself. If n=0, the sequence stops there. Otherwise, write n in binary, and then raise the base while keeping the digits the same. Subtract 1 from whatever the result is. This gives you the next term in the sequence. If it’s 0, stop. Otherwise, write this 2nd term in base 3, pump up the base to 4, and subtract 1. That’s the next number. Keep going like this forever (or until you hit 0).
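This recipe is easy to mechanize. Here’s a small Python sketch of the definition (the function name is my own):

```python
import itertools

def weak_goodstein(n):
    """Yield the terms of the weak Goodstein sequence starting at n."""
    base = 2
    yield n
    while n > 0:
        # Read off the base-`base` digits of n, least significant first.
        ds = []
        m = n
        while m > 0:
            ds.append(m % base)
            m //= base
        # Reinterpret the same digits in the next base, then subtract 1.
        base += 1
        n = 0
        for d in reversed(ds):
            n = n * base + d
        n -= 1
        yield n

print(list(weak_goodstein(3)))                         # [3, 3, 3, 2, 1, 0]
print(list(itertools.islice(weak_goodstein(10), 5)))   # [10, 29, 65, 125, 215]
```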

Amazingly, whatever initial n you choose, you’ll always reach 0 eventually.

### Example: n=3

Let’s look at the n=3 weak Goodstein sequence.

- First Term: 3.
  - Writing 3 in binary, we get 11_{2}.
  - Raising the base, we get 11_{3} (which is 3+1=4).
  - Subtracting 1, we get 10_{3} (which is 3).
- Second Term: 3.
  - Writing 3 in base 3 now, we get 10_{3}.
  - Raising the base, we get 10_{4} (which is 4).
  - Subtracting 1, we get 3_{4} (which is 3).
- Third Term: 3, again.
  - Writing 3 in base 4, we get 3_{4}.
  - Raising the base gives 3_{5}. Minus 1 makes 2_{5} (which is 2).
- Fourth Term: 2.
- Similarly, the next terms are 1 and then 0.

The n=3 case turns out to be a little silly. The sequence goes 3,3,3,2,1,0. But surely this is just because we chose the initial n too small. Surely if we took something bigger, like n=10, the sequence would be less well-behaved.

### Example: n=10

- First term: 10.
  - In binary, 10 is 1010_{2}. So we compute 1010_{3}-1=1002_{3}=29.
- Second term: 29.
  - In ternary, 29 is 1002_{3}. Compute 1002_{4}-1=1001_{4}=65.
- Third term: 65.
  - In base 4, 65 is 1001_{4}. Compute 1001_{5}-1=1000_{5}=125.
- Fourth term: 125.
  - 125 is 1000_{5}. Compute 1000_{6}-1=555_{6}=215.
- Fifth term: 215.

So far the sequence shows no sign of slowing down: it goes 10, 29, 65, 125, 215. The next few terms are 284, 363, 452, 551, and 660. To compute the next term after 660, you need some new notation for digits, because digits above 9 start popping up. One solution is to wrap such digits in parentheses, so for example, 54(11)_{12} stands for 5*12^{2}+4*12^{1}+11. Which, incidentally, is 779, the next number in the sequence. Using this digit notation, the sequence continues:

- 54(11)_{12}=779
- 54(10)_{13}=907
- 549_{14}=1045
- 548_{15}=1193
- (And so on, eventually reaching…)
- 540_{23}=2737, which gets followed by
- 53(23)_{24}=2975
- 53(22)_{25}=3222
- (And so on for an awful long while, until…)
- 530_{47}=11186
- 52(47)_{48}=11663
- (And much, much later on…)
- 500_{383}=733445
- 4(383)(383)_{384}=737279
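The parenthesized-digit notation is easy to generate too. Here’s a small Python helper (my own sketch, not from any standard library) that wraps digits above 9 in parentheses:

```python
def show(n, b):
    """Format n in base b, wrapping digits above 9 in parentheses."""
    if n == 0:
        return "0"
    parts = []
    while n > 0:
        d = n % b
        parts.append(str(d) if d <= 9 else "(%d)" % d)
        n //= b
    return "".join(reversed(parts))

print(show(779, 12))      # 54(11)
print(show(737279, 384))  # 4(383)(383)
```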

If you’re only watching the numbers in decimal form, on the right, it seems hopeless that they’ll ever stop growing, and absurd that they’ll ever reach *zero*. But as you follow the structure on the left, you should start to notice what’s going on. Although the base is getting larger and larger, the digits are slowly wrapping down. When we reach a 0 in the 1’s place, since we’re “not in ~~Kansas~~ decimal any more”, subtracting 1 doesn’t put a 9 there; it puts b-1 there, where b is the current base. Thus, it takes a really long time before the digits further to the left ever wrap down. Meanwhile, as the base increases, the numbers on the right merrily increase with no sign of slowing down.

But sooner or later, that 4 in the “100’s place” is going to become a 3, and then even later that will become a 2… and *eventually*, this sequence will reach 0. Though, it’ll sure take a long time.

### Formal Proof using Ordinal Arithmetic

In order to formally prove that the (weak) Goodstein Sequences eventually shrink to 0, one uses what’s called “the well-foundedness of the ordinal ω^{ω}” (ω is the Greek “omega” and ω^{ω} is pronounced “omega to the omega”). ω^{ω} is a set, its elements are ordered, and “ω^{ω} is well-founded” is the statement that there is no sequence of elements of ω^{ω} which decreases forever. If you take any sequence of elements of ω^{ω} and they seem to shrink and shrink, they have to eventually *stop* shrinking. Another way of saying it: every nonempty subset of ω^{ω} has a least element.

So what *are* the elements of ω^{ω}? They can be very loosely thought of as numbers in “base infinity”. Formally, they are (isomorphically, see the “Notes” below) formal strings of the form:

d_{1}ω^{e_{1}}+d_{2}ω^{e_{2}}+…+d_{k}ω^{e_{k}}

where the d_{1}, d_{2}, …, d_{k} and the e_{1}, e_{2}, …, e_{k} are non-negative integers, each d_{i}>0, and e_{1}>e_{2}>…>e_{k}. For example, the following are some sample elements of ω^{ω}:

- 5 (or, formally, 5ω^{0})
- ω (formally 1ω^{1})
- 100ω^{2}+25ω+100 (again, “100” means 100ω^{0})
- 57ω^{83}+41ω^{49}+8729ω^{24}+2ω^{4}
- And so on.

These sums are ordered “alphabetically”: given two elements, first compare the left-most powers of ω; the one with the smaller exponent is smaller. For example, 9999ω^{5} is smaller than 1ω^{6}. If the left-most exponents are the same, compare the left-most coefficients. For example, 5ω^{3} is smaller than 6ω^{3}. If the left-most exponent and coefficient are the *same*, move on to the next term. For example, 3ω^{9}+1ω^{8} is smaller than 3ω^{9}+2ω^{8}. If one of the elements runs out of terms before the other in this way, then it is smaller: e.g., 15ω^{4}+12ω^{3} is smaller than 15ω^{4}+12ω^{3}+8ω^{2}. Of course, if both elements run out of terms at the same time and didn’t differ anywhere, then they were equal all along.
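One way to see this ordering concretely: represent an element of ω^{ω} as a list of (exponent, coefficient) pairs with exponents strictly decreasing and coefficients positive. Then Python’s built-in lexicographic comparison of lists of tuples matches the “alphabetical” order just described, including the runs-out-of-terms rule (a sketch; the encoding is my own choice, not standard):

```python
# An element of ω^ω as a list of (exponent, coefficient) pairs,
# exponents strictly decreasing, coefficients positive.
# Lexicographic list comparison matches the ordering described above;
# a shorter list that is a prefix of a longer one compares as smaller,
# which is exactly the "runs out of terms first" rule.

a = [(5, 9999)]                   # 9999ω^5
b = [(6, 1)]                      # 1ω^6
assert a < b

c = [(9, 3), (8, 1)]              # 3ω^9 + 1ω^8
d = [(9, 3), (8, 2)]              # 3ω^9 + 2ω^8
assert c < d

e = [(4, 15), (3, 12)]            # 15ω^4 + 12ω^3
f = [(4, 15), (3, 12), (2, 8)]    # 15ω^4 + 12ω^3 + 8ω^2
assert e < f
print("all comparisons agree with the ordinal order")
```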

Given a number written in a (finite) base b, you can always map it to an element of ω^{ω} by “replacing the b’s with ω’s”. For example, 101_{2}=2^{2}+2^{0} gets mapped to ω^{2}+ω^{0}, and 8043_{10}=8*10^{3}+4*10^{1}+3 gets mapped to 8ω^{3}+4ω^{1}+3.

The key is this. Given a number in a certain base, if you map it into ω^{ω}, you get the same thing as if you raise the base first. For example, 101_{2} maps to ω^{2}+1, but so do 101_{3} and 101_{4} and even 101_{1000000}. The numbers themselves differ but they map to the same element of ω^{ω}.

But if you subtract 1 from a number before mapping it to ω^{ω}, you get a strictly smaller element. For example, 101_{2} maps to ω^{2}+1, but if we subtract 1 first, we get 100_{2}, which maps to ω^{2}. A *smaller* result.

To a monster living in “ω^{ω} space”, the Goodstein Sequence doesn’t seem to grow at all. It seems to *shrink*. That’s because this monster cannot “see” the bases. But it *can* see the results of all those 1-subtractions! And, because ω^{ω} is well-founded, sequences in ω^{ω} cannot decrease forever. The Goodstein Sequences always decrease in ω^{ω}-land, but they can’t decrease forever, so they must somehow stop– the only way for that to happen is if they reach 0 eventually.
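To make the monster’s-eye view concrete, here’s a Python sketch (my own encoding) of the map into ω^{ω}, again using (exponent, coefficient) pairs with exponents decreasing, so that list comparison agrees with the ordinal order:

```python
def to_ordinal(n, b):
    """Map n, read in base b, to an element of ω^ω: a list of
    (exponent, coefficient) pairs with exponents decreasing."""
    pairs = []
    e = 0
    while n > 0:
        d = n % b
        if d:
            pairs.append((e, d))
        n //= b
        e += 1
    return pairs[::-1]

# 101 read in bases 2, 3, 1000000 all map to the same element, ω^2 + 1:
assert to_ordinal(5, 2) == to_ordinal(10, 3) == to_ordinal(1000000**2 + 1, 1000000)

# Subtracting 1 first gives a strictly smaller element: 100_2 maps to ω^2.
assert to_ordinal(4, 2) < to_ordinal(5, 2)
print("raising the base is invisible; subtracting 1 is a strict decrease")
```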

### A Slightly Stronger Goodstein Sequence

In the standard (weak) Goodstein Sequences, we raised the base by 1 each time. This fact is never really used in the proof. All the proof uses is that the digits are the same before we subtract 1. So you could modify the Goodstein Sequence and increase the bases at any arbitrary speed, and *still* the sequences would go to zero. For example, you could start in base 2, then double the bases every time, going from binary to base 4 to base 8 to base 16. This would cause the initial terms of the sequence to blow up super-exponentially fast— but still the sequences would have to go to zero.
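Here’s that generalization as a Python sketch (the function names are mine): the base schedule is an arbitrary parameter, and the sequence still dies.

```python
def goodstein_variant(n, next_base):
    """Weak Goodstein sequence with an arbitrary (strictly increasing)
    base schedule: next_base maps the current base to the next one."""
    base = 2
    seq = [n]
    while n > 0:
        # Base-`base` digits of n, least significant first.
        ds = []
        m = n
        while m > 0:
            ds.append(m % base)
            m //= base
        base = next_base(base)   # e.g. doubling instead of adding 1
        n = 0
        for d in reversed(ds):
            n = n * base + d
        n -= 1
        seq.append(n)
    return seq

# Doubling the base each step (2, 4, 8, 16, ...): still reaches 0.
print(goodstein_variant(3, lambda b: 2 * b))  # [3, 4, 7, 6, 5, 4, 3, 2, 1, 0]
```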

## Goodstein Sequences

The “weak” Goodstein Sequences I just introduced are a watered down version of the traditional sequences.

To get the “strong” Goodstein Sequences, instead of using base-b representation, one uses a modified version of “Cantor Normal Form”. In short, for “base” b, you start by writing your number as a sum of powers of b times coefficients between 0 and b-1. But then, recursively, you do the same for all the *exponents*– and all *their* exponents, and so on, until eventually you have a “tree” of b’s with 0’s as leaves.

**Example.** Take b=3, n=1210000000000000022_{3}.

- Start by writing n=1*3^{18}+2*3^{17}+1*3^{16}+2*3^{1}+2*3^{0}.
- The exponents are 18, 17, 16, 1, and 0.
- Recursively, convert the exponents (besides 0) in the same way:
  - 18=2*3^{2}
  - 17=1*3^{2}+2*3^{1}+2*3^{0}
  - 16=1*3^{2}+2*3^{1}+1*3^{0}
- The exponents of the exponents are 2, 1, and 0.
- Recursively, convert *them* (besides 0) in the same way:
  - 2=2*3^{0}
  - 1=1*3^{0}
- All the exponents-of-exponents-of-exponents are 0, so the recursion ends.
- The final result is: n=1*3^{2*3^{2*3^{0}}}+2*3^{1*3^{2*3^{0}}+2*3^{1*3^{0}}+2*3^{0}}+1*3^{1*3^{2*3^{0}}+2*3^{1*3^{0}}+1*3^{0}}+2*3^{1*3^{0}}+2*3^{0}
- This is the (modified) base-3 Cantor Normal Form of 1210000000000000022_{3}.

The idea behind the strong Goodstein Sequence is as follows. Start with an initial number n. The first term is n. If n=0, stop. Otherwise, to get the 2nd term, write n in base-2 Cantor Normal Form as above. Then go through and *replace all the 2’s with 3’s*. For small numbers, this is the exact same thing as “raising the base”, but for larger numbers, “raising the Cantor-base” can make them blow up in a cosmically large way. Anyway, after replacing all the 2’s with 3’s, you have a new number; subtract 1 to get the 2nd element of the sequence. If this 2nd element is 0, stop. Otherwise, write this 2nd element in base-3 Cantor Normal Form, replace the 3’s with 4’s, and subtract 1, and so on.
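The “replace all the b’s with (b+1)’s” step can be written as a short recursion: decompose n in base b, and recursively rewrite each exponent the same way. A Python sketch (the function names are my own):

```python
def bump(n, b):
    """Write n in hereditary base-b form and replace every b with b+1."""
    if n == 0:
        return 0
    result = 0
    e = 0
    while n > 0:
        d = n % b
        if d:
            # Each term d * b^e becomes d * (b+1)^bump(e), recursively
            # bumping the exponent's own hereditary representation.
            result += d * (b + 1) ** bump(e, b)
        n //= b
        e += 1
    return result

def strong_goodstein(n, terms):
    """First `terms` entries of the strong Goodstein sequence from n."""
    seq, base = [n], 2
    while len(seq) < terms and n > 0:
        n = bump(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(strong_goodstein(4, 5))  # [4, 26, 41, 60, 83]
```

For instance, 4=2^{2^{1}}; replacing the 2’s with 3’s gives 3^{3}=27, and subtracting 1 gives 26, matching the first step above.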

Again, the resulting sequence is guaranteed to shrink to zero and stop.

Some mathematicians have been known to shake their heads in outright disbelief after being told this fact. It seems like upping the Cantor-base should cause the number to blow up in such a giant way that the 1-subtractions can never hope to catch up. But sure enough, those 1-subtractions eventually kill off the whole sequence.

The formal proof for the stronger Goodstein sequences reaching zero uses the well-foundedness of the ordinal ε_{0} (“epsilon naught”). ε_{0} is defined as the smallest ordinal x such that x=ω^{x}. Just as the elements of ω^{ω} are the formal sums of things of the form ω^{n} where n is in ω (the set of natural numbers), ω^{ω^{ω}} is the set of formal sums of things of the form ω^{n} where n is in ω^{ω}. And ω^{ω^{ω^{ω}}} is the set of sums with summands that look like ω^{n} where n is in ω^{ω^{ω}}. This process continues; ε_{0} is the union of all of them: the union of ω, ω^{ω}, ω^{ω^{ω}}, and so on forever. Some example elements of ε_{0}:

- 1
- 4ω+3
- 8ω^{2ω+1}+73ω^{ω+7}+ω^{ω+2}+5ω^{43}+8ω
- 3ω^{4ω^{2ω+9}+12ω^{ω+1}}+834ω^{52ω}+2ω^{8}+4
- And basically any other “tree” of ω’s you can think of…

Just as numbers in base b map nicely into ω^{ω}, numbers in Cantor Normal Form (base b) map nicely into ε_{0}, in such a way that changing the b doesn’t change the result at all: informally, replace all the b’s with ω’s. To a monster living in “ε_{0} space”, the strong Goodstein Sequences don’t appear to grow at all, but to shrink– and, like every other ordinal, ε_{0} is well-founded, so the Goodstein Sequences are forced to terminate.

## Independence Results

The theorem, “(strong) Goodstein Sequences always shrink to 0”, is very interesting to logicians. The proof uses the fact that ε_{0} is well-founded. And it so happens that ε_{0} is “the ordinal” of certain important proof systems including Peano Arithmetic (PA). What this means is that ε_{0} is the first ordinal x such that PA *cannot* prove x is well-founded. (Every ordinal is well-founded, but there are only countably many possible proofs in PA and uncountably many ordinals, so PA can’t hope to have an individual well-foundedness proof for each of them; ε_{0} is simply the first one for which it has no proof.)

So, if you wanted to prove “Goodstein Sequences go to 0” using only the axioms of Peano Arithmetic, you’d *have* to come up with some clever alternate proof: you can’t use the fact that ε_{0} is well-founded, because that can’t be proven in PA. This suggests that maybe you can’t prove the theorem at all… and sure enough, you can’t. With Peano Arithmetic alone, it’s impossible to prove “Goodstein Sequences go to 0”. (Of course, the proof-of-no-proof is too advanced for this essay…) This is one of the most important and surprising nonprovability results of elementary arithmetic.

### Notes

In order to clarify the connection between numbers in base b and ordinals, I implicitly reversed the order of ordinal multiplication. Technically, the ordinal 2ω is the same as ω, and what *I* have written here as 2ω, is actually usually written ω*2 (in ordinal arithmetic, a*b is *not* usually equal to b*a). But I think you’ll agree that for the sake of demonstrating that Goodstein Sequences die, the modified notation is far clearer.

**FURTHER READING**

Collatz Recursion

Guessability

Busy Beaver Numbers

Conics in the Wild