Is 0.999... = 1?
This is part of a series on common misconceptions.
Is the following statement true or false?
\[\large 0.999 \ldots = 1 \]
Why some people say it's true: It's so very, very close to 1. In fact, it's only \( 0.000\ldots \) away from 1.
Why some people say it's false: It's less than one, since it starts with 0.99 instead of 1.00. So it cannot be equal to 1.
The statement \( 0.999 \ldots = 1 \) is \(\color{green}{\textbf{true}}\).
Intuitive explanation: Visualize a number line. If two real numbers \(x\) and \(y\) on the number line are different, then there must be some space between them. In fact, there would be room to place another real number between them, namely their average \(\frac{x + y}{2}\). Since no number exists between \(0.999\ldots\) and \(1,\) they must be the same number.

Much of the reluctance to accept the equality stems from the perception that a number cannot have two different names: "If a number has two different names, then it cannot truly be the same number." However, this argument doesn't hold water for a number like \(0.6,\) which has \(\frac{3}{5}\) as an alternative name.
Proof 1:
(In this proof, we will assume that the value exists.)
All decimals of finite length, such as \(0.5\) and \(0.123,\) and all repeating decimals, such as \(0.333\ldots\) and \(0.121212\ldots,\) can easily be converted into fractions. This first proof uses the standard technique for converting a repeating decimal into a fraction in order to calculate the fraction that \(0.999\ldots\) is equal to.
\[ \begin{array} {l r l } \text{Let } & A & = 0. 999 \ldots. \\ \text{Multiplying by 10, we get } & 10 A & = 9. 999 \ldots. \\ \text{Subtracting } A, \text{ we get } & 9 A & = 9. \\ \text{Dividing by 9, we get }& A & = 1. \ _\square \end{array} \]
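The same multiply-and-subtract technique works for any repeating block. As a quick sketch using Python's exact `fractions.Fraction` (the helper name `repeating_to_fraction` is ours, not from the article): if \(A\) has a repeating block of length \(k,\) then \((10^k - 1)A\) equals the block itself.

```python
from fractions import Fraction

def repeating_to_fraction(block: str) -> Fraction:
    """Convert 0.(block)(block)... to a fraction using the
    multiply-and-subtract technique from Proof 1."""
    k = len(block)                      # length of the repeating block
    # If A = 0.block block..., then 10^k * A = block.block block...,
    # so (10^k - 1) * A = block, giving A = block / (10^k - 1).
    return Fraction(int(block), 10**k - 1)

print(repeating_to_fraction("9"))    # 0.999...    -> 1
print(repeating_to_fraction("3"))    # 0.333...    -> 1/3
print(repeating_to_fraction("12"))   # 0.121212... -> 4/33
```

Exact rational arithmetic avoids any floating-point rounding, so the result \(1\) for the block "9" is exact, not approximate.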
Proof 2:
Let's evaluate the limit \( A = 0. 999\ldots \). We consider the sum of a geometric progression with infinitely many terms, with initial term \(0.9\) and common ratio \(0.1\). We have
\[ \begin{align} 0.999 \ldots & = 0.9 + 0.09 + 0.009 + \cdots \\ & = 0.9 \times 0.1^0 + 0.9 \times 0.1^1 + 0.9 \times 0.1^2 + \cdots \\ & = \frac{ 0.9 } { 1 - 0.1 } \\ & = 1. \end{align} \]
Since the common ratio satisfies \(|0.1| < 1,\) this geometric series converges, so the limit exists and \( A = 1 \). \(_\square\)
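This convergence can be checked in exact arithmetic. A minimal sketch using Python's `fractions` module (our choice of tool, not part of the original argument): the partial sums fall short of 1 by exactly \(10^{-n}\) after \(n\) terms.

```python
from fractions import Fraction

# Partial sums of 0.9 + 0.09 + 0.009 + ..., in exact arithmetic.
a, r = Fraction(9, 10), Fraction(1, 10)
partial = Fraction(0)
for n in range(1, 8):
    partial += a * r ** (n - 1)
    print(n, partial, 1 - partial)   # the gap to 1 is exactly 10^(-n)

# Closed form for |r| < 1: a / (1 - r) = (9/10) / (9/10) = 1.
assert a / (1 - r) == 1
```

Each extra term divides the gap by 10, which is exactly the sense in which the limit of the partial sums equals 1.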
Rebuttal: \(0.999\ldots\) and \(1\) are not equal because they are not the same decimal. With the exception of trailing 0's, any two decimals that are written differently are different numbers.
Reply: \(0.999\ldots = 1\) is another case in which two decimals that are written differently are, in fact, the same number. The fact that there are two different ways to write 1 as a decimal is a result of the role that infinite sums play in defining what non-terminating decimals mean.
The decimal system is just a shorthand for writing a number as a sum of the powers of 10, each scaled by an integer between 0 and 9 inclusive. For example, \(0.123\) means \(\frac{1}{10} + \frac{2}{100} + \frac{3}{1000}\). Similarly, \(0.999\ldots\) means \(\frac{9}{10} + \frac{9}{100} + \frac{9}{1000}+\cdots\) and the value of this infinite sum is equal to 1.
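This shorthand can be evaluated exactly. A small sketch in Python (the helper `finite_decimal_value` is a name we made up for illustration):

```python
from fractions import Fraction

def finite_decimal_value(digits: str) -> Fraction:
    """Value of 0.d1 d2 ... dn as the sum of d_k / 10^k."""
    return sum(Fraction(int(d), 10 ** k) for k, d in enumerate(digits, start=1))

print(finite_decimal_value("123"))      # 123/1000
print(finite_decimal_value("999999"))   # 999999/1000000, i.e. 1/10^6 short of 1
```

Every finite string of 9's falls short of 1; only the infinite sum, defined as a limit, closes the gap entirely.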
Rebuttal: \(0.999\ldots \) only tends to 1. It is not equal to 1. We only have an approximation.
Reply: It is true that for the sequence \( a_n = 0. \underbrace{99 \ldots 9}_{n \, 9's } \), it is the limit as \(n \rightarrow \infty\) that is equal to 1. However, since \( 0.999 \ldots \) is defined to be that limit, it is defined to equal 1.
Without this definition for infinitely repeating decimals, there would be many numbers, such as \(\frac{1}{3} = 0.333\ldots,\) that we wouldn't be able to write out as decimals, since no finite sum of tenths, hundredths, thousandths, etc. exactly equals \(\frac{1}{3}\).
Rebuttal: Infinite sums don't make any sense. It's not possible to add up infinitely many things, so any infinite sum is only an approximate value, not a real value.
Reply: Without infinite sums, there would be many numbers, such as \(\frac{1}{3} = 0.333\ldots,\) that we wouldn't be able to write out as decimals. The definition of such an infinite sum is rigorous, though perhaps surprising: the sum is defined to equal the limit approached as more and more terms are added, whenever this limit exists.
Not all infinite sums of fractions can be evaluated. For example, \(\frac{1}{2} + \frac{2}{3} + \frac{3}{4} + \frac{4}{5} + \cdots\) does not have a real numerical value, since its terms do not even approach 0. The definition of an infinite sum includes the restriction that a sum has a well-defined value only when, as we add up terms of the series, the running total approaches one specific value. In the case of \(\frac{3}{10} + \frac{3}{100} + \frac{3}{1000}+\cdots,\) that value is \(\frac{1}{3}.\) There would be no other way to define \(\frac{1}{3}\) as a decimal, since \(\frac{1}{3}\) is not equal to any finite sum of tenths, hundredths, thousandths, etc.
In the case of \(\frac{9}{10} + \frac{9}{100} + \frac{9}{1000}+\cdots,\) 1 is the value being approached as more and more of the terms are added together.
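Both partial-sum claims can be verified exactly. A short Python sketch using the standard `fractions` module (our tooling choice, not part of the article's argument):

```python
from fractions import Fraction

# Partial sums of 3/10 + 3/100 + ... and 9/10 + 9/100 + ..., in exact arithmetic.
third, one = Fraction(0), Fraction(0)
for n in range(1, 6):
    third += Fraction(3, 10 ** n)
    one += Fraction(9, 10 ** n)
    # The gaps are exactly 1/(3 * 10^n) and 1/10^n; both shrink toward 0.
    print(n, Fraction(1, 3) - third, 1 - one)
```

In both cases the gap shrinks by a factor of 10 per term, so the partial sums zero in on \(\frac{1}{3}\) and on 1 respectively.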
For a more complete explanation of infinite sums, check out the Infinite Sums wiki page.
Rebuttal: In proof 1, we cannot cancel the trailing 9's because there are infinitely many of them. We will always be left with one 9.
Reply: Cancellation doesn't happen "term by term," where we compare the first 9 in \(10A\) with the first 0 in \(A\). We are looking at the difference of these two numbers taken as a whole. Every decimal place of the difference contains \(9 - 9 = 0,\) so we get a trailing series of 0's, with no "9 at the end."
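Writing out a few aligned decimal places makes this concrete: in every place after the decimal point the subtraction gives \(9 - 9 = 0,\) no matter how far out we look,
\[
\begin{array}{rcl}
10A & = & 9.999999\ldots \\
-\ \ A & = & 0.999999\ldots \\ \hline
9A & = & 9.000000\ldots
\end{array}
\]
so the difference is exactly 9.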
Rebuttal: In proof 2, we can't just add the digits "term by term."
Reply: Adding the terms one at a time is precisely how the limit is evaluated: each partial sum is a finite, ordinary addition, and \(0.999\ldots\) is defined as the limit of these partial sums, which is what the geometric series formula computes.
See Also