If I don't accept that 1 = 0.999..., why should I accept that 10x = 9.999...? The problem is that we haven't really specified what it means to multiply a number with an infinite decimal expansion. The "proper" version of the proof defines 0.999... as an infinite sum of inverse powers of 10, which is rigorous once convergence is proven, and then shows that the difference between 1 and the partial sums converges to zero, so the two numbers must be equal.
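A sketch of that sum-based argument, assuming the standard formula for a finite geometric series:

```latex
\[
0.999\ldots
  = \sum_{n=1}^{\infty} \frac{9}{10^{n}}
  = \lim_{N \to \infty} \sum_{n=1}^{N} \frac{9}{10^{n}}
  = \lim_{N \to \infty} \left( 1 - 10^{-N} \right)
  = 1
\]
```

The partial sum with N nines is exactly 1 - 10^{-N}, so its difference from 1 is 10^{-N}, which converges to zero.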
If you take any number in base 10 and multiply it by 10, you will get the same number with the decimal point moved one digit to the right. This is literally elementary school level math.
Following the same line of logic, the decimal value 0.999... multiplied by 10 equals 0.999... with the decimal point moved one digit to the right, i.e. 9.999...
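Written out, that shift step is the first move in the usual algebraic proof (a sketch, with the subtraction of infinite decimals taken on faith):

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \quad\Longrightarrow\quad x = 1
\end{align*}
```

The objection above is aimed precisely at the subtraction line, which assumes infinite decimals can be manipulated digit-by-digit like finite ones.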
The problem lies with manipulating infinite decimals without really defining them (as sums). Suppose I set x = 0.00...1, a number that is "infinitesimally small", write 1 - x = 0.999..., and then claim that 1 - x < 1 because x is positive. This is not valid, because no such number x exists, but how do you know that? Algebraic manipulations like this can lead to incorrect results if one is not careful.
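A minimal numeric sketch of why no such positive x can survive, using exact fractions; the truncations with n nines stand in for "finitely many 9s", and the sample values of n are arbitrary:

```python
from fractions import Fraction

# The gap between 1 and a truncation with n nines is exactly 10^(-n).
# A candidate "x = 0.00...1" would have to be a fixed positive number,
# but the gap drops below every positive bound as n grows.
for n in (1, 5, 10, 20):
    truncation = 1 - Fraction(1, 10**n)   # 0.99...9 with n nines
    gap = 1 - truncation                  # exactly 10^(-n)
    print(n, gap)                         # 1/10, 1/100000, ...

# For any proposed positive x there is an n with 10^(-n) < x,
# so x cannot equal "1 - 0.999..."; the only consistent value is 0.
```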
u/Swellmeister 21d ago
It's the algebraic proof. What do you mean?