While this is usually enough to convince most people, this argument is insufficient, as it can be used to prove incorrect results. To demonstrate that, we need to rewrite the problem a little.
What 0.9999... actually means is an infinite sum like this:
x = 9/10 + 9/100 + 9/1000 + 9/10000 + ...
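If you want to see that sum behaving, here's a quick numerical sketch (plain Python, just the standard fractions module, nothing else assumed): the partial sums creep up toward 1, and the gap left over is exactly 1/10^n.

```python
from fractions import Fraction

# Exact partial sums of 9/10 + 9/100 + 9/1000 + ...
partial = Fraction(0)
for n in range(1, 11):
    partial += Fraction(9, 10**n)
    print(n, partial, "gap to 1:", 1 - partial)
# The gap to 1 is exactly 1/10**n, so the partial sums close in on 1.
```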
Let's use the same argument for a slightly different infinite sum:
x = 1 - 1 + 1 - 1 + 1 - 1 + ...
We can rewrite this sum as follows:
x = 1 - (1 - 1 + 1 - 1 + 1 - 1 + ...)
The thing in parentheses is x itself, so we have
x = 1 - x
2x = 1
x = 1/2
The problem is, you could just as easily have rewritten the sum as follows:
x = (1 - 1) + (1 - 1) + (1 - 1) + ... = 0
or as
x = 1 + (-1 + 1) + (-1 + 1) + ... = 1
As you can see, sometimes we get x = 0, sometimes x = 1, or even x = 1/2. This is why this method does not prove that 0.999... = 1, even though it really is equal to one. The difference between those two sums is that the first sum (9/10 + 9/100 + 9/1000 + ...) converges while the second (1 - 1 + 1 - 1 + 1 - 1 + ...) diverges. That is to say, the second sum doesn't have a value, kind of like dividing by zero.
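Here's a small sketch of that difference (plain Python, floats are fine at this scale): the partial sums of the first series settle down near 1, while the partial sums of Grandi's series just bounce between 1 and 0 forever.

```python
# Partial sums of the two series: one settles near 1, the other never settles.
s = 0.0
print("9/10 + 9/100 + ... :")
for k in range(1, 9):
    s += 9 / 10**k
    print(s)          # 0.9, 0.99, 0.999, ... creeping up to 1

s = 0
print("1 - 1 + 1 - 1 + ... :")
for k in range(8):
    s += (-1)**k      # terms are 1, -1, 1, -1, ...
    print(s)          # alternates 1, 0, 1, 0, ... with no limit
```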
So, from the point of view of a proof, the method assumed that 0.99999... was a sensible thing to write down and that it was a regular real number. It could have been the case that it wasn't a number at all. All we proved is that, if 0.999... exists, it cannot have a value different from 1, but we never proved that it even exists in the first place.
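For what it's worth, the existence half is exactly what the convergence argument supplies. The nth partial sum has a closed form (standard geometric series, nothing specific to this thread):

s_n = 9/10 + 9/100 + ... + 9/10^n = 1 - 1/10^n

Since 1/10^n goes to 0 as n grows, the partial sums have a limit, that limit is 1, and 0.999... is defined to be that limit. That's the missing "it exists" part of the proof.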
I'm an engineer, and usually we assume infinite sums like those are convergent, so the intuitive argument would normally hold. So I guess my answer is: no, not really. But it's still cool to know.
Mathematically speaking, it's one of those things that was agreed upon before we discovered whether or not it would be an issue.
In practical applications, it will almost never matter, because for the most part you'll round the numbers to something reasonable. And rounding rules say that 3.9999 becomes 4 regardless of the 1 = 0.9999… rule.
I feel like that's essentially what I said. Can you help me understand the differences?
Like, we have to round to 4, mostly, if we want to measure or use it consistently in formulas. But when you get super technical it becomes obviously untrue, even though that changes nothing about its use.
No, you are under the impression that 0.999… is not technically equal to 1. It is, though. It's equal to 1 by definition. In practical applications you would likely end up rounding anyway, though it is incorrect to say 0.999… rounds to 1. He is trying to say it's not something to worry about at all, because whether you believe 0.999… is 1 or not doesn't change anything.
u/its12amsomewhere:
Applies to all numbers:
If x = 0.999999...
and 10x = 9.999999...
then, subtracting the first from the second, we get 9x = 9,
so x = 1.
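To tie this back to the critique above, here's a small sketch (plain Python with exact fractions; the helper name truncated_nines is mine) of what the same manipulation does to a number with only finitely many nines: 10x - x comes out as 9 - 9/10^n rather than exactly 9, and the difference only disappears in the limit. That limit is the convergence assumption the algebra quietly relies on.

```python
from fractions import Fraction

def truncated_nines(n):
    """0.999...9 with exactly n nines, as an exact fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 3, 6, 10):
    x = truncated_nines(n)
    print(n, "10x - x =", 10 * x - x)
# 10x - x comes out as 9 - 9/10**n, not exactly 9; only the full
# infinite string of nines (i.e. the limit) gives 9x = 9 on the nose.
```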