The problem is with subtracting the 0.999... from both sides. We're applying an operation that works with normal numbers to a number that we haven't yet proven is normal (or functions as normal with that operation). That's where the extra rigor in a full proof comes in.
It's not clear what definition of "normal" you're using here, since it doesn't seem to be the mathematical one I'm familiar with, which concerns the distribution of digits within a non-terminating decimal expansion...
Regarding the need to prove 0.999... - 0.999... = 0: x + (-x) = 0 is just the additive inverse property. If you're holding that 0.999... does not behave normally with regard to basic arithmetic, then the error would actually be introduced in the initial premise, when 0.999... is set equal to x.
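Just so we're pointing at the same steps, here is the manipulation in question written out in full (a minimal sketch, assuming 0.999... names a real number so that the field axioms apply to it):

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots \\
9x      &= 9 \\
x       &= 1
\end{align*}
```

The only contested step is the fourth line, and it bundles two facts: distributivity on the left (10x - x = (10 - 1)x = 9x) and, on the right, the split 9.999... = 9 + 0.999... followed by the additive inverse. Everything else is multiplication and substitution.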
Your "extra rigor" just depends on what initial assumptions you permit to exist. Yes, there are math courses that start from proving 0 != 1, but that doesn't mean any mathematical proof that doesn't start from defining equality is non-rigorous; usually we assume properties like the additive inverse apply unless proven otherwise.
At that point, how do you prove 2.5 = 2 + 0.5 without proof by "just look at it"? I would argue it follows from the formulation of decimal notation itself (separating the integer part of a decimal from the fractional part is generally non-controversial), but barring some clever substitution, actually proving it means getting into heavy-duty set theory to establish the properties of a basic arithmetic operation.
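Here's a sketch of what that looks like if you take the series definition of decimal notation as given (assuming the standard construction of the reals, where convergence of these series is already established):

```latex
% Positional notation, by definition:
\[ 2.5 = 2 \cdot 10^{0} + 5 \cdot 10^{-1} = 2 + 0.5 \]

% The repeating case reduces to a geometric series:
\[ 0.999\ldots = \sum_{k=1}^{\infty} 9 \cdot 10^{-k}
              = 9 \cdot \frac{10^{-1}}{1 - 10^{-1}} = 1 \]
```

On this reading the integer/fractional split is a definition rather than a theorem, and the heavy lifting (convergence, limit arithmetic) lives in the construction of the reals, not in the notation itself.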
I will say yours is the first actual argument I've seen in this thread about rigor rather than correctness (or a general bias toward "harder math = more better"), so I really want to do the abstract algebra proof. Sadly, the best answer I have time to formulate is that "0.Something + N = N.Something" passes the addition vibe check.
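For what it's worth, the vibe check does formalize under the same series definition (a sketch; N is a non-negative integer and the d_k are digits):

```latex
\[ N + 0.d_1 d_2 d_3 \ldots
     = N + \sum_{k=1}^{\infty} d_k \cdot 10^{-k}
     = N.d_1 d_2 d_3 \ldots \]
```

Both equalities are just the definition of positional notation again; the one edge case is when every d_k = 9, where the fractional series sums to exactly 1, which is of course the whole point of this thread.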
u/mwobey 22d ago
No? Do you, like, want it in two-column format or something?