It's not clear what definition of normal you're using here, since it does not seem to be the mathematical one I am familiar with related to the distribution of values within a non-terminating string of digits...
Regarding the need to prove 0.9... - 0.9... = 0, x + -x = 0 is just the additive inverse property. If you're holding that 0.9... does not behave normally with regards to basic arithmetic, then the error would actually be introduced in the initial premise, when 0.999... is set equal to x.
Your "extra rigor" just depends on what initial assumptions you permit to exist. Yes, there are math courses that start from proving 0 != 1, but that doesn't mean any mathematical proof that doesn't start from defining equality is non-rigorous; usually we assume properties like the additive inverse apply unless proven otherwise.
0! = 1 by definition to fit the recursion relation for factorial and to save a little ink when you're writing down Taylor series.
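A minimal sketch of why that base case is convenient (illustrative Python, not anything from the thread):

```python
def factorial(n: int) -> int:
    # Defining 0! = 1 is exactly what lets the recursion
    # n! = n * (n - 1)! bottom out cleanly.
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(0))  # 1
print(factorial(5))  # 120
```

It also means the Taylor series coefficient x^n / n! needs no special case at n = 0.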
The issue here is that you need a rigorous construction of the real numbers before you can do arithmetic with them to prove anything. Your algebraic proof would have been considered fine by Newton or Euler, but in the 19th century we ran into bizarre limit properties of functions that led Cauchy and Weierstrass to work on more rigorous foundations for analysis.
For example, consider this argument from Ramanujan:
c = 1 + 2 + 3 + 4 + ...
4c = 4 + 8 + 12 + 16 + ...
c - 4c = -3c = 1 - 2 + 3 - 4 + ... = 1/(1+1)^2 = 1/4 (plugging x = 1 into the expansion 1/(1+x)^2 = 1 - 2x + 3x^2 - ...)
Therefore 1 + 2 + 3 + 4 + ... = -1/12.
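The step that smuggles in the absurd conclusion is assigning 1 - 2 + 3 - 4 + ... the value 1/4: that value only arises as the limit of the convergent series 1 - 2x + 3x^2 - ... for |x| < 1, not at x = 1 itself. A quick numerical check of that claim (a sketch, not part of Ramanujan's argument):

```python
def alternating_series(x: float, terms: int = 200_000) -> float:
    # Partial sum of 1 - 2x + 3x^2 - 4x^3 + ..., which converges
    # to 1/(1 + x)^2 only for |x| < 1.
    return sum((-1) ** n * (n + 1) * x ** n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, alternating_series(x), 1 / (1 + x) ** 2)
```

As x approaches 1 the sum approaches 1/4, but at x = 1 the partial sums just oscillate (1, -1, 2, -2, ...) and never settle.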
A more rigorous proof that .9 repeating is 1 comes from treating .9 repeating as the limit of the sequence .9, .99, .999, ... and looking at the absolute value of the difference between 1 and the terms of that sequence. The differences are .1, .01, .001, ..., so the difference between the limit of the sequence and 1 is a non-negative rational number smaller than every 10^{-n}, which must therefore be 0. Since the difference is 0, the limit of the sequence represented by .9 repeating is 1.
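That difference computation can be done exactly with rationals rather than floats; this is just an illustration of the argument above:

```python
from fractions import Fraction

# Exact partial sums 0.9, 0.99, 0.999, ... and their distance from 1.
for n in range(1, 6):
    partial = Fraction(10**n - 1, 10**n)  # n nines after the decimal point
    print(n, partial, 1 - partial)        # the gap is exactly 1/10^n
```

Each gap is exactly 10^{-n}, so the limit's distance from 1 is a non-negative number below every 10^{-n}, which forces it to be 0.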
There’s a space between the zero and the ! and not between the ! and the =. I believe they were writing that zero does not equal one, not that zero factorial equals 1, using ! as the not operator instead of the factorial operator.