>“OK, fine. I could do the algebra. What I wanted to know was where in bloody hell did the original formula come from? It's not like you can just pull something like that out of the air. The math induction proof of its correctness tells you not a thing about where the original formula came from.”
>COMMENT: Well, this is indeed a philosophical problem, but it should be noted that the problem lies at the root of number theory and the foundations of mathematics. Mathematics represents an axiomatic system(s), and the principle of induction follows from such axioms. The rigor of mathematics is dependent upon such axioms within the context of whatever level of mathematical system one is dealing with. It is well known that mathematics cannot stand on its own within a self-contained purely logical structure.
>So, if you have heartburn about mathematical induction, it seems to me you should also have heartburn about mathematical foundations generally, and not just certain inductive proofs.
__________________________________________________
I don't have a problem with whether math induction proofs work or not, though there are mathematicians who do argue that the foundations of math induction are not nearly as rock-solid as most people think they are. I am not among them. My complaint is that verifying a formula by math induction, while it works like a charm, tells you nothing about where the formula came from, which is the more difficult and interesting problem.
As a pandemic project, I came up with as many essentially different derivations of the formula for the sum of the first n squares as I could. I came up with 5, including the induction proof, which was the least informative of the 5. I found a geometric proof that shows exactly where the formula comes from. It turns out to be the product of two simpler formulas. If I can find a diagram online I will post a link. Trying to explain a geometric proof in a text-only format like RFM is too painful to contemplate.
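For anyone following along, the closed form under discussion is the standard one, 1² + 2² + … + n² = n(n+1)(2n+1)/6, which does factor as a product of two simpler expressions, n(n+1)/2 and (2n+1)/3. A quick numerical sketch checking the formula (the factoring shown in the comment is my reading of "product of two simpler formulas," not necessarily the exact decomposition the geometric proof uses):

```python
# Check the closed form for the sum of the first n squares:
#   1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6
# Note it factors as [n(n+1)/2] * [(2n+1)/3].

def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

def closed_form(n):
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(1, 101):
    assert sum_of_squares(n) == closed_form(n)

print(closed_form(10))  # 385
```

Of course, checking 100 cases (or an induction proof) only confirms the formula; as the post says, it doesn't tell you where it came from.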
-----------------------------
>“Same with 0.9999... = 1. The key is that the number of 9s is infinite. As someone earlier in the thread said, if they are not the same, give me one number that is larger than 0.99999.... and smaller than 1. If you can't do it, they are the same number, even if they look radically different.”
>COMMENT: The key is that the number of 9999s *never* equals 1. Here is a number larger than 0.99999 and smaller than 1: 0.999999. What am I missing here? Obviously, the difference eventually becomes trivial, but it would seem to me that in "pure" mathematics, where "pure” logical rigor is called for, the distinction needs to be acknowledged and remembered.
I said the key is that the number of 9s is infinite, and then you rebut by giving an example of five 9s, and state that six 9s is more accurate. That is true, but neither five nor six is infinity. 0.999… = 1.0.
https://en.wikipedia.org/wiki/0.999...
Quoting from the article: "This number is equal to 1. In other words, "0.999..." is not "almost exactly" or "very, very nearly but not quite" 1 – rather, "0.999..." and "1" represent exactly the same number."
The article is 12,000+ words long, and explains in mind-numbing detail what I put in much abbreviated form in my post. You can discount Wikipedia, or throw around words like ontological all you want. 0.999… = 1.0, as per any freshman calculus course.
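The finite-versus-infinite distinction can be made concrete: with n nines, the gap below 1 is exactly 10⁻ⁿ, and that gap shrinks toward zero as n grows, which is why the infinite string of 9s equals 1 exactly. A small sketch using Python's exact decimal arithmetic:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # plenty of precision for this demo

# With a *finite* number of nines there is always a gap of 10^-n below 1.
# Only in the limit (infinitely many 9s) does the gap vanish.
for n in (1, 5, 10, 30):
    x = Decimal("0." + "9" * n)
    print(n, Decimal(1) - x)  # gap is exactly 10^-n
```

Any candidate "number between 0.999… and 1" would have to be smaller than every one of these gaps, which forces it to be 0.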
Sorry, I'm not budging on this one.
Here's an interesting experiment I do on calculators now and then: ⅓ can't be exactly represented as a decimal fraction unless you allow for an infinite number of 3s (0.333…), nor can it be exactly represented in binary except as an infinitely repeating fraction.
So here's the experiment. Enter 1 in a calculator, divide by 3, hitting = to force the calculation to complete before going on. Then divide the result by 3.
Based on the internal representation of one third, the final answer should have a slight truncation error, and come out as 0.999999999 with however many 9s the calculator can store. On some (most? all?) scientific calculators, if you do that calculation, the answer you get is not 0.999999999, but 1.0.
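The same experiment is easy to run in IEEE 754 double precision, which is what most software (though not necessarily a given calculator, since many use decimal arithmetic internally) uses. Interestingly, in binary doubles the rounding in the multiply step happens to land back on exactly 1.0 on its own:

```python
# The 1/3 experiment in IEEE 754 binary double precision.
# 1/3 is stored with a tiny rounding error, but multiplying the stored
# value by 3 rounds back to exactly 1.0 in this representation.
x = 1 / 3
print(repr(x))       # 0.3333333333333333
print(repr(x * 3))   # 1.0
print(x * 3 == 1.0)  # True
```

A calculator doing decimal arithmetic internally could well behave differently, which is where the software check speculated about below the fold would come in.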
Scientific calculators often have a rational arithmetic feature, where they will represent one third internally as a 1, a 3, and a symbol indicating that 1 is to be divided by 3. That is the only way you can precisely represent one third in a calculator, as a symbolic expression, rather than as a binary or decimal number. I do not use that feature in my experiment.
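That symbolic-pair representation is exactly what exact rational arithmetic libraries do. A minimal illustration using Python's standard `fractions` module, which stores one third as the pair (1, 3) just as the calculator's fraction mode does:

```python
from fractions import Fraction

# Exact rational arithmetic: one third is stored as the integer pair (1, 3),
# so no rounding ever occurs and multiplying back by 3 is exact.
third = Fraction(1, 3)
print(third)           # 1/3
print(third * 3)       # 1
print(third * 3 == 1)  # True
```

As the post says, this mode is deliberately not used in the experiment, since the point is to probe the calculator's positional-number arithmetic.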
So how do some calculators return a 1 for the expression 1 / 3 * 3?
There must be a software check: if the answer is the binary equivalent of 0.9999999999, the software substitutes 1.0 for it. The check may be a bit more complicated than that, but that is basically how it would be done.
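A hypothetical sketch of that kind of display-time check (the tolerance 1e-9 is an illustrative guess, not any real calculator's value, and actual firmware is surely more careful):

```python
# Hypothetical snap-to-round-number display check of the kind speculated
# about above. If a result lies within a tiny tolerance of an integer,
# show the integer instead; otherwise show the raw value.
def display(value, tol=1e-9):
    nearest = round(value)
    if abs(value - nearest) < tol:
        return float(nearest)
    return value

print(display(0.9999999999))  # 1.0
print(display(0.95))          # 0.95
```

This makes the trade-off in the next paragraph concrete: any result that legitimately lands within the tolerance of an integer gets silently "corrected."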
And what if the calculation legitimately should have returned 0.999999999 and the calculator incorrectly changed it to 1? Then it has introduced a slight error. The debate then becomes which error is worse, or more likely: misrepresenting a calculation like 1 / 3 * 3, or mangling the random calculation that just so happens to come out to 0.9999999999?
And that question is above my pay grade. I did just do the experiment on the freebie scientific calculator app on my iPad. It returned 1.0 for 1 / 3 * 3.
Edited 1 time(s). Last edit at 09/13/2022 02:25AM by Brother Of Jerry.