Ron Maimon Quora answers (part 3)
They aren't required to learn it, it's an elective. But they should learn it, preferably on their own, because the school doesn't know how to teach physics. Physics is extremely interesting, even the elementary kind. It takes the mathematics you learn in high school and uses it to describe certain natural phenomena completely, beyond what was imagined possible in the wildest dreams of people like Pythagoras or Archimedes.
If you have a computer, Newton's laws plus a tiny program can produce the motion of the planets around the sun, the motion of a free-twirling baton, the motion of colliding billiards--- it's very simple. You can simulate particles on springs, solid lattices, all sorts of crazy force laws, and you can prove all the regularities you see once you learn calculus. The hardest ones are proving that the motion in an inverse square law is an ellipse, that an orbit in an inverse fifth-power law collides with the force center, and that a bunch of particles with an inverse cube attraction breathe in and out (all from memory, it's been a while). These regularities were worked out by Newton; some others were worked out in the 19th century.
Writing these types of simulations can be done in high school, even earlier, whenever students learn to program a computer and display pictures on the screen (to see the output). It immediately leads students to appreciate Newton's laws, because suddenly, all the solid objects around them have motions that are easy to simulate, it gives more or less a full understanding of the day-to-day world, ignoring the quantum stuff like material properties and so on.
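A minimal sketch of the kind of simulation described above (Python, leapfrog integration, with units chosen so GM = 1 --- the setup and names here are my own choices, not anything from the answer):

```python
import math

def simulate_orbit(x, y, vx, vy, dt=1e-3, steps=20000):
    """Leapfrog (kick-drift-kick) integration of Newton's inverse-square
    law in units where GM = 1, so the acceleration is a = -r/|r|^3."""
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * x / r3          # initial half kick
    vy -= 0.5 * dt * y / r3
    for _ in range(steps):
        x += dt * vx                 # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx -= dt * x / r3            # full kick
        vy -= dt * y / r3
    vx += 0.5 * dt * x / r3          # sync velocity back to the integer step
    vy += 0.5 * dt * y / r3
    return x, y, vx, vy

# a circular orbit at radius 1 needs speed 1 (v^2/r = GM/r^2)
x, y, vx, vy = simulate_orbit(1.0, 0.0, 0.0, 1.0)
# energy E = v^2/2 - 1/r should stay near -0.5 over several orbits
E = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
print(E)
```

Swapping the exponent in the force law is a one-line change, which is what makes this kind of exploration so quick once the integrator is written.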
The curriculum in high school physics is extremely boring, and can be learned instantly by anyone who does the simulation stuff. An exception might be the center of mass theorem, and some mechanics puzzles. Since I had learned this physics already some years before, I made it exciting like this: I made a rule that I must actively ignore the teacher, never look at the book, and do no homework, and I would have to rederive all the formulas for the problems on the test from scratch using nothing except my head. I got all the problems right for three quarters, then on the last quarter, we had an optics quiz.
I had learned from Feynman, so I used a Fermat's-principle method to derive the lens law, rather than the geometric special rays that everyone else uses. It took me 45 minutes to rederive the lens equation, making sure all the signs were correct and everything, and this left only a few minutes to do the actual test. I did one problem correctly, so I failed the test. So I had a C on my last semester of high school physics, and the teacher was very happy to give me a C, because he hated me by then, since I had been actively ignoring him for three quarters.
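For reference, the Fermat's-principle route to the thin-lens equation goes roughly like this (a sketch of the standard paraxial argument, not necessarily the derivation used on that quiz):

```latex
% Fermat: all paths from object point to image point have equal optical length.
% Object at distance u, image at distance v, path crossing the lens at height h:
\left(u + \frac{h^2}{2u}\right) + \left(v + \frac{h^2}{2v}\right)
  - (n-1)\,\frac{h^2}{2}\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
  = \text{const (independent of } h\text{)}
% The last term is the extra optical path removed because the lens is
% thinner at height h. Setting the coefficient of h^2 to zero:
\frac{1}{u} + \frac{1}{v}
  = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right) \equiv \frac{1}{f}
```

This gives the lens equation and the lensmaker's formula in one step, but as the answer notes, it is slower under exam conditions than the special-ray constructions.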
So learn from Feynman, Landau, Dirac, use a computer, but when your school gets to optics, learn the classical methods!
Answered Sep 18, 2013
Originally Answered: How likely is it that a mathematics student couldn't solve IMO problems?
Honestly, every math PhD student should be able to easily solve all the IMO (and Putnam) problems, perhaps after some reflection, but ideally instantly. If you're a math student, you should learn the stupid tricks, they are at the high school level. If you can't solve them, you probably are going to have a hard time solving a hard unsolved problem anyway, so you should learn to do these things first, otherwise, frankly, you are not going to be a very competent student.
But one shouldn't stress out about it, with time and mathematical experience, they all become trivial. Personally, I haven't tried these things in a long time, and I am not sure I can solve all of them instantly, but that just means I am incompetent and old. If you are in a math program, you should definitely sit down and make sure you figure them all out, so that you know every elementary trick out there.
The overlap with professional mathematics is minimal. Professional mathematics is much harder, because it requires a developed insight into the grand plan of a proof, and then breaking it up into details, and so on, and this is much harder than coming up with an isolated clever trick. But you need isolated clever tricks to finish up a hard proof, to finish certain computations, so you should definitely have the complete standard arsenal in your toolbelt.
Originally Answered: Did Russell understand Godel's incompleteness theorems?
I am pretty sure, from the things I have read him say about this, that Russell didn't bother with Godel's computational formulation of the theorem, but only because he already understood a more specialized, limited case for his theory of types--- i.e., that you can always extend a theory by using higher types.
Part of the conclusion of Godel's theorem was actually proved within set theory earlier than Godel's theorem, without using the specific method Godel used (although they are related), and without the insight of how general the result is. Around 1929 or 1930, considerations of "inaccessible cardinals" allowed one to see the following: if you have a strongly inaccessible cardinal, then the sets in the hierarchy which are hereditarily of size less than the first such cardinal make a model for the axioms of set theory.
So restricting to the submodel, you see that in this submodel, all the axioms are true, except the axiom of inaccessible cardinals doesn't hold, because the first inaccessible is not in the model! This means that "There exists an inaccessible cardinal" is unprovable from the axioms of set theory, it is an independent axiom, and this was understood a few years before Godel's theorem, as described in the first chapter of Kanamori's "The Higher Infinite".
The hierarchical construction of the set theory universe is analogous to the higher types in the theory of types. When Russell was asked about Godel's theorem, he nonchalantly replied that he wasn't too impressed with it, because he felt it was simply a more refined version of the idea that the types make an unlimited hierarchy. This glib dismissal makes people say that he was completely clueless.
I don't know Russell's theory of types at all, but the argument he gave seemed to be analogous to the argument above about the inaccessible cardinals. Whenever a hierarchical system has a level which can model the previous levels, the simplest model of the previous level does not include the next level, and so cannot prove the existence of the next level. This is a vague pre-Godel version of the incompleteness theorem, vague only because it is lacking the precise algorithm of the completeness theorem to produce a model from logical axioms, and the precise insight of the incompleteness theorem that any computable axiom system cannot prove its own consistency. But the primitive insight is halfway there, it's really analogous.
One of Godel's motivations for proving the theorem was to show that you need a transfinite hierarchy of theories in order to produce all the theorems, not just of set theory, but, as he showed, of arithmetic. He succeeded in showing you need a hierarchy, but he didn't actually establish that this hierarchy necessarily involves things like uncountable ordinals. In fact, it does not.
So I suspect that Russell, while not following the gory details of Godel's proof, realized it was a version of the hierarchy-of-types idea, and this is correct, and all his statements about it come from this earlier realization, which he was more comfortable with. It seems he was aware that you needed to go up indefinitely to get completeness of mathematics; he probably understood it in the 1920s, in the same vague way explained above.
The misinterpretation of Godel's theorem here is going the other way. People do not appreciate that Godel's theorem is not as much of an obstacle to formalist mathematics as it appears at first glance. What it is saying is that the iterations of the consistency conditions have to go into the transfinite, meaning into infinite ordinals.
But as Turing argued in 1938, they do not have to go past the Church-Kleene ordinal! They never have to be infinitary.
So I think it is fair to say that Russell understood the main idea of Godel's theorem, but in a different way, as is natural in his earlier conception of the mathematical universe, not in the metaphysical way Godel understood it, or the computational way that Turing understood it in 1938. I think Turing understood it best of all.
2.6k views · View 15 Upvoters
You can do it, but you need to catch up, read the classics, do those Putnam problems until you can solve them, learn the standard curriculum, and most importantly, have some new ideas. The "have a new idea" part is what is difficult, and it is so much more difficult than all the others, that it is really the rate limiting step.
It is not possible to predict if you will have a great new idea, but one can predict from experience that you will have lots of mediocre new ideas for sure. Everyone does. The quantity may vary, the quality may vary, but you'll eventually discover something or other.
Answered Nov 13, 2013
Feynman was one of the greatest physicists of the 20th century, and his contributions were unique because he showed people all these things that they had missed, like the path integral, the diagrams, thermodynamic inequalities based on exponential convexity, time ordering of operators, hard sphere model of He4 and vacuum ansatz, partons, quantum computation, vacuum structure of gauge theories, tons of stuff. People felt really silly for not having seen these things before he pointed them out, but they didn't see them, and the large gap between when they could have been done and when he did them shows that he was necessary. Intelligence is not the proper variable to measure, it is the creativity and difficulty of the work.
He was also a phenomenal calculator, he could work through integrals and physical problems very rapidly, and his methods were original, so they looked like magic to others. He was a very good puzzle guy, and his adult performance on standard puzzles was about as good as the best folks that do such stuff, but this is not a big trick. It's only notable because as a child, he didn't score phenomenally on IQ tests, but as an adult, he clearly learned to do this, discrediting the ridiculous claims of IQ testers that they are finding a fixed genetic trait of individuals which is not improved by mental training.
Feynman was an American physicist, like Wheeler, one of the first homegrown American talents for science. He was a role model in the US, but he also became a media figure. As a media figure, he could be annoying, but as a scientist, he was a model for honesty and originality.
Originally Answered: How can the Banach–Tarski paradox make sense to mathematical laymen?
The Banach-Tarski result is not absolutely true, it is true or false depending on which axioms you like to use. Before you say "but isn't that true of everything?"--- no, not really. There are things that are computationally absolute, so that once you define the terms in a computational sense, they are just true or false, things like the twin-primes conjecture, or the Riemann hypothesis, or the volume of the sphere in terms of its radius.
The Banach Tarski paradox is not like these other things, because it is not only intuitively false, it is also mathematically false in the most natural axiomatizations of the real numbers. It just happens to be true in the axiomatization of the real numbers that mathematicians have standardized upon, and that's not something to explain, it's something to lament.
Suppose I draw a big box around the sphere, and then choose a point at random inside the box. Since this is important, I will specify exactly how I choose a point at random--- by this I mean that I flip an infinite number of coins, to determine the binary digits of a real number between 0 and 1 one by one, then I rescale this number to the length of the box, and choose another random number and another, and together I get three random real numbers that pick a point uniformly inside the box. It seems intuitively reasonable that I should be able to do this, since I can do the finite process, and the result is certain to converge.
If I can do this, pick a random point in the box, then this point has some probability of landing inside one of the Banach-Tarski pieces. This probability defines the measure of each of the pieces, it can be determined semi-empirically by choosing the points again and again, and then the fraction of the time I land in the set is the ratio of the measure of the set to the measure of the box (this is not quite empirical, because determining if a given real number is in a given set might not be decidable by objective means, but what I mean is that it is consistent to imagine this probability).
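For an ordinary measurable set, this "fraction of random points that land inside" idea can be made concrete (Python; the ball-in-a-box example and the function names are my own illustration, not from the answer):

```python
import random

def estimate_measure(in_set, trials=200_000, seed=1):
    """Estimate the measure of a subset of the unit box [0,1)^3 as the
    fraction of uniformly random points that land inside it."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if in_set((rng.random(), rng.random(), rng.random())))
    return hits / trials

# a ball of radius 1/2 centered in the box has volume (4/3)*pi*(1/2)^3 = pi/6
in_ball = lambda p: sum((c - 0.5) ** 2 for c in p) <= 0.25
vol = estimate_measure(in_ball)
print(vol)  # close to pi/6 ~ 0.5236
```

The point of the answer is that for a Banach-Tarski piece, no such `in_set` predicate with a well-defined hit frequency can exist in ZFC: the pieces are non-measurable.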
This measure, the probability of landing in the set, is unchanged by rotation and translation. So when you rotate and translate these sphere pieces, you just can't make two spheres, because the probability of landing inside the two spheres is greater--- the two spheres together have a bigger measure. That is a disproof of the Banach-Tarski theorem.
But since you can prove the Banach-Tarski theorem in ZFC, you learn that in the standard axiomatization of mathematics it is simply false that every subset is measurable. In other words, the disproof above doesn't work. Why doesn't it work?
The reason is that it is simply false that you can choose a real number at random in ZFC! The concept "pick a real number uniformly at random between 0 and 1" is inconsistent in the standard axiomatization of set theory, and we are supposed to be ok with that. I am not ok with that.
To see the contradiction between infinite random choice and axiom of choice, there is an illustrative puzzle. Suppose I place infinitely many hats, either black or white, on infinitely many heads. I ask the people to guess the color on their head from looking at all the other colors. The people win if only finitely many guess wrong. Can the people win?
If I am allowed to place the hat colors at random, the people can't win. The hat colors are independent, and knowing all the other colors gives you no information about your own. So the answer, in a universe where randomness behaves as it's supposed to, is no: you can't arrange for only finitely many to guess wrong.
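A finite sanity check of the randomness side (Python; the "guess your neighbor's color" rule is an arbitrary stand-in for any strategy that only looks at other people's hats):

```python
import random

def fraction_correct(n=100_000, seed=0):
    """With independent fair-coin hats, a guess computed only from OTHER
    people's hats is right with probability exactly 1/2."""
    rng = random.Random(seed)
    hats = [rng.randint(0, 1) for _ in range(n)]
    # each person guesses that their hat matches their neighbor's
    correct = sum(1 for i in range(n) if hats[(i + 1) % n] == hats[i])
    return correct / n

frac = fraction_correct()
print(frac)  # close to 0.5: about half the people guess wrong
```

No finite simulation can exhibit the ZFC strategy below, of course--- the equivalence-class representatives exist only by the axiom of choice, which is exactly the tension the puzzle illustrates.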
But in ZFC, the people can win. Define equivalence classes of hat-arrangements, where two arrangements are equivalent if they differ in finitely many places. Now "choose" one representative of each class (this is where the axiom of choice comes in). Each person sees every hat but their own, which is enough to determine the equivalence class, so have everyone answer according to the chosen representative of that class. The representative differs from the true arrangement in only finitely many places, so only finitely many people guess wrong, and the people win.
So the concept "an infinite list of independent random bits" is incoherent in set theory, it is just incompatible with the axiom of choice. That means I can't choose a real number at random in the standard axiomatization.
But mathematicians need this concept of a random real number. Probabilists talk about random reals all the time, and also random paths, random walks, and so on. So how do they deal with it?
What they do is sidestep the issue, by defining what is called a sigma-algebra of sets, a collection of sets closed under countable union and intersection, and defining measures only on a sigma algebra. Then they never speak about the random real number itself, rather they speak about the probability that this real number is contained in any given set. By doing this, they define the "random variable" as this collection of probabilities on a restricted universe of sets, which are in the sigma-algebra, so that they never talk about the random real itself, just about the measures of various sets.
This makes probability theory very counterintuitive and onerous--- every statement about "a random variable r" is never a statement about an actual real number, but about a collection of measures on subsets, and then you have to prove a lot of niggling technical theorems that establish that the sets you are determining are always measurable, theorems that are always obvious and annoying. The only point of these theorems is to avoid the non-measurable sets, the things constructed using the axiom of choice.
However, all this rigmarole is completely unnecessary. It is very easy to define the notion of a random real number, if you use modern logic, in particular, forcing. In this case, starting with any countable model of ZFC, you can adjoin a random real number to the model, essentially by just picking the digits one by one at random. That this concept makes formal sense is extremely easy to prove, and the result is called "random forcing", and once you do random forcing, you learn that an adjoined random real will assign a measure to all the subsets of [0,1] in the old model. Further, the whole R of the old model is revealed to be measure zero in the new model--- there is zero chance that the random real was already in the old model (you have to be careful here to talk about the dust of points in the old model, because intervals extend to intervals in the new model, blah blah blah, all this is explained by Solovay).
Using random forcing, Solovay went further, and defined an extension of any given model of ZFC which has the property that every subset of R is Lebesgue measurable! These models have completely normal probability, you can speak about picking real numbers at random with no fear of contradiction. In fact, that is what it means to say you can pick real numbers at random--- it means every subset is measurable.
In these Solovay models, the Banach-Tarski theorem becomes false. The old decomposition is just mapping a measure zero dust in the sphere to a measure zero dust in the new spheres, but this measure zero dust just happens to be all the real numbers in the old model, so the old model is under the delusion that it has successfully mapped all the points in a sphere to all the points in two spheres.
So now you see what the proper intuition for the Banach Tarski theorem is--- it's a theorem about models of the real numbers obeying ZFC. It is consistent to say that the real numbers in one sphere can be matched by rotation and translation to two spheres, because in a particular countable model, all that happens is that the countably many points in the original sphere are matched to the countably many points in the new spheres by rotation and translation.
But it is also manifest that this is not an invariant statement about subsets of R, it is a statement about your particular axiomatization of the real numbers, about the particular way that powerset and choice axioms play together.
So the Banach-Tarski paradox is a fake. It can be proved true in ZFC, and it can also be proved false in restrictions of a forcing extension of any model of ZFC. So it's really one of the results that have been overthrown by the forcing revolution, although in the particular axiomatization mathematicians like the most (for stupid historical reasons) it stays true.
So in this case, the layman's intuition is more correct than the mathematician's intuition, and you should not make the theorem make sense, because it really does not make sense. The negation of the theorem is the only thing that makes sense.
4.9k views · View 32 Upvoters
Physicists developed the philosophy of positivism in the late 19th century, and it is the standard philosophy used in the field for day-to-day work. This philosophy was extended to logical positivism in the 20th century, by incorporating formal grammars and computers, and in this form, it is a mature foundation for philosophy.
Logical positivism is pretty much the standard physics philosophy, although most physicists are not versed enough in the philosophy taxonomy to identify it as their philosophy.
Answered Sep 4, 2013
Originally Answered: Are there physicists that are outspokenly anti-positivists?
Most physicists, as Soubhik Bhattacharya has said, don't like philosophy at all, because it is a political game of influence mongering. Also, philosophers have often said extremely stupid things about physics, and they continue to do so, and it is impossible to correct them, because they never learned how to think. Their training is in pompous writing and political persuasion using superficial syllogisms.
But physicists accepted a certain degree of positivism without any question, because it is required for quantum mechanics. The anti-positivist sentiments within physics are then just warnings to not take it overboard, and start to declare that any mathematical construction is not interesting, just because you can't directly observe it.
So for example, ghosts are a useful mathematical formalism. They were introduced by Feynman in the early 1960s, and the modern formalism was developed by the early 1970s. Ghosts are not observable, they are just intermediate states in a relativistic particle formalism, they appear in Feynman diagrams to cancel certain states of intermediate gauge bosons.
Does this mean that ghosts are unimportant? No. Some overzealous positivist would say "You need a ghost free formalism, because ghosts cannot be observed."
But positivism is not the statement that all your ideas need to only refer to directly observable entities. Positivism says that you can freely switch the framework around so long as the observable stuff stays constant. Positivism doesn't say "ghost free formalisms are required", it says "ghost free and ghost formalisms are equivalent" and this is something all physicists would agree with so quickly, they wouldn't even understand that such a thing could be contested by any person in any field. But it is precisely this that is contested in philosophy.
The philosopher will actually consider whether gauge-ghosts are real things, or just imaginary things with no reality. This is considered an actual question.
Carnap explained that such questions are non-questions, they are pseudo-questions born of not carefully defining the basic concepts in your philosophy. This idea is so common in physics, because there are so many formalisms that describe the same theory with a superficially different ontology, that physicists can't possibly not be positivists, at least not past 1950.
The physicists who opposed positivism at one point or another opposed other things for different reasons.
* Einstein said to Heisenberg: "Yes, I said this (that the theory should refer to observables only), but it is nonsense just the same."
This was reflecting Einstein's uneasiness with quantum mechanics. The idea that the wavefunction was "how things are" seemed impossible, and you couldn't directly interpret it as "information we have", because it isn't probability, it's something new. So he was confused about this, and he never sorted it out.
Heisenberg said, since the theory is in accord with observation, there is really no problem. Heisenberg, in this case, I think is philosophically right, because I am a positivist. But Einstein could still be right on the physics, I personally put the cutoff for getting 100% convinced at a quantum computer factoring a large integer, like with 10,000 digits.
* Feynman said that "the principle that all things should be measurable was important, but now it's known. Everyone thinks 'consider the measurable things'. But in the future, we need new ideas, so maybe it's not good to consider the measurable things" (or something like this).
The point of this was to attack S-matrix theory. It was difficult, and Feynman correctly believed the strong interaction was a field theory.
* Weinberg attacked positivism.
This was again S-matrix theory, the S-matrix people were attacking quarks as "unobservable" and local fields as "mystical concepts". These things are wrong attacks outside of quantum gravity, you can use microscopic probes to define the quantum fields all the way up to the quantum gravity scales. Soubhik Bhattacharya has addressed these things.
Physicists don't read philosophy, and forget at what low intellectual level the debates in philosophy are conducted. They are basically a bunch of mentally damaged children arguing for political gain, and Carnap was the only adult in the bunch, so they heckled him and buried him deliberately, and only now are people forcing them to reconsider his ideas.
Originally Answered: How does the thinking or talent of a top 99.999% math person differ from a 99% math person?
I am answering not because I am a great mathematician, but because I have read some of them, and I like their work. The difference in higher mathematics is in internalizing proof methods which are generally useless for anything except proving things rigorously. This is a very different activity than internalizing technical skills, it's much more of an art. You have to deeply understand the previous proofs using the techniques, what their limits are, and how to exceed them. You also have to understand why mathematical things are true from their proof. It's an intuition that internalizes the deep methods and makes them obvious, so you don't have to repeat the deduction steps whenever you use them. It is also a kind of mental agility at packing and unpacking proofs into deeper levels of detail. It's very hard to explain, it's like the designer knowing which design elements will really click, it's an art form, but very constrained by logic. There is nothing like it, and the only proper explanation is to read a great mathematician's work in the original.
The level of innateness is like other great art, I would say close to zero. It's not Picasso's brush handling skills that made his paintings great, it's the style, the imagination, the evolved exploration, the individuality. The same holds for a great proof. It's so individual and unique, that it looks like magic that comes from a genetic mutation, but of course it isn't, because it doesn't run in families at all.
10.2k views · View 65 Upvoters · View Sharers
I am not a mathematician, so I hesitate to answer, but someone asked me to answer, and insisted even after I refused.
I like mathematics, I enjoy it the same way I enjoy a symphony, not usually as a participant, but as a spectator. I am a big fan of mathematicians! They consistently and reliably expand the knowledge of humanity, and the usual method they use to do so is by choosing to impose upon themselves a prison sentence of 20 years of hard labor in solitary confinement. This is the intellectual equivalent of the medieval monks who flagellated themselves. I did some flagellation in my youth, and I can say that it was rewarding, and it is extremely important because this mathematical thinking is the only real thinking.
First, the obvious. You should read mathematics! Read great mathematicians, past and present. Read historical work, read present work. Read the original authors, read expositions if you don't get it. If there's a new idea or method, learn it. Read all the works you are interested in, but not the ones you aren't interested in. That's going to be more difficult when you are forced to read stuff you don't care about for a degree. But always make time to read the things you are interested in. I loved transcendence theory, I loved complicated continuous constructions in analysis, I was bored by group theory, but not so much today.
But I fell out of love with mathematics for 10 years. The reason is that I didn't get the foundations straight. I was completely wrecked as a student by foundation-agony; by junior year, I decided the mathematicians were all full of crap, and stopped studying their work, because I couldn't read it anymore. I didn't trust set theory because it kept on proving more and more impossibly wrong things, like the well-ordering theorem (the reals DON'T have a well-order, this is obvious), the existence of a non-measurable set (there is no non-measurable set--- you can pick a real number at random between 0 and 1), and the higher level stuff then became a morass of shaky results that it was impossible to keep straight. Was the Radon-Nikodym theorem true? Actually true? True for some things, not for others? Usually false? Maybe yes, maybe no, there was no path to decide. How about ultrafilters? Do they actually exist? Do nets make sense? Do they actually generalize sequences? It's a terrible situation to have to qualify all the theorems in your head like this, you need to have a solid framework to hang the results on.
As an undergraduate student, I actually got to the point that I started to suspect that set theory might be omega-inconsistent (this is not true, but this is why you need to sort out foundations). There are theories which are self-consistent, but which prove lies about computer program behavior, saying that certain programs halt when in fact they do not halt. Since set theory was proving all these absurdities about well-orderings and the continuum that I couldn't make sense of, I figured it might just prove that a non-halting program halts, maybe using some ultrafilter construction, and then the Radon-Nikodym theorem, then some well-ordering, and presto, this non-halting program is proved to halt. The worst part is that you would never know it, because no matter how long you look at the program, you won't know whether it simply hasn't halted yet. This made me toss and turn, and I decided I didn't need this kind of anguish, and it's the mathematicians' fault for telling lies, so I don't need to listen to these bozos.
The resolution came a decade later while talking to a professional mathematician at a coffee shop. He explained to me his own foundational struggle, and learning the axioms of ZF, then the Godel proof of completeness of logic, and so on. He then showed me his own work on complex maps, which I liked a lot, and I got excited about math again, and went back and sorted out the foundation stuff. It was actually very quick; it resolved in about a month as I read "Set Theory and the Continuum Hypothesis" by Paul Cohen. The original, not other expositions of forcing. The important things were the Godel completeness theorem (an algorithm for making sense of axiomatic systems), the ZFC axioms (the axioms to make sense of), the Skolem theorem (that the models are really countable), Godel's L--- the straightforward simplest model where the axiom of choice and the continuum hypothesis are naturally true, and the forcing constructions (that the models can be extended so that the continuum is arbitrarily large), where the natural intuitions you have about the continuum can be made true whenever you feel like it.
The point is that axiom systems are describing countable models, not some abstract universe. Once you understand this, all the results that are uncomfortable become obvious--- you can immediately interpret any theorem you read in an analysis or topology book as "true in L", this allows you to hang it on your "L" rack. Then you can understand any intuitive probability or measure theory construction as "true in Solovay's universe", and the results which are embedding measure theory in L, you can hang on the "useless bullshit" rack.
Then you understand that the set-theories with powerset are themselves only reflections in the sense of Godel's theorem of set-theories without powerset. The set theories without powerset are reflections of arithmetic, and arithmetic is a reflection of its fragments, and this ultimately hits bedrock in computing things with integers. So the set theories are NOT omega-inconsistent, they are perfectly fine, they are extensions by Godel's method of previous consistent theories using ordinal chains.
This point of view resolves the foundations anxiety entirely, but it revives certain questions that were politically closed. One becomes interested in demonstrating the consistency of set-theory by finitary means again, using large countable ordinals (these are finitary when they can be represented on a computer). The modern version of Hilbert's program is called "Ordinal analysis", and it continues on in complete isolation within logic, but they proved a bunch of things, including the consistency of Kripke-Platek set theory a while ago, and some bigger set theories more recently. Rathjen has written about this.
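To illustrate the sense in which large countable ordinals are finitary, here is a toy sketch (my own illustration, not from the ordinal-analysis literature): ordinals below epsilon_0 in Cantor normal form, represented as finite nested tuples that a computer can compare.

```python
# Toy representation of ordinals below epsilon_0 in Cantor normal form:
# an ordinal is a tuple of (exponent, coefficient) pairs with strictly
# decreasing exponents, each exponent itself such a tuple.  This is the
# sense in which "large countable ordinals" are finitary objects.

ZERO = ()                       # the ordinal 0 is the empty sum

def omega_pow(a, c=1):
    """The ordinal omega^a * c, as a one-term normal form."""
    return ((a, c),)

def cmp_ord(x, y):
    """Compare two normal forms; returns -1, 0, or 1."""
    for (a, c), (b, d) in zip(x, y):
        t = cmp_ord(a, b)       # compare leading exponents first
        if t != 0:
            return t
        if c != d:              # then coefficients
            return -1 if c < d else 1
    if len(x) != len(y):        # a proper prefix is a smaller ordinal
        return -1 if len(x) < len(y) else 1
    return 0

ONE = omega_pow(ZERO)           # omega^0 = 1
TWO = omega_pow(ZERO, 2)
OMEGA = omega_pow(ONE)          # omega^1

# omega^omega is bigger than omega^2 * 5 + omega * 3
print(cmp_ord(omega_pow(OMEGA), ((TWO, 5), (ONE, 3))))
```

Ordinal notations used in actual consistency proofs go far beyond epsilon_0, but they are finite syntactic objects of exactly this kind.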
The upshot of this is that all the ordinals are countable, the reals are an ordinal in any theory only because they are model-reals, every set of reals is measurable (simultaneously true, but in a different more Platonic model of the reals), and the results of mathematics need to be classified in the "L" "non-L" way to sort out the theorems properly. From this point on, I had no more difficulty with any of the literature, other than the usual ones of time and difficulty.
The questions that come up when reading the logic literature unfortunately are completely different from the mainstream of mathematics. But I think that this literature is really grappling with the full complexity of mathematics in full generality, while some more specific domain, like scheme theory, is really about sorting out the regularities in more traditional questions about prime numbers and so on. So I like the logic literature, because it looks more free of human bias about what is important. But the other stuff is nice too, not knocking it, and it seems to be where all the revolutionary stuff is happening today.
My personal taste in mathematics is for proving the obvious stuff that nobody can prove: statistical regularities that are obviously true, yet completely unreachable by any known orderly method of progress, because the results are statistical, they aren't organized. I think the biggest advance here is the Appel and Haken method, their proof of the 4-color theorem, because this seems to be a path that is unexplored and promising. The amazing thing there is that they only needed to use heuristic probabilistic estimates, because they then used a computer program and checked various discharging algorithms until they found one that worked. Any one discharging algorithm proves a bunch of useless things about the existence of various random subgraphs, but if all the subgraphs allow you to remove a 4-coloring obstruction, then you prove the theorem. But they knew that if they searched long enough through discharging algorithms they would find one that works, and this was simply from their heuristics, and they only needed one example to get a proof.
This method seems very promising in attacking superficially insurmountable problems. You can prove a lot of individually useless theorems automatically about subproblems, the theorems only prove the result when the decomposition somehow covers the space of all the examples, and you patch these automatically proved sub-theorems together to prove the result by doing an automated search. All you need are some heuristic estimates on how likely each sub-theorem is to be automatically provable and to cover enough of the cases to prove the whole theorem. I would love to try to do some theorems like this when I have some free time, and try to prove some statistically obvious thing, like the normality of some number. But this is not likely to produce anything in such generality, so it's something you play with, but not seriously.
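As a small illustration of a "statistically obvious" statement, here is a toy sketch (my own example, not from the answer): the decimal digits of sqrt(2) appear uniformly distributed, which is the kind of overwhelming statistical evidence that suggests normality while coming nowhere near a proof.

```python
from math import isqrt
from collections import Counter

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the decimal point,
    computed exactly via the integer square root of 2 * 10^(2n)."""
    s = isqrt(2 * 10**(2 * n))   # floor(sqrt(2) * 10^n)
    return str(s)[1:]            # drop the leading "1"

digits = sqrt2_digits(10000)
freq = Counter(digits)
for d in sorted(freq):
    print(d, freq[d] / len(digits))   # each frequency hovers near 0.1
```

No amount of such counting proves normality of sqrt(2); that remains exactly the kind of statistically obvious, unreachable statement described above.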
2.6k views · View 18 Upvoters
Answered Jul 28, 2013 · Upvoted by Vladimir Novakovski, silver medals, IOI 2001 and IPhO 2001 and Anurag Bishnoi, Ph.D. Mathematics, Ghent University (2016) · Author has 1.4k answers and 4m answer views
Originally Answered: Can mathematical thinking be taught?
Of course it can be taught, otherwise no one would know it. Mathematics is not at all natural, there are isolated cultures like the Piraha which have no mathematics at all, they lack words for counting numbers, and this is probably what all human cultures looked like until the agricultural revolution. There is not a single person in these cultures, no matter how socially skilled or well-spoken, who has any mathematical skill at all.
Unlike language, socialization, visual arts, music, which are ancient and universal to all human cultures, mathematics evolved very recently, probably just after agriculture, in tandem with reading and writing. So it is extremely unnatural for the brain, your brain will revolt, just like it revolts when you try to learn to read. But you have to remember, it's the same for everyone else! We are all in the same boat.
The difference in achievement is mostly due to a deep commitment from the individual to practice internally, with intellectual honesty, until full understanding comes, and not to accept being ignorant of something that someone else knows. This takes time and dedication, and it requires exposure to good literature, and extensive practice in original mathematical thinking, even when the result ends up being well known and sub-optimal. It also requires knowing what you know, so that you don't read things that you can't understand and fake it (although you can do that at first, as a way of figuring out what you don't know, it should be a prelude to going over it again later with the goal of reproducing it internally in a completely original self-derived way).
You can go through mathematics historically, and you can recapitulate the whole history in a few years of single-minded dedication, by first learning Euclidean geometry, then algebra, then coordinate geometry and calculus and infinite series, differential equations, group theory and linear algebra and complex analysis, then rings and fields and differential geometry, early 20th century stuff, then algebraic geometry and stochastic stuff, and all the diverse things mathematicians study today, with number theory running throughout. If you study it deeply, each topic can take a lifetime, but for the main results, you just want to know the classic results, the stuff that is sufficient for 80% of the applications. This is the traditional method of mathematics education, and it's important, it's good to do.
But there is also a shortcut today--- you can learn to program a computer! This is how most people acquire fast mathematical literacy, and the computer has caused a revolution in technical literacy, and an attendant revolution in mathematics. There are more deep, difficult conjectures that have fallen in the last 2 decades than at any other time in history, and they keep falling left and right. It's like a second renaissance in mathematics.
If you learn to program, there is no way to avoid fluency in the most important parts of mathematics eventually, it comes with the project. You need to understand algorithms, counting, recursion, coordinate geometry (if you are doing graphics), differential equations (if you are doing simulation), discrete groups (if you are doing permutations), combinatorics (from everything), Kleene algebras (from regular expressions), and number theory (from cryptography). You can implement and get intuition for finite fields, Lie algebras, anything, with just a little work. Further, the mathematics you will encounter will not be musty stuff that smells 300 years old, but new exciting stuff, where nobody has any idea how to proceed reliably, like the 3N+1 conjecture, cellular automata, fractals and renormalization, logic, things that are close to the complex questions you expect from mathematics in its most natural state.
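The 3N+1 (Collatz) conjecture mentioned above takes only a few lines to explore (a sketch; checking many starting values is evidence, not a proof):

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the 3N+1 map."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every starting value up to 10^4 reaches 1 -- evidence, not a proof,
# which is exactly the frustrating character of the problem.
longest = max(range(1, 10001), key=collatz_steps)
print(longest, collatz_steps(longest))
```

The step counts jump around with no visible pattern, which is what makes the conjecture feel like mathematics "in its most natural state".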
The only thing you will not learn from programming is the more sophisticated analytic or geometric methods, and these are important too. It is also extremely important to learn set-theory well, this forms the backbone of mathematics, and also you need to learn some category theory, which forms the annoying language for modern mathematics, but these are both reasonably straightforward if you keep a computational perspective throughout.
Answer requested by Mirzhan Irkegulov
His Nobel prize was for the CNO cycle, and the elucidation of energy production in stars, something which was the main focus of his research in nuclear physics, and closely related to his bomb work. But this stuff, which is calculation heavy, and which teaches you most about our universe, is not a conceptual mathematical breakthrough. That's not what physicists are usually after, if it happens, it's a side effect.
In my opinion, his greatest unique conceptual mathematical breakthrough has to be the Bethe Ansatz, around 1930. The inspiration for the method is something extremely intuitive, collision theory in two dimensions: the observation that conservation of energy and momentum in 2d requires colliding particles to either keep their momenta or swap them. But this simple observation allows you to solve the complete energy eigenstates and S-matrix for many interacting quantum fields in 2d. This is something you would hardly guess is possible. Zamolodchikov and others extended the ideas to give similar solutions for many other 2d theories in the 1980s and 1990s.
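The kinematic observation can be checked directly. A minimal numerical sketch (my own illustration, assuming equal masses and units where 2m = 1): imposing conservation of momentum and energy in one spatial dimension leaves exactly two outcomes, keep or swap.

```python
import numpy as np

def elastic_outcomes(p1, p2):
    """All real (q1, q2) with q1 + q2 = p1 + p2 (momentum) and
    q1^2 + q2^2 = p1^2 + p2^2 (energy, equal masses, units 2m = 1).
    Eliminating q2 leaves a quadratic in q1."""
    P = p1 + p2
    K = p1**2 + p2**2
    # q1^2 + (P - q1)^2 = K  <=>  2*q1^2 - 2*P*q1 + (P^2 - K) = 0
    roots = np.roots([2.0, -2.0 * P, P**2 - K])
    return [(q1, P - q1) for q1 in roots]

# The only solutions are "keep" and "swap":
print(elastic_outcomes(3.0, 1.0))
```

That the momenta of a many-body wavefunction can only be permuted in collisions is what makes an ansatz built from plane waves with permuted momenta work.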
Bethe invented the method to apply it to the Heisenberg model of a 1d quantum spin-chain. The solution is similar in elegance to the more celebrated (and equally beautiful) Onsager solution in 1944, but it's technically completely different and in my opinion more difficult, because the Ising model maps to a free Fermionic system, while Bethe actually solved interacting nonlinear systems. The only competing mathematically exact methods in 2d are due to Belavin, Polyakov, and Zamolodchikov, and came many years later. These are related, but are conceptually completely different--- Bethe's methods are not limited to massless theories, while the BPZ stuff classifies conformal (massless) theories by nature.
The Bethe Ansatz not only began the field of 2d quantum field theory, a field which became extremely important as a source of examples, and also as physics when string theory came around; it also allowed you for the first time to see by example that quantum fields could be completely consistent.
Updated Dec 6, 2013 · Author has 1.4k answers and 4m answer views
As a young person he wasn't too distinguished. But his later career was fantastic. Schrodinger solved the hydrogen atom in 1926 by the orbital method that is used today. He formulated time-independent perturbation theory, multi-particle wave quantum mechanics, and proved the equivalence of Heisenberg's formalism to his own. His methods are what you read in quantum mechanics textbooks today, so they are too familiar, and so lose their sparkle.
But here is a fantastic later purely mathematical contribution. In the 1940s, Schrodinger developed a weird method of generalized raising and lowering operators, in order to give a class of exact solutions for the Schrodinger equation. The method was probably inspired by another thing Schrodinger discovered, which is that the Schrodinger equation is a diffusion equation in imaginary time. When there is a potential, the diffusion is biased as if the particle is diffusing thermodynamically in a different "potential" (this is not called the potential, it's not the same function, it's minus the log of the ground state wavefunction. It should be called the "superpotential", but for some reason, physicists call the derivative of this the superpotential. I will break with tradition and call it the superpotential.)
One way to interpret the raising/lowering formalism is as stepping from one Langevin potential to a potential that could be seen as coming from reversing the sign on the superpotential. The two problems have the same eigenvalues, except to the extent that the ground state is lost (the inverted potential has no ground state). This was a big advance in the understanding of diffusion and of quantum mechanics both, but it remained sort of distant from the mainstream.
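This eigenvalue relation is easy to verify numerically. A finite-difference sketch (my own toy check, with the assumed superpotential W(x) = x and units hbar = 2m = 1): the partner Hamiltonians built from W^2 - W' and W^2 + W' share their spectrum except for the lost ground state.

```python
import numpy as np

# Partner potentials for W(x) = x:
#   V_minus = W^2 - W' = x^2 - 1   (shifted harmonic oscillator)
#   V_plus  = W^2 + W' = x^2 + 1
N, L = 800, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

def hamiltonian(V):
    """H = -d^2/dx^2 + V(x) on the grid, central differences."""
    main = 2.0 / h**2 + V
    off = (-1.0 / h**2) * np.ones(N - 1)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

W = x
E_minus = np.linalg.eigvalsh(hamiltonian(W**2 - 1.0))  # spectrum 0, 2, 4, ...
E_plus = np.linalg.eigvalsh(hamiltonian(W**2 + 1.0))   # spectrum 2, 4, 6, ...

print(E_minus[:4])
print(E_plus[:3])
```

The lowest eigenvalue of the minus partner sits at zero, and every excited level matches a level of the plus partner, which is the pattern described above.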
This method was rediscovered in the 1980s, when Witten formulated supersymmetric quantum mechanics. This led to the solution of a bunch of quantum mechanics problems, the so called "shape invariant" potentials (terrible name), and then people realized that this is just the same class of problems that Schrodinger solved. There is a nice book by Junker that explains the method, and the exact solutions, and this has become an active little subfield. But Witten's work came more than 40 years after Schrodinger understood this result!
Schrodinger's work was not so formalism heavy, and his mathematics was more traditional physics stuff, wave equations and so on. But he was first rate.
Originally Answered: Is it true that Albert Einstein failed Mathematics many times during his school days?
Einstein was a good student in mathematics as a child, but as a young adult, he was not a star in higher mathematics, and his mathematics professor Hermann Minkowski called him a "lazy dog". The reason is that Einstein was interested in physics, and he saw higher mathematics as a distraction. He was under the impression that any mathematics he needed he could create from scratch, and this hubris has been an inspiration for physicists since.
When Minkowski saw the special theory of relativity, he was astonished that such an incredible theory could come from such a bad student. It is Einstein's weakness in mathematics as a young adult which is the real source of the idea that Einstein was a mediocre mathematics student.
Einstein studied mathematics seriously starting in 1909, when, motivated by the equivalence principle, he realized he needed to learn differential geometry. It took him several years, but by 1915, he was as good at it as Hilbert or Noether, two leading mathematicians of his day. His tutor and sounding board was Marcel Grossman, with whom he developed the early sketches of General Relativity. But the final theory Einstein did by himself.
In his later life, Einstein was comfortable with mathematics, so much so that his work was paid more attention in mathematics departments than in physics departments. This changed in the 1960s, as General Relativity became incorporated into mainstream physics, because it was rederived as the theory of the self-interacting spin-2 field. Today, General Relativity is quaint as physics, it's old enough to be classical. But in mathematics, it is yielding nice results, because proving certain extremely physically intuitive facts is more difficult than would appear at first glance. Some of the more modern results are the positive mass theorem, and the local stability theorem for Minkowski space.
Answered Jan 3, 2014 · Upvoted by Anurag Bishnoi, Ph.D. Mathematics, Ghent University (2016) and David Joyce, Ph.D. Mathematics, University of Pennsylvania (1979) · Author has 1.4k answers and 4m answer views
The plagiarism in this case goes the other way: it's the Harvard mathematicians attempting to steal Perelman's work and claim it for themselves. It was the subject of an article in the New Yorker ( http://www.newyorker.com/archive... )
Two mathematicians working under Yau, Xi-Ping Zhu and Huai-Dong Cao, claimed to have closed the (nonexistent) gaps in what they characterized as the sketch provided by Perelman. This process of "completing" proofs by lesser-known figures was just a codified and accepted form of academic theft, which was tolerated in the 1970s and 1980s because nobody had good enough access to all the literature to find the obscure papers that were being cribbed.
This type of thing serves a minor purpose, in that it usually extends the results incrementally, and advertises them, and rewrites them in more accessible language, and allows people to see that the work is correct by linking it to other work. But in the dark ages of the 1970s and 1980s, simply by doing this pedestrian work requiring no major ingenuity or years of isolated brain-breaking labor, these folks would have gotten the majority of the credit for solving the Poincare problem, while Perelman would have languished in obscurity.
But today, we have an internet, and Perelman's work was online, and written exceptionally clearly. So in a remarkable, unprecedented demonstration of the power of the internet, these prominent and powerful geometers with a ton of clout were basically told by the mathematical community to go shove it. It was the unemployed and isolated Perelman's result, and they had done nothing significantly new.
This was like a dawn in the field of mathematics. It announced that the dark ages are over, that the plagiarism and horrible academic ethics that characterized the 1970s and 1980s are done, finished. Can't get away with this crap anymore. This unethical bullshit alienated great mathematicians like Grothendieck and Perelman from professional mathematics. It really sucks balls when great famous people do it, it's not like they need to do this kind of crap.
The same problem occurred in physics in the 1970s and 1980s, the Russians were often the victims, but I don't want to name names. It's all finished. Can't get away with it anymore.
Answered Jan 18, 2013 · Author has 1.4k answers and 4m answer views
Originally Answered: What are some examples of mathematical theorems which were commonly accepted at one point but have since been shown to be false?
There is only one real example here, the clarification of the concept of set produced by the method of forcing. This showed that the following theorems, which are true in the standard axiomatization, are false in a very precise and literal sense, they produce objects which can be consistently excluded in other axiomatizations which agree on the result of all computations. That means that these theorems assert the existence of objects which are at the same time impossible to demonstrate in any concrete form and are consistent to reject.
When you have a theorem that tells you that a certain object exists, you expect that the object exists, so that if you deny that it exists, you'll get into trouble in some literal, computational, sense. Cohen showed that these theorems can be denied without contradiction with any computation, so that whether you believe them or not is up to you.
But for theorems that assert the existence of something, being free to deny the existence of this thing is tantamount to a refutation. So these existence proofs were simply refuted by Paul Cohen, and this coup was carried out without showing any problem with the proof, rather by showing problems with the axiomatic conception underlying the proof.
Here are the theorems that used to be true but are now either dubious or false (depending on whom you ask). They are still true in standard axiomatizations, of course:
* The real numbers (or any other set) can be well ordered.
This theorem was proved in axiomatic set theory around the turn of the 20th century, and was considered just plain true until 1963. In 1963, Paul Cohen demonstrated that starting with any model in which this is true, one could easily add new symbols for new real numbers which make this statement false.
So its status becomes nebulous. I would consider it obviously false, but most mathematicians just relegate it to the category of neither false nor true, rather, false or true according to convenience, and according to which model you feel like looking at. This category is always present when you consider models of set theories with uncountable collections as large as the real numbers or larger.
The method was sufficiently general and sufficiently independent of the axiomatization to show that it is always better to think of the result as false, at least when you are considering the idealization of the collection of all real numbers, rather than some specific model of the real numbers in some specific set theory.
So today, we know that there is no way to produce a well ordering of the reals by any procedure, nor to define what it means to have a well ordering of the real numbers using any method that can be evaluated on more than a countable subset of the reals.
* There exists a non-measurable set
This was also a theorem, and again, Cohen's method allowed Solovay to show that it was not true in any reasonable meaning of the word "true", as applied to the collection of all real numbers. This example subsumed the previous one, because if the reals are well orderable, then they have a nonmeasurable set. The sets which you produce which are non-measurable, when interpreted in a specific model, like Godel's L, are really measure zero in this view, because the well-orderable parts of the real numbers are always little dinky measure zero pieces, and ultimately, in the objective computational sense, countable pieces.
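The standard route from a well-ordering (choice) to a non-measurable set is Vitali's construction; a sketch (standard textbook material):

```latex
\paragraph{Vitali's construction.}
Define $x \sim y$ iff $x - y \in \mathbb{Q}$, and use a well-ordering of
$\mathbb{R}$ to choose one representative of each equivalence class inside
$[0,1]$; call the set of representatives $V$. The translates $V + q$ for
$q \in \mathbb{Q} \cap [-1,1]$ are pairwise disjoint, and
\[
  [0,1] \;\subseteq\; \bigcup_{q} (V + q) \;\subseteq\; [-1,2].
\]
If $V$ were measurable, translation invariance and countable additivity
would give
\[
  1 \;\le\; \sum_{q} \mu(V + q) \;=\; \sum_{q} \mu(V) \;\le\; 3,
\]
which fails both when $\mu(V) = 0$ (the sum is $0$) and when
$\mu(V) > 0$ (the sum is infinite). So $V$ is non-measurable.
```

The construction uses nothing but the choice of representatives, which is exactly the step that a well-ordering provides and that Solovay's model excludes.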
There are many consequences of this theorem which are either true or false according to how you decide to make a model of set theory:
* You can cut up the sphere into a finite number of parts and rearrange them by rotation and translation into a sphere of twice the size.
This is false when every subset of R is measurable, and I consider it false. It is a theorem that every part of the sphere that can be defined in any reasonable sense has measure; only if you start doing induction on the reals can you partition the sphere in this way.
* Every vector space has a basis
This is false when every subset of R is measurable, I consider it false.
* The double-dual of an infinite-dimensional vector space is always larger than the vector space.
This is surprisingly false for the example of the vector space of terminating sequences (sequences with only finitely many nonzero terms) when every subset of R is measurable.
* The dual of L_p is L_q for all but one pair of dual values p and q.
When every subset is measurable, it's true for all dual pairs, even L_1 and L_infty. In standard axiomatizations, it's not true for that pair.
* There exists a nonprincipal ultrafilter on the integers
False when every subset is measurable.
In addition to these theorems, which, if you are honest, were simply overturned by Paul Cohen, there were proposed axioms or higher constructions which were shown to be inconsistent, or incongruent with intuition.
* The existence of an elementary embedding from the set theoretic universe to itself.
This was shown to be inconsistent with the axiom of choice by Kunen. Whether it is consistent in schemes without choice, like in the measurable universe, is an open question. This was a proposed axiom, so it was really a conjecture that it was consistent, and this conjecture was disproved. So I don't think it counts as an example. The forcing examples are the only real examples.