Sheng Li (gmachine1729) wrote,

Ron Maimon answers about physics and math on Quora (part 1)

Quora is a totally fucked up place nowadays. Good thing it is banned in China. I also had friction with an idiot Chinese-American, who was an engineer at Quora at that time.


But Ron Maimon did give some really good answers on there.


My friend Brian Bi wrote a Python script for downloading Quora comments: https://github.com/t3nsor/quora-backup. I am too lazy to actually use it; it's designed for nerdy computer people who use Linux, not accessible to your "average grandma." Not only that, even Brian Bi couldn't reverse engineer the Quora API thoroughly enough, or else Quora purposely obfuscated it. I vaguely recall it even went against their policy or something like that. This was something I had in mind when I wrote www.disqussearch.com, which to my disappointment is not easily found on search engines anymore. (By the way, somebody could use Brian Bi's tool to extract all of Ron Maimon's Quora comments and send them to me.)


Ron Maimon's answers, especially on physics and math, are worth reading. I'll copy them over here. It's a pain to look them up one by one through a search engine, not to mention that I have to go over the firewall.


Expect this to be a gradual process; it's too much pain to do them all at once.


https://gmachine1729.com/%e8%af%ad%e5%bd%95-%d1%86%d0%b8%d1%82%d0%b0%d1%82%d1%8b-quotes/ron-maimon-answers-about-physics-and-math-on-quora/

Is Ed Witten really the world's greatest living theoretical physicist?


I think the greatest living theoretical physicist is Stanley Mandelstam. His thinking and insights (usually with Chew) are the only reason there is such a thing as string theory. But this is just a stupid opinion, like "what's your favorite pizza topping".



Physics is not a sport, like chess, where you can be the best by winning. It is not a competition, or rather, the competition is against nature, and each discovery is a win where nobody loses. You discover stuff, and you tell people, and then you go discover something else. At the end of your life, if you're lucky, like Ed Witten, or any of the other folks, you have at best a handful of discoveries compared to the size of the field. Then to ask who is greater, it's a question of whether discovery X plus discovery Z is more important than discovery Y, which is completely inane.


Witten is a great physicist, and one should never speak ill of a great physicist. However, his number one position has been granted by a corrupt and wrong political process, similar to the h-index, and this is not an acceptable way to go about doing science. It turns a discovery art into a contact sport where the main activity is citation sowing and reaping. The people who win at contact sports are the ones that trample over the field and hurt others.


The physics h-index works like any other star-making procedure, you select a small basket of people to be famous, using early career competence as a test. Then you apply political selection on the famous folks after the fact to get the "best of the famous". This process is bankrupt, because the best most original ideas come from absolutely nowhere, from the bottom of the barrel, from complete nobodies, just by the laws of statistics, because there are more nobodies than famous people. Nobody listens to these nobodies. In the old days, you needed people on top to endorse them, otherwise, they were just thrown out, like Everett, or the string theorists.


If you have famous people around, in the world before the internet, especially when hardly anybody could actually read the whole literature, like physics or mathematics in 1983, the famous people could sometimes get more famous by taking the work of a complete nobody, and republishing it as their own. In the early 1980s, nobody could read the whole literature, and you could get away with it, because nobody would know except for the author, and the author wouldn't find a job, because people would assume that the nobody was plagiarizing the somebody, rather than the other way around. Of course this doesn't work today. This type of corruption became worse during Ed Witten's reign. Einstein, Feynman, Schwinger, 't Hooft, Susskind always did stuff that was unmistakably completely 100% original, they never ever stepped on anyone else's toes.


Since the process of making Ed Witten leader was political, one should describe how it works for future generations, so they will see how fragile pre-internet science was: the way you got more famous is by making famous research buddies who you cite, and pull up, and they pull you back, in a corrosive feedback process that requires a feedback amplification mechanism to select a few people for the top; this is the h-index. This process of feedback citation marginalizes all really good people, because a person with a new idea is not going to get cited, they are going to be laughed at, no citations; then the idea suddenly becomes obvious, no citations again (Einstein's Nobel prize winning photon paper has, like, 4 citations). This is not some weird exception, it is all the best work.


Ed Witten was transformational, because Ed Witten, through intelligence, foresight, and political shrewdness, made this horrific crappy system work pretty ok, at least throughout the 1980s and 1990s, by first rising to the top (quickly) through making the right friends and doing a bunch of competent field theory research with the right people, then once he got to the top, quickly recognizing and pulling up the RIGHT PEOPLE, the completely original people who were stomped on through the 1970s, the string theory people, and at the same time, all the while doing his own completely original work, which was unusually heavily mathematical, and pushed the field forward also. Ed Witten became a leader essentially because he was the only baby boomer in the East Coast physics departments who actually could read. He became a superstar when he endorsed strings, thereby giving East Coast journal people a way to check whether string papers are correct (ask Ed to referee it), and suddenly the field boomed, and everyone needed to make friends with Ed, because he was going to referee their string papers. Back then, people who weren't John Schwarz or Michael Green couldn't evaluate string papers.


The baby boomers had a drug catastrophe in the 1970s, which played a role in this. When people are burned out, they needed someone to follow in order to know what to do. Ed Witten played this leadership role in physics, emulating and displacing 't Hooft somewhat, who was the previous leader. I am trying very hard not to insult Witten here, rather to insult everyone else of his generation instead.


What is it like to take Harvard's Math 55, purportedly the "most difficult undergraduate math class in the country," teaching four years of math in two semesters?


It's not hard, if you know how to prove things coming in, but if you don't already know proofs before you start, you just shouldn't take it. You won't learn how to prove things rigorously in the first two weeks before the first problem set is due. If you expect to learn the material from the class, don't. Learn it a year or two before you go in, it will then be a breezy review with good peers, and it will introduce you to new stuff.


Because the class assumes familiarity with rigorous proofs, it mostly consisted of freshmen from accelerated schools, who had exposure to proofs in high school. I was one of the few public school students, but I knew all the stuff from independent reading, so I was much much better prepared than the special school students. The class is simply another stupid method of social selection--- take a certain fraction of the undergrads and give them special attention, and groom them for the Putnam (Harvard takes this seriously), and for a mathematics career. It's a method of talent selection which is busted, like all other such methods.


If you take the class, for the sake of your TA, don't write out rigorous proofs in full. Lots of students write out the solutions in lemma-theorem form, proving everything from rock bottom. I did this also. This makes your problem set ENORMOUS. You don't need to prove the commutativity of integer addition. You should learn what the main idea of the proof is, and what can be taken for granted. This is not so easy to do in an undergraduate proof class, where nearly all the proofs are of obvious facts.


My complaint in hindsight is that the class didn't sufficiently emphasize computational skills--- you learn linear algebra without ever getting practice with row-reducing, or any other rote procedure. These are not conceptually difficult, but they are useful, and require practice, and this is more useful for undergrads than memorizing some specially selected route (as good as any other) through the rigorous development. I had personally already done some practical linear algebra, so it wasn't a big deal for me, and I assumed everyone else was the same, but now I realize that's not true. The other students did absolutely no mathematical reading at all before taking the class, and for them, there just weren't enough computational exercises. So there are often terrible gaps in the knowledge of the math-55 folks because they know abstract things without enough dirty computation. Also, they tend to become cocky from being selected as special, and this makes them useless. Perhaps I was saved by the fact that I wanted to be a physicist, so I didn't care about the mathematicians, beyond poaching their methods and training my brain.


To learn how to prove things for the purposes of getting into the class and doing well, it is sufficient to become well acquainted with the material in a few standard rigorous undergraduate textbooks. I read Lang's Calculus, Munkres's topology, some books on General Relativity, and Dirac's quantum mechanics, and this was far more than enough; it made the class boring, at least after the second problem set. The class only covers material that is standard undergraduate fare everywhere else, except rigorously. I cannot emphasize this enough, there is no magic, there is nothing in the syllabus that is beyond the standard undergraduate multivariable calculus and linear algebra, except of course, you need to prove everything. The only magic is in an occasional aside by the instructor, or a special topic.


The instructor my year was Noam Elkies, who has a wonderful insight into undergraduate teaching. He presented a strange introduction to Riemann integration which develops finitely-additive measure theory instead of doing Riemann sums. It's equivalent, and perhaps a little cleaner. In hindsight, I just wish they had gone straight to Lebesgue integration, there was no point in learning finitely additive measure separately. I also remember Koerner's book on Fourier transforms being assigned, and I read that cover to cover, because it's a great book. The lectures on Fejer's proof and the FFT algorithm stick out in my mind as particularly insightful, I still have no problem writing an FFT routine when I need one. The rest is lost in my memory.


I took it in 1992, and I also TA'd it in 1993. While I have happy memories of the class when I took it, the TAing phase was difficult. I was a sophomore TAing 40 freshmen! That was about double the number of students my year. And I had to take 3 undergrad classes plus 2 grad classes each semester that year, so my workload was approximately double that of a grad student--- approximately 10 problems every week for math-55, meaning I had to write clean solutions for the problems, and I had to read 400 amateurish crappy enormously long proofs every week, in addition to doing 2 graduate problem sets, 2 undergrad problem sets, and a bunch of reading for whatever dippy core humanities course I was forced to take that semester. It was too much. The pay for an undergrad TA was also ridiculous, it was peanuts. But it was better than cleaning toilets, which is what I did my first year.


In the second semester, the class covered differential forms, while I introduced tensor analysis in section, to explain what these were, really. That was a mistake, the students didn't like it, and they also didn't appreciate that I would translate everything to tensor language, and then translate back to forms. But that was the only real collision between me and the instructor. The rest of the course was easy, because it was a subset of what Elkies covered.


I also remember making a mistake in one of my early sections--- I said that a proof didn't require choice, because I could see the construction more or less, but a bright student said "you are choosing a sequence", and I said "oh yeah, I guess it does require choice". Today, I would make the distinction between countable and uncountable choice, but at the time, I didn't. Other than that, I remember having an easy time presenting proofs, because I had practiced presenting the proof in my head to learn the material.


TA'ing the problem sets meant that you have to find the mistakes in all of them. This took a long time. It made me lose sleep, and pull all-nighter after all-nighter. My social life disintegrated, and I think I went a little bit crazy. I would wander around Harvard Square at 4AM getting burgers at "The Tasty" (now defunct), and making friends with homeless people, before going back to my dormitory. But the students liked me, because I was close in age to them, I knew all the pitfalls of the class, I proved things well in section because I prepared well, and I actually read and understood each of their proofs, and commented on it. Also, I made sure that if there was an insightful original idea in one of their proofs, I would give more than full credit, so that you would get credit also for part of a problem you didn't do, because you had an original idea somewhere else. The students appreciated this. I also explained the proofs from first principles, in a very rigorous way that I was really into back then. The students all said I was very helpful, and this was rewarding.


The one thing I learned from TA'ing that class was how to read crappy proofs very fast and find the mistake (if any), and this was a good skill to develop. This was probably the first time I acquired proficiency in quickly reading and evaluating mathematical proofs, from TAing, not from taking the class. Taking classes is useless for this.


I remember some problems from the first year, but only from one problem set, the first one in math-55 proper. First, there was a superficially trivial problem regarding vector space duals that required the axiom of choice to solve in the infinite dimensional case. Elkies and the TA told me it didn't require choice, but I kept on telling them that I thought it did, because whatever I tried without choice didn't work. After hectoring me a while, they realized it did require choice, so I got an undeserved reputation for being really smart. I talked to Dylan about this, and he told me why some people disbelieve choice, constructive principles and all that, although he tried to make it clear he wasn't one of those people. This made a huge impression on me, I immediately embraced the constructive thing. I reevaluated the proof of the well-orderability of the reals, and realized it makes no sense. I read "constructive analysis". I eventually got suspicious of all of classical mathematics by the time I took a grad real analysis course, and I gave up on math for another decade or so, before learning some logic. So you should make peace with the axiom of choice, and Cohen's book "Set Theory and the Continuum Hypothesis" is really the only way to do so.


This problem set had 9 problems, all of which were good mathematical puzzles--- they were genuine interesting things. They weren't even graduate level stuff, but they were challenging. One of the easier ones I remember was to show that the dual of the vector space of eventually zero sequences of reals was bigger than the space itself. This I remember doing by finding an uncountable linearly independent set. There was another straightforward problem, which asked to calculate the number of bases of an n-dimensional vector space over Z mod p; this was simple combinatorics, but it took me a while to figure out what was being asked (this was half the battle in the days before the internet). I did all the problems except for number 8, which stumped me. The problem asked to show that in Z mod 2 (the field with two elements) the diagonal of a symmetric matrix is in the span of the column vectors. The key idea was presented in lecture, but you had to take notes. It was a difficult problem for undergrads. I later figured out that a symmetric matrix in Z mod 2 is really an antisymmetric matrix also, that is the key idea. This was a nice problem, it was the last nice problem.
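Since the claim is easy to test numerically, here is a small Python sketch (my own illustration, not part of the original answer) that brute-force checks, for random symmetric matrices over Z mod 2, that the diagonal lies in the span of the columns:

import itertools
import random

def in_column_span_mod2(A, v):
    # Brute force: is v a mod-2 linear combination of the columns of A?
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    for coeffs in itertools.product([0, 1], repeat=n):
        combo = [sum(c * col[i] for c, col in zip(coeffs, cols)) % 2 for i in range(n)]
        if combo == v:
            return True
    return False

rng = random.Random(0)
for trial in range(200):
    n = rng.randint(1, 6)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = rng.randint(0, 1)   # random symmetric matrix over Z mod 2
    diag = [A[i][i] for i in range(n)]
    assert in_column_span_mod2(A, diag), "counterexample found"
print("the diagonal was in the column span in every trial")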


I remember being unhappy that I didn't solve all the problems on that problem set. But then when it came back, the mean on that problem set was 2 out of 9, meaning 2 problems solved out of 9, and I had 8 out of 9, missing that stupid span Z mod 2 problem. Noam Elkies was told to tone it down in difficulty, and unfortunately, he did. The rest of the problem sets that year were loads of boring extremely straightforward standard exercises, with an occasional good problem.


The one other experience that sticks in my mind was the first math-25 problem set, before the class split into math-55, which was trivially easy. I knew how to do all the problems immediately, but I wanted to socialize with some of the girls in the class. So I joined a study group with 2 female students. I thought I would play it dumb for a while, as they debated how to do the problem, then I would say "Hey! I have an idea! Maybe you do this...." and explain the obvious correct trivial solution, pretending to not see it all at once, and in this way, impress the crap out of them.


So we went into a room in the library, and they started blabbing about their stupid totally wrong ideas about how to solve the problem. I pretended to listen for 5 minutes, nodding my head at all the stupid wrong things, then I said "Hey, I have an idea! Why don't we try this..." and then explained the answer in 2 lines. Then I would step back from the chalkboard and sit down, and they would be stunned by my insight. That was the plan anyway.


I did that with one of the problems. And I sat down at the end, and I figured they were stunned by the brilliance (because I knew the answers, the problems were dead easy). Instead, they just looked at each other in a funny way. Then one of the girls said "let's try another one...", and I let them blab with their ignorant gibberish, then stood up at the blackboard, explained the problem clearly and completely, and sat down.


The response was, "Ok, I think we should dissolve the study group." I said "Ok", and went off to write the solutions by myself. It took an hour or two, and when I was done, I walked by the room, and saw the two of them back in there, discussing it without me!


I wondered why they dissolved the group and reformed it without me, then it slowly dawned on me. They thought I was full of shit! They were not only too stupid to solve the problems, they were too stupid to recognize the correct solutions when they were shoved in their faces! This taught me a valuable lesson about how mathematically ignorant Harvard students are.


Of course I got a perfect score, and they got a very low score (although how they avoided getting a zero, I don't know). So don't be intimidated by upper class twits, a studious working class type person can easily run circles around them, they are not capable of thinking logically. These comments apply only to Harvard undergrads, not to MIT undergrads or Harvard grads, and not even to all Harvard undergrads, they have a few real nerds too.


When I was a TA, an undergrad student attempted to seduce me (she didn't make it to math-55), and on a take-home final an undergrad wanted to cheat off me (I gave him the wrong answer). Many students copied my answers to problem sets in both mathematics and physics, by pretending to work together. That should give you the idea of the level of ethics we're talking about. I think if you don't give your answers away, and stay away from unethical social schmoozers, hang out with grad-students and professors, and listen to the professors only, you'll be fine. There are lots and lots of Harvard students who think they are Nietzsche's supermen and superwomen, and act accordingly.


How good in math was Werner Heisenberg?


Heisenberg's development of Matrix Mechanics is so technically difficult to follow in the original, that into the 1960s and 1970s, great physicists would scratch their heads to understand exactly what he was doing. This despite the fact that they already knew quantum mechanics!


There is a quick simplified walk-through on Matrix Mechanics on Wikipedia, which omits the difficult dispersion relations that Heisenberg actually used in the paper (these Kramers-Heisenberg relations motivated the derivation of the on-diagonal commutation relation), the Wikipedia article substitutes a more conceptual Dirac idea for what Heisenberg actually did, which was a differentiation with respect to an integer parameter (the matrix index). But the spirit is the same, and you will get a sense of what a tour-de-force this was.


The great talent of Heisenberg was to be able to find his way in cases where the mathematics was indeterminate, like extracting the correct relations in the more complete quantum theory from the primitive relations in the old quantum theory. The answer isn't uniquely determined, except, that up to common sense, it is, so you just barely can do it. And still, Heisenberg did it. There is nothing like this, it was one of the greatest ever intellectual feats of the human mind, and physicists were so honest in 1925, that despite the fact that only about five people understood him (Kramers, Jordan, Bethe, Pauli, Bohr), he was still recognized for this achievement, at the age of 22.


His other contributions were to develop various early quantum field theories, and (independently) the Kolmogorov theory of turbulence, extending it to correlation function relations (this he did while imprisoned after the war). But I don't think mathematicians care about such things, because these are physical results, not cases where you produce a logical outcome from definite deductions. It's the physicist kind of mathematical thinking, and in Heisenberg's case, he used it to do pure magic.


How does the thinking or talent of a top 99.999% percentile math person differ from a 99% percentile math person?


I am answering not because I am a great mathematician, but because I have read some of them, and I like their work. The difference in higher mathematics is in internalizing proof methods which are generally useless for anything except proving things rigorously. This is a very different activity than internalizing technical skills, it's much more of an art. You have to deeply understand the previous proofs using the techniques, what their limits are, and how to exceed them. You also have to understand why mathematical things are true from their proof. It's an intuition that internalizes the deep methods and makes them obvious, so you don't have to repeat the deduction steps whenever you use them. It is also a kind of mental agility at packing and unpacking proofs into deeper levels of detail. It's very hard to explain, it's like the designer knowing which design elements will really click, it's an art form, but very constrained by logic. There is nothing like it, and the only proper explanation is to read a great mathematician's work in the original.


The level of innateness is like other great art, I would say close to zero. It's not Picasso's brush handling skills that made his paintings great, it's the style, the imagination, the evolved exploration, the individuality. The same holds for a great proof. It's so individual and unique, that it looks like magic that comes from a genetic mutation, but of course it isn't, because it doesn't run in families at all.


Who are some of the most underrated physicists?


The most underrated are those that contributed enormous things, but are not fully recognized for their contributions. This means, you probably never heard of them. I will focus on those theorists I know are shafted:


1. Ernst Stueckelberg: This fellow invented relativistic perturbation theory in 1934, almost 2 decades before Feynman and Schwinger. He discovered that positrons are back-in-time electrons in 1938 (and Feynman got the idea through Wheeler indirectly from him). Stueckelberg proposed renormalizable electrodynamics in 1941 but his paper was rejected by Physical Review; it took Hans Bethe's 1947 Lamb shift estimate to reintroduce the subject. Stueckelberg should have received the 1965 Nobel prize along with Schwinger and Feynman, while Tomonaga could have shared his with Luttinger for 1d liquids (which is a bigger contribution of Tomonaga's anyway). Stueckelberg didn't rest on his laurels, he went on to discover the Abelian Higgs mechanism and the renormalization group too. Each of these is a major discovery in itself, but to have one person discover all of them raises him to Bohr-Einstein status. He died insane and neglected, although he received some awards late in life. He is the godfather of underrecognized physicists, and he must be at the top of any list.


Why was he underrecognized? He was antisocial.


2. Alexei Starobinsky: This Russian fellow discovered inflation, Alan Guth was second. There is a lot of chauvinism in physics, and the great Soviet scientists were often neglected for no good reason. The mechanism was somewhat different, but the main predictions were the same, and the Soviet school calculated the fluctuations in CMB long before the famous inflation conference in 1983 reproduced their results (in a more primitive approximation, and with mistakes).


Why was he underrecognized? He was Soviet.


3. David John Candlin: this guy invented the Fermionic path integral, but credit accrued to Berezin, who wrote about it in a book a decade later. There is no dispute that David John Candlin is the inventor, his paper is a clear description of the anticommuting variables, reconstructing the state space, and producing the integral for them with the modern definition. Berezin was no slouch either, but he didn't invent the thing. This is not an attempt to steal credit from Berezin, but to attribute the result properly: David John Candlin is the sole inventor. Everyone else in 1956 had the wrong idea, including Feynman, Schwinger, and Salam. David John Candlin is still alive, and lives in Edinburgh, so there might be time to get his historical perspective on those events.


Why was he underrecognized? He didn't publish a lot.


4. Stanley Mandelstam: This guy is certainly the greatest living physicist, although he is very old. In 1957, he discovered the double-dispersion relations, and essentially refounded S-matrix theory, which was proposed by Heisenberg in 1941, but lay dormant for nearly 20 years. This theory became string theory, after many twists and turns, and Mandelstam is the original formulator of 2-d conformal fields, fermionic correlation functions, string field theory, and the arguments for finiteness of string perturbation theory which convinced the world that the theory had no ultraviolet divergences. He also made pioneering contributions to field theory, and really, he deserves a major overdue Nobel Prize, but he'll never get it.


Why was he underrecognized? He was too advanced for people to understand.


5. You can't say Mandelstam without Geoffrey Chew. This fellow proselytized for S-matrix theory so effectively, it dominated high energy physics from 1964 to 1974. He proposed that Regge trajectories describe hadronic physics, along with Frautschi, and Mandelstam's theory of cross-channel high-energy/unphysical-angle relations gave the theory mathematical form. This is the birth of string theory. The string description of hadrons is underrecognized today.


Why was he underrecognized? He was not a formal wizard (unlike his collaborator Mandelstam), and people characterized him as a dimwit, ridiculously, since his physical intuition was spot on and now known to be more correct than that of Gell-Mann, Mandelstam, Weinberg, and other formal wizards. His phenomenological calculations were also sound and competent.


6. Vladimir Gribov: Another S-matrix giant. This fellow gave form to Pomeranchuk's idea that proton-proton and proton-antiproton collisions have equal cross sections at high energy, and predicted the Pomeron trajectory (attributed to Chew and Frautschi in the west, wrongly, although possibly independently). This prediction is stunning, and it was completely verified in the 1990s when proton anti-proton collisions at hundreds of GeVs showed that the cross sections do become equal. Did the S-matrix folks who predicted this in the early 1960s get a Nobel prize? No, they were booted out of academia, and mostly had to scrounge around in accelerators.


Why was he underrecognized? He was Soviet.


7. Tamiaki Yoneya: This obscure Japanese physicist was first to discover that string theory includes a graviton, a real graviton, not just a spin-2 particle that could be a graviton. He made the argument exceedingly elegant throughout the 1970s. He is still active in string theory today, and his underrecognition seems to be fixing itself.


Why was he underrecognized? He was a string theorist in the 1970s.


8. Joel Scherk: This guy is the godfather of modern physics. Although he is well known to string theorists, he is not well known enough, and he was driven to madness and possible suicide just before 1980. His death is mysterious; there are several conflicting reports, but his mental deterioration is well attested, and it is perhaps due to the fact that string theory was so thoroughly neglected in the 1970s.


Why was he underrecognized? He was insane. Also, string theorist in the 1970s.


9. Shoichi Sakata: Sakata proposed that hadrons are made of the proton, neutron and lambda. While this is incorrect in the details, the model works well, because these three particles are stand-ins for the up, down, and strange quarks, except with integer electric charges. His model was the direct precursor of the quark model, and is the reason that Gell-Mann and Zweig were able to formulate the correct theory independently. But he was first, and made a major contribution, if not the major contribution, to this idea.


Why was he neglected? He was a Marxist.


10. Pascual Jordan: This guy co-discovered quantum mechanics, discovered Fermionic fields independently of Fermi, and made major contributions to early field theory.


Why was he neglected? He joined the Nazi party. This one I can sort of understand.


11. Iosif Khriplovich: This physicist discovered the negative beta function (asymptotic freedom) in nonabelian gauge theory in 1968-1969, three years before 't Hooft discovered it (but Veltman did not allow him to publish), and five years before the pioneering papers by Coleman/Politzer and Gross/Wilczek that established the result for good. The Nobel prize to Gross, Politzer, and Wilczek should have gone to Khriplovich, who had a much more physical argument than a direct calculation with a finicky sign, he showed why the effect happens physically, that it is due to gluon polarization. David Gross is no slouch, he could have won for greater contributions, such as the heterotic strings, or the Gross-Neveu model, or a host of things (he is really great), while Wilczek could have won for condensed matter anyons or the superconducting strong-matter high-pressure state (he also has great discoveries). The beta function was not the only great contribution Khriplovich made, he also discovered parity violating effects due to the weak interactions in atomic physics, and explained them as due to nuclear anisotropies interacting with the electron fluid. This research continues, and it is truly remarkable, considering the amount of speculation on P-violation in atomic physics in the 1980s, speculation which post-dated both Khriplovich's theories and the experiments which verified them.


Why was he neglected? He was Soviet.


12. Robert Kraichnan: He is responsible for modern turbulence theory, including the inverse cascade in 2d. The inverse cascade is the prediction that turbulence in 2d takes small scale disturbances up to large scales, violating decades of physical intuition from 3d turbulence and the statistical ultraviolet catastrophe, it is truly a remarkable prediction. He is also responsible for many statistical physics models of turbulence, including the first "large N" approximation, something which took over physics when 't Hooft discovered a more central high-energy version (although one can see Wigner and Dyson's random matrix theory as another precursor to this). Anyway, he was working for decades on this, and received adequate support, so you can't complain too much. But nobody read him.


Why was he neglected? He was not in academia.


13. Tony Skyrme: Tony Skyrme discovered his eponymous model in 1960. It took a full 2 decades for this model to be rediscovered in large N QCD by Rajeev, Nair, Balachandran and then by Witten, and then he got some recognition, but promptly died. While trying to get a better handle on 4d Skyrmions in the 1960s, he also discovered the interpretation of 2d solitons like those in the sine-Gordon model as Fermions in a dual description, something which was refined by Coleman and Mandelstam in the mid 1970s into an exact identity of two dimensional field theories, the two dimensional bosonization/fermionization which is so central to physics today.


Why was he underrecognized? He was doing unfashionable unified field theoretical physics when simple particle models were in vogue.


14. Leo Kadanoff: He discovered the modern renormalization group, and the operator product relations which are central to determining critical exponents, work which was turned into an elegant 2d theory by Belavin, Polyakov, Zamolodchikov. He isn't neglected anymore, but he was not awarded the Nobel prize with Kenneth Wilson (much to Wilson's surprise), and he should have been (along with Wolfhardt Zimmermann, another neglected giant whose 1950s work was the true source of the operator product expansion, and which is now active mathematics, thanks to Connes and Kreimer). Kadanoff still keeps plugging away, and his stature keeps growing, so this is fixing itself.


Why was he neglected? Damned if I know. Perhaps that's why he is neglected less and less with time.


And finally, I must end this list with a choice that is sure to be controversial, and is the most scandalous:


15. Martin Fleischmann: Having discovered the most surprising thing in the universe, namely that deuterated palladium sustains nuclear reactions, almost certainly of the deuteron-deuteron fusion sort, his reputation was scandalously blackened, and his great work diminished, until his name became synonymous with fraudulent or delusional science. It is clear now, two decades later, that he was not delusional, but this realization came too late for Fleischmann, who was suffering from Parkinson's disease at the end of his life. His memory drives one to work harder, with no hope of compensation, every day.


Why was he neglected? He was a chemist. Chemists are not allowed to discover fundamental challenges to all known nuclear physics, and his discovery stepped on well financed hot-fusion toes.


That's the end for now. I could go on, because so many of the major discoveries in physics are scandalously underrecognized. Many of the physicists on the list who got some credit for the discoveries of others were still underrecognized for their own original contributions. There is not so much bad faith--- a lot of things were discovered simultaneously in ignorance of prior work--- and the mechanism of credit accrual is mysterious and capricious, very rarely accruing credit to the proper author (but it happened: Einstein, Bohr, Heisenberg, Dirac, Feynman, Schwinger, Dyson, these folks got credit for their own original work). Attention to research was a scarce quantity in pre-internet times because it took decades to understand what the people were talking about. The Feynmans and Schwingers of the world are an exception, not the rule. I hope people try to emulate the people on this list, not those that suppressed and heckled them.


I edited this for typos, included an extra neglected fellow, and added information on David John Candlin's status.


What is renormalization group theory?


Renormalization group theory is the theory of the continuum limit of certain physical systems that are hard to make a continuum limit for, because the parameters have to change as you get closer to the continuum.


A continuum means continuous space, parametrized by real numbers in cartesian coordinates. This is always an idealization, so you can model it as a lattice of points, like a square grid which gets finer and finer, and the limit means that the grid spacing shrinks away, so you are approaching an idealized continuous space. The continuum limit is difficult when the limit requires you to change the model as the lattice length becomes small.


For a simple example, consider the idea of the length of a rough curve, or the area of a rough surface. A simple model for this is the Koch curve, or Koch snowflake (it's on Wikipedia). At each stage in the transformation defining the Koch curve, the length of the curve increases by a factor of 4/3. So you fix an atomic scale, at which the curve is resolved as a series of straight lines, and then the length is blowing up according to the law:


L = C (4/3)^N


where N = 1/e, with e the cutoff. The coefficient C is the size of the curve, and the quantity 4/3 is the blowing-up factor per stage. The appropriate quantity to consider in the limit of small e is C, not L.
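As a tiny illustration of this point (my own Python sketch, not part of the original answer): the measured length blows up as the number of subdivision stages grows, while the coefficient C stays put.

C = 1.0   # the "renormalized" size of the curve; this is what survives the limit
for N in range(11):   # N counts subdivision stages, i.e. finer and finer cutoffs
    L = C * (4.0 / 3.0) ** N   # measured length at that resolution
    print(f"stage N={N:2d}   length L={L:8.4f}   coefficient C={C:.4f}")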


The same parameter swap happens when you have a statistical phase transition. For the simplest example, consider the Ising model, where you have one bit at each point, and the probability of two neighbors being the same is enhanced by a factor of e^J. As the factor J gets bigger, there comes a point where there is a transition, on infinite size lattices of dimension 2 or more, where most of the bits will have a tendency to be one, or minus one. You can simulate this as described on Wikipedia, and see the transition with your own eyes (there are also applets on the internet).
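For anyone who wants to literally see the transition, here is a minimal Metropolis Monte Carlo sketch of the 2d Ising model (my own illustration, not from the answer; the lattice size, sweep count, and the values of J are arbitrary choices):

import math
import random

def ising_magnetization(L=24, J=0.5, sweeps=500, seed=0):
    # Metropolis updates for the 2d Ising model; J is the coupling in units of temperature.
    rng = random.Random(seed)
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * J * spin[i][j] * nb          # energy cost of flipping this spin
            if dE <= 0 or rng.random() < math.exp(-dE):
                spin[i][j] *= -1
    m = sum(sum(row) for row in spin) / (L * L)
    return abs(m)

# Below the square-lattice transition (J around 0.44) the magnetization stays near zero;
# above it, most of the bits line up.
for J in (0.2, 0.44, 0.7):
    print(J, ising_magnetization(J=J))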


When you are close to the transition, you can describe the Ising model using the average value of the bits (change the bit values to be -1,1 instead of 0,1 for convenience, because then there is a symmetry between the values on flipping sign). The average value can be defined on a very big ball, or using some smoothing function, like summing all the values with a Gaussian weight; it doesn't matter very much how you smooth it.


The resulting average field varies from point to point, and if you look at all Ising model configurations, it will have a tendency to want to be the same at neighboring points, but also a tendency to be pushed away from 0, and the two compete.


You can make a grid-based model for this field, where the probability of every field configuration is the exponential of minus S, where


S = \sum_{\langle i,j \rangle} (\phi_i - \phi_j)^2 + \sum_i \left( t\,\phi_i^2 + \lambda\,\phi_i^4 \right)


You can heuristically derive this approximation as Landau does, and this is done in many books, my favorite is Polyakov's "Gauge Fields and Strings". The derivation is relatively straightforward, and you can also try to do it yourself.


This model is the workhorse of elementary renormalization theory. In the continuum limit (when you consider the ball large, or equivalently, the lattice small) the first term is the gradient, it makes $\phi$ want to be constant from point to point, and it comes from the fact that averages don't change very much on overlapping balls.


The potential term, the quadratic-quartic term, is enforcing the tendency for the field to be two-valued like the Ising model is. You recover the Ising model by taking t to minus infinity at fixed lambda, where the field has two values at each point.
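A small sketch of what evaluating this lattice action looks like in practice (my own illustration, not from the answer; the grid size and the values of t and lambda are arbitrary):

import random

def lattice_action(phi, t, lam):
    # phi is an L x L grid of field values; periodic boundary conditions, each link counted once.
    L = len(phi)
    S = 0.0
    for i in range(L):
        for j in range(L):
            S += (phi[i][j] - phi[(i + 1) % L][j]) ** 2     # gradient term, vertical link
            S += (phi[i][j] - phi[i][(j + 1) % L]) ** 2     # gradient term, horizontal link
            S += t * phi[i][j] ** 2 + lam * phi[i][j] ** 4  # on-site double-well potential
    return S

rng = random.Random(1)
phi = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(8)]
print(lattice_action(phi, t=-1.0, lam=0.5))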


There is a correlation length in this model, which means that at a generic value of t, the field $\phi$ at one point is nearly independent of the precise field value at another point far away. The independence is not complete, but the correlation between the two falls off exponentially. The length scale of this exponential falloff is called the correlation length.


The point of this theory is that as you tune t, at some negative value of t, there is a phase transition. At the phase transition, the correlations no longer drop off exponentially, you get a power law.
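In standard notation (my own summary, not verbatim from the answer), the two behaviors are

\langle \phi(x)\,\phi(y) \rangle \sim e^{-|x-y|/\xi} \quad (\text{generic } t), \qquad \langle \phi(x)\,\phi(y) \rangle \sim \frac{1}{|x-y|^{\,d-2+\eta}} \quad (\text{at the transition}),

where $\xi$ is the correlation length and $\eta$ is the usual anomalous-dimension exponent.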


At this magic point, you can take the limit of t getting closer and closer to the transition, but rescaling the lattice to keep the correlation length fixed. This limit is a continuum limit, because the lattice gets smaller. In this limit, you get a continuum theory.


Define the coefficient t' to be the difference between t and the critical point. As the lattice gets smaller, the coefficient t' has to go to zero as a precise power law. This power law defines how you get the continuum theory. Renormalization group theory is the theory which allows you to calculate the exact law for the way the coefficient changes.
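In the usual critical-exponent notation (my own summary; the answer does not name the exponent), the required tuning reads

\xi_{\text{lattice}} \sim |t'|^{-\nu}, \qquad t'(a) \sim \left( \frac{a}{\xi_{\text{phys}}} \right)^{1/\nu} \quad \text{as } a \to 0,

where $a$ is the lattice spacing, $\nu$ is the correlation-length exponent, and the physical correlation length $\xi_{\text{phys}} = a\,\xi_{\text{lattice}}$ is held fixed in the limit.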


Historically, it wasn't thought of this way. Historically, it was discovered in quantum field theory, where people thought of things in the continuum to begin with. The correlation length, in the dictionary to quantum field theory, corresponds to the inverse mass of the particle described by the field. Since the mass is finite, and the lattice spacing is zero, the correlation length is infinitely larger than the lattice spacing.


So you can see that, if you make a teeny tiny lattice, and you have a finite mass particle, you need to make the correlation length enormous compared to the lattice spacing to define the field theory, so that the theory is tuned to extremely close to the critical point. The scaling laws are funny, so when you take the limit of a pure continuum, you have to change the parameters of the lattice theory as you make the lattice small, exactly according to the scaling laws of the phase-transition theory.


This relationship was understood in the 1970s. Before this, renormalization theory meant doing hokey things with perturbation expansions of continuum theories that gave infinities. The reason for the infinities is completely understood now---- you are expanding a theory with somewhat altered fractional scaling laws in a power series using a theory with different scaling laws. That's a little bit of a lie (at least in 4 dimensions). In 4 dimensional quantum field theory, you are expanding the theory with log-altered scaling laws, meaning that it's not quite a different power, but it's trying to be, because a log is like a power that is going to zero.


The best introduction to renormalization group theory, in my opinion, is the Migdal-Kadanoff transformation for the Ising model. This defined the modern field.


In this transformation, you define a block of spins as a single spin, and you make up a majority-rule for deciding what the block spin is supposed to be. This is described in several books.
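Here is a minimal sketch of one such block-spin step (my own illustration, not taken from any particular book): group the spins into 3x3 blocks and replace each block by the majority sign, producing a coarser lattice.

def block_spin(spin, b=3):
    # spin: L x L grid of +1/-1 values with L divisible by b; returns the blocked (L/b) x (L/b) grid.
    L = len(spin)
    Lb = L // b
    blocked = [[0] * Lb for _ in range(Lb)]
    for I in range(Lb):
        for J in range(Lb):
            total = sum(spin[b * I + di][b * J + dj] for di in range(b) for dj in range(b))
            blocked[I][J] = 1 if total > 0 else -1   # majority rule; b odd, so no ties
    return blocked

Iterating this map, and asking how the effective coupling has to change so that the blocked configurations look statistically like Ising configurations again, is the renormalization transformation being described.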


It's a huge field, reviewed well in several places. Kenneth Wilson's 1974 Reviews of Modern Physics article is one of the best sources for learning the theory, and there he describes the transformations. It's a little old-fashioned today, and it is complemented by reading the high-energy theory of perturbative renormalization. I am not writing more, because it is possible to write a book.


However, you asked about soft-condensed matter and simulation. In soft condensed matter, it is applied wherever you have a nontrivial continuum limit to take, with natural fractal shapes. Examples are depinning, certain sand-pile phenomena and other examples of self-organized criticality, disordered systems, any sort of statistical theory, really. It's easier to ask where it is NOT applied, because it is the general way of making a model at long distances where the atomic scale becomes unimportant, whatever that "atomic" scale might be (it might not be atoms).


In some cases, like the tight-binding model reducing to the Schrodinger equation, or the analogous thing in high energy physics, of a free lattice field turning into a free continuum field, the theory is trivial--- it is just dimensional analysis. The simplest nontrivial example is the Ising model. http://en.wikipedia.org/wiki/Ising_model


Why should high school students learn physics?


They aren't required to learn it, it's an elective. But they should learn it, preferably on their own, because the school doesn't know how to teach physics. Physics is extremely interesting, even the elementary kind. It takes the mathematics you learn in high school and uses it to describe certain natural phenomena completely, beyond what was imagined possible in the wildest dreams of people like Pythagoras or Archimedes.


If you have a computer, Newton's laws plus a tiny code can produce the motion of the planets around the sun, the motion of a free-twirling baton, the motion of colliding billiards, it's very simple. You can simulate particles on springs, solid lattices, all sorts of crazy force laws, and you can prove all the regularities you see once you learn calculus; among the hardest are proving that the motion in an inverse square law is an ellipse, that the inverse fifth-power law collides with the force center, and that a bunch of particles with an inverse cube attraction breathe in and out (all are from memory, it's been a while). These regularities were worked out by Newton, some others were worked out in the 19th century.
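Here is the kind of "tiny code" being described, as a minimal sketch (my own example, not the author's; the initial conditions and step size are arbitrary): integrate Newton's inverse-square law for one planet around a fixed sun.

def orbit(x=1.0, y=0.0, vx=0.0, vy=1.0, GM=1.0, dt=1e-3, steps=20000):
    # Semi-implicit Euler integration of the inverse-square attraction toward the origin.
    points = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3   # acceleration toward the "sun" at the origin
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        points.append((x, y))
    return points

# These initial conditions give a circular orbit; set vy=0.8 instead and the points trace
# out a visibly eccentric closed ellipse, which you can print or plot.
path = orbit()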


Writing these types of simulations can be done in high school, even earlier, whenever students learn to program a computer and display pictures on the screen (to see the output). It immediately leads students to appreciate Newton's laws, because suddenly, all the solid objects around them have motions that are easy to simulate, it gives more or less a full understanding of the day-to-day world, ignoring the quantum stuff like material properties and so on.


The curriculum in high school physics is extremely boring, and can be learned instantly by anyone who does the simulation stuff. An exception might be the center of mass theorem, and some mechanics puzzles. Since I had learned this physics already some years before, I made it exciting like this: I made a rule that I must actively ignore the teacher, never look at the book, and do no homework, and I would have to rederive all the formulas for the problems on the test from scratch using nothing except my head. I got all the problems right for three quarters, then on the last quarter, we had an optics quiz.


I had learned from Feynman, so I used a Fermat's principle method to derive the lens law, rather than using the geometric special lines that everyone else uses. It took me 45 minutes to rederive the lens equation, making sure all the signs were correct and everything, and this left only a few minutes to do the actual test. I did one problem correctly, so I failed the test. So I had a C on my last semester of high school physics, and the teacher was very happy to give me a C, because he hated me by then, since I had been actively ignoring him for three quarters.


So learn from Feynman, Landau, Dirac, use a computer, but when your school gets to optics, learn the classical methods!


What do mathematicians and physicists think of each other?


The main issue dividing the physicists from all other academic fields (including mathematics) is the philosophy of positivism, that physicists accept and everyone else stupidly rejects. This positivism is the thing that allows physicists to make progress. Mathematicians have rejected positivism in favor of certain types of idealism, since they needed transfinite ordinals, and they weren't sure where to stop, and one is still not sure where to stop, regarding these.


Positivism is the philosophical position that in order to give meaning to something, you need to reduce it to sense perceptions, or primitive impressions about the world, like "I see a blue patch of such and such a size", or primitive concepts about numbers, like "three is bigger than two". The idea is that certain questions, like "Where did the universe come from?", when turned into a question about sense perception, have no meaning. There is no sense-perception which is different depending on the putatively different answer to "Where did the universe come from?", so the question is just meaningless blather, as is most of philosophy.


This is important, because in physics, you don't know exactly a-priori what the primary concepts are in the description of nature, and the positivism allows you to identify these, because only the ones you can measure are the ones you can be certain are important to include. "Where exactly is the electron in the ground state of Hydrogen?" sounds like a reasonable question, but when you formulate it experimentally, whenever you make a way to test exactly where it is, the experiments conflict with the requirement of staying in the ground state, so the positivism does not allow you to say that there is a definite position in the ground state, the ideas are "complementary" in Bohr's way of saying it.


The physicists refined positivism a lot in the 20th century, and made it completely coherent, at the same time as academics outside of physics, at least in the West, were busy rejecting the idea. Positivism confuses people endlessly, but it's one of the pillars of modern physics, the thing that allowed relativity, quantum mechanics, and string theory to get formulated.


The idea of "reduction to sense impressions and primitive mathematics" was not made formal and precise by Mach. The tools weren't available. But by the 1930s, you could make the idea precise using a computer. To say "I see a blue patch" can mean "My mental model can be considered to contain the same computational structure as when it is given this presentation of pixels in a png." Similarly, the number 3 can be axiomatized or represented internally in a computer's memory. The computational representation of nature, and of our sense impressions, allows you to give the fundamental ideas of positivism:


* You have a computational model of the ideas in minds
* You have a computational description of natural law.
* You have an embedding that shows you how the first one sits inside the second one.


For a trivial example, you can take a Newtonian world, and imagine a being made of atoms doing some computation, then the abstract computation sits inside the Newtonian model, and the computation is just embedded in the Newtonian world.


A nontrivial example is Everett's many worlds. In this case, the computer is doing classical (probabilistic) computation, the world is quantum mechanical, and the embedding is into a particular branch of the wavefunction, which branch is selected according to the data that ends up on the computer at the end. If the probabilities are close to 1, you reproduce the Newtonian model, but it requires this branch selection, which is a nontrivial embedding, because quantum mechanics is not a model which computes anything, it gives you superpositions of possible computations, not definite answers.


A nontrivial non-quantum example is duplicated observers. This asks "If I make an atom-by-atom copy of you, which way does your consciousness go?" Philosophers started thinking about this in the 1980s starting with Dennett's "Where Am I?" But this is just the classical analog of the stuff going on in Everett's many-worlds model, and the answer is ambiguous, and this bothers philosophers. This problem, like all philosophical problems, vanishes once you formulate it positivistically.


The issue in all this is that there is extra information in the embedding of the computation in the world, and this extra information is not in the physics. In the case of duplicated Newtonian observers, you get a new bit which answers the question "which copy am I?" after the duplication which wasn't there before. The introduction of new bits of information to describe observations is one issue people have with quantum mechanics--- these bits, they feel, should be present in the physics. This was Einstein's "realism" postulate.


When logic and computers are involved, positivism becomes logical positivism, and in this form, it dominated European thinking until World War II, and continued in the Soviet Union until 1991. The impact of logical positivism was tremendous, since it gave a way to separate bullshit from thinking: if you can write a computer program to see what you are modeling, it's thinking. If you can't, not even in principle, then it's bullshit.


In mathematics, the main schism is the fact that mathematicians still have a little bit of bullshit left. The bullshit is theorems that cannot be given a straightforward computational interpretation at all.


These theorems are all about the idealization of the real numbers as a set, meaning a collection of discrete elements which can be well ordered (matched to an ordinal number). It is manifestly obvious to any schoolchild that the real numbers cannot be well ordered in any sense of the word, they are vastly too big, yet, in order to enter mathematical discourse, you are forced to reject this obvious fact.


To make this precise took until the 1960s, and because it was positivism, and Soviet sounding, it was not properly advertised in the west. This was Paul Cohen's forcing. The ideological battles of the cold war had a terrible negative impact here, suppressing the full power of forcing.


The issue is that the mathematicians have a requirement in their field, which is that "a result" is something which has an embedding in a particular formal system, something like ZF. This is fine. The problem is that they also standardized on an idealistic interpretation of these axioms that includes an idea that there is a definite "set of real numbers", with a definite "ordinal number", and so on, and this is what modern mathematicians call Platonism. The Platonism is fine for integers, and for countable ordinals up to Church-Kleene, but it stops making positivistic sense for uncountable ordinals, or for real number ordinals.


The rejection of positivism by mathematicians led to obfuscations in certain areas:


* probability is formulated over measurable sets, and one spends time constructing measures to demonstrate their consistency with the well-orderable universe of sets.


This is complete crap from the positivist point of view. It is clear since 1972 at least that all sets constructed in the usual sense are measurable, and that it is only impredicatively defined collections which give anything "nonmeasurable". This means that measure theory is screwed up, and the screw up is completely rejected within physics. In physics-math, all sets are measurable.


So physicists are allowed to say the following:


* consider a random configuration of the Ising model on an infinite lattice...


This makes no sense in standard mathematics, since a random infinite collection of bits is incompatible with the existence of a non-measurable set.


Further, physicists can say


* consider the limiting distribution of a measure on fields defined on a lattice, where you adjust the constants appropriately to allow the lattice to become fine...


this is renormalization. The mathematicians have a hard time with this, partly because the randomness aspect is obfuscated.


There is no time to waste on this nonsense in physics, and this is not something one should ask the physicists to fix. Instead you must demand of the mathematicians: please adopt positivism, and allow people to work in a universe where all sets are measurable, and they don't need to work to establish measure theory exists. 

Tags: ron maimon, internet industry, math, mathematicians, physics, physicists, USA