Ron Maimon Quora answers (part 4)

Ron Maimon

Answered Jul 27, 2013 · Upvoted by Richard Zhou, B.S./M.S. Mathematics, Yale University (2022) and Pepito Moropo, B.S. / M.S. Mathematics · Author has 1.4k answers and 4m answer views

Originally Answered: What is it like to take Harvard's Math 55?

It's not hard, if you know how to prove things coming in, but if you don't already know proofs before you start, you just shouldn't take it. You won't learn how to prove things rigorously in the first two weeks before the first problem set is due. If you expect to learn the material from the class, don't. Learn it a year or two before you go in, it will then be a breezy review with good peers, and it will introduce you to new stuff.

Because the class assumes familiarity with rigorous proofs, it mostly consisted of freshmen from accelerated schools, who had exposure to proofs in high school. I was one of the few public school students, but I knew all the stuff from independent reading, so I was much much better prepared than the special school students. The class is simply another stupid method of social selection--- take a certain fraction of the undergrads and give them special attention, and groom them for the Putnam (Harvard takes this seriously), and for a mathematics career. It's a method of talent selection which is busted, like all other such methods.

If you take the class, for the sake of your TA, don't write out rigorous proofs in full. Lots of students write out the solutions in lemma-theorem form, proving everything from rock bottom. I did this also. This makes your problem set ENORMOUS. You don't need to prove the commutativity of integer addition. You should learn what the main idea of the proof is, and what can be taken for granted. This is not so easy to do in an undergraduate proof class, where nearly all the proofs are of obvious facts.

My complaint in hindsight is that the class didn't sufficiently emphasize computational skills--- you learn linear algebra without ever getting practice with row-reducing, or any other rote procedure. These are not conceptually difficult, but they are useful, and require practice, and this is more useful for undergrads than memorizing some specially selected route (as good as any other) through the rigorous development. I had personally already done some practical linear algebra, so it wasn't a big deal for me, and I assumed everyone else was the same, but now I realize that's not true. The other students did absolutely no mathematical reading at all before taking the class, and for them, there just weren't enough computational exercises. So there are often terrible gaps in the knowledge of the math-55 folks because they know abstract things without enough dirty computation. Also, they tend to become cocky from being selected as special, and this makes them useless. Perhaps I was saved by the fact that I wanted to be a physicist, so I didn't care about the mathematicians, beyond poaching their methods and training my brain.

To learn how to prove things for the purposes of getting into the class and doing well, it is sufficient to become well acquainted with the material in a few standard rigorous undergraduate textbooks. I read Lang's Calculus, Munkres' topology, some books on General Relativity, and Dirac's quantum mechanics, and this was far more than enough, it made the class boring, at least after the second problem set. The class only covers material that is standard undergraduate fare everywhere else, except rigorously. I cannot emphasize this enough, there is no magic, there is nothing in the syllabus that is beyond the standard undergraduate multivariable calculus and linear algebra, except of course, you need to prove everything. The only magic is in an occasional aside by the instructor, or a special topic.

The instructor my year was Noam Elkies, who has a wonderful insight into undergraduate teaching. He presented a strange introduction to Riemann integration which develops finitely-additive measure theory instead of doing Riemann sums. It's equivalent, and perhaps a little cleaner. In hindsight, I just wish they had gone straight to Lebesgue integration, there was no point in learning finitely additive measure separately. I also remember Körner's book on Fourier transforms being assigned, and I read that cover to cover, because it's a great book. The lectures on Fejér's proof and the FFT algorithm stick out in my mind as particularly insightful, I still have no problem writing an FFT routine when I need one. The rest is lost in my memory.
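The FFT routine he says he can still write is short enough to show. The following is a generic recursive radix-2 Cooley-Tukey sketch (my own reconstruction for illustration, not course material):

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])   # FFT of even-indexed samples
    odd = fft(a[1::2])    # FFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Twiddle factor e^{-2 pi i k / n} combines the two half-size transforms.
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out
```

For example, `fft([1, 1, 1, 1])` gives (up to rounding) `[4, 0, 0, 0]`, the transform of a constant signal. The splitting into even and odd samples is what turns the naive O(n²) discrete Fourier transform into O(n log n).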

I took it in 1992, and I also TA'd it in 1993. While I have happy memories of the class when I took it, the TAing phase was difficult. I was a sophomore TAing 40 freshmen! That was about double the number of students my year. And I had to take 3 undergrad classes plus 2 grad classes each semester that year, so my workload was approximately double that of a grad student--- approximately 10 problems every week for math-55, meaning I had to write clean solutions for the problems, and I had to read 400 amateurish crappy enormously long proofs every week, in addition to doing 2 graduate problem sets, 2 undergrad problem sets, and a bunch of reading for whatever dippy core humanities course I was forced to take that semester. It was too much. The pay for an undergrad TA was also ridiculous, it was peanuts. But it was better than cleaning toilets, which is what I did my first year.

In the second semester, the class covered differential forms, while I introduced tensor analysis in section, to explain what these were, really. That was a mistake, the students didn't like it, and they also didn't appreciate that I would translate everything to tensor language, and then translate back to forms. But that was the only real collision between me and the instructor. The rest of the course was easy, because it was a subset of what Elkies covered.

I also remember making a mistake in one of my early sections--- I said that a proof didn't require choice, because I could see the construction more or less, but a bright student said "you are choosing a sequence", and I said "oh yeah, I guess it does require choice". Today, I would make the distinction between countable and uncountable choice, but at the time, I didn't. Other than that, I remember having an easy time presenting proofs, because I had practiced presenting the proof in my head to learn the material.

TA'ing the problem sets meant that you had to find the mistakes in all of them. This took a long time. It made me lose sleep, and pull all-nighter after all-nighter. My social life disintegrated, and I think I went a little bit crazy. I would wander around Harvard Square at 4AM getting burgers at "The Tasty" (now defunct), and making friends with homeless people, before going back to my dormitory. But the students liked me, because I was close in age to them, I knew all the pitfalls of the class, I proved things well in section because I prepared well, and I actually read and understood each of their proofs, and commented on them. Also, I would make sure that if there was an insightful original idea in one of their proofs, I gave more than full credit, so that you would get credit also for part of a problem you didn't do, because you had an original idea somewhere else. The students appreciated this. I also explained the proofs from first principles, in a very rigorous way that I was really into back then. The students all said I was very helpful, and this was rewarding.

The one thing I learned from TA'ing that class was how to read crappy proofs very fast and find the mistake (if any), and this was a good skill to develop. This was probably the first time I acquired proficiency in quickly reading and evaluating mathematical proofs, from TAing, not from taking the class. Taking classes is useless for this.

I remember some problems from the first year, but only from one problem set, the first one in math55 proper. First, there was a superficially trivial problem regarding vector space duals that required the axiom of choice to solve in the infinite dimensional case. Elkies and the TA told me it didn't require choice, but I kept on telling them that I thought it did, because whatever I tried without choice didn't work. After hectoring me for a while, they realized it did require choice, so I got an undeserved reputation for being really smart. I talked to Dylan about this, and he told me why some people disbelieve choice, constructive principles and all that, although he tried to make it clear he wasn't one of those people. This made a huge impression on me, I immediately embraced the constructive thing. I reevaluated the proof of the well-orderability of the reals, and realized it makes no sense. I read "Constructive Analysis". I eventually got suspicious of all of classical mathematics by the time I took a grad real analysis course, and I gave up on math for another decade or so, before learning some logic. So you should make peace with the axiom of choice, and Cohen's book "Set Theory and the Continuum Hypothesis" is really the only way to do so.

This problem set had 9 problems, all of which were good mathematical puzzles--- they were genuine interesting things. They weren't even graduate level stuff, but they were challenging. One of the easier ones I remember was to show that the dual of the vector space of eventually zero sequences of reals was bigger than the space itself. This I remember doing by finding an uncountable linearly independent set. There was another straightforward problem, which asked to calculate the number of bases of an n-dimensional vector space over Z mod p, this was simple combinatorics, but it took me a while to figure out what was being asked (this was half the battle in the days before the internet). I did all the problems except for number 8, which stumped me. The problem asked to show that in Z mod 2 (the field with two elements) the diagonal of a symmetric matrix is in the span of the column vectors. The key idea was presented in lecture, but you had to take notes. It was a difficult problem for undergrads. I later figured out that a symmetric matrix in Z mod 2 is really an antisymmetric matrix also, that is the key idea. This was a nice problem, it was the last nice problem.
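(For the basis-counting problem, the standard answer is that an n-dimensional space over Z mod p has (p^n - 1)(p^n - p)...(p^n - p^(n-1)) ordered bases: pick any nonzero vector, then any vector outside its span, and so on.) The Z mod 2 claim about symmetric matrices can also be checked by brute force. Here is a small Python sketch (function names are mine) that enumerates every symmetric matrix over the two-element field up to 4x4 and verifies the diagonal lies in the span of the columns:

```python
from itertools import product

def diagonal_in_column_span(M, n):
    """Over GF(2): is diag(M) a 0/1 linear combination of M's columns?"""
    diag = tuple(M[i][i] for i in range(n))
    cols = [tuple(M[i][j] for i in range(n)) for j in range(n)]
    return any(
        tuple(sum(c * col[i] for c, col in zip(coeffs, cols)) % 2
              for i in range(n)) == diag
        for coeffs in product((0, 1), repeat=n)  # all 2^n combinations
    )

def holds_for_all_symmetric(n):
    """Enumerate every symmetric n x n matrix over GF(2) and test the claim."""
    upper = [(i, j) for i in range(n) for j in range(i, n)]
    for bits in product((0, 1), repeat=len(upper)):
        M = [[0] * n for _ in range(n)]
        for (i, j), b in zip(upper, bits):
            M[i][j] = M[j][i] = b  # fill symmetrically
        if not diagonal_in_column_span(M, n):
            return False
    return True
```

Running `holds_for_all_symmetric(n)` for n = 1 through 4 returns True in each case, consistent with the theorem (which holds for all n: if By = 0 for symmetric B over GF(2), then y·diag(B) = yᵀBy = 0, so the diagonal is orthogonal to the kernel, hence in the column space).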

I remember being unhappy that I didn't solve all the problems on that problem set. But then when it came back, the mean on that problem set was 2 out of 9, meaning 2 problems solved out of 9, and I had 8 out of 9, missing that stupid span Z mod 2 problem. Noam Elkies was told to tone it down in difficulty, and unfortunately, he did. The rest of the problem sets that year were loads of boring extremely straightforward standard exercises, with an occasional good problem.

The one other experience that sticks in my mind was the first math-25 problem set, before the class split into math-55, which was trivially easy. I knew how to do all the problems immediately, but I wanted to socialize with some of the girls in the class. So I joined a study group with 2 female students. I thought I would play it dumb for a while, as they debated how to do the problem, then I would say "Hey! I have an idea! Maybe you do this...." and explain the obvious correct trivial solution, pretending to not see it all at once, and in this way, impress the crap out of them.

So we went into a room in the library, and they started blabbing about their stupid totally wrong ideas about how to solve the problem. I pretended to listen for 5 minutes, nodding my head at all the stupid wrong things, then I said "Hey, I have an idea! Why don't we try this..." and then explained the answer in 2 lines. Then I would step back from the chalkboard and sit down, and they would be stunned by my insight. That was the plan anyway.

I did that with one of the problems. And I sat down at the end, and I figured they were stunned by the brilliance (because I knew the answers, the problems were dead easy). Instead, they just looked at each other in a funny way. Then one of the girls said "let's try another one...", and I let them blab with their ignorant gibberish, then stood up at the blackboard, explained the problem clearly and completely, and sat down.

The response was, "Ok, I think we should dissolve the study group." I said "Ok", and went off to write the solutions by myself. It took an hour or two, and when I was done, I walked by the room, and saw the two of them back in there, discussing it without me!

I wondered why they dissolved the group and reformed it without me, then it slowly dawned on me. They thought I was full of shit! They were not only too stupid to solve the problems, they were too stupid to recognize the correct solutions when they were shoved in their faces! This taught me a valuable lesson about how mathematically ignorant Harvard students are.

Of course I got a perfect score, and they got a very low score (although how they avoided getting a zero, I don't know). So don't be intimidated by upper class twits, a studious working class type person can easily run circles around them, they are not capable of thinking logically. These comments apply only to Harvard undergrads, not to MIT undergrads or Harvard grads, and not even to all Harvard undergrads, they have a few real nerds too.

When I was a TA, an undergrad student attempted to seduce me (she didn't make it to math-55), and in take-home finals an undergrad wanted to cheat off me (I gave him the wrong answer). Many students copied my answers to problem sets in both mathematics and physics, by pretending to work together. That should give you the idea of the level of ethics we're talking about. I think if you don't give your answers away, and stay away from unethical social schmoozers, hang out with grad-students and professors, and listen to the professors only, you'll be fine. There are lots and lots of Harvard students who think they are Nietzsche's supermen and superwomen, and act accordingly.

81.5k views · View 626 Upvoters · View Sharers

Ron Maimon

Answered Feb 12, 2013 · Author has 1.4k answers and 4m answer views

IQ tests work to measure generic problem solving ability, and general mental agility. They are useful for identifying mental retardation, environmental toxin exposure, genetic deficits, learning disabilities of certain types, and various other sources of cognitive damage. But they are useless at the high end, because the tests do not measure a trait which is out there to be measured. Further, I do not consider anything but perfect performance on these tests, answering all the questions correctly, acceptable. One should train on a sample problem set which includes all of the dozen or so different puzzles that IQ testers like to test, until one can do them all. This is not prohibitively difficult for most people, it is harder to learn a real skill.

The historical point of IQ tests is to take the diversity of human intellectual achievement and produce a number for "intelligence" which will have a mean and a standard deviation, like "height". In order to do this, you need to produce a list of questions where the number of correct answers is roughly Gaussian distributed. For cognitive tasks, this is extremely difficult, because people vary too much!

For example, if one gives a person chess problems, and uses the size of the search space as a measure, the number of chess problems solved by different people will have a massive tail on the distribution. It is not just familiarity with the game that is important--- there will be variance even among children and among people who have been exposed to the game for equal amounts of time.

The differences are due to the internalized search algorithms that are produced unconsciously in the brain, and these are very difficult to understand, because they are not done by the conscious mind. So if you use chess as an IQ-test, you will find that there is a massive difference in the time taken to solve problems and find best-moves between people at all levels.

But now, if you are an IQ tester, you need to make a bell-curve, a Gaussian distribution. So what do you do? You put a list of 20 very simple problems gradually getting harder, then a list of 20 ever harder problems growing exponentially fast in difficulty. In this way, you produce a measure of chess-performance which looks like a bell-curve. This is why IQ tests always have this ridiculous break-point where the problems go from super-easy to very very difficult. It is the only way to shoehorn a massive tail into a bell curve.

The result is designed to produce a bell curve for the number of answers, and it does. But by studying chess for a long time, with the correct approach, meaning training your unconscious search algorithm, and learning from the moves found by the great masters of the past, you can improve your ability to the point where you can ace any such chess IQ test.

IQ testers use other puzzles, not chess, but the principles are the same. The puzzles become exponentially harder, so that different people will break at different points on the high end.

The reason these tests are useful for identifying cognitive deficits is because the low-level questions are very finely grained--- they discriminate very well between people with even minute toxin exposure. If you are tired, or confused, or feeling weak, or mentally debilitated by some factor, you will have a drop in your performance on the test, and this can be detected.

The same reason makes it so that the tests are useless for testing for exceptional talent--- they are not at all finely grained at the high end. The high end parts of the test consist of extremely challenging tasks that get exponentially harder, and depend very strongly on which types of cognitive search algorithms you have internalized.

The original point of these tests, the reason they were introduced, was to give a scientific reason to allow you to discriminate between people and ethnic groups. This is why they were introduced, and they were used to select people for high positions throughout the 20th century, with not so great results (although better than hereditary aristocracy, for sure, because anyone could learn to do well on IQ type tests).

I think that the proper use of these tests is as a personal challenge. When you see an IQ test, try to do all the problems, then when you fail to do some, learn to do all the problems, and practice with enough sample tests (only the difficult problems), until you can do all of the problems instantly. This is great training for the mind so long as you don't waste too much time. Once you do this, you are well prepared for other challenges.

4.1k views · View 42 Upvoters

Ron Maimon

Answered Oct 5, 2013 · Upvoted by Anurag Bishnoi, Ph.D. Mathematics, Ghent University (2016) · Author has 1.4k answers and 4m answer views

There is no special math gene, as Terence Tao has explained, mathematics is done by ordinary people working very hard to acquire the skills. But it is extremely time consuming, and if you happen to acquire the purely social label of "genius", it's a tremendous advantage, because people will just leave you the heck alone to study what you think is necessary, you will have access to top-notch mathematical people who will explain to you the tricks left out of the literature, and people will throw money at you to support you financially through your youth. Without this, if you do math, you will have no time to support yourself, and you will eventually acquire the social label of "worthless bum", as for example, the great mathematician Ramanujan did in India, before his work was recognized.

This is the mechanism by which mathematicians pick the people they want to have around, those who contribute to the field. The social mechanisms that prevent mathematical advancement are simply the requirement to fend off starvation by selling your labor, and the social mechanisms that hide mathematical knowledge behind academia's walls of jargon.

Besides Wikipedia and Q&A sites, which serve to clarify the jargon, there are also blogs. Tao has done a lot, with his blog, to make these ideas widely accessible. He knows all the special tricks, and when he sees one that isn't widely known, he lets you know about it in relatively simple language, stripping away the specialized jargon, on his blog. He can do this because he was both labelled a genius in childhood, and also justified this label by doing great work, and in adulthood, he was thankfully recognized with a Fields Medal. Otherwise he would have to work hard for tenure and keep all his specialist knowledge hoarded in his head, only to escape occasionally in academic papers.

9.5k views · View 55 Upvoters · Answer requested by Mirzhan Irkegulov

Ron Maimon

Answered Jan 9, 2014 · Author has 1.4k answers and 4m answer views

Originally Answered: What are some of the best papers you've ever read (any field of science - biology, economics, astrophysics, geology, cognitive psychology etc)?

The paper that solved physics: [hep-th/9610043] M Theory As A Matrix Model: A Conjecture

It's by Tom Banks, Willy Fischler, Steven Shenker, and Lenny Susskind, this is the BFSS paper. It is, in my opinion, the greatest text ever written in all of science, greater than Newton and Einstein with some Hawking on top.

Before this paper, every method of calculating in physics was limited in principle by some approximation, of one kind or another. If you were doing field theory, your calculation didn't take into account gravity. If you were doing string theory, you were perturbative, and you couldn't describe black hole formation well.

This paper has no such barrier. It defines how to calculate EVERYTHING in M-theory on a flat background. Everything. No approximations. This was the first time this had ever been done for any model in physics which included gravity.

The extension and analysis of this paper, and incorporations of other insights, led directly to AdS/CFT, with Maldacena's famous work in 1997, and Witten's, and Gubser, Klebanov, Polyakov, and all the rest. But this was the first major shock of the second superstring revolution, the first completely nonperturbative calculation method. This paper was the first time a real theory of everything, and I mean a completely computable theory of everything in every domain, had ever been written down.

There is an important antecedent to BFSS in earlier work on string triangulation models, which suggested a similar mathematical construction. But this was in the late 80s, and there was no argument that such a calculation method would be a complete description of the physics. The BFSS paper came with a full realization of the holographic principle, and the reason that a full accounting of a black hole, any black hole, would account for everything else, so it was surely including all the physics.

It is also mathematically not difficult to describe, it's an ordinary quantum mechanics matrix model. I remember where I was when I first heard about this, it was around 1996 or 1997, and Willy Fischler came and gave a talk about it at Cornell. At the time, I was a grad student in the back of the small seminar room, I remember he presented the ideas, wrote down the particle Lagrangian, and everyone applauded politely, and there were a few questions. But I was LIVID. I seriously considered sitting on my hands! I thought "Does this clown really think that this trivial particle model, defined on 0+1 dimensions, includes the ENTIRE physics of an 11 dimensional theory, which is otherwise ill defined? It's obviously nonsense, why is everyone applauding? Why isn't he being booted out of the building!"
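For the curious, the "trivial particle model" in question can be written down in a few symbols. Schematically (bosonic part only; conventions and coupling normalizations vary between references, and the fermionic superpartner terms are omitted):

```latex
L = \mathrm{Tr}\left( \tfrac{1}{2}\, D_t X^i \, D_t X^i
    + \tfrac{1}{4}\, [X^i, X^j][X^i, X^j] \right),
\qquad i, j = 1, \dots, 9,
```

where the $X^i$ are $N \times N$ Hermitian matrices, $D_t = \partial_t - i[A_t,\;\cdot\;]$ is the gauge-covariant time derivative, and the BFSS conjecture is that the $N \to \infty$ limit of this ordinary quantum mechanics captures M-theory in the infinite-momentum frame. That such a simple-looking Lagrangian could encode 11-dimensional gravity is exactly what made the claim seem so outrageous at the time.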

It took a few years before I understood the reasoning behind it, and then to see that it was actually correct required a lot more checks and thinking. But it was correct, and it was a greater conceptual revolution than anything that came before.

It came from four middle aged string theorists who had been working on related ideas for decades. It did NOT come from a young genius, or the most famous names (although they became famous names subsequently).

Given this advance, and the related AdS/CFT program, we have an in-principle method of generating theories of everything for cold spaces. This is unbelievable, because it solves the problem of physics COMPLETELY in cold space times. To say that nobody saw this coming is an understatement, I thought it would take a century to do, in 1995, when it was already being finished.

I remember sitting at Santa Barbara in 1998 or 1999 with this nondescript mild mannered middle aged physicist who I didn't recognize. After some chit chat, I started blabbing away about how the BFSS paper, that I had been going over for a long time, just couldn't possibly be right as physics, because it couldn't possibly reconstruct the space time correctly from the kinematics, and it didn't have an obvious background independence blah blah blah, all this stuff. The guy stopped me and said "Before you go on, I'm Tom Banks." I said "Ok, now I'm spooked!" I wasn't really spooked, but I thought it was funny. He didn't talk to me after that.

2.4k views · View 14 Upvoters

Ron Maimon

Answered Jan 5, 2014 · Upvoted by Anjishnu Bandyopadhyay, PhD Student in Experimental Particle Physics from University of Bonn · Author has 1.4k answers and 4m answer views

Condensed matter physics is an open ended project, it will remain open forever, materials are arbitrarily fascinating. So the question only makes sense for fundamental physics, for the laws which can be thought of as lying beneath, I don't want to offend condensed matter people, no condensed matter in this answer.

String theory, while very well developed, is not a complete theory, in that we don't know many of the predictions, even for very stupid elementary questions, like "what happens to stuff you throw in a charged black hole in an AdS space?" There are folks saying there are firewalls, and while I think this is certainly false, you need to demonstrate it with a good method to calculate black hole interiors. There are lots of questions like this. But since we already have an in-principle method of computing everything within AdS/CFT models, at least since 1995-1997, I will also ignore questions of the incompleteness of string theory, and pretend we have already answered all the questions in every asymptotically cold background, because we can in principle. These questions are the ones that most string theorists study actively, so I won't say anything about the current research interest of most string theorists either. In principle, we could figure it out using a big computer (we just didn't do it yet).

I also will assume that string theory is correct, in the sense that it describes our universe, with its gravity, not just mathematical universes with mathematical gravity. Although there is no airtight evidence for this at the moment, it is the best theoretical position to take, given the uniqueness properties of string theory. Because of this I will ignore all the alternative ideas to string theory, like loop quantum gravity, or more speculative ideas about triangulations and so on, because unless they are linked to string theory, they seem to be all wrong.

So, under these constraints, what is still completely mysterious in physics?

From my biased perspective, there are three major things;

1. Finding our vacuum, with its SUSY breaking and cosmological constant.

At the moment, string theory cannot make predictions, because it's like we know Newton's laws, and that planetary orbits are ellipses, but we don't know where they are, so we can't say where they're going. Finding the vacuum is difficult, people have been looking for a long time.

There are canonical heterotic style models which look good as a first pass approximation, these came from Yale in the last few years, but these are usually supersymmetric. If our vacuum is not supersymmetric at all, which is looking more likely from current LHC studies, then you need to find a good method of producing non SUSY vacua with small cosmological constant. This is something we don't know how to do, but there is an example or two within string theory.

With or without SUSY, the cosmological constant is mysterious, and it is not clear what principle you learn from the fact that it is small, but not zero. Maybe it's only anthropic, which would mean you don't learn anything systematic.

2. Asymptotically thermal space-times, de Sitter spaces.

These are theoretically intractable right now, because de Sitter spaces are not cold, and it is not even clear that their evolution is unitary. Sorting out how to describe de Sitter space might not look like it is so difficult as compared to figuring out string theory, but it's a major, major difficulty. To see the problem: the cosmological horizon is finite area, and finite area suggests finite Hilbert space size. But the universe is expanding, so is the Hilbert space growing?

This stuff is endlessly confusing, and both Tom Banks and Leonard Susskind have written extensively about this without any really solid theoretical conclusion emerging.

3. Quantum computing: is quantum mechanics exact?

This is the big one. While quantum mechanics could be exact (I have nothing against this idea, I am completely happy with philosophically positivist interpretations equivalent to many worlds), there is a theoretical argument against this. It's philosophical prejudice, to some extent, so it could be totally wrong, but similar philosophical prejudice has been useful in the past.

The principle that is violated is that a physical system of size X should be able to only compute the answers to problems which are polynomial in X. The idea is that the universe is described well by a random access machine of a size which is comparable to the physical number of particles. It's not true in quantum mechanics, it is true in classical mechanics, and it seems to be preferable, since the exponential growth is a sort of mysterious extra processing which is counterintuitive.

That doesn't mean the universe doesn't do extra processing, but it is a reason to ask for evidence. We can't have evidence yet for exponentially huge computation, simply because we can't do the computation to check if quantum mechanics is correct in those delicate highly superposed quantum computer cases.

So it is still reasonably possible that quantum mechanics will fail for quantum computation, as 't Hooft and others have suggested over the decades. This is a distillation of Einstein's complaint with QM, that the wavefunction is too enormous to be a fundamental object, and looks like a stochastic tool to describe something else. But Bohr might also be right here, and maybe one should shut up and stop telling God what to do.

The way to resolve this experimentally is to build a quantum computer and see if it works. The way to make progress on this theoretically is to make a plausible nonlocal hidden variables theory that works to reproduce small quantum systems and fails for large ones. This is extremely challenging, but 't Hooft has taken steps in this direction (although I disagree that he has solved it, as he claimed in some recent papers that I personally found completely wrong in technical details for reasons I wrote on Stackexchange).

1.9k views · View 28 Upvoters

Ron Maimon

Answered Sep 15, 2013 · Author has 1.4k answers and 4m answer views

These are all fabrications, purposefully done, so as to make the IQ test look like it is measuring something intrinsic to a person, like height.

A good rule of thumb for historical IQ is given by the Flynn effect: you can estimate that, were they to take a modern IQ test, all of them would score about the same as people did back when IQ tests were first introduced, possibly a little lower the further back you go in time. So you need to automatically remove about 30 points from the IQ intuition for people being scored today (the tests have gotten linearly harder with time; the mean stays the same, though, because that's how you center it).

So Benjamin Franklin would probably score around 120, due to 18th-century mathematical training; musicians like Beethoven, around 70-80 (neither vocabulary nor mathematical training); politicians like Jefferson, around 100, because their wide vocabulary (although still small by modern standards) would put them a little above the average person today on the verbal parts, but their mathematics performance would be abysmal.

Newton, who was both well read and highly mathematical, might have scored 120 on a modern test, maybe 130. These are guesses by me, but they are no less guesses than the stuff you read. My guesses are at least proper estimates, accounting for the Flynn business, the enormous increase in the test's difficulty over time.


Ron Maimon

Answered Feb 6, 2013 · Author has 1.4k answers and 4m answer views

It is not exactly intelligence that you are talking about, but rather what people call social intelligence: the ability to manipulate your social environment by making deals with other people and getting them to do what you want. This is the intelligence of "Survivor", not "Nova".

Devoting your efforts to learning this activity is what separates the middle-class from the working class, and it is both useless to society and anti-correlated with real intelligence. Mathematical intelligence requires ignoring social cues to get at the truth. Politeness gets in the way, being social gets in the way, and the socially intelligent generally suck at any activity where the test of success is objective: like mathematics, or science, or chess. The middle class is at a disadvantage in such things because they train to be circumspect and indirect, for the purposes of social politics.

The thing that separates the super-rich from the middle class is not just social intelligence, it is also the ability to embrace tremendous inequality and be willing to get ahead even when this involves pushing others down. This requires the belief that one is special and deserves special compensation for one's own uniqueness. This idea is both statistically stupid (there are a lot of people in the world) and kind of evil. So to become super-rich you generally have to be stupid at statistics and completely amoral. This tendency doesn't come naturally, so you need people like Nietzsche and Ayn Rand to say it's ok. There is always a paucity of philosophers willing to promote inequality, so the super-rich are always on the lookout for someone who says "inequality is good", and then they reward this person with money and prominence.

Since neither evil nor social intelligence is particularly useful for humanity, the real problem is that one lives in a society where social political games can let you get ahead in the first place. The solution to this has been known for decades, and it was implemented in the middle decades of the 20th century: you tax individuals at punitive rates and make wage supplements for poorer people, and then you get rough equality. This is not harmful to economic growth, because income equality gets you closer to the ideal efficient market of economics textbooks, where all incomes are equal through competition.

But there is a problem with government-imposed equality, in that it can interfere with certain business activity. Not through loss of incentive, that never happens--- even if the richest person only makes 10% more money, people strive to be this person--- but through loss of capital for ventures. If there are no wealthy people, ventures become much more conservative, because you need to finance ventures by committee of many people, since no one person has enough money to finance the venture. Committees are always slightly stupider than their stupidest member, so you get no bold ideas financed.

There are probably good ways to ensure equality while at the same time ensuring that capital for venture is available. One way is to have a probabilistic component for venture: so your bank rolls a die for each proposal and picks a "venture king" at random, and then the venture king determines whether the venture lives or dies, after discussing it with everyone, and getting input on the proper financing and so on.
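The die-rolling mechanism above amounts to nothing more than delegating each proposal's decision to one randomly selected member instead of a committee vote. A minimal sketch, with the function name `pick_venture_king` and the seeded-RNG interface being my own illustrative choices:

```python
import random

def pick_venture_king(members, seed=None):
    # One randomly chosen member decides the venture's fate alone,
    # after hearing everyone's input -- avoiding committee averaging.
    rng = random.Random(seed)
    return rng.choice(members)

# Example: for a given proposal, one of the bank's members is
# singled out at random to make the call.
members = ["alice", "bob", "carol", "dave"]
king = pick_venture_king(members, seed=42)
```

The seed parameter is only there to make the sketch reproducible; in practice the point is precisely that the selection is unpredictable, so no single person or clique controls every decision.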


