With the perennial race-IQ controversy back in the news, due to The New York Times attempting to put the boot in on 90-year-old Nobelist James D. Watson from the left and Nassim Nicholas Taleb thundering on against IQ from the right, it’s worth calling attention to one of the most elegant statistical essayists of the 21st century, La Griffe du Lion.
La Griffe, the Zorro of statisticians, wields Occam’s razor with aplomb unique in recent decades. He is a thinker comparable in significance to Taleb, even though their approaches are 180 degrees opposite.
In truth, the scientific question of whether the racial gap in IQ is partially genetic in origin gets people so worked up emotionally because it is widely seen as a proxy for the moral debate over who is more to blame for black dysfunction: whites or blacks?
But, as the pseudonymous La Griffe has demonstrated, the quantitative tools helpful for thinking rationally about IQ are broadly applicable to a host of interesting questions.
La Griffe is the anti-Taleb who shows how much insight can even now be extracted by lucid thought about Taleb’s bête noire, the Gaussian probability distribution (a.k.a. the bell curve), when applied to subjects of public dispute.
La Griffe uses rather simple, somewhat stylized statistical math to slash through to the heart of controversies such as:
(1) How honest are Chicago police promotion exams?
(2) What’s the likelihood of a white resident being violently victimized in a neighborhood tipping black?
(3) How much money in annual pay does affirmative action benefit blacks and cost whites?
(4) How big is the difference in Athletic Quotient (AQ) between blacks and whites?
(5) Is the death penalty racially biased? Against whom?
(6) Should Al Gore have won the 2000 election?
(7) What matters more for the wealth of nations: average IQ or the smart fraction?
(8) Why is the IQ race gap narrower in Baltimore?
(9) Why is the race gap in imprisonment wider in more liberal states?
(10) What is the Fundamental Constant of Sociology?
Taleb, in contrast, writes eloquently upon the failings of the normal probability distribution. Bell curves, in which most examples fall in the middle and extreme data points are few and symmetrically distributed, are misleading representations of that part of reality that Taleb memorably calls Extremistan.
In contrast, height follows the Gaussian rules of what Taleb dismisses as Mediocristan: Nobody in recorded human history has ever been nine feet tall. Height is more or less distributed according to a bell curve, with most people being not too many inches different from the average height for their sex, race, age, class, and so forth.
It’s not implausible that many other harder-to-measure traits such as intelligence or golf talent are also laid out along similar bell-curve lines.
On the other hand, many metrics we find interesting are in Extremistan. As Charles Murray, coauthor of The Bell Curve, noted in his 2003 book Human Accomplishment, the distribution of career victories in PGA tournaments more closely follows an L-shaped power law than a bell curve.
This is not to say that Tiger is 80/43rds better than Phil: Over the course of their careers, the difference in average score per round (a measure from Mediocristan) is barely perceptible. But in terms of winning (a measure from Extremistan), this small difference adds up to a big advantage for Tiger over Phil (and for Phil over third place).
Branko Milanovic nicely summarizes Taleb’s contribution:
Taleb argued that the number of phenomena with such asymmetric distributions is much greater than was commonly thought and that lots of our thinking errs by tacitly assuming normal distributions. Like Molière’s Mr. Jourdain we have become Gaussian without thinking or knowing that we are.
For example, during the Housing Bubble before the 2008 crash, it was widely assumed by Wall Street that mortgage defaults had to be inherently rare because they were distributed according to a bell curve cleverly adapted from life-insurance actuarial science in 2000: Li’s Gaussian copula function.
This worked for a half-dozen years because during times of sharply rising house prices, foreclosures mostly happened to the individuals who for unusual, often tragic reasons didn’t sell their houses to avoid default: For example, the husband died and the wife became so depressed she couldn’t be bothered to do anything to meet the mortgage payments until the sheriff finally evicted her.
But when housing prices stopped rising in 2006–07, defaults suddenly went from unlikely events only occurring out in the far tail of the bell curve to everyday occurrences, taking down with them huge financial products concocted assuming that foreclosures were always unusual events out on the tail of a normal distribution.
Still, much of the world is indeed Mediocristan. The 33 essays by La Griffe show just how much can still be wrung from applying the logic of simple bell curves to contemporary social controversies.
In case you are wondering, I could make a plausible guess who the distinguished author likely is, but I don’t believe in doxxing anybody who chooses a pseudonym. La Griffe observes:
There are very few moments in a man’s existence when he experiences so much hostility, or meets with so little benevolence, as when he challenges fashionable perceptions of race…. As for our efforts, we can be certain of only one thing—vilification. It could drive a man to pseudonymity.
Like Taleb, La Griffe du Lion is not lacking in self-confidence. He takes his alias from Sir Isaac Newton’s appellation, as recounted in E.T. Bell’s classic Men of Mathematics:
In 1696 Johann Bernoulli and Leibniz concocted between them two devilish challenges…. After the problem had baffled the mathematicians of Europe for six months…Newton heard of it for the first time on January 29, 1696…. After dinner he solved the problem…and the following day communicated his solutions to the Royal Society anonymously…. But for all his caution he could not conceal his identity…. On seeing the solution Bernoulli at once exclaimed, “Ah! I recognize the lion by his paw.”
Interestingly, the normal probability distribution wasn’t discovered until long after Newton’s lifetime. The relatively late development of statistics remains puzzling. Apparently, until the second half of the 19th century, the best minds were less interested in correlation than in causation. They wanted to continue Newton’s breakthroughs in astrophysics.
Thus, in 1809–10, the geniuses Gauss and Laplace worked out the math of the normal distribution to deal with the annoyance of the random errors in astronomical observations. But the notion that daily life resembles what astronomers think of as mistakes took generations to sink in.
Finally, in the later 19th century, Maxwell observed that the bell curve sometimes popped up in physics, and Galton began to apply it to the social sciences.
When the concept of IQ was invented in the early 20th century, it was originally denoted in terms of age ratios—e.g., a 6-year-old who can do what the average 9-year-old can do would have a 9/6 ratio, which when multiplied by 100 would equal 150.
But what about adults? So IQ scoring was recalibrated according to the normal distribution, with a mean score of 100 and a standard deviation of 15. Under this system, 70 falls at the 2nd percentile, 85 at the 16th, 100 at the 50th, 115 at the 84th, and 130 at the 98th.
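The percentile figures above follow directly from the normal distribution: a score of 70 is two standard deviations below the mean, 130 is two above, and so on. A minimal sketch using only Python’s standard library (the `normal_cdf` helper is mine, built from the usual error-function identity):

```python
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring at or below x."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# The scores named in the text, converted to percentiles.
for score in (70, 85, 100, 115, 130):
    print(score, round(100 * normal_cdf(score), 1))
```

Running this prints roughly 2.3, 15.9, 50.0, 84.1, and 97.7, matching the rounded percentiles quoted above.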
In the U.S., whites on average outscore blacks by about 15 points, which La Griffe calls the Fundamental Constant of Sociology for how widely relevant it is.
In truth, we don’t really know that intelligence is distributed according to the normal distribution. It’s what you get when you sample randomly, so it’s not implausible that many real-world phenomena work like that. Plus, it seems to predict life outcomes surprisingly well. Finally, as La Griffe shows, the math is wonderfully easy.
Taleb is scornful of the assumption that IQ tests can be trustworthy at the right tails—is a person who scores 160 (99.99th percentile) more likely to succeed in a moneymaking career than a person who scores 145 (99.9th percentile)?
Some evidence suggests that IQ testing still works out in the tails. But most IQ tests are validated for merely the middle 95% of the population, so Taleb is right that our confidence should be lower in Extremistan. Plus, with intensely smart people, it often makes more sense to spend less time devising tests to tell them apart and instead to put them to work on actual problems and see what they come up with.
La Griffe’s shtick, in contrast, is not to worry much about Taleb’s obsession with whether the tails are fatter or thinner than the normal distribution would predict. The basic bell curve is good enough for government work, such as for analyzing the effects of affirmative action.
This enables La Griffe to calculate in either direction, from Mediocristan to Extremistan or vice versa. Tell him how far apart the white 50th percentile is from the black 50th percentile and he can guesstimate how many white and black superstars there are.
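The arithmetic behind that move is simple: assume two normal distributions with equal spreads, and compare the areas above some high cutoff. A sketch under illustrative assumptions (means one standard deviation apart, both SDs equal to 15; the specific numbers are mine, not La Griffe’s):

```python
from math import erf, sqrt

def tail_fraction(cutoff, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above cutoff."""
    return 0.5 * (1.0 - erf((cutoff - mean) / (sd * sqrt(2.0))))

# Illustrative assumption: group means of 100 and 85, common sd of 15.
cutoff = 130
ratio = tail_fraction(cutoff, 100) / tail_fraction(cutoff, 85)
print(round(ratio, 1))  # per-capita overrepresentation above the cutoff
```

A one-SD gap at the middle thus becomes roughly a 17-to-1 per-capita disparity three SDs out on the tail—which is why small differences in Mediocristan produce huge ones in Extremistan.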
Or tell him how many black and white players there are among high scorers in the NBA and he can estimate the racial makeup of a public high school basketball team. For example, judging from the racial composition of the NBA, a school that is 75% white and 25% black is likely to have four black starters and one white.
Or if in 2000, 80 of the fastest sprinters in the world were black and 14 were white, what are the odds that a random white can outrun a random black? A not-bad 28%.
But the odds that the fastest football player out of the 22 on the field is white are lower.
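The one-on-one figure can be sketched the same way: if two runners are drawn from normal distributions whose means sit `gap_sd` standard deviations apart (equal SDs), the difference between the two draws is itself normal with SD √2, so the underdog wins with probability Φ(−gap_sd/√2). The 0.82 SD gap below is my back-solved assumption to match the article’s 28% figure, not a number from the source:

```python
from math import erf, sqrt

def p_underdog_wins(gap_sd):
    """P(a draw from the lower-mean group beats a draw from the higher-mean
    group), both normal with equal SDs and means gap_sd SDs apart.
    The difference of two independent unit normals has SD sqrt(2)."""
    z = -gap_sd / sqrt(2.0)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

# Hypothetical gap of ~0.82 SD reproduces roughly the article's 28%.
print(round(p_underdog_wins(0.82), 2))
```

Note that a gap small enough to leave the underdog a 28% chance in any single matchup is still plenty to make the extreme tail—the fastest man among 22, or among the world’s top sprinters—overwhelmingly one-sided.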
So, if Taleb and La Griffe have opposite approaches, which one is right?
Well, reality is complicated, so they can both be highly useful.