One-Page Answer to Pinker’s Notion that Violence Has Dropped Since 1945

Nassim just posted this one-page refutation of Steven Pinker’s claim that violence has dropped since 1945. On his Facebook page he says that “journalist-passing-for-scientist” Pinker cites “political science bloggers innocent of fat tails, who seem clueless about the difference between data and information. How to separate anecdote from evidence, sampling error from truth, journalism from science? Well, there is something called a ‘test statistic.’
“This also illustrates how to do rigorous statistics in the absence of a textbook recipe for a fat-tailed process, by means of Monte Carlo analyses. I will be teaching a course called “Extreme Risk Analytics” at NYU-Engineering this fall and will have to produce an 80-page lecture-notes booklet, which I will write progressively from interaction with the class. SILENT RISK is too advanced, so I need a more introductory book.”
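The kind of Monte Carlo “test statistic” the post alludes to can be sketched in a few lines. Everything below is an illustrative assumption, not a parameter from Taleb’s work: suppose large wars arrive as a memoryless (Poisson) process with a 25-year average gap, and ask how surprising a 70-year lull would be over a 2000-year horizon.

```python
import random

random.seed(42)

# Illustrative assumptions, not parameters from Taleb's work: large wars
# arrive as a memoryless (Poisson) process with a 25-year average gap.
MEAN_GAP_YEARS = 25.0
HORIZON_YEARS = 2000.0
OBSERVED_LULL = 70.0   # roughly the post-1945 "long peace"
TRIALS = 10_000

def longest_quiet_spell():
    """Simulate one history and return its longest gap between events."""
    t, last, longest = 0.0, 0.0, 0.0
    while t < HORIZON_YEARS:
        t += random.expovariate(1.0 / MEAN_GAP_YEARS)
        gap = min(t, HORIZON_YEARS) - last
        longest = max(longest, gap)
        last = min(t, HORIZON_YEARS)
    return longest

hits = sum(longest_quiet_spell() >= OBSERVED_LULL for _ in range(TRIALS))
print(f"Share of simulated histories with a {OBSERVED_LULL:.0f}-year lull: "
      f"{hits / TRIALS:.1%}")
```

Under these illustrative assumptions, virtually every simulated history contains at least one 70-year quiet spell, so a recent lull by itself is weak evidence that the process has changed.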

Violent Warfare: On the Wane?

The following article by Mark Buchanan was recently published on Medium. It discusses recent analysis by Nassim, along with Pasquale Cirillo, of historical warfare statistics. This analysis contradicts the popular idea that future violent wars are unlikely:

Violent warfare is on the wane, right?

Many optimists think so. But a close look at the statistics suggests that the idea just doesn’t add up.

A spate of recent and not so recent books have suggested that “everything is getting better,” that the world is getting more peaceful, more civilized, and less violent. Some of these claims stand up. In his book The Better Angels of Our Nature, psychologist Steven Pinker made the case that everything from slavery and torture to violent personal crime and cruelty to animals has decreased in modern times. He presented masses of evidence. Such trends, it would certainly seem, are highly unlikely to be reversed.

Pinker also suggested — as have others, including historian Niall Ferguson — that something big has changed about violent warfare since 1945 as well. Here too, the world seems to have become much more peaceful, as if war is becoming a thing of the past. As he wrote,

… wars between great powers and developed nations have fallen to historically unprecedented levels. This empirical fact has been repeatedly noted with astonishment by many military historians and international relations scholars…

Pinker admits that this might be a statistical illusion; perhaps we’ve just experienced a recent lull, and war will resume with its full historical fury sometime soon. But he thinks this is unlikely, for a variety of reasons. These include…

the fact that the drop in the frequency of wars among great powers and developed states has been so sudden and massive (essentially, to zero) as to suggest a qualitative change; that territorial conquest has similarly all but vanished in the planning and outcomes of wars; that the period without major war has also seen sharp reductions in conscription, length of military service, and per-GDP military expenditures; that it has seen declines in every exogenous variable that are statistically predictive of militarized disputes; and that war rhetoric and war planning have disappeared as live options in the political deliberations of developed states in their dealings with one another. None of these observations were post-hoc, offered at the end of a fortuitously long run that was spuriously deemed improbable in retrospect; many were made more than three decades ago, and their prospective assessments have been strengthened by the passage of time.

Again, Pinker has gone to lengths to emphasize that none of this proves anything about future wars. But he believes it is strongly suggestive evidence that the decline is real.

Nassim Taleb criticized Pinker’s arguments a few years ago, arguing that Pinker didn’t take proper account of the statistical nature of war as a historical phenomenon, specifically as a time series of events characterized by fat tails. Such processes naturally have long periods of quiescence, which get ripped apart by tumultuous upheavals, and they lure the mind into mistaken interpretations. Pinker responded, clarifying his view, and the quotes above come from that response. Pinker acknowledged the logical possibility of Taleb’s view, but suggested that Taleb had “no evidence that it is true, or even plausible.”

That has now changed. Just today, Taleb, writing with another mathematician, Pasquale Cirillo, has released a detailed analysis of the statistics of violent warfare going back some 2000 years, with an emphasis on the properties of the tails of the distribution — the likelihood of the most extreme events. I’ve written a short Bloomberg piece on the new paper, and wanted to offer a few more technical details here. The analysis, I think, goes a long way to making clear why we are so ill-prepared to think clearly about processes governed by fat tails, and so prone to falling into interpretive error. Moreover, it strongly suggests that hopes of a future with significantly less war are NOT supported by anything in the recent trend of conflict infrequency. The optimists are fooling themselves.

Anyone can read the paper, so I’ll limit myself to simply summarizing a few of the main points:

  1. Following many historians, Cirillo and Taleb use the number of casualties as a measure of the size of a conflict. Obviously, since the human population has grown with time, larger wars have become possible. So, they sensibly treat the data in a fractional sense — looking at the number of deaths as a fraction of the human population.
  2. For more than 50 years, going back to Lewis Fry Richardson, it’s been known that the cumulative distribution of wars by size follows a rough power law, the number of events larger than size S being proportional to 1/S raised to an exponent α. This is also an approximation, of course, because there is an absolute maximum possible size for a conflict — it can’t be more than the entire population. Hence, the power law form can only hold over a certain range, and Cirillo and Taleb accordingly rescale the data to account for the finite size of the human population.
  3. Having done this, they find using various statistical methods that α falls within the range 0.4 to 0.7. For maximum sensitivity in the statistical tests, they derive this by focusing mostly on the largest wars over the 2000 years, those equivalent (in today’s numbers) to at least 50,000 casualties.
  4. Note on the value of α — this is smaller than the exponents known to hold for either earthquakes or financial market fluctuations. It implies that the statistics of wars are even more prone to large fluctuations than these other processes, which are of course highly erratic themselves.
  5. It also implies that the sample mean over any period is NOT a very useful statistic for estimating the TRUE mean of the underlying statistical process. For example, it turns out that, for a process following this statistical pattern, one should expect fully 96% of all observations to fall below the true mean of the process. This brings home just how non-Gaussian and non-normal this process is. We’re used to thinking that, if we observe instances from some random process, we ought to (very crudely) see events about half above and half below the mean. Instead, in this process, one should expect that almost all observations will be below, and even far below, the actual mean. We almost always see fewer wars than we, in a sense, should. The process is set up almost perfectly to make an observer complacent about the possibility of large events.
  6. Related to the above, it also turns out to be >90% certain that the true mean of the process is higher than the observed mean. What we have seen in the record of wars over the past 70 years, for example, almost certainly offers an underestimate of the true likelihood of wars. The statistical process makes rare but large events so likely that looking forward on the basis of recent past observations is a recipe for unwarranted optimism. In actual numbers, Cirillo and Taleb find that the true expected mean — say, the number of deaths we should expect over the next half century — is actually about three times higher than what we’ve seen in the past.
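Points 2 through 6 can be illustrated with a small simulation. The model below is a sketch under stated assumptions, not the paper’s: a Pareto tail with exponent α = 0.55 (inside the 0.4–0.7 range above), truncated at an upper bound so that the mean is finite, in the spirit of the rescaling in point 2.

```python
import random

random.seed(1)

# Sketch, not the paper's model: a Pareto tail with exponent ALPHA in the
# 0.4-0.7 range reported above, truncated at an upper bound H (a war cannot
# kill more than the whole population), so the true mean is finite.
ALPHA, LO, H = 0.55, 1.0, 1e6
C = 1.0 - (LO / H) ** ALPHA  # normalizing constant of the truncated CDF

def draw():
    """Inverse-CDF sample from the truncated Pareto on [LO, H]."""
    u = random.random()
    return LO * (1.0 - u * C) ** (-1.0 / ALPHA)

# Exact mean of the truncated Pareto (valid for ALPHA != 1).
true_mean = (ALPHA / C) * LO ** ALPHA \
    * (H ** (1 - ALPHA) - LO ** (1 - ALPHA)) / (1 - ALPHA)

n = 100_000
below = sum(draw() < true_mean for _ in range(n)) / n
print(f"true mean: {true_mean:,.0f}; share of draws below it: {below:.1%}")
```

With these illustrative parameters, roughly 97% of draws land below the true mean, in line with the ~96% figure in point 5: nearly every observation understates the process average.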

There are quite a few other gems in the analysis, but these seem to me to be the most important.

One final thing, and maybe this is most important. Cirillo and Taleb make a strong argument that the quantity that one should study and try to estimate from the statistics is the tail exponent α (see points 2–4 above). This is certainly not easy to estimate, and it takes a lot of data to get even a crude estimate, but working with α is a much better way of getting at the true mean of the process than working with the sample mean over various periods. Looking at past events, and estimating the average number over any period, is simply a bad way to go about thinking about any process of this kind. The sample mean is NOT a mathematically sound estimate of the true mean of the process. For more on this, see Taleb’s comments at the top of the 3rd page of his earlier criticism of Pinker’s argument.
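A standard way to estimate a tail exponent is the Hill estimator. The sketch below uses synthetic Pareto data with an assumed α = 0.6; it is an illustration of the idea, not Cirillo and Taleb’s actual procedure, which uses various methods and accounts for the finite upper bound.

```python
import math
import random

random.seed(2)

# Illustrative sketch: estimate the tail exponent alpha with the Hill
# estimator from synthetic Pareto "war sizes". The true alpha = 0.6 is an
# assumed value inside the 0.4-0.7 range reported above.
TRUE_ALPHA = 0.6
n, k = 20_000, 500  # sample size; number of largest observations used

xs = sorted((random.paretovariate(TRUE_ALPHA) for _ in range(n)), reverse=True)

# Hill estimator: alpha_hat = k / sum of log ratios of the top-k order
# statistics to the (k+1)-th largest observation.
alpha_hat = k / sum(math.log(xs[i] / xs[k]) for i in range(k))

# With alpha < 1 the sample mean diverges as n grows, but alpha_hat is stable.
print(f"Hill estimate: {alpha_hat:.2f}  (true alpha: {TRUE_ALPHA})")
```

The point of the exercise: the estimate of α settles down as data accumulates, while the sample mean of an α < 1 process never does.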

And that, I think, is pretty good reason to believe that all talk of the dwindling likelihood of wars based on recent past experience is mostly illusion: people telling themselves convincing but probably unfounded stories. Sure, it looks as if things are getting more peaceful. But, looking at the mathematics, that’s exactly what we should expect to see, even if we’re most likely due for a much more violent future.

SSRN: Constantine Sandis & Nassim Nicholas Taleb – The Skin In The Game Heuristic for Protection Against Tail Events

The Skin In The Game Heuristic for Protection Against Tail Events

Constantine Sandis
Oxford Brookes University

Nassim Nicholas Taleb
NYU-Poly; Université Paris I Panthéon-Sorbonne – Centre d’Economie de la Sorbonne (CES)

July 30, 2013

Standard economic theory makes an allowance for the agency problem, but not the compounding of moral hazard in the presence of informational opacity, particularly in what concerns high-impact events in fat tailed domains. But the ancients did; so did many aspects of moral philosophy. We propose a global and morally mandatory heuristic that anyone involved in an action which can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. While perhaps not sufficient, the heuristic is certainly necessary hence mandatory. It is supposed to counter risk hiding and transfer in the tails. We link the rule to various philosophical approaches to ethics and moral luck.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2298292

Taleb’s MOOCs | Binary vs Vanilla Payoffs and Predictions: An error in the research/risk literature

“Micro-Mooc on a paper by Taleb and Tetlock (one manifestation of the LUDIC FALLACY). There are serious statistical differences between predictions, bets, and exposures that have a yes/no type of payoff, the “binaries”, and those that have varying payoffs, which we call the “vanilla”. Real world exposures tend to belong to the vanilla category, and are poorly captured by binaries. Yet much of the economics and decision making literature confuses the two. Vanilla exposures are sensitive to Black Swan effects, model errors, and prediction problems, while the binaries are largely immune to them. The binaries are mathematically tractable, while the vanilla are much less so. Hedging vanilla exposures with binary bets can be disastrous – and because of the human tendency to engage in attribute substitution when confronted by difficult questions, decision-makers and researchers often confuse the vanilla for the binary.”
The paper is here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2284964
More on general problems with Fat Tails: http://www.fooledbyrandomness.com/FatTails.html
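The binary/vanilla distinction in the quote above is easy to see in closed form. The sketch below assumes a Pareto variable with minimum 1 (the tail exponents 1.2 and 1.1 and the threshold K = 10 are arbitrary illustrative values, not from the paper): a small error in the tail exponent barely moves the binary probability but more than doubles the open-ended vanilla exposure.

```python
# Closed-form payoffs for a Pareto variable X with minimum 1 and tail
# exponent alpha; the exponents and threshold below are illustrative.

def binary(alpha, k):
    """P(X > k): the yes/no bet."""
    return k ** -alpha

def vanilla(alpha, k):
    """E[(X - k)+], the open-ended exposure (requires alpha > 1)."""
    return k ** (1.0 - alpha) / (alpha - 1.0)

K = 10.0
for a in (1.2, 1.1):   # a 0.1 misestimate of the tail exponent
    print(f"alpha={a}: binary={binary(a, K):.3f}, vanilla={vanilla(a, K):.2f}")
```

Shaving the exponent from 1.2 to 1.1 moves the binary probability from about 0.063 to 0.079, while the vanilla exposure jumps from about 3.2 to 7.9: the open-ended payoff is far more sensitive to tail model error, which is the paper’s point.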

Paper: Problem with Economics

From Nassim Nicholas Taleb Facebook Page:

Friends, I am presenting this document (summary of recent work) explaining what is wrong with economics models at a conference in France (which is not fully infected with the Anglo-American disease).
Please let me know if you find mistakes as I cut/pasted from *Fat Tails & Fragility*.

https://dl.dropboxusercontent.com/u/50282823/Problems%20with%20Economics.pdf

A Brief Exposition of Violations of Scientific Rigor In Current Economic Modeling

Nassim Nicholas Taleb
NYU-Poly

July 2013

This is a brief summary of the problems discussed in philosophical terms in The Black Swan and Antifragile with a more mathematical exposition in Fat Tails and Antifragility (2013). Most of the text was excerpted from the latter book.

Note that this is not a critique of modern economic modeling from outside, but from within, using mathematics to put the methods claimed under scrutiny.

The message is simple: focus on measurable robustness to model error and on convex heuristics, instead of relying on “scientific” measurements and models, for such measurements tend to cause blowups. And we can measure fragility, though not quite statistical risks.
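A minimal sketch of what “measuring fragility” can mean, in the spirit of the convex-heuristics point above (the payoff functions are hypothetical toys, not any specific economic model): stress an exposure at x ± Δ and check whether harm accelerates.

```python
# The payoff functions here are hypothetical toys, not any specific model.

def fragility(f, x, delta):
    """Second-difference heuristic: (f(x+d) + f(x-d)) / 2 - f(x).

    Negative values mean harm accelerates under stress (concave payoff),
    i.e. the exposure is fragile to perturbations in x.
    """
    return (f(x + delta) + f(x - delta)) / 2.0 - f(x)

def leveraged_pnl(shock):
    return -shock ** 2   # losses accelerate with the shock: concave

def linear_pnl(shock):
    return -shock        # losses scale one-for-one: no convexity effect

print(fragility(leveraged_pnl, 0.0, 1.0))  # -1.0 -> fragile
print(fragility(linear_pnl, 0.0, 1.0))     # 0.0 -> no convexity effect
```

The design point is that this requires no probability estimates at all: fragility is read off from the shape of the response to stress, not from a statistical model of the shocks.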