Video: Nassim’s Keynote at the 2015 Fletcher Conference on Managing Political Risk

This video is of Nassim’s keynote address at this year’s Fletcher Conference on Managing Political Risk.

The conference’s website provides a partial transcript of the address, as follows:

Keynote address by Nassim Nicholas Taleb

Nadim Shehadi, moderator

Let’s start with the notion of fat tails. A fat tail is a situation in which a small number of observations create the largest effect: you have a lot of data, yet the outcome is explained by the smallest number of observations. In finance, almost everything is fat tails. A small number of companies represent most of the sales; in pharmaceuticals, a small number of drugs represent almost all the sales. Under fat tails the law of large numbers works slowly, and the outlier determines outcomes. In wealth, if you sample the top 1% of wealthy people you get half the wealth. In violence, a few conflicts (e.g. World Wars I and II) represent most of the deaths in combat: that is a super fat tail.
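
To make the concentration effect concrete, here is a minimal simulation sketch (an editorial illustration, not part of the talk). The tail exponent alpha = 1.18 is our illustrative choice: for a pure Pareto distribution, the top fraction q of people holds roughly q**(1 - 1/alpha) of the total, which puts about half the wealth in the top 1%.

```python
import numpy as np

# Minimal sketch (illustrative, not from the talk): draw Pareto-distributed
# "wealth" and measure the share held by the top 1% of people. With tail
# exponent alpha = 1.18, a pure Pareto puts roughly half the total in the
# top 1%; note how the measured share fluctuates run to run -- under fat
# tails, even a million observations converge slowly.
rng = np.random.default_rng(42)
alpha = 1.18
wealth = rng.pareto(alpha, size=1_000_000) + 1.0  # Pareto with minimum 1

wealth.sort()
top = wealth[-len(wealth) // 100:]                # the richest 1% of the sample
print(f"share held by top 1%: {top.sum() / wealth.sum():.1%}")
```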

So why is the world becoming more and more characterized by fat tails? Because of globalization: more “winner takes all” effects. You have fewer crises, but when they happen they are more consequential. And the true mean is not visible by conventional methods.

Now, moral hazard. Banks like to make money. Under fat tails, the law of large numbers operates slowly. Let’s say you get a bonus for each year you make money. Then in 1982, banks lost more money than they had made in their entire history. Then in 2007-2008, $4.7 trillion was lost. Then bankers wrote letters about how the odds were so low that the event was as much of a surprise to them as it was to you. In any situation in which you see the upside without the downside, you are inviting risks. People will tell you something is very safe when in fact it is dangerous. Visible profits, and invisible losses. People are getting bonuses on things that are extremely high risk. And then the system collapses.
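
The bonus asymmetry can be shown with a toy simulation (our construction, with made-up numbers, not anything from the talk): a strategy gains steadily in most years but occasionally blows up, while bonuses are paid on good years and never clawed back.

```python
import numpy as np

# Toy model of "visible profits, invisible losses" -- all numbers are
# hypothetical. The strategy gains steadily most years but blows up with
# small probability; bonuses are paid on up years and never clawed back,
# so the trader comes out ahead even when the firm ends up ruined.
rng = np.random.default_rng(0)
years, p_blowup = 20, 0.05              # 5% chance of a large loss each year
steady_gain, blowup_loss = 10.0, -500.0
bonus_rate = 0.2                        # bonus as a fraction of reported profit

pnl = np.where(rng.random(years) < p_blowup, blowup_loss, steady_gain)
bonus = bonus_rate * np.clip(pnl, 0.0, None)     # paid only on up years

print(f"firm cumulative P&L:     {pnl.sum():8.0f}")
print(f"trader cumulative bonus: {bonus.sum():8.0f}")
```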

If you have skin in the game at all times, this does not happen. Modernity is a situation in which people get the benefits of an action while its adverse effects do not touch them. You hide risks to improve your year-end job assessment. Bear Stearns never lost money – until it did.

Hedge fund managers are forced to eat their own cooking. When the fund loses money, the hedge fund manager loses his own money: he has skin in the game. You have fools of randomness, and crooks of randomness. Driving on a highway, you could go against traffic and kill 30 people – why does that not happen more often? Because the types of people who would do this kill themselves along with others, so they filter themselves out of the system. Entrepreneurs who make mistakes are effectively dead if there is a filtering system. Suicide bombers kill themselves – so we can’t talk about them as a real threat to the system. So there is a filtering mechanism. People don’t survive high risk. If they have skin in the game, traders don’t like high risk.

Let’s now talk about fragility. The past does not predict the future. The black swan idea is not about prediction – it is about describing this phenomenon, and about how to build systems that can resist black swan events. We define fragility as not liking disorder. What is disorder? Imagine driving a car into a wall 50 times at 1 mph, and then once at 50 mph: which would hurt you more? Harm accelerates with the size of the shock – that is fragility. The goal is to be anti-fragile.
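
A minimal numerical version of the car example, assuming (purely for illustration; the talk gives no formula) that harm grows as the square of speed. Any convex harm function makes the same point:

```python
# Assume, purely for illustration, that harm grows as the square of speed.
# Fifty collisions at 1 mph then do far less damage than one at 50 mph --
# the accelerating harm is what fragility means.
def harm(speed_mph: float) -> float:
    return speed_mph ** 2

print(50 * harm(1))   # fifty 1 mph bumps -> 50
print(harm(50))       # one 50 mph crash  -> 2500
```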

There are two types of portfolios: 1) if there is bad news you lose money, 2) if there is bad news you win money. One doesn’t like disorder, one likes disorder. One is fragile, one is anti-fragile. Size (such as size of debt, size of a corporation) makes you more fragile to disorder.
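
A toy rendering of the two portfolio types (our construction, not from the talk): portfolio A is short disorder and loses convexly on a large move; portfolio B is long disorder and gains from the same move.

```python
# Toy payoffs (our construction): A earns a small steady carry but loses
# convexly when a shock hits (fragile); B pays a small steady cost but
# gains convexly from the same shock (anti-fragile).
def fragile_pnl(shock: float) -> float:
    return 1.0 - shock ** 2

def antifragile_pnl(shock: float) -> float:
    return -1.0 + shock ** 2

for shock in (0.0, 1.0, 5.0):           # calm day, bad news, crisis
    print(f"shock={shock:3}: A={fragile_pnl(shock):7.1f}  "
          f"B={antifragile_pnl(shock):7.1f}")
```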

 

Questions & Answers:

  •  Do ISIS fighters returning home pose a risk?
    • This is not a risk. Debt is a risk. ISIS makes the newspapers and people talk about it, but the real risks are not ISIS – the real risk is Ebola, because it can spread. And the next Ebola will be worse. So when people ask me to talk about risk, an epidemic is the biggest risk.
  •  Can you discuss some examples in the world that are fragile, examples of the fat tail?
    • The Soviet Union did not collapse because of the regime but because of its size. Similarly, a lot of people don’t fully understand the history of Italy before unification: there was constant, low-grade turmoil. After unification, there were infrequent but deep problems. The risks facing us today are the things that can really harm us and spread uncontrollably.
  •  Should we still think about risks on a country level? How do we think about transnational risks?
    • Cybersecurity – banks spend 5% of their money on it. Netflix engineers failures every day: they pay an army of agents to try to destroy their system, to discover its vulnerabilities. Things that experience constant stress are more stable. In cybersecurity there are a lot of risks, but we’re doing so much to protect against them that we don’t need to worry much. Eventually, though, the cost of controlling these risks might explode.
  •  What is your blind spot?
    • If I knew my blind spots, they wouldn’t be blind spots. I’m developing something that is improving stress testing. The good thing about fragility theory is you can touch a lot of things. I want to make narrow improvements, little by little, not try to save the world.
  •  Is statistics useless or are there some redeeming qualities?
    • Any science becomes applied mathematics, and if it’s not applied mathematics yet, it is not a science. Stats is used mechanistically. Statisticians need to make risk an application of probability theory. A lot of the people doing this come from the insurance industry.
  •  How does bad data affect your work?
    • When you have a lot of variables but not much data per variable, you are more likely to have spurious correlations. And when you have a lot of data, you are likely to find some stock that correlates with your blood pressure – that’s spurious. More data is not always good.
    • Another problem is that if I want to write a paper, I test, test, test something until it fits my expectations – and I won’t reveal to you how many times I have tried. If there is someone doing this for a living, for money, then I don’t trust them.
  •  This is a great system you’re developing but can it be misused?
    • The problem is in the math and in the ethics.
  •  If we stop using statistics, how can we make decisions? Don’t we have to make assumptions?
    • Have skin in the game. Only use statistics for decisions if the stats are reliable. Joseph Stiglitz is blocking evolution – he made a prediction about Fannie Mae not collapsing, and it collapsed – and yet he’s still lecturing us on what to do next.

 

One-Page Answer to Pinker’s Notion that Violence Has Dropped Since 1945

Nassim just posted this one-page refutation of Steven Pinker’s claim that violence has dropped since 1945. On his Facebook page he says that “journalist-passing-for-scientist” Pinker cites “political science bloggers innocent of fat tails, who seem clueless about the difference between data and information. How to separate anecdote from evidence, sampling error from truth, journalism from science? Well, there is something called a ‘test statistic.’
This also illustrates how to do rigorous statistics in the absence of a textbook recipe for a fat-tailed process, by means of Monte Carlo analyses. I will be teaching a course called ‘Extreme Risk Analytics’ at NYU-Engineering this fall and will have to produce an 80-page lecture notes booklet, which I will write progressively from interaction with the class. SILENT RISK is too advanced, so I need a more introductory book.”

Pixar President Ed Catmull’s Creativity Inc Demonstrates Nassim’s Influence

In a book review called Finally: A Business Memoir That Owes More To Nassim Taleb Than To Jack Welch, David Shaywitz describes similarities he sees between Nassim’s thinking and the thinking of Pixar President Ed Catmull, as revealed in his memoir Creativity Inc. Shaywitz writes:

… in a fashion that seems equally indebted to Montaigne’s On Experience and Taleb’s The Black Swan, Catmull contemplates the challenges of managing in a world where, inevitably, there will be so much that’s hidden, and that you can never see.

This is precisely the question that vexed Taleb as well; as I phrased it in my review, “How do you function in a world where accurate prediction is rarely possible, where history isn’t a reliable guide to the future and where the most important events cannot be anticipated?”

Much like Taleb, Catmull isn’t looking for certitude, and would profoundly (and appropriately) distrust it if he saw it.  But the alternative is finding a way to function and achieve balance – a very dynamic and ever-changing balance – in a world that’s constantly shifting.

For Catmull, encouraging employees to surface and solve problems, and to candidly share critique is both a daily challenge and an existential need, without which creative businesses are destined to fail.

Read the entire book review at Forbes.

Violent Warfare: On the Wane?

The following article by Mark Buchanan was recently published on Medium. It discusses a recent analysis of historical warfare statistics by Nassim and Pasquale Cirillo, which contradicts the popular idea that future violent wars are unlikely:

Violent warfare is on the wane, right?

Many optimists think so. But a close look at the statistics suggests that the idea just doesn’t add up.

A spate of recent and not-so-recent books has suggested that “everything is getting better,” that the world is getting more peaceful, more civilized, and less violent. Some of these claims stand up. In his book The Better Angels of Our Nature, psychologist Steven Pinker made the case that everything from slavery and torture to violent personal crime and cruelty to animals has decreased in modern times. He presented masses of evidence. Such trends, it would certainly seem, are highly unlikely to be reversed.

Pinker also suggested — as have others, including historian Niall Ferguson — that something big has changed about violent warfare since 1945 as well. Here too, the world seems to have become much more peaceful, as if war is becoming a thing of the past. As he wrote,

… wars between great powers and developed nations have fallen to historically unprecedented levels. This empirical fact has been repeatedly noted with astonishment by many military historians and international relations scholars…

Pinker admits that this might be a statistical illusion; perhaps we’ve just experienced a recent lull, and war will resume with its full historical fury sometime soon. But he thinks this is unlikely, for a variety of reasons. These include…

the fact that the drop in the frequency of wars among great powers and developed states has been so sudden and massive (essentially, to zero) as to suggest a qualitative change; that territorial conquest has similarly all but vanished in the planning and outcomes of wars; that the period without major war has also seen sharp reductions in conscription, length of military service, and per-GDP military expenditures; that it has seen declines in every exogenous variable that are statistically predictive of militarized disputes; and that war rhetoric and war planning have disappeared as live options in the political deliberations of developed states in their dealings with one another. None of these observations were post-hoc, offered at the end of a fortuitously long run that was spuriously deemed improbable in retrospect; many were made more than three decades ago, and their prospective assessments have been strengthened by the passage of time.

Again, Pinker has gone to lengths to emphasize that none of this proves anything about future wars. But it is strongly suggestive, he believes, and amounts to significant evidence for such a belief.

Nassim Taleb criticized Pinker’s arguments a few years ago, arguing that Pinker didn’t take proper account of the statistical nature of war as a historical phenomenon, specifically as a time series of events characterized by fat tails. Such processes naturally have long periods of quiescence, which get ripped apart by tumultuous upheavals, and they lure the mind into mistaken interpretations. Pinker responded, clarifying his view; the quotes above come from that response. Pinker acknowledged the logical possibility of Taleb’s view, but suggested that Taleb had “no evidence that is true, or even plausible.”

That has now changed. Just today, Taleb, writing with another mathematician, Pasquale Cirillo, has released a detailed analysis of the statistics of violent warfare going back some 2000 years, with an emphasis on the properties of the tails of the distribution — the likelihood of the most extreme events. I’ve written a short Bloomberg piece on the new paper, and wanted to offer a few more technical details here. The analysis, I think, goes a long way to making clear why we are so ill-prepared to think clearly about processes governed by fat tails, and so prone to falling into interpretive error. Moreover, it strongly suggests that hopes of a future with significantly less war are NOT supported by anything in the recent trend of conflict infrequency. The optimists are fooling themselves.

Anyone can read the paper, so I’ll limit myself to simply summarizing a few of the main points:

  1. Following many historians, Cirillo and Taleb use the number of casualties as a measure of the size of a conflict. Obviously, since the human population has grown with time, larger wars have become possible. So, they sensibly treat the data in a fractional sense — looking at the number of deaths as a fraction of the human population.
  2. For more than 50 years, going back to Lewis Fry Richardson, it’s been known that the cumulative distribution of wars by size follows a rough power law: the number of events larger than size S is proportional to 1/S raised to an exponent α. This is an approximation, of course, because there is an absolute maximum possible size for a conflict — it can’t be more than the entire population. Hence, the power law form can only hold over a certain range. To take this into account, Cirillo and Taleb also rescale the data to account for the finite size of the human population.
  3. Having done this, they find using various statistical methods that α falls within the range 0.4 to 0.7. For maximum sensitivity in the statistical tests, they derive this by focusing mostly on the largest wars over the 2000 years, those equivalent (in today’s numbers) to at least 50,000 casualties.
  4. A note on the value of α — it is smaller than the exponent known to hold for either earthquakes or financial market fluctuations. This implies that the statistics of wars are even more prone to large fluctuations than these other processes, which are of course highly erratic themselves.
  5. It also implies that the sample mean over any period is NOT a very useful statistic for estimating the TRUE mean of the underlying statistical process. For example, it turns out that, for a process following this statistical pattern, one should expect fully 96% of all observations to fall below the true mean of the process. This brings home just how non-Gaussian this process is. We’re used to thinking that, if we observe instances from some random process, we ought to (very crudely) see events about half above and half below the mean. Instead, in this process, one should expect that almost all observations will be below, and even far below, the actual mean (see the simulation sketch just after this list). We almost always see fewer wars than we, in a sense, should. The process is set up almost perfectly to make an observer complacent about the possibility of large events.
  6. Related to the above, it also turns out to be >90% certain that the true mean of the process is higher than the observed mean. What we have seen in the record of wars over the past 70 years, for example, almost certainly offers an underestimate of the true likelihood of wars. The statistical process makes rare but large events so likely that looking forward on the basis of recent past observations is a recipe for unwarranted optimism. In actual numbers, Cirillo and Taleb find that the true expected mean — say, the number of deaths we should expect over the next half century — is actually about three times higher than what we’ve seen in the past.
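
To see points 5 and 6 in action, here is a small Monte Carlo sketch (ours, not from the paper). It uses a plain Pareto distribution with tail exponent α = 1.05 as an illustrative stand-in for the rescaled war-size distribution; the paper’s actual fits are more elaborate.

```python
import numpy as np

# Illustrative Monte Carlo for points 5 and 6: under a fat tail, almost
# all observations sit below the true mean. A plain Pareto with minimum 1
# and tail exponent alpha = 1.05 serves as a stand-in; its exact mean is
# alpha / (alpha - 1) = 21.
rng = np.random.default_rng(1)
alpha = 1.05
true_mean = alpha / (alpha - 1)

samples = rng.pareto(alpha, size=1_000_000) + 1.0
print(f"true mean:                {true_mean:.1f}")
print(f"fraction below true mean: {np.mean(samples < true_mean):.1%}")  # ~96%
print(f"sample mean of this draw: {samples.mean():.1f}")  # unstable, usually low
```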

There are quite a few other gems in the analysis, but these seem to me to be the most important.

One final thing, and maybe this is most important. Cirillo and Taleb make a strong argument that the quantity that one should study and try to estimate from the statistics is the tail exponent α (see point 5 above). This is certainly not easy to estimate, and it takes a lot of data to get even a crude estimate, but working with α is a much better way of getting at the true mean of the process than working with the sample mean over various periods. Looking at past events, and estimating the average number over any period, is simply a bad way to go about thinking about any process of this kind. The sample mean is NOT a mathematically sound estimate of the true mean of the process. For more on this, see Taleb’s comments at the top of the 3rd page of his earlier criticism of Pinker’s argument.
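
As a concrete illustration of “estimate the tail, not the sample mean,” here is a minimal sketch using the standard Hill estimator. Cirillo and Taleb’s actual procedure (rescaling, extreme value fits) is more elaborate, and the true α of 0.55 below is simply a value chosen inside their reported 0.4 to 0.7 range.

```python
import numpy as np

# Minimal sketch: estimate the tail exponent alpha from the k largest
# observations with the standard Hill estimator, instead of trusting the
# sample mean. Only an illustration of the approach, not the paper's method.
def hill_alpha(data: np.ndarray, k: int) -> float:
    tail = np.sort(data)[-k:]          # the k largest observations
    logs = np.log(tail / tail[0])      # log-exceedances over the threshold
    return 1.0 / logs[1:].mean()       # Hill estimate of alpha

rng = np.random.default_rng(7)
sample = rng.pareto(0.55, size=100_000) + 1.0   # true alpha = 0.55
print(f"Hill estimate of alpha: {hill_alpha(sample, k=2000):.2f}")
```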

And that, I think, is pretty good reason to believe that all talk of the dwindling likelihood of wars based on recent past experience is mostly based on illusion and people telling themselves convincing but probably unfounded stories. Sure it looks as if things are getting more peaceful. But, looking at the mathematics, that’s exactly what we should expect to see, even if we’re most likely due for a much more violent future.