Video: Nassim’s Keynote at the 2015 Fletcher Conference on Managing Political Risk

This video is of Nassim’s keynote address at this year’s Fletcher Conference on Managing Political Risk.

The conference’s website provides a partial transcript of the address, as follows:

Keynote address by Nassim Nicholas Taleb

Nadim Shehadi, moderator

Let’s start with the notion of fat tails. A fat tail is a situation in which a small number of observations creates the largest effect: even when you have a lot of data, the outcome is explained by the smallest number of observations. In finance, almost everything is fat tails. A small number of companies represents most of the sales; in pharmaceuticals, a small number of drugs represents almost all the sales. Under fat tails the law of large numbers works slowly, and the outlier determines outcomes. In wealth, if you sample the top 1% of wealthy people you get half the wealth. In violence, a few conflicts (e.g. World Wars I and II) represent most of the deaths in combat: that is a super fat tail.
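
Taleb’s concentration claim is easy to reproduce. Here is a minimal Python sketch (my illustration, not from the talk; the Pareto tail exponent of 1.2 is an assumed value) comparing how much of the total the top 1% of observations holds under a thin tail versus a fat tail:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed sample (|Gaussian|) vs. fat-tailed sample (Pareto, alpha = 1.2, assumed).
thin = np.abs(rng.normal(size=n))
fat = rng.pareto(1.2, size=n)

def top_share(x, q=0.01):
    """Fraction of the total contributed by the top q of observations."""
    k = int(len(x) * q)
    return np.sort(x)[-k:].sum() / x.sum()

print(f"Gaussian: top 1% holds {top_share(thin):.1%} of the total")
print(f"Pareto:   top 1% holds {top_share(fat):.1%} of the total")
```

Run repeatedly, the Gaussian share stays in the low single digits, while the Pareto share routinely exceeds a third and is itself volatile – the signature of a fat tail.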

So why is the world becoming more and more characterized by fat tails? Because of globalization: there are more “winner takes all” effects. You have fewer crises, but when they happen they are more consequential. And the true mean is not visible through conventional methods.
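
That last remark – the mean not being visible – can be illustrated with another sketch (mine, with assumed parameters): both series below have a true mean of 5, but the fat-tailed running mean is still drifting after a million observations while the Gaussian one settled almost immediately:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# numpy's pareto is the Lomax form; with alpha = 1.2 the mean is 1/(alpha-1) = 5
# and the variance is infinite, so the law of large numbers works very slowly.
samples = {"Gaussian": rng.normal(loc=5.0, size=n),
           "Pareto  ": rng.pareto(1.2, size=n)}

for label, x in samples.items():
    running = np.cumsum(x) / np.arange(1, n + 1)
    checkpoints = {k: round(running[k - 1], 2) for k in (1_000, 100_000, 1_000_000)}
    print(label, checkpoints)  # running estimate of the mean at three sample sizes
```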

Now, moral hazard. Banks like to make money. Under fat tails, the law of large numbers operates slowly. Say you get a bonus for each year you make money. Then in 1982, banks lost more money than they had made in their entire history. Then in 2007–2008, $4.7 trillion were lost. Then bankers wrote letters about how the odds were so low that the event was as much of a surprise to them as it was to you. Any situation in which someone gets the upside without the downside invites risk. People will tell you something is very safe when in fact it is dangerous. Visible profits, invisible losses. People get bonuses on things that are extremely high risk, and then the system collapses.
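
A stylized simulation of that bonus asymmetry (all numbers are mine, invented for illustration): a strategy that earns a little almost every year and rarely blows up has negative expectation, yet most careers never see the blowup, and bonuses are collected on the visible profits either way:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Picking up pennies in front of a steamroller" (stylized, assumed numbers):
# +1 with probability 0.99, -120 with probability 0.01 -> negative expectation.
p_blowup, gain, loss = 0.01, 1.0, -120.0
years = rng.choice([gain, loss], size=(10_000, 40), p=[1 - p_blowup, p_blowup])

final_pnl = years.sum(axis=1)
bonuses = np.where(years > 0, 0.2 * years, 0.0).sum(axis=1)  # paid on up years only

print(f"Expected annual P&L:                   {gain*(1-p_blowup) + loss*p_blowup:+.2f}")
print(f"40-year careers with no blowup at all: {(years.min(axis=1) > 0).mean():.1%}")
print(f"Average bonus collected per career:    {bonuses.mean():.2f}")
print(f"Average final P&L left behind:         {final_pnl.mean():+.2f}")
```

The bonus is always non-negative; the expected P&L is not. That is “visible profits, invisible losses” in four print statements.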

If you have skin in the game at all times, this does not happen. Modernity is a situation in which people get the benefits of an action while the adverse effects do not touch them. You hide risks to improve your year-end job assessment. Bear Stearns never lost money – until they lost money.

Hedge fund managers are forced to eat their own cooking. When the fund loses money, the manager loses his own money: he has skin in the game. You have fools of randomness and crooks of randomness. Driving on a highway, you could go against traffic and kill 30 people – why does that not happen more often? Because the type of person who would do this kills himself along with the others, and so is filtered out of the system. Entrepreneurs who make mistakes are effectively dead if there is a filtering system. Suicide bombers kill themselves, so we can’t talk about them as a real threat to the system. There is a filtering mechanism: people don’t survive high risk. Traders with skin in the game don’t like high risk.

Let’s now talk about fragility. The past does not predict the future. The black swan idea is not about prediction – it is about describing this phenomenon and about building systems that can resist black swan events. We define fragility as not liking disorder. What is disorder? Imagine driving a car into a wall 50 times at 1 mph, and then once at 50 mph: which would hurt you more? Harm accelerates with the size of the shock; that acceleration is fragility. The goal is to be antifragile.
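
The car example is pure convexity of harm. A minimal sketch (the quadratic is my stand-in; the argument only requires that harm be convex in the size of the shock):

```python
# Assumed convex harm function: damage grows with the square of the impact speed.
def harm(speed_mph: float) -> float:
    return speed_mph ** 2

fifty_small = 50 * harm(1)  # fifty collisions at 1 mph
one_big = harm(50)          # one collision at 50 mph

print(fifty_small, one_big)  # 50 vs. 2500: same total speed, 50x the damage
```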

There are two types of portfolios: 1) if there is bad news you lose money; 2) if there is bad news you make money. One doesn’t like disorder; one likes disorder. One is fragile, one is antifragile. Size (the size of a debt, the size of a corporation) makes you more fragile to disorder.
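
A sketch of the two portfolio types (the payoff functions are my stylized stand-ins for short and long convexity, not anything from the talk). Increase the volatility of the news and the fragile book loses more on average while the antifragile one gains more:

```python
import numpy as np

rng = np.random.default_rng(3)

def fragile(x):
    """Short convexity: unaffected by small news, hurt badly by big moves."""
    return -np.maximum(np.abs(x) - 1, 0) * 10

def antifragile(x):
    """Long convexity: unaffected by small news, paid by big moves."""
    return np.maximum(np.abs(x) - 1, 0) * 10

for vol in (0.5, 1.0, 2.0):  # increasing disorder
    news = rng.normal(scale=vol, size=100_000)
    print(f"vol={vol}: fragile {fragile(news).mean():+.3f}, "
          f"antifragile {antifragile(news).mean():+.3f}")
```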


Questions & Answers:

  •  Do ISIS members returning home pose a risk?
    • This is not a risk. Debt is a risk. ISIS makes the newspapers and people talk about it, but the real risk is not ISIS – the real risk is Ebola, because it can spread. And the next Ebola will be worse. So when people ask me to talk about risk: an epidemic is the biggest risk.
  •  Can you discuss some examples in the world that are fragile, examples of the fat tail?
    • The Soviet Union did not collapse because of the regime but because of its size. Similarly, a lot of people don’t fully understand the history of Italy before unification: there was constant, low-grade turmoil. After unification, there were infrequent but deep problems. The real risks facing us today are the things that can harm us and spread uncontrollably.
  •  Should we still think about risks on a country level? How do we think about transnational risks?
    • Cybersecurity – banks spend 5% of their money on it. Netflix engineers failures every day: they pay an army of agents to try to destroy their system, to discover its vulnerabilities. Things that experience constant stress are more stable. In cybersecurity there are a lot of risks, but we’re doing so much to protect against them that we don’t need to worry much. Eventually, though, the cost of controlling these risks might explode.
  •  What is your blind spot?
    • If I knew my blind spots, they wouldn’t be blind spots. I’m developing something to improve stress testing. The good thing about fragility theory is that you can apply it to a lot of things. I want to make narrow improvements, little by little, not try to save the world.
  •  Is statistics useless or are there some redeeming qualities?
    • Any science becomes applied mathematics and if it’s not applied mathematics yet, it is not a science. Stats is used mechanistically. Statisticians need to make risk an application of probability theory. A lot of the people doing this come from the insurance industry.
  •  How does bad data affect your work?
    • When you have a lot of variables but not much data per variable, you are more likely to find spurious correlations. And when you have a lot of data, you are likely to find some stock price that correlates with your blood pressure – that is spurious. More data is not always good.
    • Another problem: if I want to write a paper, I test, test, test something until it fits my expectations – and I won’t reveal to you how many times I tried. If someone does this for a living, for money, then I don’t trust them. (Both effects are sketched in the code after this list.)
  •  This is a great system you’re developing but can it be misused?
    • The problem is in the math and in the ethics.
  •  If we stop using statistics, how can we make decisions? Don’t we have to make assumptions?
    • Have skin in the game. Only use statistics for decisions when the statistics are reliable. Joseph Stiglitz is blocking evolution: he predicted that Fannie Mae would not collapse, it collapsed, and yet he’s still lecturing us on what to do next.
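
As promised above, a sketch of the bad-data answer (sample sizes and thresholds are my stylized choices): with many variables and few observations, the largest correlation among pure noise looks impressive, and retesting noise until something “fits” always eventually succeeds:

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_vars = 20, 500

# Spurious correlation: 500 unrelated noise series, only 20 observations each.
noise = rng.normal(size=(n_obs, n_vars))
corr = np.corrcoef(noise, rowvar=False)
np.fill_diagonal(corr, 0.0)
print(f"Max |correlation| among {n_vars} unrelated series: {np.abs(corr).max():.2f}")

# "Test, test, test": rerun a pure-noise experiment until one passes a t-test.
trials = 0
while True:
    trials += 1
    sample = rng.normal(size=n_obs)
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_obs))
    if abs(t) > 2.09:  # approximate 5% two-sided threshold for n = 20
        break
print(f"A 'significant' result appeared on trial {trials} of pure noise")
```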