The COVID-19 pandemic has been a sobering reminder of the extensive damage brought about by epidemics, phenomena that play a vivid role in our collective memory, and that have long been identified as significant sources of risk for humanity. The use of increasingly sophisticated mathematical and computational models for the spreading and the implications of epidemics should, in principle, provide policy- and decision-makers with a greater situational awareness regarding their potential risk. Yet most of those models ignore the tail risk of contagious diseases, use point forecasts, and the reliability of their parameters is rarely questioned and incorporated in the projections. We argue that a natural and empirically correct framework for assessing (and managing) the real risk of pandemics is provided by extreme value theory (EVT), an approach that has historically been developed to treat phenomena in which extremes (maxima or minima) and not averages play the role of the protagonist, being the fundamental source of risk. By analysing data for pandemic outbreaks spanning over the past 2500 years, we show that the related distribution of fatalities is strongly fat-tailed, suggesting a tail risk that is unfortunately largely ignored in common epidemiological models. We use a dual distribution method, combined with EVT, to extract information from the data that is not immediately available to inspection. To check the robustness of our conclusions, we stress our data to account for the imprecision in historical reporting. We argue that our findings have significant implications, including on the extent to which compartmental epidemiological models and similar approaches can be relied upon for making policy decisions.
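The abstract's central claim, that pandemic fatalities follow a strongly fat-tailed distribution, can be illustrated with a classic EVT tool: the Hill estimator of the tail index. This is only a minimal sketch on synthetic Pareto data, not the paper's actual dual-distribution method, and the sample and parameters are invented for illustration.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha, using the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # sort descending
    log_excesses = np.log(x[:k]) - np.log(x[k])     # log-ratios to the k-th order statistic
    return 1.0 / log_excesses.mean()

rng = np.random.default_rng(42)
alpha_true = 2.0
# Synthetic Pareto sample (minimum 1, tail index 2): variance is already infinite here
sample = 1.0 + rng.pareto(alpha_true, size=10_000)

alpha_hat = hill_estimator(sample, k=500)
print(f"estimated tail index: {alpha_hat:.2f}")  # should come out close to the true value of 2
```

A low estimated tail index (roughly, below 2) signals infinite variance, which is exactly the regime in which point forecasts and sample averages become unreliable.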
Third conversation between Nassim Nicholas Taleb & Yaneer Bar-Yam about uncertainty, certainty, and what to do when there is a systemic risk (and what not to do when a truck is headed your way), and how acting early would have cost less. They also discuss:
John Ioannidis's recent post “we are making decisions without reliable data”
Why we must still make decisions without reliable data, and use the precautionary principle
How the costs would have been much smaller had we acted earlier.
Nassim cites Gabriel Mathy’s paper, Trade, Exchange Rate Regimes and Output Co-Movement: Evidence from the Great Depression, to make a point about why economists and academics don’t really understand fat tails.
Let’s start with the notion of fat tails. A fat tail is a situation in which a small number of observations creates the largest effect: even when you have a lot of data, the outcome is explained by the smallest number of observations. In finance, almost everything is fat-tailed. A small number of companies represents most of the sales; in pharmaceuticals, a small number of drugs represents almost all the sales. Under fat tails the law of large numbers works slowly, and the outlier determines outcomes. In wealth, if you sample the top 1% of wealthy people you get half the wealth. In violence, a few conflicts (e.g. World Wars I and II) represent most of the deaths in combat: that is a super fat tail.
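The "outlier determines outcomes" point can be made concrete with a small simulation, under assumed toy distributions: compare what share of the total comes from the single largest observation in a thin-tailed (Gaussian) versus a fat-tailed (Pareto) sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed: absolute values of a Gaussian. Fat-tailed: Pareto with tail index 1.1.
gaussian = np.abs(rng.normal(size=n))
pareto = 1.0 + rng.pareto(1.1, size=n)

share_gaussian = gaussian.max() / gaussian.sum()
share_pareto = pareto.max() / pareto.sum()

# Under the Gaussian, the largest of 100,000 observations is a negligible sliver of
# the total; under the fat-tailed Pareto, a single observation carries a visible
# fraction of the entire sum.
print(f"largest observation's share of total, gaussian: {share_gaussian:.5%}")
print(f"largest observation's share of total, pareto:   {share_pareto:.2%}")
```

This is the sense in which "the event is explained by the smallest number of observations": in the fat-tailed sample, one data point out of 100,000 matters at the scale of the whole.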
So why is the world becoming more and more characterized by fat tails? Because of globalization: more “winner takes all” effects. You have fewer crises, but when they happen they are more consequential. And the true mean is not visible to conventional methods.
Now, moral hazard. Banks like to make money. Under fat tails, the law of large numbers operates slowly. Say you get a bonus for each year you make money. Then in 1982, banks lost more money than they had made in their entire history. Then in 2007-2008, $4.7 trillion was lost. Then bankers wrote letters about how the odds were so low that the event was as much of a surprise to them as it was to you. Any situation in which you see the upside without the downside is an invitation to risk. People will tell you something is very safe when in fact it is dangerous: visible profits, invisible losses. People get bonuses on things that are extremely high risk, and then the system collapses.
If you have skin in the game at all times, this does not happen. Modernity is a situation in which people get the benefits of an action while its adverse effects do not touch them. You hide risks to improve your year-end job assessment. Bear Stearns never lost money, until it lost money.
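The bonus asymmetry described above can be sketched as a toy simulation. All the numbers here (win probability, blow-up size, bonus fraction) are invented for illustration; the point is only the sign of the two totals: a strategy with negative expected P&L still pays its trader handsomely when bonuses are collected on good years and losses are never clawed back.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 10_000

# Hypothetical strategy: gain +1 in 98% of years, lose 100 in the other 2%.
# Expected P&L per year: 0.98 * 1 + 0.02 * (-100) = -1.02  (negative!)
pnl = np.where(rng.random(years) < 0.02, -100.0, 1.0)

# Trader keeps 10% of profits in winning years; losing years cost the trader nothing.
bonuses = 0.10 * np.clip(pnl, 0.0, None)

print(f"firm cumulative P&L:       {pnl.sum():.0f}")      # deeply negative
print(f"trader cumulative bonuses: {bonuses.sum():.0f}")  # comfortably positive
```

Because the blow-ups are rare, a trader can collect bonuses for many years before the fat-tailed loss arrives: visible profits, invisible losses.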
Hedge fund managers are forced to eat their own cooking: when the fund loses money, the manager loses his own money, so he has skin in the game. You have fools of randomness and crooks of randomness. Driving on a highway, you could go against traffic and kill 30 people: why does that not happen more often? Because the kind of person who would do this kills himself along with the others, and so filters himself out of the system. Entrepreneurs who make mistakes are likewise removed, if there is a filtering system. Suicide bombers kill themselves, so they cannot be a recurring threat to the system. There is a filtering mechanism: people who take on extreme risk do not survive. If they have skin in the game, traders do not like high risk.
Let’s now talk about fragility. The past does not predict the future. The black swan idea is not about prediction: it is about describing this phenomenon, and about building systems that can resist black swan events. We define fragility as something that does not like disorder. What is disorder? Imagine driving a car into a wall 50 times at 1 mph, and then once at 50 mph: which would hurt you more? Harm accelerates with the size of the shock; that acceleration is fragility. The goal is to be anti-fragile.
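The car-and-wall example amounts to saying that harm is convex in the size of the shock. A minimal sketch, assuming for illustration a quadratic harm function (the quadratic form is my assumption, not a physical model from the talk):

```python
def harm(speed_mph):
    """Toy convex harm function: damage grows with the square of impact speed."""
    return speed_mph ** 2

# Fifty small shocks versus one large shock with the same total "dose" of speed.
fifty_small = 50 * harm(1)   # 50 crashes at 1 mph  -> total harm 50
one_large = harm(50)         # 1 crash at 50 mph    -> harm 2500

print(one_large / fifty_small)  # 50.0: the single large shock is 50x more harmful
```

Any convex harm function gives the same qualitative result: concentrating the same total stress into one large event is far more damaging than spreading it over many small ones, which is why size and concentration create fragility.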
There are two types of portfolios: 1) if there is bad news, you lose money; 2) if there is bad news, you win money. One does not like disorder; one likes disorder. One is fragile; one is anti-fragile. Size (such as the size of your debt, or the size of a corporation) makes you more fragile to disorder.
Questions & Answers:
Do the people of ISIS returning home pose a risk?
This is not a risk. Debt is a risk. ISIS makes the newspapers and people talk about it, but the real risk is not ISIS: the real risk is Ebola, because it can spread. And the next Ebola will be worse. So when people ask me to talk about risk, an epidemic is the biggest risk.
Can you discuss some examples in the world that are fragile, examples of the fat tail?
The Soviet Union did not collapse because of the regime but because of its size. Similarly, a lot of people don’t fully understand the history of Italy before unification: there was constant, low-grade turmoil. After unification, there were infrequent but deep problems. The risks facing us today are the things that can harm us and spread uncontrollably.
Should we still think about risks on a country level? How do we think about transnational risks?
Cybersecurity: banks spend about 5% of their budgets on it. Netflix engineers failures every day: it pays an army of agents to try to destroy its systems, to discover their vulnerabilities. Things that experience constant stress are more stable. There are many risks in cybersecurity, but we are doing so much to protect against them that we do not need to worry much. Eventually, though, the cost of controlling these risks might explode.
What is your blind spot?
If I knew my blind spots, they wouldn’t be blind spots. I’m developing something that improves stress testing. The good thing about fragility theory is that it touches a lot of domains. I want to make narrow improvements, little by little, not try to save the world.
Is statistics useless or are there some redeeming qualities?
Any science eventually becomes applied mathematics; if it is not applied mathematics yet, it is not a science. Statistics is too often used mechanistically. Statisticians need to make risk an application of probability theory. Many of the people doing this well come from the insurance industry.
How does bad data affect your work?
When you have a lot of variables but not much data per variable, you are more likely to find spurious correlations. And when you have a lot of data, you are likely to find some stock price that correlates with your blood pressure: that correlation is spurious. More data is not always good.
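The many-variables problem is easy to demonstrate with pure noise. In this sketch (sample sizes and counts are arbitrary choices of mine), we correlate one random target with batches of random predictors: with enough predictors, a large spurious correlation appears essentially for free.

```python
import numpy as np

rng = np.random.default_rng(123)
n_obs = 30  # few observations per variable

target = rng.normal(size=n_obs)  # the quantity we pretend to "explain"

def max_abs_correlation(n_vars):
    """Largest |sample correlation| between the target and n_vars pure-noise variables."""
    noise = rng.normal(size=(n_vars, n_obs))
    return max(abs(np.corrcoef(target, row)[0, 1]) for row in noise)

few = max_abs_correlation(5)
many = max_abs_correlation(1000)

# Every variable here is independent noise, yet screening 1000 of them
# reliably turns up a "strong" correlation with the target.
print(f"max |corr| over 5 noise variables:    {few:.2f}")
print(f"max |corr| over 1000 noise variables: {many:.2f}")
```

This is also why the test-test-test problem described in the next paragraph is so corrosive: each additional attempt is another draw from the noise, and only the best one gets reported.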
Another problem: if I want to write a paper, I can test, test, test something until it fits my expectations, and never reveal how many times I tried. If someone does this for a living, for money, then I don’t trust them.
This is a great system you’re developing but can it be misused?
The problem is in the math and in the ethics.
If we stop using statistics, how can we make decisions? Don’t we have to make assumptions?
Have skin in the game. Only use statistics for decisions when the statistics are reliable. Joseph Stiglitz is blocking evolution: he predicted that Fannie Mae would not collapse, it collapsed, and yet he is still lecturing us on what to do next.