Nassim joins Jean-Philippe Bouchaud, founder and chairman of Capital Fund Management, in a debate at the 2015 Chaire PARI Conference.
This video is of Nassim’s keynote address at this year’s Fletcher Conference on Managing Political Risk.
The conference’s website provides the text of some of the address, as follows:
Keynote address by Nassim Nicholas Taleb
Nadim Shehadi, moderator
Let’s start with the notion of fat tails. A fat tail is a situation in which a small number of observations creates the largest effect: even when you have a lot of data, the outcome is explained by the smallest number of observations. In finance, almost everything is fat tails. A small number of companies represents most of the sales; in pharmaceuticals, a small number of drugs represents almost all the sales. Under fat tails the law of large numbers operates slowly: the outlier determines outcomes. In wealth, if you sample the top 1% of wealthy people you get half the wealth. In violence, a few conflicts (e.g. World Wars I and II) represent most of the deaths in combat: that is a super fat tail.
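The concentration Taleb describes can be illustrated numerically. The sketch below (my illustration, not from the talk; the distributions and parameters are assumptions) compares the share of the total held by the top 1% of observations under a fat-tailed Pareto distribution versus a thin-tailed Gaussian one:

```python
import random

random.seed(42)
N = 100_000

# Fat-tailed: Pareto with tail index ~1.16 (the classic "80/20" regime).
pareto = sorted(random.paretovariate(1.16) for _ in range(N))
# Thin-tailed: Gaussian shifted to stay positive.
gauss = sorted(abs(random.gauss(10, 1)) for _ in range(N))

def top_share(xs, frac=0.01):
    """Share of the total held by the top `frac` of (sorted) observations."""
    k = int(len(xs) * frac)
    return sum(xs[-k:]) / sum(xs)

print(f"Pareto top 1% share:   {top_share(pareto):.2f}")
print(f"Gaussian top 1% share: {top_share(gauss):.2f}")
```

With these assumed parameters the Pareto top 1% holds on the order of half the total, echoing the wealth example, while the Gaussian top 1% holds only slightly more than 1%.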
So why is the world becoming more and more characterized by fat tails? Because of globalization. More “winner takes all” effects. You have fewer crises, but when they happen they are more consequential. And the mean is not visible by conventional methods.
Now, moral hazard. Banks like to make money. Under fat tails, the law of large numbers operates slowly. Let’s say you get a bonus for each year you make money. Then in 1982, banks lost more money than they had made in their entire history. Then in 2007-2008, $4.7 trillion were lost. Then bankers wrote letters about how the odds were so low that the event was as much of a surprise to them as it was to you. Any situation in which someone gets the upside without the downside invites risk. People will tell you something is very safe, when in fact it is dangerous. Visible profits, invisible losses. People are getting bonuses on things that are extremely high risk. And then the system collapses.
If you have skin in the game at all times, this does not happen. Modernity: a situation in which people get benefits from the action, but the adverse effects do not touch them. You hide risks to improve your year end job assessment. Bear Stearns never lost money – until they lost money.
Hedge fund managers are forced to eat their own cooking. When the fund loses money, the hedge fund manager loses his own money: he has skin in the game. You have fools of randomness, and crooks of randomness. Driving on a highway, you could go against traffic and kill 30 people – why does that not happen more often? Because the types of people who would do this kill themselves along with others, so they filter themselves out of the system. Entrepreneurs who make mistakes are effectively dead if there is a filtering system. Suicide bombers kill themselves – so we can’t talk about them as a real threat to the system. So there is a filtering mechanism. People don’t survive high risk. If they have skin in the game, traders don’t like high risk.
Let’s now talk about fragility. The past does not predict the future. The black swan idea is not to predict – it is to describe this phenomenon, and how to build systems that can resist black swan events. We define fragility as something that does not like disorder. What is disorder? Imagine driving a car into a wall 50 times at 1 mph, and then once at 50 mph: which would hurt you more? Harm accelerates with the size of the shock – that is fragility. The goal is to be anti-fragile.
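The car example can be put in arithmetic. In the toy model below (an assumption for illustration, not a physical claim), harm grows with the square of speed, so fifty gentle impacts do far less damage than one impact at fifty times the speed:

```python
def harm(speed_mph):
    # Convex harm: doubling the shock more than doubles the damage.
    return speed_mph ** 2

fifty_small = sum(harm(1) for _ in range(50))  # 50 crashes at 1 mph
one_big = harm(50)                             # 1 crash at 50 mph

print(fifty_small)  # 50
print(one_big)      # 2500
```

Same total speed absorbed, fifty-fold difference in harm: that nonlinearity is what "does not like disorder" means.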
There are two types of portfolios: 1) if there is bad news you lose money, 2) if there is bad news you win money. One doesn’t like disorder, one likes disorder. One is fragile, one is anti-fragile. Size (such as size of debt, size of a corporation) makes you more fragile to disorder.
Questions & Answers:
- Do the people of ISIS returning home pose a risk?
- This is not a risk. Debt is a risk. ISIS makes the newspapers and people talk about it, but the real risks are not ISIS – the real risk is Ebola, because it can spread. And the next Ebola will be worse. So when people ask me to talk about risk, an epidemic is the biggest risk.
- Can you discuss some examples in the world that are fragile, examples of the fat tail?
- The Soviet Union did not collapse because of the regime but because of its size. Similarly, a lot of people don’t fully understand the history of Italy before unification. There was constant, low-grade turmoil. After unification, there were infrequent but deep problems. The risks facing us today are the real things that can harm us and spread uncontrollably.
- Should we still think about risks on a country level? How do we think about transnational risks?
- Cybersecurity – banks spend 5% of their money on it. Netflix engineers failures every day. They pay an army of agents to try to destroy their system, to discover their vulnerabilities. Things that experience constant stress are more stable. In cybersecurity, there are a lot of risks, but we’re doing so much to protect against it that we don’t need to worry much. But eventually the cost of controlling these risks might explode.
- What is your blind spot?
- If I knew my blind spots, they wouldn’t be blind spots. I’m developing something that is improving stress testing. The good thing about fragility theory is you can touch a lot of things. I want to make narrow improvements, little by little, not try to save the world.
- Is statistics useless or are there some redeeming qualities?
- Any science becomes applied mathematics and if it’s not applied mathematics yet, it is not a science. Stats is used mechanistically. Statisticians need to make risk an application of probability theory. A lot of the people doing this come from the insurance industry.
- How does bad data affect your work?
- When you have a lot of variables, but not much data per variable, you are more likely to have spurious correlations. And when you have a lot of data, you are likely to find a stock figure that correlates with your blood pressure – that’s spurious. More data is not always good.
- Another problem is that if I want to write a paper, I test, test, test something until it fits my expectations – and I won’t reveal to you how many times I have tried. If there is someone doing this for a living, for money, then I don’t trust them.
- This is a great system you’re developing but can it be misused?
- The problem is in the math and in the ethics.
- If we stop using statistics, how can we make decisions? Don’t we have to make assumptions?
- Have skin in the game. Only use statistics for decisions if the stats are reliable. Joseph Stiglitz is blocking evolution – he made a prediction about Fannie Mae not collapsing, and it collapsed – and yet he’s still lecturing us on what to do next.
The Bloomberg Businessweek website has a feature piece on Robert Rubin; in the article, Nassim is interviewed and shares his view on President Clinton’s former Treasury Secretary and former Citigroup executive.
“Nobody on this planet represents more vividly the scam of the banking industry,” says Nassim Nicholas Taleb, author of The Black Swan. “He made $120 million from Citibank, which was technically insolvent. And now we, the taxpayers, are paying for it.”
Nassim Nicholas Taleb doesn’t know Rubin personally. He admits that his antipathy, like that of so many Rubin critics, is fueled by symbolism. “He represents everything that’s bad in America,” he says. “The evil in one person represented. When we write the history, he will be seen as the John Gotti of our era. He’s the Teflon Don of Wall Street.” Taleb wants systemic change to prevent what he terms the “Bob Rubin Problem”—the commingling of Wall Street interests and the public trust—“so people like him don’t exist.”
Nassim Taleb has released a paper with the IMF: A New Heuristic Measure of Fragility and Tail Risks: Application to Stress Testing
From Business Insider: NASSIM TALEB: The Fed Is Looking At The Banking System All Wrong
Nassim Taleb has long been a critic of traditional forecasting methods like the ones underlying these stress tests. He even coined a now oft-repeated term to capture his criticism – “black swan” – which also became the title of his huge New York Times bestseller.
Now, he warns that “fragility is especially high for the banks with the worst outcomes” according to a new metric he’s developed to better analyze the risks facing the banks.
In a new white paper with researchers at the IMF, Taleb explains the reason why all of the stress tests conducted by central banks and international financial institutions like the Federal Reserve, the ECB, and the IMF come up short:
First, many stress tests focus on the point estimates of very few scenarios, and often pay little attention to how the impact would change in case of different scenarios, e.g., a slightly more severe one. Second, if stress tests do not take into account the possibility of model and parameter error, it can be misleading to rely only on the point estimates of even well-designed stress tests. Without considering the potential for these errors, one could miss the convexities/non-linearities that can lead to serious financial fragilities.
A better approach, according to Taleb and his IMF co-authors Elie Canetti, Tidiane Kinda, Elena Loukoianova, and Christian Schmeider, is to measure the difference between outcomes arising from different scenarios instead of focusing on the estimates of potential losses themselves.
According to Taleb, this is the real way to measure the “fragility” of a bank or a country in the event of a negative economic shock. Because point estimates are so prone to errors from faulty model assumptions, measuring the distance between them to detect how quickly losses pile up as the economic shock gets larger becomes a vastly more reliable measure of risk.
In other words, it’s not the size of the losses themselves that is important. Instead, it’s the rate of change of potential losses as the economic situation deteriorates that determines how fragile a bank is, by Taleb’s standards.
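A simplified sketch of the convexity idea (not the paper’s exact heuristic; the loss functions and scenario values below are illustrative assumptions): instead of reading off the loss at a single stress scenario, compare losses at nearby scenarios. If losses accelerate as the shock worsens, the average of the perturbed losses exceeds the baseline loss, flagging fragility.

```python
def fragility(loss, shock, delta):
    """Second-difference detector: > 0 when losses accelerate with the shock."""
    return (loss(shock - delta) + loss(shock + delta)) / 2 - loss(shock)

# Two hypothetical banks with the same loss at the baseline scenario (s = 5):
linear_bank = lambda s: 10 * s       # losses grow in step with the shock
convex_bank = lambda s: 2 * s ** 2   # losses accelerate as the shock worsens

print(fragility(linear_bank, shock=5, delta=1))  # 0.0 -> not fragile
print(fragility(convex_bank, shock=5, delta=1))  # 2.0 -> fragile
```

Both banks report the same point estimate of losses at the baseline scenario (50), yet only the convex one is fragile – which is exactly why, in Taleb’s argument, point estimates alone are misleading.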