#### Risk

On Medium, Nassim posts a continuation of his previous article on the Minority Rule:

Let us take the idea of the last chapter [the intransigent minority’s disproportional influence] one step further, get a bit more technical, and generalize. It will debunk some of the fallacies we hear in psychology, “evolutionary theory”, game theory, behavioral economics, neuroscience, and similar fields not subjected to proper logical (and mathematical) rigor, in spite of the occasional semi-complicated equations. For instance, we will see why behavioral economics will necessarily fail us even if its results were true at the individual level, and why the use of brain science to explain behavior has been no more than great marketing for scientific papers.

Consider the following as a rule. Whenever you have nonlinearity, the average doesn’t matter anymore. Hence:

The more nonlinearity in the response, the less informational the average.

For instance, your benefit from drinking water would be linear if ten glasses of water were ten times as good as one single glass. If that is not the case, then necessarily the average water consumption matters less than something else that we will call “unevenness”, or volatility, or inequality in consumption. Say your average daily consumption needs to be one liter a day and I gave you ten liters one day and none for the remaining nine days, for an average of one liter a day. Odds are you won’t survive. You want your quantity of water to be as evenly distributed as possible. Within the day, you do not need to consume the same amount of water every minute, but at the scale of the day, you want maximal evenness.

The effect of the nonlinearity in the response on the average – and the informational value of such an average – is something I’ve explained in some depth in Antifragile, as it was the theme of the book, so I will just assume a summary here is sufficient. From an informational standpoint, someone who tells you “We will supply you with one liter of water per day on average” is not conveying much information at all; there needs to be a second dimension, the variations around such an average. You are quite certain that you will die of thirst if his average comes from a cluster of a hundred liters every hundred days.
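The water story can be made concrete with a short sketch. The survival threshold below is a made-up illustrative parameter, not a physiological claim from the text:

```python
# Sketch of the water example: a hypothetical threshold response where
# survival requires clearing a minimum amount of water every day.

def survives(daily_liters, minimum_per_day=0.5):
    """Stylized nonlinear response: every single day must clear the minimum."""
    return all(amount >= minimum_per_day for amount in daily_liters)

even = [1.0] * 10           # one liter a day, evenly distributed
lumpy = [10.0] + [0.0] * 9  # ten liters on day one, then nothing

# Both schedules have exactly the same average (one liter a day)...
assert sum(even) / len(even) == sum(lumpy) / len(lumpy) == 1.0

# ...but only the even schedule clears the nonlinear (threshold) response.
print(survives(even))   # True
print(survives(lumpy))  # False
```

The average alone cannot distinguish the two schedules; only the second dimension, the distribution around the average, does.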

Note that an average and a sum are mathematically the same thing up to a simple division by a constant, so the fallacy of the average translates into the fallacy of summing, or aggregating, or inferring the properties of a collective with many components from the properties of a single unit.

As we saw, complex systems are characterized by the interactions between their components, and the resulting properties of the ensemble not (easily) seen from the parts.

There is a rich apparatus to study interactions originating from what is called the Ising problem, after the physicist Ernst Ising, originally in the ferromagnetic domain, but it has been adapted to many other areas. The model consists of discrete variables representing atoms that can be in one of two states called “spins”, nicknamed “up” and “down” (or dealt with as +1 and −1). The atoms are arranged in a lattice, allowing each unit to interact with its neighbors. In low dimensions, that is, on a line (one dimensional), where every atom interacts with two neighbors, one to its left and one to its right, or on a grid (two dimensional), the Ising model is simple and lends itself to simple solutions.
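The one-dimensional setup can be sketched in a few lines. The coupling J and the temperature below are illustrative choices, and the update rule is the standard Metropolis step, not a method from the text:

```python
import math
import random

# A minimal 1D Ising sketch: spins of +1 or -1 on a ring, each
# interacting only with its two nearest neighbors.

def energy(spins, J=1.0):
    """Total nearest-neighbor interaction energy on a periodic lattice."""
    n = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def metropolis_step(spins, J=1.0, temperature=2.0):
    """Flip one spin, accepting the flip with the Metropolis probability."""
    i = random.randrange(len(spins))
    n = len(spins)
    # The energy change from flipping spin i depends only on its neighbors.
    delta = 2.0 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        spins[i] = -spins[i]

spins = [random.choice([-1, 1]) for _ in range(20)]
for _ in range(2000):
    metropolis_step(spins)
print(energy(spins))
```

The point of the low-dimensional case: each unit's energy change involves only two neighbors, which is what keeps the model tractable.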

One method in such situations, called “mean field”, is to generalize from the “mean”, that is, the average interaction, and apply it to the ensemble. This is possible if and only if there is no dependence between one interaction and another – the procedure appears to be the opposite of renormalization from the last chapter. And, of course, this type of averaging is not possible if there are nonlinearities in the effect of the interactions.

More generally, the Übererror is to apply the “mean field” technique, by looking at the average and applying a function to it, instead of averaging the functions –a violation of Jensen’s inequality [Jensen’s Inequality, definition: a function of an average is not an average of a function, and the difference increases with disorder]. Distortions from mean field techniques will necessarily occur in the presence of nonlinearities.
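A quick numeric check makes the Übererror concrete. The sample values and the convex function f(x) = x² below are illustrative assumptions:

```python
# Jensen's inequality in one example: the "mean field" move applies the
# function to the average; the correct aggregate averages the function
# over the individual values. For a convex f the two disagree, and the
# gap grows with dispersion.

xs = [1.0, 2.0, 3.0, 10.0]  # an uneven ("volatile") sample

mean = sum(xs) / len(xs)                        # 4.0
f_of_mean = mean ** 2                           # function of the average
mean_of_f = sum(x ** 2 for x in xs) / len(xs)   # average of the function

print(f_of_mean)  # 16.0
print(mean_of_f)  # 28.5
```

Replace the sample with an even one of the same mean, `[4.0, 4.0, 4.0, 4.0]`, and the gap vanishes: the mean field shortcut is exact only when there is no unevenness.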

What I am saying may appear to be complicated here –but it was not so with the story of the average water consumption. So let us produce equivalent simplifications across things that do not average.

From the last chapter [Minority Rule],

The average dietary preferences of the population will not allow us to understand the dietary preferences of the whole.

Some scientist observing the absence of peanuts in U.S. schools would infer that the average student is allergic to peanuts when only a very small percentage are so.

Or, more bothersome:

The average behavior of the market participant will not allow us to understand the general behavior of the market.

These points appear clear thanks to our discussion about renormalization. They may cancel some of the stuff you know. But to show how, under complexity, the entire field of social science may fall apart, take it one step further:

The psychological experiments on individuals showing “biases” do not allow us to understand aggregates or collective behavior, nor do they enlighten us about the behavior of groups.

Human nature is not defined outside of transactions involving other humans. Remember that we do not live alone, but in packs and almost nothing of relevance concerns a person in isolation –which is what is typically done in laboratory-style work.

Some “biases” deemed “irrational” by psycholophasters interested in pathologizing humans are not necessarily so if you look at their effect on the collective.

What I just said explains the failure of the so-called field of behavioral economics to give us any more information than orthodox economics (itself rather poor) on how to play the market or understand the economy, or generate policy.

But, going further, there is this thing called, or as Fat Tony would say, this ting called game theory that hasn’t done much for us other than produce loads of verbiage. Why?

The average interaction as studied in game theory insofar as it reveals individual behavior does not allow us to generalize across preferences and behavior of groups.

Groups are units on their own. There are qualitative differences between a group of ten and a group of, say, 395,435. Each is a different animal, in the literal sense, as different as a book is from an office building. When we focus on commonalities, we get confused, but, at a certain scale, things become different. Mathematically different. The higher the dimension, in other words the number of possible interactions, the more difficult it is to understand the macro from the micro, the general from the units.

Or, in spite of the huge excitement about our ability to see into the brain using the so-called field of neuroscience:

Understanding how the subparts of the brain (say, neurons) work will never allow us to understand how the brain works.

So far we have no f***g idea how the brain of the worm C. elegans works, which has around three hundred neurons. C. elegans was the first living unit to have its genome sequenced. Now consider that the human brain has about one hundred billion neurons, and that going from 300 to 301 neurons may double the complexity. [I have actually found situations where a single additional dimension may more than double some aspect of the complexity, say going from 1,000 to 1,001 may cause complexity to be multiplied by a billion times.] So the use of never here is appropriate. And if you want to understand why, in spite of the trumpeted “advances” in sequencing the DNA, we are largely unable to get information except in small isolated pockets of some diseases, consider the following:

Understanding the genetic make-up of a unit will never allow us to understand the behavior of the unit itself.

A reminder that what I am writing here isn’t an opinion. It is a straightforward mathematical property.
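The counting behind that property can be illustrated with a crude proxy (a stylized illustration, not a model of any actual brain or genome):

```python
# If each of n units can be "on" or "off", the system has 2**n possible
# configurations, so adding a single unit exactly doubles the state
# space, in line with the 300-to-301 remark above.

def state_count(n_units):
    return 2 ** n_units

assert state_count(301) == 2 * state_count(300)

# Pairwise interactions grow quadratically with the number of units,
# another face of the curse of dimensionality.
def interaction_count(n_units):
    return n_units * (n_units - 1) // 2

print(interaction_count(300))  # 44850 possible pairings among 300 units
```

Enumerating configurations one by one is hopeless long before n reaches biological scales, which is the sense in which the macro cannot be read off the micro.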

I cannot resist this:

Much of the local research in experimental biology, in spite of its seemingly “scientific” and evidentiary attributes, fails a simple test of mathematical rigor.

This means we need to be careful about what conclusions we can and cannot draw from what we see, no matter how locally robust it seems. It is impossible, because of the curse of dimensionality, to produce information about a complex system from the reduction of conventional experimental methods in science. Impossible.

My colleague Yaneer Bar-Yam has applied the failure of mean-field techniques to the selfish-gene narrative of evolutionary theory trumpeted by such aggressive journalists as Richard Dawkins and Steven Pinker and other naive celebrities with more mastery of English than probability theory. He shows that local properties fail, for simple geographical reasons, hence if there is such a thing as a selfish gene, it may not be the one they are talking about. We have addressed the flaws of the “selfishness” of a gene as shown mathematically by Nowak and his colleagues.

Hayek, who had a deep understanding of the properties of complex systems, promoted the idea of “scientism” to debunk statements that are nonsense dressed up as science, used by its practitioners to get power, money, friends, decorations, invitations to dinner with the Norwegian minister of culture, use of the VIP transit lounge at Kazan Airport, and similar perks. It is easier to take a faker seriously, since science doesn’t look neat and cosmetically appealing. So with the growth of science, we will see a rise of scientism, and my general heuristics are as follows: 1) look for the presence of simple nonlinearity, hence Jensen’s Inequality; if there is such nonlinearity, then call Yaneer Bar-Yam at the New England Complex Systems Institute for a friendly conversation about the solidity of the results; 2) if the paper writers use anything that remotely looks like a “regression” and “p-values”, ignore the quantitative results.

## Where You Cannot Generalize from Knowledge of Parts

Nassim, along with his five colleagues at the Real World Risk Institute, offers a Qualitative Mini Certificate in Risk (real world risk, not risk management) for risk professionals and analysts interested in how what they know applies to the real world; professional risk takers (with some basic familiarity with technical language) willing to gain perspective and understand how to use the research without falling into model error; and other executives/decision makers… literally any risk taker with some technical understanding. The next intense one-week workshop will take place from June 6th to the 10th at the Princeton Club in New York City.

A new Quantitative Mini Certificate in Risk, “the only quantitative program embedded in the real world,” will be offered from August 15th to the 19th in Stony Brook, New York. This program is designed for professional quantitative risk takers and managers interested in depth and links to reality.

For more information about these programs, visit the Real World Risk Institute’s website or email the Program Coordinator, Alicia Bentham-Williams, at alicia@realworldrisk.com or info@realworldrisk.com.

## Nassim’s Mini Certificates in Risk

Along with a long list of global thought leaders, Nassim will be speaking at this year’s SALT Conference at the Bellagio in Las Vegas on the weekend of May 10-13th:

The SkyBridge Alternatives (SALT) Conference is committed to facilitating balanced discussions and debates on macro-economic trends, geo-political events and alternative investment opportunities within the context of a dynamic global economy. With thought leaders, public policy officials, business professionals and investors from over 42 countries and 6 continents, the SALT Conference provides an unmatched opportunity for attendees from around the world to connect with global leaders and network with industry peers.

SALT is produced by SkyBridge SALT, LLC, an affiliate of SkyBridge Capital, a global investment firm with approximately \$12.6 billion in assets under management or advisement as of January 31, 2016. Unique and forward looking, SkyBridge Capital is a pioneer in the alternative asset management industry leading an evolution in the fund of funds space. SkyBridge has redefined the fund of funds investment model by developing a thematic and tactical multi-strategy investment approach to consistently generate attractive risk-adjusted returns. The firm’s investment strategy is enhanced by its unparalleled access to industry decision makers and global financial leaders – including money managers, economists and policy makers – whose insights are integral to shaping its global outlook and opportunity sets. The firm is headquartered in New York and also has a presence in Zürich, Switzerland and Seoul, South Korea.

## Nassim Will Speak at the 8th Annual SALT Conference in Las Vegas May 10-13

Nassim kicks off The Bank of England’s One Bank Flagship Seminar, the first such seminar offered by the bank in an effort at greater transparency:

The first part of this talk – The Law of Large Numbers in the Real World – presents fat tails, defines them, and shows how the conventional statistics fail to operate in the real world, particularly with econometric variables, for two main reasons: 1) we need a lot, a lot more data for fat tails; and 2) we are going about estimators the wrong way. The second part – Detecting Fragility – presents heuristics to detect fragility in portfolios. Fragility is shown to be ‘anything that is harmed by volatility’. The good news is that while (tail) risk is not measurable, fragility is.
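The first point, that fat tails demand far more data, can be sketched by sampling a fat-tailed variable. The Pareto tail exponent alpha = 1.2 and the sample size below are illustrative assumptions, not values from the seminar:

```python
import random

# Sample means of a fat-tailed (Pareto) variable scatter widely across
# equal-sized samples, so the Law of Large Numbers operates very slowly.

def pareto_sample(n, alpha=1.2, seed=None):
    rng = random.Random(seed)
    # Inverse-transform sampling: X = (1 - U) ** (-1 / alpha), so X >= 1.
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

for seed in range(5):
    sample = pareto_sample(10_000, seed=seed)
    print(round(sum(sample) / len(sample), 2))
# The printed means typically scatter noticeably across seeds; a
# thin-tailed variable sampled at the same size would agree far more
# tightly, which is why conventional estimators mislead under fat tails.
```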

## Tail Risk Measurement Heuristics

Nassim joins Jean-Philippe Bouchaud, founder and chairman of Capital Fund Management, in a debate at the 2015 Chaire PARI Conference.

## Video: Nassim on Modeling Financial Risks: Relevance and Resilience (In French)

Discussing the concept of antifragility, Nassim remotely addresses the Theory of Constraints International Conference held in Cape Town, South Africa, from September 6th to 9th, 2015. The theme of the conference was how to use the Theory of Constraints to transform organizations and people from fragile (harmed by volatility) to robust (not harmed by volatility) to antifragile (benefiting from volatility).

## Video: Nassim on the Concept of Antifragility

Thanks to QuantLabs.net for the link.

## How Increasing Benefits Increases the Risk of Ruin

Originally published May 24th 2014

Sornette vs. Taleb: Diametrically Opposite Approaches to Risk & Predictability

## [VIDEO] Diplomatic Debate Between Taleb and Sornette on Approaches to Risk & Predictability

Quickly recorded: you do not decrease tail risk by increasing benefits; you decrease tail risk by decreasing tail risk.