Links

On Medium, Nassim posts a continuation of his previous article on the Minority Rule:

Let us take the idea of the last chapter [the intransigent minority’s disproportional influence] one step further, get a bit more technical, and generalize. It will debunk some of the fallacies we hear in psychology, “evolutionary theory”, game theory, behavioral economics, neuroscience, and similar fields not subjected to proper logical (and mathematical) rigor, in spite of the occasional semi-complicated equations. For instance, we will see why behavioral economics will necessarily fail us even if its results were true at the individual level, and why the use of brain science to explain behavior has been no more than great marketing for scientific papers.

Consider the following as a rule. Whenever you have nonlinearity, the average doesn’t matter anymore. Hence:

The more nonlinearity in the response, the less informational the average.

For instance, your benefit from drinking water would be linear if ten glasses of water were ten times as good as one single glass. If that is not the case, then necessarily the average water consumption matters less than something else that we will call “unevenness”, or volatility, or inequality in consumption. Say your average daily consumption needs to be one liter a day and I gave you ten liters one day and none for the remaining nine days, for an average of one liter a day. Odds are you won’t survive. You want your quantity of water to be as evenly distributed as possible. Within the day, you do not need to consume the same amount of water every minute, but at the scale of the day, you want maximal evenness.

The effect of the nonlinearity in the response on the average –and the informational value of such an average –is something I’ve explained in some depth in Antifragile, as it was the theme of the book, so I will just assume a summary here is sufficient. From an informational standpoint, someone who tells you “We will supply you with one liter of water per day on average” is not conveying much information at all; there needs to be a second dimension, the variations around such an average. You are quite certain that you will die of thirst if his average comes from a cluster of a hundred liters every hundred days.
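Since the argument is quantitative, here is a minimal sketch of the water example. The survival rule (death after three consecutive days below half a liter) and all names are made-up assumptions for illustration, not anything from the article:

```python
# Two water-supply schedules with the same average (1 liter/day),
# judged by a hypothetical survival rule: you die after three
# consecutive days below half a liter.
def survives(daily_liters, threshold=0.5, max_dry_days=3):
    dry = 0
    for amount in daily_liters:
        dry = dry + 1 if amount < threshold else 0
        if dry >= max_dry_days:
            return False
    return True

even = [1.0] * 10                  # one liter every day
clustered = [10.0] + [0.0] * 9     # ten liters once, then nothing

assert sum(even) == sum(clustered)  # identical average consumption
print(survives(even))        # True
print(survives(clustered))   # False
```

Same average, opposite outcomes: the second dimension (variation around the average) carries the information that matters.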

Note that an average and a sum are mathematically the same thing, up to a simple division by a constant, so the fallacy of the average translates into the fallacy of summing, or aggregating, or inferring the properties of a collective that has many components from the properties of a single unit.


As we saw, complex systems are characterized by the interactions between their components, and the resulting properties of the ensemble not (easily) seen from the parts.

There is a rich apparatus to study interactions originating from what is called the Ising problem, after the physicist Ernst Ising, originally in the ferromagnetic domain, but that has been adapted to many other areas. The model consists of discrete variables that represent atoms that can be in one of two states called “spins”, nicknamed “up” or “down” (or dealt with using +1 or −1). The atoms are arranged in a lattice, allowing each unit to interact with its neighbors. In low dimensions –that is, when every atom interacts on a line (one dimensional) with two neighbors, one to its left and one to its right, or on a grid (two dimensional) –the Ising model is simple and lends itself to simple solutions.
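As a rough sketch (nothing like the full apparatus), a one-dimensional Ising chain with nearest-neighbor coupling fits in a few lines; the spin configurations below are arbitrary choices for the example:

```python
# Minimal 1D Ising chain: spins are +1/-1, each interacting only with
# its immediate neighbors (periodic boundary). J > 0 favors alignment.
def energy(spins, J=1.0):
    n = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

aligned = [1] * 8                          # all spins "up"
disordered = [1, -1, 1, 1, -1, -1, 1, -1]  # an arbitrary mixed state

print(energy(aligned))      # -8.0, the minimum-energy configuration
print(energy(disordered))   # 4.0, higher energy: misaligned neighbors
```

The point is that each unit's state depends on its neighbors' states, which is exactly the dependence that makes naive averaging break down.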

One method in such situations, called “mean field”, is to generalize from the “mean”, that is, the average interaction, and apply it to the ensemble. This is possible if and only if there is no dependence between one interaction and another –the procedure appears to be the opposite of renormalization from the last chapter. And, of course, this type of averaging is not possible if there are nonlinearities in the effect of the interactions.

More generally, the Übererror is to apply the “mean field” technique, by looking at the average and applying a function to it, instead of averaging the functions –a violation of Jensen’s inequality [Jensen’s Inequality, definition: a function of an average is not an average of a function, and the difference increases with disorder]. Distortions from mean field techniques will necessarily occur in the presence of nonlinearities.
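A quick numerical sketch of the violated inequality, with x² standing in as an arbitrary convex response (any convex function would show the same pattern):

```python
import statistics

# Jensen's inequality for a convex f: f(mean(x)) <= mean(f(x)),
# and the gap widens as the disorder (spread) of x increases.
f = lambda x: x ** 2           # arbitrary convex "response" function

low_disorder = [9, 10, 11]     # mean 10, little spread
high_disorder = [0, 10, 20]    # same mean 10, much more spread

gaps = []
for xs in (low_disorder, high_disorder):
    m = statistics.mean(xs)
    gaps.append(statistics.mean([f(x) for x in xs]) - f(m))

print(gaps)   # gap for the spread-out set (~66.7) dwarfs the tight one (~0.67)
```

Applying the function to the average (the mean-field shortcut) and averaging the function give the same answer only when the spread, or the nonlinearity, vanishes.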

What I am saying may appear to be complicated here –but it was not so with the story of the average water consumption. So let us produce equivalent simplifications across things that do not average.

From the last chapter [Minority Rule],

The average dietary preferences of the population will not allow us to understand the dietary preferences of the whole.

Some scientist observing the absence of peanuts in U.S. schools would infer that the average student is allergic to peanuts when only a very small percentage are so.

Or, more bothersome

The average behavior of the market participant will not allow us to understand the general behavior of the market.

These points appear clear thanks to our discussion about renormalization. They may cancel some of what you know. But to show how, under complexity, the entire field of social science may fall apart, take one step further:

The psychological experiments on individuals showing “biases” do not allow us to understand aggregates or collective behavior, nor do they enlighten us about the behavior of groups.

Human nature is not defined outside of transactions involving other humans. Remember that we do not live alone, but in packs and almost nothing of relevance concerns a person in isolation –which is what is typically done in laboratory-style work.

Some “biases” deemed “irrational” by psycholophasters interested in pathologizing humans are not necessarily so if you look at their effect on the collective.

What I just said explains the failure of the so-called field of behavioral economics to give us any more information than orthodox economics (itself rather poor) on how to play the market or understand the economy, or generate policy.

But, going further, there is this thing called, or as Fat Tony would say, this ting called game theory that hasn’t done much for us other than produce loads of verbiage. Why?

The average interaction as studied in game theory insofar as it reveals individual behavior does not allow us to generalize across preferences and behavior of groups.

Groups are units on their own. There are qualitative differences between a group of ten and a group of, say, 395,435. Each is a different animal, in the literal sense, as different as a book is from an office building. When we focus on commonalities, we get confused; but, at a certain scale, things become different. Mathematically different. The higher the dimension, in other words the number of possible interactions, the more difficult it is to understand the macro from the micro, the general from the units.
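The combinatorial point can be made concrete by counting only pairwise interactions (a deliberate understatement, since higher-order interactions exist too):

```python
# Possible pairwise interactions in a group of n members: n*(n-1)/2.
# Going from 10 members to 395,435 multiplies the interaction count
# by over a billion: the two groups are mathematically different animals.
def pairwise(n):
    return n * (n - 1) // 2

print(pairwise(10))        # 45
print(pairwise(395_435))   # 78,184,221,895
```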

Or, in spite of the huge excitement about our ability to see into the brain using the so-called field of neuroscience:

Understanding how the subparts of the brain (say, neurons) work will never allow us to understand how the brain works.

So far we have no f***g idea how the brain of the worm C. elegans works, which has around three hundred neurons. C. elegans was the first living unit to have its genome sequenced. Now consider that the human brain has about one hundred billion neurons, and that going from 300 to 301 neurons may double the complexity. [I have actually found situations where a single additional dimension may more than double some aspect of the complexity; say, going from 1,000 to 1,001 may cause complexity to be multiplied by a billion times.] So the use of never here is appropriate. The same reasoning explains why, in spite of the trumpeted “advances” in sequencing the DNA, we are largely unable to get information except in small isolated pockets of some diseases:

Understanding the genetic make-up of a unit will never allow us to understand the behavior of the unit itself.

A reminder that what I am writing here isn’t an opinion. It is a straightforward mathematical property.
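One way to see the mathematical property behind the neuron argument is to count possible wiring diagrams. The directed-connection model below is a crude proxy of my own for illustration, not a claim about real brains:

```python
# With n neurons and a possible directed connection between each ordered
# pair, there are 2**(n*(n-1)) candidate wiring diagrams. Adding a single
# neuron multiplies that count by 2**(2*n): far more than "doubling".
def log2_diagrams(n):
    return n * (n - 1)   # log base 2 of the number of wiring diagrams

for n in (300, 1000):
    extra = log2_diagrams(n + 1) - log2_diagrams(n)
    print(f"{n} -> {n + 1} neurons: count multiplied by 2**{extra}")
```

Going from 300 to 301 neurons multiplies the count of candidate diagrams by 2^600, which is why studying the parts does not scale to the whole.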

I cannot resist this:

Much of the local research in experimental biology, in spite of its seemingly “scientific” and evidentiary attributes, fails a simple test of mathematical rigor.

This means we need to be careful of what conclusions we can and cannot make about what we see, no matter how locally robust it seems. It is impossible, because of the curse of dimensionality, to produce information about a complex system from the reduction of conventional experimental methods in science. Impossible.

My colleague Yaneer Bar Yam has applied the failure of mean-field techniques to the selfish-gene narrative of evolutionary theory trumpeted by such aggressive journalists as Richard Dawkins and Steven Pinker and other naive celebrities with more mastery of English than probability theory. He shows that local properties fail, for simple geographical reasons, hence if there is such a thing as a selfish gene, it may not be the one they are talking about. We have addressed the flaws of the “selfishness” of a gene as shown mathematically by Nowak and his colleagues.

Hayek, who had a deep understanding of the properties of complex systems, promoted the idea of “scientism” to debunk statements that are nonsense dressed up as science, used by its practitioners to get power, money, friends, decorations, invitations to dinner with the Norwegian minister of culture, use of the VIP transit lounge at Kazan Airport, and similar perks. It is easier to take a faker seriously, since science doesn’t look neat and cosmetically appealing. So with the growth of science, we will see a rise of scientism, and my general heuristics are as follows: 1) look for the presence of simple nonlinearity, hence Jensen’s Inequality; if there is such nonlinearity, then call Yaneer Bar Yam at the New England Complex Systems Institute for a friendly conversation about the solidity of the results; 2) if the paper writers use anything that remotely looks like a “regression” and “p-values”, ignore the quantitative results.

On Medium, Nassim explains how once an intransigent minority reaches a tiny percentage of the total population, the majority of the population will naturally succumb to their preferences:

The best example I know that gives insights into the functioning of a complex system is the following situation. It suffices for an intransigent minority –a certain type of intransigent minority –to reach a minutely small level, say three or four percent of the total population, for the entire population to have to submit to their preferences. Further, an optical illusion comes with the dominance of the minority: a naive observer would be under the impression that the choices and preferences are those of the majority. If it seems absurd, it is because our scientific intuitions aren’t calibrated for that (fughedabout scientific and academic intuitions and snap judgments; they don’t work and your standard intellectualization fails with complex systems, though not your grandmothers’ wisdom).

The main idea behind complex systems is that the ensemble behaves in ways not predicted by the components. The interactions matter more than the nature of the units. Studying individual ants will never (one can safely say never for most such situations) give us an idea of how the ant colony operates. For that, one needs to understand an ant colony as an ant colony, no less, no more, not a collection of ants. This is called an “emergent” property of the whole, by which parts and whole differ because what matters is the interactions between such parts. And interactions can obey very simple rules. The rule we discuss in this chapter is the minority rule.

The minority rule will show us how all it takes is a small number of intolerant virtuous people with skin in the game, in the form of courage, for society to function properly.

This example of complexity hit me, ironically, as I was attending the New England Complex Systems Institute summer barbecue. As the hosts were setting up the table and unpacking the drinks, a friend who was observant and only ate Kosher dropped by to say hello. I offered him a glass of that type of yellow sugared water with citric acid people sometimes call lemonade, almost certain that he would reject it owing to his dietary laws. He didn’t. He drank the liquid called lemonade, and another Kosher person commented: “liquids around here are Kosher”. We looked at the carton container. There was a fine print: a tiny symbol, a U inside a circle, indicating that it was Kosher. The symbol will be detected by those who need to know and look for the minuscule print. As for others, like myself, I had been speaking prose all these years without knowing –drinking Kosher liquids without knowing they were Kosher liquids.

Figure 1 The lemonade container with the circled U indicating it is (literally) Kosher.

Criminals With Peanut Allergies

A strange idea hit me. The Kosher population represents less than three tenths of a percent of the residents of the United States. Yet it appears that almost all drinks are Kosher. Why? Simply because going full Kosher allows the producer, grocer, or restaurant to avoid having to distinguish between Kosher and nonkosher for liquids, with special markers, separate aisles, separate inventories, and different stocking sub-facilities. And the simple rule that changes the total is as follows:

A Kosher (or halal) eater will never eat nonkosher (or nonhalal) food, but a nonkosher eater isn’t banned from eating kosher.

Or, rephrased in another domain:

A disabled person will not use the regular bathroom but a nondisabled person will use the bathroom for disabled people.

Granted, sometimes, in practice, we hesitate to use the bathroom with the disabled sign on it owing to some confusion –mistaking the rule for the one for parking cars, under the belief that the bathroom is reserved for exclusive use by the handicapped.

Someone with a peanut allergy will not eat products that touch peanuts but a person without such allergy can eat items without peanut traces in them.

Which explains why it is so hard to find peanuts on airplanes and why schools are peanut-free (which, in a way, increases the number of persons with peanut allergies as reduced exposure is one of the causes behind such allergies).

Let us apply the rule to domains where it can get entertaining:

An honest person will never commit criminal acts but a criminal will readily engage in legal acts.

Let us call such a minority an intransigent group, and the majority a flexible one. And the rule is an asymmetry in choices.
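The asymmetry compounds when units nest inside larger units (person, household, block, and so on). A minimal sketch, assuming –purely hypothetically, for illustration –that units group into fours at each scale and that a single intransigent member flips the whole group:

```python
# Minority rule under repeated grouping: a group "goes kosher" if at
# least one of its four members has. The grouping-by-four rule is a
# simplifying assumption, not the article's actual spatial dynamics.
def next_level(p):
    return 1 - (1 - p) ** 4   # P(at least one of four is intransigent)

p = 0.03   # 3% intransigent individuals
for level in ("household", "block", "neighborhood"):
    p = next_level(p)
    print(f"{level}: {p:.1%} effectively kosher")
```

After three levels of grouping, a 3% minority dictates the consumption of well over 80% of the population, while a naive observer sees only the majority's apparent "preference".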

I once pulled a prank on a friend. Years ago, when Big Tobacco was hiding and repressing the evidence of harm from secondary smoking, New York had smoking and nonsmoking sections in restaurants (even airplanes had, absurdly, a smoking section). I once went to lunch with a friend visiting from Europe: the restaurant only had availability in the smoking section. I convinced the friend that we needed to buy cigarettes, as we had to smoke in the smoking section. He complied.

Read the rest of the article.

On Facebook, Nassim recently posted the following statement about journalistic ethics in light of the current controversy over Hulk Hogan’s successful $140 million lawsuit against Gawker Media. Hulk Hogan was backed by tech billionaire Peter Thiel in his efforts:

PUTTING SKIN IN THE GAME OF JOURNALISTS
[CITIZENS vs GAWKER and CITIZENS vs JOURNALISM]

Journalists –like any guild –care about their peers and their community more than the general public. Except that we cannot afford to have such a community engage in a conspiracy against the laymen, since they represent our interests, us the lay crowd; they are supposed to stand for the general public against inner circles of power. Journalism arose from the need to expose falsehood and take risks in exposing matters detrimental to the public; in short, to counter the agency problem of the powerful. But, it is turning out, the journalism model can also work in the opposite manner: members have been effective in escaping having skin in the game –only whistleblowers and war correspondents currently do.

So one can see how this severe agency problem can explode with the Gawker story. The English tabloid machine came to the U.S. in full force with Gawker, founded by a firm that specializes in dirt on the internet. By dirt I don’t mean a fraudulent transaction abetted by some power: no, the kind of dirt that takes place in bedrooms (and even in bathrooms).

They sell voyeurism, predator voyeurism.

In other words they want to harm citizens by disclosing their private information and posting their videos without their permission in the interest of selling information. And without being accountable for it.

Gawker, having posted a video of a celebrity having sex without his permission, incurred a monstrous judgment of $140 million. The suit will bankrupt Gawker. Most of all, the judgment revealed that such a predatory business model will not survive, not because it is immoral, but because it has tail risks. For America has tort laws and a legal mechanism by which people harmed by corporations can be compensated –a mechanism that flourished thanks to Ralph Nader. It, along with the First Amendment, protects citizens by putting skin in the game of the corporations.

Gawker is trying to make a First Amendment argument and unfortunately journos appear to find this justified –while normal citizens are horrified. Liberty in the thoughts of the founding fathers was not about voyeurism, but about public matters.

Gawker argued that because the person having sex on the video they posted was a public person, it became a “public” matter exempted from privacy protection. People failed to see that, should that argument be true, someone spying on any public figure could next be allowed to post their bedroom activity (including Hillary Clinton, Obama, anyone)… (Gawker has ruined the lives of 21-year-olds by posting their sex tapes, and their reaction was outrageous; in one instance their lawyer Gaby Darbyshire e-mailed the woman who was in a revenge sex tape, defending the video as “completely newsworthy” and scolding her about how “one’s actions can have unintended consequences.”)

Peter Thiel, a billionaire with a vendetta against Gawker, funded a lawsuit. Revenge motives perhaps, but this is how the market works: Gawker tries to make money, therefore they need to live with the risk of someone trying to make money from their demise.

(You make money from the demise of a 21 yo, someone will make money from yours. You make yourself a vehicle for revenge porn; you become the subject of someone’s revenge. You engage in bullying someone financially weaker than you; someone stronger will bully you. There is no reason Gawker should be the only one to use asymmetry given that their very business is asymmetry against weak people–and this is general as the media is asymmetrically strong against citizens, what is commonly called “bullying” ).
I would have personally shorted Gawker (if they were publicly listed) to make money from their collapse. And I am ready to fund lawsuits against journalists who break some intellectual rules and distort people’s positions (strawman arguments).

Any journalist who supports Gawker in the name of the First Amendment fails to understand that they as a community are committing suicide by trivializing the reasons behind the First Amendment –and they make it conflict with other fundamental rights. And a corporation trying to warp our sacred values should go bankrupt. And anyone, like Peter Thiel, who accelerates such bankruptcy should be thanked.

Nassim shares an excerpt from his work-in-progress Skin in the Game at Evonomics. In a fascinating article called How To Legally Own Another Person, he discusses how and why well-paid employees behave much like slaves. It begins:

In its early phase, as the church was starting to get established in Europe, there was a group of itinerant people called the gyrovagues. They were gyrating and roaming monks without affiliation to any institution. Theirs was a free-lance (and ambulatory) variety of monasticism, and their order was sustainable as the members lived off begging and the good graces of townsmen who took an interest in them. It is a weak form of sustainability, as one can hardly call sustainable a group of people with vows of celibacy: they cannot grow organically and would need continuous enrollment. But their members managed to survive thanks to help from the population, which provided them with food and temporary shelter.

Sometime around the fifth century, they started disappearing –they are now extinct. The gyrovagues were unpopular with the church, banned by the Council of Chalcedon in the fifth century, then again by the Second Council of Nicaea about three hundred years later. In the West, Saint Benedict of Nursia, their greatest detractor, favored a more institutional brand of monasticism and ended up prevailing with his rules, which codified the activity, with a hierarchy and strong supervision by an abbot. For instance, Benedict’s rules, put together in a sort of instruction manual, stipulate that a monk’s possessions should be in the hands of the abbot (Rule 33), and Rule 70 bans angry monks from hitting other monks.

Why were they banned? They were, simply, totally free. They were financially free, and secure, not because of their means but because of their wants. Ironically, by being beggars, they had the equivalent of f*** you money, the kind one can get more easily by being at the lowest rung than by being a member of the income-dependent class.

You can read the rest of the article at Evonomics.

» Property Soul, notes from a Singapore property investor, writes Six reasons why property is not an antifragile investment after meeting Nassim at his recent talk there.

» Zero Hedge talks Antifragility on Prepared? When Ebola hits your town you will want to be antifragile.

» Nouriel Roubini on CNBC reveals his black swan scenarios.

» Business Insider names Nassim one of The 25 Most Successful Wharton Business School Graduates.

» Antifragility explored when applied to raising children on Why Parents Inadvertently Hinder The Success Of Their Children on Forbes.

» Lorin Hochstein discusses the fragile side of cloud software concluding the “future of cloud software is systems that fail much less often, but much harder” on Cloud software, fragility and Air France 447.

Got any other links? Let us know in the comments!

From Nassim Taleb’s Facebook Page:

Life is Randomness! Life is Antifragility!

More evidence that you are alive if & only if you like volatility. More evidence of Jensen’s inequality (convex response). This article passed my filter, my bi-monthly linking allowance. (via Steven Strogatz)

Stochastic properties of neurotransmitter release expand the dynamic range of synapses.

Yang H, Xu-Friedman MA.
Department of Biological Sciences, University at Buffalo, State University of New York, Buffalo, New York 14260.

Release of neurotransmitter is an inherently random process, which could degrade the reliability of postsynaptic spiking, even at relatively large synapses. This is particularly important at auditory synapses, where the rate and precise timing of spikes carry information about sounds. However, the functional consequences of the stochastic properties of release are unknown. We addressed this issue at the mouse endbulb of Held synapse, which is formed by auditory nerve fibers onto bushy cells (BCs) in the anteroventral cochlear nucleus. We used voltage clamp to characterize synaptic variability. Dynamic clamp was used to compare BC spiking with stochastic or deterministic synaptic input. The stochastic component increased the responsiveness of the BC to conductances that were on average subthreshold, thereby increasing the dynamic range of the synapse. This had the benefit that BCs relayed auditory nerve activity even when synapses showed significant depression during rapid activity. However, the precision of spike timing decreased with stochastic conductances, suggesting a trade-off between encoding information in spike timing versus probability. These effects were confirmed in fiber stimulation experiments, indicating that they are physiologically relevant, and that synaptic randomness, dynamic range, and jitter are causally related.

http://www.ncbi.nlm.nih.gov/pubmed/24005293