Category Archives: Fat Tails

Thoughts on Brexit and Persistent Complex Systems

All complex biological systems have boundaries. Cells have membranes, and some have walls. Multicellular organisms are bounded in skin, and there are many internal barriers that limit access to select agents (e.g. the blood-brain barrier). Swarms, flocks, and herds limit their exposure to predators by aggregating spatially, forming a boundary between in- and out-herd. Human societies live more peacefully with their neighbors when their boundaries are clearly established, often by physical features like mountains and rivers.

This is not a coincidence. For all of these systems, what is most essential to their persistence is their internal organization and selective interfaces with the environment. This organization is not a given; it has been achieved over the chronicle of evolutionary history. For all of these systems, to ‘open them up’ means a breakdown of that organization. Consider what happens to a cell when you ‘open up’ its membrane and allow any agents in the environment to flow freely through it. The organization is lost — the cell is lost.

The United Kingdom has made history by voting for its independence, taking a step toward reaffirming its functional boundaries. We will see more of this in the coming weeks, months, and years. Despite those who cite fragile economic predictions as reasons to ‘remain’ subject to centralized bureaucratic actors, there are far more basic reasons to ‘leave’, and the economists don’t have them in their equations.

In biological systems, boundaries are permeable, but not arbitrarily — they are semi-permeable. Systems which depend on their internal organization for persistence in the face of uncertainty must be free to manage their own semi-permeable boundaries, else they will make a Darwinian exit, making room for those organizations that are more able and willing to do so.

Climate Models and Precautionary Measures

Forthcoming in Issues in Science And Technology Summer 2015

Joseph Norman, Rupert Read, Yaneer Bar-Yam, and Nassim Nicholas Taleb

The policy debate with respect to anthropogenic climate change typically revolves around the accuracy of models. Those who contend that models make accurate predictions argue for specific policies to stem the foreseen damaging effects; those who doubt their accuracy cite a lack of reliable evidence of harm to warrant policy action.

These two alternatives are not exhaustive. One can sidestep the “skepticism” of those who question existing climate models by framing risk in the most straightforward possible terms, at the global scale. That is, we should ask, “what would the correct policy be if we had no reliable models?”

We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale. Even a risk with a very low probability becomes unacceptable when it affects all of us – there is no reversing mistakes of that magnitude.

Without any precise models, we can still reason that polluting or altering our environment significantly could put us in uncharted territory, with no statistical track record and potentially large consequences. It is at the core of both scientific decision making and ancestral wisdom to take seriously the absence of evidence when the consequences of an action can be large. And it is standard textbook decision theory that a policy should depend at least as much on uncertainty concerning the adverse consequences as it does on the known effects.

Further, it has been shown that in any system fraught with opacity, harm is in the dose rather than in the nature of the offending substance: it increases nonlinearly with the quantities at stake. Everything fragile has this property. While some amount of pollution is inevitable, high quantities of any pollutant put us at a rapidly increasing risk of destabilizing the climate, a system that is integral to the biosphere. Ergo, we should build down CO2 emissions, even regardless of what climate models tell us.
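
As a toy illustration of this nonlinearity (the quadratic harm function below is an assumption chosen for illustration, not a model from the article): if harm grows with the square of the dose, ten unit doses do far less total damage than one dose of ten.

```python
# Toy convex dose-response: harm grows like the square of the dose.
# This is an illustrative assumption, not a fitted or claimed model.
def harm(dose):
    return dose ** 2

# Ten small doses vs. one large dose with the same total quantity.
small_doses = sum(harm(1) for _ in range(10))  # 10 * 1^2 = 10
one_large_dose = harm(10)                      # 10^2 = 100
```

Under any such convex response, concentrating the same total quantity into one large shock multiplies the damage — which is why the scale of an intervention, not just its nature, drives the risk.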

This leads to the following asymmetry in climate policy. The scale of the effect must be demonstrated to be large enough to have impact. Once this is shown, and it has been, the burden of proof of absence of harm is on those who would deny it.

It is the degree of opacity and uncertainty in a system, as well as asymmetry in effect, rather than specific model predictions, that should drive the precautionary measures. Push a complex system too far and it will not come back. The popular belief that uncertainty undermines the case for taking seriously the ’climate crisis’ that scientists tell us we face is the opposite of the truth. Properly understood, as driving the case for precaution, uncertainty radically underscores that case, and may even constitute it.

Student’s T Random Walks

A few 2D random walks with step magnitudes drawn from the Student’s T distribution. The distributions become progressively more fat-tailed further down the page. The graphs can be zoomed and panned, and zooming is really instructive with respect to the fat-tailed dynamics: much of the micro detail is lost at the scale of the largest jumps, and zooming in reveals just how large the rare jumps are relative to the ‘typical’ ones.
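
A minimal sketch of how such a walk can be generated (the degrees-of-freedom value `df = 1.5` and the use of NumPy’s `standard_t` sampler are assumptions, not the original code; smaller `df` gives fatter tails):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100000
df = 1.5  # degrees of freedom; smaller => fatter tails (assumed value)

# Step magnitudes from Student's t, directions uniform on the circle.
steps = rng.standard_t(df, size=T)
angles = rng.uniform(0, 2 * np.pi, size=T)

# Accumulate the (x, y) increments into a 2D walk.
increments = np.column_stack((np.cos(angles), np.sin(angles))) * steps[:, None]
X = np.cumsum(increments, axis=0)
```

Plotting `X[:, 0]` against `X[:, 1]` at different zoom levels reproduces the effect described above.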

Antifragile Random Walks

A million timesteps with a Pareto distribution with $latex \alpha =1 $ and mode shifted down to $latex -11 $ from $latex 1 $. Notice how for most time steps, the walk moves downward. However, the rarer upticks are large, orders of magnitude larger than downward movements.

In [1]:
import numpy as np

T = 1000000
X = np.zeros(T)

for t in range(T-1):
    # np.random.pareto(1) samples the Lomax form (mode 0); adding 1
    # recovers the classical Pareto with mode 1, and shifting by -12
    # moves the mode to -11 as described above
    X[t+1] = X[t] + (np.random.pareto(1) + 1) - 12


Cauchy Random Walks, 2D and 3D

1 million steps, with step size determined by a Cauchy distribution, and angle(s) by a flat distribution.

In [1]:
import math
import numpy as np

T = 1000000
X = np.zeros((T,2))

for t in range(T-1):
    # Cauchy-distributed step size, uniform direction on the circle
    stepSize = np.random.standard_cauchy()
    direction = np.random.rand()*2*math.pi
    xStep = math.cos(direction)*stepSize
    yStep = math.sin(direction)*stepSize
    X[t+1,0] = X[t,0] + xStep
    X[t+1,1] = X[t,1] + yStep
In [2]:
X = np.zeros((T,3))

for t in range(T-1):
    # Cauchy-distributed step size, two flat angles for the 3D direction
    stepSize = np.random.standard_cauchy()
    direction1 = np.random.rand()*2*math.pi
    direction2 = np.random.rand()*2*math.pi

    xStep = math.cos(direction2)*math.cos(direction1)*stepSize
    yStep = math.sin(direction1)*stepSize
    zStep = math.sin(direction2)*math.cos(direction1)*stepSize
    X[t+1,0] = X[t,0] + xStep
    X[t+1,1] = X[t,1] + yStep
    X[t+1,2] = X[t,2] + zStep
In [3]:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Plot the 3D walk
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(X[:,0], X[:,1], X[:,2])
plt.show()