Category Archives: Fragility

The response to Houston flooding should not be (primarily) about climate change

As the ongoing devastation around Houston, TX continues to unfold from Hurricane Harvey flood waters, it is crucial that we begin learning lessons to better mitigate these risks in the future — for Houston and elsewhere. 

Inevitably, both scientific and political discussions have turned their focus towards climate change, and to what degree changes in the climate may have played a role in this and other disasters. 

Here, I do not intend to develop an argument about whether anthropogenic climate change is real, nor about whether it has been adequately demonstrated scientifically (importantly, those are two separate issues).

Rather, this is an argument about how to best mitigate the risks of such disasters moving forward regardless of whether or not the culprit is climate change. 

The crucial question is: given our current state of knowledge and capacity to act, what should we do?

There is a deep problem in our underestimation of extreme events under fat tails. This problem, both technically addressed and made famous by Nassim Taleb, remains central regardless of climate change effects. We must get better at anticipating and addressing extreme events.
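
To make the fat-tails point concrete, here is a small Monte Carlo sketch (illustrative parameters only; the Pareto with α = 1.5 and the Gaussian baseline are arbitrary choices, not a rainfall model): under fat tails, the largest value observed keeps growing with sample size, so any historical record systematically understates the extremes still to come.

import numpy as np

# Toy Monte Carlo: how the largest observed value scales with sample size
# under a fat-tailed (Pareto, alpha = 1.5) versus thin-tailed (Gaussian) law.
rng = np.random.default_rng(42)
for n in (10**2, 10**4, 10**6):
    pareto_max = (rng.pareto(1.5, n) + 1).max()   # classic Pareto samples
    normal_max = rng.normal(0.0, 1.0, n).max()
    print(f"n = {n:>9,}: Pareto max = {pareto_max:>12,.1f}   Gaussian max = {normal_max:.2f}")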

For the sake of discussion, let’s assume that climate change is real, and has caused an increase in the frequency of extreme weather events. 

In that case, there are two pathways to addressing the problem and reducing the impact of extreme events:

1) Adjust the climate such that there are fewer extreme weather events

2) Adjust infrastructure and behavior to lessen the impact of extreme events

Let’s consider two aspects of each: (1) our capacity to affect, control, and engineer, and (2) the risks associated with such an undertaking. 

Climate controllability and risks

The controllability of climate is low, most essentially due to our poor understanding of it. Most policy proposals intended to influence climate focus on the reduction of CO2 emissions. This is sensible as a via negativa approach (that is, remove rather than add); however, it suffers from our inability to control this variable immediately and directly (shall we use Houston as leverage for negotiating with China?). Moreover, there is uncertainty as to how effective this approach would be even if it were practically achievable.

Geo-engineering is an alternative approach. It again suffers from our poor understanding of the system we are attempting to control or influence, and is likely to induce unintended consequences at the scale of the engineering, that is, the global scale. There is a very real possibility that we would make things worse, not better, with such an undertaking.


Infrastructure and behavior controllability and risks

The controllability of local infrastructure is high. It demands buy-in from a much smaller number of stakeholders. Construction methods are well-established and can be modeled reasonably well.

Moreover, controllability of behavior is high. Individuals and city planners can reduce the number of residents in known flood zones. 

The risks associated with unintended consequences are at the local scale, so even where they occur, their impact will be bounded. 

|                | Controllability                                                    | Risks                                                                                             |
|----------------|--------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| Climate        | LOW, due to uncertainty and the difficulty of buy-in               | HIGH, due to uncertainty and the global scale of intervention                                     |
| Infrastructure | HIGH, due to well-established methodologies and achievable buy-in  | LOW, due to well-established methodologies and the local nature of intervention and higher-order effects |

Closing

Scientific and policy discussions about the role of climate change are reasonable and appropriate following a devastating weather event like the one we are witnessing in Houston. However, they should not be the primary focus of effort and attention. We would be well-advised to learn how to better craft our exposure to extreme events, and to better anticipate their eventual occurrence through non-naive risk analysis incorporating studies of tail behavior.

These issues are not mutually exclusive, and I don’t intend to portray them as such. Nevertheless, for long-term mitigation of the impact of extreme events, it is vital to focus on our exposure to them rather than trying to control them. 

In other words, don't try to change the random variable X; instead, change your exposure to it, f(X) (credit for this framing to Nassim Taleb).
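
A minimal sketch of that distinction, with made-up numbers (the Pareto hazard and the 'elevation' knob below are hypothetical, not a flood model): we leave the random variable X untouched and vary only the exposure function f(X).

import numpy as np

# X is a fat-tailed hazard we do not control; f(X) is our exposure to it.
rng = np.random.default_rng(7)
X = rng.pareto(1.5, 1_000_000) + 1.0          # hazard severity (classic Pareto)

def exposure(x, elevation):
    """f(X): losses accrue only when the hazard exceeds our chosen elevation."""
    return np.maximum(x - elevation, 0.0)

for h in (1.0, 5.0, 20.0):
    print(f"elevation {h:>5.1f}: mean loss = {exposure(X, h).mean():.4f}")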

For more on climate and precaution in the face of uncertainty, see our letter in Issues in Science and Technology, reproduced below.

 

Russian Intentions and American Elections

By now it is not news to anyone that there are serious questions and allegations circulating about Russian influence on our presidential election process and the appropriateness of ongoing and former professional relationships that members of the Executive administration have with Russian individuals and institutions. 

My intention here is not to take sides or make claims about who knew what when – the truth is I have no idea, and almost certainly neither do you. Rather, this brief analysis is intended to highlight points of general consensus and implications therefrom for U.S. policy, strategy, and posture moving forward.

Russian strategy is focused on generating opportunities through disruption. This play is so effective because it is much easier to disrupt a process than it is to achieve a specific end state by influencing or controlling it in specific, planned ways. Once disruption is achieved, opportunities can be identified and exploited. Against other nation-states, this typically takes the form of undermining trust in institutions, feeding a fractured citizenry forced to channel significant resources into addressing internal tensions.

While there is some indication that Russia may have had a bias against Clinton, there is also indication that they, along with almost all major American outlets, analysts, and pollsters, expected her to win. If this is true, it implies that any information campaign leveled against Clinton was designed primarily to undermine trust in the next administration, not to install Trump.

This is revealing. Again, Russia’s primary aim is to undermine trust in U.S. institutions. Having an administration that is favorable to aspects of their agenda may have been a secondary, but certainly not a primary goal. It is too strategically narrow and provides little optionality.

Which brings us to the ongoing political polarization and dynamics in the U.S. The narrative of Russian influence has become a centerpiece of this polarization: Democrats are claiming the illegitimacy of the president and his people due to as-yet-unrevealed evidence of direct collusion with a foreign government; Republicans are attempting to refocus attention towards the implied illegality of the source of the little information available, i.e. leaks.

While there is no doubt both sides are playing politics, it is worth asking, which of these behaviors does more to undermine our trust in institutions?

The answer is obvious. 

So, ironically, exaggerating the extent of Russian influence on our election serves the Russian agenda most directly. It (a) erodes trust in our basic institutions, and (b) gives plausible deniability to the Russians to continue to sow discord in the U.S. by bolstering this political divide.

The implications for U.S. strategy and posture are clear: the way to most directly serve the Russian agenda is to undermine trust in our institutions. Suggesting, without proof, that our elected representatives are in cahoots with Russia does exactly this.

The message from our leaders must be delivered in concert: Russia is an adversary who we must take seriously; our elections were fair and there is no evidence of direct interference in them; regardless of our differences, the government is legitimate, and innocent until proven otherwise.

 


Note: If any of our intelligence agencies possesses information that suggests Donald Trump occupying the White House is putting the nation in clear and present danger, then their failure to act in a timely manner to rectify the situation would be a failure of the greatest proportions. Because they have apparently not taken any actions to indicate such knowledge, I suspect there is no such danger.

Complex Systems Science: An Informal Overview — Part III: Synthesis of purposeful systems

 This is Part 3 of a multipart series, see Part 1 and Part 2 for additional context.

Purposeful System Synthesis

When we look around the world, we notice that many systems seem to work. That is, they accomplish some task that is useful, they fulfill a purpose or serve a function. The heart beats in order to pump oxygen through the distributed tissues of the body, and inner ear bones play a role in transducing air pressure waves into neural patterns that can inform behavior.

These terms — use, purpose, function — are extremely familiar to us. We live with them as concrete realities. I use the car to get to the store, to get food, to cook a meal, to eat with the family.

Reductionism has no place for these realities. It has no room for function or purpose. In a reductionistic universe, nothing happens in order that something else can happen; it just happens. Period. End of story.

Anything that appears otherwise is an epiphenomenon, essentially an illusion.

In biology this deficiency has been most acute. Reductionistic biologists are forced into continuous doublespeak in which they discuss how living systems function — with the implicit understanding that the things they discuss don’t really exist and don’t really have any functions.

This is not mere semantics. This philosophical blockage has delayed many important conversations in science, and in connecting science to a mature ability to synthesize purposeful systems.

Of course we can salvage a scientific understanding of function as soon as we admit emergent phenomena into our discourse. Functions emerge out of relations. The function of the heart is not something to be found by looking at the heart, but by looking at its network(s) of relationships. For a highly-recommended deep dive into this topic, see Life Itself by Robert Rosen.

Not all emergent properties are functional, but all function is emergent.

Design

For manmade systems, the typical answer to “how did that system get organized so that it works?” is “someone put it that way”. That is, the organization is imposed by an external agent who understands how the parts work together to make something useful happen.

This is also how creationists explain where we, as biological creatures, came from, and why we are organized the way we are: an agent arranged us that way. “Intelligent design”.

Of course as a scientific answer to the question of our existence this is no good. But it still sounds pretty good for cars.

Self-organization

How do things get organized when there is no one to organize them? Simply, they organize themselves!

In self-organized systems, all the parts that compose the system just “do their thing”. No part needs to know about the system it’s a part of, its organization, or even that it exists. Each part interacts locally with its neighbors (in physical or abstract space), and often its behaviors are characterized by a simple set of rules. Order that persists is a consequence of there being something globally stable about an arrangement that these parts discover by chance — by wiggling around randomly, essentially. 

A tangible example is the formation of so-called micelles. Micelles are physical systems that are similar in many ways to the cellular membranes found in living organisms. They are organized in a roughly spherical pattern, embodying a boundary between an ‘internal’ and ‘external’ environment.

[Figure: a micelle, lipid heads facing the surrounding water, tails pointing inward]

This arrangement is entirely a consequence of the properties of the molecules and their relations with each other and their local environment. The relevant properties are as follows: some lipid molecules happen to be structured with a head and a tail. Further, the heads of the lipids are attracted to water, whereas the tails are repelled. This polarity doesn’t mean much for a single lipid molecule, but when a bunch of them get near each other, something special happens: the tails, being repelled by the water they are in, find refuge in huddling together, so to speak. The more that bunch together, the less water there is locally, and the less repulsive that environment becomes to the tails. The heads, being attached to the tails, can’t go far. But they don’t mind being wet, so they point out away from the huddled tails. And voila, a membranous sphere.
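
This logic lends itself to a toy simulation. The sketch below is a cartoon, not molecular dynamics: 'tail' particles on a grid wiggle randomly and keep only moves that leave them no more exposed to 'water' (empty neighboring cells). Compact clumps emerge from purely local rules.

import numpy as np

# Cartoon of micelle-like self-organization: local wiggles plus a local
# "avoid water" preference, and clusters emerge with no global designer.
rng = np.random.default_rng(0)
N, STEPS = 40, 100_000
pos = rng.integers(0, N, size=(60, 2))        # 60 'tail' particles on an N x N grid

def water_contact(p, occupied):
    """How many of the 4 neighboring cells are water (i.e. unoccupied)."""
    nbrs = (((p[0] + dx) % N, (p[1] + dy) % N)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    return sum(1 for c in nbrs if c not in occupied)

for _ in range(STEPS):
    i = rng.integers(len(pos))
    others = {(int(q[0]), int(q[1])) for j, q in enumerate(pos) if j != i}
    old = pos[i].copy()
    new = (old + rng.integers(-1, 2, size=2)) % N     # random local wiggle
    if (int(new[0]), int(new[1])) in others:
        continue                                       # cell already taken
    if water_contact(new, others) <= water_contact(old, others):
        pos[i] = new                                   # accept: no wetter than before
# After many wiggles the tails end up huddled together in compact clumps.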

That’s it. No designer, no constructor, no external agent, but an organized system. Organization for free. (Well, sort of.)

Micelles are not in any direct sense “functional”. But cell membranes are. Every cell relies on the self-organization of its membrane in order to persist, constraining critical operations within a semi-controlled environment.

Evolution

Not all such arrangements persist, of course. Things break. Cells die. The volatility of the environment tests the fragility of everything, weeding out those patterns that do not withstand the variability.

This is, simply, Natural Selection. With enough time, and therefore enough volatility, the patterns that persist are those which are able to respond to volatility by adjusting their internal patterning and/or modifying their exposure. In other words, things become alive.

These are the two sisters of evolution: creation and destruction. Self-organization provides a rich variety of ordered patterns; environmental stress tests these structures for their ability to persist.

Engineering

Engineering is the practice of synthesizing systems to solve human problems. Many of the problems we face today are of enormous complexity. The systems we synthesize in an attempt to address these problems necessarily involve many interacting parts including individuals, organizations, and technologies.

Traditional engineering practices are reductionistic, and assume that a plan of roughly the following form will successfully solve any given problem:

    1. Break problem into pieces
    2. Construct a component that solves each problem-piece
    3. Put pieces together into working whole

The realities that throw a wrench in this process when it comes to large-scale complex systems are myriad, but the regularity of costly failures that result from its application is reason enough to look for a more adequate way of thinking and doing.

A figure-ground reversal is needed in engineering practice in order to facilitate the synthesis of purposeful systems whose complexity is outside the cognitive scope of any individual: a shift in emphasis from the specific structure of a complete solution, to the evolutionary environment in which problem-solving systems can evolve.

Without further argument about the potential for evolution to generate complex adaptive systems with the ability to solve a huge variety of problems, I offer several practical principles informed by evolutionary synthesis for systems engineers and systemic designers to consider in the face of complex real world challenges.

Practical principles:

    • Foster (non-toxic) variety

Evolution happens over ensembles, not individuals. Without variety there is no potential for evolution. Consider how variety is generated in the system, and foster it even when ‘reasonable’ solutions are already discovered. Never put all your eggs in one basket.

    • When resources are abundant, foster the non-obviously-useful

Unlike in explicitly designed systems, what is not obviously useful sometimes is, or can become so. Our inner ear bones, which we use to hear, evolved from the jawbone of our fishlike ancestors. The reductionistic engineer would have optimized away our ability to hear long ago.

    • Allow for heredity

Systems that show signs of success should be able to pass on their form to subsequent generations. The nature and mechanism of this process will vary from domain to domain.

    • Detect the toxic, and fail it fast and locally

Again, harmful varieties should be rooted out as early and as locally as possible, before becoming systemic.

    • Coevolve components

Things work well together when evolved together. The corollary is: don’t expect components that did not coevolve to work well together.

    • Expose to the ecological early

Exposing varieties, during development/evolution, to the real problem environment in which the system is supposed to operate will buffer against over-designing, and provide an opportunity for the maturation of systems that can handle the true complexity of their task.

    • But not too early

Sometimes it may benefit a system to have some simulated experiences or otherwise explore its range of behaviors with buffered consequences before deployment. This can be seen in the biological world for example in the propensity for play in the most complex organisms. Balancing the potential benefits of playtime with the need to get a big boy job is an art, not a science.

    • Figure-ground reversal: attend more to the selective and generative aspects of the evolutionary environment, less to a specific imagined solution

This does not imply imagined solutions should play no role, but that they should be part of an ensemble of potential solutions. Again: eggs, baskets.

    • Resist the temptation to scale quickly a promising solution

Solutions should prove themselves in time. Often, success can be incidental but look causal; we are fooled by randomness. Moreover, some malignancies develop slowly and quietly; when they show up, we will be thankful that we moved slowly.
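
To see several of these principles operating together (variety via mutation, heredity via copying, fail-fast selection via culling), consider the deliberately minimal evolutionary loop below; the bit-string 'systems' and one-line fitness function are hypothetical stand-ins, not a real design problem.

import random

# Minimal evolutionary loop: mutation supplies variety, copying supplies
# heredity, and culling the worst half each generation fails the toxic fast.
random.seed(1)
GENOME_LEN, POP, GENS, MUT_RATE = 30, 50, 100, 0.02

def fitness(genome):
    return sum(genome)                     # toy objective: count the 1-bits

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]            # selection: keep the better half
    children = [[bit ^ (random.random() < MUT_RATE) for bit in parent]
                for parent in survivors]   # heredity with a little variety
    pop = survivors + children

print("best fitness:", max(map(fitness, pop)))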

Complex Systems Science: An Informal Overview — Part II: Organization and Scale

This is part 2 of a series informally introducing and discussing ideas in complex systems science, and their relevance to how we build our world. Here is part 1. 

Complex Systems Science and the Special Sciences

We are familiar with science being broken down into different categories depending on what is being studied: particle physics (e.g. electrons), chemistry (molecules), biology (organisms), psychology (minds), sociology (groups of humans), etc. Call these the special sciences as their role is to look into a certain kind of stuff.

Complex systems science is not defined by what the stuff under study is, but rather how one asks and attempts to answer questions about whatever stuff is of interest. Recall that in complex systems, the properties we are interested in might emerge from interactions among components, i.e. emergent properties. For this reason, in complex systems science we pay special attention to the interactions and relationships among the parts, and how they give rise to (emergent) patterns of behavior.

We can do this in physical systems, biological systems, social systems, or any other system of interest. The answers we get will often look remarkably different than those from the special sciences.

Organization and Interdependence

When we attend to the interactions and relationships in a system, the organization of the stuff becomes more central to our understanding than the stuff itself. To illustrate this point, imagine a mad scientist takes each cell of your body one by one and relocates it to a random location — would you feel much like yourself? I think not. When the organization is disrupted, so are the interactions, and the nature of the system changes.

This also means that when you change one part of the system, you may affect a larger portion of it, or even the whole system. This is because the behaviors of the parts are interdependent. What part A is doing affects what parts B and C are doing (and perhaps vice versa): what my heart is doing affects what my lungs and muscles are doing. Interdependent behavior presents all kinds of challenges to standard statistical approaches, which assume the independence of the parts of a system.

Whether a change in one part of a system has effects on other parts depends on the system's organization. If you had to choose between losing a kidney or a heart, which would you choose? Would a tree be better off losing ten-thousand leaves or one trunk?

These are hints to be cautious of centralization, and to use redundancy for robustness when possible. When we build systems we should ask ourselves, “what would happen if X failed?” — even if we are pretty sure X won’t fail.
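
The arithmetic behind this hint is simple. With made-up failure probabilities and an (important) assumption of independent failures:

# Back-of-the-envelope redundancy arithmetic. Numbers are illustrative,
# and the components are assumed to fail independently.
p = 0.01                         # assumed failure probability of one component
single_point = p                 # one heart, one trunk
redundant_pair = p ** 2          # two kidneys: the system fails only if both do
print(f"single point of failure: {single_point:.2%}")    # 1.00%
print(f"redundant pair fails:    {redundant_pair:.4%}")  # 0.0100%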

More is Different

‘More is different’ is another way of saying ‘emergence happens’. It is no easy task to predict what the emergent effects will be when we scale a system (i.e. increase its size/number of components), especially when operating under reductionistic assumptions (emergent effects will always surprise the reductionist).

When engineering systems, emergent effects are often detrimental, or even catastrophic, to the integrity of the system, and therefore the purpose it was intended to fulfill. This is because, at the smaller scale, what appear as irrelevant side-effects (which may not have been noticed or attended to at all) are able to be absorbed or dissipated into the system’s environment in some way or another. When we grow the system, these ‘side-effects’ can coalesce and become relevant to the behavior of the system.

This is why we don’t see land animals much bigger than elephants throughout Earth’s history: the mechanical forces that are mere side-effects for smaller critters become causes of failure. Darwin puts a harsh limit on the scale of a design motif.
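
The arithmetic behind the elephant remark is the familiar square-cube effect, sketched here with idealized scaling (real anatomy is messier):

# Idealized square-cube scaling: weight grows like L^3, but the bone
# cross-section that carries it grows like L^2, so stress grows with L.
for L in (1, 2, 4, 8):
    weight, cross_section = L**3, L**2
    print(f"linear size x{L}: relative stress = {weight / cross_section:.0f}x")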

There are countless engineering failures that are of enormous cost to society (e.g. F-35, USS Zumwalt, the global financial system). Overgrown elephants.

The holy grail of systems engineering is to leverage emergence rather than fighting against it. Nature manages to do this via evolutionary tinkering. Perhaps we can take a cue from her.

Thoughts on Brexit and Persistent Complex Systems

All complex biological systems have boundaries. Cells have membranes, and some have walls. Multicellular organisms are bounded in skin, and there are many internal barriers that limit access to select agents (e.g. the blood-brain barrier). Swarms, flocks, and herds limit their exposure to predators by aggregating spatially, forming a boundary between in- and out-herd. Human societies live more peacefully with their neighbors when their boundaries are clearly established, often by physical features like mountains and rivers.

This is not a coincidence. For all of these systems, what is most essential to their persistence is their internal organization and selective interfaces with the environment. This organization is not a given, it has been achieved over the chronicle of evolutionary history. For all of these systems, to ‘open them up’ means a breakdown of that organization. Consider what happens to a cell when you ‘open up’ its membrane and allow any agents in the environment to flow freely through it. The organization is lost — the cell is lost.

The United Kingdom has made history by voting for their independence, and taking a step in reaffirming their functional boundaries. We will see more of this in the coming weeks, months, and years. Despite those who cite fragile economic predictions as reasons to ‘remain’ subject to centralized bureaucratic actors, there are much more basic reasons to ‘leave’, and the economists don’t have them in their equations.

In biological systems, boundaries are permeable, but not arbitrarily — they are semi-permeable. Systems which depend on their internal organization for persistence in the face of uncertainty must be free to manage their own semi-permeable boundaries, else they will make a Darwinian exit, making room for those organizations that are more able and willing to do so.

The Moral Case Against Projecting Pathological Certainty

The sciences have greatly enriched human understanding of the world in which we find ourselves, moving us from magical explanations of phenomena to tested and scrutinized conceptual and mathematical models. Perhaps ironically, one of the insights science has delivered to humanity is the vast uncertainty we face when dealing with complex systems – especially living systems.

Mathematical statistics provides a rigorous approach to quantifying uncertainty and places clear bounds on what claims one can and cannot make with scientific near-certainty. When an individual claims certainty on some matter and appeals to ‘science’ as justification, that individual should be compelled to demonstrate how this certainty follows from rigorous analysis, including that the underlying assumptions of the mathematical tools applied are met in the real-world system of interest. Short of this, one can only adopt an attitude of certainty as a non-scientific opinion. We call such an abuse of the term ‘science’ to justify a non-scientific opinion pathological certainty.

When pathological certainty is projected as expert advice to be trusted by non-experts, and when those who would place trust in the supposed expert bear real risks, there is great cause for moral concern.

Simply, in cases where there is vast scientific uncertainty and there exists the potential for severe harm to people and/or the environment, it is deeply immoral to project an image of science-backed certainty when adopting an advisory role to the public at large.

There is no such thing as an ‘anti-science’ position

A position on an issue, say a policy perspective on climate change, cannot in and of itself be ‘pro-‘ or ‘anti-science’ — only a position coupled with the reasoning for said position is sufficient for claiming that a position is appropriately informed by science or not.

In recent times, popular narratives have emerged that label some positions as inherently ‘anti-science’. Setting aside for the moment the fact that some positions are ‘a-scientific’ (that is, we can hold a position for non-scientific reasons), it is crucial to see why the ‘anti-science’ accusation is often a strawman and a red herring that works against fair-minded discussion and debate. This oversimplification is leveraged by those with agendas to silence dissenting views, which are the lifeblood of scientific progress.

For a position to be considered informed by science, the underlying reason for the position must accord with sound scientific reasoning (and not, as many seem to believe, whether the position conforms to some, oft-imagined, ‘consensus’ on the issue). This means conclusions are constrained by the underlying assumptions and limitations of the statistical tools used as part of the reasoning process. A detailed analysis of those constraints is beyond the scope of this post.

Consider the following claims:

1) “I believe in climate change, because yesterday it was hot outside.”

2) “I am skeptical of the predictive value of climate models because of structural uncertainties in the modeling approaches, and the significant impact this can have on long-term projections.”

Which is a more scientifically sound position? (I should note here that my perspective on climate change is a precautionary one).

Another example:

1) “GMOs are safe because there is nothing different about them from regular food.”

2) “Transgenic methodologies are extremely novel, harm can take significant time to surface (e.g. prion diseases such as bovine spongiform encephalopathy can take decades for symptoms to emerge), large-scale complex systems are notoriously difficult to predict, there is very little research done on ecological risks associated with large-scale genetic intervention; because of these reasons, and others, a precautionary approach to GMOs is warranted.”

Crucially, science by itself says little about the way we ought to address risk. Consider a situation in which we have 95% confidence of a favorable outcome. Would you ride in a plane based on those statistics?
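
Run that 95% through repetition and the answer becomes obvious (illustrative arithmetic only):

# '95% confidence of a favorable outcome' under repeated exposure:
p_favorable = 0.95
for n_flights in (1, 10, 50, 100):
    print(f"survive {n_flights:>3} flights: {p_favorable ** n_flights:.1%}")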

A position is not pro- or anti-science because of its conclusions, but because of how those conclusions were reached. This does not guarantee the correctness of the conclusions, but focusing on the arguments and having fair-minded debates in good faith is the only way we will reach the appropriate conclusions — not through oversimplifications and pro/anti tribalism.

Climate Models and Precautionary Measures

Forthcoming in Issues in Science and Technology, Summer 2015

Joseph Norman, Rupert Read, Yaneer Bar-Yam, and Nassim Nicholas Taleb


The policy debate with respect to anthropogenic climate change typically revolves around the accuracy of models. Those who contend that models make accurate predictions argue for specific policies to stem the foreseen damaging effects; those who doubt their accuracy cite a lack of reliable evidence of harm to warrant policy action.

These two alternatives are not exhaustive. One can sidestep the “skepticism” of those who question existing climate models, by framing risk in the most straightforward possible terms, at the global scale. That is, we should ask “what would the correct policy be if we had no reliable models?”

We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale. Even a risk with a very low probability becomes unacceptable when it affects all of us – there is no reversing mistakes of that magnitude.

Without any precise models, we can still reason that polluting or altering our environment significantly could put us in uncharted territory, with no statistical track record and potentially large consequences. It is at the core of both scientific decision making and ancestral wisdom to take seriously absence of evidence when the consequences of an action can be large. And it is standard textbook decision theory that a policy should depend at least as much on uncertainty concerning the adverse consequences as it does on the known effects.

Further, it has been shown that in any system fraught with opacity, harm is in the dose rather than in the nature of the offending substance: it increases nonlinearly to the quantities at stake. Everything fragile has such property. While some amount of pollution is inevitable, high quantities of any pollutant put us at a rapidly increasing risk of destabilizing the climate, a system that is integral to the biosphere. Ergo, we should build down CO2 emissions, even regardless of what climate models tell us.

This leads to the following asymmetry in climate policy. The scale of the effect must be demonstrated to be large enough to have impact. Once this is shown, and it has been, the burden of proof of absence of harm is on those who would deny it.

It is the degree of opacity and uncertainty in a system, as well as asymmetry in effect, rather than specific model predictions, that should drive the precautionary measures. Push a complex system too far and it will not come back. The popular belief that uncertainty undermines the case for taking seriously the ‘climate crisis’ that scientists tell us we face is the opposite of the truth. Properly understood, as driving the case for precaution, uncertainty radically underscores that case, and may even constitute it.

Antifragile Random Walks

A million timesteps of a random walk whose increments follow a Pareto distribution with α = 1, mode shifted down to −11 from 1. Notice how, for most timesteps, the walk moves downward. However, the rarer upticks are large, orders of magnitude larger than the downward movements.

import numpy as np
import matplotlib.pyplot as plt

T = 1_000_000
X = np.zeros(T)

for i in range(5):                 # five independent sample paths
    for t in range(T - 1):
        # np.random.pareto(1) draws a Lomax variate (classic Pareto minus 1);
        # adding 1 recovers a classic Pareto with mode 1, and subtracting 12
        # shifts the mode down to -11, matching the description above.
        X[t + 1] = X[t] + (np.random.pareto(1) + 1) - 12

    plt.plot(X)
    plt.show()