Category Archives: Precautionary principle

The response to Houston flooding should not be (primarily) about climate change

As the ongoing devastation around Houston, TX continues to unfold from Hurricane Harvey flood waters, it is crucial that we begin learning lessons to better mitigate these risks in the future — for Houston and elsewhere. 

Inevitably, both scientific and political discussions have turned their focus towards climate change, and to what degree changes in the climate may have played a role in this and other disasters. 

Here, I do not intend to develop an argument about whether anthropogenic climate change is real, nor about whether it has been adequately demonstrated scientifically (importantly, those are two separate issues).

Rather, this is an argument about how best to mitigate the risks of such disasters moving forward, regardless of whether the culprit is climate change.

The crucial question is: given our current state of knowledge and capacity to act, what should we do?

There is a deep problem in our underestimation of extreme events under fat tails. This problem, which Nassim Taleb has both technically addressed and made famous, remains central regardless of any climate change effects. We must get better at anticipating and addressing extreme events.
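To illustrate the underestimation problem, under fat tails a given extreme can be orders of magnitude more likely than a Gaussian model predicts. The sketch below uses illustrative parameters only (unit scale, tail index 1.5 for the Pareto) and compares tail probabilities of a standard normal with those of a fat-tailed Pareto distribution:

```python
import math

def normal_tail(k):
    # P(X > k) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, x_m=1.0, alpha=1.5):
    # P(X > x) for a Pareto distribution with scale x_m and tail index alpha
    return (x_m / x) ** alpha if x >= x_m else 1.0

for k in (3, 5, 10):
    print(f"event size {k}: normal {normal_tail(k):.2e}, pareto {pareto_tail(k):.2e}")
```

A "10-sigma" event is essentially impossible under the Gaussian, but has probability on the order of a few percent under this Pareto tail; a thin-tailed model of a fat-tailed world severely underestimates extremes.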

For the sake of discussion, let’s assume that climate change is real, and has caused an increase in the frequency of extreme weather events. 

In that case, there are two pathways to addressing the problem and reducing the impact of extreme events:

1) Adjust the climate such that there are fewer extreme weather events

2) Adjust infrastructure and behavior to lessen the impact of extreme events

Let’s consider two aspects of each: (1) our capacity to affect, control, and engineer, and (2) the risks associated with such an undertaking. 

Climate controllability and risks

The controllability of climate is low, most essentially because of our poor understanding of it. Most policy proposals intended to influence climate focus on reducing CO2 emissions. This is wise insofar as it is a via negativa approach (that is, remove rather than add); however, it suffers from our inability to control this variable immediately and directly (shall we use Houston as leverage for negotiating with China?). Moreover, it is uncertain how effective this approach would be even if it were practically achievable.

Geo-engineering is an alternative approach. It again suffers from our poor understanding of the system we are attempting to control or influence, and is likely to induce unintended consequences at the scale of the engineering, that is, the global scale. There is a very real possibility that such an undertaking would make things worse, not better.


Infrastructure and behavior controllability and risks

The controllability of local infrastructure is high. It demands buy-in from a much smaller number of stakeholders. Construction methods are well-established and can be modeled reasonably well.

Moreover, controllability of behavior is high. Individuals and city planners can reduce the number of residents in known flood zones. 

The risks associated with unintended consequences are at the local scale, so even where they occur, their impact will be bounded. 

                 Controllability   Risks
Climate          LOW               HIGH
Infrastructure   HIGH              LOW

Climate: controllability is LOW due to uncertainty and the difficulty of buy-in; risks are HIGH due to uncertainty and the global scale of intervention.

Infrastructure: controllability is HIGH due to well-established methodologies and achievable buy-in; risks are LOW due to well-established methodologies and the local nature of both the intervention and its higher-order effects.

Closing

Scientific and policy discussions about the role of climate change are reasonable and appropriate following a devastating weather event like the one we are witnessing in Houston. However, they should not be the primary focus of effort and attention. We would be well advised to learn how to better craft our exposure to extreme events, and to better anticipate their eventual occurrence through non-naive risk analysis that incorporates the study of tail behavior.

These issues are not mutually exclusive, and I don’t intend to portray them as such. Nevertheless, for long-term mitigation of the impact of extreme events, it is vital to focus on our exposure to them rather than trying to control them. 

In other words, don’t try to change the random variable X, instead change your exposure, f(X) (credit for this framing to Nassim Taleb). 
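A minimal simulation of that framing (all numbers hypothetical): leave the distribution of flood heights X untouched, and compare the expected damage under two exposure functions f, one with assets in the flood zone and one with assets relocated above it:

```python
import random

random.seed(7)  # reproducible illustration

# X: fat-tailed flood heights (Pareto, tail index 1.5) -- hypothetical parameters
N = 100_000
heights = [random.paretovariate(1.5) for _ in range(N)]

def damage_in_flood_zone(h):
    # exposure f(X): damage accrues once water exceeds 2 units (hypothetical)
    return max(0.0, h - 2.0)

def damage_relocated(h):
    # modified exposure: assets moved so damage only begins at 5 units
    return max(0.0, h - 5.0)

mean_in_zone = sum(map(damage_in_flood_zone, heights)) / N
mean_relocated = sum(map(damage_relocated, heights)) / N
print(f"mean damage in flood zone:  {mean_in_zone:.3f}")
print(f"mean damage after relocating: {mean_relocated:.3f}")
```

The floods themselves (X) are identical in both scenarios; only the exposure function f changes, yet the expected damage drops.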

For more on climate and precaution in the face of uncertainty, see our letter in Issues in Science and Technology here.


The Moral Case Against Projecting Pathological Certainty

The sciences have greatly enriched human understanding of the world in which we find ourselves, moving us from magical explanations of phenomena to tested and scrutinized conceptual and mathematical models. Perhaps ironically, one of the insights science has delivered to humanity is the vast uncertainty we face when dealing with complex systems – especially living systems.

Mathematical statistics provides a rigorous approach to quantifying uncertainty and places clear bounds on what claims one can and cannot make with scientific near-certainty. When an individual claims certainty on some matter and appeals to ‘science’ as justification, that individual should be compelled to demonstrate how this certainty follows from rigorous analysis, including that the underlying assumptions of the mathematical tools applied are met in the real-world system of interest. Short of this, one can only adopt an attitude of certainty as a non-scientific opinion. We call such an abuse of the term ‘science’ to justify a non-scientific opinion pathological certainty.

When pathological certainty is projected as expert advice to be trusted by non-experts, and when those who would place trust in the supposed expert bear real risks, there is great cause for moral concern.

Simply, in cases where there is vast scientific uncertainty and there exists the potential for severe harm to people and/or the environment, it is deeply immoral to project an image of science-backed certainty when adopting an advisory role to the public at large.

There is no such thing as an ‘anti-science’ position

A position on an issue, say a policy perspective on climate change, cannot in and of itself be ‘pro-’ or ‘anti-science’; only a position coupled with the reasoning behind it can be judged as appropriately informed by science or not.

In recent times, popular narratives have emerged that label some positions as inherently ‘anti-science’. Setting aside for the moment the fact that some positions are ‘a-scientific’ (that is, we can hold a position for non-scientific reasons), it is crucial to see why the ‘anti-science’ accusation is often a strawman and a red herring that works against fair-minded discussion and debate. This oversimplification is leveraged by those with agendas to silence dissenting views, which are the lifeblood of scientific progress.

For a position to be considered informed by science, the underlying reason for the position must accord with sound scientific reasoning (and not, as many seem to believe, whether the position conforms to some, oft-imagined, ‘consensus’ on the issue). This means conclusions are constrained by the underlying assumptions and limitations of the statistical tools used as part of the reasoning process. A detailed analysis of those constraints is beyond the scope of this post.

Consider the following claims:

1) “I believe in climate change, because yesterday it was hot outside.”

2) “I am skeptical of the predictive value of climate models because of structural uncertainties in the modeling approaches, and the significant impact this can have on long-term projections.”

Which is a more scientifically sound position? (I should note here that my perspective on climate change is a precautionary one).

Another example:

1) “GMOs are safe because there is nothing different about them from regular food.”

2) “Transgenic methodologies are extremely novel, harm can take significant time to surface (e.g. prion diseases such as bovine spongiform encephalopathy can take decades for symptoms to emerge), large-scale complex systems are notoriously difficult to predict, there is very little research done on ecological risks associated with large-scale genetic intervention; because of these reasons, and others, a precautionary approach to GMOs is warranted.”

Crucially, science by itself says little about the way we ought to address risk. Consider a situation in which we have 95% confidence of a favorable outcome. Would you ride in a plane based on those statistics?
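The arithmetic makes the point: a 95% per-trial success rate compounds into near-certain failure over repeated trials. A quick check (the 0.95 figure is the one carried over from the text):

```python
p_safe = 0.95  # per-flight probability of a favorable outcome, from the text

for n in (1, 10, 50):
    p_at_least_one_failure = 1 - p_safe ** n
    print(f"{n} flights: P(at least one crash) = {p_at_least_one_failure:.1%}")
```

Over 10 flights the chance of at least one crash already exceeds 40%, and over 50 flights it exceeds 90%; "95% confidence" is nowhere near an acceptable standard for repeated, ruinous risks.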

A position is not pro- or anti-science because of its conclusions, but because of how those conclusions were reached. This does not guarantee the correctness of the conclusions, but focusing on the arguments and having fair-minded debates in good faith is the only way we will reach the appropriate conclusions — not through oversimplifications and pro/anti tribalism.

Climate Models and Precautionary Measures

Forthcoming in Issues in Science and Technology, Summer 2015

Joseph Norman, Rupert Read, Yaneer Bar-Yam, and Nassim Nicholas Taleb


The policy debate with respect to anthropogenic climate change typically revolves around the accuracy of models. Those who contend that models make accurate predictions argue for specific policies to stem the foreseen damaging effects; those who doubt their accuracy cite a lack of reliable evidence of harm to warrant policy action.

These two alternatives are not exhaustive. One can sidestep the “skepticism” of those who question existing climate models, by framing risk in the most straightforward possible terms, at the global scale. That is, we should ask “what would the correct policy be if we had no reliable models?”

We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale. Even a risk with a very low probability becomes unacceptable when it affects all of us – there is no reversing mistakes of that magnitude.

Without any precise models, we can still reason that polluting or altering our environment significantly could put us in uncharted territory, with no statistical track record and potentially large consequences. It is at the core of both scientific decision making and ancestral wisdom to take seriously absence of evidence when the consequences of an action can be large. And it is standard textbook decision theory that a policy should depend at least as much on uncertainty concerning the adverse consequences as it does on the known effects.

Further, it has been shown that in any system fraught with opacity, harm is in the dose rather than in the nature of the offending substance: it increases nonlinearly to the quantities at stake. Everything fragile has such a property. While some amount of pollution is inevitable, high quantities of any pollutant put us at a rapidly increasing risk of destabilizing the climate, a system that is integral to the biosphere. Ergo, we should build down CO2 emissions, even regardless of what climate models tell us.
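The nonlinearity claim can be sketched with a toy convex harm function (quadratic here, purely illustrative): the same total quantity does disproportionately more harm when concentrated than when spread out.

```python
def harm(dose):
    # convex (quadratic) dose-response: a stand-in for "harm is in the dose"
    return dose ** 2

# ten units of pollutant: spread over ten small doses vs. one concentrated dose
spread_out = 10 * harm(1)
concentrated = harm(10)
print(spread_out, concentrated)  # 10 vs 100: same total, tenfold the harm
```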

This leads to the following asymmetry in climate policy. The scale of the effect must be demonstrated to be large enough to have impact. Once this is shown, and it has been, the burden of proof of absence of harm is on those who would deny it.

It is the degree of opacity and uncertainty in a system, as well as asymmetry in effect, rather than specific model predictions, that should drive the precautionary measures. Push a complex system too far and it will not come back. The popular belief that uncertainty undermines the case for taking seriously the ’climate crisis’ that scientists tell us we face is the opposite of the truth. Properly understood, as driving the case for precaution, uncertainty radically underscores that case, and may even constitute it.