Climate sensitivity: a review of Friedrich et al. (2016)

Full citation: Friedrich, T., Timmermann, A., Tigchelaar, M., Timm, O. E. & Ganopolski, A. (2016) Nonlinear climate sensitivity and its implications for future greenhouse warming. Science Advances 2 (11), e1501923.

Available here under open access.

 

My housemate, Ben, sent me an article published in The Independent last Friday, reporting that a “temperature rise of more than 7°C within a lifetime” is projected by a paper published in Science Advances last week. A few people mentioned it to me over the weekend, in fact. Granted, the scenario sounds hellish, but as is common with science reporting in the media, I think it is likely an exaggeration. Call me cynical, but it will probably get the Indy more hits than if they reported the actual conclusions of the study, which are broadly in agreement with the projections in the IPCC’s 5th Assessment Report, widely considered the most authoritative synthesis of the climate change literature.

Models and proxies

The paper by Friedrich et al. uses palaeoclimate proxies (data, such as marine isotopes in sediment cores, that are used to infer past climate using some very clever methodologies) for sea surface temperature (SST) to reconstruct surface air temperature (SAT), and hence to estimate climate sensitivity. Climate sensitivity is a measure of the climate system’s response to a perturbation such as the emission of greenhouse gases, and is typically expressed as the annual mean surface temperature change associated with a doubling of atmospheric CO2 relative to pre-Industrial levels (°C per 2 x CO2). Perturbations are expressed as radiative forcing, in units of W m-2, defined as the change in the planetary energy balance at the top of the atmosphere caused by a perturbation such as added CO2. Radiative forcing essentially measures the difference between the energy the Earth absorbs and the energy it radiates back out to space.
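To make the definition concrete, here is a minimal back-of-envelope sketch (in Python, not anything from the paper) of how a temperature response and the radiative forcing that drove it translate into a sensitivity expressed in °C per doubling of CO2. The 3.7 W m-2 forcing per doubling is the commonly used value; the function name and the example numbers are my own.

```python
# Minimal back-of-envelope sketch (not the paper's code): turning a temperature
# response and the radiative forcing that drove it into a climate sensitivity
# in degC per CO2 doubling. 3.7 W m-2 is the commonly used forcing for a
# doubling of CO2; the function name and example numbers are my own.

F_2XCO2 = 3.7  # W m-2, approximate radiative forcing from doubling CO2


def climate_sensitivity(delta_t, delta_f, f_2xco2=F_2XCO2):
    """Sensitivity (degC per 2xCO2) implied by a temperature change delta_t
    (degC) in response to a forcing delta_f (W m-2), assuming the response
    scales linearly with forcing."""
    return delta_t * f_2xco2 / delta_f


# Hypothetical example: a 5 degC glacial-interglacial temperature change driven
# by roughly 6 W m-2 of combined forcing implies S of about 3 degC per 2xCO2.
print(climate_sensitivity(delta_t=5.0, delta_f=6.0))  # ~3.1
```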

They use a palaeoclimate model to estimate the specific equilibrium climate sensitivity for a given radiative forcing, which for convenience we’re going to call S from now on, too. The model is run over the same time period for which proxy data are available, so the two datasets are comparable. However, each of these methods has its own problems. Firstly, the “SST proxy network spatially under-samples SAT variability”, which means that these data do not capture the extremes in variation between different locations (think about the difference in temperature between Antarctica and the Sahara, for instance, and you’ll begin to understand why this might be a problem – how do we know that these samples are representative of global mean climate? What even is ‘mean climate’ anyway? I think that might be another question for another time…). Secondly, the “climate model simulation underestimates the magnitude of the reconstructed SST variability”, so many of the extremes in SAT are smoothed out, removing the complexity that is present in the observational record. This points to a wider issue with the two approaches: point-based observations provide an in-depth snapshot of conditions at a particular location, while models provide a broader overview of what is going on over a larger area, but may gloss over any heterogeneity. The authors point to the need for a combined approach to overcome these limitations.
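To see why spatial under-sampling matters, here is a toy illustration (entirely my own, with made-up numbers) of how averaging a handful of unevenly distributed sites can give a different answer from a properly area-weighted global mean.

```python
# Toy illustration (my own, with made-up numbers) of spatial under-sampling:
# a handful of unevenly placed "proxy" sites, averaged with equal weights,
# versus an area-weighted mean over a full latitude-dependent field.
import numpy as np

lats = np.linspace(-89.5, 89.5, 180)            # 1-degree latitude bands
field = 30.0 * np.cos(np.radians(lats)) - 5.0   # crude pole-to-equator temperature gradient

# Area-weighted global mean: each band's area is proportional to cos(latitude).
weights = np.cos(np.radians(lats))
global_mean = np.average(field, weights=weights)

# "Proxy network": five sites, mostly at low and mid latitudes, weighted equally.
site_lats = np.array([-35.0, -10.0, 0.0, 15.0, 40.0])
site_temps = 30.0 * np.cos(np.radians(site_lats)) - 5.0
network_mean = site_temps.mean()

print(f"area-weighted global mean: {global_mean:.1f} degC")
print(f"unweighted 5-site mean:    {network_mean:.1f} degC")  # biased warm relative to the true mean
```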

Despite these problems, the two records match well in terms of the timing and overall scale of temperature change. To correct the modelled S, which is too low, they apply a correction factor based on the proxy data. They also use the difference between the SST and SAT anomalies in eight PMIP3 models (the Paleoclimate Modelling Intercomparison Project, Phase 3) between the Last Glacial Maximum (LGM), when temperatures were considerably colder than today, and the pre-Industrial period, a relatively warm interglacial, to calculate a conversion factor between SST and SAT that they can then apply to contemporary data. The problem I see with this is that there is no guarantee that this factor has remained static over the last 800,000 years. There may have been a change in the system, and/or this scaling factor may be dynamic and related to uncertain or unknown feedbacks in the system. If this were the case, it would make interpreting these proxy data much more complex.
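A rough sketch of the kind of conversion being described might look like the following. All of the numbers are placeholders, not values from the paper or from PMIP3.

```python
# Sketch of an SST-to-SAT conversion factor (assumed structure; all numbers
# below are placeholders, not values from the paper or from PMIP3).
import numpy as np

# LGM-minus-pre-industrial anomalies from several hypothetical model runs.
model_delta_sat = np.array([-4.8, -5.5, -4.2, -6.0, -5.1])  # degC, global SAT
model_delta_sst = np.array([-3.1, -3.6, -2.8, -3.9, -3.3])  # degC, global SST

# Conversion factor: how much larger the SAT change is than the SST change.
sat_per_sst = np.mean(model_delta_sat / model_delta_sst)

# Apply it to a proxy-derived SST anomaly, under the strong assumption that
# the factor has stayed constant through time.
proxy_delta_sst = -2.5  # degC, a hypothetical reconstructed SST anomaly
estimated_delta_sat = sat_per_sst * proxy_delta_sst

print(f"conversion factor ~{sat_per_sst:.2f}")
print(f"estimated SAT anomaly ~{estimated_delta_sat:.1f} degC")
```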

State-dependent climate sensitivity

They find that S is dependent on the background climate state, that is, whether the Earth is in a cold (glacial) or warm (interglacial, such as the one we are currently in) phase. They report that they can say with 99.99% certainty that S differs between cold and warm periods. Specifically, they find that S is more than twice as large during warm periods as it is during cold periods (4.88°C per 2 x CO2 vs. 1.78°C per 2 x CO2, respectively).
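One way to picture a state-dependent sensitivity is to regress temperature against forcing separately for cold and warm background states. The sketch below uses synthetic data, with slopes tuned so the output lands near the paper’s quoted values, so it illustrates the idea rather than reproducing their analysis.

```python
# Synthetic illustration of a state-dependent sensitivity (not the paper's
# data or method): regress temperature on forcing separately for cold and warm
# background states and convert each slope to degC per CO2 doubling. The
# slopes are tuned so the output lands near the values quoted above.
import numpy as np

F_2XCO2 = 3.7                         # W m-2 per CO2 doubling
PIVOT = -3.0                          # W m-2; crude divide between "cold" and "warm" states
SLOPE_COLD, SLOPE_WARM = 0.48, 1.32   # degC per (W m-2)

rng = np.random.default_rng(0)
forcing = rng.uniform(-8.0, 2.0, size=400)   # W m-2, relative to pre-industrial
cold = forcing < PIVOT

# Piecewise-linear temperature response: shallow in cold states, steep in warm ones.
temp = np.where(
    cold,
    SLOPE_WARM * PIVOT + SLOPE_COLD * (forcing - PIVOT),
    SLOPE_WARM * forcing,
) + rng.normal(0.0, 0.3, size=forcing.size)   # proxy-like noise

for label, mask in [("cold", cold), ("warm", ~cold)]:
    slope = np.polyfit(forcing[mask], temp[mask], 1)[0]
    print(f"S_{label} ~ {slope * F_2XCO2:.2f} degC per 2xCO2")
```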

What this means, they argue, is that using the mean value for S irons out the differences between warm and cold phases, and reduces our ability to accurately project temperature change in the short term, over human timescales. Indeed, using Friedrich et al.’s mean (3.22°C per 2 x CO2), which is virtually identical to the IPCC’s CMIP5-derived result (3.2°C per 2 x CO2), results in lower projected warming by 2100 than the warm-phase value would. We’re currently in an interglacial, so we should be using the estimate for S that applies during a warm phase, which the authors call S_warm.

Equilibrium or transience?

There’s one part of the term ‘equilibrium climate sensitivity’ that we haven’t discussed: the ‘equilibrium’ part. Equilibrium is reached when all parts of the system are balanced, when there are no continuing inputs or losses. And then there’s the question of how long it takes for this to happen. Even if we stopped emitting greenhouse gases tomorrow, the system would still not be in equilibrium – it takes a very long time for a system as large and complex as the climate to balance, and there are in-built time lags that control how fast equilibrium can be reached.

This complicates things further, because it means that projections up to 2100 cannot rely on the assumption that the climate will be at equilibrium by the end of the century. This is especially true if we continue to burn fossil fuels and emit greenhouse gases as we currently do, because there will be an ongoing radiative forcing associated with these activities. A different measure must therefore be used, called the transient climate response (TCR), which takes into account factors like thermal inertia and changes in the ocean’s ability to take up heat. Based on S_warm, the TCR calculated by Friedrich et al. is around 2.74°C per 2 x CO2.
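As a very rough check on the numbers (my simplification, not the paper’s method), scaling the warm-state TCR linearly with the 8.5 W m-2 of forcing reached by 2100 under RCP8.5 lands in the same ballpark as the projections discussed in the next section.

```python
# Rough back-of-envelope (my simplification, not the paper's actual method):
# scale the warm-state TCR linearly with the forcing reached by 2100 under
# RCP8.5. This ignores the time path of the forcing and ocean heat uptake,
# so it only roughly brackets the projections discussed below.
TCR_WARM = 2.74       # degC per 2xCO2, the transient response quoted above
F_2XCO2 = 3.7         # W m-2 per CO2 doubling
RCP85_FORCING = 8.5   # W m-2 reached by 2100 under RCP8.5

warming_2100 = TCR_WARM * RCP85_FORCING / F_2XCO2
print(f"~{warming_2100:.1f} degC of transient warming")  # ~6.3 degC, the right ballpark
```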

Projecting future change

Using this TCR, Friedrich et al. take the IPCC’s RCP8.5 scenario, in which radiative forcing reaches 8.5 W m-2 by 2100, and apply their values to project future temperature change. In this way, it serves as a comparison with, and a check on, the IPCC’s projections. They find that their projections overlap with the upper ranges of those from CMIP5, the model ensemble the IPCC largely used to generate its projections, thereby lending support to them. However, their estimates of mean SAT change are slightly higher than those given in the IPCC’s most recent report.

Their results suggest that the global SAT increase from 1880 to 2100 will amount to 5.86°C (4.78 – 7.36°C), around 1°C higher than the 4.86°C (3.42 – 6.40°C) simulated by the CMIP5 ensemble. This would mean that global mean temperatures would be higher than anything seen in the last 784,000 years, possibly by as much as three times the peak warmth seen in that record.

Now, they don’t say why they have chosen to use RCP8.5, but this may be because our current global emissions trajectories put us on course to realise this fairly nightmarish scenario by 2100. Perhaps they wanted to examine the worst-case scenario. Whatever their reasoning, this means that their conclusions (i.e. the absolute values for warming) apply only to RCP8.5.

RCP8.5 is already an extreme scenario, and the response to forcing this high is likely to be highly non-linear: it could trigger positive feedbacks that reinforce warming and perhaps induce regime shifts to new climate states. It is thought that this has happened before in Earth’s history. However, what the Independent article fails to mention is that all of this hinges on a scenario in which we do not change our emissions and take no action on climate change. This is a business-as-usual scenario. Yes, it produces warming of around 5°C if you take the CMIP5 mean, or 6°C based on Friedrich et al.’s results, but this is contingent on our doing nothing.

Uncertainty

There is considerable uncertainty associated with climate sensitivity, and there are many methods of estimating it. Palaeoclimate data are often used to infer the value of S, and typically yield values ranging as widely as 1 – 6°C per 2 x CO2. The IPCC’s 2013 Fifth Assessment Report concluded that equilibrium climate sensitivity is “likely” (a 66% or greater chance) to lie between 1.5 and 4.5°C, although models and ensembles like CMIP5 give a range of 2 – 4.5°C with a mean of 3.2°C.

Generally, estimates of S based on palaeoclimate records tend to be towards the higher end of the scale, whereas those based on instrumental records and modelling are slightly lower – as shown in the figure below, taken from IPCC AR5, Ch. 12 (Collins et al., 2013), Box 12.2, Fig. 1. This, along with the huge divergence between studies and methods, further reinforces the point that our understanding of the climate system, and of its sensitivity to change, is severely limited at present.

[Figure: probability distributions of equilibrium climate sensitivity estimates from different lines of evidence, from IPCC AR5, Ch. 12 (Collins et al., 2013), Box 12.2, Fig. 1.]

Uncertainty comes from several sources.

Firstly, there is uncertainty in the observations (the proxies) themselves. Mismatches during marine isotope stages MIS5e and MIS11 (specific periods in time) are widely reported, and can be explained by summertime biases in the SST proxies and age uncertainties, as well as model uncertainties.

Secondly, there is uncertainty in the methods used to analyse these data. The uncertainty of the paper’s combined SAT reconstruction is ±2.12°C, which is huge relative to the magnitude of their results.

Thirdly, there are considerable uncertainties in the models: there is a lot we don’t know, and models are necessarily run at coarse resolution when they are simulating hundreds of thousands of years.

Fourthly, the size of the temperature difference between the LGM and the pre-Industrial period is poorly constrained. Taking a value of 3°C yields an S_warm of 2.52°C per 2 x CO2, which equates to 2100 warming of 3.84°C; using the 4°C estimate from Annan & Hargreaves (2012) results in 2100 warming of 4.68°C. That’s a big difference. Further, S_warm is higher when based on proxies than on models, even though the models show a larger LGM-to-pre-Industrial SAT difference. Friedrich et al. use the mean of the proxy-derived (5°C) and model-derived (6.5°C) estimates before 10,000 years ago (the limit of proxy data coverage) and thereafter just the model-derived estimate to calculate S, so this choice could affect their results.

Lastly, climate responses differ across the globe: sensitivity is particularly high at the poles due to a phenomenon called polar amplification. The authors test this using an Antarctic temperature reconstruction scaled by two values of polar amplification. With the high amplification factor (1.9), projected warming could reach 6.32°C by 2100; with the low factor (1.2), it could reach 4.87°C.

Any combination of these uncertain parameters leads to a vast range in the final projections of future change, as the rough sketch below illustrates.
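To make the point about combined uncertainties concrete, here is a crude Monte Carlo sketch (my own construction, with illustrative parameter ranges rather than the paper’s) showing how uncertainty in just two quantities spreads out the projected warming.

```python
# Crude Monte Carlo sketch (my own construction, with illustrative ranges, not
# the paper's analysis): sample a transient sensitivity and a 2100 forcing from
# broad ranges and see how far the projected warming spreads.
import numpy as np

F_2XCO2 = 3.7  # W m-2 per CO2 doubling
N = 100_000

rng = np.random.default_rng(42)
tcr = rng.uniform(1.8, 3.0, size=N)           # degC per 2xCO2, loosely spanning the values above
forcing_2100 = rng.uniform(7.0, 8.5, size=N)  # W m-2, allowing some forcing uncertainty

warming = tcr * forcing_2100 / F_2XCO2        # simple linear scaling, ignoring ocean lags
lo, mid, hi = np.percentile(warming, [5, 50, 95])
print(f"5th-95th percentile: {lo:.1f} to {hi:.1f} degC (median {mid:.1f} degC)")
```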

But what does it mean?

What this shows is that nothing is certain: models are biased, proxies are uncertain, and every step of the process of calculating climate sensitivity, and hence of projecting future change, is complicated by what we do not know. The palaeoclimate-data-constrained estimates of the temperature response to RCP8.5 are ~16% higher than those projected by CMIP5. We may see temperatures rise by 7°C in the next 85 years, as the Independent says, but we cannot be sure. Besides, that extreme figure sits at the top of the range of values given in this paper, a range that is in itself quite extreme. The authors note that warming of 4°C is “likely” by the end of the century, a figure that is alarming but still within the range of possibilities discussed by the IPCC.

The IPCC’s projections represent an analysis of all the available literature on the subject, which by definition takes into account extreme results on either side of the average. Unhelpful representations of scientific results, such as this article, reinforce public misunderstanding of science. It is no wonder that the comments are full of marginal views when the article itself takes the most extreme end of the spectrum of results and presents it as truth. Rather than writing articles that chase hits, personally I reckon it would be better to report on the science.

 
