Documenting the Global Extent of the Medieval Warm Period

Repost from Watts up with That here: WUWT

Some typos in the text and diagrams have been corrected

Introduction

In this article I pose the following questions:

  • Was the Medieval Warm Period (MWP) a global event?
  • Were the MWP temperatures higher than recent times?

The reasons for asking these questions are that the climate establishment has tried to sideline the MWP as a purely local North Atlantic event and frequently states that current temperatures are the highest ever.

I attempt to answer these questions below.

Mapping Project for the Medieval Warm Period

I use the mapping project for the Medieval Warm Period (MWP) developed by Dr Sebastian Luening and Fritz Vahrenholt to establish the global extent of the MWP. This project is a considerable undertaking and I commend the authors for their work.

Luening states on Research Gate that,

“The project aims to identify, capture and interpret all published studies about the Medieval Climate Anomaly (Medieval Warm Period) on a worldwide scale. The data is visualized through a freely accessible online map: http://t1p.de/mwp.”

I show a screenshot from the project in Figure 1 but I recommend that you use the online version at http://t1p.de/mwp because it contains a wealth of data, including abstracts and links to the individual papers.

(Note – The t1p.de/mwp URL given may not be reachable for some viewers because it is a shortened URL, and it does not work for me. The author maintains that the URL is valid and offered no alternative, but I independently found this link to a Google Maps document which might work better for some readers – Anthony)


Figure 1: Screenshot of MWP Mapping Project (Source: Luening http://t1p.de/mwp downloaded 27-Dec-2016)

A cursory inspection of Figure 1 indicates that there are a large number of warm study locations dispersed throughout the world. However, to determine the global numbers for the Warm-Cold-Neutral-Dry-Wet studies, I downloaded the mapped data for the 934 studies that were available on 30 December 2016 and these are summarised in Figure 2.

[Figure 2(a): temperature and hydroclimate data; Figure 2(b): temperature data only]

Figure 2: Results from MWP Mapping Project (Source: Luening http://t1p.de/mwp downloaded 30-Dec-2016)

The following observations are evident from Figure 2:

a. The number of Warm studies (497) greatly exceeds that of the other categories: Warm studies make up 53% of the studies when temperature and hydroclimate data are included and 88% when only temperature data are used.

b. The number of Cold studies (18) is very small, at 2-3% of the overall studies.

c. The number of Neutral studies (53) is comparatively low, at 6-9% of the overall studies.

d. The number of studies that report only hydroclimatic data is not insignificant: the Dry studies (184) and the Wet studies (182) make up 20% and 19% of the overall studies respectively (a short calculation reproducing these percentages is sketched after this list).
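
These percentages follow directly from the downloaded study counts; a minimal Python sketch, using the counts as transcribed from the map data on 30 December 2016, reproduces them:

```python
# Study counts downloaded from the MWP Mapping Project (30-Dec-2016); 934 studies in total.
counts = {"warm": 497, "cold": 18, "neutral": 53, "dry": 184, "wet": 182}

total_all = sum(counts.values())                                   # temperature + hydroclimate studies
total_temp = counts["warm"] + counts["cold"] + counts["neutral"]   # temperature-only studies

for category, n in counts.items():
    line = f"{category:>7s}: {n:3d} studies, {100 * n / total_all:4.1f}% of all studies"
    if category in ("warm", "cold", "neutral"):
        line += f", {100 * n / total_temp:4.1f}% of temperature-only studies"
    print(line)
```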

In summary, the overwhelming evidence from the Luening MWP Mapping Project to date is that the MWP was globally warm, but it is not immediately obvious what the definition of “warm” is.

Descriptions of warm or cold are given in the individual studies. For example, Kuhnert & Mulitza (2011: GeoB 9501) (extracted from Luening) state that the

“Medieval Warm Period 800-1200 AD was about 1.1°C warmer (50 years mean) than subsequent Little Ice Age.”

Now, whilst this description is useful, it does not allow us to compare MWP temperatures with modern warming. Therefore, I compare modern temperatures with the MWP below.

How Warm was the MWP?

Several of the references in the Luening MWP Mapping Project are also referenced in Ljungqvist (2010); therefore, the latter may be used to estimate the temperature in the Northern Hemisphere (NH) during the MWP.

Note that I have used the Ljungqvist (2010) NH paper because I already had the data available on my laptop. However, readers could carry out a similar exercise using their preferred study to derive global or hemispheric temperatures for comparison with the present day.

Ljungqvist published his data along with his paper (other climate scientists should do the same) and Figure 3 shows my version of the Ljungqvist chart (reproduced using his data). I have also annotated the chart to show previous warm and cold periods.


Figure 3: Estimate of Extra-tropical Northern Hemisphere (90–30°N) Decadal Mean Temperature Variations (after Ljungqvist, 2010)

a. All values are decadal mean values.

b. Proxy temperatures are shown in the thick black line and are relative to the 1961–1990 mean instrumental temperature from the variance adjusted CRUTEM3+HadSST2 90–30°N record.

c. Two standard deviation error bars are shown in light grey shading.

d. CRUTEM3+HadSST2 instrumental temperatures are shown in the red line.

The following observations are evident from Figure 3:

a. The Medieval Warm Period (MWP) temperature is warmer than the Modern Warm Period proxy temperature – at no time do the modern proxies exceed the MWP proxies.

b. The recent instrumental temperature is warmer than the Medieval Warm Period temperature.

c. The MWP is obviously preindustrial and it would be logical to use the MWP temperature as the preindustrial baseline instead of the Little Ice Age (LIA) temperature favoured by the climate establishment.

Observations (a) and (b) above are based on Ljungqvist’s comments. He states that [emphasis added],

“…the Medieval Warm Period, from the ninth to the thirteenth centuries, seem to have equalled or exceeded the AD 1961-1990 mean temperature level in the extra-tropical Northern Hemisphere.”

He also adds the following words of caution [emphasis added],

“Since AD 1990, though, average temperatures in the extra-tropical Northern Hemisphere exceed those of any other warm decades the last two millennia, even the peak of the Medieval Warm Period, if we look at the instrumental temperature data spliced to the proxy reconstruction. However, this sharp rise in temperature compared to the magnitude of warmth in previous warm periods should be cautiously interpreted since it is not visible in the proxy reconstruction itself.”

Regarding Ljungqvist’s note of caution and item (b) above, comparing instrumental temperatures with those derived from proxies is not an “apples for apples” comparison because proxy temperatures are damped (flattened out) whilst thermometers respond very quickly to changes in temperature.

Ljungqvist describes the damping of proxy temperatures thus [emphasis added],

“The dating uncertainty of proxy records very likely results in “flattening out” the values from the same climate event over several hundred years and thus in fact acts as a lowpass filter that makes us unable to capture the true magnitude of the cold and warm periods…What we then actually get is an average of the temperature over one or two centuries.”
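
A toy numerical illustration of this flattening is sketched below; the data are entirely synthetic and the 200-year smoothing window is an assumption chosen only to mimic the "one or two centuries" of averaging Ljungqvist describes:

```python
import numpy as np

# Synthetic annual series (not Ljungqvist's data): a flat baseline with a 50-year warm excursion of +1.0 °C.
years = np.arange(800, 1300)
temps = np.where((years >= 1000) & (years < 1050), 1.0, 0.0)

# A 200-year moving average mimics dating uncertainty acting as a low-pass filter.
window = 200
smoothed = np.convolve(temps, np.ones(window) / window, mode="same")

print(f"True peak of the warm event:   {temps.max():.2f} °C")
print(f"Peak after 200-year smoothing: {smoothed.max():.2f} °C")  # ~0.25 °C: the event is flattened out
```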

However, the flattening out of proxy records is not the only problem when comparing proxies with instrumental temperatures – there is also the “divergence problem.” We examine this for Ljungqvist’s 1850-1989 calibration period in Figure 4.


Figure 4: Comparison of Proxy & Instrumental Decadal Mean Temperature Variations (after Ljungqvist, 2010)

The decadal correlation (r = 0.95, r² = 0.90) between proxy and instrumental temperature is very high during the calibration period AD 1850-1989. However, note that the 1990-1999 instrumental peak is not used in the calibration, probably because Figure 4 shows that the instrumental peak temperature is 0.39°C for 1990-1999, which diverges from the proxy temperature of 0.06°C – a difference of 0.33°C.

This divergence is outside the 2 standard deviation error bars of ±0.12°C reported by Ljungqvist and it illustrates the divergence problem in a nutshell – proxies do not follow the instrumental record for recent high temperatures. The reason for this divergence is probably that the proxy response is nonlinear, rather than the linear response that is assumed in all climate reconstructions to date.
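
The arithmetic of the divergence itself is easy to check; a minimal sketch using the values read from Figure 4 and Ljungqvist’s stated uncertainty:

```python
instrumental_1990s = 0.39  # °C, decadal mean instrumental anomaly for 1990-1999 (Figure 4)
proxy_1990s = 0.06         # °C, decadal mean proxy anomaly for 1990-1999 (Figure 4)
two_sigma = 0.12           # °C, Ljungqvist's two-standard-deviation error bars

divergence = instrumental_1990s - proxy_1990s
print(f"Divergence: {divergence:.2f} °C")                                    # 0.33 °C
print(f"Outside the ±{two_sigma} °C error bars: {divergence > two_sigma}")   # True
```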

Ljungqvist explains the nonlinearity [emphasis added],

“One circumstance that possibly has led to an underestimation of the true variability is that we must presuppose a linear response between temperature and proxy. If this response is nonlinear in nature, which is often likely the case, our interpretations necessarily become flawed. This is something that may result in an underestimation of the amplitude of the variability that falls outside the range of temperatures in the calibration period.”

As a structural engineer, I agree with Ljungqvist that nonlinearity is often likely to be the case for proxies. Most natural materials, e.g., timber, rock, etc., exhibit behaviour that is initially linear and then becomes nonlinear. However, this concept is difficult to explain in words, therefore I explain it diagrammatically in Figure 5.


Figure 5: Indicative Nonlinear Response for a Temperature Proxy

For clarity, I show the positive part of the temperature response in Figure 5 but it should be noted that there would be a similar negative response. I now explain Figure 5 as follows:

a. Line OAB represents the linear proxy response to temperature currently used in reconstructions. Parameter P1 yields a temperature T1 (via point A) and the response increases linearly so that parameter P2 indicates a higher temperature T2 (via point B).

b. Line OACD represents the indicative nonlinear proxy response to temperature. Parameter P1 is in the linear portion of the curve and therefore results in the same temperature T1 as that in item (a) above.

c. In addition to item (b), parameter P1 at point C is in the nonlinear portion of the curve, which would result in the temperature T2; this is significantly higher than the value of T1 obtained from the linear response via point A.

In summary, a nonlinear proxy-temperature response can result in two temperature values for a single proxy value, which could be problematic in deciding which temperature result to use.

However, for modern temperatures, the nonlinear response of the proxy can be calibrated against the instrumental record. For example, if the temperature results in Figure 4 were for a single proxy then the following procedure could be used:

a. The nonlinear curve would be calibrated so that temperature T1 = 0.08°C would be in the linear portion of the curve in Figure 5.

b. The temperature T2 = 0.39 °C would be calibrated to be in the nonlinear portion of the curve in Figure 5.

Note that the linear response used in Figure 4 shows T2 = 0.06°C but the nonlinear response procedure above allows the calibration T2 = 0.39°C. This type of nonlinear calibration would mean that historical temperatures would be higher than those currently estimated by using a linear proxy-temperature response.
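
A minimal sketch of such a piecewise calibration is given below. The knee point and slopes are assumed purely for illustration and are not taken from any published calibration; the sketch simply shows how the same proxy value reads very differently under a linear inversion and a damped (nonlinear) one:

```python
import numpy as np

# Entirely assumed calibration constants; the knee and slopes are illustrative, not from any study.
T_KNEE = 0.08        # °C, anomaly at which the proxy response starts to saturate (point A in Figure 5)
SLOPE_LINEAR = 1.0   # proxy units per °C below the knee
SLOPE_DAMPED = 0.05  # proxy units per °C above the knee (strongly damped branch)

def proxy_value(temp):
    """Piecewise proxy response: linear up to T_KNEE, then strongly damped above it."""
    return np.where(temp <= T_KNEE,
                    SLOPE_LINEAR * temp,
                    SLOPE_LINEAR * T_KNEE + SLOPE_DAMPED * (temp - T_KNEE))

def temp_from_linear(p):
    """Temperature implied by a purely linear calibration."""
    return p / SLOPE_LINEAR

def temp_from_damped(p):
    """Temperature implied by the damped (nonlinear) branch of the calibration."""
    return T_KNEE + (p - SLOPE_LINEAR * T_KNEE) / SLOPE_DAMPED

# The proxy value produced by a genuinely warm decade (0.39 °C) reads as only ~0.10 °C
# under the linear calibration, but as 0.39 °C once the damped branch is allowed for.
p_obs = float(proxy_value(0.39))
print(f"Observed proxy value:  {p_obs:.3f}")
print(f"Linear calibration:    {temp_from_linear(p_obs):.2f} °C")
print(f"Nonlinear calibration: {temp_from_damped(p_obs):.2f} °C")
```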

Furthermore, the calibration in modern times is relatively straightforward because we can compare proxies with instrumental temperatures to determine which temperature is correct, T1 or T2. However, historical temperatures are more problematic because we do not have instrumental records for several thousands of years to determine the correct temperature, T1 or T2. Nevertheless, we can estimate the correct temperature by deploying a similar methodology to that used by earthquake engineers as explained below.

Earthquake engineers routinely design structures for 1-in-500 year seismic events and, for special structures (e.g., nuclear installations), 1-in-5,000 year events. However, they have only been able to quantify the strength of earthquakes on the Richter magnitude scale since 1935, when it was introduced by Charles Richter. Earthquake engineers solve this short instrumental record dilemma by investigating historical descriptions of seismic events.

For example, the historical record for an earthquake may include a description such as, “The earth boiled.” This literal description does not mean that the earth actually reached boiling. It means that the groundwater in the soil bubbled to the surface – a process known as liquefaction of the soil. Earthquake engineers can then use this description to estimate the amount of shaking that would cause liquefaction and thus estimate the strength of the earthquake at that location.

In climate science, the temperature, T1 or T2, could also be estimated from historical descriptions by using changes in vegetation, human habitation, etc. For example, if our proxy were in Greenland then the following outcomes are possible:

a. If the land were in permafrost at the date of the proxy then the lower temperature T1 would be used.

b. If the land were being cultivated by Vikings at the date of the proxy (i.e., no permafrost) then the higher temperature T2 would be used.

The above nonlinear methodology would obviously require more work by paleo scientists when compared with the current linear calibration methods but this work is no more than is routinely carried out by earthquake engineers.

Additionally, up-to-date proxies for the post-2000 period would be useful to determine whether the divergence between proxies and instrumental temperatures has increased. The proxies would also enable a more accurate nonlinear proxy-temperature calibration, say, at point D in Figure 5. Strangely, there seems to be a reluctance in the climate establishment to publish up-to-date proxies.

Using a nonlinear proxy-temperature response would result in historical high temperatures being calculated to be higher and historical low temperatures being calculated to be lower.

Conclusions

A review of the global extent of the MWP is presented and the following conclusions are offered:

  1. The MWP was a global event and a large number of studies show that warm events overwhelmingly outnumber cold events.
  2. However, the not insignificant number of dry or wet events recorded in the MWP Mapping Project would suggest that perhaps the Medieval Climate Anomaly would be a better description than the MWP.
  3. NH temperatures during the MWP were at least as warm as those in the 1980-1989 instrumental record.
  4. Recent instrumental temperatures show higher temperatures when compared with the MWP proxies. However, instrumental temperatures should not be compared directly with proxy temperatures because this is not an “apples for apples” comparison. Proxy temperatures are damped (flattened out) on decadal or longer scales.
  5. Recent proxy records diverge from instrumental temperatures – instruments show higher readings when compared with proxies.
  6. The divergence problem in item (5) above is probably due to a linear proxy-temperature response being assumed in current temperature reconstructions. A nonlinear proxy-temperature response would achieve more accurate results for historical high and low temperatures and achieve a better correlation with recent instrumental data.

Until there is a good correlation between instrumental temperatures and proxies, no reputable scientist can definitively state that current temperatures are the highest ever.

Afterword

Regarding the 2°C limit proposed by the climate establishment, the following phrase (based on the facts presented in this article) does not sound nearly as alarming as is currently promulgated in the media:

“We know that the preindustrial MWP temperatures exceeded those of the LIA by ≈ 1°C without CO2, but we also think that a similar temperature rise from the LIA to 1980-1989 was caused by CO2. Therefore, we think that we need to limit CO2 emissions to stop a further (dangerous) 1°C rise. We think.”

I think that there is too much thinking going on and not enough real climate science.

A good start would be to use nonlinear proxy-temperature responses to establish accurate historical high and low temperatures.

Is 1 °C Halfway to Hell?

Pro-anthropogenic Global Warming web sites are concerned that we are about to exceed the 1 °C value for the temperature anomaly (see Met Office, New Scientist and Skeptical Science) but is this really “uncharted territory” (Met Office) or halfway to New Scientist’s global warming hell of 2 °C?

It is shown in the following discussion that “1 °C: Halfway to Hell” is an ill-chosen headline – a more appropriate headline would be, “1 °C: Halfway to the Optimum”.

HadCRUT4 Data

The HadCRUT temperature anomalies are shown in Figure 1.

Figure 1: HadCRUT4 1850-1900 Temperature Anomaly with 1961-1990 Overlay (after Met Office chart)

The Met Office chart in Figure 1 is unusual in that the HadCRUT anomaly data are normally referenced to the 1961-1990 mean temperature. Using a pre-industrial mean of 1850-1900 is not strictly correct because the industrial revolution ran from approximately 1760 to sometime between 1820 and 1840 (Wikipedia). Consequently, the use of an 1850-1900 mean for pre-industrial temperatures is an arbitrary choice.

Therefore, I have overlain the HadCRUT4 data (1961-1990 mean), plotted as the blue line, and it is evident from Figure 1 that using a mean of 1850-1900 raises the anomaly by ≈ 0.3 °C when compared with the 1961-1990 mean. This gives the pre-industrial anomaly greater visual impact than the usual HadCRUT4 values.
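
The rebaselining is just a constant offset; a minimal sketch, assuming the ≈ 0.3 °C difference between the two reference-period means read from Figure 1 and using illustrative anomaly values:

```python
import numpy as np

# Illustrative anomalies relative to the 1961-1990 mean (not the actual HadCRUT4 series).
anomalies_1961_1990 = np.array([-0.3, -0.1, 0.2, 0.5, 0.7])

# The 1850-1900 mean is roughly 0.3 °C colder than the 1961-1990 mean (offset read from Figure 1),
# so the same series referenced to the earlier baseline is simply shifted up by that constant.
offset = 0.3
anomalies_1850_1900 = anomalies_1961_1990 + offset

print(anomalies_1850_1900)  # identical shape, every value 0.3 °C higher
```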

The use of the 1850-1900 mean as the basis for the 1 °C rise is unusual because the IPCC reports have always referred to the 1961-1990 HadCRUT mean for climate projections, e.g., see IPCC AR4 FAQ 3.1, Figure 1. However, we can use other data to determine a real pre-industrial mean, as discussed below.

A Different Pre-industrial Benchmark

A chart from Ljungqvist (2010) is presented in Figure 2 that shows temperature fluctuations in the northern hemisphere for the last two millennia.

Figure 2: Reconstructed Extra-tropical (30-90 °N) Decadal Temperature Anomaly relative to the 1961-1990 Mean (after Ljungqvist, 2010)

The purpose of Ljungqvist (2010) is to assess the amplitude of the pre-industrial temperature variability. It is evident from Figure 2 that there have been previous warm and cold periods over the last two millennia but that most of the temperatures have been cooler than the 1961-1990 mean. Furthermore, Ljungqvist notes that the Roman Warm Period (RWP) and the Medieval Warm Period (MWP)

“seem to have equalled or exceeded the AD 1961-1990 mean temperature level in the extra-tropical Northern Hemisphere.”

The following points are worth noting from Figure 2:

  1. The instrumental data (shown in red at the right-hand side of the chart) represent a comparatively small portion of the data available.
  2. The modern proxy peak temperature is 0.082 °C, which is 0.114 °C lower than the MWP peak of 0.196 °C.
  3. Using 1850-1900 as the base for pre-industrial temperature is a relatively cold benchmark for temperature measurements. For example, the 1850-1920 instrumental mean is -0.299 °C, which is 0.495 °C lower than the MWP peak (these differences are checked in the short sketch after this list).
  4. It is apparent that we could use 1961-1990 as a suitable base for pre-industrial temperatures. We could even use the mean of the Medieval Warm Period, which is 0.041 °C higher than the 1961-1990 mean.
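
A short check of the differences quoted above, using the values as stated in the text:

```python
# Values quoted in the text (°C, relative to the 1961-1990 mean).
mwp_peak = 0.196           # Medieval Warm Period decadal proxy peak
modern_proxy_peak = 0.082  # modern decadal peak in the proxy reconstruction
mean_1850_1920 = -0.299    # instrumental mean for 1850-1920

print(f"Modern proxy peak below MWP peak: {mwp_peak - modern_proxy_peak:.3f} °C")  # 0.114 °C
print(f"1850-1920 mean below MWP peak:    {mwp_peak - mean_1850_1920:.3f} °C")     # 0.495 °C
```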

It is also worth noting that one of Ljungqvist’s conclusions is that,

“Since AD 1990, though, average temperatures in the extra-tropical Northern Hemisphere exceed those of any other warm decades the last two millennia, even the peak of the Medieval Warm Period, if we look at the instrumental temperature data spliced to the proxy reconstruction. However, this sharp rise in temperature compared to the magnitude of warmth in previous warm periods should be cautiously interpreted since it is not visible in the proxy reconstruction itself.” [my emphasis]

Indeed, Ljungqvist states that the proxy records result in “flattening out” the values,

“that makes us unable to capture the true magnitude of the cold and warm periods in the reconstruction… What we then actually get is an average of the temperature over one or two centuries.”

In other words, we should be careful when comparing earlier temperatures with modern readings – we should only compare proxies with proxies and not proxies with thermometers.

Same Data: Different Perception

There are good scientific reasons for displaying temperatures as anomalies because it allows widely different temperatures from geographically disparate regions to be compared. Nevertheless, they do not need to be displayed in the format of Figure 1. It appears that one of the intentions of displaying temperatures in the Figure 1 format is to depict the temperature rise as being unusual and rising rapidly. However, this is not the case.

A temperature change of approximately 1 °C for 165 years from 1850 to 2015 is almost undetectable by human beings. To illustrate this I use actual HadCRUT4 global temperatures (not anomalies) in Figure 3.

Figure 3: Global Average Temperature (1850-2015)

The chart presented in Figure 3 uses the HadCRUT 1961-1990 mean (14 °C) global temperature as its baseline (see FAQ here) and the actual temperatures are derived by adding the anomaly data (here) to the 14 °C baseline. Figure 3 is based on a diagram by Anthony Watts (published in Climate Change: The Facts, 2014) and it gives a less alarming view of global warming when compared with the anomaly diagram in Figure 1.
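
The conversion is a simple addition; a minimal sketch, assuming the ~14 °C baseline quoted by the Met Office and anomaly values that roughly match the 1850, 1911 and September 2015 points in Figure 3:

```python
# Anomalies relative to the 1961-1990 mean, roughly matching 1850, 1911 and September 2015 in Figure 3.
anomalies = [-0.37, -0.55, 0.70]

# The Met Office quotes ~14 °C as the 1961-1990 global mean absolute temperature, so an approximate
# absolute temperature is obtained by adding each anomaly to that baseline.
baseline = 14.0
for a in anomalies:
    print(f"anomaly {a:+.2f} °C -> absolute {baseline + a:.2f} °C")  # 13.63, 13.45, 14.70 °C
```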

The following points are worth noting from Figure 3:

  1. The current high value for temperature (September 2015) is 14.70 °C and the lowest recorded value of 13.45 °C occurred in 1911. The starting value of the HadCRUT4 series is 13.63 °C in 1850.
  2. I have used temperatures from my own personal experience to determine a reasonable scale for the vertical axis, ranging from a high of 51 °C in Dubai to a low of -16 °C in Scotland.
  3. I also show temperatures from my new home in Sydney. These are more benign than those in item (2) above but they still show a large range from a high of 45.8 °C to a low of 2.1 °C.

Furthermore, by using temperature data from environs in which I have lived, it is evident that a temperature change of 1 or 2 °C is very small and is not unusual for most flora, fauna and humans. Nevertheless, let us examine if a 2 °C rise would cause serious climatic damage by discussing the Holocene Optimum.

The Holocene Optimum

The first IPCC report (FAR) presented the diagrams shown in Figure 4 for temperature variations over the last ten thousand years, 4(b), and the last one thousand years, 4(c).

The charts in Figure 4 are based on the work of Lamb, who was the founding director of the Climatic Research Unit (CRU) that produces the HadCRUT temperature data in conjunction with the Met Office. HadCRUT data are used extensively by the IPCC.

Figure 4: Schematic Diagrams of Global Temperature Variations (Source: FAR Figure 7.1)

The similarity between Figure 4(c) and Ljungqvist’s chart in Figure 2 is remarkable, considering that Figure 4 was published in 1990. The dotted line in diagrams 4(b) and 4(c) is stated in FAR as nominally representing conditions near to the beginning of the twentieth century. Unfortunately, the diagrams in Figure 4 do not show values for the temperature scale. Therefore, I use Marcott et al (2013), which is referenced in AR5 WG1, to supply these values.

Marcott et al (2013) has been criticised for showing a spurious uptick in temperature in the 20th century. Indeed, Marcott stated in the paper and at RealClimate that this uptick is “probably not robust.” Consequently, I have used Roger Pielke Jr’s version of Marcott’s diagram as Figure 5, in which the spurious data are deleted.

Figure 5: Holocene Global Temperature Variations (Source: Marcott Figure 1B, amended by Pielke)

Approximately 80% of the Marcott et al (2013) proxies are of marine origin and consequently underestimate the variability in land temperatures. Nevertheless, several useful conclusions are obtained by Marcott et al (2013), namely:

  1. “Global temperature for the decade 2000-2009 has not exceeded the warmest temperatures in the early Holocene.”
  2. “The early Holocene warm interval is followed by a long-term global cooling of ≈ 0.7 °C from 5,500 BP to 1850.”
  3. “The Northern Hemisphere (30-90°) experienced a temperature decrease of ≈ 2 °C from 7,000 BP to 1850.”

Spatial Distribution of Temperature during Holocene Climatic Optimum

Renssen et al (2012) use a computer simulation to derive early Holocene temperature anomalies. They call the Holocene Climatic Optimum the Holocene Thermal Maximum (HTM) and, in referring to their simulation, they state that,

“The simulated timing and magnitude of the HTM are generally consistent with global proxy evidence, with some notable exceptions in the Mediterranean region, SW North America and eastern Eurasia.”

The Renssen et al (2012) computer simulation is cited in AR5 WG1 and it presents the spatial distribution of peak temperature anomalies during the Holocene Climatic Optimum relative to a 1,000-200 BP pre-industrial mean (see Figure 6).

Figure 6: Global Variation of Holocene Thermal Maximum Anomalies (Source: Renssen et al, 2012)

It is evident from Figure 6 that most of Europe and North America experienced an anomaly of 2-3 °C during the Holocene Thermal Maximum (HTM) and Renssen et al (2012) offer the following conclusions:

  1. “At high latitudes in both hemispheres, the HTM anomaly reached 5 °C.”
  2. “Over mid-to-high latitude continents the HTM anomaly was between 1 and 4 °C.”
  3. “The weakest HTM signal was simulated over low-latitude oceans (less than 0.5 °C) and low latitude continents (0.5-1.5 °C).”

I reiterate that Renssen et al (2012) use a pre-industrial mean of 1,000-200 BP, which is ≈ 0.3 °C less than the HadCRUT4 (1961-1990) mean. Therefore, we should add ≈ 0.3 °C to their values when comparing them with modern-day temperatures. Notwithstanding the aforementioned, it should be noted that the Renssen et al values are peak values and that global mean temperatures would be lower than these peaks.

Discussion

Current temperatures are examined with regard to the approaching 1 °C anomaly and the following standpoints are evident from the discussion:

  1. Portraying current temperatures as an anomaly from the 1850-1900 mean gives the false impression that current temperatures are high because it is shown that temperatures during this period were very low when compared with other warm periods, either in the last two millennia (Ljungqvist, 2010) or in the early Holocene (Marcott et al, 2013 or Renssen et al, 2012).
  2. A reasonable mean for pre-industrial temperatures would be 1961-1990 because this mean compares well with actual mean temperatures that occurred during times that really were pre-industrial, e.g., the Roman Warm Period and the Medieval Warm Period.
  3. The change in temperature during the last 165 years is hardly visible in Figure 3 but such plots wouldn’t normally get people overly concerned. Conversely, when an anomaly plot is deployed, the vertical scale is highly magnified as shown in Figure 1. The magnified vertical scale gives a steep slope to the temperature rise in modern times, which conveys the impression that global warming is proceeding rapidly. To the contrary, and in reality, Figure 3 shows that temperatures have been very stable over the last century and a half.
  4. Less worrying anomaly plots than that shown in Figure 1 are presented in Figures 2 and 5. These show that current temperatures are not unusual when compared with earlier warm periods.
  5. Figure 6 (Renssen et al, 2012) shows that many parts of the world experienced temperatures during the early Holocene that were significantly greater than 2 °C above the pre-industrial mean.

Conclusions

The following conclusions are evident from the above:

  1. Portraying current temperatures as an anomaly from an 1850-1900 pre-industrial mean gives the false impression that current temperatures are high because temperatures during 1850-1900 were amongst the lowest in the last 10,000 years.
  2. Global temperature for the decade 2000-2009 has not reached the warmest temperatures in the early Holocene.
  3. Northern Hemisphere temperatures would need to increase by at least 2 °C above the (1850-1900) pre-industrial mean to reach temperatures experienced during the Holocene Climatic Optimum.

I contend that “1 °C: Halfway to Hell” is an inappropriate headline – a more appropriate headline would be, “1 °C: Halfway to the Optimum”, especially, if you live in the Northern Hemisphere.

Hansen 1988 Revisited

Hansen’s 1988 temperature projections have recently received quite a bit of attention, e.g., at RealClimate, WUWT and SkS. The pro-AGW sites state that Hansen has done very well, whereas the anti-AGW sites say that he hasn’t. Therefore, I thought that it would be a good time to revisit Hansen’s work to determine how well he did.

Temperature Sensitivity & What Can We Learn?

Dana1981 @ SkS states that:

“The observed temperature change has been closest to Scenario C, but actual emissions have been closer to Scenario B. This tells us that Hansen’s model was “wrong” in that it was too sensitive to greenhouse gas changes. However, it was not wrong by 150%, as Solheim claims. Compared to the actual radiative forcing change, Hansen’s model over-projected the 1984-2011 surface warming by about 40%, meaning its sensitivity (4.2°C for doubled CO2) was about 40% too high.

What this tells us is that real-world climate sensitivity is right around 3°C, which is also what all the other scientific evidence tells us. Of course, this is not a conclusion that climate denialists are willing to accept, or even allow for discussion.”

Perhaps. Climate sensitivity may be ≈ 3°C but we can also learn several other things as discussed below.

How Well Did Hansen Do?

Hansen Compared With the Real World

Figure 1 shows Hansen’s scenarios compared with the GISS Land-Ocean Temperature Index (LOTI). I have also added Dana1981’s data as Scenario D. This is the Scenario B data but with the temperature sensitivity reduced from 4.2°C to 2.7°C. Dana did this by multiplying the Scenario B data by a factor of (0.9*3/4.2), which equates to a temperature sensitivity of 2.7°C (see SkS for the data). The SkS estimate for Scenario D appears to be based on Schmidt (2009).

Figure 1: Hansen’s 1988 Scenarios compared with Real-world Temperatures
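
For reference, a minimal sketch of the rescaling described above; the multiplier is the one quoted at SkS, while the Scenario B anomaly values used in the example are illustrative only:

```python
# The rescaling quoted at SkS: multiply Hansen's Scenario B anomalies by 0.9 * 3 / 4.2.
scale_factor = 0.9 * 3 / 4.2
original_sensitivity = 4.2   # °C per doubled CO2, used in Hansen (1988)

print(f"Scale factor: {scale_factor:.3f}")                                   # ≈ 0.643
print(f"Implied sensitivity: {scale_factor * original_sensitivity:.2f} °C")  # ≈ 2.70 °C

# Applied year by year, this turns the Scenario B anomaly series into the Scenario D series, e.g.:
scenario_b = [0.80, 0.90, 1.00, 1.10]                       # illustrative anomalies only, °C
scenario_d = [round(scale_factor * t, 2) for t in scenario_b]
print(scenario_d)
```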

It is evident from Figure 1 that the best fit for real-world temperatures is Scenario C. However, the pro-AGW commenters at SkS state that Scenario C is irrelevant because it uses the “wrong” sensitivity of 4.2°C and incorrect emissions. Therefore, perhaps I should modify my conclusion to say that real-world temperatures are following Scenario D, which has the “right” temperature sensitivity of 2.7°C and emissions that are close to actual emissions. It makes no difference; Scenarios C and D are similar, although Scenario D has tended to under-predict temperatures for the last 30 years or so.

2012 Projections

Hansen’s temperature projections for 2012 are compared with the LOTI data in Table 1. It should be noted that the 2012 LOTI temperature estimate is based on the 12-month running average from Jun-2011 to May 2012.

Scenario | 2012 Anomaly (°C) | Comparison with LOTI (%) | Source
A | 1.18 | 226% | Hansen (1988a)
B | 1.07 | 205% | Hansen (1988a)
C | 0.60 | 116% | Hansen (1988a)
D | 0.67 | 128% | Dana (2011)
LOTI | 0.52 | 100% | GISS LOTI

Note: The comparison with LOTI is based on Scenario/LOTI.

Table 1: Comparison of Hansen’s 1988 Temperature Projections for 2012

Comparing Hansen’s temperature projections with LOTI, it is evident that Hansen didn’t do very well.

Scenarios A and B overestimated real-world temperatures by a whopping 126% and 105% respectively. Scenario D over-predicts by 28% and even the no-increase-in-emissions Scenario C over-predicts real-world temperatures by 16%.
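
These ratios can be checked directly from the Table 1 anomalies; a short sketch (small differences from the tabulated percentages are due to rounding):

```python
# Anomalies from Table 1 (°C); small differences from the table percentages are rounding.
loti_2012 = 0.52
scenarios_2012 = {"A": 1.18, "B": 1.07, "C": 0.60, "D": 0.67}

for name, anomaly in scenarios_2012.items():
    ratio = 100 * anomaly / loti_2012
    print(f"Scenario {name}: {ratio:.0f}% of LOTI, i.e. an over-prediction of about {ratio - 100:.0f}%")
```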

What do we learn? We could argue that climate sensitivity should be reduced to ≈ 2.1°C to correspond to the 28% over-prediction in Scenario D. However, I would suggest that we wait a few more years to determine the trend more accurately.

2019 Projections

The timeline for Hansen’s temperature projections for 2019 is presented in Table 2. A summary of the comments made by different commentators is included to show how the favoured scenario/projection evolved with time.

Scenario | 2019 Anomaly (°C) | Comparison with Scenario D (%) | Source | Comments
B | 1.10 | 160% | Hansen (1988a) | In May 1988, Hansen states in the AGU paper that, “Scenario A, assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely…[but]…since it is exponential, must eventually be on the high side of reality in view of finite resource constraints…Scenario B is perhaps the most plausible of the three cases.”
A | 1.57 | 227% | Hansen (1988b) | In June 1988, Hansen states to the US Congressional Committee that Scenario A was “business as usual.”
B | 1.10 | 160% | Hansen (2005) | Hansen states that, “In my testimony in 1988, and in an attached scientific paper… Scenario A was described as “on the high side of reality”…The intermediate Scenario B was described as “the most plausible”… is so far turning out to be almost dead on the money.”
B | 1.10 | 160% | Hansen (2006) | Hansen assesses the predictions and states that the close agreement “for the most realistic climate forcing (scenario B) is accidental.” He states that the current estimate for sensitivity is 3 ± 1°C.
B- | 1.00 | 144% | Schmidt (2007) | RealClimate blog: Schmidt states that forcings in Scenario B are “around 10% overestimate.”
B- | 1.00 | 144% | Schmidt (2009) | RealClimate blog: Schmidt states that Scenario B “is running a little high compared with the actual forcings growth (by about 10%)”.
B | 1.00 | 144% | Schmidt (2011) | RealClimate blog by Schmidt: “As stated last year, the Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)”.
D | 0.69 | 100% | Dana (2011) | Skeptical Science blog: climate sensitivity reduced from 4.2 to 2.7°C for Scenario B. Used here as the benchmark for comparison.
? | ? | ? | Schmidt (2012) | RealClimate blog: Schmidt states that Scenario B “is running warm compared to the real world (exactly how much warmer is unclear)”.
C | 0.61 | 88% | Hansen (1988a) | Hansen’s original Scenario C. This is the commitment scenario with emissions held at year-2000 levels. Included as a measure of how well the other scenarios perform.

Note: The comparison with Scenario D is based on Scenario/Scenario D.

Table 2: Evolution of Hansen’s 1988 Temperature Projections for 2019

It is evident from the timeline and narrative in Table 2 that the evolution in temperature is generally downwards, apart from the brief upwards spurt for the US Congressional Committee presentation in June 1988 (more on this under Unethical Behaviour later in this post).

The following points are also evident:

  • There is a large reduction in the estimate for the 2019 temperature anomaly from Hansen’s estimate of 1.57°C in 1988 (as presented to the US Congress) to Dana’s estimate of 0.69°C in 2011.
  • Until recently (Schmidt, 2012) the overestimate in Scenario B was portrayed as ≈ 10% but Dana at SkS (2011) showed that the overestimate was ≈ 44%.

What do we learn? All of the pro-AGW blogs state that Hansen’s Scenario B was a pretty good estimate. I suggest that an error of ≈ 44% is pretty bad.

Unethical Behaviour

Hansen’s paper, Hansen (1988a), was published in August 1988 but it is important to note that it was accepted for publication on 6 May 1988. This date is particularly relevant because in that paper, as quoted in Table 2, Hansen described Scenario A as a case that “must eventually be on the high side of reality” and Scenario B as “perhaps the most plausible of the three cases.”

Yet one month later, in his congressional testimony (Hansen, 1988b, here), he described Scenario A as “business as usual”.


Notice that Scenario A was stressed to be “business as usual”. There was no mention to Congress that Scenario B was “most plausible” or that Scenario A was “on the high side of reality”.

Later, in 2006, Hansen re-worded the account of his 1988 congressional testimony, stating that Scenario A “was described as on the high side of reality”.


From the foregoing, it is evident that Hansen did not describe to Congress in 1988 that Scenario A was on the “high side of reality”. At best, he has been economical with the truth by re-writing history and (at worst) he has been unethical and totally unprofessional.

Conclusions

I offer the following conclusions regarding Hansen 1988:

  • Temperature forecasts (sorry, should I use the politically correct term projections?) for 2019 have plummeted from 1.57°C in 1988 to 0.69°C in 2011.
  • Estimates of temperature are in error by ≈ 60% for Scenario B and ≈ 127% for Scenario A.
  • Climate sensitivity has also fallen from ≈ 4.2°C to ≈ 2.1-2.7°C, i.e., it has fallen to 50-64% of Hansen’s 1988 estimates.

These sorts of errors do not represent pretty good estimates.

Climate Models 2011: Same Data – Different Conclusions

In his blog post 2011 Updates to model-data comparisons at Real Climate, Gavin Schmidt shows the diagram in Figure 1.

Figure 1: Real World Temperatures Compared with IPCC Model Ensemble (Schmidt, 2012)

Gavin states that, “Overall, given the latest set of data points, we can conclude (once again) that global warming continues.” My perception was that there had been some cooling over the last 15 years, therefore I have decided to check Gavin’s claims.

Gavin explains that the chart shows the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v, NCDC and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs.

At first glance the chart seems to show a good correspondence between real world temperature and the average of the IPCC models. However, the correspondence does not look quite so good when you compare the chart with the AR4 charts. I have updated AR4 Figures 1.1 and TS.26 to include the HadCRUT data up to May 2012 and discuss these as follows.

Figure 2 is derived from Figure 1.1 of IPCC AR4.

Figure 2: Global Average Temperature Compared with FAR, SAR & TAR (after AR4 Figure 1.1)

It should be noted in Figure 2 that I could not get the HadCRUT3 temperature to match exactly with the values in Figure 1.1 in AR4. Therefore, I had to adjust the HadCRUT3 data by adding 0.026 °C. I am not sure why I had to make the adjustment in the HadCRUT3 data, perhaps it is just a printing error in the AR4 diagram but this error also repeats elsewhere. It may be coincidence but the average temperature for 1961-1990 on which HadCRUT3 is based is 0.026 °C. Therefore, it may be that the AR4 chart is normalised to a zero temperature for the 1961-1990 period. However, I can find no information that confirms that this adjustment should be made.

Notwithstanding the above, it is evident from Figure 2 that the correlation between the adjusted HadCRUT3 data and the original AR4 Figure 1.1 data is very good. This applies to both the individual data points and the smoothed data. It is also evident that the temperature trend is significantly below the FAR estimate and is at the very low ends of the SAR and TAR estimates.

In order to compare Gavin’s diagram with actual global temperatures, I use Figure TS.26 from AR4, as shown in Figure 3.

Figure 3: Model Projections of Global Mean Warming Compared with Observed Warming (after AR4 Figure TS.26)

The following points should be noted regarding Figure 3 compared with AR4 Figure TS.26:

  1. I have deleted the FAR, SAR and TAR graphic from Figure TS.26 in Figure 3 because they make the diagram more difficult to understand and because they are already presented in Figure 2, in a form that is much easier to assimilate.
  2. The temperature data shown in AR4 Figure 1.1 does not correspond to that shown in Figure TS.26. The Figure 1.1 data appear to be approximately 0.02 °C higher than the corresponding data in Figure TS.26. I have assumed that this is a typographical error. Therefore, I have used the same 0.026 °C adjustment to the HadCRUT3 data in Figure 3 that was used for Figure 2.
  3. My adjusted HadCRUT3 data points are typically higher than those presented in Figure TS.26.
  4. Despite items (1), (2) and (3) above, there is very good agreement between the smoothed data in TS.26 and the adjusted HadCRUT3 data, particularly for the 1995-2005 period. It should be noted that AR4 uses a 13-point filter to smooth the data whereas HadCRUT uses a 21-point filter. Nevertheless, AR4 states that the 13-point filter gives similar results to the 21-point filter.

Comparing Gavin’s projections in the RC chart in Figure 1 with the official AR4 projections in Figure 3, the following points are evident:

  1. The emissions scenarios and their corresponding temperature outcomes are clearly shown in the AR4 chart. Scenarios A2, A1B and B1 are included in the AR4 chart – scenario A1B is the business-as-usual scenario. None of these scenarios are shown in the RC chart.
  2. Real-world temperature (smoothed HadCRUT3) is tracking below the lower estimates for the Commitment emissions scenario, i.e., the emissions-held-at-year-2000-level scenario in the AR4 chart. There is no commitment scenario in the RC chart to allow this comparison.
  3. The smoothed curve is significantly below the estimates for the A2, A1B and B1 emissions scenarios. Furthermore, this curve is below the error bars for these scenarios, yet Gavin shows this data to be well within the error bands.
  4. The RC chart shows real world temperatures compared with predictions from models that are an “ensemble of opportunity”. Consequently, Gavin states, “Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.” [My emphasis].

In summary, TS.26 from AR4 is useful for comparing real-world temperature data with the relevant emissions scenarios. By contrast, Gavin uses a chart which compares real-world temperature data with average model data which, he states, does not represent “truth.” I suggest that this is not much of a comparison and I conclude that the AR4 chart is much more informative.

I also conclude that it is evident from Figure 3 (AR4 Figure TS.26) that there has been a pause in global warming and that some cooling is occurring. This is certainly not, as Gavin concluded, a case of “Overall, given the latest set of data points, we can conclude (once again) that global warming continues.” Whether this cooling pause is a longer-term phenomenon or a temporary one, only time will tell.

2010 – The Hottest Year on Record: Is this a Cause for Concern?

GISS report that 2010 has tied with 2005 as being the hottest year on record. James Hansen, the director of GISS, said that, “If the warming trend continues, as is expected, if greenhouse gases continue to increase, the 2010 record will not stand for long.”

Is this a cause for concern?

GISS Data Compared with Hansen’s Scenarios (2006)

The GISS Land Ocean Temperature Index (LOTI) data up to January 2011 are shown in Figure 1. They are compared with the global warming models presented by Hansen (2006).


Figure 1: Scenarios A, B and C Compared with Measured GISS Land-Ocean Temperature Index (after Hansen, 2006)

The blue line in Figure 1 denotes the GISS LOTI data and Scenarios A, B and C describe various CO2 emission outcomes. Scenarios A and C are upper and lower bounds. Scenario A is “on the high side of reality” with an exponential increase in emissions. Scenario C has “a drastic curtailment of emissions”, with no increase in emissions after 2000. Scenario B is described as “most plausible” and is expected to be closest to reality. The original diagram can be found in Hansen (2006). It is interesting to note that, in his testimony to the US Congress, Hansen (1988) described Scenario A as “business as usual”, which somewhat contradicts his “on the high side of reality” statement in 2006.

It is evident from Figure 1 that the best fit for actual temperature measurements is currently the emissions-held-at-year-2000-level Scenario C. The current temperature anomaly is 0.61 °C. Therefore, even with temperatures at record highs, we are not experiencing the runaway temperatures predicted for the “business-as-usual” Scenario A. Indeed, for Scenario C, with emissions curtailed at year-2000 levels, the rate of temperature increase is an insignificant 0.01 °C/decade.
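
A decadal trend of this kind can be estimated with an ordinary least-squares fit; a minimal sketch using illustrative, essentially flat anomaly values (not the actual GISS series):

```python
import numpy as np

# Illustrative, essentially flat annual anomalies (°C); substitute the GISS LOTI or scenario values of interest.
years = np.arange(2001, 2011)
anomalies = np.array([0.60, 0.62, 0.59, 0.61, 0.60, 0.63, 0.60, 0.61, 0.62, 0.60])

# Ordinary least-squares slope in °C per year, converted to °C per decade.
slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(f"Trend: {slope_per_year * 10:.2f} °C/decade")  # ≈ 0.01 °C/decade for these flat values
```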

It is also worth noting that we are currently at the lower end of the range of estimated temperatures for the Holocene optimum and the prior interglacial period. These occurred without human intervention or huge increases in carbon dioxide.

HadCRUT3 Compared with IPCC AR4

The above comparison based on Hansen (2006) uses relatively old climate models. Therefore, I have compared current HadCRUT3 temperature data with the more recent IPCC AR4 (2007) models in Figure 2.

Figure 2: IPCC Scenarios A1B, A2 & B1 Compared with HadCRUT3 Temperature Data (after AR4, 2007)

Figure 2 is based on IPCC AR4 Figure TS.26, to which I have added the HadCRUT3 data as blue dots. The black dots in the original TS.26 diagram appear to be HadCRUT3 data but are slightly misaligned. Therefore, I offset the HadCRUT3 data by adding 0.018°C to achieve a reasonable fit with the individual data points shown in AR4. The blue line with white dots shows the smoothed HadCRUT3 data. It is evident from Figure 2 that the smoothed curve gives an excellent fit with the observed data presented as the solid black line in AR4. The current temperature anomaly is 0.52 °C.

The observed temperature trends in Figure 2 are significantly below the “likely” warming scenarios presented in AR4. Furthermore, as with the GISS data, the current HadCRUT3 trend is similar to the emissions-held-at-year-2000-level scenario.

Conclusions

Two comparisons are presented that compare GISS LOTI data and HadCRUT3 data with their respective temperature simulation models and the following conclusions are offered:

  1. Observed temperatures are significantly below the “most plausible” or “likely” high emissions scenarios. Instead, they are on a trajectory that is similar to the emissions-held-at-year-2000-level scenarios.
  2. Current temperatures are at the lower end of the range of estimated temperatures for the Holocene optimum and the prior interglacial period. These temperatures occurred without human intervention.

In summary, global temperatures may be reaching record highs but they are not following “runaway” trajectories suggested by computer models. Instead, they are following an insignificant warming trend of approximately 0.01 °C/decade.

Notwithstanding the above, it should be noted that the time period for the comparison of actual temperature measurements with those predicted by computer models is still relatively short. Hansen (2006) suggests that we could expect a reasonable distinction between the scenarios and a useful comparison with the real world by 2015.