Electricity Generation: Wind Power Is More Than Twice the Cost of Coal

Cartoon by Josh (cartoonsbyjosh.com)


Proponents of renewable energy often cite GenCost 2018 to argue that wind power is now less costly than coal-fired power. However, inspection of GenCost 2018 shows that not all of the costs of electricity generation are included in its estimate of the Levelised Cost of Electricity (LCOE). The LCOE presented in GenCost 2018 is effectively the “farm-gate” cost of energy, i.e., the price required by the generator to break even at its site of generation.

In a similar manner to the “farm-gate” cost of milk, the GenCost 2018 LCOE does not represent the final price to the consumer, since it fails to include the cost of transportation (transmission lines). GenCost 2018 also omits the costs of power plant degradation, demolition, etc. These omitted costs are not insignificant, as explained below:

  1. GenCost 2018 uses very high capacity factors for wind (38% to 44%), whereas real-world capacity factors are in the range 33% to 38% or lower. For example, Rutovitz et al (2017) state that wind farms in Australia have an average capacity factor of 33% and GHD (2018) assumes a 38% capacity factor. Reducing the capacity factor from 44% to 33% increases the LCOE for wind by approximately 33%.
  2. GenCost 2018 does not include degradation in performance with time, which is estimated to be 1.6% per annum (Staffell & Green, 2014). Including the loss in performance increases the LCOE for wind by approximately 20%.
  3. GenCost 2018 uses a design life of 25 years but, typically, wind turbines do not last longer than 20 years (Coultate & Hornemann, 2018) and Hughes (2012) suggests a 15-year economic life for wind turbines. Reducing the design life from 25 years to 20 years increases the LCOE for wind by approximately 8%.
  4. GenCost 2018 neglects the cost of transmission lines and demolition, which are usually higher for renewables than for coal-fired power. Including the costs of transmission and demolition increases the LCOE for wind by an average of 35%.

Including all of the items listed in (1) to (4) above more than doubles the GenCost 2018 LCOE for wind power as shown in Figure 1.
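To illustrate how the four adjustments compound, a minimal Python sketch is given below. The percentage uplifts are the approximate figures quoted in items (1) to (4) above; the starting “farm-gate” LCOE of $60/MWh is a hypothetical figure for illustration only, not a GenCost value.

```python
# A minimal sketch of how the four adjustments listed above compound.
# The percentage uplifts are those quoted in items (1) to (4); the
# baseline LCOE of $60/MWh is a hypothetical figure, for illustration only.

base_lcoe = 60.0  # $/MWh, assumed "farm-gate" LCOE for wind

uplifts = {
    "capacity factor (44% -> 33%)": 0.33,
    "performance degradation (1.6%/yr)": 0.20,
    "design life (25 yr -> 20 yr)": 0.08,
    "transmission and demolition": 0.35,
}

adjusted = base_lcoe
for reason, uplift in uplifts.items():
    adjusted *= (1.0 + uplift)
    print(f"after {reason}: {adjusted:.0f} $/MWh")

# ~2.3x overall, i.e. the adjustments more than double the farm-gate LCOE
print(f"overall multiplier: {adjusted / base_lcoe:.2f}x")
```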

These costs should be included in the GenCost 2018 LCOE calculation if an accurate comparison with other sources of power is to be made.


Figure 1:  Comparison of GenCost 2018 LCOEs with Real-world LCOE (i.e., when the Costs of Transmission Lines, Demolition, etc., are Included). Source: McFarlane (2019)

It is evident from Figure 1(b) that:

  1. The cost of wind power (with 6 hours storage) is approximately 2½ to 3¼ times the cost of existing coal power and approximately 2 to 2½ times the cost of new coal power.
  2. The cost of standalone wind power (with no storage) is approximately 1½ to 2 times the cost of existing coal power and approximately 1¼ to 1½ times the cost of new coal power. However, no storage would incur the additional cost of back-up by fossil-fuel plants.

It should be emphasised that the real-world LCOE values for standalone wind presented in Figure 1(b) are supported by Stock et al (2016), which presents reverse-auction values for wind power in the ACT in the $73 to $92 per MWh range. This range compares well with the $79 to $95 per MWh range for the real-world values presented in Figure 1(b), which gives confidence in the accuracy of the real-world LCOE estimates presented in this review.

Furthermore, the conclusion from Figure 1(b) that wind power is more expensive than conventional generation is corroborated by real-world experience: the countries with the highest generation from renewables also have the highest electricity costs, as shown in Figure 2.

Figure 2:  Cost of Residential Electricity Compared with Installed Capacity of Renewables (after MacDonald, 2018)

It is evident from Figure 2 that those countries with the highest penetration of renewable electricity (Germany and Denmark) have the highest electricity costs, which leads to the obvious conclusion that renewables are more costly than conventional generation.

In summary, it is shown that the levelised cost of electricity generated by wind power is significantly higher than that of coal-fired power (by a factor of 2 to 3).

Therefore, it is recommended that any new generation capacity in Australia should include coal-fired power, not only because it is cheaper than wind but also because it is more reliable and provides power on an as-needed basis.

References

Brailsford et al, 2018, Powering Progress: States Renewable Energy Race, Louis Brailsford, Andrew Stock, Greg Bourne and Petra Stock, published by Climate Council of Australia Ltd 2018

https://www.climatecouncil.org.au/wp-content/uploads/2018/10/States-Renewable-Energy-Report.pdf

Coultate & Hornemann, 2018, Why wind-turbine gearboxes fail to hit the 20-year mark, The Renewable Energy Handbook (Wind), 2018

GenCost 2018, Graham, P.W., Hayward, J, Foster, J., Story and Havas, L., 2018, GenCost 2018 CSIRO, Australia

https://www.csiro.au/en/News/News-releases/2018/Annual-update-finds-renewables-are-cheapest-new-build-power

Hughes, 2012, The Performance of Wind Farms in the United Kingdom and Denmark, Published by the Renewable Energy Foundation

https://www.ref.org.uk/attachments/article/280/ref.hughes.19.12.12.pdf

MacDonald, 2018, A Look at Impacts of Wind and Solar Electric Generation on Electricity Price, Energy Performance Measurement Institute (EPMI)

McFarlane, 2019, Levelised Cost of Electricity: A Comparison between Wind and Coal Power

https://1drv.ms/b/s!AlUVozCtzbL2h6IPrDEj_I1UF4qs1g

Rutovitz et al, 2017, Rutovitz, J., McIntosh, B., Morris, T. and Nagrath, K. (2017) Wind Power in Australia: Quick Facts. Prepared for the Climate Media Centre and Australian Wind Alliance by the Institute for Sustainable Futures, UTS

2017_Wind_Power_in_Australia_ISF.pdf

Staffell & Green, 2014, How does wind farm performance decline with age? Renewable Energy

https://www.sciencedirect.com/science/article/pii/S0960148113005727

Stock et al, 2016, Territory trailblazer: How the ACT became the renewable capital of Australia, published by the Climate Council of Australia Limited

The 1970s Global Cooling Consensus was not a Myth

Introduction

This is a repost from my article in WUWT. Figures 1 and 2 have been added to the post because they were missed out in WUWT.

Purpose of Review

Whether or not there was a global cooling consensus in the 1970s is important in climate science because, if there were a cooling consensus (which subsequently proved to be wrong), then the legitimacy of consensus in science would be called into question. In particular, the validity of the 97% consensus on global warming alleged by Cook et al (2013) would be implausible. That is, if consensus climate scientists were wrong in the 1970s then they could be wrong now.

It is not the purpose of this review to question the rights or wrongs of the methodology of the 97% consensus. For-and-against arguments are presented in several peer-reviewed papers and non-peer-reviewed weblogs. The purpose of this review is to establish whether there was a consensus in the 1970s and, if so, whether that consensus was for cooling or warming.

In their 2008 paper, The Myth of the 1970s Global Cooling Scientific Consensus, Peterson, Connolley and Fleck (hereinafter PCF-08) state that, “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then.” This conclusion intrigued me because, when I was growing up in the early 1970s, it was my perception that global cooling dominated the climate narrative. My interest was further piqued by allegations of “cover-up” and “skulduggery” in 2016 in NoTricksZone and Breitbart.

Therefore, I present a review that examines the accuracy of the PCF-08 claim that 1970s global cooling consensus was a myth. This review concentrates on the results from the data in the peer-reviewed climate science literature published in the 1970s, i.e., using similar sources to those used by PCF-08.

Review of PCF-08 Cooling Myth Paper

The case for the 1970s cooling consensus being a myth relies solely on PCF-08. They state that, “…the following pervasive myth arose: there was a consensus among climate scientists of the 1970s that either global cooling or a full-fledged ice age was imminent…A review of the climate science literature from 1965 to 1979 shows this myth to be false. The myth’s basis lies in a selective misreading of the texts both by some members of the media at the time and by some observers today. In fact, emphasis on greenhouse warming dominated the scientific literature even then.” [Emphasis added].

PCF-08 reached their conclusion by conducting a literature review of the electronic archives of the American Meteorological Society, Nature and the scholarly journal archive Journal Storage (JSTOR). The search period was from 1965 to 1979 and the search terms used were “global warming”, “global cooling” and a variety of “other less directly relevant” search terms. Additionally, PCF-08 evaluated references mentioned in the searched papers and references mentioned in various history-of-science documents.

In total, PCF-08 reviewed 71 papers and their survey found 7 cooling papers, 20 neutral papers and 44 warming papers. Their results are shown in their Figure 1.

A cursory examination of Figure 1 indicates that there is a 62% warming consensus if we use all of the data, and this increases to an 86% pro-warming consensus if we ignore the neutral papers (as was done in the 97% consensus study). Therefore, the Figure 1 data seem to prove the contention in PCF-08 that the 1970s global cooling consensus was a myth.

However, I find it difficult to believe that the 1970s media “selectively misread” the scientific consensus of the day and promoted a non-existent cooling scare. Therefore, I present an alternative to the PCF-08 analysis below.

Methodology of this Review

In this review, I use an identical methodology to PCF-08, i.e., I examine peer-reviewed scientific journals. Non-peer-reviewed newspaper and magazine articles are not used. A significantly larger number of papers are presented in the current review than were used in PCF-08.

The PCF-08 database of articles is used but this is extended to examine more literature. Note that examining all of the scientific literature would have been beyond my resources. However, my literature survey was facilitated by the work of Kenneth Richard in 2016 (hereinafter, KR-16) at NoTricksZone, in which he has assembled a large database of sceptical peer-reviewed literature.

Some people may wish to ignore the KR-16 database as being from a so-called “climate denier” blog. However, almost all of the papers in KR-16 are from peer-reviewed literature and consequently it is a valid database. It is also worth noting that 16 of the papers used in the KR-16 database are also contained in the PCF-08 database.

The combined PCF-08 and KR-16 databases form the benchmark database for the current review. It was intended to significantly extend the benchmark database but, on searching the relevant journals, only 2 additional papers were found and these were added to form the database for this review.

It should be noted that KR-16 states that there were over 285 cooling papers. However, many of these papers were deleted from the current review as not being relevant. For example, several papers were either outside the 1965-1979 reference period or they emphasise the minor role of CO2 but do not consider climate trends.

I agree with PCF-08 that no literature search can be 100% complete. I also agree that a literature search offers a reasonable test of the hypothesis that there was a scientific consensus in the 1970s. I reiterate that the resulting database used in this review is significantly larger than that used by PCF-08 and consequently it should offer a more accurate test of the scientific consensus in the 1970s.

Most of the papers in the review database acknowledge the global cooling from the 1940s to the 1970s (typically 0.3 °C global cooling). Therefore, deciding between cooling, neutral or warming was relatively straightforward in most cases; namely, did the paper expect the climate regime of the 1940s-1960s period either to continue from the date that the paper was published, or did it expect a different climate regime in the medium-to-long term?

Notwithstanding the straightforward test described above, some of the papers make contradictory statements and are thus more difficult to classify. Consequently, their classification can include an element of subjectivity. Fortunately, there are very few papers in this category and consequently an inappropriate classification does not materially affect the overall results.

The test criteria are summarised in Table 1.

Classification | Test for Classification of Papers | Typical Example from Papers
Cooling | Cooling expected to either continue or initiate | Kukla & Kukla (1972): “…the prognosis is for a long-lasting global cooling more severe than any experienced hitherto by civilized mankind.”
Neutral | Either non-committal on future climate change, or expects warming or cooling to be equally possible | Sellers (1969): “The major conclusions are that removing the arctic ice cap would have less effect on climate than previously suggested, that a decrease of the solar constant by 2-5% would be sufficient to initiate another ice age, and that man’s increasing industrial activities may eventually lead to the elimination of the ice caps and to a climate about 14°C warmer than today…”
Warming | Warming expected to either continue or initiate | Manabe & Wetherald (1967): “According to our estimate, a doubling of the CO2 content in the atmosphere has the effect of raising the temperature of the atmosphere (whose relative humidity is fixed) by about 2°C.”
Table 1: Summary of Classification System for Papers

The search terms “global cooling” and “global warming” used by PCF-08 are used in this review but they have been expanded to include “cool”, “warm”, “aerosol” and “ice-age” because these, more general terms, return a larger number of relevant papers. Additional search terms such as “deterioration”, “detrimental” and “severe” have also been included. These would fit into the PCF-08 category of “other less directly relevant” search terms. 

Several of the papers in the database are concerned about the effects of aerosol cooling and they state that this effect dominates the effect of the newly emerging CO2-warming science. Indeed, a few papers warn of CO2 cooling.

However, PCF-08 do not include any papers that refer to aerosol cooling by a future fleet of supersonic aircraft (SSTs), yet several papers in the 1970s assumed an SST fleet of 500 aircraft. This seems incongruous now but, to show that this number of aircraft is not unrealistic, Emirates Airlines currently has a fleet of 244 (non-supersonic) aircraft with 262 more on order. Therefore, I have included papers that refer to the effects of aerosols from supersonic aircraft and other human activities. Of course, supersonic travel was killed off by the mid-1970s oil crisis.

Furthermore, a number of PCF-08 and KR-16 papers were re-classified (between cooling, neutral and warming), as summarised in Table 2.

a. Amended Classifications to PCF-08

Reference | Original | Amended
Sellers (1969) | Warming | Neutral
Benton (1970) | Warming | Neutral
Rasool and Schneider (1972) | Neutral | Cooling
Machta (1972) | Warming | Neutral
FCSTICAS (1974) | Warming | Cooling
National Academy of Sciences (1975) | Neutral | Cooling
Thompson (1975) | Warming | Neutral
Shaw (1976) | Neutral | Cooling
Bryson and Dittberner (1977) | Neutral | Cooling
Barrett (1978) | Neutral | Cooling
Ohring and Adler (1978) | Warming | Neutral
Stuiver (1978) | Warming | Neutral
Sagan et al. (1979) | Neutral | Cooling
Choudhury and Kukla (1979) | Neutral | Cooling

b. Amended Classifications to KR-16

Reference | Original | Amended
Budyko (1969) | Cooling | Warming
Benton (1970) | Cooling | Neutral
Mitchell (1970) | Cooling | Neutral
Mitchell (1971) | Cooling | Warming
Richmond (1972) | Cooling | Neutral
Denton and Karlén (1973) | Cooling | Warming
Schneider and Dickinson (1974) | Cooling | Neutral
Moran (1974) | Cooling | Neutral
Ellsaesser (1975) | Cooling | Neutral
Thompson (1975) | Cooling | Neutral
Gates (1976) | Cooling | Neutral
Zirin et al. (1976) | Cooling | Neutral
Bach (1976) | Cooling | Warming
Norwine (1977) | Cooling | Warming
Paterson (1977) | Cooling | Neutral
Schneider (1978) | Cooling | Warming
Table 2: Amendments to Classification of Papers in Database

Two examples of the amendments to the classification of the papers in the database are explained below:

  1. The Benton (1970) paper is classified as “Cooling” in KR-16 but the paper states that, “In the period from 1880 to 1940, the mean temperature of the earth increased about 0.6°C; from 1940 to 1970, it decreased by 0.3-0.4°C…The present rate of increase of 0.7 ppm per year [of CO2] would therefore (if extrapolated to 2000 A.D.) result in a warming of about 0.6°C – a very substantial change…The drop in the earth’s temperature since 1940 has been paralleled by a substantial increase in natural volcanism. The effect of such volcanic activity is probably greater than the effect of manmade pollutants… it is essential that scientists understand thoroughly the dynamics of climate.” [Emphasis added]. Consequently, this paper is re-classified as neutral in this review (not the “Cooling” classification in KR-16 and not the “Warming” classification in PCF-08).
  2. The Sagan et al. (1979)  paper is classified as “Neutral” in PCF-08 but the paper states that, “Observations show that since 1940 the global mean temperature has declined by -0.2 K…Extrapolation of present rates of change of land use suggests a further decline of -1 K in the global temperature by the end of the next century, at least partially compensating for the increase in global temperature through the carbon dioxide greenhouse effect, anticipated from the continued burning of fossil fuels.” [Emphasis added]. Therefore, this paper is re-classified as cooling in this review (conforming to the KR-16 classification).

Results from Review & Discussion

The review database contains a total of 190 relevant papers, which is 2.7 times the size of the PCF-08 database. Of the 190 papers in the review database, 162 full papers/books and 25 abstracts were reviewed (abstracts were used when the full papers were either pay-walled or could not be sourced). Furthermore, 4 warming papers from PCF-08 were not reviewed because they could not be sourced; the PCF-08 classification was therefore retained for these papers in this review.

The results from the review are summarised in Figure 2.

It is evident from Figure 2 that, for the 1965-1979 reference period used by PCF-08, the number of cooling papers significantly outnumbers the number of warming papers. It is also apparent that there are two distinct sub-periods contained within the reference period, namely:

  1. The 1968-1976 period, when the 65 cooling papers greatly outnumber the 22 warming papers (74% to 26%), if we ignore the neutral papers (as was done in Cook et al (2013)). The 74% to 26% majority is an overwhelming cooling consensus. Additionally, this is probably the period when the 1970s “global cooling consensus” originated, because cooling was clearly an established scientific consensus – not the myth that PCF-08 contend.
  2. The 1977-1979 period when warming papers slightly outnumber the cooling papers (52% to 48%) – a warming majority but not a consensus.

The following observations are also worth noting from Figure 2 for the 1965-1979 reference period:

  1. Of the 190 papers in the database, there are 86 cooling, 58 neutral and 46 warming papers. In percentage terms, this equates to 45% cooling papers, 31% neutral papers and 24% warming papers, if we use all of the data.
  2. The cooling consensus increases to 65%, compared with 35% warming – a considerable cooling consensus – if we ignore the neutral papers (as was done in Cook et al (2013)).
  3. The total number of cooling papers is always greater than or equal to the number of warming papers throughout the entire reference period.
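For readers who want to check the arithmetic, a minimal Python sketch of the consensus calculation is given below. The counts are those quoted above for PCF-08 (7 cooling, 20 neutral, 44 warming) and for this review (86 cooling, 58 neutral, 46 warming); the helper function itself is only illustrative and is not part of either survey.

```python
# Minimal sketch of the consensus arithmetic used above, for both the
# PCF-08 survey and the database compiled for this review.

def consensus_shares(cooling, neutral, warming):
    """Return (cooling %, warming %) with and without the neutral papers."""
    total = cooling + neutral + warming
    with_neutral = (100 * cooling / total, 100 * warming / total)
    without_neutral = (100 * cooling / (cooling + warming),
                       100 * warming / (cooling + warming))
    return with_neutral, without_neutral

for label, counts in {"PCF-08": (7, 20, 44), "this review": (86, 58, 46)}.items():
    incl, excl = consensus_shares(*counts)
    print(f"{label}: cooling/warming = {incl[0]:.0f}%/{incl[1]:.0f}% (all papers), "
          f"{excl[0]:.0f}%/{excl[1]:.0f}% (neutral papers excluded)")
```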

Although not presented in Figure 2, it is worth noting that 30 papers refer to the possibility of a New Ice Age or the return to the “Little Ice Age” (although sometimes they used the term “Climate Catastrophic Cooling”). Timescales for the New Ice Age vary from a few decades, through a century or two, to several millennia. The 30 “New Ice Age” papers are not insignificant when compared with the 46 warming papers.

Conclusions

A review of the climate science literature of the 1965-1979 period is presented and it is shown that there was an overwhelming scientific consensus for climate cooling (typically 65% for the whole period, ignoring neutral papers), with cooling papers outnumbering warming papers by approximately 3-to-1 during the 1968-1976 period, when there were 65 cooling papers (74%) compared with 22 warming papers (26%).

It is evident that the conclusion of the PCF-08 paper, The Myth of the 1970s Global Cooling Scientific Consensus, is incorrect. The current review shows the opposite conclusion to be more accurate. Namely, the 1970s global cooling consensus was not a myth – the overwhelming scientific consensus was for climate cooling.

It appears that the PCF-08 authors have committed the transgression of which they accuse others; namely, “selectively misreading the texts” of the climate science literature from 1965 to 1979. The PCF-08 authors appear to have done this by neglecting the large number of peer-reviewed papers that were pro-cooling.

I find it very surprising that PCF-08 only uncovered 7 cooling papers and did not uncover the 86 cooling papers in major scientific journals, such as the journals of the American Meteorological Society, Nature, Science, Quaternary Research and the similar journals that they reviewed. For example, PCF-08 found only 1 paper in Quaternary Research, namely the warming paper by Mitchell (1976); however, this review found 19 additional papers in that journal, comprising 15 cooling, 3 neutral and 1 warming.

I can only suggest that the authors of PCF-08 concentrated on finding warming papers instead of conducting the impartial “rigorous literature review” that they profess.

If the current climate science debate were more neutral, the PCF-08 paper would either be withdrawn or subjected to a detailed corrigendum to correct its obvious inaccuracies.

Afterword

I reiterate that no literature survey can be 100% complete. Therefore, if you uncover additional references then please send them to me in the comments. It would make this review much better if we could significantly increase the number of relevant references.

Additionally, if you disagree with the classification of some of the references then please let me know why you disagree and I will consider appropriate amendments. Your comments on classification would certainly increase the veracity of the review by providing an independent assessment of my classifications.

References

The references used in this review and their classification are included in the spreadsheet here:

References-Global Cooling Consensus.xlsx

Revision 02

29-Dec-2022: Minor errors corrected by Angus McFarlane.


Documenting the Global Extent of the Medieval Warm Period

Repost from Watts up with That here: WUWT

Some typos in the text and diagrams have been corrected

Introduction

In this article I pose the following questions:

  • Was the Medieval Warm Period (MWP) a global event?
  • Were the MWP temperatures higher than recent times?

The reasons for asking these questions are that the climate establishment has tried to sideline the MWP as a purely local North Atlantic event. It also frequently states that current temperatures are the highest ever.

I attempt to answer these questions below.

Mapping Project for the Medieval Warm Period

I use the mapping project for the Medieval Warm Period (MWP) developed by Dr Sebastian Luening and Fritz Vahrenholt to establish the global extent of the MWP. This project is a considerable undertaking and I commend the authors for their work.

Luening states on Research Gate that,

“The project aims to identify, capture and interpret all published studies about the Medieval Climate Anomaly (Medieval Warm Period) on a worldwide scale. The data is visualized through a freely accessible online map: http://t1p.de/mwp.”

I show a screenshot from the project in Figure 1 but I recommend that you use the online version at http://t1p.de/mwp because it contains a wealth of data, including abstracts and links to the individual papers.

(Note – The t1p.de/mwp URL given may or may not be reachable by some viewers as it’s one of those mangled “URL shorteners” that may or may not work. It does not work for me. The author claims the URL is valid, and offered no alternate, I independently found this link to a Google maps document which might work better for some readers – Anthony)


Figure 1: Screenshot of MWP Mapping Project (Source: Luening http://t1p.de/mwp downloaded 27-Dec-2016)

A cursory inspection of Figure 1 indicates that there are a large number of warm study locations dispersed throughout the world. However, to determine the global numbers for the Warm-Cold-Neutral-Dry-Wet studies, I downloaded the mapped data for the 934 studies that were available on 30 December 2016 and these are summarised in Figure 2.

[Figure 2(a): temperature and hydroclimate data; Figure 2(b): temperature data only]

Figure 2: Results from MWP Mapping Project (Source: Luening http://t1p.de/mwp downloaded 30-Dec-2016)

The following observations are evident from Figure 2:

a. The number of Warm studies (497) greatly exceeds the number of other studies: Warm studies make up 53% of all studies when temperature and hydroclimate data are included and 88% when only temperature data are used.

b. The number of Cold studies (18) is very small, at 2-3% of the overall studies.

c. The number of Neutral studies (53) is comparatively low, at 6-9% of the overall studies.

d. The number of studies that report only Hydroclimatic data is not insignificant. The number of Dry studies (184) and the number of Wet studies (182) are 20% and 19% of the overall studies respectively.

In summary, the overwhelming evidence from the Luening MWP Mapping Project to date is that the MWP was globally warm, but it is not immediately obvious how “warm” is defined.

Descriptions of warm or cold are given in the individual studies. For example, Kuhnert & Mulitza (2011: GeoB 9501) (extracted from Luening) states that the,

“Medieval Warm Period 800-1200 AD was about 1.1°C warmer (50 years mean) than subsequent Little Ice Age.”

Now, whilst this description is useful, it does not allow us to compare MWP temperatures with modern warming. Therefore, I compare modern temperatures with the MWP below.

How Warm was the MWP?

Several of the references in the Luening MWP Mapping Project are also referenced in Ljungqvist (2010); therefore, the latter may be used to estimate the temperature in the Northern Hemisphere (NH) during the MWP.

Note that I have used the Ljungqvist (2010) NH paper because I already had the data available on my laptop. However, readers could carry out a similar exercise by using their preferred study to derive appropriate temperatures for global or hemispherical temperatures for comparison with the present day.

Ljungqvist published his data along with his paper (other climate scientists should do the same) and Figure 3 shows my version of the Ljungqvist chart (reproduced using his data). I have also annotated the chart to show previous warm and cold periods.


Figure 3: Estimate of Extra-tropical Northern Hemisphere (90–30°N) Decadal Mean Temperature Variations (after Ljungqvist, 2010)

a. All values are decadal mean values.

b. Proxy temperatures are shown in the thick black line and are relative to the 1961–1990 mean instrumental temperature from the variance adjusted CRUTEM3+HadSST2 90–30°N record.

c. Two standard deviation error bars are shown in light grey shading.

d. CRUTEM3+HadSST2 instrumental temperatures are shown in the red line.

The following observations are evident from Figure 3:

a. The Medieval Warm Period (MWP) temperature is warmer than the Modern Warm Period proxy temperature – at no time do the modern proxies exceed the MWP proxies.

b. The recent instrumental temperature is warmer than the Medieval Warm Period temperature.

c. The MWP is obviously preindustrial and it would be logical to use the MWP temperature as the preindustrial temperature, instead of the Little Ice Age (LIA) favoured by the climate establishment.

Observations (a) and (b) above are based on Ljungqvist’s comments. He states that [emphasis added],

“…the Medieval Warm Period, from the ninth to the thirteenth centuries, seem to have equalled or exceeded the AD 1961-1990 mean temperature level in the extra-tropical Northern Hemisphere.”

He also adds the following words of caution [emphasis added],

“Since AD 1990, though, average temperatures in the extra-tropical Northern Hemisphere exceed those of any other warm decades the last two millennia, even the peak of the Medieval Warm Period, if we look at the instrumental temperature data spliced to the proxy reconstruction. However, this sharp rise in temperature compared to the magnitude of warmth in previous warm periods should be cautiously interpreted since it is not visible in the proxy reconstruction itself.”

Regarding Ljungqvist’s note of caution and item (b) above, comparing instrumental temperatures with those derived from proxies is not an, “apples for apples” comparison because proxy temperatures are damped (flattened out) whilst thermometers respond very quickly to changes in temperature.

Ljungqvist describes this damping of proxy temperatures thus [emphasis added],

“The dating uncertainty of proxy records very likely results in “flattening out” the values from the same climate event over several hundred years and thus in fact acts as a lowpass filter that makes us unable to capture the true magnitude of the cold and warm periods…What we then actually get is an average of the temperature over one or two centuries.”

However, the flattening out of proxy records is not the only problem when comparing proxies with instrumental temperatures – there is also the “divergence problem.” We examine this for the Ljungqvist’s 1850-1989 calibration period in Figure 4.


Figure 4: Comparison of Proxy & Instrumental Decadal Mean Temperature Variations (after Ljungqvist, 2010)

The decadal correlation (r = 0.95, r² = 0.90) between proxy and instrumental temperatures is very high during the calibration period AD 1850-1989. However, note that the 1990-1999 instrumental peak is not used in the calibration, probably because Figure 4 shows that the instrumental peak temperature is 0.39°C for 1990-1999, which diverges from the proxy temperature of 0.06°C – a difference of 0.33°C.

This divergence is outside the 2 standard deviation error bars of ±0.12°C reported by Ljungqvist and it illustrates the divergence problem in a nutshell – proxies do not follow the instrumental record for recent high temperatures. The reason for this divergence is that the proxy response is probably nonlinear instead of the linear response that is assumed in all climate reconstructions to date.

Ljungqvist explains the nonlinearity [emphasis added],

“One circumstance that possibly has led to an underestimation of the true variability is that we must presuppose a linear response between temperature and proxy. If this response is nonlinear in nature, which is often likely the case, our interpretations necessarily become flawed. This is something that may result in an underestimation of the amplitude of the variability that falls outside the range of temperatures in the calibration period.”

As a structural engineer, I agree with Ljungqvist that nonlinearity is often likely to be the case for proxies. Most natural materials, e.g., timber, rock, etc., exhibit linear-nonlinear behaviour. However, this concept is difficult to explain in words, therefore, I explain the concept diagrammatically in Figure 5.


Figure 5: Indicative Nonlinear Response for a Temperature Proxy

For clarity, I show the positive part of the temperature response in Figure 5 but it should be noted that there would be a similar negative response. I now explain Figure 5 as follows:

a. Line OAB represents the linear proxy response to temperature currently used in reconstructions. Parameter P1 yields a temperature T1 (via point A) and the response increases linearly so that parameter P2 indicates a higher temperature T2 (via point B).

b. Line OACD represents the indicative nonlinear proxy response to temperature. Parameter P1 is in the linear portion of the curve and therefore results in the same temperature T1 as that in item (a) above.

c. In addition to item (b), the same parameter value P1 also intersects the nonlinear portion of the curve at point C, which would result in the temperature T2 – significantly higher than the value of T1 obtained from the linear response via point A.

In summary, a nonlinear proxy-temperature response can result in two temperature values for a single proxy value, which could be problematic in deciding which temperature result to use.

However, for modern temperatures, the nonlinear response of the proxy can be calibrated against the instrumental record. For example, if the temperature results in Figure 4 were for a single proxy then the following procedure could be used:

a. The nonlinear curve would be calibrated so that temperature T1 = 0.08°C would be in the linear portion of the curve in Figure 5.

b. The temperature T2 = 0.39 °C would be calibrated to be in the nonlinear portion of the curve in Figure 5.

Note that the linear response used in Figure 4 shows T2 = 0.06°C but the nonlinear response procedure above allows the calibration T2 = 0.39°C. This type of nonlinear calibration would mean that historical temperatures would be higher than those currently estimated by using a linear proxy-temperature response.
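To make this idea concrete, the following minimal Python sketch assumes a saturating (tanh-shaped) proxy response and compares a linear inversion with a nonlinear one. The functional form and the parameter B are assumptions for illustration only; the temperatures 0.08 °C and 0.39 °C are the proxy and instrumental values quoted from Figure 4.

```python
import numpy as np

# Minimal sketch (assumed functional form and numbers): if the true proxy
# response saturates, a linear calibration fitted near the origin recovers
# modest temperatures well but under-reports values beyond its range.

B = 0.2  # assumed "saturation" scale of the proxy response

def proxy_value(temp):
    """Assumed nonlinear (saturating) proxy response to temperature."""
    return np.tanh(temp / B)

def linear_estimate(p):
    """Invert the proxy assuming the linear response fitted near T = 0
    (the slope of tanh(T/B) at T = 0 is 1/B)."""
    return p * B

def nonlinear_estimate(p):
    """Invert the assumed saturating response directly."""
    return B * np.arctanh(np.clip(p, -0.999, 0.999))

# 0.08 °C sits in the near-linear range; 0.39 °C (the 1990-1999
# instrumental peak quoted above) does not, so the linear inversion
# under-reports it.
for true_temp in (0.08, 0.39):
    p = proxy_value(true_temp)
    print(f"true T = {true_temp:.2f} C -> linear estimate = "
          f"{linear_estimate(p):.2f} C, nonlinear estimate = "
          f"{nonlinear_estimate(p):.2f} C")
```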

Furthermore, the calibration in modern times is relatively straightforward because we can compare proxies with instrumental temperatures to determine which temperature is correct, T1 or T2. However, historical temperatures are more problematic because we do not have instrumental records for several thousands of years to determine the correct temperature, T1 or T2. Nevertheless, we can estimate the correct temperature by deploying a similar methodology to that used by earthquake engineers as explained below.

Earthquake engineers routinely design structures for 1-in-500 year seismic events and for special structure (e.g., nuclear installations) 1-in-5,000 year events. However, they have only had the seismograph to measure the strength of earthquakes from 1935, when it was invented by Charles Richter. Earthquake engineers solve this short instrumental record dilemma by investigating historical descriptions of seismic events.

For example, the historical record for an earthquake may include a description such as, “The earth boiled.” This literal description does not mean that the earth actually reached boiling. It means that the groundwater in the soil bubbled to the surface – a process known as liquefaction of the soil. Earthquake engineers can then use this description to estimate the amount of shaking that would cause liquefaction and thus estimate the strength of the earthquake at that location.

In climate science, the temperature, T1 or T2, could also be estimated from historical descriptions by using changes in vegetation, human habitation, etc. For example, if our proxy were in Greenland then the following outcomes are possible:

a. If the land were in permafrost at the date of the proxy then the lower temperature T1 would be used.

b. It the land was being cultivated by Vikings at the date of the proxy (i.e., no permafrost) then the higher temperature T2 would be used.

The above nonlinear methodology would obviously require more work by paleo scientists when compared with the current linear calibration methods but this work is no more than is routinely carried out by earthquake engineers.

Additionally, up-to-date proxies for the post-2000 period would be useful to determine if the divergence between proxies and instrumental temperatures has increased. The proxies would also enable a more accurate nonlinear proxy-temperature calibration, say, point D in Figure 5. Strangely, there seems to be a reluctance in the climate establishment to publish up to date proxies.

Using a nonlinear proxy-temperature response would result in historical high temperatures being calculated to be higher and historical low temperatures being calculated to be lower.

Conclusions

A review of the global extent of the MWP is presented and the following conclusions are offered:

  1. The MWP was a global event and a large number of studies show that warming events overwhelmingly outnumber cold events.
  2. However, the not insignificant number of dry or wet events recorded in the MWP Mapping Project would suggest that perhaps the Medieval Climate Anomaly would be a better description than the MWP.
  3. NH temperatures during the MWP were at least as warm as those in the 1980-1989 instrumental record.
  4. Recent instrumental temperatures show higher temperatures when compared with the MWP proxies. However, instrumental temperatures should not be compared directly with proxy temperatures because this is not an “apples for apples” comparison. Proxy temperatures are dampened (flattened) out on decadal or greater scales.
  5. Recent proxy records diverge from instrumental temperatures – instruments show higher readings when compared with proxies.
  6. The divergence problem in item (5) above is probably due to a linear proxy-temperature response being assumed in current temperature reconstructions. A nonlinear proxy-temperature response would achieve more accurate results for historical high and low temperatures and achieve a better correlation with recent instrumental data.

Until there is a good correlation between instrumental temperatures and proxies, no reputable scientist can definitely state that current temperatures are the highest ever.

Afterword

Regarding the 2°C limit proposed by the climate establishment, the following phrase (based on the facts presented in this article) does not sound nearly as alarming as is currently promulgated in the media:

“We know that the preindustrial MWP temperatures exceeded those of the LIA by ≈ 1°C without CO2 but we also think that a similar temperature rise from the LIA to 1980-1989 was caused by CO2. Therefore, we think that we need to limit CO2 emissions to stop a further (dangerous) 1°C rise. We think.”

I think that there is too much thinking going on and not enough real climate science.

A good start would be to use nonlinear proxy-temperature responses to establish accurate historical high and low temperatures.

Is 1 °C Halfway to Hell?

Pro-anthropogenic Global Warming web sites are concerned that we are about to exceed the 1 °C value for the temperature anomaly (see Met Office, New Scientist and Skeptical Science) but is this really “uncharted territory” (Met Office) or halfway to New Scientist’s global warming hell of 2 °C?

It is shown in the following discussion that “1 °C: Halfway to Hell” is an ill-chosen headline – a more appropriate headline would be, “1 °C: Halfway to the Optimum”.

HadCRUT4 Data

The HadCRUT temperature anomalies are shown in Figure 1.

Figure 1: HadCRUT4 1850-1900 Temperature Anomaly with 1961-1990 Overlay (after Met Office chart)

The Met Office chart in Figure 1 is unusual in that the HadCRUT anomaly data are usually referenced to the 1961-1990 mean temperature. Using a pre-industrial mean of 1850-1900 is not strictly correct because the industrial revolution ran from approximately 1760 to sometime between 1820 and 1840 (Wikipedia). Consequently, the use of an 1850-1900 mean for pre-industrial temperatures is an arbitrary choice.

Therefore, I have overlain the HadCRUT4 data (1961-1990 mean), plotted as the blue line, and it is evident from Figure 1 that using a mean of 1850-1900 raises the anomaly by ≈ 0.3 °C when compared with the 1961-1990 mean. This gives the pre-industrial anomaly greater visual impact than the usual HadCRUT4 values.
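For readers who want to reproduce the overlay, re-referencing an anomaly series to a different baseline is a one-line adjustment, as the minimal sketch below shows. The sample values are placeholders, not HadCRUT4 data; with the real series the 1850-1900 offset works out at roughly 0.3 °C, as noted above.

```python
# Minimal sketch of re-referencing an anomaly series from one baseline to
# another.  HadCRUT4 is published relative to the 1961-1990 mean; to express
# it relative to an 1850-1900 "pre-industrial" mean, subtract the series'
# average over that window.  The values below are placeholders.

anom_1961_1990 = [-0.30, -0.28, -0.33, -0.35, -0.25,   # placeholder anomalies
                  -0.27, -0.31, -0.29, -0.32, -0.30]   # (degC, 1961-1990 base)

# Baseline offset = mean of the series over the chosen pre-industrial window
offset = sum(anom_1961_1990) / len(anom_1961_1990)

# Same data, now expressed relative to the 1850-1900-style baseline
anom_preindustrial = [a - offset for a in anom_1961_1990]

print(f"baseline shift applied: {offset:.2f} degC")
```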

The use of the 1850-1900 mean as the basis for the 1 °C rise is unusual because the IPCC reports have always referred to the 1961-1990 mean from HadCRUT for climate projections, e.g., see IPCC AR4 FAQ 3.1, Figure 1. However, we can use other data to determine a real pre-industrial mean as discussed below.

A Different Pre-industrial Benchmark

A chart from Ljungqvist (2010) is presented in Figure 2 that shows temperature fluctuations in the northern hemisphere for the last two millennia.

Figure 2: Reconstructed Extra-tropical (30-90 °N) Decadal Temperature Anomaly relative to the 1961-1990 Mean (after Ljungqvist, 2010)

The purpose of Ljungqvist (2010) is to assess the amplitude of the pre-industrial temperature variability. It is evident from Figure 2 that there have been previous warm and cold periods over the last two millennia but that most of the temperatures have been cooler than the 1961-1990 mean. Furthermore, Ljungqvist notes that the Roman Warm Period (RWP) and the Medieval Warm Period (MWP)

“seem to have equalled or exceeded the AD 1961-1990 mean temperature level in the extra-tropical Northern Hemisphere.”

The following points are worth noting from Figure 2:

  1. The instrumental data (shown in red at the right hand side of the chart) represent a comparatively small portion of the data available.
  2. The modern proxy peak temperature is 0.082 °C, which is 0.114 °C lower than the MWP peak of 0.196 °C.
  3. Using 1850-1900 as the base for pre-industrial temperature is a relatively cold benchmark for temperature measurements. For example, the 1850-1920 instrumental mean is -0.299 °C, which is 0.495 °C lower than the MWP peak.
  4. It is apparent that we could use the 1961-1990 mean as a suitable base for pre-industrial temperatures. We could even use the mean of the Medieval Warm Period, which is 0.041 °C higher than the 1961-1990 mean.

It is also worth noting that one of Ljungqvist’s conclusions is that,

“Since AD 1990, though, average temperatures in the extra-tropical Northern Hemisphere exceed those of any other warm decades the last two millennia, even the peak of the Medieval Warm Period, if we look at the instrumental temperature data spliced to the proxy reconstruction. However, this sharp rise in temperature compared to the magnitude of warmth in previous warm periods should be cautiously interpreted since it is not visible in the proxy reconstruction itself.” [my emphasis]

Indeed, Ljungqvist states that the proxy records result in “flattening out” the values,

“that makes us unable to capture the true magnitude of the cold and warm periods in the reconstruction… What we then actually get is an average of the temperature over one or two centuries.”

In other words, we should be careful when comparing earlier temperatures with recent readings – we should only compare proxies with proxies, not proxies with thermometers.

Same Data: Different Perception

There are good scientific reasons for displaying temperatures as anomalies because it allows widely different temperatures from geographically disparate regions to be compared. Nevertheless, they do not need to be displayed in the format of Figure 1. It appears that one of the intentions of displaying temperatures in the Figure 1 format is to depict the temperature rise as being unusual and rising rapidly. However, this is not the case.

A temperature change of approximately 1 °C for 165 years from 1850 to 2015 is almost undetectable by human beings. To illustrate this I use actual HadCRUT4 global temperatures (not anomalies) in Figure 3.

Figure 3: Global Average Temperature (1850-2015)

The chart presented in Figure 3 uses the HadCRUT 1961-1990 mean global temperature (14 °C) as its baseline (see FAQ here) and the actual temperatures are derived by adding the anomaly data (here) to the 14 °C baseline. Figure 3 is based on a diagram by Anthony Watts (published in Climate Change: The Facts, 2014) and it gives a less alarming view of global warming than the anomaly diagram in Figure 1.
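A minimal sketch of the Figure 3 construction is given below. The three anomaly values correspond to the absolute temperatures quoted in the text (13.63 °C, 13.45 °C and 14.70 °C); the plotting details and axis limits are illustrative choices, not the exact settings used for Figure 3.

```python
# Minimal sketch of how Figure 3 is constructed: anomalies (1961-1990 base)
# are converted to absolute temperatures by adding the 14 degC baseline, then
# plotted on an axis spanning temperatures people actually experience.

import matplotlib.pyplot as plt

BASELINE_C = 14.0  # HadCRUT 1961-1990 global mean temperature

anomalies = {1850: -0.37, 1911: -0.55, 2015: 0.70}   # gives 13.63, 13.45, 14.70 degC
absolute = {year: BASELINE_C + a for year, a in anomalies.items()}

fig, ax = plt.subplots()
ax.plot(list(absolute), list(absolute.values()), marker="o")
ax.set_ylim(-16, 51)  # author's experienced extremes: Scotland to Dubai
ax.set_ylabel("Global average temperature (degC)")
plt.show()
```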

The following points are worth noting from Figure 3:

  1. The current high value for temperature (September 2015) is 14.70 °C and the lowest recorded value of 13.45 °C occurred in 1911. The starting value of the HadCRUT4 series is 13.63 °C in 1850.
  2. I have used temperatures from my own personal experience to determine a reasonable scale for the vertical axis, ranging from a high of 51 °C in Dubai to a low of -16 °C in Scotland.
  3. I also show temperatures from my new home in Sydney. These are more benign than those in item (2) above but they still show a large range from a high of 45.8 °C to a low of 2.1 °C.

Furthermore, by using temperature data from environs in which I have lived, it is evident that a temperature change of 1 or 2 °C is very small and is not unusual for most flora, fauna and humans. Nevertheless, let us examine if a 2 °C rise would cause serious climatic damage by discussing the Holocene Optimum.

The Holocene Optimum

The first IPCC report, FAR, presented the diagrams shown in Figure 4 for temperature variations over the last ten thousand years, 4(b), and the last one thousand years, 4(c).

The charts in Figure 4 are based on the work of Lamb, who was the founding director of the Climatic Research Unit (CRU) that produces the HadCRUT temperature data in conjunction with the Met Office. HadCRUT data are used extensively by the IPCC.

Figure 4: Schematic Diagrams of Global Temperature Variations (Source: FAR Figure 7.1)

The similarity between Figure 4(c) and Ljungqvist’s chart in Figure 2 is remarkable, considering that Figure 4 was published in 1990. The dotted line in diagrams 4(b) and 4(c) is stated in FAR as nominally representing conditions near to the beginning of the twentieth century. Unfortunately, the diagrams in Figure 4 do not show values for the temperature scale. Therefore, I use Marcott et al (2013), which is referenced in AR5 WG1, to supply these values.

Marcott et al (2013) has been criticised for showing a spurious uptick in temperature in the 20th century. Indeed, Marcott stated in the paper and in RealClimate that this uptick is “probably not robust.” Consequently, I have used Roger Pielke, Jr’s version of Marcott’s diagram as Figure 5, in which the spurious data are deleted.

Figure 5: Holocene Global Temperature Variations (Source: Marcott Figure 1B, amended by Pielke)

Approximately 80% of the Marcott et al (2013) proxies are of marine origin and consequently underestimate the variability in land temperatures. Nevertheless, several useful conclusions are obtained by Marcott et al (2013), namely:

  1. “Global temperature for the decade 2000-2009 has not exceeded the warmest temperatures in the early Holocene.”
  2. “The early Holocene warm interval is followed by a long-term global cooling of ≈ 0.7 °C from 5,500 BP to 1850.”
  3. “The Northern Hemisphere (30-90°) experienced a temperature decrease of ≈ 2 °C from 7,000 BP to 1850.”

Spatial Distribution of Temperature during Holocene Climatic Optimum

Renssen et al (2012) use a computer simulation to derive early Holocene temperature anomalies. They call the Holocene Climatic Optimum the Holocene Thermal Maximum (HTM) and, in referring to their simulation, they state that,

“The simulated timing and magnitude of the HTM are generally consistent with global proxy evidence, with some notable exceptions in the Mediterranean region, SW North America and eastern Eurasia.”

The Renssen et al (2012) computer simulation is cited in AR5 WG1 and it presents the spatial distribution of peak temperature anomalies during the Holocene Climatic Optimum relative to a  1000-200 BP pre-industrial mean (see Figure 6).

Figure 6: Global Variation of Holocene Thermal Maximum Anomalies (Source: Renssen et al, 2012)

It is evident from Figure 6 that most of Europe and North America experienced an anomaly of 2-3 °C during the Holocene Thermal Maximum (HTM) and Renssen et al (2012) offer the following conclusions:

  1. “At high latitudes in both hemispheres, the HTM anomaly reached 5 °C.”
  2. “Over mid-to-high latitude continents the HTM anomaly was between 1 and 4 °C.”
  3. “The weakest HTM signal was simulated over low-latitude oceans (less than 0.5 °C) and low latitude continents (0.5-1.5 °C).”

I reiterate that Renssen et al (2012) use a pre-industrial mean of 1,000-200 BP, which is ≈ 0.3 °C less than the HadCRUT4 (1961-1990) mean. Therefore, we should add ≈ 0.3 °C to their values when comparing them with modern-day temperatures. Notwithstanding the aforementioned, it should be noted that the Renssen et al values are peak values and that global temperatures would be lower than their peak values.

Discussion

Current temperatures are examined with regard to the approaching 1 °C anomaly and the following standpoints are evident from the discussion:

  1. Portraying current temperatures as an anomaly from the 1850-1900 mean gives the false impression that current temperatures are high because it is shown that temperatures during this period were very low when compared with other warm periods, either in the last two millennia (Ljungqvist, 2010) or in the early Holocene (Marcott et al, 2013 or Renssen et al, 2012).
  2. A reasonable mean for pre-industrial temperatures would be 1961-1990 because this mean compares well with actual mean temperatures that occurred during times that really were pre-industrial, e.g., the Roman Warm Period and the Medieval Warm Period.
  3. The change in temperature during the last 165 years is hardly visible in Figure 3 but such plots wouldn’t normally get people overly concerned. Conversely, when an anomaly plot is deployed, the vertical scale is highly magnified as shown in Figure 1. The magnified vertical scale gives a steep slope to the temperature rise in modern times, which conveys the impression that global warming is proceeding rapidly. To the contrary, and in reality, Figure 3 shows that temperatures have been very stable over the last century and a half.
  4. Less worrying anomaly plots than that shown in Figure 1 are presented in Figures 2 and 5. These show that current temperatures are not unusual when compared with earlier warm periods.
  5. Figure 6 (Renssen et al, 2012) shows that many parts of the world experienced temperatures during the early Holocene that were significantly greater than 2 °C above the pre-industrial mean.

Conclusions

The following conclusions are evident from the above:

  1. Portraying current temperatures as an anomaly from an 1850-1900 pre-industrial mean gives the false impression that current temperatures are high because temperatures during 1850-1900 were amongst the lowest in the last 10,000 years.
  2. Global temperature for the decade 2000-2009 has not reached the warmest temperatures in the early Holocene.
  3. Northern Hemisphere temperatures would need to increase by at least 2 °C above the (1850-1900) pre-industrial mean to reach temperatures experienced during the Holocene Climatic Optimum.

I contend that “1 °C: Halfway to Hell” is an inappropriate headline – a more appropriate headline would be, “1 °C: Halfway to the Optimum”, especially, if you live in the Northern Hemisphere.

Hansen 1988 Revisited

Hansen’s 1988 temperature projections have recently received quite a bit of attention, e.g., RealClimate, WUWT and SkS. The pro-AGW sites state that Hansen has done very well, whereas the anti-AGW sites say that he hasn’t. Therefore, I thought that it would be a good time to revisit Hansen’s work to determine how well he did.

Temperature Sensitivity & What Can We Learn?

Dana1981 @ SkS states that:

“The observed temperature change has been closest to Scenario C, but actual emissions have been closer to Scenario B. This tells us that Hansen’s model was “wrong” in that it was too sensitive to greenhouse gas changes. However, it was not wrong by 150%, as Solheim claims. Compared to the actual radiative forcing change, Hansen’s model over-projected the 1984-2011 surface warming by about 40%, meaning its sensitivity (4.2°C for doubled CO2) was about 40% too high.

What this tells us is that real-world climate sensitivity is right around 3°C, which is also what all the other scientific evidence tells us. Of course, this is not a conclusion that climate denialists are willing to accept, or even allow for discussion.”

Perhaps. Climate sensitivity may be ≈ 3°C but we can also learn several other things as discussed below.

How Well Did Hansen Do?

Hansen Compared With the Real World

Figure 1 shows Hansen’s scenarios compared with the GISS Land-Ocean Temperature Index (LOTI). I have also added Dana1981’s data as Scenario D. This is the Scenario B data but with the temperature sensitivity reduced from 4.2 °C to 2.7 °C. Dana did this by multiplying the Scenario B data by a factor of 0.9 × 3/4.2, which equates to a temperature sensitivity of 2.7 °C (see SkS for the data). The SkS estimate for Scenario D appears to be based on Schmidt (2009).
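A minimal sketch of that scaling is shown below. The anomaly values are placeholders rather than Hansen’s actual Scenario B series; the point is simply that Scenario D is Scenario B multiplied by the constant factor 0.9 × 3 / 4.2.

```python
# Minimal sketch of the SkS "Scenario D" construction described above:
# the Scenario B series is multiplied by 0.9 * 3 / 4.2, which scales the
# model's 4.2 degC sensitivity down to an effective 2.7 degC.
# The anomaly values below are placeholders, not Hansen's actual series.

SCALE = 0.9 * 3.0 / 4.2  # ~0.64

scenario_b = [0.40, 0.60, 0.85, 1.10]          # placeholder anomalies (degC)
scenario_d = [round(t * SCALE, 2) for t in scenario_b]

print(SCALE)       # 0.6428...
print(scenario_d)  # each value reduced to ~64% of Scenario B
```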

Figure 1: Hansen’s 1988 Scenarios compared with Real-world Temperatures

It is evident from Figure 1 that the best fit for real-world temperatures is Scenario C. However, the pro-AGW commentators at SkS state that Scenario C is irrelevant because it uses the “wrong” sensitivity of 4.2°C and incorrect emissions. Therefore, perhaps I should modify my conclusion to be that real-world temperatures are following Scenario D, which has the “right” temperature sensitivity of 2.7°C and emissions that are close to actual emissions. It makes no difference; Scenarios C and D are similar, although Scenario D has tended to under-predict temperatures for the last 30 years or so.

2012 Projections

Hansen’s temperature projections for 2012 are compared with the LOTI data in Table 1. It should be noted that the 2012 LOTI temperature estimate is based on the 12-month running average from Jun-2011 to May 2012.

Scenario | 2012 Anomaly (°C) | Comparison with LOTI (%) | Source
A | 1.18 | 226% | Hansen (1988a)
B | 1.07 | 205% | Hansen (1988a)
C | 0.60 | 116% | Hansen (1988a)
D | 0.67 | 128% | Dana (2011)
LOTI | 0.52 | 100% | GISS LOTI

Note: The comparison with LOTI is based on Scenario/LOTI.

Table 1: Comparison of Hansen’s 1988 Temperature Projections for 2012

Comparing Hansen’s temperature projections with LOTI, it is evident that Hansen didn’t do very well.

Scenarios A and B overestimated real-world temperatures by a whopping 126% and 105% respectively. Scenario D over-predicts by 28% and even the no-increase-in-emissions Scenario C over-predicts real-world temperatures by 16%.

What do we learn? We could argue that climate sensitivity should be reduced to ≈ 2.1°C to correspond to the 28% over-prediction in Scenario D. However, I would suggest that we wait a few more years to determine the trend more accurately.

2019 Projections

The timeline for Hansen’s temperature projections for 2019 is presented in Table 2. A summary of the comments made by different commentators is included to show how the favoured scenario/projection evolved with time.

Scenario | 2019 Anomaly (°C) | Comparison with Scenario D (%) | Source | Comments
B | 1.10 | 160% | Hansen (1988a) | In May 1988, Hansen states in the AGU paper that, “Scenario A, assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely…[but]…since it is exponential, must eventually be on the high side of reality in view of finite resource constraints…Scenario B is perhaps the most plausible of the three cases.”
A | 1.57 | 227% | Hansen (1988b) | In June 1988, Hansen states to the US Congressional Committee that Scenario A was “business as usual.”
B | 1.10 | 160% | Hansen (2005) | Hansen states that, “In my testimony in 1988, and in an attached scientific paper… Scenario A was described as “on the high side of reality”…The intermediate Scenario B was described as “the most plausible”… is so far turning out to be almost dead on the money.”
B | 1.10 | 160% | Hansen (2006) | Hansen assesses the predictions and states that the close agreement, “for the most realistic climate forcing (scenario B) is accidental.” He states the current estimate for sensitivity is 3 ± 1°C.
B- | 1.00 | 144% | Schmidt (2007) | RealClimate blog: Schmidt states that forcings in Scenario B are “around 10% overestimate.”
B- | 1.00 | 144% | Schmidt (2009) | RealClimate blog: Schmidt states that Scenario B “is running a little high compared with the actual forcings growth (by about 10%)”.
B | 1.00 | 144% | Schmidt (2011) | RealClimate blog by Schmidt: “As stated last year, the Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)”.
D | 0.69 | 100% | Dana (2011) | Skeptical Science blog: climate sensitivity reduced from 4.2 to 2.7°C for Scenario B. Use this as the benchmark for comparison.
? | ? | ? | Schmidt (2012) | RealClimate blog: Schmidt states that Scenario B “is running warm compared to the real world (exactly how much warmer is unclear)”.
C | 0.61 | 88% | Hansen (1988a) | Hansen’s original Scenario C. This is the commitment scenario with emissions held at year 2000 levels. Include this as a measure of how well the other scenarios perform.

Note: The comparison with Scenario D is based on Scenario/Scenario D.

Table 2: Evolution of Hansen’s 1988 Temperature Projections for 2019

It is evident from the timeline and narrative in Table 2 that the evolution of the projected temperature is generally downwards, apart from the brief upwards spurt for the US Congressional Committee presentation in June 1988 (more on this under Unethical Behaviour later in this blog).

The following points are also evident:

  • There is a large reduction in the estimate for the 2019 temperature anomaly from Hansen’s estimate of 1.57°C in 1988 (as presented to the US Congress) to Dana’s estimate of 0.69°C in 2011.
  • Until recently (Schmidt, 2012), the overestimate in Scenario B was portrayed as ≈ 10%, but Dana at SkS (2011) showed that the overestimate was ≈ 44%.

What do we learn? All of the pro-AGW blogs state that Hansen’s Scenario B was a pretty good estimate. I suggest that an error of ≈ 44% is pretty bad.

Unethical Behaviour

Hansen’s paper, Hansen (1988a), was published in August 1988 but it is important to note that it was accepted for publication on 6 May 1988. This date is particularly relevant because, in that paper (see the May 1988 entry in Table 2), Hansen stated that Scenario A “must eventually be on the high side of reality” and that “Scenario B is perhaps the most plausible of the three cases.”

Yet, one month later, in his congressional testimony, Hansen (1988b) described Scenario A as “business as usual”.

Notice that Scenario A is stressed to be “business as usual”. No mention to Congress that Scenario B was “most plausible” and that Scenario A was “on the high side of reality”.

Later, in 2006, Hansen re-worded his account of his 1988 congressional testimony, stating that Scenario A “was described as on the high side of reality”.


From the foregoing, it is evident that Hansen did not describe to Congress in 1988 that Scenario A was on the “high side of reality”. At best, he has been economical with the truth by re-writing history and (at worst) he has been unethical and totally unprofessional.

Conclusions

I offer the following conclusions regarding Hansen 1988:

  • Temperature forecasts (sorry, should I use the politically correct term projections?) for 2019 have plummeted from 1.57°C in 1988 to 0.69°C in 2011.
  • Estimates of temperature are in error by ≈ 60% for Scenario B and ≈ 127% for Scenario A.
  • Climate sensitivity has also fallen from ≈ 4.2°C to ≈ 2.1-2.7°C, i.e., it has fallen to 50-64% of Hansen’s 1988 estimates.

These sorts of errors do not represent pretty good estimates.

Climate Models 2011: Same Data – Different Conclusions

In his blog post 2011 Updates to model-data comparisons at Real Climate, Gavin Schmidt shows the diagram in Figure 1.

Figure 1: Real World Temperatures Compared with IPCC Model Ensemble (Schmidt, 2012)

Gavin states that, “Overall, given the latest set of data points, we can conclude (once again) that global warming continues.” My perception was that there had been some cooling over the last 15 years; therefore, I decided to check Gavin’s claims.

Gavin explains that the chart shows the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v, NCDC and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs.
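As an aside, the grey envelope is straightforward to reproduce if the individual model runs are available: for each year, take the 2.5th and 97.5th percentiles across the ensemble. The sketch below uses entirely synthetic runs standing in for the AR4 ensemble; it illustrates the method, not the actual data.

```python
import numpy as np

# Minimal sketch: 95% envelope across an ensemble of model runs (synthetic data).
# `runs` has shape (n_models, n_years) and holds annual-mean anomalies in °C.
rng = np.random.default_rng(0)
years = np.arange(1980, 2012)
runs = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, size=(20, years.size))

lower = np.percentile(runs, 2.5, axis=0)   # bottom of the grey envelope
upper = np.percentile(runs, 97.5, axis=0)  # top of the grey envelope
mean = runs.mean(axis=0)                   # ensemble mean (the central line)

print(years[-1], round(lower[-1], 2), round(mean[-1], 2), round(upper[-1], 2))
```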

At first glance the chart seems to show a good correspondence between real world temperature and the average of the IPCC models. However, the correspondence does not look quite so good when you compare the chart with the AR4 charts. I have updated AR4 Figures 1.1 and TS.26 to include the HadCRUT data up to May 2012 and discuss these as follows.

Figure 2 is derived from Figure 1.1 of IPCC AR4.

Figure 2: Global Average Temperature Compared with FAR, SAR & TAR (after AR4 Figure 1.1)

It should be noted in Figure 2 that I could not get the HadCRUT3 temperature to match exactly with the values in Figure 1.1 in AR4. Therefore, I had to adjust the HadCRUT3 data by adding 0.026 °C. I am not sure why this adjustment was necessary; perhaps it is just a printing error in the AR4 diagram, but the same offset also appears elsewhere. It may be a coincidence, but the average HadCRUT3 anomaly over the 1961-1990 base period is 0.026 °C. Therefore, it may be that the AR4 chart is normalised so that the 1961-1990 mean is exactly zero. However, I can find no information that confirms that this adjustment should be made.
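Re-baselining an anomaly series is a one-line operation, so the hypothesis that the AR4 chart is zeroed on the 1961-1990 mean is easy to test. The sketch below assumes annual HadCRUT3 anomalies are supplied as a dict keyed by year; it illustrates the operation only and uses no real data.

```python
# Minimal sketch: shift an anomaly series so a chosen base period averages zero.
# `anomalies` is a {year: anomaly_in_degC} dict; substitute the real HadCRUT3 values.

def rebaseline(anomalies, base_start=1961, base_end=1990):
    """Re-express anomalies relative to the mean over the base period."""
    base = [t for yr, t in anomalies.items() if base_start <= yr <= base_end]
    offset = sum(base) / len(base)
    return {yr: round(t - offset, 3) for yr, t in anomalies.items()}

# If the published series averages +0.026 °C over 1961-1990, re-baselining shifts
# every year by 0.026 °C, which is the size of the adjustment discussed above.
```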

Notwithstanding the above, it is evident from Figure 2 that the correlation between the adjusted HadCRUT3 data and the original AR4 Figure 1.1 data is very good. This applies to both the individual data points and the smoothed data. It is also evident that the temperature trend is significantly below the FAR estimate and is at the very low ends of the SAR and TAR estimates.

In order to compare Gavin’s diagram with actual global temperatures, I use Figure TS.26 from AR4, as shown in Figure 3.

Figure 3: Model Projections of Global Mean Warming Compared with Observed Warming (after AR4 Figure TS.26)

The following points should be noted regarding Figure 3 compared with AR4 Figure TS.26:

  1. I have deleted the FAR, SAR and TAR graphic from Figure TS.26 in Figure 3 because they make the diagram more difficult to understand and because they are already presented in Figure 2, in a form that is much easier to assimilate.
  2. The temperature data shown in AR4 Figure 1.1 does not correspond to that shown in Figure TS.26. The Figure 1.1 data appear to be approximately 0.02 °C higher than the corresponding data in Figure TS.26. I have assumed that this is a typographical error. Therefore, I have used the same 0.026 °C adjustment to the HadCRUT3 data in Figure 3 that was used for Figure 2.
  3. My adjusted HadCRUT3 data points are typically higher than those presented in Figure TS.26.
  4. Despite items (1), (2) and (3) above, there is very good agreement between the smoothed data in TS.26 and the adjusted HadCRUT3 data, particularly for the 1995-2005 period. It should be noted that AR4 uses a 13-point filter to smooth the data whereas HadCRUT uses a 21-point filter. Nevertheless, AR4 states that the 13-point filter gives similar results to the 21-point filter (a simple smoothing sketch is given after this list).
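For readers who want to reproduce a smoothed curve, a centred moving average captures the idea. The sketch below is an illustrative stand-in only: the AR4 and HadCRUT filters use tapered weights rather than the flat window used here.

```python
# Minimal sketch: centred, odd-length, flat moving-average smoother for annual anomalies.
# Illustrative only; the 13- and 21-point filters in AR4 and HadCRUT use tapered weights.

def smooth(values, points=13):
    """Centred flat moving average; years where the window does not fit are dropped."""
    half = points // 2
    return [
        sum(values[i - half:i + half + 1]) / points
        for i in range(half, len(values) - half)
    ]

# Example with a short synthetic series and a 5-point window:
series = [0.10, 0.20, 0.15, 0.30, 0.25, 0.35, 0.40, 0.30, 0.45, 0.50]
print([round(v, 2) for v in smooth(series, points=5)])
```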

Comparing Gavin’s projections in the RC chart in Figure 1 with the official AR4 projections in Figure 3, the following points are evident:

  1. The emissions scenarios and their corresponding temperature outcomes are clearly shown in the AR4 chart. Scenarios A2, A1B and B1 are included in the AR4 chart – scenario A1B is the business-as-usual scenario. None of these scenarios are shown in the RC chart.
  2. Real-world temperature (smoothed HadCRUT3) is tracking below the lower estimates for the Commitment emissions scenario, i.e., the emissions-held-at-year-2000-level scenario in the AR4 chart. There is no commitment scenario in the RC chart to allow this comparison.
  3. The smoothed curve is significantly below the estimates for the A2, A1B and B1 emissions scenarios. Furthermore, this curve is below the error bars for these scenarios, yet Gavin shows this data to be well within the error bands.
  4. The RC chart shows real world temperatures compared with predictions from models that are an “ensemble of opportunity”. Consequently, Gavin states, “Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.” [My emphasis].

In summary, TS.26 from AR4 is useful for comparing real-world temperature data with the relevant emissions scenarios. By contrast, Gavin uses a chart which compares real-world temperature data with averaged model data that, by his own admission, does not represent “truth.” I suggest that this is not much of a comparison and I conclude that the AR4 chart is a much more informative comparison.

I also conclude that it is evident from Figure 3 (AR4 Figure TS.26) that there has been a pause in global warming and that some cooling is occurring. It is certainly not the case, as Gavin concluded, that, “Overall, given the latest set of data points, we can conclude (once again) that global warming continues.” Whether this cooling pause is a longer-term phenomenon or only a temporary pause, only time will tell.

2010 – The Hottest Year on Record: Is this a Cause for Concern?

GISS report that 2010 has tied with 2005 as being the hottest year on record. James Hansen, the director of GISS, said that, “If the warming trend continues, as is expected, if greenhouse gases continue to increase, the 2010 record will not stand for long.”

Is this a cause for concern?

GISS Data Compared with Hansen’s Scenarios (2006)

The GISS Land Ocean Temperature Index (LOTI) data up to January 2011 are shown in Figure 1. They are compared with the global warming models presented by Hansen (2006).


Figure 1: Scenarios A, B and C Compared with Measured GISS Land-Ocean Temperature Index (after Hansen, 2006)

The blue line in Figure 1 denotes the GISS LOTI data and Scenarios A, B and C describe various CO2 emission outcomes. Scenarios A and C are upper and lower bounds. Scenario A is “on the high side of reality” with an exponential increase in emissions. Scenario C has “a drastic curtailment of emissions”, with no increase in emissions after 2000. Scenario B, described as “most plausible”, is expected to be closest to reality. The original diagram can be found in Hansen (2006). It is interesting to note that, in his testimony to the US Congress, Hansen (1988) describes Scenario A as “business as usual”, which somewhat contradicts his “on the high side of reality” statement in 2006.

It is evident from Figure 1 that the best fit for actual temperature measurements is currently the emissions-held-at-year-2000-level Scenario C. The current temperature anomaly is 0.61 °C. Therefore, even with temperatures at record highs, we are not experiencing the runaway temperatures predicted for the “business-as-usual” Scenario A. Indeed, for Scenario C with emissions curtailed at year-2000 levels, the rate of temperature increase is an insignificant 0.01 °C/decade.
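The 0.01 °C/decade figure is simply a linear trend over the relevant years. A minimal sketch of that calculation is shown below; the series used is a placeholder, not the actual Scenario C or LOTI data.

```python
import numpy as np

# Minimal sketch: least-squares linear trend expressed in °C per decade.
# `years` and `anoms` are placeholders; substitute the series of interest.
years = np.arange(2000, 2011)
anoms = np.array([0.42, 0.55, 0.63, 0.62, 0.54, 0.69, 0.63, 0.66, 0.54, 0.64, 0.72])

slope_per_year, intercept = np.polyfit(years, anoms, 1)
print(f"trend: {slope_per_year * 10:.2f} °C/decade")
```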

It is also worth noting that we are currently at the lower end of the range of estimated temperatures for the Holocene optimum and the prior interglacial period. These occurred without human intervention or huge increases in carbon dioxide.

HadCRUT3 Compared with IPCC AR4

The above comparison based on Hansen (2006) uses relatively old climate models. Therefore, I have compared current HadCRUT3 temperature data with the latest IPCC AR4 (2007) models in Figure 2.

Figure 2: IPCC Scenarios A1B, A2 & B1 Compared with HadCRUT3 Temperature Data (after AR4, 2007)

Figure 2 is based on IPCC AR4 Figure TS.26. I have added the HadCRUT3 data as blue dots. The black dots in the original TS.26 diagram appear to be HadCRUT3 data but are slightly misaligned. Therefore, I offset the HadCRUT3 data by adding 0.018°C to achieve a reasonable fit with the individual data points shown in AR4. The blue line with white dots is the smoothed HadCRUT3 data. It is evident from Figure 2 that the smoothed curve gives an excellent fit with the observed data presented as the solid black line in AR4. The current temperature anomaly is 0.52 °C.

The observed temperature trends in Figure 2 are significantly below the “likely” warming scenarios presented in AR4. Furthermore, as with the GISS data, the current HadCRUT3 trend is similar to the emissions-held-at-year-2000-level scenario.

Conclusions

Two comparisons have been presented, of the GISS LOTI data and the HadCRUT3 data against their respective temperature simulation models, and the following conclusions are offered:

  1. Observed temperatures are significantly below the “most plausible” or “likely” high emissions scenarios. Instead, they are on a trajectory that is similar to the emissions-held-at-year-2000-level scenarios.
  2. Current temperatures are at the lower end of the range of estimated temperatures for the Holocene optimum and the prior interglacial period. These temperatures occurred without human intervention.

In summary, global temperatures may be reaching record highs but they are not following “runaway” trajectories suggested by computer models. Instead, they are following an insignificant warming trend of approximately 0.01 °C/decade.

Notwithstanding the above, it should be noted that the time period for the comparison of actual temperature measurements with those predicted by computer models is still relatively short. Hansen (2006) suggests that we could expect reasonable distinction between the scenarios and a useful comparison with the real world by 2015.