On November 19, thousands of email messages and documents from the University of East Anglia’s Climatic Research Unit (CRU) were leaked to a Russian website. The leaked material suggested that CRU researchers may have manipulated historical temperature data, hidden data from other researchers, and tried to interfere with the scientific peer review process. A day later, British journalist James Delingpole dubbed the affair “Climategate,” and a new front was opened in the global warming wars. reason asked its science correspondent and longtime climate science watcher, Ronald Bailey, some questions to help navigate readers through the controversy.
What is East Anglia’s Climatic Research Unit?
The CRU, one of the world’s leading climate study centers, has constructed one of the three main records of global surface temperature trends since the mid-19th century. These records indicate that the earth’s average surface temperature has increased by about 0.8 degree Celsius since 1860. These data are used to validate the results of the computer climate models that have been predicting steep rises in global temperatures as a result of increases in atmospheric carbon dioxide, which is produced mainly by burning fossil fuels. The CRU researchers are deeply involved as participants and reviewers in the process of producing assessment reports by the United Nations Intergovernmental Panel on Climate Change (IPCC).
What is the IPCC?
Set up by the United Nations in 1988, the IPCC is responsible for evaluating the risks associated with man-made climate change. The IPCC assessment reports, released roughly every five to six years, are supposed to be authoritative summaries of current climate science, guiding policy makers’ decisions on how to address man-made global warming under the United Nations Framework Convention on Climate Change.
How much does the broad understanding of surface temperatures depend on East Anglia’s research?
Phil Jones, the CRU’s embattled head, tried to allay concerns about the integrity of his center’s data by issuing this statement: “Our global temperature series tallies with those of other, completely independent, groups of scientists working for NASA and the National Climate Data Center [sic] in the United States, among others. Even if you were to ignore our findings, theirs show the same results. The facts speak for themselves; there is no need for anyone to manipulate them.”
In addition to the CRU surface temperature data, there are two other leading sources used by the IPCC, one created by the National Aeronautics and Space Administration’s Goddard Institute for Space Studies (GISS), the other by the National Oceanic and Atmospheric Administration’s National Climatic Data Center (NCDC). But in 2007 University of Colorado climatologist Roger Pielke and his colleagues noted in the Journal of Geophysical Research that “the raw surface temperature data from which all of the different global surface temperature trend analyses are derived are essentially the same.” They said “the best estimate that has been reported is that 90–95 percent of the raw data in each of the analyses is the same. That the analyses produce similar trends should therefore come as no surprise.”
A leaked email message from Jones himself appears to confirm the data overlap: “Almost all the data we have in the CRU archive is exactly the same as in the Global Historical Climatology Network (GHCN) archive used by the NOAA National Climatic Data Center,” he wrote. Given this interdependence, Jones’ defense that the CRU’s data correlate with other data sets is less reassuring than it sounds. Indeed, the fact that the three main sets of research correlate so well may provoke concerns about the validity of each.
Where do these data come from?
The data sets are derived from instrumental surface records (essentially thermometers) from weather stations around the globe, in some cases going back to the middle of the 19th century, combined with sea surface measurements from sources such as the International Comprehensive Ocean-Atmosphere Data Set.
What did the leaked email messages and documents say?
In one message, researcher Tom Wigley described adjusting mid-20th-century global temperature data in order to lower an inconvenient warming “blip.” Editing the 1940 temperature spike downward by 0.15 degree Celsius makes a better-looking trend line in support of the notion of rapidly accelerating man-made warming. The adjustment may have been reasonable given changes in instrumentation, but all raw data and the methodologies used to adjust them should be publicly available so others can check to make sure.
In another set of troubling email messages, the CRU crew and associates discussed how to freeze out researchers and editors who expressed doubts about man-made climate change. CRU leader Phil Jones, for example, said he and researcher Kevin Trenberth would keep two dissenting scientific articles out of the IPCC’s next report “even if we have to redefine what the peer-review literature is!” In addition, the CRU crew evidently plotted to remove journal editors with whom they disagreed and suppress the publication of articles they disliked.
Then there is the “HARRY_READ_ME” file in which someone who seems to be a CRU computer programmer expresses his frustration with the muddled data sets he is trying to make sense of. At one point, he asks, “What the hell is supposed to happen here? Oh yeah—there is no ‘supposed,’ I can make it up. So I have :-)” The note does not inspire confidence in the accuracy of the temperature data, to say the least.
What does Climategate mean for the credibility of the CRU’s work?
The CRU researchers have promised to release their data and methods of adjustment. This will allow independent analyses of the leading temperature data sets to show whether skeptics are right in claiming that researchers at the CRU and elsewhere consistently chose adjustments that maximized historical temperature trends. If it turns out that the CRU’s adjustments fell into the middle or lower range of defensible choices, the credibility of its work will be bolstered.
Is it true that some of the original data are missing?
The CRU told some skeptical researchers it couldn’t send them the original raw data because “data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.” This explanation raises legitimate questions about how the lost original data were manipulated to produce the “value-added” data.
Later, University of East Anglia Pro-Vice-Chancellor Trevor Davies stated, “It is well known within the scientific community and particularly those who are skeptical of climate change that over 95% of the raw station data has been accessible through the Global Historical Climatology Network for several years.” See above about data interdependence.
What’s the difference between the tweaking these guys were doing and the normal decisions scientists have to make when they are trying to get a handle on messy data?
That’s not clear. Surface temperature data collected during the last century and a half from thousands of land stations and ship measurements really do need to be adjusted to account for things like changes in thermometers, relocation of weather stations, urbanization, and changes in how ships measure sea water temperatures (e.g., using buckets vs. intake tubes).
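To make the idea of adjustment concrete, here is a minimal sketch, with entirely hypothetical numbers and a hypothetical `homogenize` function, of the simplest kind of correction described above: removing a known step change, such as the offset introduced when a weather station’s thermometer is relocated. This is an illustration of the general technique, not the CRU’s actual procedure.

```python
# Hypothetical illustration (not CRU's actual method): homogenizing a
# station record with a documented step change, e.g. a thermometer
# relocation in 1950 that added a spurious +0.5 C to later readings.

def homogenize(years, temps, break_year, offset):
    """Subtract a documented instrument offset from readings at or
    after the break year, leaving earlier readings untouched."""
    return [t - offset if y >= break_year else t
            for y, t in zip(years, temps)]

years = [1948, 1949, 1950, 1951, 1952]
raw   = [14.0, 14.1, 14.7, 14.6, 14.7]   # jump at 1950 from relocation
adjusted = [round(t, 1) for t in homogenize(years, raw,
                                            break_year=1950, offset=0.5)]
print(adjusted)  # [14.0, 14.1, 14.2, 14.1, 14.2]
```

The point of the transparency demands discussed above is that both the break year and the offset in such adjustments are judgment calls, so the raw series and the rationale for each correction need to be published alongside the adjusted series.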
University of Alabama climatologist John Christy thinks the teams that compile temperature data could all be making a similar set of errors, which would result in their finding comparable (and perhaps spurious) trends. For example, Christy found that if one focuses on daily maximum temperatures in East Africa, one does not find any upward trend in temperatures since 1900. The CRU, NCDC, and GISS records, by contrast, use average temperatures, which are more sensitive to land use changes than they are to factors that affect the deep atmosphere. If Christy’s analysis is correct, much of the global warming in East Africa reported by the three official data sets is exaggerated. Christy has found similar effects on temperature trend reporting for other regions of the world.
How does all this affect what we know about global warming?
The climate is warming. Nothing revealed by Climategate changes that fact. For example, satellite data from two different scientific groups, the University of Alabama at Huntsville and Remote Sensing Systems, indicate that during the last 30 years average global temperatures have been increasing at a rate of 0.13 degree Celsius and 0.153 degree Celsius per decade, respectively. The disputed CRU surface data indicate an increase of 0.15 degree Celsius per decade since the mid-1970s, which is within that range.
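For readers wondering where a figure like “0.15 degree Celsius per decade” comes from, it is an ordinary least-squares slope fitted to a temperature series and rescaled from per-year to per-decade. The sketch below uses a synthetic series constructed to warm at exactly that rate; the data are made up for illustration, not real satellite or CRU records.

```python
# How a "degrees C per decade" trend figure is computed: an ordinary
# least-squares slope fitted to a temperature series, scaled x10.
# The series below is synthetic, not a real satellite or CRU record.

def trend_per_decade(years, anomalies):
    """Least-squares slope of anomalies vs. years, in C per decade."""
    n = len(years)
    mean_y = sum(years) / n
    mean_a = sum(anomalies) / n
    num = sum((y - mean_y) * (a - mean_a)
              for y, a in zip(years, anomalies))
    den = sum((y - mean_y) ** 2 for y in years)
    return 10 * num / den  # slope is C/year; x10 gives C/decade

# Synthetic 30-year series warming at exactly 0.015 C/year.
years = list(range(1979, 2009))
anoms = [0.015 * (y - 1979) for y in years]
print(round(trend_per_decade(years, anoms), 3))  # 0.15
```

Real analyses differ in how they handle measurement noise, gaps, and spatial averaging, which is one reason the three groups’ decadal trends (0.13, 0.153, 0.15) do not match exactly.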
In addition, research has uncovered numerous trends that are consistent with a warming world. For example, sea levels are rising, mountain glaciers across the globe are receding, the behavior of plants and animals indicates that spring is coming earlier, recent satellite monitoring finds that ice sheets in Greenland and Antarctica are thinning, and during the last 30 years the area covered by Arctic sea ice has been declining by 4 percent per decade.
If the planet is definitely warming, why have temperatures remained more or less steady since 1998?
Although the 2000s are the warmest decade in the modern record, there has been little discernible warming since 1998. Why this has happened and what it means are matters of dispute. Researchers convinced of global climate change believe the pause is the result of natural variations, such as periodic shifts in ocean currents, that will dissipate during the next few years, followed by a resumption of warming, perhaps even at a faster pace. Skeptics argue that the computer models’ failure to predict this pause suggests they cannot reliably predict future warming.
How does this scandal affect what we know about the politicization of climate science?
The findings of climate science are being used to justify policies that aim to dramatically shift how the entire world produces and uses energy. With trillions of dollars at stake, it is no wonder that climate science is now politicized from top to bottom. Consequently, citizens and policy makers must be skeptical of the claims made by advocates on both sides of the global warming debate.
How are environmentalists spinning this?
Nobel Peace Prize winner Al Gore told Slate: “To paraphrase Shakespeare, it’s sound and fury signifying nothing. I haven’t read all the e-mails, but the most recent one is more than 10 years old. These private exchanges between these scientists do not in any way cause any question about the scientific consensus. But the noise machine built by the climate deniers often seizes on what they can blow out of proportion, so they’ve thought this is a bigger deal than it is.”
At the Copenhagen climate change conference in December, Jeremy Symons of the National Wildlife Federation seconded Gore’s theory that it’s all a smear campaign orchestrated by climate change “deniers” and funded by oil companies that are trying to derail emission negotiations by playing “gotcha politics with 10 year-old emails.” Never mind that some of the email messages are as recent as October 2009. For example, in a message dated October 12, 2009, National Center for Atmospheric Research climatologist Kevin Trenberth told CRU researchers, “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”
How are global warming skeptics spinning this?
Daily Telegraph columnist Christopher Booker described Climategate as “the worst scientific scandal of our generation.” Sen. James Inhofe (R-Okla.), a longtime skeptic of climate change, said “95 percent of the nails were in the coffin prior to this week. Now they are all in.”
What changes will occur as a result of Climategate?
One happy result of Climategate could be that the climate research community heeds Georgia Tech climatologist Judith Curry: “Climate data needs to be publicly available and well documented. This includes metadata that explains how the data were treated and manipulated, what assumptions were made in assembling the data sets, and what data was omitted and why.”
Greater transparency should not be limited to temperature data but should include all aspects of climate science, including the software used by computer climate models. Only through such transparency can other researchers determine whether the models are adequate forecasters of future climate change or merely prejudices made plausible.
But there is one important thing that more transparency won’t fix: the complications and uncertainties inherent in crafting policies to reduce global warming. Making climate data and climate model code available to independent researchers will lead to a deeper understanding of the scientific uncertainties surrounding man-made global warming, but this could render projections of future warming less clear, demonstrating that it is harder than previously believed to know what the best policies are to address the problem.