The Berkeley Earth Surface Temperature project was put together in the wake of the Climategate blowup in 2009. Led by University of California, Berkeley physicist Richard Muller and advised by Georgia Tech climatologist Judith Curry, the BEST project is examining various surface temperature records with the aim of resolving ...
...current criticism of the former temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions. Our results will include not only our best estimate for the global temperature change, but estimates of the uncertainties in the record.
The BEST project is using over 39,000 unique stations, which is more than five times the 7,280 stations found in the Global Historical Climatology Network Monthly data set (GHCN-M) that has served as the focus of many climate studies.
In Congressional testimony last week, Muller released some initial findings:
Prior groups (NOAA, NASA, HadCRU) selected for their analysis 12% to 22% of the roughly 39,000 available stations. (The number of stations they used varied from 4,500 to a maximum of 8,500.)
They believe their station selection was unbiased. Outside groups have questioned that, and claimed that the selection picked records with large temperature increases. Such bias could be inadvertent, for example, a result of choosing long continuous records. (A long record might mean a station that was once on the outskirts and is now within a city.)
To avoid such station selection bias, Berkeley Earth has developed techniques to work with all the available stations.
This requires a technique that can include short and discontinuous records.
In an initial test, Berkeley Earth chose stations randomly from the complete set of 39,028 stations. Such a selection is free of station selection bias.
In our preliminary analysis of these stations, we found a warming trend that is shown in the figure. It is very similar to that reported by the prior groups: a rise of about 0.7 degrees C since 1957. (Please keep in mind that the Berkeley Earth curve, in black, does not include adjustments designed to eliminate systematic bias.)
The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases. When they do, it is possible that the corrections could bring our current agreement into disagreement.
Why such close agreement between our uncorrected data and their adjusted data? One possibility is that the systematic corrections applied by the other groups are small. We don’t yet know.
The main value of our preliminary result is that it demonstrates the Berkeley Earth ability to use all records, including those that are short or fragmented. When we apply our approach to the complete data collection, we will largely eliminate the station selection bias, and significantly reduce statistical uncertainties.
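The core difficulty Muller describes above is that a short or fragmentary record carries no usable absolute baseline of its own, so the analysis must separate each station's baseline from the shared year-to-year signal. The sketch below (entirely synthetic data and a deliberately simplified method, not BEST's actual algorithm) illustrates one way this can work: alternately solving for per-station offsets and a common annual temperature signal, so that even records covering only part of the period contribute to the trend.

```python
import random

# Toy sketch, NOT BEST's method: jointly fit per-station baseline offsets
# and a shared annual signal, so short/fragmented records still contribute.
# All data below are synthetic.

random.seed(0)
YEARS = range(1957, 2011)
TRUE_TREND = 0.7 / (2010 - 1957)  # the ~0.7 C rise cited above, per year

def make_station(offset, start, end):
    """A fragmentary record with its own arbitrary baseline offset."""
    return {yr: offset + TRUE_TREND * (yr - 1957) + random.gauss(0, 0.3)
            for yr in YEARS if start <= yr <= end}

stations = [make_station(random.uniform(-10.0, 25.0),
                         random.randint(1957, 1990),
                         random.randint(1991, 2010))
            for _ in range(200)]

def fit_annual_signal(stations, iters=25):
    """Alternating least squares for T[station, year] ~ offset + theta[year]."""
    offsets = [sum(rec.values()) / len(rec) for rec in stations]
    theta = {}
    for _ in range(iters):
        sums, counts = {}, {}
        for off, rec in zip(offsets, stations):
            for yr, t in rec.items():
                sums[yr] = sums.get(yr, 0.0) + (t - off)
                counts[yr] = counts.get(yr, 0) + 1
        theta = {yr: sums[yr] / counts[yr] for yr in sums}
        offsets = [sum(t - theta[yr] for yr, t in rec.items()) / len(rec)
                   for rec in stations]
    return theta

theta = fit_annual_signal(stations)

# Ordinary least-squares slope of the recovered annual signal.
xs = sorted(theta)
n = len(xs)
xbar = sum(xs) / n
ybar = sum(theta[x] for x in xs) / n
slope = (sum((x - xbar) * (theta[x] - ybar) for x in xs)
         / sum((x - xbar) ** 2 for x in xs))
rise = slope * (2010 - 1957)
print(round(rise, 2))  # should land near the true 0.7 C total rise
```

Because each station's offset is estimated jointly with the annual signal rather than from a fixed common baseline period, a record that spans only a decade or two still pins down the years it covers without biasing the overall trend.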
One oft-expressed concern is that climatologists have failed to adequately account for confounders in their temperature data, such as urban heat island effects, station placement, equipment changes, changes in the time of observation, and so forth. Perhaps such changes have led researchers to find a spurious trend toward higher average global temperatures. However, testifying about the BEST project's preliminary (and not yet peer-reviewed) analysis of station quality, Muller reported:
Many temperature stations in the U.S. are located near buildings, in parking lots, or close to heat sources. Anthony Watts and his team have shown that most of the current stations in the US Historical Climatology Network would be ranked “poor” by NOAA’s own standards, with error uncertainties up to 5 degrees C.
Did such poor station quality exaggerate the estimates of global warming? We’ve studied this issue, and our preliminary answer is no.
The Berkeley Earth analysis shows that over the past 50 years the poor stations in the U.S. network do not show greater warming than do the good stations.
Thus, although poor station quality might affect absolute temperature, it does not appear to affect trends, and for global warming estimates, the trend is what is important.
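Muller's distinction between absolute temperature and trend can be made concrete with a toy example (all numbers made up): a roughly constant siting bias, say a sensor near a parking lot reading about 2 degrees C warm, shifts the whole record upward but leaves the fitted slope essentially unchanged.

```python
import random

# Illustrative sketch with synthetic data: a constant siting bias shifts
# absolute temperatures but not the fitted warming trend.

random.seed(1)
years = list(range(1961, 2011))
TREND = 0.013  # degrees C per year, shared by all stations

def station_series(bias):
    """Shared warming trend + constant siting bias + measurement noise."""
    return [TREND * (y - years[0]) + bias + random.gauss(0, 0.2)
            for y in years]

good = [station_series(0.0) for _ in range(50)]
poor = [station_series(2.0) for _ in range(50)]  # constant 2 C warm bias

def mean_series(group):
    return [sum(s[i] for s in group) / len(group) for i in range(len(years))]

def ols_slope(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

g, p = mean_series(good), mean_series(poor)
slope_good, slope_poor = ols_slope(years, g), ols_slope(years, p)
offset = sum(p) / len(p) - sum(g) / len(g)
print(round(slope_good, 4), round(slope_poor, 4), round(offset, 2))
```

The caveat, of course, is the word "constant": a siting bias that grows over time (a parking lot paved mid-record, a city encroaching) would alias into the trend, which is why the empirical good-versus-poor comparison BEST reports matters rather than being assumed.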
Muller's testimony is available for download.
On the other hand, it must be noted that BEST's efforts have not met with universal approbation. Skeptics and alarmists alike (to use the terms each side applies to the other) have taken Muller and BEST to task for releasing conclusions in advance of peer review. Particularly at issue will be the validity of whatever statistical adjustments BEST applies to the temperature datasets. It will be very interesting to see what happens as the BEST analyses proceed through peer review.