Saturday, March 24, 2018

Climate Change According to the Global Historic Climate Network: Part 1

Before I get going on part 1 of today's post, I would like to cover a bit of background information. The data I am working with comes from the Global Historic Climate Network (GHCN), which I pulled from a link on the Berkeley Earth website. The GHCN files in this data set run from the 1700s up to 2011; however, I am focusing on the years 1900 to 2011. The data set consists primarily of two files: one contains daily maximum temperatures while the other contains daily minimum temperatures.

The vast majority of these daily measurements were made with a device that has been around for over 200 years, known as a Six's Thermometer. These are actually very accurate and, in many cases, very precise devices. Six's Thermometers little different from those used back in the late 1800s are still in use today for precise and accurate temperature measurements. A Six's Thermometer will faithfully record the maximum and minimum temperatures encountered between settings. Typically, a station attendant would record the max and min temperatures from the previous 24 hours and reset the thermometer every day at the same time.

The era of precision thermometry began in the early 1700s. Daniel Gabriel Fahrenheit was the German physicist who invented the alcohol thermometer in 1709 and the mercury thermometer in 1714. He also established the Fahrenheit scale, which of course is still in use.

The point of this background information is this: don't make the mistake of assuming old data equates to inaccurate data. The basis of the science has not changed. Much of the equipment we use today has not changed substantially. The methods for establishing reference temperatures, such as the freezing point of water, have not changed appreciably. Every system for measuring temperature owes its accuracy to the same basic principles. In that regard, nothing has really changed.

Below is a close up of a Six's Thermometer.

Below is a Six's Thermometer circa 1897 from the Museo Galileo.
Now it is time to begin to dive into the GHCN data. The data set is massive. During the period of 1900 through 2011 alone, the data set contains measurements from 27,721 stations. I condensed the individual daily maximum measurements into 927,961 individual annual averages. Really gives you an appreciation for modern computers, doesn't it?
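The condensing step described above can be sketched in a few lines. This is a minimal illustration, not my actual processing code: the station IDs and pre-parsed tuples below are hypothetical stand-ins, since the real GHCN daily files use a fixed-width record format that would need its own parser.

```python
from collections import defaultdict

# Hypothetical sample of daily maximum readings: (station_id, date, tmax in degrees C).
# This sketch assumes the raw records have already been parsed into tuples.
readings = [
    ("USW00013874", "1950-07-01", 33.2),
    ("USW00013874", "1950-07-02", 34.1),
    ("USW00013874", "1951-07-01", 32.8),
    ("USC00500275", "1950-07-01", 18.4),
]

def annual_averages(rows):
    """Condense daily maxima into one average per (station, year)."""
    sums = defaultdict(lambda: [0.0, 0])  # (station, year) -> [running total, count]
    for station, date, tmax in rows:
        year = int(date[:4])
        acc = sums[(station, year)]
        acc[0] += tmax
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

averages = annual_averages(readings)
```

Run over the full data set, the same grouping idea turns millions of daily readings into one annual average per station per year.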

Given the sheer size of the data set, the next question is what to do with it. Is it all good, usable data? The answer, unfortunately, is no. Most of the data is not usable for my look at the 1900 to 2011 time frame because comparatively few stations were in operation for the entire span. It clearly makes no sense to compare the temperatures of Atlanta, Georgia in the 1930s to the average temperatures of Atlanta and Anchorage, Alaska in the 2000s.

I can attempt to use all the data, and I have done so in a number of different ways. I could emulate NASA and NOAA and try to fill in the blanks using statistical models. There is a major problem with that: as I show below, 70% of the data would have to be estimated. In other words, as massive as the data set is, it contains only 30% of the data necessary to examine the temperature trends associated with the locations of all 27,721 stations. Yet that is exactly what Berkeley Earth, NASA, and NOAA are doing. Much of what they present is based upon reconstructed data.
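The 30% / 70% split follows directly from the counts already given: 27,721 stations over the 112 years from 1900 through 2011 would ideally yield one annual average per station per year, and only 927,961 of those station-years are actually present.

```python
# Back-of-the-envelope coverage check using the figures quoted in the post.
stations = 27_721
years = 2011 - 1900 + 1          # 112 years in the study window
possible = stations * years      # station-year slots if every record were complete
present = 927_961                # annual averages actually computable from the data

coverage = present / possible    # fraction of station-years with real data
missing = 1 - coverage           # fraction that would have to be estimated
```

The coverage works out to roughly 0.30, which is where the "70% would have to be estimated" figure comes from.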

 
For my study I am using the 493 stations with complete records for every year in the study period. This is actually a very high number when compared to other data sets; as far as I have seen, this is by far the most complete individual data set out there. It is also an entirely sufficient amount of data for trend analysis. The difference between what I am doing and what NASA, NOAA, and Berkeley Earth are doing is that I am making no attempt to construct a global average temperature out of incomplete data. I am looking at trends based upon continuous records for a very large number of locations and assuming those trends will be indicative of trends in most similar places.
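The completeness filter is simple in principle: keep only stations that report in every year of the window. A minimal sketch, assuming a mapping from station ID to the set of years that station reported (the station names here are made up):

```python
# Years required for a station to count as "complete" (1900 through 2011).
REQUIRED_YEARS = set(range(1900, 2012))

# Hypothetical station records: id -> set of years with data.
station_years = {
    "STATION_A": set(range(1900, 2012)),           # complete record
    "STATION_B": set(range(1900, 2012)) - {1943},  # one missing year
    "STATION_C": set(range(1960, 2012)),           # started reporting late
}

# Keep only stations whose reported years cover the whole window.
complete = [sid for sid, yrs in station_years.items() if REQUIRED_YEARS <= yrs]
```

Applied to the real data, a filter like this is what reduces the 27,721 stations down to the 493 with unbroken 1900-2011 records.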
 
What I am not doing is assuming those trends will match what has happened in every location on the planet. I do not expect that what has occurred in San Francisco, Beijing, Anchorage, and at the South Pole would all be the same. I am quite sure they would not be, as conditions on the ground, development, and a host of other factors are not the same.
 
It is also important to understand which inhabited areas of the planet are represented by this data, and to what degree. Most of the data comes from North America and Europe. Africa and South America are mostly unrepresented, and data from Asia, the Pacific, and Australia is sparse. That is really unavoidable because we just do not have enough long-term data from those areas. We have no idea what average temperatures were in modern-day Sudan back in the 1930s.
 
Is it not obvious that it is impossible to determine trends in areas for which there is no data? No data is just no data. Any method of filling in the blanks would just be a fancy way of making a guess.
 
I am not guessing, and at least I am being up front about it; I am telling you these various governmental agencies are not. Besides, if climate change - i.e., global warming caused by CO2 emissions - is truly a global problem, it should show up in any subset of the data I care to use, so long as I faithfully and truthfully show you what the data really says. That is something I promise you I am doing.
 
Next post: The Results.
 
Stay tuned.
