Wednesday, April 4, 2018

GHCN Part 9: A look at the Daily Minimums Debunks a Basic Assumption of Global Warming

In today's post on the Global Historical Climatology Network data I am going to concentrate on daily minimum temperatures for long term stations in North America and Europe. As I mentioned last time, the coverage is heavily weighted to the US.

In my last post I talked about the high amount of variance between stations. I conjectured most of those variances were due to localized site changes such as development. I believe that is a safe conjecture.

However, looking at daily minimums yields a different picture. The site-to-site variation is still there, but it is not as pronounced. The standard deviation of the annual station averages is only 0.46, a very reasonable value in comparison to my previous data set. The annual range between the highest and lowest deviations from station average is consistent on average.

The following chart is the difference by year between the highest and lowest temperature records for all stations in the study. While there is some variation over time, the key point is the lack of any clear trend on average. The magnitude of the within-year variation fluctuates, but that appears to be due to weather events within the US in the form of hot and cold waves. Because the data is heavily weighted to the US, it is sensitive to such events.

This is the average daily minimum temperature record for all stations as mentioned above. It is a reasonable approximation of individual station records.

The following graph may grab your interest if you are familiar with statistics, especially that brand of statistics used in Quality Engineering. If you are interested in the technique you can Google search for Statistical Process Control. This is a well established methodology which has been in use since the 1950's.

What you see here is my twist on the method. I have transformed the data shown in the preceding chart by converting it to standard deviations, with the overall average normalized to zero. This is nothing more than a graphical test for equality of the means. Confidence intervals are thus easily defined; for example, ±1.96 standard deviations form a 95% confidence interval. The second key indicator of a shift in the mean is the number of consecutive points above or below the zero line.
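For anyone who wants to try this themselves, here is a minimal Python sketch of the normalization and the two SPC-style signals just described: points outside ±1.96 standard deviations, and runs of consecutive points on one side of zero. The annual values below are invented for illustration, not actual GHCN data.

```python
from statistics import mean, stdev

def to_zscores(annual_means):
    """Convert annual averages to standard deviations from the overall mean."""
    m, s = mean(annual_means), stdev(annual_means)
    return [(x - m) / s for x in annual_means]

def out_of_control(z, limit=1.96):
    """Indices of points falling outside the 95% confidence interval."""
    return [i for i, v in enumerate(z) if abs(v) > limit]

def longest_run_one_side(z):
    """Length of the longest run of consecutive points on one side of zero."""
    best = run = prev_sign = 0
    for v in z:
        sign = 1 if v > 0 else -1
        run = run + 1 if sign == prev_sign else 1
        prev_sign = sign
        best = max(best, run)
    return best

annual = [10.2, 10.4, 10.1, 10.3, 11.5, 10.2, 10.0, 10.3]  # invented values
z = to_zscores(annual)
print(out_of_control(z))        # [4] -> the 11.5 year stands out
print(longest_run_one_side(z))  # 3
```

A long run below (or above) zero, even with no single point outside the limits, is the second signal that the mean has shifted.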

There is no question about the clear signal of a pattern here. There is also clear evidence of extreme events occurring in 1904, 1917, 1921, 1931, and 1998.

Thus far I see no reason to doubt the veracity or accuracy of these extreme events. They appear to be accurate. They are, however, out of the ordinary. The other interesting observation is how the year to year variability decreased going into the 1940's and then again in the 1960's. That variability increases coming into the 1970's. That is reflected in the chart of annual ranges above.

The conclusions I draw are as follows:
  • There is evidence of a regular pattern about 60 or so years in length.
  • There is no statistically significant difference between the 1900's and the 1960's. Using my normalized data, the 1960's is warmer by 0.07 standard deviations. This is insignificant.
  • There is no significant difference between the 1930's and either the 1990's or the 2000's. The 1930's are warmer than either by 0.08 standard deviations. This is insignificant.
  • If this pattern holds true I would expect to see a low point going into the 2020's. This does appear to be happening, but I would be very careful drawing conclusions from short term data. However, similarities do exist between 1931 to 1942 and 1998 to 2007.
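The decade comparisons in the bullets above amount to nothing more than differencing decade means of the normalized data. A Python sketch, with invented z-scores standing in for the real normalized record:

```python
from statistics import mean

def decade_mean(z_by_year, start):
    """Average of the normalized values for the decade beginning at start."""
    return mean(z_by_year[y] for y in range(start, start + 10))

# Invented z-scores: every year in each decade set to its decade average.
z_by_year = {y: -0.30 for y in range(1900, 1910)}
z_by_year.update({y: -0.23 for y in range(1960, 1970)})

diff = decade_mean(z_by_year, 1960) - decade_mean(z_by_year, 1900)
print(round(diff, 2))  # 0.07 standard deviations, matching the figure quoted above
```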

Finally, the last question is why the daily low temperatures would show such a different result. I will hazard a few guesses:

  • Daily lows must be largely unaffected by the site changes which cause higher daytime temperatures.
  • Structures and surfaces added to a site cause increased daytime temperature due to differences in absorbed energy and in heat capacity or specific heat. Lower heat capacity or specific heat means surfaces and objects reach a higher temperature for the same energy absorbed than those with higher heat capacities. That generally means they cool off more quickly as well, so the extra heat is not retained.
  • The opposite effect occurs where specific heat is relatively higher. The best example is water. Water in either liquid or gas form has a much higher specific heat than a normal atmospheric gas mixture, concrete, brick, shingles, and so forth. A body of water not only stays cooler during the day than the land around it, it also cools off much more slowly.

The lowest temperature of a typical day in most locations normally occurs within an hour of sunrise. To systematically affect the daily minimum temperature, objects, structures, or surfaces that retain or produce heat must be added to the site. That is possible; certainly adding a pond or lake next to a climate station could have such an effect.


This result, and the obviously different outcome from my prior study, supports the supposition that most instances of higher than typical temperature increases are due to site changes as described above.

I would further conclude the daily minimum temperatures provide a far more accurate picture of what is happening with respect to the anthropogenic global warming theory.

The lack of any evidence of a change in heat retained overnight, if correct, would debunk the concept that added CO2 is causing the surface of the Earth to warm due to downward IR. The logic behind this assertion is simple: if CO2 truly did act as a greenhouse or a blanket to retard cooling, that effect would be demonstrable in progressively higher overnight temperatures. There is no evidence that has occurred.

You could conjecture as to whether or not temperatures have increased during the overnight hours which precede the daily low point. This data does not address that conjecture.

Until the next time.......

Sunday, April 1, 2018

GHCN Post 8: North America and Europe or It Varies. A Lot.

This is my eighth post in this series. I would encourage anyone to start at the first post and go forward. However, this post will serve as a stand alone document. In this post I have taken my experience in exploring the history of Australia and applied it forward to cover North America and Europe.
The way to view this study is as a statistic-based survey of the data, meaning I have created a statistic to quantify, rank, and categorize the data. My statistic is very straightforward: it is simply the net change in temperature between the first and last 10 years of 1900 through 2011 for each station.
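For clarity, here is how that statistic might be computed in Python. The station series below is synthetic, with one annual mean per year from 1900 through 2011; real input would come from the GHCN station records.

```python
from statistics import mean

def net_change(annual_by_year, first=1900, last=2011, window=10):
    """Difference between the mean of the last 10 years and the first 10."""
    early = mean(annual_by_year[y] for y in range(first, first + window))
    late = mean(annual_by_year[y] for y in range(last - window + 1, last + 1))
    return late - early

# Synthetic station: flat, except the last decade runs 0.5 degrees warmer.
series = {y: 12.0 for y in range(1900, 2012)}
for y in range(2002, 2012):
    series[y] = 12.5

print(net_change(series))  # 0.5
```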
Below is a list of countries showing the lowest net change, the highest net change, and the number of stations per country.
This is an old fashioned histogram showing how the stations ranked in terms of overall temperature change. The data falls in a bell-shaped curve; the underlying distribution is very close to normal. This means analysis using normal techniques will yield very reasonable estimates. That is significant to a statistician, but you don't need any statistical knowledge to understand this.
The midline value is between -0.5° and 0.5°. The number of stations showing an overall drop in temperature is 40%; slightly less than 60% of the stations show an increase. The absolute change is statistically insignificant in 74.6% of the stations.

The following graph shows a normalized look at each category: No significant change, significant warming, and significant cooling. The graph is of rolling 10 year averages. Each plot has been normalized to show the 1900 - 1910 average as zero.
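The construction of those plots can be sketched in a few lines of Python; the 30-year series below is invented purely to show the mechanics of rolling and normalizing.

```python
from statistics import mean

def rolling_means(values, window=10):
    """Rolling window averages over a list of annual values."""
    return [mean(values[i:i + window]) for i in range(len(values) - window + 1)]

def normalize_to_first(rolled):
    """Shift the series so the first window's average sits at zero."""
    base = rolled[0]
    return [r - base for r in rolled]

# Invented 30-year series stepping up by half a degree per decade.
annual = [10.0] * 10 + [10.5] * 10 + [11.0] * 10
plot = normalize_to_first(rolling_means(annual))
print(plot[0], round(plot[-1], 2))  # 0.0 1.0
```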

You will note that, though the overall slope of each plot is significantly different, the shape of the plots is nearly identical. A random sampling of individual station data shows that condition holds for each station in the range. For example, Denmark's Greenland station shows the 1990 - 2000 average is the same as the 1930 - 1940 average.

Short term changes, such as the warming into the 1930's, hold true for the vast majority of stations. Other examples of this would be the 1940's temperature jump, the post 1950 temperature drop, and the late 1990's temperature jump.

Long term changes vary significantly.

There are a number of conclusions to be drawn from this analysis.
There is no statistically significant difference between North America and Europe. Those stations showing significant cooling are just 8% of the total. By that statistic, the expected number of the 17 European stations to show cooling would be just one. The number expected to show significant warming would be three. From a statistical sampling standpoint, 17 is just not a robust enough sample size to yield accurate estimates.
Short term changes which appear in the vast number of stations from Canada to the US to Europe are probably hemispheric changes. However, there is no indication these are global changes as there is no evidence of similar changes in Australia. Australia did not experience a 1930's warming trend for example. In fact, the overall pattern in Australia is obviously different from what we see here.
The evidence strongly suggests the large variation in overall temperature trends is due to either regional or local factors. As shown in the data table at the beginning, the extremes in variation all come from the US. As noted before, there just aren't enough samples from Europe to form accurate estimates for low percentage conditions.
Further evidence suggests most of the differences in overall temperature change are due to local factors. What we see from the US is extreme warming is generally limited to areas with high population growth or high levels of development. Large cities such as San Diego, Washington DC, and Phoenix follow the pattern of significant change. Airports also follow this pattern. However, cities like New Orleans, St Louis, El Paso, and Charleston follow the pattern of no significant change.
In conclusion, based upon the available long term temperature data, the case for global warming is very weak. There is evidence to suggest a hemispheric pattern exists. The evidence further suggests this is a cyclical pattern, evident in localized temperature peaks in the 1930's and the 1990's. However, changes in local site conditions due to human development appear to be the most important factor affecting overall temperature changes. Extreme warming trends are almost certainly due to human induced local changes.
What is unclear at this point is the significance of lower levels of human induced local changes. Assessing this would require examining individual sites to identify a significant sample of sites with no changes. Unfortunately, the US, Canada, and Europe are not nearly as obliging with that kind of information as the Aussies are. I have to admit the Australians have done an excellent job of making site information available. Having the actual coordinates of each testing station made that easy. I literally pulled them up on Google Maps and was able to survey the site and surrounding areas.
It appears this is about as far down the rabbit hole as I am going to get, at least, not without a lot of work which at this point doesn't appear warranted.
Until next time......

Saturday, March 31, 2018

GHCN Part 7: Australian Temperature Record 1985 - 2011

Before I begin I will briefly recap previous posts in the series. I am looking at the temperature records contained in the Global Historical Climatology Network. I have focused myself on Australia as a test case to develop my process. There are only 10 stations with long term records from Australia in the GHCN data set.

As I mentioned in a previous post, I found the data from the station located in Robe, Australia to be unusable because of serious site contamination: what once was probably a rural area is now a business district. It is surrounded by concrete, buildings, AC units, and other objects which could influence the measurements. All of this is easily detectable in the data.

After reviewing each site and reviewing the data I rejected 7 out of 10 sites as unusable. Below are pictures of representative sites I rejected.

The following are from the town of Hobart.

The station at Hobart is surrounded by buildings in what appears to be an industrial or business area.

These are pictures of the station located in Deniliquin.

This station is located almost in a courtyard at the edge of a large development. However, this is one of the sites which had an unusual cooling trend. One of the things which jumps out at me is how shadows from nearby trees and buildings come close to the station location.

The next set of pictures is from Observatory Hill in Sydney.

This station is in a partially enclosed space, close to a brick wall and several electrical boxes. There are tall buildings nearby. The entire area is surrounded by development.

Now for some pictures of the sites I accepted.

This is the Cape Otway Light House. As you can see, it is in a rural area. The actual location is more in the grassy area; the GPS coordinates were off a bit. There is nothing blocking the wind, and there are no structures close enough to affect the readings.

The next pictures are from the Richmond Post Office. As you can see the site is in an open field well away from any buildings.

The final set of pictures is from Boulia Airport. This station is located well away from any buildings and is certainly not sheltered from the wind. It is close to a runway. I would think any influence would be minimal.


These last three locations are as good as it gets. I see no reason to reject them. They are probably not in precisely the same location and there are probably other factors I am not seeing. However, I am also looking at the data. There are no abrupt changes of any kind. All three locations have extremely similar records. Speaking of which, let's get to that.
This is the average of those three stations with a running 5 year average trend line. All three stations follow this trend with an average maximum deviation of ± 0.32°. Most of that variance occurs from 1895 to 1900. From 1900 onward deviations from the average trend are minimal.
Below is a graph of the maximum and minimum deviations from average for all three stations used in my graph.

Do not mistake me here. This is not a reconstructed average of the record of temperature change in Australia. There simply isn't enough data to make a true reconstruction. This is the average record for three sites with minimal human induced localized changes. Because changes due to local factors have been minimized as much as possible this provides a better baseline as to changes in temperature due to changes affecting the entire region than a simple average of all sites.
Note I said minimal and not no human induced changes. The fact is I can't see and estimate the effects of all human changes to a particular area. I have just eliminated the obvious changes. Make no mistake, humans do create localized changes in temperature. It is widely recognized developed areas typically have higher average temperatures than surrounding, lesser or undeveloped areas.
Every location in Australia will follow this general trend, within factors of variability caused by localized effects, most of which are probably human induced. The strength of this assertion is that it holds for all seven of the site records not included in the average. All of them follow this general trend to some degree or for some period of time.
Until the next time......

Friday, March 30, 2018

GHCN Part 6: Robe, A Tale of Bad Data

In my last post I discussed the station at Low Head and issues with the quality of the data. I also made some conjectures as to why the data is questionable. In this post I will discuss the site located in Robe, Australia. This time I won't be guessing as to what is wrong with the data.

Robe is yet another site which appears to invalidate the relationship between population growth or population density and temperature rise.

As before, I have converted the data into standard deviations. It is pretty obvious the site measurements show some significant changes in measured temperature over time. A number of such shifts are evident. However, the major shift appears to happen between 1958 and 1971. As before, I will isolate those time periods.
I am not going to bother with a detailed statistical look at the data, for reasons which will soon be obvious. The important information is that in the pre-1958 time frame the temperature remained close to the average, within minor oscillations. Post-1971 there is no appreciable change of any significance; in other words, no increase or decrease.
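The period isolation I am describing amounts to splitting the annual series at the suspected shift years and taking the mean of each segment. A Python sketch, with fabricated years and values standing in for the Robe record:

```python
from statistics import mean

def segment_means(annual_by_year, breaks):
    """Mean of each segment, splitting the series at the given break years."""
    years = sorted(annual_by_year)
    edges = [years[0]] + breaks + [years[-1] + 1]
    return [mean(annual_by_year[y] for y in years if lo <= y < hi)
            for lo, hi in zip(edges, edges[1:])]

# Fabricated series: flat before 1958, flat after 1971, a step in between.
series = {y: 14.0 for y in range(1895, 1958)}
series.update({y: 14.6 for y in range(1958, 1972)})
series.update({y: 15.2 for y in range(1972, 2012)})

pre, mid, post = segment_means(series, [1958, 1972])
print(round(post - pre, 2))  # 1.2 -> the total apparent step change
```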
So, what happened? Let's look at the actual site itself.

 See the Stevenson screen holding the thermometers and other equipment? It is that white box visible behind the upper left corner of the gate.
And here is the view from the heavens courtesy of Google Earth. The little pointer misses it just a bit, but that ain't bad.
 So, let's go over what we are seeing here. First of all, the location changes since 1895 are probably pretty profound. I doubt they had Chinese restaurants and drive through banking back then. Or cars for that matter. There probably weren't any commercial AC units blowing hot air around back then either. I am betting development probably started after the war, but probably began really taking off in the late 50's. It probably reached a saturation point by the early 70's.
So, what do we have here? Simply put, a serious case of site contamination. While the population density is low, the development density is not. The factors of change are roads, parking lots, buildings, cars, AC units, and all the other things development brings. Remember my example from my last post of the black asphalt square? Same thing but with lots of hot exhaust added.
The bottom line is this station has been compromised to an extent where the data is just not usable. This situation exists at three certainly, and probably four, of the four sites in Australia I have looked at in detail.
Which means, in all likelihood, I am down to just six sites.
What do you guess a good, detailed look at sites in the US is going to show?
Until next time.

GHCN data Part 5: Australia and Data Problems

In my last blog post I discussed creating accurate models of the existing data. I ended up with five separate models which describe what I would have to term different scenarios, meaning the history of temperature changes varied greatly from location to location. Temperatures fell in some places and rose in others. In a typical location the overall change ended up being close to no change, even though there were lots of changes in between.

Obviously temperatures, or more accurately temperature measurements, changed over time for certain reasons. Those reasons presumably would fall into categories. Those categories could be broadly defined as local, regional, or global changes.

A global change, by definition, would be a change which affects every location in the world. However, it doesn't mean that change would be discernible in all locations. Global changes could be offset or augmented by local or regional changes. This is where the task becomes extremely difficult. It is impossible to know much less quantify all the local or regional changes which might distort a global change.

Because of this difficulty I decided to make a test case out of Australia. The advantages are the limited number of stations involved, the relative isolation of the region, and its location, unlike the US, which is bordered by the Pacific, the Atlantic, and the Gulf. So the task of filtering the wheat from the chaff, as it were, should be easier. Should being the important word here.

My first pass at this task was looking at population growth as a proxy for changes in land usage. The urban heat island effect, where developed cities are often several degrees warmer than the surrounding areas, is well known. So this is a logical place to start. There does appear to be a correlation between population growth and temperature.

In general, more people does equal higher temperatures. However, there are two stations where this pattern is broken rather severely: Low Head, which is located on the north coast of Tasmania, and Otway Lighthouse. Both locations are somewhat similar; the Low Head station is located at the Low Head Lighthouse. For now I am going to concentrate on Low Head.

Low Head breaks the population / temperature pattern because it has the highest overall temperature increase but without a corresponding population increase. It is listed in the 2016 census as home to 6,765 people with a population density of 331 per square kilometer.

This is the Low Head Temperature record.

Please note the last 10 temperature readings are estimated based upon the average of the previous 10 years. This is one of two instances where I imputed data. Now I will strip that data off and proceed to do some statistical analysis.

This is the same data converted to standard scores (standard deviations from the mean).

This is a graphical analysis generally used to see if there is evidence to suggest a shift in the average has occurred. This will become clearer in a moment, but it is obvious the average has changed over time.

This is where the advantages of this technique really come into play. This test shows an obvious change or changes in the site average. There is, however, more. It appears there are three distinctly different averages occurring at different times. Meaning what looks like a gradual rise in temperature from 1959 to 1974 is actually two discrete changes: 0.69° in 1959 and 0.82° in 1974. The inference is the site experienced a local change which affected the temperature measurements.

To state the hypothesis formally: the data shows three distinct periods of time with three distinctly different averages. These mean shifts occurred with no apparent transition period, and within each period the average is stable.

I will test this hypothesis, using the same technique, by splitting the graph into three parts and seeing if I can prove the null hypothesis. The null hypothesis is that the average shifted within these three time frames.
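The per-segment test can be sketched in Python: within each period, flag any year whose deviation from that segment's own mean exceeds 1.96 of the segment's standard deviations. The segment values below are invented.

```python
from statistics import mean, stdev

def outliers_within(segment, limit=1.96):
    """Indices of years falling outside the segment's own 95% interval."""
    m, s = mean(segment), stdev(segment)
    return [i for i, v in enumerate(segment) if abs(v - m) / s > limit]

segment = [11.0, 11.2, 10.9, 11.1, 11.0, 11.3, 10.8, 11.1]  # invented
print(outliers_within(segment))  # [] -> no evidence of a shift inside the segment
```

An empty result, as here, is what "failing to prove the null hypothesis" looks like for a segment: nothing inside the period stands out from the period's own mean.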

This test is somewhat inconclusive. There is evidence to suggest mean shifts did occur, but, with the exception of the first point, nothing falls outside of a 95% confidence interval for the mean. The null hypothesis has not been proven.
This test is also inconclusive. There is insufficient evidence to support the null hypothesis.

This test is also inconclusive. There is no evidence to support the null hypothesis.

So, what does this mean? That is an excellent question. Failing to prove the null hypothesis, my hypothesis stands: not proven, just not disproven. Meaning there is a reasonable probability I am correct and it is exactly as it looks. The inference is something changed in 1958 - 1959 and in 1973 - 1974 which permanently altered the temperature measurements. Beyond those singular changes, there is no other evidence of any change or trend.

At this point I want to go back to my designation of changes being local, regional, or global. Based upon the other nine graphs for Australian stations, these changes are not reflected in eight of the other stations. The 1959 shift does appear in the Otway Lighthouse record, but the 1974 shift does not. In fact, there appears to be no change at the Otway Lighthouse from 1959 onward. I will expound upon that in a future blog post.

Therefore, the assumption is these shifts reflect localized changes which impacted measurements on site. But what could cause those kinds of changes?

Low Head Lighthouse
The first obvious thing to look at is the means of measuring temperature and how that is done. As mentioned in the first post of this series, thermometers evolved from mercury and alcohol filled glass thermometers to digital thermometers which use a version of a thermocouple to measure temperature. That change really didn't get going until the nineties.
A Stevenson Screen
Weather monitoring stations use what is basically a white louvered box known as a Stevenson screen to hold measuring equipment. It is painted white to reflect sunlight and limit actual heating of the box, is enclosed on the sides by double walls, and holds the instruments at a standardized height from the ground. This design has essentially remained unchanged since the late 1800's.

There has been some speculation that changes in paint formulas in the 1970's to eliminate lead may have resulted in changes to readings inside Stevenson screens. I have no data on that, so I will file it away for future consideration.

According to records online, the Low Head Lighthouse has undergone numerous changes over the years. Buildings have been added, electricity sources changed, equipment moved in and moved out. Probably the most important thing would be relocation, replacement, or repairs to the Stevenson screen. This is a location exposed to the ocean, so it is hard to imagine the same screen being in operation for over 100 years. I have no doubt it has been painted, repaired, moved, and even replaced at some point. The location where it currently resides is on a rocky, grassless portion of the site. There are several bushes and large rocks very close to the screen. Other locations nearby are grass covered, bare sand, or rock. The surface color varies quite a bit. All of these could have a measurable effect on a spot just three meters off the ground.

One of the facts most people don't know is temperature isn't uniform even in a single location. More to the point, temperature as measured can vary quite a bit even within a few yards, even in your own yard. For example, you could have a small area in your yard covered in black asphalt surrounded by a grassy area. The asphalt can be quite hot while the grass is cool. A thermometer held above the asphalt on a calm day will read higher than it would when held in a nearby location over grass. Have you ever noticed when walking towards a beach how the sand can go from just warm to too hot to walk on to actually being cool? Ever wonder why wet sand tends to be cooler and dry sand tends to be hotter? If you guessed water you are correct. When it comes to sand and dirt wet is cooler and dry is warmer. The point here is location and maintaining a location is important.

However, these things are just speculation. Ultimately the point of this is something changed suddenly. There is no gradual, incremental change apparent.

This final chart for Low Head was created by making adjustments to the data to eliminate those changes which occurred about 1959 and 1974. I am showing this more for speculative purposes with respect to the concept of making adjustments to past data. How advisable is this? Is it really kosher? Even though backed by data, I am still making assumptions. I am assuming whatever site changes may have happened did not augment or mitigate other changes which may have occurred. Whatever changes did occur, if they occurred, it is too late to quantify them now except as an educated guess. Knowing exactly how to quantify such a change would require measuring in parallel for a period of time so the measurement change would be precisely defined. A more rigorous approach in situ would be to quantify that change and then take action to eliminate it.
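For the record, the mechanical part of such an adjustment is simple; it is the justification that is hard. A Python sketch, using the 0.69° and 0.82° offsets estimated above applied to invented temperature values:

```python
def remove_steps(annual_by_year, steps):
    """Subtract each estimated step offset from every year at or after its shift.

    steps: {year_of_shift: estimated_offset_in_degrees}. The offsets below are
    the estimates from the text; the temperature values are invented.
    """
    adjusted = {}
    for y, v in annual_by_year.items():
        offset = sum(d for shift, d in steps.items() if y >= shift)
        adjusted[y] = v - offset
    return adjusted

raw = {1958: 12.00, 1959: 12.69, 1973: 12.69, 1974: 13.51}
adj = remove_steps(raw, {1959: 0.69, 1974: 0.82})
print({y: round(v, 2) for y, v in adj.items()})  # every year back to 12.0
```

Note the baked-in assumption: that each shift was a one-time, constant offset with no other trend underneath, which is exactly what cannot be verified after the fact.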

The ramifications of this are pretty high within the context of my study. Therefore, I am somewhat undecided. This is the central question to this whole endeavor. The options I have are to discard the record entirely, modify it and include it, or leave it in and hope it is offset somewhere else by negative impacts.

I have no answer at this point.

More to follow....

Thursday, March 29, 2018

GHCN Part 4: The Models, Current State

In my previous post I talked about five distinct models of temperature data: three for the US, Canada, and Europe and two for Australia. Based upon the defining parameter of what I am calling the temperature delta, which is the absolute change in temperature from the beginning to the end of my time period, these models describe individual station data for the appropriate region to within ±1° for 75% of stations and ±1.5° for the remainder.

Below are the five models.


Suffice it to say, at this point, there are obviously great differences in what happened between locations over the past 100 or so years. There are obviously significant local and regional factors as yet undefined. This, in itself, is a significant finding in the context of the Global Warming or Climate Change debate.

There are also indications of what may be global factors. If you look carefully at the first four graphs, paying attention to the red five year average trend lines, there is a distinctive V shaped pattern centered on the year 1996. This appears to be a strong signal as it has manifested itself through four distinctly different trends. That would indicate the existence of a wide ranging event of some significance such as a major volcanic eruption.

I anticipate moving forward from here is going to become progressively more time consuming. My first inclination is to begin by looking at changes in population based upon the Australian models. However, there are many ways in which the face of the land may change over time. Man and nature both never stand still.

Until next time, folks.

GHCN Part 3: Creating a Temperature Model

This is part three of a series on Global Warming using records from the Global Historical Climatology Network. In my first post of this series I discussed my data source, my means of selecting which records to use, and the time frame of study. I used only station records which were complete for the time frame of study. I had determined not to impute or estimate any data.

The previous post of this series described the results from that data in terms of the number of days above 85°F, 90°F, 95°F, and 100°F, as well as the number of days where the daily highs did not exceed 32°F, 20°F, and 10°F. As we saw, for the locations involved, the number of warmer days has consistently fluctuated up and down at apparently regular intervals, but has generally decreased since the 1930's. The number of colder days has consistently fluctuated up and down in a repeated pattern. For the number of days at or below freezing, the data indicate the years 1900 to 1930 and 1980 to 2003 are nearly identical. There is evidence to suggest that pattern began to repeat again in 2005 to 2010.
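Computing those day counts is trivial once the daily highs are in hand. A Python sketch with a handful of invented values:

```python
def days_above(daily_highs, threshold):
    """Count days whose high exceeded the threshold."""
    return sum(1 for t in daily_highs if t > threshold)

def days_at_or_below(daily_highs, threshold):
    """Count days whose high never exceeded the threshold."""
    return sum(1 for t in daily_highs if t <= threshold)

highs = [88, 91, 96, 101, 75, 30, 28, 15, 33]  # degrees F, invented sample
print(days_above(highs, 90))        # 3
print(days_at_or_below(highs, 32))  # 3
```

With real data, `highs` would be a full year of daily maximums for one station, and the counts would be tallied per year to produce the time series discussed above.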

As I explained in the first post of the series there are certain limitations to this study which bear repeating here. The long term data from the GHCN daily max min tables is very limited. Most of the coverage is in the lower 48 states of the US. Canada, Europe, Central Asia, and Australia are all represented but to significantly lesser degrees. Africa, Central America, and South America are not covered.

With those limitations in mind, let me describe in general terms the process I used to develop the data into useful information.

Goal defined

When performing an analysis for this type of data the goal is to develop a model which accurately describes the data and then determine a means of applying that model to other, similar situations which fit the definitions of the model. This is a process which involves creating a model, testing the model against known data, evaluating the results, and adjusting the model accordingly. An accurate model will be able to perform accurate predictions for known data within acceptable levels of data variation. A model which cannot make accurate predictions against known data is flawed and therefore would not be useful.

Creating the Model

My first pass approximation was the simplest model available: a raw average of all the data. I chose to test this model by comparison to selected samples of individual station data. Without going into details, let me just say this initial model failed. For example, it failed to describe the general time series trend of individual stations, that is, where they started off and where they ended up. That failure informed my method for refining the model.
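A two-station toy example shows why a single raw average can fail this kind of test: opposing trends cancel, so the average describes neither station. The numbers below are invented for illustration.

```python
# Two stations with opposite trends; their raw average is flat,
# so it reproduces neither station's start-to-finish behavior.
station_a = [10.0, 10.5, 11.0, 11.5]   # warming over the record
station_b = [12.0, 11.5, 11.0, 10.5]   # cooling over the record

raw_avg = [(a + b) / 2 for a, b in zip(station_a, station_b)]
print(raw_avg)  # [11.0, 11.0, 11.0, 11.0]
```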

I refined my data model to address that failure by creating a new data set consisting of beginning and ending temperatures for each station. From that I calculated a temperature change delta for each station and performed statistical analysis on the resulting set of deltas.
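The delta calculation itself is simple. A sketch, with made-up annual averages standing in for the real station records:

```python
# Start-to-finish temperature delta per station: ending annual average
# minus beginning annual average. Station names and values are invented.
records = {
    "station_1": [11.2, 11.4, 11.9],
    "station_2": [14.0, 13.8, 13.5],
}
deltas = {name: round(temps[-1] - temps[0], 2) for name, temps in records.items()}
print(deltas)  # {'station_1': 0.7, 'station_2': -0.5}
```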

I found quantifying stations by the overall start to finish temperature change, the temperature delta, produced a near normal data set. Using this as a starting point I refined my model into three separate models. One model covers the -1° to 1° range, which contains 75% of all stations. The second model covers the 1° to 4° range, which contains 17% of the stations. The last model covers the -2.5° to -1° range, which contains the remaining 8% of the stations.
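Selecting a model by delta then amounts to a simple range test. The ranges come from the description above; how the boundary values are assigned to adjacent ranges is my assumption, since the post does not specify it.

```python
# Assign a station to one of the three models by its temperature delta.
# Boundary handling (<= vs <) is an assumption, not the author's rule.
def select_model(delta):
    if -1.0 <= delta <= 1.0:
        return "model_A"   # ~75% of stations
    if 1.0 < delta <= 4.0:
        return "model_B"   # ~17% of stations
    if -2.5 <= delta < -1.0:
        return "model_C"   # ~8% of stations
    return None            # outside all defined ranges

print(select_model(0.3), select_model(2.0), select_model(-1.8))
```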

As before, I tested the data models against actual station data with model selection based upon the temperature delta parameter. Again, the models failed to accurately reflect all the data. Quantifying that failure was easy, as all the failures occurred in Australia. Separating Australia as a separate data set and performing the same analysis as above, I created two additional models. The count is now three models for the northern hemisphere and two for Australia, a total of five models.
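With Australia split out, selection uses two parameters: region first, then delta. This is a sketch of the idea only; the post does not give the delta ranges for the two Australian models, so that branch is a placeholder.

```python
# Two-parameter model selection: region, then temperature delta.
# The Australian split on delta is a placeholder, not the actual rule.
def select_model(region, delta):
    if region == "Australia":
        return "aus_model_1" if delta > 0 else "aus_model_2"
    if -1.0 <= delta <= 1.0:
        return "model_A"
    return "model_B" if delta > 1.0 else "model_C"

print(select_model("Australia", 2.1), select_model("US", 0.2))
```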

These five models, based upon two selection parameters, are accurate to within ±1° for over 75% of the individual stations and within ±1.5° for the remainder. This is an acceptable degree of error in my opinion.

Refining and Utilizing the Model

One of the primary reasons for creating a model, beyond describing what has gone before, is to act as a predictive tool. Having defined models which describe what is known to have happened, it now becomes necessary to define and quantify the factors which affected what happened. For that purpose it is helpful to have five different models and a wide range of results. These five models can be further reduced into three essential models based upon the overall results: temperatures rose, temperatures fell, or temperatures did not change appreciably. Those are distinctly different outcomes.

Each station in this data set has been affected by certain factors. I will define those factors as local, regional, or global. Local factors contribute to unique site results. Regional factors contribute to site results over a certain area. The scope of regional factors may vary quite a bit. Global factors would contribute to outcomes all over the world.

The process going forward is defining and quantifying local factors, regional factors, and then global factors. In the process of doing so the various models become refined to include those parameters. Ideally, they will be combined into one or two models. The process of model redefinition would as always include testing the models for descriptive and predictive accuracy.

The Example of Australia

Australia is an interesting case study for this method. There are only 10 stations with usable data, but this data extends back to 1895. Australia not only has a distinctive regional difference; there are distinctive local differences as well. Australia is also sparsely populated over most of its area. One factor which became apparent immediately was population density. Given the time frame involved, it is reasonable to assume the initial population density is essentially zero. Predicting which model applies to a site by current population density proved 100% accurate. There are sites which are geographically close but far apart in population. The temperature increase over the period 1950 to 2005 between such proximate sites was as much as 3.5° higher for the more heavily populated site.
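The population-density check can be expressed as a one-line classifier. The threshold and the site figures below are illustrative; the post does not give the actual densities or station names.

```python
# Predict which Australian model applies to a site from its current
# population density. Threshold and site data are invented for illustration.
def predict_model(pop_density_per_km2, threshold=1.0):
    return "populated_model" if pop_density_per_km2 > threshold else "remote_model"

sites = {"remote_site": 0.1, "town_site": 25.0}
for name, density in sites.items():
    print(name, predict_model(density))
```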

When you consider that the generally accepted figure for temperature rise over the past century due to CO2 is 1.5°, a 3.5° temperature increase differential over 55 years due to a population increase differential would be significant. Accepting those numbers as reasonably accurate, assuming 1.5° as the result of global factors, and assuming linearity, the inference is that the localized factor of population growth influenced local temperatures at roughly four times the rate attributed to global factors: about 0.064° per year versus 0.015° per year.
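Making the rate comparison explicit, under the same linearity assumption:

```python
# Warming rates implied by the figures in the text, assuming linearity.
local_rate = 3.5 / 55      # deg per year from the population differential
global_rate = 1.5 / 100    # deg per year from the accepted CO2 figure
print(round(local_rate, 3), round(global_rate, 3), round(local_rate / global_rate, 1))
# 0.064 0.015 4.2
```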

Moving Forward

Understand, this process is far from complete. I am presenting this work essentially in real time as it progresses. I am letting the data lead me, not the other way around, and I may end up somewhere totally unexpected. Even so, I think these posts are worthwhile, and I would certainly welcome constructive input.


Next post: The five models.