Thursday, June 8, 2017

The Flaw at the Heart of the Global Warming Myth

It is Thursday, 4:14 in the afternoon, and we are coming to you live from the Econo Lodge in Lanett, Alabama. It has been a long week, but things are rapidly coming to a close. Today I propose to show you what I believe will be the last thing you will ever need to see about the myth of global warming. Yes, I said myth. Over the past few posts I have shown you how weak the proof is that CO2 is driving temperatures up. I have shown you how the number of monitoring stations active in the world has changed. We have examined station data from Australia to Wake Island to Scotland and seen no evidence of catastrophic warming of any kind. If you are scratching your head and wondering why none of this adds up, I am going to show you. The answer is in the data and in the text files that go with it.
 
 
What does this tell you? It tells you, simply, that they are not reporting the actual temperatures; they are reporting how much the temperatures vary from the 1951-to-1980 average. In fact, according to the text, they report each month as a deviation from that month's 1951-to-1980 average. That is how they created a homogenized model of the global average through time.
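To make that concrete, here is a small Python sketch of the kind of calculation they describe. The station readings below are invented, not real data; the point is only to show how subtracting each calendar month's 1951-to-1980 mean erases the seasons and leaves bare deviations.

```python
import numpy as np

# Invented monthly readings for one station, 1951 through 2016
rng = np.random.default_rng(0)
years = np.arange(1951, 2017)
seasonal = 15 + 10 * np.sin(2 * np.pi * (np.arange(12) - 3) / 12)
temps = seasonal + rng.normal(0.0, 1.5, size=(years.size, 12))

# Baseline: the 1951-1980 mean for each calendar month
base_rows = (years >= 1951) & (years <= 1980)
baseline = temps[base_rows].mean(axis=0)

# Each month reported as its deviation from that month's baseline;
# the seasonal cycle cancels out, leaving only the variation
anomalies = temps - baseline
print(anomalies.round(2)[0])  # the first year's twelve anomaly values
```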
 
 
 
That is why the plot of their data from the year 1980 shows no seasons. No summer, no winter. Just variations from an average.
 
 
 
This is exactly where they got it wrong. Their model is badly flawed. Remember how I said the number of active stations in the world has grown exponentially during the time their charts show so much rise in temperature? That is the heart of the flaw. For their method to work, the annual global average cannot be allowed to change simply because new stations come online or old ones go offline. If you drop a station in Antarctica and add ten stations in the Bahamas, you have just caused the global average to go up. It doesn't matter if you move all the data up or down by adding or subtracting 50°; the difference between the land of frost and the land of sunshine remains.
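Here is the arithmetic in miniature. The two station values are invented just to show the point: no station warms, yet the averaged number jumps when the mix of reporting stations changes.

```python
# Invented readings for a cold station and a warm station, in degrees F
antarctica_f = 0.0
bahamas_f = 80.0

year_one = [antarctica_f, bahamas_f]   # both stations reporting
year_two = [bahamas_f] * 10            # cold station dropped, ten warm ones added

print(sum(year_one) / len(year_one))   # 40.0
print(sum(year_two) / len(year_two))   # 80.0, a rise with no warming anywhere
```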
 
The correct way to homogenize, or smooth, a large collection of records with varying local temperatures, different starting and ending dates, no way to determine the true local temperature back in time, and no way to determine the referential accuracy between instruments is to homogenize each individual station record to its own zero reference point. This way you are asking a simple question: how much temperature rise or fall did this particular location see over that amount of time? Then, when you look at any time frame of the record, you are seeing a true record of the average change.
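In code, zero-referencing a station to itself is one line. The three records below are invented, with different local climates but no underlying trend, so the combined series sits near zero no matter which of them you include.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three invented stations with different local climates and no trend
stations = [rng.normal(loc, 1.0, size=50) for loc in (35.0, 45.0, 55.0)]

# Zero-reference each record to its own long-term mean
own_anomalies = [s - s.mean() for s in stations]

# Each series now measures only change at its own location
print(round(float(np.mean(own_anomalies)), 3))  # ~0.0
```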
 
Let me prove this to you. I have created a spreadsheet simulation that mimics four temperature stations. Two of them generate random numbers between 40 and 50, one generates random numbers between 30 and 40, and the last generates random numbers between 50 and 60. I then processed the numbers using their method and, lastly, using my method.
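If you would rather run the experiment outside a spreadsheet, here is a rough Python version. It is a sketch, not my exact spreadsheet; I am treating "their method" as a plain average of whatever raw readings happen to be reporting, and "my method" as averaging records centered on their own means. All numbers are randomly generated, and the cold station drops out halfway through to show the effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # time steps

# Four simulated stations: two in the 40-50 band, one in 30-40, one in 50-60
data = np.array([
    rng.uniform(40, 50, n),
    rng.uniform(40, 50, n),
    rng.uniform(30, 40, n),
    rng.uniform(50, 60, n),
])

# Change the station mix halfway through: the cold station drops out
active = np.ones((4, n), dtype=bool)
active[2, n // 2:] = False

# "Their" method, as I am reading it: average the raw readings
# from whichever stations are reporting at each step
blue = np.array([data[active[:, t], t].mean() for t in range(n)])

# My method: center every record on its own mean before averaging
centered = data - data.mean(axis=1, keepdims=True)
orange = np.array([centered[active[:, t], t].mean() for t in range(n)])

print(blue[: n // 2].mean(), blue[n // 2:].mean())      # jumps when the mix changes
print(orange[: n // 2].mean(), orange[n // 2:].mean())  # stays near zero throughout
```

Run it a few times with different seeds. The orange series stays flat while the blue one steps up at the change point, every time.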
 
 
When I chart the data, regardless of how many times I generate a new set of random numbers, the result looks like this.
 

I created my simulation to generate numbers that varied within temperature bands but always maintained the exact theoretical averages of 45, 35, and 55. That would be the orange line above. It always, always reflects the reality contained in the data, regardless of how I play with it. I can add station data or take it away, but the actual trend of the data remains at or close to zero as long as I don't change the average within any station over time. Even then, it would take a fairly large change in one of the four created stations to make much of a difference on the chart.
 
The blue line, however, demonstrates that their model is highly sensitive to changes in the number of active stations and to the differences in local climate. This is their flaw, and it is a big one. Where the stations were added during that big change from 1977 to 2000 makes a huge impact. That would not be the case if they had constructed their core mathematical model correctly.
 
The evidence I have presented is, in my opinion, conclusive. Their chart of annual global averages shows a strong correlation between temperature and the number of stations online. Under examination, that correlation does not break down the way the CO2 correlation did. In fact, further examination supports it.
 
For their model to work, they would have had to maintain an exact ratio of weather stations to temperature zones, which they did not do. This is why their overall global data does not match any individual station record. If the basic mathematical model for how they engage with the data is this seriously flawed, then any conclusions they derive from it must also be flawed. They are, in fact, divorced from reality.
 
Honestly, it is hard to imagine any group of high-powered scientists not seeing this basic flaw. It is hard to imagine no one even testing the model against real-world results to make sure it was right.
 
Anyway, I believe my work here is finished.
 
 
