Saturday, March 31, 2018

GHCN Part 7: Australian Temperature Record 1895 - 2011

Before I begin I will briefly recap previous posts in the series. I am looking at the temperature records contained in the Global Historical Climatology Network. I have focused on Australia as a test case to develop my process. There are only 10 stations with long-term records from Australia in the GHCN data set.

As I mentioned in a previous post, I found the data from the station located in Robe, Australia to be unusable because of serious site contamination. What was once probably a rural area is now a business district, surrounded by concrete, buildings, AC units, and other objects which could influence the measurements. All of this is easily detectable in the data.

After reviewing each site and its data, I rejected 7 of the 10 sites as unusable. Below are pictures of representative sites I rejected.

The following are from the town of Hobart.



The station at Hobart is surrounded by buildings in what appears to be an industrial or business area.

These are pictures of the station located in Deniliquin.


This station is located almost in a courtyard at the edge of a large development. However, this is one of the sites which had an unusual cooling trend. One of the things that jumps out at me is how shadows from nearby trees and buildings come close to the station location.

The next set of pictures is from Observatory Hill in Sydney.


This station is in a partially enclosed space, close to a brick wall and several electrical boxes. There are tall buildings nearby, and the entire area is surrounded by development.

Now for some pictures of the sites I accepted.

This is the Cape Otway Lighthouse. As you can see, it is a rural area. The actual location is more in the grassy area; the GPS coordinates were off a bit. There is nothing blocking the wind, and there are no structures close enough to affect the readings.

The next pictures are from the Richmond Post Office. As you can see the site is in an open field well away from any buildings.



The final set of pictures is from Boulia Airport. This station is located well away from any buildings and is certainly not sheltered from the wind. It is close to a runway. I would think any influence would be minimal.

 

These last three locations are as good as it gets. I see no reason to reject them. They are probably not in precisely the same location and there are probably other factors I am not seeing. However, I am also looking at the data. There are no abrupt changes of any kind. All three locations have extremely similar records. Speaking of which, let's get to that.
 
 
This is the average of those three stations with a running 5 year average trend line. All three stations follow this trend with an average maximum deviation of ± 0.32°. Most of that variance occurs from 1895 to 1900. From 1900 onward deviations from the average trend are minimal.
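For anyone who wants to reproduce this step, here is a minimal sketch of how the three-station average, the running 5-year trend line, and the per-station deviations could be computed with pandas. The file name and column names are hypothetical placeholders, not the actual data layout.

```python
# Sketch: average the three accepted station records, overlay a 5-year running
# mean, and measure each station's deviation from that average.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("australia_accepted_stations.csv")   # columns: year, cape_otway, richmond_po, boulia_ap
stations = ["cape_otway", "richmond_po", "boulia_ap"]

df["avg"] = df[stations].mean(axis=1)                              # 3-station average
df["trend_5yr"] = df["avg"].rolling(window=5, center=True).mean()  # running 5-year trend

# Yearly maximum and minimum deviations of the individual stations from the average
dev = df[stations].sub(df["avg"], axis=0)
df["max_dev"] = dev.max(axis=1)
df["min_dev"] = dev.min(axis=1)
print("Largest absolute deviation from the average: %.2f deg" % dev.abs().max().max())
```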
  
Below is a graph of the maximum and minimum deviations from average for all three stations used in my graph.

 
Do not mistake me here. This is not a reconstructed average of Australia's temperature record; there simply isn't enough data to make a true reconstruction. This is the average record for three sites with minimal human-induced localized changes. Because local factors have been minimized as much as possible, it provides a better baseline for region-wide temperature change than a simple average of all sites.
 
Note I said minimal, not no, human-induced changes. The fact is I can't see or estimate the effects of every human change to a particular area; I have simply eliminated the obvious ones. Make no mistake, humans do create localized changes in temperature. It is widely recognized that developed areas typically have higher average temperatures than the surrounding, less developed or undeveloped areas.
 
Every location in Australia will follow this general trend, within the variability caused by localized effects, most of which are probably human-induced. The strength of this assertion is that it holds for all seven of the site records not included in the average: all of them follow this general trend to some degree, or for some period of time.
 
Until the next time......

Friday, March 30, 2018

GHCN Part 6: Robe, A Tale of Bad Data

In my last post I discussed the station at Low Head and issues with the quality of the data. I also made some conjectures as to why the data is questionable. In this post I will discuss the site located in Robe, Australia. This time I won't be guessing as to what is wrong with the data.

Robe is yet another site which appears to invalidate the relationship between population growth or population density and temperature rise.

 
As before, I have converted the data into standard deviations. It is pretty obvious the site measurements show significant changes in measured temperature over time; a number of such shifts are evident. However, the major shift appears to happen between 1958 and 1971. As before, I will isolate those time periods.
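A minimal sketch of this kind of standardization, assuming the annual series sits in a simple year/temperature file; the file and column names are placeholders.

```python
# Sketch: express the Robe annual series in standard deviations (z-scores)
# relative to the full-period mean, then isolate the periods on either side of
# the apparent shift. File and column names are assumptions.
import pandas as pd

robe = pd.read_csv("robe_annual_max.csv")            # columns: year, temp
robe["z"] = (robe["temp"] - robe["temp"].mean()) / robe["temp"].std()

pre_1958 = robe[robe["year"] <= 1958]
post_1971 = robe[robe["year"] >= 1971]
print(pre_1958["z"].mean(), post_1971["z"].mean())   # compare period means
```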
 
 
 
I am not going to bother with a detailed statistical look at the data, for reasons which will soon be obvious. The important information here is that in the pre-1958 time frame the temperature remained close to the average, with only minor oscillations. Post-1971 there is again no appreciable change of any significance. In other words, no increase or decrease.
 
So, what happened? Let's look at the actual site itself.
 


See the Stevenson screen holding the thermometers and other equipment? It is the white box visible behind the upper left corner of the gate.
 
And here is the view from the heavens courtesy of Google Earth. The little pointer misses it just a bit, but that ain't bad.
 
So, let's go over what we are seeing here. First of all, the location changes since 1895 are probably pretty profound. I doubt they had Chinese restaurants and drive-through banking back then. Or cars, for that matter. There probably weren't any commercial AC units blowing hot air around back then either. I am betting development started after the war, really began taking off in the late 50's, and reached a saturation point by the early 70's.
 
So, what do we have here? Simply put, a serious case of site contamination. While the population density is low, the development density is not. The factors of change are roads, parking lots, buildings, cars, AC units, and all the other things development brings. Remember my example from my last post of the black asphalt square? Same thing but with lots of hot exhaust added.
 
The bottom line is this station has been compromised to the point where the data is simply not usable. This situation certainly exists at three, and probably all four, of the four sites in Australia I have looked at in detail.
 
Which means, in all likelihood, I am down to just six sites.
 
What do you guess a good, detailed look at sites in the US is going to show?
 
Until next time.
 
 



GHCN data Part 5: Australia and Data Problems

In my last blog post I discussed creating accurate models of the existing data. I ended up with five separate models which describe what I would have to term different scenarios, meaning the history of temperature changes varied greatly from location to location. Temperatures fell in some places and rose in others. In a typical location the overall change ended up being close to no change, even though there were lots of changes in between.

Obviously temperatures, or more accurately temperature measurements, changed over time for certain reasons. Those reasons presumably would fall into categories. Those categories could be broadly defined as local, regional, or global changes.

A global change, by definition, would be a change which affects every location in the world. However, that doesn't mean the change would be discernible in all locations. Global changes could be offset or augmented by local or regional changes. This is where the task becomes extremely difficult. It is impossible to know, much less quantify, all the local or regional changes which might distort a global change.

Because of this difficulty I decided to make a test case out of Australia. The advantages are the limited number of stations involved, the relative isolation of the region, and its location (unlike the US, which is bordered by the Pacific, the Atlantic, and the Gulf). So the task of filtering the wheat from the chaff, as it were, should be easier. Should being the important word here.

My first pass at this task was looking at population growth as a proxy for changes in land usage. The urban heat island effect, where developed cities are often several degrees warmer than the surrounding areas, is well known. So this is a logical place to start. There does appear to be a correlation between population growth and temperature.
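As a rough illustration, a comparison like this can be reduced to a single correlation between current population and each station's overall temperature change. The summary-table layout below is an assumption made for the example.

```python
# Sketch: check whether current population tracks the overall temperature change
# across the ten Australian stations. The summary table layout is an assumption.
import pandas as pd

summary = pd.read_csv("australia_station_summary.csv")   # columns: station, population, temp_delta
corr = summary["population"].corr(summary["temp_delta"])
print("Correlation between population and temperature change: %.2f" % corr)
```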


In general, more people does equal higher temperatures. However, there are two stations where this pattern is broken rather severely: Low Head, on the northern coast of Tasmania, and the Otway Lighthouse. The two locations are somewhat similar; the Low Head station is located at the Low Head Lighthouse. For now I am going to concentrate on Low Head.

Low Head breaks the population / temperature pattern because it has the highest overall temperature increase but without a corresponding population increase. It is listed in the 2016 census as home to 6,765 people with a population density of 331 per square kilometer.

This is the Low Head Temperature record.

Please note the last 10 temperature readings are estimated based upon the average of the previous 10 years. This is one of two instances where I imputed data. Now I will strip that data off and proceed to do some statistical analysis.
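For illustration, here is one way that imputation (and the subsequent stripping) could be done; the file name and layout are placeholders, not the actual workflow.

```python
# Sketch: extend the annual series by 10 imputed years equal to the mean of the
# final 10 observed years, then strip them back off before analysis.
# File and column names are illustrative.
import pandas as pd

low_head = pd.read_csv("low_head_annual_max.csv")      # columns: year, temp (ends at last observed year)
fill_value = low_head["temp"].iloc[-10:].mean()        # average of the final 10 observed years
last_year = int(low_head["year"].iloc[-1])

imputed = pd.DataFrame({"year": [last_year + i for i in range(1, 11)],
                        "temp": fill_value})
extended = pd.concat([low_head, imputed], ignore_index=True)   # series with 10 imputed years
trimmed = extended.iloc[:-10]                          # imputed years removed again
```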

This is the same data converted to a standard normal distribution.

This is a graphical analysis generally used to look at data and see if there is evidence to suggest a shift in the average has occurred. This will become clearer in a moment, but it is obvious the average has changed over time.

 
This is where the advantages of this technique really come into play. This test shows an obvious change or changes in the site average. There is, however, more. It appears there are three distinctly different averages occurring at different times. Meaning, what looks like a gradual rise in temperature from 1959 to 1974 is actually two discrete changes: 0.69° in 1959 and 0.82° in 1974. The inference is the site experienced local changes which affected the temperature measurements.

To state the hypothesis formally: the data shows three distinct periods of time with three distinctly different averages. These mean shifts occurred with no apparent transition period, and there is no variation in the average between the shifts.

I will test this hypothesis, using the same technique, by splitting the graph into three parts and seeing if I can prove the null hypothesis. The null hypothesis is that the average did shift within these three time frames.
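Here is a rough sketch of that segment-by-segment check, assuming the standardized series from above is available as a simple year/z file. The 95% band around each segment mean is one common way to frame such a test; whether it matches the exact graphical method used here is an assumption.

```python
# Sketch: split the standardized Low Head series at the apparent 1959 and 1974
# shifts and check each segment against a 95% band around its own mean.
# File and column names are assumptions.
import pandas as pd

z = pd.read_csv("low_head_z.csv")                      # columns: year, z
segments = [z[z["year"] < 1959],
            z[(z["year"] >= 1959) & (z["year"] < 1974)],
            z[z["year"] >= 1974]]

for i, seg in enumerate(segments, start=1):
    mean, std = seg["z"].mean(), seg["z"].std()
    lo, hi = mean - 1.96 * std, mean + 1.96 * std      # 95% band around the segment mean
    outside = seg[(seg["z"] < lo) | (seg["z"] > hi)]
    print(f"Segment {i}: mean {mean:+.2f}, points outside the 95% band: {len(outside)}")
```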

 
This test is somewhat inconclusive. There is evidence to suggest mean shifts did occur, but, with the exception of the first point, nothing falls outside of a 95% confidence interval for the mean. The null hypothesis has not been proven.
 
 
This test is also inconclusive. There is insufficient evidence to support the null hypothesis.


This test is also inconclusive. There is no evidence to support the null hypothesis.

So, what does this mean? That is an excellent question. Having failed to prove the null hypothesis, my hypothesis stands. Not proven, just not disproven. Meaning there is a reasonable probability I am correct and it is exactly as it looks. The inference is something changed in 1958 - 1959 and in 1973 - 1974 which permanently altered the temperature measurements. Beyond those singular changes, there is no other evidence of any change or trend.

At this point I want to go back to my designation of changes being local, regional, or global. Based upon the other nine graphs for Australian stations, these changes are not reflected in eight of the other stations. The 1959 shift does appear to be reflected in the Otway Lighthouse record, but the 1974 shift is not. In fact, there appears to be no change at the Otway Lighthouse from 1959 onward. I will expound upon that in a future blog post.

Therefore, the assumption is these shifts reflect localized changes which impacted measurements on site. But what things could effect those kinds of changes?

Low Head Lighthouse
 
The first obvious thing to look at is the means of measuring temperature and how that is done. As mentioned in the first post of this series, thermometers evolved from mercury- and alcohol-filled glass thermometers to digital instruments which use a version of a thermocouple to measure temperature. That change really didn't get going until the nineties.
 
A Stevenson Screen
 
Weather monitoring stations use what is basically a white louvered box known as a Stevenson screen to hold measuring equipment. It is painted white to reflect sunlight and limit actual heating of the box, is enclosed on the sides by double walls, and holds the instruments at a standardized height above the ground. This design has remained essentially unchanged since the late 1800's.

There has been some speculation that changes made to paint formulas in the 1970's to eliminate lead may have resulted in changes to readings inside Stevenson screens. I have no data on that, so I will file it away for future consideration.

According to records online, the Low Head Lighthouse has undergone numerous changes over the years. Buildings have been added, electricity sources changed, equipment moved in and out. Probably the most important thing would be relocation, replacement, or repairs to the Stevenson screen. This is a location exposed to the ocean, so it is hard to imagine the same screen being in operation for over 100 years. I have no doubt it has been painted, repaired, moved, and even replaced at some point. The location where it currently resides is on a rocky, grassless portion of the site. There are several bushes and large rocks very close to the screen. Other spots nearby are grass covered, bare sand, or rock, so the surface color varies quite a bit. All of these could have a measurable effect on a spot just three meters off the ground.

One of the facts most people don't know is temperature isn't uniform even in a single location. More to the point, temperature as measured can vary quite a bit even within a few yards, even in your own yard. For example, you could have a small area in your yard covered in black asphalt surrounded by a grassy area. The asphalt can be quite hot while the grass is cool. A thermometer held above the asphalt on a calm day will read higher than it would when held in a nearby location over grass. Have you ever noticed when walking towards a beach how the sand can go from just warm to too hot to walk on to actually being cool? Ever wonder why wet sand tends to be cooler and dry sand tends to be hotter? If you guessed water you are correct. When it comes to sand and dirt wet is cooler and dry is warmer. The point here is location and maintaining a location is important.

However, these things are just speculation. Ultimately the point of this is something changed suddenly. There is no gradual, incremental change apparent.



This final chart for Low Head was created by adjusting the data to eliminate the changes which occurred around 1959 and 1974. I am really showing this more for speculative purposes, with respect to the concept of making adjustments to past data. How advisable is this? Is it really kosher? Even though backed by data, I am still making assumptions. I am assuming whatever site changes may have happened did not augment or mitigate other changes which may have occurred. Whatever changes did occur, if they occurred, it is too late to quantify them now except as an educated guess. Knowing exactly how to quantify such a change would require measuring in parallel for a period of time so that any measurement change would be precisely defined. A more rigorous approach in situ would be to quantify that change and then take action to eliminate it.
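As a sketch of what such an adjustment amounts to, the two estimated steps can simply be subtracted from every year after they occur; the file layout below is a placeholder. Whether doing so is defensible is exactly the question raised above.

```python
# Sketch: remove the two step changes estimated earlier in the post (0.69 deg at
# 1959 and 0.82 deg at 1974) by subtracting them from all later years.
# File and column names are assumptions.
import pandas as pd

low_head = pd.read_csv("low_head_annual_max.csv")       # columns: year, temp
adjusted = low_head.copy()
adjusted.loc[adjusted["year"] >= 1959, "temp"] -= 0.69  # undo the estimated 1959 shift
adjusted.loc[adjusted["year"] >= 1974, "temp"] -= 0.82  # undo the estimated 1974 shift
```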

The ramifications of this are pretty high within the context of my study. Therefore, I am somewhat undecided. This is the central question to this whole endeavor. The options I have are to discard the record entirely, modify it and include it, or leave it in and hope it is offset somewhere else by negative impacts.

I have no answer at this point.

More to follow....






Thursday, March 29, 2018

GHCN Part 4: The Models, Current State

In my previous post I talked about five distinct models of temperature data: three for the US, Canada, and Europe, and two for Australia. Based upon the defining parameter of what I am calling the temperature delta, the net change in temperature from the beginning to the end of my time period, these models describe individual station data for the appropriate region to within ±1° for 75% of stations and ±1.5° for the remainder.

Below are the five models.


 
 


Suffice it to say, at this point, there are obviously great differences in what happened between locations over the past 100 or so years. There are clearly significant local and regional factors as yet undefined. This, in itself, is a significant finding in the context of the Global Warming or Climate Change debate.

There are also indications of what may be global factors. If you look carefully at the first four graphs, paying attention to the red five-year average trend lines, there is a distinctive V-shaped pattern centered on the year 1996. This appears to be a strong signal, as it has manifested itself through four distinctly different trends. That would indicate a wide-ranging event of some significance, such as a major volcanic eruption.

I anticipate that moving forward from here is going to become progressively more time-consuming. My first inclination is to begin by looking at changes in population, based upon the Australian models. However, there are many ways in which the face of the land may change over time. Man and nature both never stand still.

Until next time, folks.



GHCN Part 3: Creating a Temperature Model


This is part three of a series on Global Warming using records from the Global Historical Climatology Network. In my first post of this series I discussed my data source, my means of selecting which records to use, and the time frame of study. I used only station records which were complete for the time frame of study. I had determined not to impute or estimate any data.

The previous post of this series described the results from that data in terms of the number of days above 85°F, 90°F, 95°F, and 100°F, as well as the number of days where the daily highs did not exceed 32°F, 20°F, and 10°F. As we saw, for the locations involved, the number of warmer days has consistently fluctuated up and down at apparently regular intervals, but has generally decreased since the 1930's. The number of colder days has consistently fluctuated up and down in a repeated pattern. For the number of days at or below freezing, the data indicate the years 1900 to 1930 and 1980 to 2003 are nearly identical. There is evidence to suggest that pattern began to repeat again in 2005 to 2010.

As I explained in the first post of the series there are certain limitations to this study which bear repeating here. The long term data from the GHCN daily max min tables is very limited. Most of the coverage is in the lower 48 states of the US. Canada, Europe, Central Asia, and Australia are all represented but to significantly lesser degrees. Africa, Central America, and South America are not covered.

With those limitations in mind, let me describe in general terms the process I used to develop the data into useful information.

Goal defined

When performing an analysis for this type of data the goal is to develop a model which accurately describes the data and then determine a means of applying that model to other, similar situations which fit the definitions of the model. This is a process which involves creating a model, testing the model against known data, evaluating the results, and adjusting the model accordingly. An accurate model will be able to perform accurate predictions for known data within acceptable levels of data variation. A model which cannot make accurate predictions against known data is flawed and therefore would not be useful.

Creating the Model

My first-pass approximation for such a model was the simplest model available: a raw average of all the data. I chose to test this model by comparing it to selected samples of individual station data. Without going into details, let me just say this initial model failed. For example, it failed to describe the general time-series trend of individual stations, meaning where things started off and where they ended up. That failure informed my method for refining the model.

I refined my data model to address that failure by creating a new data set consisting of beginning and ending temperatures for each station. I analyzed that data by calculating a temperature change delta for each station and performing statistical analysis on that data set.


I found that quantifying stations by the overall start-to-finish temperature change, the temperature delta, produced a near-normal data set. Using this as a starting point I refined my model into three separate models. One model covers the -1° to 1° range, which contains 75% of all stations. The second covers the 1° to 4° range, which contains 17% of the stations. The last covers the -2.5° to -1° range, which contains the remaining 8% of the stations.
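A minimal sketch of the delta calculation and the binning into those three ranges, assuming a simple station/year/temperature table; the names are placeholders.

```python
# Sketch: compute each station's start-to-finish temperature delta and bin it
# into the three ranges described above. The input layout is an assumption.
import pandas as pd

annual = pd.read_csv("station_annual_means.csv")        # columns: station, year, temp

def station_delta(group: pd.DataFrame) -> float:
    group = group.sort_values("year")
    return group["temp"].iloc[-1] - group["temp"].iloc[0]   # end minus start

deltas = annual.groupby("station").apply(station_delta)
bins = pd.cut(deltas, bins=[-2.5, -1.0, 1.0, 4.0],
              labels=["-2.5 to -1 deg", "-1 to 1 deg", "1 to 4 deg"])
print(bins.value_counts())
```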

As before, I tested the data models against actual station data, with model selection based upon the temperature delta parameter. Again, the models failed to accurately reflect all the data. Quantifying that failure was easy, as all failures occurred in Australia. Separating Australia into its own data set and performing the same analysis as above, I created two additional models. That brings the count to three models for the northern hemisphere and two for Australia, for a total of five.

These five models, based upon two selection parameters, are accurate within ± 1° for over 75% of the individual stations and within ±1.5° for the remainder. This is an acceptable degree of error in my opinion.

Refining and Utilizing the Model

One of the primary reasons for creating a model, beyond defining what has gone before, is to act as a predictive tool. Having defined models which describe what is known to have happened it now becomes necessary to try and define and quantify those factors which affected what happened. Therefore, it is helpful to have five different models and a wide range of results. These five models can be further reduced into three essential models based upon the over all results: Temperatures rose, temperatures fell, or temperatures did not change appreciably. Those are distinctly different outcomes.

Each station in this data set has been affected by certain factors. I will define those factors as local, regional, or global. Local factors contribute to unique site results. Regional factors contribute to site results over a certain area. The scope of regional factors may vary quite a bit. Global factors would contribute to outcomes all over the world.

The process going forward is defining and quantifying local factors, then regional factors, and then global factors. In doing so, the various models become refined to include those parameters. Ideally, they will be combined into one or two models. The process of model redefinition would, as always, include testing the models for descriptive and predictive accuracy.

The Example of Australia

Australia is an interesting case study for this method. There are only 10 stations with usable data. However, this data extends back to 1895. Australia not only has a distinctive regional difference, there are distinctive local differences. Australia is also a mostly sparsely populated place. One factor which became apparent immediately was population density. Given the time frame involved, it is reasonable to assume the initial population density is essentially zero. Predicting which model applies to a site by current population density proved 100% accurate. There are sites which are geographically close but far apart by population totals. The magnitude of temperature increase over the period 1950 to 2005 between these proximate sites was as much as 3.5° higher for the more heavily populated site.

When you consider the generally accepted figure for temperature rise over the past century due to CO2 is 1.5°, a 3.5° temperature increase differential over 55 years due to a population increase differential would be significant. Accepting those numbers as reasonably accurate, attributing 1.5° per century (roughly 0.8° over 55 years) to global factors, and assuming linearity, the inference is the localized factor of population growth has several times more influence on local temperatures than global factors do.

Moving Forward

Understand, this process is far from complete. I am presenting this information on what I am working on essentially in real time as the process progresses. I am letting the data lead me and not the other way around. I may end up somewhere totally unexpected. Even so, I think it is worthwhile to make these posts. I would certainly welcome constructive input.

 

Next post: The five models.

Saturday, March 24, 2018

Climate Change According to the Global Historic Climate Network: Part 2 - the Results

In my previous post I presented some background information on the GHCN and the data available from it. I won't go over that information in detail again here. What is important to know is I am looking at the GHCN data covering the years 1900 through 2011 using only stations with complete records for all those years. That is 493 stations. For this post I am using the highest recorded daily temperatures.
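For anyone curious how counts like these are produced, here is a rough sketch of counting threshold exceedances per year and smoothing them with a 5-year running average. The file layout, column names, and the unit conversion are assumptions for illustration.

```python
# Sketch: count, for each year, the station-days where the daily high exceeded a
# threshold such as 90 deg F, then smooth with a 5-year running average.
# GHCN daily values are typically stored as tenths of deg C; layout is assumed.
import pandas as pd

daily = pd.read_csv("ghcn_daily_tmax_complete.csv")     # columns: station, date, tmax
daily["date"] = pd.to_datetime(daily["date"])
daily["year"] = daily["date"].dt.year
daily["tmax_f"] = daily["tmax"] / 10 * 9 / 5 + 32       # tenths of deg C -> deg F

days_over_90 = daily[daily["tmax_f"] > 90].groupby("year").size()
trend = days_over_90.rolling(window=5, center=True).mean()   # the red trend line in the charts
```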

So without further ado or editorial comments, here are the fairly self-explanatory charts. Note: I have added trend lines, shown in red, consisting of 5-year running averages to each chart.

 
 
 
 
 
What you see here is nothing more than what we really should know from our history. The hottest weather seen since 1900 occurred in the 1930's. This corresponds with a significant drop in the number of days where the daily high failed to get above freezing. This also corresponds to a period of severe weather related incidents all over the world. In fact, 1936 was the worst year for heat, floods, and storms in the modern record.
 
Don't just take my word for that. Steve Goddard (aka Tony Heller)  has done an excellent job of documenting the extremes of the past on his blog. There you will see newspaper articles, magazine articles, government reports, and supporting data showing extreme events all over the world. A good starting point would be to simply search for 1936 on his blog. You will get plenty of reading material from that year alone.
 
 
If you look carefully you will notice a couple of other things. For temperatures over 90° F there are actually five individual spikes of varying magnitudes. These appear to occur at fairly regular intervals. The latest of these appears to have started in the late 1990's. You will also notice the number of days where the temperature did not exceed freezing began to go down about 1980 and reached its lowest point in the early 2000's.
 
As I explained in my previous post, the data I am presenting is heavily weighted towards North America and Europe. It does include data from all over the world, but that data is sparse in comparison. There is however nothing I can do about that. Lacking similar data from Africa means I am unable to create an extensive long term temperature record from Africa. Lacking similar data from South America means I am unable to do so for South America either. The records exist where they exist, they just do not exist elsewhere.
 
However, the data does cover significant portions of the inhabitable land mass of the world. If global warming were truly happening as rapidly and as incontrovertibly over all the world as claimed would it not be apparent in North America, Europe, and significant portions of Asia? Of course it would.
 
The simple fact is the UN, NASA, the NOAA, and others are trying to reconstruct a history of the global average temperature from woefully incomplete data. Yet, those reconstructions in no way match the actual temperature records from anywhere in the world with long term temperature records. Why then should we believe them, much less commit billions upon billions of dollars and place the very bases of our economies into their hands? Can we really justify condemning billions of people to poverty, disease, and no hope to ever better their lives based upon flimsy reconstructions?
 
Coming soon: Climate Change According to the Global Historic Climate Network: Part 3.
 
 

Climate Change According to the Global Historic Climate Network: Part 1

Before I get going on part 1 of today's post I would like to cover a bit of background information. The information I am working with comes from the Global Historic Climate Network (GHCN) which I pulled from a link on the Berkeley Earth website. The GHCN files in this data set run from the 1700's up to 2011. However, I am focusing on the years 1900 to 2011. This data set consists primarily of two files. One file contains daily maximum temperatures while the other contains daily minimum temperatures.

The vast majority of these daily measurements were made with a device which has been around for over 200 years, known as a Six's Thermometer. These are actually very accurate, and in many cases very precise, devices. Six's Thermometers little different from those used back in the late 1800's are still in use today for precise and accurate temperature measurements. A Six's Thermometer will faithfully record the maximum and minimum temperatures encountered between resets. Typically, a station observer would record the max and min temperatures from the previous 24 hours and reset the thermometer every day at the same time.

The era of precision thermometry began in the early 1700's. Daniel Gabriel Fahrenheit was the German physicist who invented the alcohol thermometer in 1709 and the mercury thermometer in 1714. He also established the Fahrenheit scale, which of course is still in use.

The point of this background information is don't make the mistake of assuming old data equates to inaccurate data. The basis of the science has not changed. Much of the equipment we use today has not changed substantially. The methods for establishing base point temperatures such as the freezing point of water have not changed appreciably. Every system for measuring temperature owes its accuracy to the same basic principles. In that regard nothing has really changed.

Below is a close up of a Six's Thermometer.

 
Below is a Six's Thermometer circa 1897 from the Museo Galileo.
 
 
Now it is time to begin to dive into the GHCN data. The data set is massive. During the period of 1900 through 2011 alone the data set contains measurements from 27,721 stations. I condensed the individual daily maximum measurements into 927,961 individual annual averages. Really gives you an appreciation for modern computers, doesn't it?
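The condensing step itself is a simple grouping operation. Here is a rough sketch, with a hypothetical file layout.

```python
# Sketch: condense the daily maximum readings into one annual average per station
# per year. The raw file layout is an assumption; only the grouping step matters here.
import pandas as pd

daily = pd.read_csv("ghcn_daily_tmax.csv")              # columns: station, date, tmax
daily["year"] = pd.to_datetime(daily["date"]).dt.year
annual = (daily.groupby(["station", "year"])["tmax"]
               .mean()
               .reset_index(name="annual_avg_tmax"))    # one row per station-year
```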

As massive as the sheer size of the data set is, the next question is what to do with it. Is it all good, usable data? The answer, unfortunately, is no. Most of the data is not usable for my look at the 1900 to 2011 time frame because comparatively few stations were in operation for the entire time span. It clearly makes no sense to compare the temperatures of Atlanta, Georgia in the 1930's to the average temperatures of Atlanta and Anchorage, Alaska in the 2000's.

I can attempt to use all the data and I have done so in a number of different ways. I could emulate NASA and the NOAA and try to fill in the blanks using statistical models. There is a major problem with that. As I show below, 70% of the data would have to be estimated. In other words, as massive as the data set is, it only contains 30% of the data necessary to look at all temperature trends associated with the locations of all 27,721 stations. Yet, that is exactly what Berkeley Earth, NASA, and the NOAA are doing. Much of what they present is based upon reconstructed data.

 
For my study I am using the 493 stations with complete records for every year in my study period. This is actually a very high number when compared to other data sets. As far as I have seen this is by far the most complete individual data set out there. It is also an entirely sufficient amount of data for trend analysis. The difference between what I am doing and what NASA, NOAA, and Berkeley are doing is I am making no attempt to construct a global average temperature out of incomplete data. I am looking at trends based upon continuous records for a very large number of locations and assuming those trends will be indicative of trends in most similar places.
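A minimal sketch of that filtering step, again with placeholder file and column names.

```python
# Sketch: keep only stations with an annual value for every year from 1900 through
# 2011 (112 years). Assumes a station/year table like the one produced in the
# previous sketch, saved to a CSV; names are illustrative.
import pandas as pd

annual = pd.read_csv("ghcn_annual_avg_tmax.csv")        # columns: station, year, annual_avg_tmax
span = annual[annual["year"].between(1900, 2011)]
counts = span.groupby("station")["year"].nunique()
complete_stations = counts[counts == 112].index         # 2011 - 1900 + 1 = 112 years
subset = span[span["station"].isin(complete_stations)]
print(f"{len(complete_stations)} stations with complete 1900-2011 records")
```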
 
What I am not doing is assuming those trends will match what has happened in every location on the planet. I do not expect what has occurred in San Francisco, Beijing, Anchorage, and the middle of the south pole would all be the same. I am quite sure they would not be the same as conditions on the ground, development, and a host of other factors would not be the same.
 
It is also important to understand what inhabited areas of the planet are represented by this data and to what degree. Most of the data comes from North America and Europe. Africa and South America are mostly unrepresented. Data from Asia, the Pacific, and Australia is sparse. That is really unavoidable because we just do not have enough long-term data from those areas. We have no idea what temperatures were on average in modern day Sudan back in the 1930's.
 
Is it not obvious it is impossible to determine trends in areas for which there is no data? No data is just no data. Any method of filling in the blanks would just be a fancy way of making a guess.
 
I am not guessing. At least I am being up front about this. I am telling you these various governmental agencies are not being up front. Besides, if climate change - i.e. global warming caused by CO2 emissions - is truly a global problem it should show up in any subset of the data I care to use. So long as I am faithfully and truthfully showing you what the data really says. That is something I promise you I am doing.
 
Next post: The Results.
 
Stay tuned.