Sunday, June 9, 2019

On Combining Record Series of Different Lengths Into A Time Series

How to combine a bunch of different record series into one grand time series is not a question the average person is likely to ever ask, much less care about. However, in the grand debate over climate change this question remains important. Many people, maybe even most, will naturally say this question was asked and answered long ago. Okay, most people don't know and really don't care. Neither of those things means asking basic questions shouldn't be done. What if reinventing the wheel produces a better wheel? And who wouldn't want a better wheel?

Maybe established wheel makers?

I have asked this basic question and I have an answer which I would like to present to anyone who is interested. I will do so in the simplest manner possible. The following examples describe this process. I have chosen a common waveform, based on the sine function, as the basis for the sample data.

Consider the four time series depicted below. All four follow the same sine wave pattern around different averages. However, only one record is complete. The other three are fragments measured from longer series. We assume these fragments follow patterns which are the same as, or very similar to, that of the complete series. I know this is true here because I constructed the data so it would be true.
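For illustration, data along these lines can be generated with a few lines of code. The lengths, offsets, and period below are made-up stand-ins, not the exact values behind the charts:

```python
import numpy as np

t = np.arange(120)                          # 120 time steps
period = 40                                 # length of one full cycle
wave = np.sin(2 * np.pi * t / period)       # the common underlying pattern

# One complete record plus three fragments, each sitting on its own average.
complete = wave + 10.0
frag_a = wave[20:100] + 12.0
frag_b = wave[35:75] + 8.0
frag_c = wave[50:70] + 11.0

# Store each series together with the time step at which it begins.
series = {"complete": (0, complete), "frag_a": (20, frag_a),
          "frag_b": (35, frag_b), "frag_c": (50, frag_c)}
```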


Obviously we cannot simply compute an average as we could if we had complete data for all four series; the error would come from the differences between the series averages. So I am going to transform the data into a form where those differences are minimized. I will do so by subtracting each series' own average from every value in that series. This, in effect, translates each series average to zero.
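In code, the transform is nothing more than subtracting each series' own mean, something like this (continuing the toy data above):

```python
# Center every series on zero by subtracting its own average from its values.
centered = {name: (start, values - values.mean())
            for name, (start, values) in series.items()}
```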


The final result of this operation is shown below.


You will notice there are errors in this process for the shortest two of the four series. The reason in each case is an error in estimating the true series average, caused by the length of the record. Each series is a cyclical curve with a repeating pattern, and any average estimated over a span which is not a whole multiple of that pattern's period will be biased by an amount that depends on where the endpoints fall within the cycle.

The following shows the resulting estimate of the complete average using the transformed data above. This estimate is very close to the actual average.



The equation of the linear trend for the estimated average is y = -0.0004x + 0.1049. The actual average is also endpoint biased; its linear trend line is y = -0.0005x + 0.1353. The true trend is zero.
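For reference, the composite estimate and its trend line can be computed along these lines. Since the toy lengths and offsets above are made up, the fitted coefficients will not match the quoted numbers exactly:

```python
# Average the centered series at each time step (gaps are simply skipped),
# then fit a straight line to the composite with ordinary least squares.
grid = np.full((len(centered), len(t)), np.nan)
for row, (start, values) in enumerate(centered.values()):
    grid[row, start:start + len(values)] = values

estimated = np.nanmean(grid, axis=0)                 # the composite estimate
slope, intercept = np.polyfit(t, estimated, 1)
print(f"estimated trend: y = {slope:.4f}x + {intercept:.4f}")
```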

The following example illustrates how this procedure performs using two series which are identical with the exception of a trend induced into a portion of one series.




Now I am going to present the same operation using the anomalies-from-a-common-baseline technique which, as I understand it, is another commonly used method. For this example my baseline period corresponds to the portion of time where a trend was induced into one of the two series. I have chosen to do this to illustrate the potential for error.
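In code, the anomaly version differs only in which average gets subtracted: each series is referenced to its mean over a shared baseline window rather than to its own full-record mean. A sketch, reusing the toy series from earlier and a made-up window (every series must overlap the window):

```python
# Anomalies from a common baseline: subtract each series' average over a shared
# baseline window instead of its own full-record mean (window indices made up).
base_start, base_end = 60, 80

def to_anomalies(start, values):
    lo = max(base_start - start, 0)
    hi = min(base_end - start, len(values))
    return values - values[lo:hi].mean()     # mean over the overlap with the window

anomaly_series = {name: (start, to_anomalies(start, values))
                  for name, (start, values) in series.items()}
```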


As we can see, the fit between the two series is not as good as with the previous process. The resulting average has exactly the same trend as the previous method, with a slightly lower overall average.


I am now going to further explore how these two processes differ by looking at the range of the data produced over the length of the time series. Keep these results in mind.




Both processes are a means of meshing together data series of differing lengths and differing base averages. Both share the same error source: the error in estimating how the data you have relates to the actual local average over the full time frame being studied. That relationship is, in fact, unknown.

The assumption, as I stated in my first example, is that a pattern exists which would adequately describe all the fragmentary records we have, if only we knew what that pattern was. Assuming such a pattern exists in reality, and having produced an estimate of it, it is necessary to determine how good the estimate is. One way of doing so is to test your estimated data against the data you actually have. The following shows how well the estimated pattern from each method, anomalies and my transform method, matches the patterns of the individual data sets used in my example.
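One simple way to do such a test, sketched below with the toy data, is to line the composite up against each input series over its own span and measure the mismatch, for example as a root-mean-square error after removing both means:

```python
# Score a composite against each input series over the span it actually covers,
# using an RMSE computed after removing both means so the comparison is about
# pattern rather than base average.
def rmse_against_inputs(composite, records):
    scores = {}
    for name, (start, values) in records.items():
        segment = composite[start:start + len(values)]
        diff = (segment - segment.mean()) - (values - values.mean())
        scores[name] = float(np.sqrt(np.mean(diff ** 2)))
    return scores

print(rmse_against_inputs(estimated, series))
# An anomaly-based composite can be scored with the same function.
```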


My transform method produces a good match where the two series follow the same pattern. The anomaly method produces a good match only where the individual series data are close to the baseline average, which was determined during the time of a temporary localized trend.

The bottom line here is that the sub-average used to calculate anomalies in my example was simply an inaccurate estimate of the average of the available data. The best estimate of the average of the available data is the average of the available data. Which is so basic, isn't it?

This is the essential weakness of any attempt to combine incongruent data series of varying lengths. You do not know if the snippet of data you have represents a point in time which is higher or lower than the true average. The correct way to average the data is exactly the same way you would do so if you had complete data.

Therefore, the most accurate method to use is one which comes the closest to the ideal condition of replicating the true average for the time period you are looking at.

Now, let's discuss how an uneven number of these short time series might induce a bias into a calculated composite time series using the anomalies from a baseline method. I will again demonstrate using an example.

For this example I am using three simulated series, two of which have negative trends induced during the period I am using to establish the baseline from which anomalies are computed.


Now I am going to convert each curve to a set of anomalies from this baseline period.


As you can see, the fit is not exactly optimal. Below is how this new computed average looks against the actual average.



As expected, the match of the estimated curve is only good close to where the three curves were averaged. Below is how this biased average manifests in terms of the error from the true average.



As you can see, the net effect is to lower the estimate of the past. What causes this bias in the resulting average is the number of series which are complete from the baseline point forward and incomplete from the baseline point back. Obviously, if we had complete data sets for all three series the resulting average would have the correct shape. Likewise, if the lengths and start times were evenly distributed throughout, the composite time series would be more accurate, as the error would be more or less evenly distributed. In this scenario, however, the error is forced into the past through the averaging process and by the increasing number of series added moving forward.
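The whole mechanism can be sketched end to end. The lengths, trend sizes, and baseline window below are made up, but the result shows the same signature, a past that sits low relative to the present:

```python
import numpy as np

t = np.arange(120)
full_wave = np.sin(2 * np.pi * t / 40.0)        # the shared underlying pattern
base_start, base_stop = 60, 80                  # baseline window (made-up indices)

# Series 1 is complete. Series 2 and 3 begin only at the baseline and carry a
# temporary negative trend through the baseline window itself.
s1 = full_wave + 10.0
s2 = full_wave[base_start:] + 10.0
s3 = full_wave[base_start:] + 10.0
ramp = np.linspace(0.0, 1.0, base_stop - base_start)
s2[:base_stop - base_start] -= ramp
s3[:base_stop - base_start] -= 1.5 * ramp

records = {"s1": (0, s1), "s2": (base_start, s2), "s3": (base_start, s3)}

# Convert each record to anomalies from its average over the baseline window,
# then average the anomalies at each time step.
grid = np.full((len(records), len(t)), np.nan)
for row, (start, values) in enumerate(records.values()):
    window = values[base_start - start:base_stop - start]
    grid[row, start:start + len(values)] = values - window.mean()
estimate = np.nanmean(grid, axis=0)

# With complete, untrended records the composite would simply reproduce the
# wave, so the pre-/post-baseline offset in the error is the induced bias:
# the past ends up low relative to the present.
truth = full_wave - full_wave[base_start:base_stop].mean()
error = estimate - truth
print(error[base_stop:].mean() - error[:base_start].mean())
```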

This is the mechanism whereby this process will be biased to either overestimate or underestimate the past, depending upon the number of series added moving forward and the direction of any anomalous trends in the baseline period.

Friday, February 22, 2019

Back to Basic Statistics: Global Warming is Likely An Artifact of Localized Night Time Warming


I am looking at the history of temperature change from 1900 to 2010 using a sample of 691 stations from the USHCN. The choice of stations was determined by the number of years recorded: these 691 stations each have a minimum of 100 annual records.

Since these stations cover a range of temperatures, and the number of stations reporting per year is less than 691 for much of this time frame, it is necessary to translate the records to deviations from a baseline average. This is less than ideal; however, limiting the sample to stations with all 111 years reduces the sample size quite a bit.
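For anyone who wants to follow along, the selection and translation steps might look roughly like this; the file name and column names are hypothetical stand-ins for however the USHCN annual means are stored:

```python
# A sketch of the station selection and the translation to deviations, assuming
# a long-format table of annual station means with hypothetical columns:
# station_id, year, tavg.
import pandas as pd

annual = pd.read_csv("ushcn_annual_means.csv")          # hypothetical file name

window = annual[(annual.year >= 1900) & (annual.year <= 2010)]
counts = window.groupby("station_id")["year"].nunique()
keep = counts[counts >= 100].index                      # stations with 100+ annual records
sample = window[window.station_id.isin(keep)].copy()

# Translate each station's record into deviations from its own baseline average
# (the choice of baseline window is the subject of the next few paragraphs).
def add_deviation(frame, base_years):
    base = frame[frame.year.between(*base_years)]
    base_mean = base.groupby("station_id")["tavg"].mean()
    frame["deviation"] = frame["tavg"] - frame["station_id"].map(base_mean)
    return frame

sample = add_deviation(sample, (1900, 1960))            # the baseline settled on below
```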

The question at this point is how to choose an appropriate time interval for establishing this baseline.

Consider what happens when I use the 1980s to establish this baseline. The shape of the average temperature will be the same no matter what baseline I choose. The issue here is what happens to the standard deviations, which indicate that my choice of the 1980s as a baseline has changed the shape of the data distribution over time. That shape does not accurately reflect how the data actually changes.

A test for this is to simply compare the projection against the actual data from 1900 forward. I accomplished this by subtracting decadal averages from the 2000-2009 average for each station and looking at the average and spread of those differences for each decade. This shows the projection is only accurate from about 1980 forward; the accuracy decreases progressively from 1980 back to 1900. This was the expected result.
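A sketch of that decadal check, continuing with the hypothetical frame above:

```python
# For each station, subtract its decadal averages from its 2000-2009 average,
# then look at the mean and spread of those differences decade by decade.
sample["decade"] = (sample["year"] // 10) * 10
per_decade = (sample.groupby(["station_id", "decade"])["tavg"]
                    .mean()
                    .unstack("decade"))
diff = per_decade.rsub(per_decade[2000], axis=0)        # 2000s average minus each decade
print(diff.agg(["mean", "std"]).round(3))               # one column per decade
```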


Consider what happens when I use the 1920s to establish my average baseline. Using the same testing method as before, I found this to be reasonably accurate, within the population parameters, from 1920 forward.


I decided upon using an average from 1900 to 1960 as the baseline. When tested as above, this produced the most accurate results. Part of that comes from the longer length of time used, which evens out smaller sub-trends within the data. There is also the issue of the number of stations reporting. Any longer time frame begins reducing the projection accuracy.

But what about all those shorter station records?

It should be obvious, based upon how I define my baseline, that I can't add in any records beginning after the baseline time frame. For those stations which began reporting prior to 1960, the accuracy with which they can be located within the record depends upon how many years they reported within the baseline period. I can only accurately place a record within this time frame if I can accurately transform the data by the station location's true average from 1900 to 1960.

You can test this by comparing sub-averages of samples from the data set. For example, look at the difference, by station, between the 1900-1960 average and the 1930-1960 average, then look at the average and standard deviation of those differences. The standard deviation defines the amount of uncertainty added by including stations which began reporting in 1930. That added uncertainty is simply unacceptable, meaning I can't accurately place the shorter record into the data set because I do not know what its true average really was.
The proper way to handle shorter data sets is to perform this exact same process from a later baseline. Again, you need to maintain as uniform a data set as you can, and the same comments concerning shorter records still apply. To include even more data sets you would look at a shorter time frame. Shorter studies can then be compared against longer studies over coincident time frames. You simply cannot average a record from 1930 to 1950 with a record from 1990 to 2005. That is common sense for two records, and the same logic applies to many records.
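The placement test described above might be sketched like this, still using the hypothetical frame and column names:

```python
# The per-station gap between the 1900-1960 average and the 1930-1960 average,
# and the spread of that gap across stations.
full_base = (sample[sample.year.between(1900, 1960)]
             .groupby("station_id")["tavg"].mean())
short_base = (sample[sample.year.between(1930, 1960)]
              .groupby("station_id")["tavg"].mean())
gap = short_base - full_base
print(gap.mean(), gap.std())   # the std is the extra uncertainty from a 1930 start
```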

Now the results

The average temperature of this sample did rise by 0.28 °C in the 2000s relative to the 1900-1960 baseline. The standard deviation also rose, by 0.25 °C. That translates into an increase of roughly 0.8 °C in the spread of the data (a 90% interval spans roughly 3.3 standard deviations, and 3.3 × 0.25 ≈ 0.8).




 

Let’s examine how the upper and lower edges defining a projected 90% of the population changed over time.
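Those edges can be computed per year as the mean of the station deviations plus or minus 1.645 standard deviations (or, just as well, as the 5th and 95th percentiles); a sketch, continuing the hypothetical frame:

```python
# Upper and lower edges of a projected 90% interval, year by year.
yearly = sample.groupby("year")["deviation"]
upper = yearly.mean() + 1.645 * yearly.std()
lower = yearly.mean() - 1.645 * yearly.std()
```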

 
 



As you can see, both plots show the warming which occurred going into the 1930s and the subsequent cooling trend from about 1950 forward. However, the upper bound shows a marked increase from about 1970 forward, accompanied by a slight decrease in the lower bound. Both boundaries show a period of warming from about 1996 forward.

This pattern is further demonstrated by data consisting of station averages from 2000 to 2009. The indication is that, relative to my 1900 to 1960 baseline, there has been an upward skewing of the data.



 

Where does this trend originate?

To answer that question, I broke the average down into its component parts: the annual averages of the daily minimums and maximums. You will notice the trend for maximum temperatures is close to zero overall. It shows the now familiar warming trend into the 1930s and the cooling after about 1950.





However, the minimums show a warming trend beginning after 1960 which is not evident in the plot above. Now you know why my choice of baseline time frame makes sense. Remember, this has no effect upon the average.
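The split into minimum and maximum components can be sketched as follows, assuming the annual table also carries tmin and tmax columns (hypothetical names):

```python
# Express each component as a deviation from the station's own 1900-1960
# average, then average across stations by year.
def yearly_curve(frame, column, base_years=(1900, 1960)):
    base = frame[frame.year.between(*base_years)]
    base_mean = base.groupby("station_id")[column].mean()
    dev = frame[column] - frame["station_id"].map(base_mean)
    return dev.groupby(frame["year"]).mean()

tmax_curve = yearly_curve(sample, "tmax")   # the post finds this close to flat overall
tmin_curve = yearly_curve(sample, "tmin")   # the post finds the post-1960 warming here
```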




Now we have established that, for my 691 station sample, warming after 1960 is almost exclusively associated with the night time lows. We can account for this with development factors.

The following chart shows the results of a correlation study between a development rating factor and the 2000s average temperature. This development factor is the width, in miles, of proximate development at its widest point, as measured on Google Earth. Since I have defined temperature as a deviation from a 1900-1960 average, the 2000s average is also an average deviation. There does appear to be evidence of correlation: I have a correlation coefficient of 0.72 and an R-squared value of 0.51.
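The correlation numbers themselves reduce to a one-line calculation; a sketch, written as a function since the 33 development measurements are not reproduced here:

```python
# Pearson correlation and R-squared between development width and the 2000s
# average temperature deviation (both passed in as 1-D arrays).
import numpy as np

def correlation_summary(dev_width_miles, temp_deviation_c):
    r = np.corrcoef(dev_width_miles, temp_deviation_c)[0, 1]   # Pearson r
    return r, r ** 2                                            # r and R-squared

# With the post's 33 sites this comes out to roughly r = 0.72, R-squared = 0.51.
```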



This indicates warming is associated with development, warming is limited to specific locations, warming is evident in the night time lows, and no warming is evident in the daily highs.

This result runs counter to the theory of warming due to increased CO2.

 

You can view the 691 sample sites for this study as well as the 33 sites for the correlation study along with included data and information in interactive map form at the following links.