Friday, June 22, 2018

How George Bush, Barack Obama, and SCOTUS Created the Current Immigration Crisis

I have been watching the current discourse on illegal immigration in the US with great interest and often with rising anger. My anger is not with those of us who are passionate on the subject. My anger is with the media and the politicians. They are not being above board. They are not presenting the full picture; in fact, they are misrepresenting what is going on to promote their own agenda. To defeat Trump in the midterms. To support Trump in the midterms. Either way, you, the voter, are being screwed.
 
How difficult is it for professional news people to present the entire picture? How many people were apprehended, detained, deported, or granted asylum each year since 1997? Come on! All they show you are bits and pieces that are true enough but lack the context of the full picture. They are stoking your anger on purpose. The fact that we can't rely upon the mainstream media to present an honest, factual, and complete picture, so that we the people are truly informed, is a real danger to our republic.
 
I am going to present what I see as a more complete and therefore more accurate picture. I am also providing supporting articles.
 
First off, this crisis is not new. The pictures you see of people behind chain link fencing and barbed wire are not new. George W Bush was lambasted for his handling of illegal immigration and so was Barack Obama. Quite strenuously, in fact. You will find plenty of supporting articles below. If you look for yourself you will find plenty more.
 
The current crisis Trump is dealing with was created by those who came before. I could go back even further, but the start of the current crisis actually began in the 1980s with the Flores case, which was finally decided by the Supreme Court in 1993. The key point impacting the situation today is the resulting limit on the amount of time an unaccompanied minor can be held: about 20 days, after which they have to be deported, returned, or released.
 
Obama was famous in certain circles for instituting the program of catch and release. According to many sources, the Obama administration released 75% of the illegals without pursuing deportation, including at least 68,000 with existing criminal records in the US. However, Obama wasn't the first to pursue this policy; that was actually George W Bush. Apparently PBS and other liberal outlets had no problem castigating GW Bush for this policy being a disaster. As PBS noted, while this policy was in effect, illegal entry across the border surged.
 
In 2008 George W Bush signed into law a bipartisan bill that required a full immigration hearing for any unaccompanied minor seized at the border who did not come from either Mexico or Canada. Also under this law, immigrant minors from Central America must be transferred to DHHS within 72 hours of apprehension. After this law took effect the illegal immigration crisis exploded, from fewer than 5,000 children in 2008 to nearly 75,000 in 2013. It is important to note none of this impacts anyone from Mexico or Canada; it applies only to noncontiguous countries, which in practice means Central America.
 
From 2012 or 2013 forward, the Obama administration responded to the crisis by building facilities, or appropriating existing structures, to serve as family detention centers. This was met with stiff resistance from pro-immigrant groups, progressives, the ACLU, and others. Issues with such detention centers being unsafe, unsanitary, and lacking the basics needed to meet the needs of those held were raised under Obama just as they had been under GW Bush. Ultimately most family detention centers were shuttered.
 
The final factor which helped create this crisis is the practice of releasing families into the general population if they have children. This was done throughout the Obama years, with most of these families disappearing and never appearing before an immigration judge. This is well documented. What is not well documented is when this practice began. I am inclined to suspect it began during the illegal-entry surge created by GW Bush with his catch and release policies.
 
One thing appears obvious to me. People in Central America and Mexico follow the changes in our laws and in our practices quite closely. Changes in policy or law here result in changes in the numbers of people coming to our borders. No doubt they are watching the current situation as well.
 
It should also be obvious that detaining people as family units was not acceptable to the left, even when done by the left. Which is admirably consistent. However, now we find detaining minors separately from their families is also unacceptable. So that leaves catch and release, where 97% just disappear and never face an immigration judge. That is unacceptable as well. These policies helped create the crisis in the first place.
 
What we have, and have had for the past three decades, is unacceptable. The actions of previous administrations only made things worse. Unrestricted immigration and illegal immigration are hugely unpopular with the majority of Americans on both sides. We don't want it. We want the crisis ended and illegal immigration stopped. That means doing away with the failed policies of the past, the Flores agreement, and the 2008 immigration law. Good intentions do not mitigate bad results, and if what you do creates bad results you have to do something else.
 
 
Supporting articles:

New York Times - February 2007
The Least of These - March 2007
The ACLU - December 2010
The Atlanta Journal Constitution - November 2014
MSNBC - November 2014
CNBC - June 2014
The Guardian - July 2014
NBC News - July 2014
FindLaw - 2015
Detention Watch Network - 2017
NOLO - June 2012
ACLU - January 2013
Huffington Post - November 2012
CIS - March 2014
Washington Times - March 2014
PBS - June 2014
 

Wednesday, April 4, 2018

GHCN Part 9: A look at the Daily Minimums Debunks a Basic Assumption of Global Warming

In today's post on the Global Historical Climatology Network data I am going to concentrate on daily minimum temperatures for long-term stations in North America and Europe. As I mentioned last time, the coverage is heavily weighted toward the US.

In my last post I talked about the high amount of variance between stations. I conjectured most of those variances were due to localized site changes such as development. I believe that is a safe conjecture.

However, looking at daily minimums yields a different picture. The site-by-site variation is there, but it is not as pronounced. The standard deviation of the annual station averages is only 0.46°, a very reasonable value in comparison to my previous data set. The annual range between the highest and lowest deviations from the station average is consistent on average.

The following chart is the difference, by year, between the highest and lowest temperature records for all stations in the study. While there is some variation over time, the key point is the lack of any clear trend on average. There is fluctuation in the magnitude of in-year variation, but that appears to be due to weather events within the US in the form of hot and cold waves. Because the data is heavily weighted toward the US, it is sensitive to such events.


This is the average daily minimum temperature record for all stations as mentioned above. It is a reasonable approximation of individual station records.


The following graph may grab your interest if you are familiar with statistics, especially the brand of statistics used in Quality Engineering. If you are interested in the technique, you can search for Statistical Process Control. It is a well-established methodology which has been in wide use since the 1950's.

What you see here is my twist on the method. I have transformed the data shown in the preceding chart by converting it to standard deviations, with the overall average normalized to zero. This is nothing more than a graphical test for equality of the means. Confidence intervals are thus easily defined; for example, ± 1.96 standard deviations form a 95% confidence interval. The second key indicator of a shift in the mean is the number of consecutive points above or below the zero line.
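If you want to try this on your own data, the transformation and both signals take only a few lines. Below is a minimal sketch in Python; the random series is a stand-in for the real station averages, which are not reproduced here.

```python
import numpy as np

# Stand-in for the real annual averages; substitute the GHCN series.
rng = np.random.default_rng(0)
years = np.arange(1900, 2012)
annual_means = 5.0 + 0.3 * rng.standard_normal(years.size)

# Convert to standard deviations with the overall average at zero.
z = (annual_means - annual_means.mean()) / annual_means.std(ddof=1)

# Signal 1: points beyond +/- 1.96 fall outside a 95% confidence
# interval for the long-run mean.
print("outside 95% CI:", years[np.abs(z) > 1.96])

# Signal 2: a long run of consecutive points on the same side of
# zero also suggests a shift in the mean.
signs = np.sign(z)
run, longest = 1, 1
for a, b in zip(signs, signs[1:]):
    run = run + 1 if a == b else 1
    longest = max(longest, run)
print("longest one-sided run:", longest)
```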

There is no question about the clear signal of a pattern here. There is also clear evidence of extreme events occurring in 1904, 1917, 1921, 1931, and 1998.


Thus far I see no reason to doubt the accuracy of these extreme events. They are, however, out of the ordinary. The other interesting observation is how the year-to-year variability decreased going into the 1940's and then again in the 1960's, then increased coming into the 1970's. That is reflected in the chart of annual ranges above.

The conclusions I draw are as follows:
  • There is evidence of a regular pattern about 60 or so years in length.
  • There is no statistically significant difference between the 1900's and the 1960's. Using my normalized data, the 1960's are warmer by 0.07 standard deviations. This is insignificant. (A sketch of this kind of decade comparison follows the list.)
  • There is no significant difference between the 1930's and either the 1990's or the 2000's. The 1930's are warmer than either by 0.08 standard deviations. This is insignificant.
  • If this pattern holds true I would expect to see a low point going into the 2020's. This does appear to be happening, but I would be very careful drawing conclusions from short-term data. However, similarities do exist between 1931 to 1942 and 1998 to 2007.
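As flagged in the list above, the decade comparisons boil down to differencing decade means on the normalized scale. A minimal sketch, again with stand-in data rather than the actual normalized series:

```python
import numpy as np

# Stand-in normalized series (z-scores by year); substitute the
# real normalized station data from the chart above.
rng = np.random.default_rng(1)
years = np.arange(1900, 2012)
z = rng.standard_normal(years.size)

def decade_mean(start):
    """Mean z-score for the decade starting at `start`."""
    mask = (years >= start) & (years < start + 10)
    return z[mask].mean()

# Differences between decades, expressed in standard deviations.
print(f"1960s minus 1900s: {decade_mean(1960) - decade_mean(1900):+.2f} sd")
print(f"1930s minus 1990s: {decade_mean(1930) - decade_mean(1990):+.2f} sd")
```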

Finally, the last question is why the daily low temperatures would show such a different result. I will hazard a few guesses:

  • Daily lows must be largely unaffected by the site changes which cause higher daytime temperatures.
  • Structures and surfaces added to a site cause increased temperatures due to differences in absorbed energy and in heat capacity or specific heat. Lower heat capacity or specific heat means surfaces and objects reach a higher temperature for the same energy absorbed than surfaces and objects with higher heat capacities or specific heats. That generally means they cool off more quickly as well, so the extra heat is not retained. (A quick worked example follows this list.)
  • The opposite effect occurs where specific heat is relatively higher. The best example is water. Water in either liquid or gas form has a much higher specific heat than a normal atmospheric gas mixture, concrete, brick, shingles, and so forth. A body of water not only stays cooler during the day than the surrounding land, it also cools off much more slowly.
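Here is the worked example promised above, using standard textbook specific heats; the energy and mass figures are arbitrary and chosen only for the comparison:

```python
# Standard textbook specific heats, in J/(kg*K).
c_water = 4186.0     # liquid water
c_concrete = 880.0   # concrete; brick and asphalt are of similar order

# Equal absorbed energy and equal mass, chosen only for comparison.
energy = 1.0e6  # joules
mass = 100.0    # kilograms

# Temperature rise for the same absorbed energy: dT = Q / (m * c).
print(f"water:    +{energy / (mass * c_water):.1f} K")    # about 2.4 K
print(f"concrete: +{energy / (mass * c_concrete):.1f} K") # about 11.4 K
```

The lower the specific heat, the larger the swing in both directions, which is exactly the day-hot, night-cool behavior described above.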

The lowest temperature of a typical day in most locations normally occurs within an hour of sunrise. To systematically affect the daily minimum temperature, objects, structures, and surfaces that retain or produce heat must be added to the site. That is possible; certainly, adding a pond or lake next to a climate station could have such an effect.

Conclusion:

This result, and the obviously different outcome of my prior study, supports the supposition that most instances of higher-than-typical temperature increases are due to site changes as described above.

I would further conclude the daily minimum temperatures provide a far more accurate picture of what is happening with respect to the anthropogenic global warming theory.

The lack of any evidence of a change in heat retained overnight, if correct, would debunk the concept that added CO2 is causing the surface of the Earth to warm due to downward IR. The logic behind this assertion is simple: if CO2 truly did act as a greenhouse or a blanket to retard cooling, that effect would show up in progressively higher overnight temperatures. There is no evidence that has occurred.

You could conjecture as to whether or not temperatures have increased during the overnight hours which precede the daily low point. This data does not address that conjecture.

Until the next time.......

Sunday, April 1, 2018

GHCN Post 8: North America and Europe or It Varies. A Lot.

This is my eighth post in this series. I would encourage anyone to start at the first post and read forward; however, this post will serve as a stand-alone document. In this post I have taken my experience in exploring the Australian record and applied it to cover North America and Europe.
 
The way to view this study is literally as a statistic-based survey of the data, meaning I have created a statistic to quantify, rank, and categorize the data. My statistic is very straightforward: it is simply the net change in temperature between the first and last 10 years of 1900 through 2011 for each station.
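For concreteness, here is a minimal sketch of that statistic in Python. The DataFrame layout, with 'station', 'year', and 'temp' columns, is my own assumption; parsing the GHCN files themselves is omitted.

```python
import pandas as pd

def net_change(df: pd.DataFrame) -> pd.Series:
    """Net change per station: the mean of the last 10 years
    (2002-2011) minus the mean of the first 10 (1900-1909)."""
    first = df[df["year"].between(1900, 1909)].groupby("station")["temp"].mean()
    last = df[df["year"].between(2002, 2011)].groupby("station")["temp"].mean()
    return (last - first).dropna()
```

Stations with too few readings in either window simply drop out of the result.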
 
Below is a list of countries showing the lowest net change, the highest net change, and the number of stations per country.
 
 
This is an old-fashioned histogram showing how the stations rank in terms of overall temperature change. It shows the data falls in a bell-shaped curve; the underlying distribution is very close to normal, which means analysis using normal techniques will yield very reasonable estimates. That is significant to a statistician, but you don't need any statistical knowledge to understand this.
 
The mid-line value is between -0.5° and 0.5°. The number of stations showing an overall drop in temperature is 40%; slightly less than 60% of the stations show an increase. The absolute change is statistically insignificant in 74.6% of the stations.


The following graph shows a normalized look at each category: no significant change, significant warming, and significant cooling. The graph is of rolling 10-year averages, and each plot has been normalized so the 1900 - 1910 average is zero.
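The normalization just described is simple to express in code. A minimal sketch, assuming a hypothetical pandas Series of annual values indexed by year:

```python
import pandas as pd

def normalized_rolling(annual: pd.Series) -> pd.Series:
    """Rolling 10-year mean, shifted so the 1900-1910 average
    plots as zero."""
    baseline = annual.loc[1900:1910].mean()
    return annual.rolling(window=10).mean() - baseline
```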

You will note that, though the overall slope of each plot is significantly different, the shape of the plots is nearly identical. A random sampling of individual station data shows that this holds for each station in the range. For example, Denmark's Greenland station shows the 1990 - 2000 average is the same as the 1930 - 1940 average.

Short term changes, such as the warming into the 1930's, hold true for the vast majority of stations. Other examples of this would be the 1940's temperature jump, the post 1950 temperature drop, and the late 1990's temperature jump.

Long term changes vary significantly.

 
There are a number of conclusions to be drawn from this analysis.
 
There is no statistically significant difference between North America and Europe. Those stations showing significant cooling are just 8% of the total. By that statistic, the expected number of the 17 European stations showing cooling would be just one, and the number expected to show significant warming would be three. From a statistical sampling standpoint, 17 is just not a robust enough sample size to yield accurate estimates.
 
Short-term changes which appear in the vast majority of stations from Canada to the US to Europe are probably hemispheric changes. However, there is no indication these are global changes, as there is no evidence of similar changes in Australia. Australia did not experience a 1930's warming trend, for example. In fact, the overall pattern in Australia is obviously different from what we see here.
 
The evidence strongly suggests the large variation in overall temperature trends is due to either regional or local factors. As shown in the data table at the beginning, the extremes in variation all come from the US. As noted before, there just aren't enough samples from Europe to form accurate estimates for low percentage conditions.
 
Further evidence suggests most of the differences in overall temperature change are due to local factors. What we see in the US is that extreme warming is generally limited to areas with high population growth or high levels of development. Large cities such as San Diego, Washington DC, and Phoenix follow the pattern of significant change. Airports also follow this pattern. However, cities like New Orleans, St Louis, El Paso, and Charleston follow the pattern of no significant change.
 
In conclusion, based upon the available long-term temperature data, the case for global warming is very weak. There is evidence to suggest a hemispheric pattern exists. The evidence further suggests this is a cyclical pattern, evident in localized temperature peaks in the 1930's and the 1990's. However, changes in local site conditions due to human development appear to be the most important factor affecting overall temperature changes. Extreme warming trends are almost certainly due to human-induced local changes.
 
What is unclear at this point is the significance of lower levels of human-induced local changes. Assessing this would require examining individual sites to identify a significant sample with no changes. Unfortunately, the US, Canada, and Europe are not nearly as obliging with that kind of information as the Aussies are. I have to admit the Australians have done an excellent job of making site information available. Having the actual coordinates of where the testing station resides made it easy: I literally pulled them up on Google Maps and was able to survey the site and surrounding areas.
 
It appears this is about as far down the rabbit hole as I am going to get, at least not without a lot of work which at this point doesn't appear warranted.
 
Until next time......

Saturday, March 31, 2018

GHCN Part 7: Australian Temperature Record 1895 - 2011

Before I begin I will briefly recap previous posts in the series. I am looking at the temperature records contained in the Global Historical Climatology Network. I have focused on Australia as a test case to develop my process. There are only 10 stations with long-term records from Australia in the GHCN data set.

As I mentioned in a previous post, I found the data from the station located in Robe, Australia to be unusable because of serious site contamination, meaning what was once probably a rural area is now a business district. It is surrounded by concrete, buildings, AC units, and other objects which could influence the measurements. All of this is easily detectable in the data.

After reviewing each site and its data, I rejected 7 out of 10 sites as unusable. Below are pictures of representative sites I rejected.

The following are from the town of Hobart.



The station at Hobart is surrounded by buildings in what appears to be an industrial or business area.

These are pictures of the station located in Deniliquin.


This station is located almost in a courtyard at the edge of a large development. However, this is one of the sites which had an unusual cooling trend. One of the things which jumps out at me is how shadows from nearby trees and buildings come close to the station location.

The next set of pictures is from Observatory Hill in Sydney.


This station is in a partially enclosed space, close to a brick wall and several electrical boxes. There are tall buildings nearby, and the entire area is surrounded by development.

Now for some pictures of the sites I accepted.

This is the Cape Otway Lighthouse. As you can see, it is in a rural area. The actual location is more in the grassy area; the GPS coordinates were off a bit. There is nothing blocking the wind, and there are no structures close enough to affect the readings.

The next pictures are from the Richmond Post Office. As you can see the site is in an open field well away from any buildings.



The final set of pictures is from Boulia Airport. This station is located well away from any buildings and is certainly not sheltered from the wind. It is close to a runway. I would think any influence would be minimal.

 

These last three locations are as good as it gets. I see no reason to reject them. They are probably not in precisely the same location and there are probably other factors I am not seeing. However, I am also looking at the data. There are no abrupt changes of any kind. All three locations have extremely similar records. Speaking of which, let's get to that.
 
 
This is the average of those three stations with a running 5-year average trend line. All three stations follow this trend with an average maximum deviation of ± 0.32°. Most of that variance occurs from 1895 to 1900; from 1900 onward, deviations from the average trend are minimal.
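For readers following along at home, the averaging and the deviation figures above reduce to a few lines of pandas. The column names are my own stand-ins for the three stations:

```python
import pandas as pd

def station_trend(temps: pd.DataFrame):
    """`temps` has one column per station, e.g. 'cape_otway',
    'richmond', 'boulia', indexed by year."""
    avg = temps.mean(axis=1)              # cross-station average per year
    trend = avg.rolling(window=5).mean()  # running 5-year average
    dev = temps.sub(avg, axis=0)          # each station vs. the average
    return trend, dev.abs().max()         # trend, max |deviation| per station
```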
  
Below is a graph of the maximum and minimum deviations from average for all three stations used in my graph.

 
Do not mistake me here: this is not a reconstructed average of the record of temperature change in Australia. There simply isn't enough data to make a true reconstruction. This is the average record for three sites with minimal human-induced localized changes. Because changes due to local factors have been minimized as much as possible, this provides a better baseline for changes in temperature affecting the entire region than a simple average of all sites.
 
Note I said minimal, not no, human-induced changes. The fact is I can't see and estimate the effects of all human changes to a particular area; I have just eliminated the obvious ones. Make no mistake, humans do create localized changes in temperature. It is widely recognized that developed areas typically have higher average temperatures than surrounding, less developed or undeveloped areas.
 
Every location in Australia will follow this general trend, within the variability caused by localized effects, most of which are probably human-induced. The strength of this assertion is that it holds for all seven of the site records not included in the average. All of them follow this general trend to some degree or for some period of time.
 
Until the next time......

Friday, March 30, 2018

GHCN Part 6: Robe, A Tale of Bad Data

In my last post I discussed the station at Low Head and issues with the quality of the data. I also made some conjectures as to why the data is questionable. In this post I will discuss the site located in Robe, Australia. This time I won't be guessing as to what is wrong with the data.

Robe is yet another site which appears to invalidate the relationship between population growth or population density and temperature rise.

 
As before, I have transformed the data into standard deviations. It is pretty obvious the site measurements show some significant changes in measured temperature over time. A number of such shifts are evident; however, the major shift appears to happen between 1958 and 1971. As before, I will isolate those time periods.
 
 
 
I am not going to bother with a detailed statistical look at the data, for reasons which will soon be obvious. The important information is that in the pre-1958 time frame the temperature remained close to the average, within minor oscillations, and post-1971 there is no appreciable change of any significance. In other words, no increase or decrease.
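The comparison just described amounts to computing the mean and spread of each stable segment. A minimal sketch with stand-in data, since the real Robe record is not reproduced here:

```python
import numpy as np

# Stand-in for the Robe record: a step change between two flat periods.
rng = np.random.default_rng(2)
years = np.arange(1895, 2012)
temps = np.where(years < 1958, 15.0, 16.5) + 0.4 * rng.standard_normal(years.size)

def segment_stats(start, end):
    """Mean and standard deviation of the readings in [start, end]."""
    seg = temps[(years >= start) & (years <= end)]
    return seg.mean(), seg.std(ddof=1)

# A large gap between segment means, with small spread inside each
# segment, is the signature of a discrete step change.
for label, (lo, hi) in {"pre-1958": (1895, 1957), "post-1971": (1971, 2011)}.items():
    m, s = segment_stats(lo, hi)
    print(f"{label}: mean {m:.2f}, sd {s:.2f}")
```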
 
So, what happened? Let's look at the actual site itself.
 


See the Stevenson screen holding the thermometers and other equipment? It is the white box visible behind the upper left corner of the gate.
 
And here is the view from the heavens courtesy of Google Earth. The little pointer misses it just a bit, but that ain't bad.
 
So, let's go over what we are seeing here. First of all, the location changes since 1895 are probably pretty profound. I doubt they had Chinese restaurants and drive-through banking back then. Or cars, for that matter. There probably weren't any commercial AC units blowing hot air around back then either. I am betting development started after the war, but really began taking off in the late 50's, and probably reached a saturation point by the early 70's.
 
So, what do we have here? Simply put, a serious case of site contamination. While the population density is low, the development density is not. The factors of change are roads, parking lots, buildings, cars, AC units, and all the other things development brings. Remember my example from my last post of the black asphalt square? Same thing, but with lots of hot exhaust added.
 
The bottom line is this station has been compromised to an extent where the data is just not usable. This situation certainly exists at three, and probably four, of the four sites in Australia I have looked at in detail.
 
Which means, in all likelihood, I am down to just six sites.
 
What do you guess a good, detailed look at sites in the US is going to show?
 
Until next time.

GHCN Part 5: Australia and Data Problems

In my last blog post I discussed creating accurate models of the existing data. I ended up with five separate models describing what I would have to term different scenarios, meaning the history of temperature changes varied greatly from location to location. Temperatures fell in some places and rose in others. In a typical location the overall change ended up being close to zero, even though there were lots of changes in between.

Obviously temperatures, or more accurately temperature measurements, changed over time for certain reasons. Those reasons presumably fall into categories, which could be broadly defined as local, regional, or global changes.

A global change, by definition, would be a change which affects every location in the world. However, that doesn't mean the change would be discernible in all locations; global changes could be offset or augmented by local or regional changes. This is where the task becomes extremely difficult. It is impossible to know, much less quantify, all the local or regional changes which might distort a global change.

Because of this difficulty I decided to make a test case out of Australia. The advantages are the limited number of stations involved, the relative isolation of the region, and its location, unlike the US, which is bordered by the Pacific, the Atlantic, and the Gulf. So the task of separating the wheat from the chaff, as it were, should be easier. Should being the important word here.

My first pass at this task was to look at population growth as a proxy for changes in land usage. The urban heat island effect, where developed cities are often several degrees warmer than the surrounding areas, is well known, so this is a logical place to start. There does appear to be a correlation between population growth and temperature.


In general, more people does equal higher temperatures. However, there are two stations where this pattern is broken rather severely: Low Head, which is located on the north coast of Tasmania, and Otway Lighthouse. Both locations are somewhat similar; the Low Head station is located at the Low Head Lighthouse. For now I am going to concentrate on Low Head.

Low Head breaks the population / temperature pattern because it has the highest overall temperature increase but without a corresponding population increase. It is listed in the 2016 census as home to 6,765 people with a population density of 331 per square kilometer.

This is the Low Head Temperature record.

Please note the last 10 temperature readings are estimated based upon the average of the previous 10 years. This is one of two instances where I imputed data. Now I will strip that data off and proceed to do some statistical analysis.

This is the same data converted to standard deviations, i.e., normalized to a standard normal scale.

This is a graphical analysis generally used to look at data and see if there is evidence to suggest a shift in the average has occurred. This will become clearer in a moment, but it is obvious the average has changed over time.

 
This is where the advantages of this technique really come into play. This test shows an obvious change or changes in the site average. There is, however, more. It appears there are three distinctly different averages occurring at different times. Meaning, what looks like a gradual rise in temperature from 1959 to 1974 is actually two discrete changes: 0.69° in 1959 and 0.82° in 1974. The inference is the site experienced local changes which affected the temperature measurements.
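Estimating those step sizes amounts to differencing consecutive segment means. A minimal sketch, with the breakpoint years passed in as assumptions rather than detected automatically:

```python
import numpy as np

def step_sizes(years, temps, breaks):
    """Mean shift across each breakpoint: the difference between
    the means of consecutive segments. `breaks` are the assumed
    shift years, e.g. [1959, 1974]."""
    edges = [years.min()] + list(breaks) + [years.max() + 1]
    means = [temps[(years >= lo) & (years < hi)].mean()
             for lo, hi in zip(edges, edges[1:])]
    return np.diff(means)

# e.g. step_sizes(years, temps, [1959, 1974]) should return values
# near the 0.69 and 0.82 shifts described above.
```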

To state the hypothesis formally: the data shows three distinct periods of time, each with a distinctly different average. These mean shifts occurred with no apparent transition period, and within each period the average is stable.

I will test this hypothesis, using the same technique, by splitting the graph into three parts and looking for evidence that the average shifted within any of them. In standard terms, the null hypothesis for each segment is that its average did not shift; my hypothesis fails if that null is rejected.

 
This test is somewhat inconclusive. There is weak evidence to suggest mean shifts did occur, but, with the exception of the first point, nothing falls outside a 95% confidence interval for the mean. The null hypothesis of a stable average is not rejected.
 
 
This test is also inconclusive. There is insufficient evidence of any shift within the segment.


This test is also inconclusive. There is no evidence of any shift within the segment.

So, what does this mean? That is an excellent question. Since the tests failed to reject a stable average within each segment, my hypothesis stands. Not proven, just not disproven. Meaning there is a reasonable probability I am correct and it is exactly as it looks. The inference is something changed in 1958 - 1959 and in 1973 - 1974 which permanently altered the temperature measurements. Beyond those singular changes, there is no other evidence of any change or trend.

At this point I want to go back to my designation of changes as local, regional, or global. Based upon the graphs for the other nine Australian stations, these changes are not reflected in eight of them. The 1959 shift does appear in the Otway Lighthouse record, but the 1974 shift does not; in fact, there appears to be no change at the Otway Lighthouse from 1959 onward. I will expound upon that in a future blog post.

Therefore, the assumption is these shifts reflect localized changes which impacted measurements on site. But what things could effect those kinds of changes?

Low Head Lighthouse
 
The first obvious thing to look at is the means of measuring temperature and how that is done. As mentioned in the first post of this series, thermometers evolved from mercury- and alcohol-filled glass thermometers to digital thermometers which use a version of a thermocouple to measure temperature. That change really didn't get going until the nineties.
 
A Stevenson Screen
 
Weather monitoring stations use what is basically a white louvered box known as a Stevenson screen to hold measuring equipment. It is painted white to reflect sunlight and limit actual heating of the box, is enclosed on the sides by double walls, and holds the instruments at a standardized height above the ground. This design has remained essentially unchanged since the late 1800's.

There has been some speculation that changes in paint formulas in the 1970's to eliminate lead may have resulted in changes to readings inside Stevenson screens. I have no data on that, so I will file it away for future consideration.

According to records online, the Low Head Lighthouse has undergone numerous changes over the years. Buildings have been added, electricity sources changed, equipment moved in and out. Probably the most important thing would be relocation, replacement, or repairs to the Stevenson screen. This is a location exposed to the ocean, so it is hard to imagine the same screen being in operation for over 100 years. I have no doubt it has been painted, repaired, moved, and even replaced at some point. The location where it currently resides is a rocky, grassless portion of the site, with several bushes and large rocks very close to the screen. Other locations nearby are grass-covered, bare sand, or rock; the surface color varies quite a bit. All of these could have a measurable effect on a spot just three meters off the ground.

One of the facts most people don't know is temperature isn't uniform even in a single location. More to the point, temperature as measured can vary quite a bit even within a few yards, even in your own yard. For example, you could have a small area covered in black asphalt surrounded by a grassy area. The asphalt can be quite hot while the grass is cool, and a thermometer held above the asphalt on a calm day will read higher than it would when held over the nearby grass. Have you ever noticed, walking towards a beach, how the sand can go from just warm, to too hot to walk on, to actually being cool? Ever wonder why wet sand tends to be cooler and dry sand hotter? If you guessed water, you are correct. When it comes to sand and dirt, wet is cooler and dry is warmer. The point here is that location, and maintaining a location, is important.

However, these things are just speculation. Ultimately the point of this is something changed suddenly. There is no gradual, incremental change apparent.



This final chart for Low Head was created by adjusting the data to eliminate the changes which occurred about 1959 and 1974. I am showing this more for speculative purposes, with respect to the concept of making adjustments to past data. How advisable is this? Is it really kosher? Even though I am backed by data, I am still making assumptions. I am assuming whatever site changes may have happened did not augment or mitigate other changes which may have occurred. Whatever changes did occur, if they occurred, it is too late to quantify them now except as an educated guess. Knowing exactly how to quantify such a change would require measuring in parallel for a period of time so any measurement change would be precisely defined. A more rigorous approach in situ would be to quantify that change and then take action to eliminate it.
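For what it's worth, the adjustment itself is trivial to express; the hard part is justifying the step sizes. A minimal sketch, treating the 1959 and 1974 step values as the estimates from the segment means rather than as known quantities:

```python
import numpy as np

def remove_steps(years, temps, steps):
    """Subtract each estimated (year, size) step from every reading
    at or after that year. The sizes are assumptions, not
    measurements."""
    years = np.asarray(years)
    adjusted = np.array(temps, dtype=float)
    for year, size in steps:
        adjusted[years >= year] -= size
    return adjusted

# e.g. remove_steps(years, temps, [(1959, 0.69), (1974, 0.82)])
```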

The ramifications of this are pretty high within the context of my study, so I am somewhat undecided. This is the central question of this whole endeavor. The options I have are to discard the record entirely, modify it and include it, or leave it in and hope it is offset somewhere else by shifts in the opposite direction.

I have no answer at this point.

More to follow....