23 July 2014

Data are ugly

Current news about whether there really is an increase in Antarctic sea ice cover is reinforcing my belief, shared by most people who deal with data, that data are ugly.  This work argues that the trend some have seen in some analyses has more to do with the data processing than with nature.  I encourage you to read the article itself in full; it is freely available.

From the abstract:
Although our analysis does not definitively identify whether this change introduced an error or removed one, the resulting difference in the trends suggests that a substantial error exists in either the current data set or the version that was used prior to the mid-2000s, and numerous studies that have relied on these observations should be reexamined to determine the sensitivity of their results to this change in the data set.
One of the obnoxious things about data sources is that they don't remain the same forever.  This is not so much a problem for my concerns about weather prediction, since the atmosphere forgets what you said you observed in a few days.  But for a climate trend, the entire record is important.  For the data set being discussed, the Bootstrap algorithm (Comiso) applied to passive microwave observations, we immediately run into data obnoxiousness.  Since 1978, there have been several passive microwave instruments -- SMMR; SSMI on F-8, F-11, F-13, F-14, and F-15; AMSR-E; SSMI-S on F-16, F-17, and F-18; and AMSR-2.  They didn't all fly at the same time, and they don't have exactly the same methods of observation.  And none of them exactly observes 'sea ice', which leads to a universal problem that we (people who want to use these instruments to say something about sea ice) all have to deal with.

So, a few considerations of what is behind the scenes of this paper and the earlier Screen (2011).  The latter paper involved some of my work (read deep into the acknowledgements).  This one doesn't, but the fundamental issues are the same ...
One issue is obvious from the list -- there are a lot of different instruments to work from.  But that is somewhat misleading.  Until 1995, no more than one instrument was available for most of the time -- SMMR* from 1978 to 1987, SSMI on F-8 from 1987 to 1991, and on F-11 from 1991 to 1995.  F-13 overlapped F-11 in 1995, and continued for many years, including extensive overlap with F-14, F-15, F-16, F-17, and with AMSR-E.

Hold that thought.  

What these instruments actually observe (or more nearly so) is the amount of energy reaching the satellite at specific frequencies (consider it the intensity of colors, but with 7 or more colors available instead of the 3 we humans work with).  As with any use of satellite data, what the satellite observes has to be translated to something that we care about, in this case sea ice.  Other people use these same instruments for ocean winds, atmospheric water content, and other things.

The method we use for translating from what the satellite sees to what we want to know about is an algorithm.  Many algorithms have been used and proposed over the years.  In the Screen paper, the algorithms most at hand were Team1 and Team2 (from teams of NASA+ investigators).  In the current paper, it is the Bootstrap algorithm (from Comiso, in the same group at NASA).
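
To make the idea concrete, here is a toy sketch, in Python, of the tie-point style of retrieval that such concentration algorithms are built around.  It is a deliberately simplified illustration, not the published Team or Bootstrap code, and the brightness temperatures and tie points are invented for the example:

    # Toy sea ice concentration retrieval: linear interpolation between
    # "tie points" -- typical brightness temperatures (in kelvin) for pure
    # open water and for consolidated ice at a single channel.  All
    # numbers here are invented for illustration.

    TB_WATER = 180.0   # hypothetical open-water tie point (K)
    TB_ICE = 250.0     # hypothetical consolidated-ice tie point (K)

    def ice_concentration(tb):
        """Fraction of the satellite footprint covered by ice, in [0, 1]."""
        c = (tb - TB_WATER) / (TB_ICE - TB_WATER)
        return min(max(c, 0.0), 1.0)

    print(ice_concentration(215.0))  # 0.5 -- halfway between the tie points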

If data were perfect, and series of instruments were perfect, then there'd be no problem from here on.  But data are ugly, and instrument series are not perfect.  There are two different imperfections in these instruments for purposes of analyzing sea ice climate trends.  One is that the instruments don't look at exactly the same frequencies (colors).  This means that when you change from SMMR to SSMI, you have to retune your algorithm.  When you go from SSMI to SSMI-S, you retune it again.  And if you use AMSR (E or 2), you have to do yet another retuning.

Reality is still uglier than that.  There would be only a single retuning from SMMR to SSMI if all of the SSMIs worked _exactly_ like each other.  They're close, but not exact.  To go from F-8 to F-11's SSMI, there's another retuning, and maybe another from F-11 to F-13.  F-14 and F-15 were extremely like F-13, so no retuning there -- for the algorithm(s) I was using.  The SSMI-S instruments are also not identical, so I'm in the process of doing the tuning necessary to make F-16 and F-18 data look like F-17 (my reference).  I don't have the data in hand yet, but it's likely that AMSR-2 is not observing identically to AMSR-E, so there's another retuning.
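
In its simplest form, a retuning looks something like the sketch below: over the overlap period, regress one instrument's brightness temperatures against the other's, then use the fitted line to put the new instrument on the old one's scale.  This is a minimal sketch with made-up numbers, not the actual calibration procedure for any of these instruments:

    import numpy as np

    # Hypothetical matched brightness temperatures (K), observed at the
    # same places and times during an overlap between two instruments.
    tb_old = np.array([160.0, 185.0, 210.0, 235.0, 255.0])
    tb_new = np.array([163.5, 189.0, 213.5, 239.0, 260.0])

    # Fit tb_old ~ a * tb_new + b, so the new instrument's data can be
    # expressed on the old instrument's scale.
    a, b = np.polyfit(tb_new, tb_old, 1)

    def retune(tb):
        """Map a new-instrument brightness temperature onto the old scale."""
        return a * tb + b

    print(retune(200.0))  # a new-instrument reading, on the old scale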

Ok, you can bring back that thought I asked you to hold.  If you were trying to count all the different retunings (plus the bonus ones for F-15, as one of its sensors picked up different noise levels as time passed), the answer is ... big.  You have an option to reduce the number of retunings, which is what most people do: use only one satellite at a time, except for adjusting from one to the next.  The plus is, with only one pair of satellites to consider, you can spend much more effort on that single pair to get the transition just right.

It is in these weeds that the data problems may have crept in -- not from the original instruments, but from the data files that describe how to do the retuning as you go from one instrument to the next.  It is unavoidable that there be different files for this.  It seems, and remains to be confirmed, that in redoing the computations, incorrect retuning files were used for part of the span -- in particular, across that 1991 transition.
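
A sketch of why those files matter so much: the adjustments daisy-chain from instrument to instrument, so wrong coefficients at one transition shift everything on one side of it.  The coefficients below are invented; the point is the step, not the numbers:

    # Each transition file boils down to coefficients mapping one
    # instrument onto the reference scale -- here, a (slope, intercept)
    # pair.  All numbers invented for illustration.
    correct_file = (1.00, 0.0)   # the right coefficients for a transition
    wrong_file = (1.00, 2.0)     # a subtly wrong file: +2 K everywhere

    def apply_transition(coeffs, tb):
        a, b = coeffs
        return a * tb + b

    tb = 220.0  # a brightness temperature from one side of the transition

    # Everything on that side of the transition -- and every concentration
    # and trend computed from it -- shifts by the error in the file.
    print(apply_transition(correct_file, tb))  # 220.0
    print(apply_transition(wrong_file, tb))    # 222.0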

This was a particularly bad transition for Antarctic sea ice purposes in the first place.  Two big problems.  One is that it was very short, which doesn't give you much data to work with in making the retuning.  The second is that it was in December -- near the Antarctic ice minimum.  So you have relatively little ice to use for the retuning, and the ice you do have is relatively warm.  The trend, to the extent it's real, is in the wintertime maximum -- for which we have no 1991 overlap data in the Antarctic.
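
To see why a short, low-ice overlap hurts, compare the uncertainty in a fitted retuning slope when the matched data are few and span a narrow range of brightness temperatures, versus many matchups over a wide range.  Purely illustrative numbers again:

    import numpy as np

    def slope_stderr(x, y):
        """Standard error of the slope from an ordinary least-squares fit."""
        a, b = np.polyfit(x, y, 1)
        resid = y - (a * x + b)
        s2 = np.sum(resid**2) / (len(x) - 2)
        return np.sqrt(s2 / np.sum((x - x.mean())**2))

    rng = np.random.default_rng(0)

    # Long overlap spanning a wide range of conditions (winter ice included).
    x_wide = np.linspace(150.0, 270.0, 200)
    y_wide = 1.02 * x_wide + 1.0 + rng.normal(0.0, 1.5, x_wide.size)

    # Short December overlap: few matchups, little ice, narrow range.
    x_short = np.linspace(170.0, 190.0, 10)
    y_short = 1.02 * x_short + 1.0 + rng.normal(0.0, 1.5, x_short.size)

    print(slope_stderr(x_wide, y_wide))    # small: slope well pinned down
    print(slope_stderr(x_short, y_short))  # much larger: poorly constrained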

Ok, science is a messy business.  We already knew that.  And, as I started with, data are ugly.  Not useless or worthless, but distinctly ugly.  Where does this leave the trend for Antarctic ice maxima?  Well, one news article (sorry, I lost track of which) cites Comiso as believing that most of the trend is real, even once the processing is redone with the best files through the whole period.  The paper I led with suggests that most of the trend is due to the processing oops.

Non-digression: This also illustrates a point I've been trying to make over the years -- publication does not mean perfection.  It is part of a conversation.  Another point in the conversation is that one of the authors, Meier, works down the hall from Comiso.  The proof of how important the error is will come not from reading the earlier Bootstrap papers and this one, but from redoing the data analysis.  And, as said in the abstract, people will have to examine their data usage to see whether and how they may be affected by these issues.

So stay tuned to the literature.


* The instrument names SMMR, SSMI, SSMI-S, and AMSR (2 or E) refer to just how the instrument observes.  SMMR = Scanning Multichannel Microwave Radiometer, SSMI = Special Sensor Microwave/Imager.  F-8 and similar numbers refer to the platform that carries these and other instruments; they are part of the DMSP (= Defense Meteorological Satellite Program) series of satellites.

+ NASA helped pay for my graduate schooling -- this group, in fact.  In the summer of 1987 I worked there, also getting to know Comiso and the Team investigators.

3 comments:

Walt Meier said...

Nice summary Bob. Thanks.

Walt

WhiteBeard said...

Ditto to what Walt said.

Ian Eisenman said...

Bob,

Walt just emailed me the link to this. Thanks for this thoughtful discussion of our paper! I found it to be a really nice summary of the results and context, including an accessible discussion of a lot of the gory details, all in the language of a backyard BBQ conversation that made it fun to read.

Thanks,
Ian