November 13, 2007

Future v Future I

One of the hot new techniques for getting things done is using technology to harness the power of openness and transparency to accomplish things that would have been either prohibitively expensive or flat-out impossible in the past. Two manifestations of this are the Wikimedia constellation of projects and the open source movement in all its forms, including open source peer review. But open source peer review does not get a very friendly reception over at Wikipedia, at least in the English version, at least when the subject under discussion touches on global warming, and I've got the scars to prove it.

One of the key instrument sets underlying our understanding of global temperatures is a worldwide network of surface temperature stations that use fairly low-tech means to measure temperature. The US network is generally considered the best of the best in quality. But, like the US's credit rating, nobody had ever actually gone and checked. So how good is that instrumental temperature record? Anthony Watts decided to find out. Watts is chief meteorologist for KPAY-AM radio and has his own company selling weather forecasts and related tools. He has used these weather stations professionally for many years. He decided to start with the USHCN, a limited network of "high quality" stations, 1221 in all, resolving to survey every one of them to determine whether they had fallen victim to poor maintenance or the urban heat island effect.

He had the funds to register a domain and maintain a website, but not enough to send paid data gatherers across the US to physically survey those 1221 sites. So he took a look at NOAA's site information handbook (PDF), adopted its 1-through-5 siting classification system (page 7), designed survey forms, set up rules of conduct, and put out a call for volunteers. And the volunteers came. Currently 421 sites have been surveyed and published on his website, surfacestations.org. And here is where things get interesting. Of the classification categories in the siting handbook, two (categories 4 and 5) carry error ratings that exceed the total global warming detected in the 20th century. So how many of the 421 surveyed stations (an admittedly urban-biased sample) fall into those two categories? 68% of the sample, which already works out to 23% of the total 1221-station network.
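That last step is just arithmetic, but it's worth making explicit. A quick back-of-the-envelope check (my own sketch, not a calculation taken from surfacestations.org):

    # Back-of-the-envelope check of the percentages above (my numbers, not a
    # script from surfacestations.org).
    total_network = 1221          # USHCN stations in all
    surveyed = 421                # stations surveyed so far
    fraction_poor = 0.68          # share of surveyed stations rated category 4 or 5

    poorly_sited = round(fraction_poor * surveyed)           # about 286 stations
    share_of_network = 100.0 * poorly_sited / total_network  # about 23.4%

    print(poorly_sited, share_of_network)

In other words, even if every station not yet surveyed turned out to be pristine, nearly a quarter of the full network would already sit in the two worst siting categories.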

That's a pretty eye-popping result. At least it seems eye-popping until you try to get it included in the relevant articles on Wikipedia. Then it just gets ugly. The standards for reliable sources are theoretically flexible, merely requiring that a source not be self-published pap with no independent editorial oversight. But the community that camps on those pages won't budge from its own unwritten standard: the work has to be peer reviewed in a "good" publication.

What isn't on the list is an open source scientific effort to gather data. Peer review, it seems, can only be done by professional referees; the great unwashed are welcome to submit to Wikipedia, but if they attempt to engage in open source peer review, that simply won't do. It's one version of the future (Wikipedia) rejecting another (open source peer review).

And Watts' data? I gave up on getting the data in when a Wikipedia admin casually libeled Watts' reputation by saying that a veteran weatherman couldn't be trusted to ensure his own effort was run by the rules he laid out on the project website. It was going to go downhill from there, and fast.

Posted by TMLutas at November 13, 2007 12:07 PM