Archive for November 2009

Swing, and a Miss   1 comment

I read Karl Denninger's blog.  He has fascinating observations on the economic woes of the nation.  Even when I don't agree with him, I usually find what he has to say interesting.

Not so much this time around.  Karl wrote two posts about this story in the news:

Hundreds of private e-mail messages and documents hacked from a computer server at a British university are causing a stir among global warming skeptics, who say they show that climate scientists conspired to overstate the case for a human influence on climate change.

The e-mail messages, attributed to prominent American and British climate researchers, include discussions of scientific data and whether it should be released, exchanges about how best to combat the arguments of skeptics, and casual comments — in some cases derisive — about specific people known for their skeptical views. Drafts of scientific papers and a photo collage that portrays climate skeptics on an ice floe were also among the hacked data, some of which dates back 13 years.

Karl offers a couple of observations here and here:

Science is the process by which we take a question and:

  • Form a hypothesis.
  • Design an experiment to test that hypothesis.
  • Perform the experiment and collect the data thus generated.
  • Analyze the resulting data.
  • Form a conclusion from the data thus collected.

That’s “The Scientific Method.”

Not quite, Mr. D.  That’s one method of exploring questions scientifically, yes.  But many questions can’t be explored by experimentation, and it’s certainly not the only method of scientific inquiry.

Uncertainties in measurement are additive – that is, if I measure two rulers and each is reported as “12 inches +/- 0.1 inch” then the total length of the two rulers is 24 inches +/- 0.2 inch – because it is possible that both errors were on the same side.

Yes and no.  In your particular case, yes.  However, when you're talking about extremely complex behaviors, your measurements are not necessarily combined this way.  For example, phenomenon A may have an error in measurement, and phenomenon B may have an error in measurement, but it may actually be the case that those errors are not independent.  I could be measuring PSI and temperature in a pressure cooker.  My thermometer may have an error margin, and my pressure indicator may have an error margin.  But if they are both off in different directions, a composite measurement would reveal this.
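To make that concrete, here's a quick sketch in Python.  The numbers are hypothetical and this is my own illustration, not anything from Karl's post or from any climate data set: for independent errors the uncertainties combine in quadrature rather than linearly, and the general formula carries a covariance term, so correlated errors can either reproduce the worst case or partially cancel.

    import math

    # Hypothetical numbers: two rulers, each quoted as 12 inches +/- 0.1 inch.
    length_a, sigma_a = 12.0, 0.1
    length_b, sigma_b = 12.0, 0.1

    total = length_a + length_b

    # Worst case (Karl's framing): both errors land on the same side,
    # so the uncertainties simply add.
    worst_case = sigma_a + sigma_b                    # 0.2

    # Independent random errors: uncertainties add in quadrature.
    independent = math.sqrt(sigma_a**2 + sigma_b**2)  # ~0.14

    # General case: a covariance term captures correlated errors.
    # rho = +1 reproduces the worst case; rho = -1 models two instruments
    # miscalibrated in opposite directions, where the errors cancel,
    # which is exactly the sort of thing a composite measurement can expose.
    def combined_sigma(s1, s2, rho):
        return math.sqrt(s1**2 + s2**2 + 2 * rho * s1 * s2)

    print(f"total = {total} inches")
    print(f"worst case       +/- {worst_case:.3f}")
    print(f"independent      +/- {independent:.3f}")
    print(f"fully correlated +/- {combined_sigma(sigma_a, sigma_b, 1.0):.3f}")
    print(f"anti-correlated  +/- {combined_sigma(sigma_a, sigma_b, -1.0):.3f}")

The specific numbers don't matter; the point is that whether errors stack or cancel depends on how they're correlated, which has to be modeled rather than assumed.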

To the extent that method is corrupted on purpose one does not have science.  To the extent that it is corrupted out of necessity (e.g. missing data that one requires, and thus one “guesses”) this is accepted provided one discloses one’s guess and how it was derived – that is, provided there is no material concealment.

In the “Big Science World” the check and balance on concealment – and outright fraud – is peer review and post-publication duplication.  To be able to duplicate the results claimed, however, the algorithms, code, methods and data sets must be made publicly available so that anyone who desires to do so can validate the claimed experimental results.

In the spirit of science, I will note that I fully expect others to try to validate (or dispute) my observations below.  As such you can find the original archive at Wikileaks should you decide you would like to do so, and I encourage all other independent investigation.

The first point is correct, but needs clarification.  With regard to a particular scientific field, you're talking about an incredibly large body of literature.  Every scientific field is in a dynamic state, but generally speaking there is a collection of assumptions that each field currently regards as acceptable.  For example, in physics, it is currently regarded as acceptable to assume that gravity is a constant.  Since we don't know the mechanism by which gravity operates, this *is*, in fact, an assumption.  No physicist, however, would take the time to put any sort of disclaimer about this assumption in their publication; it's assumed that gravity is a constant, because we have no reason to expect otherwise.

The second point is correct… and I personally as yet have had no trouble whatsoever acquiring access to anybody’s data when I’ve asked them for it.  Data sets are typically not included in journal publishing, due to space limitations (yes, in this day and age, this is stupid and data sets should at least be accessible in electronic copies of journals… but this is a reflection of a hundred years of traditional science publishing, not conspiracy).  Most climatologists have their data linked from their own websites, or blogs, or the data is a public data set and already in the public domain in any event.

In short, I see nothing in that data set that implies that the messages have been tampered with, but there is also no reasonable way to prove their provenance as the necessary information to do so (routing and message-id information) is missing.  A well-placed FOI request should resolve that problem, if anyone is particularly interested in doing so.

The data sets included in the archive are also interesting.  Again, a reasonably-detailed look through them shows nothing implying that they have been tampered with, and they include data and computer code (source program code) from a wide variety of time periods.  It appears authentic.

On the contrary, Karl, this is bad framing, as you yourself note earlier in your own post.  This is a demonstrably, pitifully incomplete selection of data gathered over a period of 13 years.  Emails are missing.  Off-email conversations are not referenced, and so on.  If I gave someone access to every document I've created and my email store since 1996, I have no doubt whatsoever that you'd be able to construct a 60MB zip file containing enough emails, blog posts, and snippets of papers I've written to dump at the Department of Homeland Security and net me a visit from some counterterrorism division.  I write about security and information systems; this would be a trivial exercise.

I have long argued that the major problem with so-called "published papers" on global warming is that it is rare to find measurement uncertainties reported in the alleged findings, and competing studies have cited wildly different values for the same thing (e.g. atmospheric CO2 emitted by man per year.)

I believe we can now deduce why those uncertainties are missing – they are not being carried through the computational process as is required for any scientific calculation and this omission is in fact intentional.

That’s not a major problem (and usually isn’t the case).

Karl, you really must not read a lot of scientific papers.  Limitation sections are a requirement.  Have you really read a large chunk of the climate science of the last 3 decades?

Measurement uncertainties usually are cited, of course… and when they aren't, it's usually because those measurement uncertainties are known among the particular science community.  An aside for the non-scientists among the readership: most academic journal articles are not written for the layman to understand (side note: this is one reason why science reporting is usually so bad… science reporters are generally *not* well versed enough to accurately report on what they're reading).  They are written with the assumption that the persons reading the articles are fairly well versed in the field in question.  If you think about it, this is not only normal, it's absolutely necessary.  Academic journals have to assume that the person reading the article is familiar with (a) basic undergraduate science, (b) basic undergraduate mathematics, (c) advanced graduate-level science in the particular field, and (d) whatever advanced mathematics are necessary tools for examining that particular field.  You're talking about six or so years of academic study assumed as known.

In some sense, yes, the omission is intentional.  This is not indicative of nefariousness.

Without hard proof of whatever answer is propounded to that question we as the people of this planet must insist on a full stop for all purported “climate amelioration” efforts, as there is every possibility that the entirety of this so-called science in fact proves exactly nothing, except that the so-called “researchers” have added much CO2 to the atmosphere producing the electricity required to power their computers!

Extraordinary claims require extraordinary proof, and from the released set of data that proof is, quite simply, not present and accounted for.

Yes, Karl, extraordinary claims do require extraordinary proof.  So let us reframe the issue at hand.

Climate science journals have been publishing papers supporting global climate change for over 30 years now.  I have not seen a single academic paper published since 1980 that refutes the global warming hypothesis (compare this to the earlier "global cooling" hypothesis that some climate change denialists point at, which was not well supported by any body of literature and never regarded as a mainstream theory).  Minor disclaimer: this is my cursory reading of the literature, and I'm not a climatologist myself… but I have looked at the general body of literature and I've tried to follow the "skeptic" crowd's trail of evidence and found nothing, so it is (at the very least) a decent investigation by someone who doesn't qualify as a rank amateur or layman.

Global warming theories are grounded in basic thermodynamics (physics) and the properties of greenhouse gases (chemistry).  Impacts have been seen in species diversity (biology), ice cores and rock strata (geology), and marine lifecycles (marine biology and oceanography).  No credible falsification evidence has been presented in any journal, be it any of the major climate journals, the more general Science or Nature, or any other peer-reviewed publication in any of the other fields listed here.

There are two possibilities here (edited to add: there are actually three*).  Either global climate change (and its proposed human root causes) is very probably true, or there is a massive, cross-national, cross-cultural, pervasive conspiracy… in not one but possibly dozens of scientific fields… to cover up the truth.

Knowing what I do of conspiracies in general, the second possibility would be laughable to consider just on the sheer numbers involved.  However, even supposing that you could in fact keep such a conspiracy under wraps, I don't understand what the actual motivation would be for such a conspiracy; why would a geologist or an oceanographer hold back from publishing results refuting the global warming hypothesis?  They're not climatologists; they're not going to lose their shot at tenure for challenging another field (in fact, given what I know of university politics, they'd be more likely to get tenure by challenging a basic hypothesis of another field).

This clearly counts as a very, very extraordinary claim.

And what evidence exists to bolster this claim?  An illegally acquired, demonstrably incomplete, cherry-picked selection of contextually limited data from a single climate center representing a subpopulation of climate scientists, obtained by an individual or group whose motives are unknown, containing… ah, what looks like a very few emails that could be interpreted as malicious by someone who already assumes that global warming is a completely fabricated theory, or could also be interpreted as the diplomatically incautious phrasing of perfectly normal behavior among reasonable scientists.

I don’t think that qualifies as extraordinary evidence.

* (edited to add): the third possibility is complex.  It is possible that the global climate change hypothesis is based upon a causal theory that has been reinforced entirely by correlative evidence in other fields, but that the principal causes are independent and not in fact tied to the global climate theory.  This is possible, but as time goes on it becomes increasingly unlikely.  People who support AGW tend to claim that this has already reached the point where the lack of falsification results in other fields shows that those other fields support the causal hypothesis.  People who challenge AGW tend to claim that each one of these instances of lack of falsification has an independent explanation.  It is possible that the second group is correct, but I as yet have not seen a substantive analysis supporting this view.  That doesn't mean it's not possible to construct such an analysis.

Posted November 22, 2009 by padraic2112 in science

Meet The Press Mondays   3 comments

An Idea from Ann:

In this great new world called BlogWorld, we are all the press. We are the reporters, the editors, the advertising agents, and the publishers. Well, I guess Google is the publisher. Anyway.

Much like in the real world, I’ve created a character for myself here.

In an effort to have some sort of zero tolerance policy about something or other, I hereby create Meet the Press Mondays. (I fully expect no one will play.) On Meet the Press Mondays, bloggers should share a little piece of themselves. Whether you haven’t shared it in the blog world, or you haven’t shared it in the “real” world, or you haven’t shared it in any world outside of your crazy little mixed up confusion mind… there is a place for it here, on Mondays.

I like this idea.  You’ll have to take the jump to her place to see her tidbit.  Here’s mine (although the family won’t be surprised by it):

I love musicals.  West Side Story, Singin’ In The Rain, Easter Parade, White Christmas, even bad ones like Xanadu.

You tell ’em, Cosmo…

Posted November 16, 2009 by padraic2112 in memes

This is Bad.   2 comments

Ben Laurie (yes, *that* Ben Laurie) reports that TLS is flat busted.

For the last 6 weeks or so, a bunch of us have been working on a really serious issue in SSL. In short, a man-in-the-middle can use SSL renegotiation to inject an arbitrary prefix into any SSL session, undetected by either end.

To make matters even worse, through a piece of (in retrospect) incredibly bad design, HTTP servers will, under some circumstances, replay that arbitrary prefix in a new authentication context. For example, this is what happens if you configure Apache to require client certificates for one directory but not another. Once it emerges that your request is for a protected directory, a renegotiation will occur to obtain the appropriate client certificate, and then the original request (i.e. the stuff from the bad guy) gets replayed as if it had been authenticated by the client certificate. But it hasn’t.

More here.

There are three general attacks against HTTPS discussed here, each with slightly different characteristics, all of which yield the same result: the attacker is able to execute an HTTP transaction of his choice, authenticated by a legitimate user (the victim of the MITM attack).
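To illustrate the class of problem at the HTTP layer, here's a rough sketch of the splice.  This is my own illustration, not code from Ben Laurie's post; the host name, paths, and header names are made up, and it ignores all of the actual TLS mechanics.

    # Rough illustration of the renegotiation prefix-injection splice as the
    # HTTP server ends up seeing it.  Not attack code; it only shows why an
    # attacker-chosen prefix plus a victim's legitimate request is dangerous.
    # Host, paths, and header names are hypothetical.

    attacker_prefix = (
        b"GET /protected/transfer?to=attacker HTTP/1.1\r\n"
        b"Host: bank.example\r\n"
        b"X-Swallow: "     # deliberately left without a terminating CRLF
    )

    victim_request = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: bank.example\r\n"
        b"Cookie: session=victim-session-token\r\n"
        b"\r\n"
    )

    # After renegotiation, the server treats the two as one continuous request:
    spliced = attacker_prefix + victim_request
    print(spliced.decode())

    # The victim's own request line becomes the value of the bogus "X-Swallow"
    # header, while the victim's credential (the Cookie here, or the client
    # certificate in the Apache example above) ends up authenticating the
    # attacker-chosen request for /protected/transfer.

Again, Ben's post and the linked analysis have the real details on the three variants.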

Posted November 5, 2009 by padraic2112 in Uncategorized

Outsourcing: The Legal Implications   Leave a comment

Overheard on die.net’s jabber server: “It basically says that since Gmail et al are in “possession” of your e-mail, a warrant only needs to be served on them, and it can include a provision that they aren’t allowed to notify the owner.”  (‘it’ being this decision, blogged about here and here)

From the decision:

Thus subscribers are, or should be, aware that their personal information and the contents of their online communications are accessible to the ISP and its employees and can be shared with the government under the appropriate circumstances. Much of the reluctance to apply traditional notions of third party disclosure to the e-mail context seems to stem from a fundamental misunderstanding of the lack of privacy we all have in our e-mails. Some people seem to think that they are as private as letters, phone calls, or journal entries. The blunt fact is, they are not.

District Judge Mosman seems to be making an interesting conflation that I find… odd… for a judge to make: "because things technically are a certain way, people should know that they are that way, but… regardless of whether they know it or not, we should treat them that way."  Why should people know that they are that way?  We expect people to understand the full implications of Internet RFCs and how they interact?  Why should we assume that?  Even if we could assume that, there are lots of other technical implications that can lead to abuse of legal authority, and those are curtailed on different rationales; why should we treat email differently?

There are a number of inconvenient blunt facts, Judge Mosman.  It's a blunt fact that your cell phone tracks your whereabouts.  It's a blunt fact that many people's cars track their whereabouts.  It's a blunt fact that I can stand up a laser and a few other bits of technology that are commercially available in the United States, point them at my neighbor's curtains, and get a full-blown audio copy of their conversations (this one's pricey, but it's linked here just for reference, so you know I'm not making that last statement up).

What prevents misuse of these technologies is the mechanism of the legal warrant: if someone wants access to this sort of data, they're required to convince a judge that the information is pertinent to a legal action, get said warrant written, and execute it.  Now, *physical* searches are different from wiretaps; this has long been recognized by courts.  One can argue that monitoring someone's email is monitoring communications, and thus should be treated more like a wiretap than a physical search.  When you go to court to get permission to wiretap a target, all you have to do is execute that court order on the telecommunications company; you don't have to inform the target.  You can also get a warrant for someone's telephone records without informing the target.  In that light, it doesn't seem entirely different from the issue of email, right?

But email *is* different from telephone communications, because you're not just getting someone's *current* communications, you're getting someone's data store.  This isn't like someone's phone records, which are just a call log; this is like having retroactive access to everyone's phone conversations, backwards in time.

There's a qualitative and quantifiable difference here.  Certainly, the question of whether or not the government should be able to execute a warrant on someone's email store is an outstanding legal question, but "well, it can be done and people should know that it can be done, so they can't expect privacy" seems like a really weak sauce position here…

… and coincidentally, here’s a case where the ruling was found to be precisely the reverse…

In an interesting side note, a federal judge ruled yesterday that jurors in the Bear Stearns case (in which Cioffi and Tannin are accused of making their portfolios sound much healthier than they were) will not be permitted to hear about one email, in which Tannin wrote, “I became very worried very quickly. Credit is only deteriorating. I was worried that this would all end badly and that I would have to look for work.”

Judge Frederic Block ruled that the government’s search warrant filed with Google to obtain access to the e-mail was unconstitutionally broad and “did not comply with the Warrants Clause of the Fourth Amendment.”

What's the moral of the story?  Well, it's certainly the case that outsourcing providers have a bunch of your data.  It's also certainly the case that it's not currently stated in law what the legal obligations of such an outsourcing provider are vis-à-vis protecting your data from nefariousness, let alone government subpoenas or warrants (nor is there a suitable body of case law).  Heck, many outsourcing providers aren't even in this country, so there's no guarantee, even *if* legislation is drawn up (or case law reaches some preponderance of decisions), that you're going to have U.S. legal protections over your data.

It just means that you ought to keep this sort of thing in mind, when you’re deciding whether or not to outsource…

Posted November 2, 2009 by padraic2112 in law, outsourcing, security, tech