More on Exception Scenarios

A follow-up to this previous post.

John McQuaid over at True/Slant (twice), David Roberts at Grist, and David Brooks in a NYT Op-Ed weigh in on this with some commentary that’s worth reading.

Some assorted observations:

David Roberts says, “Once we know that accidents can be catastrophic and irreversible, it becomes clear that there is no margin of error. We’re operating a brittle system, unable to contain failure and unable to recover from it… the thing is, we’re already operating in those circumstances in a thousand different ways — it’s just that the risks and the damages tend to be distributed and obscured from view.”

That’s actually a bad way to look at this particular scenario; he’s learning something of the wrong lesson.  Systems are brittle when accidents can have catastrophic effects, sure, but the conclusion that “there is no margin of error” is incorrect.  It rests on the faulty premise that “catastrophic and irreversible” also means “the end of the Universe”.  There are lots of catastrophic and irreversible (at least, in non-geologic time) events that can occur that have nothing to do with anything human beings can control: supervolcanoes, gamma-ray bursts, other stuff you’d read about in Phil’s Death From the Skies.  Some of these *are* “endgame”.  Some aren’t.  Even if this oil spill were irreversible, it’s not endgame.

Now, sure, it would be *really* bad, but once you cross over into “some possible accident may have consequences so dire that we can’t possibly engage in this activity” land, you’re rapidly going to wind up huddled in a corner, afraid to do anything.  You can’t do a risk analysis without looking at benefits.  It’s costs (including possible exception scenarios) vs benefits.  Take one of them out, and you’re doing it wrong.
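To make that costs-versus-benefits point a little more concrete, here’s a minimal expected-value sketch in Python.  Every number and the `expected_value` helper are invented purely for illustration; this isn’t drawn from any real analysis of drilling or anything else.

```python
# Toy expected-value comparison: benefits vs. costs, where the cost column
# includes a rare, catastrophic exception scenario.  All numbers are
# invented for illustration only.

def expected_value(outcomes):
    """Sum of probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical annual figures for some risky activity, in dollars.
benefits = expected_value([
    (1.0, 10_000_000),        # routine revenue, effectively certain
])

costs = expected_value([
    (1.0, 4_000_000),         # routine operating costs
    (0.001, 5_000_000_000),   # rare but catastrophic exception scenario
])

print(f"Expected benefit: {benefits:>13,.0f}")
print(f"Expected cost:    {costs:>13,.0f}")
print("Worth doing?", benefits > costs)
```

The point of the toy is only that dropping either side of the comparison makes the answer meaningless; the rare, enormous exception scenario has to sit in the cost column right next to the routine benefits.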

We are not, in fact, operating in those circumstances in a thousand different ways.  You could bring down the entire U.S. power grid for a month, and you wouldn’t destroy civilization.  You could release a supervirus that kills 70% of its victims, worse than any plague we’ve ever had, and you wouldn’t destroy civilization.  You could release a bacterium that destroys rice, or wheat, or corn, and you wouldn’t destroy civilization (this one would come a lot closer, though, and would certainly depopulate some geographical areas).  Even the consequences of AGW aren’t an actual destruction of the planet (the death of a few billion people is horrible, granted, but you’re not going to shatter the earth).

Now, I don’t disagree with his conclusion: “Maybe they’ll stop thinking about how to drill better and start thinking about how to avoid drilling altogether.”  But not just because “some mistakes can’t be undone”.  Because some mistakes can’t be undone, and it’s actually possible to generate energy using something other than hydrocarbons.  We don’t need to burn fossil fuels.  We just need to get off our butts and start committing to the alternatives… and replacing oil as the linchpin of our economy is going to take time, be really, really painful, and irritate everyone who can’t think any further ahead than the price of travel during summer vacation.  Tough.

John McQuaid says, “Or take the oft-litigated (in the courts and the media) case of Hurricane Katrina and the New Orleans levee system. I’ll repeat this here, for clarity: most of the devastating flooding of New Orleans occurred because faulty floodwalls collapsed because of errors in their designs approved by the Army Corps of Engineers – i.e., the U.S. government.”

John’s right in the sense that the flooding of New Orleans wasn’t just due to the hurricane, but also to the faulty design of the levees (side note to John: this isn’t a suitable place for a blanket anti-ACoE/anti-government plug, since the ACoE itself had reported the fault in the design, NOAA had asked them to update their hurricane model, etc.  Many people knew or strongly suspected that the levees would break; upgrades had been in the pipeline for some time and were cut from various budgets).

But, when it comes to exception scenarios, the source of the scenario is only important to the extent one can mitigate it.  If the levees are substandard, and might break, but you have no ability to fix them yourself (due to a lack of local government funds) or can’t rely upon someone else to fix them for you (lack of political will among, well, whoever), you have to plan for the levees breaking.  In fact, it’s a good idea to have a plan for the levees breaking regardless of whether they’re broken by water pressure caused by a hurricane, or an actual act of human agency.  Any time you have a large population center below water level, it’s probably a good idea to focus on the question: “What do we do if the waters come rushing in?”, in addition to trying to prevent it in the first place.  Actually, focusing upon the response to classes of failures is going to be much more likely to produce positive results than trying to stop a long laundry list of individual, specific failures.
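In software terms (and this is just a sketch with invented failure names, not anything from the actual New Orleans planning), planning around classes of failure looks like writing one response procedure per class and mapping the long list of specific triggers onto it:

```python
# Sketch: plan responses around classes of failure instead of trying to
# enumerate and prevent every specific cause.  All names are invented.

# One response plan per failure class...
response_plans = {
    "flooding": ["evacuate low-lying areas", "open shelters", "stage pumps"],
    "power_loss": ["activate generators", "prioritize hospitals"],
}

# ...covers many specific triggers without a laundry list of plans.
failure_class = {
    "levee breach (water pressure)": "flooding",
    "levee breach (human agency)": "flooding",
    "storm surge overtops floodwall": "flooding",
    "substation failure": "power_loss",
}

def respond(trigger):
    """Return the response plan for whatever class a specific failure falls into."""
    return response_plans[failure_class[trigger]]

# Three different flood triggers, one plan.
print(respond("levee breach (human agency)"))
```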

David Brooks says, “These systems, which allow us to live as well as we do, are too complex for any single person to understand. Yet every day, individuals are asked to monitor the health of these networks, weigh the risks of a system failure and take appropriate measures to reduce those risks.  If there is one thing we’ve learned, it is that humans are not great at measuring and responding to risk when placed in situations too complicated to understand.”

Actually, this is both true and misleading.  Human beings, as a class, are really bad at risk analysis.  However, some individual humans (through a combination of training and natural inclination) are really, really good at it.  And in very specific problem domains with which they have a lot of experience, human beings as a class can be either good or bad: certain types of risks will generally be downplayed, others will be exaggerated.

The big problem with events like BP’s oil spill isn’t (precisely) that people are bad at risk analysis, though.  Further down in Brooks’ piece: “On the Deepwater Horizon oil rig, a Transocean official apparently tried to close off a safety debate by reminding everybody the blowout preventer would save them if something went wrong.”

This illustrates a problem I’ve actually alluded to before, in this specific-to-the-IT-industry post.  Someone might actually have a very good idea that something is wrong.  FBI agents suspected terrorists were getting flying lessons prior to 9/11.  Harry Markopolos knew that Madoff was running a Ponzi scheme.  Lots and lots and lots of economists or real estate professionals knew that there was a real estate bubble.  Safety debates took place on the Deepwater Horizon.

The problem is in leadership.  Good risk analysts don’t necessarily acquire leadership positions in our society.  Most of our economic and social-ladder reward systems aren’t based upon “slow and steady wins the race”, but upon “flashy and makes a huge profit right now gets promoted”.  If you deliver a success at a company, you get promoted.  If, three years down the line, your “success” is shown to be an utter failure, you normally don’t get demoted.  It’s too late; you can cover your hind end, claim plausible deniability, say conditions have changed, etc.  Typically, you can’t even prove that the “success” was an utter failure, because nobody quantified at the beginning what constituted success!

Human organizations (be they businesses, non-profits, educational institutions, governmental entities, or collections of random spontaneous volunteers) are very, very bad at keeping incentives aligned with responsibilities.  If one of those Deepwater Horizon employees had had the understood authority to unilaterally shut down an operation that costs $500,000 a day to run, with no repercussions to their career, you can bet that the rig would have been shut down during that safety debate.  As long as individuals have to weigh their own economic interest against their responsibilities, you create incredible tension whenever possible problems crop up.
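A toy way to see the tension (all probabilities and payoffs invented, and the `employee_payoff` function is purely hypothetical): compare the individual’s expected payoff for calling a shutdown versus staying quiet, with and without a career penalty for stopping the rig.

```python
# Toy sketch of the incentive tension: the employee's expected personal
# payoff for calling a shutdown vs. staying quiet.  All probabilities and
# payoffs are invented, arbitrary units.

def employee_payoff(calls_shutdown, career_penalty, blame_if_disaster,
                    p_disaster=0.01):
    """Rough expected personal payoff for the individual employee."""
    if calls_shutdown:
        # The shutdown averts the disaster, but the employee may eat a
        # career penalty for halting a $500,000-a-day operation.
        return -career_penalty
    # Staying quiet: usually nothing happens; occasionally the disaster
    # occurs and some of the blame lands on the individual.
    return -p_disaster * blame_if_disaster

# With repercussions for calling a halt, silence "wins" for the individual.
print(employee_payoff(True,  career_penalty=100, blame_if_disaster=500))  # -100
print(employee_payoff(False, career_penalty=100, blame_if_disaster=500))  # -5.0
# Remove the career penalty and speaking up becomes the rational choice.
print(employee_payoff(True,  career_penalty=0, blame_if_disaster=500))    # 0
print(employee_payoff(False, career_penalty=0, blame_if_disaster=500))    # -5.0
```

With a penalty attached to calling the halt, staying quiet is the individually rational choice even though the organization is far better off with the shutdown; remove the penalty and the calculation flips.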

We don’t reward caution.  We don’t promote risk analysts.

There are advantages to this: when things go right (and by definition, exception scenarios are things that are the exception, not the rule), we progress very, very rapidly.  We make money, we open new markets, we develop new technologies, we have rapid innovation.

We also build economic time bombs, create new monopolies, introduce new byproducts, and crush outstanding methods that might be (when all externalities are accounted for) cheaper, more efficient, less polluting, whatever.

I think a case can be made that American business and government could use more principled hardasses who won’t keel over at the first sign of trouble.  Whatever happened to the American leader?

Posted May 28, 2010 by padraic2112 in rants, Uncategorized

2 responses to “More on Exception Scenarios”

  1. I’ll tell you what happened: business selected for sociopathic behavior because the net result tended to be higher profits. I read a fascinating article on the subject not too long ago which was the result of personality profiles of ~100 CEOs of large companies. Apparently they had a much higher rate of sociopathic traits than would be expected. Here’s another article on the subject from a few years ago:

    http://www.fastcompany.com/magazine/96/open_boss.html

  2. Yeah, we talked about that study in my last class (Organizational Behavior). The professor did her dissertation on the interactions between board members. Fascinating stuff.
