Archive for the ‘security’ Category

Audit   1 comment

There are typically four major processes that people talk about when they’re talking about security – identification, authentication, authorization, and audit.  It’s pretty typical for people to talk about the first two as if they were one thing (identification and authentication), but really, they’re not (that’s a topic for another day).

  • Identification: Who are you? – “Are you anybody?”
  • Authentication: Are you allowed to act on behalf of a principal? –  “Are you, the identified person, allowed to play here?”, or “Do we let just anybody play here?”
  • Authorization: What are you allowed to do? – “What sorts of ‘play’ do we allow ‘here’?”
  • Audit: Hey, what have we been letting people do here? –  “Are the above three working?”

I’ll talk about these more in depth someday, but today I want to focus just on audit.

There are lots of different kinds of audit.  You have a computer security audit, whereby some nerd like me analyzes log files and system executables and whatnot and tries to determine if the system itself has only been used for its intended purpose by the people who are supposed to be using it.  You have fiscal audits, where guys in green eye shades analyze accounting logs and purchase orders and credit card receipts and justification forms and try to determine if the money has been used only for its intended purposes by the people who are supposed to be spending it (or collecting it, as the case may be).  You have safety audits, where guys in orange vests with clipboards analyze workspaces and insurance reports and work processes and try to determine if people are doing things that are statistically likely to produce a high number of injuries or deaths.  You have sales audits, where guys in suits look over sales records and market analysis reports and phone logs and try to determine if the guys with good teeth who talk to the customers are selling about what they ought to be expected to sell given the corporate understanding of the market and the customers.

In practice, all these things are wildly different, obviously.  Conceptually, from the standpoint of systems analysis, they’re all the same.  You’re taking some process, and you’re examining the inputs and outputs of that process, and if the end result doesn’t jibe with what you expect, you have a problem.  Either the inputs are off or measured improperly, the process is bad or is measured improperly, the outputs are off or are measured improperly, or your expectation (the way you audit) is just outright wrong.
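Conceptually, that loop is simple enough to sketch in a few lines of code.  This is a toy illustration only (every name and number below is made up), but it’s the same shape whether the “process” is a web server, a purchasing department, or a sales team:

```python
# A toy sketch of the audit loop described above: take the recorded inputs and
# outputs of some process, compare them against what you expected, and flag
# anything that doesn't jibe.  A mismatch means the inputs, the process, the
# outputs, or the expectation itself (the way you audit) is wrong.

def audit(records, expected_output):
    """records is a list of (inputs, actual_output) pairs for the process under audit."""
    findings = []
    for inputs, actual in records:
        expected = expected_output(inputs)
        if actual != expected:
            findings.append({"inputs": inputs, "expected": expected, "actual": actual})
    return findings

# e.g. a toy fiscal audit: a purchase order authorizes an amount,
# the ledger shows what was actually paid out.
records = [
    (("PO-1001", 100.00), 100.00),   # paid exactly what was authorized
    (("PO-1002", 250.00), 325.00),   # paid more than authorized -- flag it
]
print(audit(records, expected_output=lambda po: po[1]))
```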

Now, in the real world, almost everybody *hates* audit.  There are lots of reasons for this, of course (in many cases, the Big Irk is that the auditor only looks at the first three possibilities, and it’s difficult or impossible to get the auditing organization to see that the actual problem is that they’re doing it wrong).

At the same time, in the real world, everybody *loves* audit, as long as what’s being audited is something somebody else is doing.  Politicians talk about oversight (which is a nice code word for audit), and the public eats it up.

Oversight!  That’s gotta be good, right?

Welfare scofflaws, corrupt politicians, police abusing authority, people abusing government grants, yeah!  Catch those rich bastards putting their money in the Swiss banks and tax the hell out of them!  Crawl up BP’s hind end with a flashlight and find out who’s responsible for this big oil spill!  We want accountability!  Measure teacher performance!  Who’s paying for my congressperson’s reelection campaign!?  Who’s driving, have they passed the test?  Who’s in the country, are they a citizen?  Who’s using welfare that shouldn’t be?  What government programs aren’t producing results?  What the hell are we spending all this money for in the military budget?  Oh, and hey, are our fraud reporting mechanisms actually working at all?  We need to audit our ability to audit!  Rargh!  Righteous indignation!

But when somebody knocks on your door and says you’re being audited, suddenly you might not be such a fan of oversight.

Regulations!  Compliance!  Paperwork!  I gotta stand in line at the County Records Office or the DMV!  I have to write a stupid five page report justifying buying a plane ticket on Lufthansa instead of United, what a waste of my frickin’ time!  How the hell am I supposed to be getting any work done with all this bureaucratic red tape getting in my way!  Government is so inefficient!  We can’t measure teachers by performance, it doesn’t work!

Okay, take a breath.

Here’s the reality.  You can audit a process for success, or failure, or both.  Which one you *ought* to use in a particular scenario actually depends upon a number of factors.

  • What’s our false positive rate? – how often will our audit flag somebody as being bad, when they’re not?
  • What’s our false negative rate? – how often will our audit pass somebody as being good, when they’re actually bad?
  • How much does it cost for us to audit this thing, whatever it is?
  • What are the externalities involved in the audit?  Are we auditing the right process to begin with?
  • What happens if we don’t audit anything at all? – does it even matter?
  • If we don’t audit, will the negative consequences actually cost more than the audit?
  • If we do audit, can we do anything with the results, or are we already limited to doing one thing anyway (e.g., “Too big to fail”)?
  • Quis custodiet ipsos custodes? – who watches the watchmen themselves?
  • If we make it harder for people to do bad stuff, does this actually prevent people from doing the bad stuff, or does it just make it more profitable for those who get away with it?
  • If that last is the case, are we actually going to have less bad stuff (in toto), or just fewer incidents of bad stuff with a lot more bad in the stuff?
  • If we make it harder for people to do one kind of bad stuff, are they going to stop doing bad stuff altogether?  Or are they going to move to a different kind of bad stuff that’s worse?

These are all questions you need to ask yourself when people start talking about “accountability” and “oversight”.
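If you want to put rough numbers on a few of those questions, the arithmetic isn’t complicated.  Here’s a back-of-the-envelope sketch, with every rate and dollar figure a placeholder assumption, comparing the expected cost of running an audit (the audit itself, plus the fallout from false positives, plus the bad stuff the audit misses) against the expected cost of not auditing at all:

```python
# Back-of-the-envelope only: every number below is an illustrative assumption.

def expected_cost_with_audit(n, base_rate, fp_rate, fn_rate,
                             audit_cost, fp_cost, incident_cost):
    bad = n * base_rate                # people actually doing bad stuff
    good = n - bad
    missed = bad * fn_rate             # bad actors the audit waves through
    falsely_flagged = good * fp_rate   # good actors the audit flags anyway
    return (n * audit_cost
            + falsely_flagged * fp_cost
            + missed * incident_cost)

def expected_cost_without_audit(n, base_rate, incident_cost):
    return n * base_rate * incident_cost

kwargs = dict(n=10_000, base_rate=0.01, incident_cost=20_000)
print(expected_cost_with_audit(fp_rate=0.05, fn_rate=0.20,
                               audit_cost=50, fp_cost=500, **kwargs))
print(expected_cost_without_audit(**kwargs))
```

Under those made-up numbers the audit pays for itself; nudge the audit cost or the false-positive fallout upward and it stops doing so, which is exactly why the list above matters.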

Otherwise, what you’re paying for isn’t better or more secure processes.  What you’re paying for is a false sense that you’re getting what you’re paying for, which is double-dipped stupidity.

Posted June 8, 2010 by padraic2112 in management, politics, security

Singed Your Hair On That One, WH.   Leave a comment

JoshBW, Michael Coates, and Pinvoke deconstruct a research project by WhiteHat Security.  From Josh, a pithy summary I agree with almost wholeheartedly:

All of that said, for any given organization the languages that are probably the most secure are the ones the developers are most comfortable writing code with. Forcing a PHP developer to write mvc.net code because you feel it is more secure is a mistake and will buy you nothing but a longer development cycle. (exception – if your coders still swear by CGI you really are better off forcing them into something invented in the past decade even if they will have a learning curve. You probably shouldn’t have let them be so resistant to change to begin with).

My exception to his exception: someone who can program secure C code for a CGI-based web site is probably a valuable developer.  The problem is, (s)he is going to be dang hard to replace.  The value in forcing your development crew to stay current with technology – at least, not five “cool frameworks” in the past – is that eventually you’re going to have to hand that code over to somebody else.

And the likelihood that their replacement can write secure C code is very low.  It’s really easy to shoot yourself in the foot with C.

Posted May 26, 2010 by padraic2112 in security, tech

You Gotta Be Kidding Me   Leave a comment

Next up in the “Say what?” category:

EV SSL certificates provided by Network Solutions are reported as not working on some smartphones, because the Network Solutions root isn’t one of the root CAs installed on said phones.

Yeah, uh, Network Solutions?  You seriously need to fire somebody.
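For the curious, here’s roughly what “not working” looks like from the client side.  Certificate validation only succeeds if the chain ends in a root the client already trusts, so if the phone doesn’t ship the issuing root, the handshake fails no matter how much you paid for the EV cert.  A minimal sketch (example.com stands in for the real site):

```python
import socket, ssl

# Uses whatever roots the client platform shipped with; if the issuing root
# isn't in that bundle, the handshake below raises a certificate
# verification error before you ever see the page.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["issuer"])
```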

Posted March 23, 2010 by padraic2112 in security, tech

Facebook Connect: Yeah, It’s Like Dat, Yo.   Leave a comment

Dan Wineman over at Venomous Porridge offers this tidbit:

This is called Facebook Connect, and it’s a very bad thing for security and user education. Teaching people to check that the URL starts with facebook.com before logging in is useless, because Facebook wants its users to log into anything that vaguely looks Facebookish, and it’s training them to do so. How is anyone expected to distinguish Facebook from a phishing site masquerading as Facebook, when Facebook Connect looks and acts like a phishing site by design?


That’s indeedy a very good question, Dan. Undoubtedly people aren’t expected to distinguish Facebook from a phishing site, because Mr. Zuckerberg doesn’t think about security any more than he thinks about privacy.

Posted February 16, 2010 by padraic2112 in security, tech, web sites

Shorter Answer, Ivan   1 comment

I just read Ivan Ristić’s slides for his talk on “How to Render SSL Useless”, found via Luke O’Connor’s blog.  Thanks, Luke!

(Spoiler – the shorter answer: if you use SSL/TLS, you’re probably not using it for the right reasons and you’re probably not getting the level of security you think you’re getting, because you’re probably doing it wrong.)

Ivan’s points boil down to this: SSL/TLS, by itself, is secure.  It’s all these implementation details that render it insecure in practice.  Ivan then offers eleven areas where SSL is “broken” in practice.

Here’s my issue with the slides: some of them don’t describe problems with SSL at all, and the rest are built into the design of SSL itself.

Let’s go through the slides by point.

Ivan’s first contention is that self-signed certificates are bad.  Ivan argues that they’re insecure, they teach users to ignore warnings, and that it’s cheaper to get a “real” certificate than to use a self-signed one anyway.

Well, a self-signed certificate is certainly differently secure than one signed by a root CA, but whether it’s less secure or insecure is a completely different question (trusted authorities and exploitation scenarios deserve their own post, so I’ll leave it at this for now — edited to add — thank you, Ed Felten, now I don’t need to write this up).  The second contention is just silly: users don’t need to be trained to ignore warnings, they do it already.  The last is at best incomplete.  It requires a certain level of skill to deploy a service that relies upon a self-signed certificate, so saying that “you have to maintain it” should be counted as part of the cost is mostly pointless.  You have to maintain any certificate, whether you sign it yourself or pay Verisign to sign it for you.  If I have to pay Bob the Apache Wizard to maintain my site and Bob knows how to generate a self-signed cert, it’s going to be cheaper for me to have Bob sign the cert than it will be for me to pay Verisign to do it, because Bob is going to get his salary (or his packaged SLA payment) either way.
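To make “differently secure” concrete, here’s a sketch of how a client can pin a specific self-signed certificate instead of deferring to the root CA bundle (the file name and hostname are placeholders).  If Bob controls both ends, this buys session encryption and a known peer without paying Verisign a dime; what it doesn’t buy is a third party vouching for the identity.

```python
import socket, ssl

# Trust exactly one certificate: the self-signed cert Bob generated.
# The cert has to name the host we connect to, or hostname checking fails.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="bobs_selfsigned_cert.pem")  # placeholder path

with socket.create_connection(("internal.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="internal.example.com") as tls:
        print("verified against the pinned self-signed cert:", tls.version())
```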

Ivan’s second contention is that private certificate authorities are bad.  The logic follows mostly along the lines of the previous point… it’s better for you to pay someone else to do this for you than it is to do it yourself.  Now, he has something of a point here.  Building a CA isn’t the same as self-signing a certificate; it takes a higher degree of knowledge to build the thing properly.  I would imagine that there are a number of CAs out there that are unnecessary and could easily be covered under one of the existing root CAs.  However, there are any number of completely legitimate reasons for running your own CA, and in any event I don’t think one-off CAs represent a big threat to the overall infosec domain.

Oh, and against both previous points: for-profit root CAs have issued insecure certs before, so why should we trust them?

Points 3, 4, 8, 9, and 10 are all basically the same point: if your site needs to be encrypted some of the time in transmission, it really needs to be encrypted all of the time, period.  This is a good point (it really should be a single point with examples, though), and I’m more or less with Ivan on this one, although I understand why it isn’t always done in practice.
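The fix for “encrypted some of the time” is mechanically pretty boring.  As one sketch of the idea (WSGI here purely for illustration; the same trick exists in every web stack, and the names are whatever yours are): bounce any plaintext request over to HTTPS, and never let the session cookie travel over cleartext in the first place.

```python
# A sketch of the "all of the time" rule: redirect plain HTTP to HTTPS, and
# mark every cookie Secure (plus HttpOnly for good measure) so the session
# cookie never rides over cleartext.

def force_https(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "")
            path = environ.get("PATH_INFO", "")
            query = environ.get("QUERY_STRING", "")
            location = "https://" + host + path + ("?" + query if query else "")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def secure_start_response(status, headers, exc_info=None):
            fixed = []
            for name, value in headers:
                if name.lower() == "set-cookie" and "secure" not in value.lower():
                    value += "; Secure; HttpOnly"
                fixed.append((name, value))
            return start_response(status, fixed, exc_info)

        return app(environ, secure_start_response)
    return middleware
```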

Point 7 is that SSL sometimes isn’t used at all when it should be.  I’m not sure why this belongs on the list; that’s not a problem with SSL implementation, per se.  And I personally haven’t seen an unencrypted site that handles sensitive data in a long while, so I don’t know how germane it is anymore.

Point 11, and to a lesser extent 5, aren’t so much problems with SSL as they are problems with the couplings between SSL & DNS, pushed through the lens of user expectations.  DNS has had its own problems.

Finally, point #6 (using an EV certificate, as opposed to a normal SSL certificate) illustrates the problem I have with computer security engineering professionals.

Now, I haven’t seen the talk and I haven’t read any of Ivan’s blogging (I should, and I’m adding it to my blogroll now), so I can’t say that this is fair, but just reading the slides, here’s how I interpret the underlying context of this talk:

“SSL is totally secure, if you are using it in the totally most secure way and no other way, because we designed it to be totally secure if you use it in the totally most secure way.  Oh, but we also made it so that you could use it in all of these other ways, but DON’T DO THAT because you ruin our perfect design by using it in the non-perfect way!”

There’s a reason why I switched my research focus from infosec to disaster/crisis management, and this is it.  Information systems security designers have a tendency to draw a box in their head, and design a system that is secure inside that box.  If you use the tools they provide within the boundaries of that box, you’re golden, and if you don’t, you’re probably screwed.  But that’s not on them because they can only design out to the edges of the box.

The problems with this approach are that most systems don’t fit inside that box, the box itself often sits on top of a completely insecure table, and the box itself often has lots of little holes punched into it for various reasons.

Ignore those reasons!  Don’t use that functionality!  It’s bad!  But it’s necessary, that’s why we put it in there!  But you’re probably not doing it right, and it’s not necessary for you, so just pay someone else to do it!

If setting up your own CA is bad, then why is it good to have multiple root CAs?  Shouldn’t there be just one?  (no)

If EV certificates are the best, why do CAs offer regular certs?  (because)

If using incomplete certs is a problem, then why is it possible to generate an incomplete cert in the first place?  (because not all certs are certifying the same thing)

Heck, if self-signed certs are bad, then why do you have the ability to generate them in the first place?  (because in most practical cases, you’re looking for session security, not authoritative identification).

Posted February 16, 2010 by padraic2112 in information science, security, software, tech, web sites

Outsourcing: The Legal Implications   Leave a comment

Overheard on die.net’s jabber server: “It basically says that since Gmail et al are in “possession” of your e-mail, a warrant only needs to be served on them, and it can include a provision that they aren’t allowed to notify the owner.”  (‘it’ being this decision, blogged about here and here)

From the decision:

Thus subscribers are, or should be, aware that their personal information and the contents of their online communications are accessible to the ISP and its employees and can be shared with the government under the appropriate circumstances. Much of the reluctance to apply traditional notions of third party disclosure to the e-mail context seems to stem from a fundamental misunderstanding of the lack of privacy we all have in our e-mails. Some people seem to think that they are as private as letters, phone calls, or journal entries. The blunt fact is, they are not.

District Judge Mosman seems to be making an interesting conflation that I find… odd… for a judge to make: “because things technically are a certain way, people should know that they are that way, but… regardless of whether they know it or not, we should treat them that way.”  Why should people know that they are that way?  Do we expect people to understand the full implications of Internet RFCs and how they interact?  Why should we assume that?  Even if we could assume that, there are lots of other technical implications that can lead to abuse of legal authority, and those are curtailed on different rationales.  Why should we treat email differently?

There are a number of inconvenient blunt facts, Judge Mosman.  It’s a blunt fact that your cell phone tracks your whereabouts.  It’s a blunt fact that many people’s cars track their whereabouts.  It’s a blunt fact that I can stand up a laser and a few other bits of technology that are commercially available in the United States, point them at my neighbor’s curtains, and get a nice full-blown, real audio copy of their conversations (this one’s pricey, but it’s here just for reference, so you know I’m not making that last statement up).

What prevents misuse of these technologies is the mechanism of legal warrant: if someone wants access to this sort of data, they’re required to convince a judge that this information is pertinent to a legal action, get said warrant written, and execute it.  Now, *physical* searches are different from wiretaps; this has long been recognized by courts.  One can argue that monitoring someone’s email is monitoring communications, and thus should be treated more like a wiretap than a physical search.  When you go to court to get permission to wiretap a target, all you have to do is execute that court order on the telecommunications company; you don’t have to inform the target.  You can also get a warrant for someone’s telephone records, again without informing the target.  In that light, it doesn’t seem entirely different from the issue of email, right?

But email *is* different from telephone communications, because you’re not just getting someone’s *current* communications, you’re getting someone’s data store.  This isn’t like someone’s phone records, which are just a call log; this is like having retroactive access to everyone’s phone conversations, backwards in time.

There’s a quantitative and a qualitative difference here.  Certainly, the question of whether or not the government should be able to execute a warrant on someone’s email store is an outstanding legal question, but “well, it can be done and people should know that it can be done, so they can’t expect privacy” seems like a really weak-sauce position here…

… and coincidentally, here’s a case where the ruling was found to be precisely the reverse…

In an interesting side note, a federal judge ruled yesterday that jurors in the Bear Stearns case (in which Cioffi and Tannin are accused of making their portfolios sound much healthier than they were) will not be permitted to hear about one email, in which Tannin wrote, “I became very worried very quickly. Credit is only deteriorating. I was worried that this would all end badly and that I would have to look for work.”

Judge Frederic Block ruled that the government’s search warrant filed with Google to obtain access to the e-mail was unconstitutionally broad and “did not comply with the Warrants Clause of the Fourth Amendment.”

What’s the moral of the story?  Well, it’s certainly the case that outsourcing providers have a bunch of your data.  It’s also certainly the case that it’s not currently stated in law what the legal obligations are of such an outsourcing provider vis-à-vis protecting your data from nefariousness, let alone government subpoenas or warrants (nor is there a suitable body of case law).  Heck, many outsourcing providers aren’t even in this country, so there’s no guarantee, even *if* legislation is drawn up (or case law reaches some preponderance of decisions), that you’re going to have U.S. legal protections over your data.

It just means that you ought to keep this sort of thing in mind, when you’re deciding whether or not to outsource…

Posted November 2, 2009 by padraic2112 in law, outsourcing, security, tech

Feature Added   1 comment

Germane to my last post, check this out (from Wired, via Bruce’s blog):

Researchers at the University of Utah have found a way to see through walls to detect movement inside a building.

The surveillance technique is called variance-based radio tomographic imaging and works by visualizing variations in radio waves as they travel to nodes in a wireless network. A person moving inside a building will cause the waves to vary in that location, the researchers found, allowing an observer to map their position.

Add a nice little HUD and you could have your own personal radar, tracking all movement inside your evil genius lair.

Posted October 13, 2009 by padraic2112 in information science, security

That’s What I Call a Hefty Sum   7 comments

Karl Denninger links to a story that has some interesting details:

Italy’s financial police (Guardia italiana di Finanza) has seized US bonds worth US 134.5 billion from two Japanese nationals at Chiasso (40 km from Milan) on the border between Italy and Switzerland. They include 249 US Federal Reserve bonds worth US$ 500 million each, plus ten Kennedy bonds and other US government securities worth a billion dollar each. Italian authorities have not yet determined whether they are real or fake, but if they are real the attempt to take them into Switzerland would be the largest financial smuggling operation in history; if they are fake, the matter would be even more mind-boggling because the quality of the counterfeit work is such that the fake bonds are undistinguishable from the real ones.

What caught the policemen’s attention were the billion dollar securities. Such a large denomination is not available in regular financial and banking markets. Only states handle such amounts of money.

The question now is who could or would counterfeit or smuggle these non-negotiable bonds.

Karl’s take?

Those sound like Bearer Bonds – at least the Kennedy ones do.  We no longer issue those (nor does pretty much anyone else) for obvious reasons – they’re essentially money and can be had in VERY large size, making them great vehicles for various illegal enterprises.

But folks: This is $134.5 billion dollars worth.

If they’re real, what government (the only entity that would have such a cache) is trying to unload them?

If they’re fake, this is arguably the biggest counterfeiting operation ever, by a factor of many times.  I’ve seen news about various counterfeiting operations over the years that have made me chuckle, but this one, if that’s what it is, is absolutely jaw-dropping.

I can think of two other explanations that neither Karl nor the AsiaNews reporter considered.  One, this might be the biggest heist in recorded history.  I can’t imagine that a government that lost $134.5 billion dollars would immediately start trumpeting that fact to the world… in fact, they’d be dying to cover that up, which might explain why U.S. media hasn’t picked up the story and run crazy with it (somebody made some phone calls).  Wow, you could make one heck of a caper movie out of that, inside job or not.  Two, this might be a case of money that belonged to a government that no longer existed; like, say, Saddam Hussein’s Iraq.  This seems less likely; while lots of the Iraqi treasury that was under the control of Saddam’s Baath party went missing, I haven’t seen any reports topping the “tens of billions” mark.  They already *made* this one into a movie, so no artistic avenue there.

Personally, I’m not entirely certain this isn’t a hoax.  If it isn’t, though, I’m *really* hoping it’s a caper.  Can you imagine the details that would come to light in the next few years?

Posted June 12, 2009 by padraic2112 in news, security

A Note About Choice, An Observation About Fear, and Parental Decisions   5 comments

It’s a great scene, but you can skip to 2:45 for the purposes of this post.

The scene is from 2007’s No Country For Old Men.  For anyone not into contemporary American cinema, Javier Bardem (the guy with the haircut) won the Best Supporting Actor Oscar for his role as Anton Chigurh, “an emotionless, compassionless killing machine. His inability to comprehend human life is matched only by his ability to take it, as he does with ruthless abandon throughout the running of No Country For Old Men… As for the victims who don’t have some sort of reason to be dead behind them, he flips a coin to decide their fate.”

Here’s the question.  What do you think he’s going to do if the gas station owner doesn’t call that coin?

You see, the character of Anton considers himself some sort of instrument of fate, almost a force of nature.  Once he decides to pull out that coin, there are two possible results: you call the toss correctly and you live, or you call it incorrectly and you die.  You could refuse to call the coin toss.  Nobody does this in the movie, but my guess is that Anton would probably start torturing you until you make a call; since he doesn’t believe that it’s fair for him to call the toss, he’d force you to do it.

I’m writing this to illustrate a point.  In some ways, your environment is like Anton.  Your environment will occasionally force you to make a choice, and many people refuse to acknowledge this.

When you wake up in the morning, sitting next to your bed on the nightstand is a metaphorical revolver with 1 “bullet” and somewhere around 100,000 empty chambers.  When you decide to get out of bed and walk to the bathroom, you spin the cylinder in a game of Russian Roulette and pull the trigger.  If you’ve picked the chamber with the bullet in it, somewhere between your bed and the bathroom you trip and fall and give yourself a fatal head injury.  When you get to the bathroom, there’s another metaphorical revolver with 1 bullet and about 100,000 empty chambers, and you get to play again.  If you fail, you drown in the bathtub (note: for the morbidly statistically inclined, the cylinder has about half that many chambers if you’re a woman). Of course, some days you don’t even get to choose to get out of bed.  The alarm clock itself is a revolver, and if you have a weak heart, there’s a small but statistically present chance that your clock will give you a heart attack.  Bang.  You’re dead.
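If you want to see how that metaphor adds up over a lifetime, the arithmetic is a one-liner.  Treating each day as an independent pull of a 1-in-100,000 trigger (the post’s number, not a real actuarial table):

```python
p_fatal = 1 / 100_000          # one "bullet", roughly 100,000 chambers

def cumulative_risk(days):
    # chance that at least one trigger pull came up bad over that many days
    return 1 - (1 - p_fatal) ** days

for years in (1, 10, 50):
    print(f"{years:>2} years: {cumulative_risk(years * 365):.2%}")
```

Tiny odds per trigger pull, not-so-tiny odds over a lifetime of trigger pulls.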

It’s a game we all play, every day.  You can’t opt out of making these choices; there is no way not to play the game.  Oh, and sooner or later you’re going to lose.  Happy Friday!

I’ve been reading a lot of the “vaccination” posts out on the Internet over the last few days and an observation has bubbled up into my forebrain.  Many people don’t understand that there really is no difference between making some of these choices yourself, and allowing them to be made for you by the environment.  I’ve seen posts on more than one comment thread that boil down to, “I’m just not comfortable taking the risk of injecting my child, I think the risk of the disease is less.”  Many, many of these people have offered this observation *after* being shown that no, actually, the quantifiable risk of vaccinations is not only less than the risk of disease, the two aren’t even on the same scale.  Some people have commented that this is plainly irrational (I’ve been thinking it myself, but I’m trying to stay smooth in these hot-button “debates”).

The problem is that for these people, the thought that they might harm their child is so mind-numbingly horrifying that they are including that in their analysis, but only on one side.  The line of thinking, I’m imagining, goes something like this… “If I choose to give my child a shot, and something happens, it will be my fault, because I decided to okay the shot.”  Conversely, however, if they choose *not* to give their child a shot, and the child gets the disease and suffers, it’s something else’s fault… random chance, the will of God, some grand conspiracy, etc… “I can pretend that my child got the measles and died because fate decided that my child was going to get the measles, instead of acknowledging the fact that my child got the measles because I refused to vaccinate them.”  Consciously or subconsciously, they’re punting.  They’re assigning *more* pejorative value to their action than they are to the pejorative value of their inaction.  They’re pretending like they can ignore old Anton.

While this might give you the opportunity to retain your sanity if (God forbid) something should happen to your child, let’s be honest about what’s going on here.  You’re afraid.  You realize that the world is unsafe, and you’re assigned the responsibility of making decisions for this little person that you love more than your own life, and the cold reality that they are mortal scares the beejesus out of you.  It scares you so much that you’re allowing a sense of time to overcome your ability to think clearly: “If I vaccinate my child and something happens, it’s my fault because it happened right after the decision.  If I don’t vaccinate my child and they get a disease three years from now, that’s three years from now and it’s so far away and I just don’t want to think about it that’s so morbid and oh God I’ll just stick my head over here in the corner and decide later.”

Bruce talks about this on his site.  The part on Prospect Theory details exactly what I’m talking about here:

The authors of this study explained this difference by developing something called “prospect theory.” Unlike utility theory, prospect theory recognizes that people have subjective values for gains and losses. In fact, humans have evolved a pair of heuristics that they apply in these sorts of trade-offs. The first is that a sure gain is better than a chance at a greater gain. (“A bird in the hand is better than two in the bush.”) And the second is that a sure loss is worse than a chance at a greater loss. Of course, these are not rigid rules–given a choice between a sure $100 and a 50% chance at $1,000,000, only a fool would take the $100–but all things being equal, they do affect how we make trade-offs.

You can see how this applies to the thought process I’m talking about above.  For someone who discounts their own “choice” as being relevant to the risk, a vaccination is a sure gain (reduced susceptibility to disease) and therefore better than a chance at a greater gain (passing on the vaccination and avoiding any possible risk *and* getting lucky and not getting sick anyway).  For someone who includes their own “choice” as being relevant to the risk, a vaccination is a sure loss (a chance to directly inflict harm upon my child), which is worse than a chance at a greater loss (passing on the vaccination and having my child get sick and die).
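If you strip the “whose fault is it” framing back out, the utility-theory version of the trade-off is just two multiplications.  The rates below are deliberately made-up placeholders (don’t quote them); the only point is that probability times consequence gets computed the same way on both branches, whether or not your own choice sits in the causal chain:

```python
def expected_harm(p_event, harm_if_event):
    # expected harm = chance the bad thing happens * how bad it is if it does
    return p_event * harm_if_event

# Placeholder numbers, for illustration only.
harm_if_vaccinating = expected_harm(p_event=1e-6, harm_if_event=1.0)   # rare serious reaction
harm_if_skipping    = expected_harm(p_event=1e-3, harm_if_event=0.05)  # catch disease, then maybe serious harm

print(harm_if_vaccinating, harm_if_skipping, harm_if_skipping / harm_if_vaccinating)
```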

Is it still irrational?  Well, from the view of utility theory, absolutely.  But humans aren’t necessarily wired that way, and consequences are measured not just in death and horrible side effects, but in the emotional damage those consequences do to the participants involved.  Five years ago I’d be calling people who refused to vaccinate their child criminally negligent and horrible people.  Now I just see them as humans.  Scared humans making bad decisions, but humans nonetheless.

Recognize your fear, and overcome it, everybody.  Here’s one time where “do it for the children” actually applies.  It’s not about you and your fear, it’s about doing what’s best for them.  Unfortunately, that’s not always so clear-cut, and sometimes you’ll do the right thing and your child will suffer for it.  Kids get trapped in burning cars and die because of their car seats.  Far, far, far more children are saved because of them.  Even if vaccinations were as dangerous as some people (erroneously) claim they are, they’re still better than disease.

Posted April 24, 2009 by padraic2112 in parenting, security

Anti-Terrorist Stupidity, Volume 3,541   Leave a comment

LA Times reports that NASA is keeping shuttle launch times “secret” to prevent terrorism:

Beginning with the April 4 launch of space shuttle Atlantis, NASA will keep its launch times secret until 24 hours in advance to guard against terrorist attack, a spokesman for the National Aeronautics and Space Administration said. “NASA is choosing to be extra careful,” Kennedy Space Center spokesman Bruce Buckingham said from Cape Canaveral, Fla. Under the new policy, the space agency will give only four-hour launch periods until about 24 hours before liftoff.

Hrm, the shuttle sits on the launch pad for a month, attached to the SRBs and the ginormous fuel tank, according to the NASA site:

About 18 hours before launch, the Rotating Service Structure — a mobile portion of the launch pad that protects the vehicle and provides worker access while the Space Shuttle is at the pad — is rolled away. About nine hours before launch, the external tank is filled with liquid hydrogen and liquid oxygen. The loading process takes almost three hours.

You can’t exactly fuel the thing in secret, if it takes 3 hours to fill ‘er up.

What sort of convoluted crazy terrorist plot did someone imagine deep in the recesses of their brain that led them to believe that this qualifies as a security measure?

Posted April 8, 2009 by padraic2112 in noise, security