Archive for the ‘security’ Category
There are typically four major processes that people talk about when they’re talking about security – identification, authentication, authorization, and audit. It’s pretty typical for people to talk about the first two as if they were one thing (identification and authentication), but really, they’re not (that’s a topic for another day).
- Identification: Who are you? – “Are you anybody?”
- Authentication: Are you allowed to act on behalf of a principal? – “Are you, the identified person, allowed to play here?”, or “Do we let just anybody play here?”
- Authorization: What are you allowed to do? – “What sorts of ‘play’ do we allow ‘here’?”
- Audit: Hey, what have we been letting people do here? – “Are the above three working?”
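To make the split concrete, here’s a toy sketch of the four processes in code. Everything in it is hypothetical (made-up names, plaintext passwords, purely for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

USERS = {"alice": "s3cret"}        # identification + authentication data (toy!)
PERMISSIONS = {"alice": {"read"}}  # authorization data

def request(username, password, action):
    # Identification: who is the caller claiming to be?
    known = username in USERS
    # Authentication: can they prove they're allowed to act as that principal?
    authn = known and USERS[username] == password
    # Authorization: is this principal allowed to do this particular thing?
    authz = authn and action in PERMISSIONS.get(username, set())
    # Audit: record every attempt and its outcome, success or failure.
    audit_log.info("user=%s action=%s known=%s authn=%s allowed=%s",
                   username, action, known, authn, authz)
    return authz

request("alice", "s3cret", "read")    # allowed
request("alice", "s3cret", "write")   # authenticated, but not authorized
request("mallory", "guess", "read")   # not even identified
```

Note that the audit step runs on every attempt, successful or not; that’s the part everybody skimps on.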
I’ll talk about these more in depth someday, but today I want to focus just on audit.
There are lots of different kinds of audit. You have a computer security audit, whereby some nerd like me analyzes log files and system executables and whatnot and tries to determine if the system itself has only been used for its intended purpose by the people who are supposed to be using it. You have fiscal audits, where guys in green eye shades analyze accounting logs and purchase orders and credit card receipts and justification forms and try to determine if the money has been used only for its intended purposes by the people who are supposed to be spending it (or collecting it, as the case may be). You have safety audits, where guys in orange vests with clipboards analyze workspaces and insurance reports and work processes and try to determine if people are doing things that are statistically likely to produce a high number of injuries or deaths. You have sales audits, where guys in suits look over sales records and market analysis reports and phone logs and try to determine if the guys with good teeth who talk to the customers are selling about what they ought to be expected to sell given the corporate understanding of the market and the customers.
In practice, all these things are wildly different, obviously. Conceptually, from the standpoint of systems analysis, they’re all the same. You’re taking some process, and you’re examining the inputs and outputs of that process, and if the end result doesn’t jibe with what you expect, you have a problem. Either the inputs are off or measured improperly, the process is bad or is measured improperly, the outputs are off or are measured improperly, or your expectation (the way you audit) is just outright wrong.
Now, in the real world, almost everybody *hates* audit. There are lots of reasons for this, of course (in many cases, the Big Irk is that the auditor only looks at the first three possibilities, and it’s difficult or impossible to get the auditing organization to see that the actual problem is that they’re doing it wrong).
At the same time, in the real world, everybody *loves* audit, as long as what’s being audited is something somebody else is doing. Politicians talk about oversight (which is a nice code word for audit), and the public eats it up.
Oversight! That’s gotta be good, right?
Welfare scofflaws, corrupt politicians, police abusing authority, people abusing government grants, yeah! Catch those rich bastards putting their money in the Swiss banks and tax the hell out of them! Crawl up BP’s hind end with a flashlight and find out who’s responsible for this big oil spill! We want accountability! Measure teacher performance! Who’s paying for my congressperson’s reelection campaign!? Who’s driving, have they passed the test? Who’s in the country, are they a citizen? Who’s using welfare that shouldn’t be? What government programs aren’t producing results? What the hell are we spending all this money for in the military budget? Oh, and hey, are our fraud reporting mechanisms actually working at all? We need to audit our ability to audit! Rargh! Righteous indignation!
But when somebody knocks on your door and says you’re being audited, suddenly you might not be such a fan of oversight.
Regulations! Compliance! Paperwork! I gotta stand in line at the County Records Office or the DMV! I have to write a stupid five page report justifying buying a plane ticket on Lufthansa instead of United, what a waste of my frickin’ time! How the hell am I supposed to be getting any work done with all this bureaucratic red tape getting in my way! Government is so inefficient! We can’t measure teachers by performance, it doesn’t work!
Okay, take a breath.
Here’s the reality. You can audit a process for success, or failure, or both. Which one you *ought* to use in a particular scenario depends on a number of factors.
- What’s our false positive rate? – how often will our audit flag somebody as being bad, when they’re not?
- What’s our false negative rate? – how often will our audit flag somebody as being good, when they’re bad?
- How much does it cost for us to audit this thing, whatever it is?
- What are the externalities involved in the audit? Are we auditing the right process to begin with?
- What happens if we don’t audit anything at all? – does it even matter?
- If we don’t audit, will the negative consequences actually cost more than the audit?
- If we do audit, can we do anything with the results, or are we already limited to doing one thing anyway (e.g., “Too big to fail”)?
- Quis custodiet ipsos custodes?
- If we make it harder for people to do bad stuff, does this actually prevent people from doing the bad stuff, or does it just make it more profitable for those who get away with it?
- If that last is the case, are we actually going to have less bad stuff (in toto), or just fewer incidents of bad stuff with a lot more bad in the stuff?
- If we make it harder for people to do one kind of bad stuff, are they going to stop doing bad stuff altogether? Or are they going to move to a different kind of bad stuff that’s worse?
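Most of these questions are empirical, and the first few reduce to arithmetic. Here’s a back-of-the-envelope sketch, with entirely made-up rates and costs, of the “does the audit cost more than the bad stuff” comparison:

```python
def audit_worth_it(base_rate, fp_rate, fn_rate, audit_cost,
                   cost_per_miss, cost_per_false_alarm):
    """Crude expected-cost comparison: audit everything vs. audit nothing.

    base_rate: fraction of cases that are actually bad
    fp_rate:   P(flagged | good)  -- false positive rate
    fn_rate:   P(cleared | bad)   -- false negative rate
    """
    # Without any audit, every bad case slips through.
    cost_no_audit = base_rate * cost_per_miss

    # With an audit, we still miss fn_rate of the bad cases, we pay to
    # run the audit, and we pay for every innocent party we flag.
    cost_audit = (audit_cost
                  + base_rate * fn_rate * cost_per_miss
                  + (1 - base_rate) * fp_rate * cost_per_false_alarm)
    return cost_audit < cost_no_audit

# 2% of cases are bad, a miss costs $10k, a false alarm costs $500,
# the audit costs $50 per case and catches 90% of bad cases.
print(audit_worth_it(0.02, 0.05, 0.10, 50, 10_000, 500))  # True
```

Crank the audit cost up to $300 per case and the same function says no, don’t bother; the point isn’t the numbers, it’s that “accountability” has a break-even point you can actually estimate.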
These are all questions you need to ask yourself when people start talking about “accountability” and “oversight”.
Otherwise, what you’re paying for isn’t better or more secure processes. What you’re paying for is a false sense that you’re getting what you’re paying for, which is double-dipped stupidity.
JoshBW, Michael Coates, and Pinvoke deconstruct a research project by WhiteHat Security. From Josh, a pithy summary I agree with almost wholeheartedly:
All of that said, for any given organization the languages that are probably the most secure are the ones the developers are most comfortable writing code with. Forcing a PHP developer to write mvc.net code because you feel it is more secure is a mistake and will buy you nothing but a longer development cycle. (exception – if your coders still swear by CGI you really are better off forcing them into something invented in the past decade even if they will have a learning curve. You probably shouldn’t have let them be so resistant to change to begin with).
My exception to his exception: someone who can program secure C code for a CGI-based web site is probably a valuable developer. The problem is, (s)he is going to be dang hard to replace. The value in forcing your development crew to stay current with technology – at least, not five “cool frameworks” in the past – is that eventually you’re going to have to hand that code over to somebody else.
And the likelihood that their replacement can write secure C code is very low. It’s really easy to shoot yourself in the foot with C.
Next up in the “Say what?” category:
EV SSL certificates provided by Network Solutions are reported as not working on some smartphones, because the Network Solutions root isn’t among the trusted root CAs shipped on said phones.
Yeah, uh, Network Solutions? You seriously need to fire somebody.
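What’s failing here is the ordinary chain walk every client performs: follow the certificate chain upward until you hit a root the device already trusts. A toy model (dict “certs” and hypothetical names; real validation also checks signatures, expiry, and key-usage constraints):

```python
def chain_trusted(chain, root_store):
    """Toy chain walk: each cert must be issued by the next cert in the
    chain, and the chain must terminate at an issuer present in the
    device's root store. (Real validation does far more than this.)"""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["issuer"] != issuer_cert["subject"]:
            return False
    return chain[-1]["issuer"] in root_store

site = {"subject": "www.example.com", "issuer": "NetSol EV Intermediate"}
inter = {"subject": "NetSol EV Intermediate", "issuer": "NetSol EV Root"}

desktop_roots = {"NetSol EV Root", "Some Other Root"}
phone_roots = {"Some Other Root"}  # the EV root never shipped on the handset

print(chain_trusted([site, inter], desktop_roots))  # True
print(chain_trusted([site, inter], phone_roots))    # False: warning on the phone
```

Same site, same chain, different trust store: the cert “works” on the desktop and throws a warning on the phone. Which is exactly why you ship a root to the device vendors *before* you sell certs against it.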
Dan Wineman over at Venomous Porridge offers this tidbit:
This is called Facebook Connect, and it’s a very bad thing for security and user education. Teaching people to check that the URL starts with facebook.com before logging in is useless, because Facebook wants its users to log into anything that vaguely looks Facebookish, and it’s training them to do so. How is anyone expected to distinguish Facebook from a phishing site masquerading as Facebook, when Facebook Connect looks and acts like a phishing site by design?
That’s indeedy a very good question, Dan. Undoubtedly people aren’t expected to distinguish Facebook from a phishing site, because Mr. Zuckerberg doesn’t think about security any more than he thinks about privacy.
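For what it’s worth, even the “check the URL” advice only works as an exact host comparison; here’s a sketch of what that check has to be (and why a naive substring test fails):

```python
from urllib.parse import urlparse

def is_really_facebook(url):
    """Exact-host check: the hostname must be facebook.com or a
    subdomain of it -- not merely *contain* the string."""
    host = urlparse(url).hostname or ""
    return host == "facebook.com" or host.endswith(".facebook.com")

print(is_really_facebook("https://www.facebook.com/login"))      # True
print(is_really_facebook("https://facebook.com.evil.example/"))  # False
# The naive substring test passes the phishing URL right through:
print("facebook.com" in "https://facebook.com.evil.example/")    # True
```

And of course, Facebook Connect makes even the correct check useless, because the legitimate login box now shows up on hosts that fail it.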
I just read Ivan Ristić’s slides for his talk on “How to Render SSL Useless“, found via Luke O’Conner’s blog. Thanks, Luke!
(Spoiler, the short answer: if you use SSL/TLS, you’re probably not using it for the right reasons, and you’re probably not getting the level of security you think you’re getting, because you’re probably doing it wrong.)
Ivan’s points boil down to this: SSL/TLS, by itself, is secure. It’s all these implementation details that render it insecure in practice. Ivan then offers eleven areas where SSL is “broken” in practice.
Here’s my issue with the slides: some of the points don’t describe problems with SSL at all, and the rest are built into the design of SSL itself.
Let’s go through the slides by point.
Ivan’s first contention is that self-signed certificates are bad. Ivan argues that they’re insecure, they teach users to ignore warnings, and that it’s cheaper to get a “real” certificate than to use a self-signed one anyway.
Well, a self-signed certificate is certainly differently secure than one signed by a root CA, but whether it’s less secure, or insecure, is a completely different question (trusted authorities and exploitation scenarios deserve their own post, so I’ll leave it at this for now — edited to add — thank you, Ed Felten, now I don’t need to write this up). The second contention is just silly: users don’t need to be trained to ignore warnings, they do it already. The last is at best incomplete. It requires a certain level of skill to deploy a service that relies upon a self-signed certificate, so the objection that you have to count maintenance as part of the cost is mostly pointless: you have to maintain any certificate, whether you sign it yourself or pay Verisign to sign it for you. If I have to pay Bob the Apache Wizard to maintain my site and Bob knows how to generate a self-signed cert, it’s going to be cheaper to have Bob sign the cert than to pay Verisign to do it, because Bob is going to get his salary (or his packaged SLA payment) either way.
Ivan’s second contention is that private certificate authorities are bad. The logic follows mostly along the lines of the previous point: it’s better to pay someone else to do this than to do it yourself. Now, he has something of a point here. Building a CA isn’t the same as self-signing a certificate; it takes a higher degree of knowledge to build the thing properly, and I’d imagine a number of the CAs out there are unnecessary and could easily be covered under one of the existing root CAs. However, there are any number of completely legitimate reasons for running your own CA, and in any event I don’t think one-off CAs represent a big threat to the overall infosec domain.
Oh, and against both previous points: for-profit root CAs have issued insecure certs before; why should we trust them?
Points 3, 4, 8, 9, and 10 are all basically the same point: if your site needs to be encrypted some of the time in transmission, it really needs to be encrypted all of the time, period. This is a good point (really should be a single point with examples, though), and I’m more or less with Ivan on this one, although I understand why it isn’t always the case.
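The underlying rule is mechanical enough to sketch: any plain-HTTP resource pulled into an HTTPS page is a hole. A toy version of the check a scanner would run (hypothetical URLs):

```python
from urllib.parse import urlparse

def insecure_references(page_url, resource_urls):
    """Flag any plain-HTTP resource pulled into an HTTPS page (scripts,
    frames, form targets): each one punches a hole in the 'encrypted
    all of the time' requirement."""
    if urlparse(page_url).scheme != "https":
        return list(resource_urls)  # the page itself is unencrypted
    return [u for u in resource_urls if urlparse(u).scheme == "http"]

print(insecure_references(
    "https://shop.example/checkout",
    ["https://cdn.example/app.js", "http://cdn.example/tracker.js"]))
# ['http://cdn.example/tracker.js']
```

One unencrypted script on an encrypted page can rewrite the whole page, which is why “encrypted some of the time” collapses to “not really encrypted.”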
Point 7 is that SSL sometimes isn’t used at all when it should be. I’m not sure why this belongs on the list; that’s not a problem with SSL implementation, per se. And I personally haven’t seen an unencrypted site that handles sensitive data in a long while, so I don’t know how germane it is anymore.
Point 11, and to a lesser extent 5, aren’t so much problems with SSL as they are problems with the couplings between SSL & DNS, pushed through the lens of user expectations. DNS has had its own problems.
Finally, point #6 (using an EV certificate, as opposed to a normal SSL certificate) illustrates the problem I have with computer security engineering professionals.
Now, I haven’t seen the talk and I haven’t read any of Ivan’s blogging (I should, and I’m adding it to my blogroll now), so I can’t say that this is fair, but just reading the slides, here’s how I interpret the underlying context of this talk:
“SSL is totally secure, if you are using it in the totally most secure way and no other way, because we designed it to be totally secure if you use it in the totally most secure way. Oh, but we also made it so that you could use it in all of these other ways, but DON’T DO THAT because you ruin our perfect design by using it in the non-perfect way!”
There’s a reason why I switched my research focus from infosec to disaster/crisis management, and this is it. Information systems security designers have a tendency to draw a box in their head, and design a system that is secure inside that box. If you use the tools they provide within the boundaries of that box, you’re golden, and if you don’t, you’re probably screwed. But that’s not on them because they can only design out to the edges of the box.
The problems with this approach are that most systems don’t fit inside that box, the box often sits on top of a completely insecure table, and the box itself has lots of little holes punched into it for various reasons.
Ignore those reasons! Don’t use that functionality! It’s bad! But it’s necessary, that’s why we put it in there! But you’re probably not doing it right, and it’s not necessary for you, so just pay someone else to do it!
If setting up your own CA is bad, then why is it good to have multiple root CAs? Shouldn’t there be just one? (no)
If EV certificates are the best, why do CAs offer regular certs? (because)
If using incomplete certs is a problem, then why is it possible to generate an incomplete cert in the first place? (because not all certs are certifying the same thing)
Heck, if self-signed certs are bad, then why do you have the ability to generate them in the first place? (because in most practical cases, you’re looking for session security, not authoritative identification).
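That last distinction, session security versus authoritative identification, maps directly onto client-side TLS configuration. A sketch using Python’s stdlib ssl module:

```python
import ssl

# Authoritative identification: verify the peer's certificate chain
# against the trusted root store AND check that the name matches.
# This is what a CA-signed cert buys you with total strangers.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED, strict.check_hostname)  # True True

# Session security only: encrypt the channel, but don't demand a
# verifiable identity. This is the posture a self-signed cert gives
# strangers who haven't pinned it -- still encrypted, not authenticated.
session_only = ssl.create_default_context()
session_only.check_hostname = False   # must be disabled before CERT_NONE
session_only.verify_mode = ssl.CERT_NONE
```

A better middle ground for a self-signed deployment is to pin the specific cert with load_verify_locations(), which gets you both properties without involving a root CA at all.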
Overheard on die.net’s jabber server: “It basically says that since Gmail et al are in “possession” of your e-mail, a warrant only needs to be served on them, and it can include a provision that they aren’t allowed to notify the owner.” (‘it’ being this decision, blogged about here and here)
From the decision:
Thus subscribers are, or should be, aware that their personal information and the contents of their online communications are accessible to the ISP and its employees and can be shared with the government under the appropriate circumstances. Much of the reluctance to apply traditional notions of third party disclosure to the e-mail context seems to stem from a fundamental misunderstanding of the lack of privacy we all have in our e-mails. Some people seem to think that they are as private as letters, phone calls, or journal entries. The blunt fact is, they are not.
District Judge Mosman seems to be making an interesting conflation that I find… odd… for a judge to make: “because things technically are a certain way, people should know that they are that way, but… regardless of whether they know it or not, we should treat them that way.” Why should people know that they are that way? Do we expect people to understand the full implications of Internet RFCs and how they interact? Why should we assume that? Even if we could, there are lots of other technical implications that can lead to abuse of legal authority, and those are curtailed on different rationales; why should we treat email differently?
There are a number of inconvenient blunt facts, Judge Mosman. It’s a blunt fact that your cell phone tracks your whereabouts. It’s a blunt fact that many people’s cars track their whereabouts. It’s a blunt fact that I can stand up a laser and a few other bits of commercially available technology in the United States, point them at my neighbor’s curtains, and get a nice, full-blown audio copy of their conversations (this one’s pricey, and it’s here just for reference, so you know I’m not making that last statement up).
What prevents misuse of these technologies is the mechanism of the legal warrant: if someone wants access to this sort of data, they’re required to convince a judge that the information is pertinent to a legal action, get said warrant written, and execute it. Now, *physical* searches are different from wiretaps; this has long been recognized by courts. One can argue that monitoring someone’s email is monitoring communications, and thus should be treated more like a wiretap than a physical search. When you go to court to get permission to wiretap a target, all you have to do is execute that court order on the telecommunications company; you don’t have to inform the target. You can also get a warrant for someone’s telephone records without informing the target. In that light, email doesn’t seem entirely different, right?
But email *is* different from telephone communications, because you’re not just getting someone’s *current* communications, you’re getting someone’s data store. This isn’t like someone’s phone records, which is just a call log, this is like having retroactive access to everyone’s phone conversations, backwards in time.
There’s a qualitative and quantifiable difference here. Certainly, the question of whether or not the government should be able to execute a warrant on someone’s email store is an outstanding legal question, but “well, it can be done and people should know that it can be done so they can’t expect privacy” seems like a really weak sauce position here…
… and coincidentally, here’s a case where the ruling was found to be precisely the reverse…
In an interesting side note, a federal judge ruled yesterday that jurors in the Bear Stearns case (in which Cioffi and Tannin are accused of making their portfolios sound much healthier than they were) will not be permitted to hear about one email, in which Tannin wrote, “I became very worried very quickly. Credit is only deteriorating. I was worried that this would all end badly and that I would have to look for work.”
Judge Frederic Block ruled that the government’s search warrant filed with Google to obtain access to the e-mail was unconstitutionally broad and “did not comply with the Warrants Clause of the Fourth Amendment.”
What’s the moral of the story? Well, it’s certainly the case that outsourcing providers have a bunch of your data. It’s also certainly the case that it’s not currently stated in law what the legal obligations are of such an outsourcing provider vis-a-vis protecting your data from nefariousness, let alone government subpoenas or warrants (nor is there a suitable body of case law). Heck, many outsourcing providers aren’t even in this country, so there’s no guarantee that even *if* legislation is drawn up (or case law reaches some preponderance of decisions) that you’re going to have U.S. legal protections over your data.
It just means that you ought to keep this sort of thing in mind, when you’re deciding whether or not to outsource…
Germane to my last post, check this out (from Wired, via Bruce’s blog):
Researchers at the University of Utah have found a way to see through walls to detect movement inside a building.
The surveillance technique is called variance-based radio tomographic imaging and works by visualizing variations in radio waves as they travel to nodes in a wireless network. A person moving inside a building will cause the waves to vary in that location, the researchers found, allowing an observer to map their position.
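The variance trick itself is simple enough to sketch: every radio link reports received signal strength over time, and a body moving near a link’s path perturbs that link’s readings. A toy version with made-up node names and samples (the real system then solves an imaging problem over all the flagged links to localize the mover):

```python
from statistics import pvariance

def active_links(rss_history, threshold=1.0):
    """Flag links whose received-signal-strength variance exceeds a
    threshold -- in variance-based RTI, motion near a link's path
    raises that link's variance while quiet links stay flat.
    rss_history maps link name -> list of RSS samples (dBm)."""
    return [link for link, samples in rss_history.items()
            if pvariance(samples) > threshold]

readings = {
    "node1-node2": [-50.1, -50.0, -50.2, -50.1],  # quiet link
    "node2-node3": [-47.0, -52.5, -44.8, -55.0],  # someone walking through
}
print(active_links(readings))  # ['node2-node3']
```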
Add a nice little HUD and you could have your own personal radar, tracking all movement inside your evil genius lair.