Tip o’ the hat to Greg.
Archive for February 2010
I can’t attest to the first. I just don’t buy many movies anymore because the second is painfully true.
I’m a Bell.
That’s not a joke. I actually am a Bell. Also a Lion and a (no, really) Pedant. Not exactly a Beaver or a Cub, although I’ve spent more time at Caltech than most students and I was at Loyola High as a staff member long enough to qualify as a member of both communities.
I’m talking about mascots. CGU’s mascot is the Pedant (an ant wearing a mortarboard, although the choice of words was tongue in cheek and intentional), LMU’s mascot is the Lion, and Bellarmine College Preparatory’s mascot is the Bell. Archbishop Mitty students were Monarchs. St. Francis students were Lancers. We were the Bells.
I hated that mascot. Apparently every so many years someone would suggest a name change and the alumni would all vote it down. I always wanted someone to explain to me what sort of collective idiocy that represented. Not because of the inevitable jokes about “Belles” (after all, thanks to Saturday Night Live I was immune to gender-based “insults” by the time I got to high school), but because it was a lame mascot. Bells call you to be spiritual, or they call you to dinner, or announce the time. They don’t call you to a contest, which is what your mascot is actually supposed to be doing, right?
You don’t follow a bell onto a gridiron. You follow a bell to vespers.
Now, I’m a little older. What’s interesting to me, though, is that I still don’t like the Bell mascot. I don’t dislike the Pedant; really when you’re in grad school a funny mascot (particularly a self-deprecating one) ought to appeal to you. High school is different, though, just like teenagers are different from adults. Sports are important when you’re in high school, even if you’re like I was and aggressively ignore them while you’re there. They mean something to the community at large. And you still don’t follow a bell onto a gridiron.
I find it mildly amusing that I still get annoyed about my high school mascot. What’s yours? Did you like it?
Engadget reports Kingston is shipping a 256GB USB thumb drive. No, that’s not a typo.
Okay, so it’s $1,000, which is pricey for something so small you might run it through the wash and ruin it.
Still, let’s assume for a minute that Engadget is properly labeling it 12 MB/sec (typically I expect data transfer rates to be quoted in megabits per second, not megabytes), and that the capacity is the usual marketing 1,000,000,000 bytes to the GB instead of the normal filesystem 1,073,741,824 bytes to the GB.
That’s 256,000,000,000/12,000,000 ~ 21,333 seconds to write it full (with no overhead – ha – that happens in the real world precisely never). About 6 hours.
If, of course, it’s actually 12 Mbps instead of 12 MBps, you’re looking at about 47 hours to fill the damn thing up, again assuming no overhead (yeah, right)….
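The arithmetic for both readings works out like this (a quick back-of-the-envelope sketch; the 256 GB and 12 MB/s figures are from Engadget’s post):

```python
# Fill times for a 256 GB thumb drive at 12 MB/s vs. 12 Mbps,
# ignoring all real-world overhead.
capacity_bytes = 256 * 1_000_000_000  # marketing gigabytes

# Case 1: 12 megabytes per second
seconds_mb = capacity_bytes / 12_000_000
print(f"12 MB/s: {seconds_mb:,.0f} s, about {seconds_mb / 3600:.1f} hours")

# Case 2: 12 megabits per second (8 bits per byte)
seconds_mbit = capacity_bytes * 8 / 12_000_000
print(f"12 Mbps: {seconds_mbit:,.0f} s, about {seconds_mbit / 3600:.1f} hours")
```

Which gives roughly 21,333 seconds (about 6 hours) in the first case and roughly 170,667 seconds (about 47 hours) in the second.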
That is one huge FAT32 partition, assuming you were technically neophyte enough to put the standard removable-drive filesystem on it, but simultaneously savvy enough to figure out how. My God, it makes my brain hurt just to imagine how many centuries it would take to defrag the thing. On the upside, you’ll be able to hide one heck of an encrypted TrueCrypt volume on that sucker; I don’t think you’d be able to find it in a reasonable amount of time with TCHunt just because of the gigantic size of the thing and the amount of *time* it would take to read it.
Dan Wineman over at Venomous Porridge offers this tidbit:
This is called Facebook Connect, and it’s a very bad thing for security and user education. Teaching people to check that the URL starts with facebook.com before logging in is useless, because Facebook wants its users to log into anything that vaguely looks Facebookish, and it’s training them to do so. How is anyone expected to distinguish Facebook from a phishing site masquerading as Facebook, when Facebook Connect looks and acts like a phishing site by design?
That’s indeedy a very good question, Dan. Undoubtedly people aren’t expected to distinguish Facebook from a phishing site, because Mr. Zuckerberg doesn’t think about security any more than he thinks about privacy.
(Spoiler: here’s the shorter answer. If you use SSL/TLS, you’re probably not using it for the right reasons, and you’re probably not getting the level of security you think you’re getting, because you’re probably doing it wrong.)
Ivan’s points boil down to this: SSL/TLS, by itself, is secure. It’s all these implementation details that render it insecure in practice. Ivan then offers eleven areas where SSL is “broken” in practice.
Here’s my issue with the slides: half of them don’t detail problems with SSL at all, and the other half describe behavior that’s built into the design of SSL itself.
Let’s go through the slides by point.
Ivan’s first contention is that self-signed certificates are bad. He argues that they’re insecure, that they teach users to ignore warnings, and that it’s cheaper to get a “real” certificate than to use a self-signed one anyway.
Well, a self-signed certificate is certainly differently secure than one signed by a root CA, but whether it’s less secure, or insecure, is a completely different question (trusted authorities and exploitation scenarios deserve their own post, so I’ll leave it at this for now — edited to add — thank you, Ed Felten, now I don’t need to write this up). The second contention is just silly: users don’t need to be trained to ignore warnings; they do it already. The last is at best incomplete. It requires a certain level of skill to deploy a service that relies upon a self-signed certificate, so objecting that you have to count the maintenance as part of the cost is mostly beside the point. You have to maintain any certificate, whether you sign it yourself or pay Verisign to sign it for you. If I have to pay Bob the Apache Wizard to maintain my site and Bob knows how to generate a self-signed cert, it’s going to be cheaper for me to have Bob sign the cert than to pay Verisign to do it, because Bob is going to get his salary (or his packaged SLA payment) either way.
Ivan’s second contention is that private certificate authorities are bad. The logic follows mostly along the lines of the previous point: it’s better to pay someone else to do this for you than to do it yourself. Now, he has something of a point here. Building a CA isn’t the same as self-signing a certificate; it takes a higher degree of knowledge to build the thing properly. I would imagine that there are a number of CAs out there that are unnecessary and could easily be covered under one of the existing root CAs. However, there are any number of completely legitimate reasons for running your own CA, and in any event I don’t think one-off CAs represent a big threat to the overall infosec domain.
Points 3, 4, 8, 9, and 10 are all basically the same point: if your site needs to be encrypted some of the time in transmission, it really needs to be encrypted all of the time, period. This is a good point (really should be a single point with examples, though), and I’m more or less with Ivan on this one, although I understand why it isn’t always the case.
Point 7 is that SSL sometimes isn’t used at all when it should be. I’m not sure why this belongs on the list; that’s not a problem with SSL implementation, per se. And I personally haven’t seen an unencrypted site that handles sensitive data in a long while, so I don’t know how germane it is anymore.
Point 11, and to a lesser extent 5, aren’t so much problems with SSL as they are problems with the couplings between SSL & DNS, pushed through the lens of user expectations. DNS has had its own problems.
Finally, point #6 (using an EV certificate, as opposed to a normal SSL certificate) illustrates the problem I have with computer security engineering professionals.
Now, I haven’t seen the talk and I haven’t read any of Ivan’s blogging (I should, and I’m adding it to my blogroll now), so I can’t say that this is fair, but just reading the slides, here’s how I interpret the underlying context of this talk:
“SSL is totally secure, if you are using it in the totally most secure way and no other way, because we designed it to be totally secure if you use it in the totally most secure way. Oh, but we also made it so that you could use it in all of these other ways, but DON’T DO THAT because you ruin our perfect design by using it in the non-perfect way!”
There’s a reason why I switched my research focus from infosec to disaster/crisis management, and this is it. Information systems security designers have a tendency to draw a box in their head, and design a system that is secure inside that box. If you use the tools they provide within the boundaries of that box, you’re golden, and if you don’t, you’re probably screwed. But that’s not on them because they can only design out to the edges of the box.
The problems with this approach are that most systems don’t fit inside that box, that the box itself often sits on top of a completely insecure table, and that the box often has lots of little holes punched into it for various reasons.
Ignore those reasons! Don’t use that functionality! It’s bad! But it’s necessary, that’s why we put it in there! But you’re probably not doing it right, and it’s not necessary for you, so just pay someone else to do it!
If setting up your own CA is bad, then why is it good to have multiple root CAs? Shouldn’t there be just one? (no)
If EV certificates are the best, why do CAs offer regular certs? (because)
If using incomplete certs is a problem, then why is it possible to generate an incomplete cert in the first place? (because not all certs are certifying the same thing)
Heck, if self-signed certs are bad, then why do you have the ability to generate them in the first place? (because in most practical cases, you’re looking for session security, not authoritative identification).
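That last distinction, session security versus authoritative identification, is visible directly in, for example, Python’s standard ssl module (a sketch of the idea, not anything from Ivan’s slides):

```python
import ssl

# Default context: encrypts the session AND verifies the peer's
# identity against the system's trusted root CAs.
authenticated = ssl.create_default_context()

# A context that still encrypts the session but accepts any
# certificate, self-signed included: session security without
# authoritative identification. Note that check_hostname must be
# disabled before verify_mode can be set to CERT_NONE.
encrypted_only = ssl.create_default_context()
encrypted_only.check_hostname = False
encrypted_only.verify_mode = ssl.CERT_NONE
```

Both contexts give you an encrypted channel; only the first tells you anything about who is on the other end of it.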
In the “ZOMG talk about spending too much time on something” award, this:
Tip o’ the blogger hat to Live Granades.