Bruce wrote an editorial for Wired (reproduced with commentary at his blog) about what kinds of people make good security people, in which he muses about whether a security mindset can actually be taught. It's good reading, particularly if you know me and want to know a little bit about the way my brain works. From the article:
SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”
You see Bruce’s point. Security-minded people are naturally sneaky. Whether they exploit their natural sneakiness or not dictates whether they become a criminal, a CIA analyst, a law-enforcement agent, or just a guy like me who notices where the security cameras *don’t* overlap in the local convenience store.
One of the commentators pointed to this other blog post by Colin Percival, in which Percival states: “If you want someone to understand security, just send him to a university mathematics department for four years.”
To some extent I think Percival has a point. I have noticed that lots of mathematicians are naturally adept at picking out the unseen assumptions that introduce systematic weakness. But (as I commented on Bruce's blog), those same mathematicians can suffer from being stuck "in the box". Mathematics is (generally) the study of closed axiomatic systems. Security systems are usually neither closed nor axiomatic. Mathematically-trained security guys can oftentimes be obsessed with security inside the box, and so easily miss the forest for the trees.
I’ll use one of my favorite Bruce anecdotes to illustrate the point: Bruce, back in his more naive days, was attending a security conference, and was involved in a discussion about a cryptographic protocol when someone (I think it was an FBI agent) started describing a side-channel attack. Bruce said something to the effect of, “But that’s cheating,” to which the FBI agent replied, “There is no cheating in this game.”
I see this all the time in Infosec IT papers, and it’s actually why I’ve chosen *not* to do Infosec work as my research focus: everyone is obsessed with proving that some cryptographic protocol is secure, or with figuring out attacks against those protocols. Sure, this is worthwhile and necessary work, but the real pressing “right-now-today” problems in Infosec aren’t protocol-related. Key management. Inherited trust. Authentication and identification. These are the problems in security, and for the most part they are not really technical problems; they are process problems or human problems. You can prove a protocol is mathematically secure, but if your engineering results in keys sitting around in memory, your implementation is insecure. And really, you can be an excellent mathematician and a really cruddy practical security specialist.
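To make the "keys sitting in memory" point concrete, here's a minimal Python sketch (my own illustration, not from Bruce's article). The protocol math here is standard-library HMAC and PBKDF2 and is perfectly sound on paper; the insecurity, if any, lives in the engineering step the proofs never mention, namely how long the key material hangs around in process memory:

```python
# Illustrative sketch: the cryptography is textbook-correct, but the
# key-handling around it is where real-world implementations fail.
import hashlib
import hmac

def derive_key(password: bytes, salt: bytes) -> bytearray:
    # Return the key as a *mutable* bytearray so we can wipe it later.
    # An immutable bytes object would linger in memory until the
    # interpreter happens to overwrite it -- exactly the kind of gap
    # a protocol proof says nothing about.
    return bytearray(hashlib.pbkdf2_hmac("sha256", password, salt, 100_000))

def sign(message: bytes, key: bytearray) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

key = derive_key(b"hunter2", b"demo-salt")
tag = sign(b"hello", key)

# The "key management" step: zero the key the moment it is no longer
# needed, shrinking the window in which a memory dump, swap file, or
# core dump exposes it. (Even this is best-effort in Python; internal
# copies may survive -- which is rather the point.)
for i in range(len(key)):
    key[i] = 0
```

The HMAC tag is just as "provably secure" either way; the wipe loop is the part that separates a secure protocol from a secure implementation, and it's a process discipline, not a theorem.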