Archive for January 2008

Real Science Meets Muh-hahahahaha!   2 comments

Connie-Lynne’s brother Adam just put up a post about a LiveScience article on 10 Ways To Destroy The Earth.

Adam may not have seen this: the ultimate guide to actually destroying the Earth (also by Sam Hughes).  Not some disaster-flick, “Oh, the Earth will be uninhabitable if this occurs,” version of destroying the Earth. The author defines his terms thusly: by any means necessary, to change the Earth into something other than a planet or a dwarf planet. From the intro:

Destroying the Earth is harder than you may have been led to believe.

You’ve seen the action movies where the bad guy threatens to destroy the Earth. You’ve heard people on the news claiming that the next nuclear war or cutting down rainforests or persisting in releasing hideous quantities of pollution into the atmosphere threatens to end the world.

Fools.

The Earth is built to last. It is a 4,550,000,000-year-old, 5,973,600,000,000,000,000,000-tonne ball of iron. It has taken more devastating asteroid hits in its lifetime than you’ve had hot dinners, and lo, it still orbits merrily. So my first piece of advice to you, dear would-be Earth-destroyer, is: do NOT think this will be easy.

This is not a guide for wusses whose aim is merely to wipe out humanity. I (Sam Hughes) can in no way guarantee the complete extinction of the human race via any of these methods, real or imaginary. Humanity is wily and resourceful, and many of the methods outlined below will take many years to even become available, let alone implement, by which time mankind may well have spread to other planets; indeed, other star systems. If total human genocide is your ultimate goal, you are reading the wrong document. There are far more efficient ways of doing this, many of which are available and feasible RIGHT NOW. Nor is this a guide for those wanting to annihilate everything from single-celled life upwards, render Earth uninhabitable or simply conquer it. These are trivial goals in comparison.

This is a guide for those who do not want the Earth to be there anymore.

You can link to the image below to display a graphical representation of the Earth’s current destruction status on your own web site.

Current Earth-Destruction Status

The site also includes a variety of fascinating links to resources for both the seriously scientifically minded and the goofy folks who occasionally like to chuckle evilly. My favorite linked site is this one, where someone well versed in physics spends entirely too much time analyzing The Death Star.

Posted January 30, 2008 by padraic2112 in humor, web sites

What Can We Learn From This Story?   2 comments

Headline: Angry Employee Deletes All of Company’s Data, from foxnews.com

There are actually quite a few interesting security and systems lessons in this one article. Quoting here:

When Marie Lupe Cooley, 41, of Jacksonville, Fla., saw a help-wanted ad in the newspaper for a position that looked suspiciously like her current job — and with her boss’s phone number listed — she assumed she was about to be fired. So, police say, she went to the architectural office where she works late Sunday night and erased 7 years’ worth of drawings and blueprints, estimated to be worth $2.5 million.

It didn’t take Steven Hutchins, owner of the architectural firm that bears his name, much time to figure out who’d done it — Cooley was the only other person who had full access to the files.

Hutchins told one TV station he’d managed to recover all the files using an expensive data-recovery service.

As for the job, Cooley originally wasn’t in danger of losing it. The ad was for Hutchins’ wife’s company.

The first lesson here is that you should never have only one copy of $2.5 million worth of anything. The second lesson is that at some point, project drawings (or whatever your data happens to be) should be archived into a read-only state, which would have limited her to messing with only the firm’s current projects. The third lesson is that certain types of behavior can’t be easily restricted with simple negative consequences. Surely Ms. Cooley, if she had been thinking rationally, would have known that she was going to be caught out for this, which implies that either she was so angry at the thought of being fired that she was acting irrationally, or that her idea of the consequence (“Well, all he could do is fire me anyway”) didn’t match the reality (she’s most likely going to be convicted of a crime for this, which will make it pretty much impossible for her to get any sort of real paycheck for the remainder of her working life).
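
To make that second lesson concrete, here’s a minimal sketch of the “archive to read-only” idea, assuming a Unix-style file server. The specifics here are hypothetical; in real life you’d run something like this from a project close-out script, and pair it with offsite copies:

/* Sketch: once a project wraps, strip the write bits so that
 * day-to-day accounts can no longer modify the file.  Root can
 * still undo this, which is why it limits insider damage rather
 * than eliminating it. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <archived-file>\n", argv[0]);
        return 1;
    }

    /* 0444: readable by owner, group, and world; writable by nobody. */
    if (chmod(argv[1], S_IRUSR | S_IRGRP | S_IROTH) != 0) {
        perror("chmod");
        return 1;
    }

    printf("%s is now read-only\n", argv[1]);
    return 0;
}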

But the real lesson is actually hidden behind all that. It is very difficult to protect yourself from an insider attack, particularly at a small company. If you have a limited number of people you can trust, you’re going to wind up trusting some of them implicitly. Even if Hutchins’s architectural firm had taken steps to help prevent this sort of thing from happening, it’s likely that Ms. Cooley would have been able to seriously bollix up the works anyway if she was determined enough. Read-only media can help cut down the risk, but if someone has physical access, there’s nothing to prevent them from taking a fire axe to the disk (or tapes or DVDs or whatever media they might have used for archival purposes). A trusted insider with the right information could even arrange to get rid of offsite backups with a simple phone call (“Our server crashed and we need our offsite backup copies for a restore…”).



Posted January 29, 2008 by padraic2112 in news, security

Cloverfield was Just Okay   1 comment

Cloverfield is getting pretty good reviews, generally. Some respectably geeky people liked it a lot.

I didn’t, really. No-spoiler review follows.

I’m a big fan of both monster movies and their two near cousins, disaster flicks and post-apocalyptic movies. Odds are pretty good that if it was made after 1930 and there is some supernatural, paranormal, or scientifically implausible critter trying to destroy or take over the world (or eradicate humanity), I’ve seen it. Whenever Mother Nature gets irritated and wipes out a large swath of supporting characters, I’ll plunk myself down for 2 hours or more and enjoy it. I’m generally agnostic when it comes to villain preference: I’ll take zombies, simulacrums (man-made, machine-made, or alien-made), “wee beasties” of varying virulence and origin, earthquakes, fires, aliens, mutated humans, you name it.

I’m also happy to watch varying grades of entertainment in this vein. Well-thought-out villains with interesting motivations are as entertaining as gigantic apes. I’ll take cult classics crammed with B-list actors, and happily suspend my disbelief at marginally crazy plot lines, particularly if they’re just trying to be funny or campy.

However, there are some things I *don’t* like. I don’t like remakes of movies that were already awesome in the first place (although there is the occasional statistically unlikely exception to this rule). I don’t like movies that rely so much on a suspension of disbelief that you have to club yourself in the head repeatedly with a heavy blunt object to get to the point where your IQ is so low that you can forget about the ridiculous premise.

So, you ask, “Why didn’t you like Cloverfield?” No, the cinematography didn’t bother me. I thought the effects were pretty darn good. I can reasonably suspend disbelief and allow big giant monsters that are exceptionally resistant to conventional weaponry to be a central focus of a movie. None of that bothered me.

A while back I read a description of the movie, and what stuck in my head was the line, “It’s a movie about the people who ran.” Aficionados of monster movies and disaster movies who read this description should know immediately what the author is talking about. Classic movies of this type have some recurring characters. You have The Scientist Who Wrongly Assumes That The Unknown Is Good. You have The Engineer Who Cannot Believe His Design Is Flawed. You have The Corporate Tool Whose Greed Dooms Everybody. Usually these characters are sprinkled liberally around the core cast, and you can tell more or less from the beginning who is going to make it through to the end and who is going to be hoisted by their own petard or hubris. Part of the fun of the movie is knowing that Carter Burke is going to get eaten; it’s just a matter of when. Also part of the fun is *not* knowing who is going to make it through to the end (although the Plucky Heroine and the Everyman usually manage to muddle through), and occasionally being surprised when your favorite character gets it early on. Every time I watch Aliens, I hope Sgt. Apone is going to make it, or at least go out in a blaze of self-sacrificing glory. Nope, he’s always squished. Almost as an afterthought.

Another set of “staple” characters in these movies is The Crowds Who Flee. They’re a staple in the sense that there’s usually a large group of people who run screaming from the monster when it finally stops fooling around and gets down to seriously crunching, munching, or otherwise wreaking havoc on the populace. You don’t know much about them, except that they run, and some of them get squished, and some of them run offscreen, and presumably a percentage of them manage to hold out until the end of the movie, at which point they pick themselves out of the rubble and gather around our Hero and Heroine and get down to the job of cleaning up the mess.

Cloverfield gets some major props for focusing on The People Who Ran. It’s a stroke of genius on the part of the writers, in that they actually managed to find a set of classic disaster movie characters that nobody had ever written about before. I was really looking forward to seeing how they delivered.

They did almost everything right. They hid the right things, and didn’t have any Big Explanatory Moments (these are also a staple of the genre, but not appropriate if you’re trying to focus on The People Who Ran). They nailed a lot of these things, even to the point where some people who like disaster movies were actually unhappy with Cloverfield because they were expecting Explanatory Moments. Here’s my problem: they chose an entirely uninteresting set of characters to represent The People Who Ran.

I thought the actors all did a good job; that’s not the point. The entire driving focus of the movie is a romantic subplot that would be more appropriate for a movie where the main characters are all in their mid-teens (every time Rob said, “I’m going to Japan!” my mental editor rewrote it as “I’m going off to college!”). In fact, virtually the entire cast seemed to be more a collection of kids than of young adult professionals. I don’t know, maybe it’s because I’m in my thirties and I just don’t get the 20-something generation, but they seemed a rather vacuous collection of people… to the point where I didn’t find myself emotionally invested in what happened to them in the slightest. I’d get into more detail, but I don’t want to spoil anything for anyone who is going to go see the movie.

It’s still a decently fun watch, and if you didn’t find yourself supremely annoyed at the hand-cam perspective of The Blair Witch Project you’ll probably find this to be a justifiable expenditure of $10. It’s not a classic, though.

Posted January 29, 2008 by padraic2112 in movies, noise

Security Alerts Need To Suck Less   3 comments

As a systems administrator, I get a lot of security vulnerability alerts. Since I do both Linux and Windows, I get sandblasted with them on a distressingly regular basis. Security vulnerabilities usually come in blocks, often grouped by operating system, so if you’re lucky enough to do just Solaris (for example) you have a bad month or two each year when all of the Solaris security guys publish their stuff in chunks, but for the most part you can cheerfully ignore the IIS, IE, Apache, or Linux kernel updates that come your way.

If you handle two major operating systems, though, you always have something to read about.

I’m going to write this post about a particular, recently published security vulnerability in the Linux kernel, to illustrate how badly security vulnerabilities are handled in our industry.

Here’s the vulnerability, according to CIAC. Here it is, according to CERT. Oh, wait… no, it’s not. CERT doesn’t have it in the list yet. Here it is on Secunia. Here it is on SecurityFocus. Here it is on NIST. And ISS, and FrSIRT, and blah, blah, blah.

CIAC’s summary:

The risk is MEDIUM. A local attacker can exploit this issue to access arbitrary files on the affected computer. Successfully exploiting this issue may grant the attacker elevated privileges on affected computers.

ISS’s summary:

The Linux kernel could allow a local attacker to bypass security restrictions, caused by a vulnerability when opening a directory and checking the access modes and open flags. If write privileges for the directory are truncated, a local attacker could call the open() function using specific flags to bypass security restrictions and truncate arbitrary directories.

FrSIRT’s summary:

A vulnerability has been identified in Linux Kernel, which could be exploited by malicious users to bypass security restrictions. This issue is caused by an error in the “may_open()” [fs/namei.c] function that performs tests of access mode by using the “flag” variable instead of the “acc_mode” variable, which could be exploited by local attackers to bypass file permissions and gain unauthorized access to certain files.

Secunia’s summary:

Criticality: Less critical. Impact: Security Bypass, Manipulation of data

Security Focus says:

A local attacker can exploit this issue to access arbitrary files on the affected computer. Successfully exploiting this issue may grant the attacker elevated privileges on affected computers. Other attacks are also possible.

NIST weighs in:

VFS in the Linux kernel before 2.6.22.16, and 2.6.23.x before 2.6.23.14, performs tests of access mode by using the flag variable instead of the acc_mode variable, which might allow local users to bypass intended permissions and remove directories. Impact Type: Provides administrator access, Allows complete confidentiality, integrity, and availability violation, Allows unauthorized disclosure of information, Allows disruption of service. CVSS Severity (version 2.0): CVSS v2 Base score: 7.2 (High)

Six different security databases, three different ratings. “High”, “Medium”, “Less Critical”.

Six different security databases, six different summaries of what an attacker can actually do by exploiting this vulnerability, some of which sound functionally equivalent to “OMG h4x0r pwndage imminent!”

Six different security databases, six links to the kernel.org changelog. Well, at least they all did one thing exactly the same.

Of course, if you’re going to link to Bill Roman’s original bug report, you might (as a security database) want to link to THIS version, which includes the actual code change to fix the problem. This contains, you know, actual information about the problem Bill found.

 fs/namei.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/namei.c b/fs/namei.c
index 3b993db..73e2e66 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1605,7 +1605,7 @@ int may_open(struct nameidata *nd, int acc_mode, int flag)
        if (S_ISLNK(inode->i_mode))
                return -ELOOP;

-       if (S_ISDIR(inode->i_mode) && (flag & FMODE_WRITE))
+       if (S_ISDIR(inode->i_mode) && (acc_mode & MAY_WRITE))
                return -EISDIR;

        /*
@@ -1620,7 +1620,7 @@ int may_open(struct nameidata *nd, int acc_mode, int flag)
                        return -EACCES;

                flag &= ~O_TRUNC;
-       } else if (IS_RDONLY(inode) && (flag & FMODE_WRITE))
+       } else if (IS_RDONLY(inode) && (acc_mode & MAY_WRITE))
                return -EROFS;

        error = vfs_permission(nd, acc_mode);

Now, I’m not a real C programmer, and I’m certainly not a kernel engineer. I know very little about namei.c (here’s the code for the whole thing). People who actually know code are testing this out. Here’s the previous version. The clip we’re interested in is this:

int may_open(struct nameidata *nd, int acc_mode, int flag)
{
	struct dentry *dentry = nd->dentry;
	struct inode *inode = dentry->d_inode;
	int error;

	if (!inode)
		return -ENOENT;

	if (S_ISLNK(inode->i_mode))
		return -ELOOP;

	if (S_ISDIR(inode->i_mode) && (flag & FMODE_WRITE))
		return -EISDIR;

	error = vfs_permission(nd, acc_mode);
	if (error)
		return error;

	/*
	 * FIFO's, sockets and device files are special: they don't
	 * actually live on the filesystem itself, and as such you
	 * can write to them even if the filesystem is read-only.
	 */
	if (S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
	    	flag &= ~O_TRUNC;
	} else if (S_ISBLK(inode->i_mode) || S_ISCHR(inode->i_mode)) {
		if (nd->mnt->mnt_flags & MNT_NODEV)
			return -EACCES;

		flag &= ~O_TRUNC;
	} else if (IS_RDONLY(inode) && (flag & FMODE_WRITE))
		return -EROFS;
	/*
	 * An append-only file must be opened in append mode for writing.
	 */
	if (IS_APPEND(inode)) {
		if  ((flag & FMODE_WRITE) && !(flag & O_APPEND))
			return -EPERM;
		if (flag & O_TRUNC)
			return -EPERM;
	}

	/* O_NOATIME can only be set by the owner or superuser */
	if (flag & O_NOATIME)
		if (current->fsuid != inode->i_uid && !capable(CAP_FOWNER))
			return -EPERM;

	/*
	 * Ensure there are no outstanding leases on the file.
	 */
	error = break_lease(inode, flag);
	if (error)
		return error;

	if (flag & O_TRUNC) {
		error = get_write_access(inode);
		if (error)
			return error;

		/*
		 * Refuse to truncate files with mandatory locks held on them.
		 */
		error = locks_verify_locked(inode);
		if (!error) {
			DQUOT_INIT(inode);

			error = do_truncate(dentry, 0, ATTR_MTIME|ATTR_CTIME, NULL);
		}
		put_write_access(inode);
		if (error)
			return error;
	} else
		if (flag & FMODE_WRITE)
			DQUOT_INIT(inode);

	return 0;
}

Looks like the heart of the fix is that the old version decided whether a write was happening by testing flag & FMODE_WRITE, where the new version tests acc_mode & MAY_WRITE. As far as I can tell, the two disagree when you open something with O_TRUNC but a read-only access mode: O_TRUNC adds MAY_WRITE to acc_mode, but doesn’t set FMODE_WRITE in flag, so the old EISDIR and read-only-filesystem checks get skipped and the truncation code further down can run against things it shouldn’t touch.
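
If that reading is right, a probe for the directory case would look something like the sketch below. This is based on my reading of the diff, not a tested exploit, and on a vulnerable kernel it may actually mangle the directory you point it at, so aim it at a scratch filesystem:

/* Sketch: probe for the may_open() flag/acc_mode mixup.  Opening with
 * O_TRUNC adds MAY_WRITE to acc_mode but does not set FMODE_WRITE in
 * flag, so the old S_ISDIR check never fires for an O_RDONLY|O_TRUNC
 * open.  A patched kernel should refuse this with EISDIR.
 * WARNING: a vulnerable kernel may truncate (corrupt) the directory. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Pass a directory you own and don't care about. */
    const char *dir = (argc > 1) ? argv[1] : "scratchdir";
    int fd = open(dir, O_RDONLY | O_TRUNC);

    if (fd < 0) {
        printf("open(%s, O_RDONLY|O_TRUNC): %s (kernel looks patched)\n",
               dir, strerror(errno));
        return 0;
    }

    printf("open() succeeded; this kernel looks vulnerable\n");
    close(fd);
    return 0;
}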

Now, admittedly, this is a bad thing. Anyone who is worried about data retention is going to be worried about this. But actually exploiting this would be *hard*, for one thing, and it doesn’t “provide administrative access” or “unauthorized data disclosure”. Telling someone that this gives an attacker “elevated privileges on affected computers” or “other attacks are possible” is technically true, but totally useless information from a risk analysis standpoint.

Who is to blame for this nightmare of conflicting information? Well, I’ll give props to Bill for finding the bug, but he didn’t do a good job of explaining what the actual possible effects of the problem were. That’s not really Bill’s fault; he’s talking on a kernel mailing list with a bunch of geeks who make me look like an idiot, and they already know in their heads what the possible applications of this problem are. The security vendors all cross-link each other in their own databases, so there is obviously some awareness that other security databases exist, but they don’t use the same terminology, they don’t use the same encoding for threat levels, and it’s painfully obvious that at least some of them HAVE NO IDEA WHAT THEY ARE TALKING ABOUT. Heck, I could be wrong in my estimation of what’s going on here; I don’t have the time to dig through all of namei.c and find out if my logic is correct. But the functional description of what the vulnerability *is* doesn’t match from one database to the next.

Will you people get your $!@&^* together, for crying out loud? Maybe you could have *one* database, and instead of paying to maintain your own individual ones, hire a couple of people to actually put meaningful information in your summaries?


Posted January 24, 2008 by padraic2112 in security, software, tech, web sites

New Labor-Saving Devices, the Uber-Nerd Edition   1 comment

I have no idea how much this costs. I have no idea how robust the back end is. I have a million questions about the product already. But, 3tera, you have my attention.

This interface is TOTALLY COOL.

From Larry’s blog:

These services allow one to develop applications without hardware or datacenter cost. They bill for resources used — CPU time, storage, bandwidth. That means there is essentially no cost while an application is being developed and debugged since there is no traffic. When the application goes live, the capacity, and hence cost, grow and shrink dynamically depending upon utilization.

It’s a good time to be a startup, that’s for certain. The barrier to creating a proof-of-concept application is plunging at an amazing rate.


Posted January 23, 2008 by padraic2112 in management, tech, web sites

Immersive 3D   3 comments

Now, this guy is obviously a skilled hacker with too much time on his hands. But this little project illustrates how *close* we are to VR in your living room:

ed. note: I don’t know how Vinnie parses the internet and finds all of this stuff, but I’m glad that he does and forwards the interesting bits to me.

Posted January 23, 2008 by padraic2112 in hardware, tech

Large Numbers II – Fun with Calculators   Leave a comment

I wrote a post about large numbers a while back. Yesterday I came across the MegaPenny Project, which uses the common penny as an illustrative tool to provide a sense of scale for some big numbers.

These aren’t really, really big numbers, but 1 trillion (1,000,000,000,000) is still a bit bigger than most people can wrap their heads around. Here’s an image that shows you what a trillion really looks like:

1,000,000,016,640 pennies

That little teeny speck with the shadow in the bottom left corner? That’s a person, for perspective. The cube above would weigh 3,125,000 tons. That’s a little bit over $10 billion in pennies. Another “fabulous fact”: as of this writing, 24-hour copper futures are running at $3.12/lb, which means that this might be $10 billion in pennies, but it would also be $19.5 billion in raw copper. Well, it would be if pennies were still made of copper. Still, if you had a cube of pennies this size minted between 1944 and 1982, it would be worth a little over $18.5 billion. That’s not chump change.
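
The copper arithmetic, for anyone checking my work (using the 2,000 lb short ton):

\[ 3{,}125{,}000 \text{ tons} \times 2{,}000 \tfrac{\text{lb}}{\text{ton}} \times \$3.12/\text{lb} = \$19{,}500{,}000{,}000 \]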

Of course, if you collected 50 pennies a day (certainly possible), it would take you 54,794,521 years to build a cube this big (assuming a lifespan of 75 years, that’s 730,594 lifetimes). That’s a bit longer than the entire Permian era.
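
That figure is just the size of the cube divided by the collection rate:

\[ \frac{1{,}000{,}000{,}000{,}000 \text{ pennies}}{50 \tfrac{\text{pennies}}{\text{day}} \times 365 \tfrac{\text{days}}{\text{year}}} \approx 54{,}794{,}521 \text{ years} \]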

All of that seems pretty astonishing, no? One of the interesting things about big numbers, though, is what happens when you collide them with other big numbers. If everyone in America (just over 301,000,000 people, according to the CIA factbook) collected 10 pennies a day, you could build a cube this big in 332 days, give or take. If everyone in America collected 50 pennies a day ($182.50 a year), in 365 days you’d have a whopping $54,932,500,000. That’s $54 billion.  Now you know why that “fraction of a penny” hack is so effective.
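
The arithmetic behind that last number, spelled out:

\[ 301{,}000{,}000 \text{ people} \times 50 \tfrac{\text{pennies}}{\text{day}} \times 365 \text{ days} = 5{,}493{,}250{,}000{,}000 \text{ pennies} = \$54{,}932{,}500{,}000 \]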

Hm. Of course, that means that a $150 billion economic stimulus package for a year could be thought of as basically equivalent to giving every American roughly $1.38 a day, or … ah… a free cup of coffee (and not even a large one, at that).

Posted January 23, 2008 by padraic2112 in noise, web sites