Archive for July 2007

E-voting Update   2 comments

Following up on an earlier post:

California’s Secretary of State, Debra Bowen, has posted the results of the first audit of e-voting machines used in California. The summary document is here. Sequoia, Diebold, and Hart machines all fared rather miserably.  If you’re not in California, and you’re interested in knowing whether your ballot is being tallied by one of these three vendors, you can check out this spreadsheet at Electionline to see what your state uses (credit to the Wired blog linked below for that URL).


The red team analyzing the Sequoia system identified several issues. They fall into several classes:

1. Physical Security. The testers were able to gain access to the internals of the systems by, for example, unscrewing screws to bypass locks. The screws were not protected by seals. Similarly, plastic covers that were protected by seals could be pried open enough to insert tools that could manipulate the protected buttons without damaging the seals or leaving any evidence that the security of the system had been compromised.

2. Overwriting Firmware. The testers discovered numerous ways to overwrite the firmware of the Sequoia Edge system, using (for example) malformed font files and doctored update cartridges. The general approach was to write a program into memory and use that to write the corrupt firmware onto disk. At the next reboot, the boot loader loaded the malicious firmware. At this point, the attackers controlled the machine, and could manipulate the results of the election. No source code access was required or used for this attack, and a feature of the proprietary operating system on the Edge made the attack easier than if a commercial operating system had been used.

3. Overwriting the Boot Loader. Just as the testers could overwrite firmware on the disk, they could overwrite the boot loader and replace it with a malicious boot loader. This program could then corrupt anything it loaded, including previously uncorrupted firmware.

4. Detecting Election Mode. The firmware can determine whether the system is in test mode (LAT, i.e. logic and accuracy testing) or not. This means malicious firmware can respond correctly to the pre-election testing and incorrectly to the voters on Election Day.

5. Election Management System. The testers were able to bypass the Sequoia WinEDS client controlling access to the election database, and access the database directly. They were able to execute system commands on the host computer with access only to the database. Further, the testers were able to exploit the use of the autorun feature to insert a malicious program onto the system running the Sequoia WinEDS client; this program would be able to detect the insertion of an election cartridge and configure it to launch the above attacks when inserted into an Edge.

6. Presence of an Interpreter. A shell-like scripting language interpreted by the Edge includes commands that set the protective counter and the machine’s serial number, modify the firmware, and modify the audit trail.

7. Forging materials. Both the update cartridges and voter cards could be forged. The report presents several scenarios in which these weaknesses could be exploited to affect the correct recording, reporting, and tallying of votes.


The team investigating the Diebold system identified several issues. They fall into several classes:

1. Election Management System. The testers were able to penetrate the GEMS server system by exploiting vulnerabilities in the Windows operating system as delivered and installed by Diebold. Once this access was obtained, they were able to bypass the GEMS server to access the data directly. Further, the testers were able to take security-related actions that the GEMS server did not record in its audit logs. Finally, with this level of access, the testers were able to manipulate several components networked to the GEMS server, including loading wireless drivers onto the GEMS server that could then be used to access a wireless device plugged surreptitiously into the back of the GEMS server.

2. Physical Security. The testers were able to bypass the physical controls on the AccuVote Optical Scanner using ordinary objects. The attack caused the AV-OS unit to close the polls, meaning the machine could not tally ballots at the precinct or inform voters whether they had “over-voted” their ballot. Similarly, the testers were able to compromise the AccuVote TSx completely by bypassing the locks and other aspects of physical security using ordinary objects. They found an attack that will disable the printer used to produce the VVPAT in such a way that no reminders to check the printed record will be issued to voters.

3. AccuVote TSx. The testers found numerous ways to overwrite the firmware in the AccuVote TSx. These attacks could change vote totals, among other results. The testers were able to escalate privileges from those of a voter to those of a poll worker or central count administrator. This enabled them to reset an election, issue unauthorized voter cards, and close polls. No knowledge of the security keys was needed.

4. Security Keys for Cryptography. The testers discovered that a well-known static security key was used by default.

The report presents several scenarios in which these weaknesses could be exploited to affect the correct recording, reporting, and tallying of votes.


The team investigating the Hart system identified several issues. They fall into several classes:

1. Election Management System. The testers did not test the Windows systems on which the Hart election management software was installed because Hart does not configure the operating system or provide a default configuration. Hart software security settings provide a restricted, Hart-defined environment that the testers bypassed, allowing them to run the Hart software in a standard Windows environment. They also found an undisclosed account on the Hart software that an attacker who penetrated the host operating system could exploit to gain unauthorized access to the Hart election management database.

2. eScan. The testers were able to overwrite the eScan firmware. The team also accessed menus that should have been locked with passwords. Other attacks allowed the team to alter vote totals; these attacks used ordinary objects. The team, in cooperation with the source code review team, was able to issue administrative commands to the eScan.

3. JBC. The team developed a surreptitious device that caused the JBC to authorize access codes without poll worker intervention. The team verified that the mobile ballot box (MBB) card can be altered during an election. The team also found that post-election safeguards to prevent the altered data on a tampered MBB card from being counted can be easily bypassed.

4. eSlate. The testers were able to remotely capture the audio from a voting session on an eSlate with audio enabled, thereby providing an attack that violates voter privacy. The team was also able to force an eSlate to produce multiple barcodes after printing “BALLOT ACCEPTED” on the VVPAT records. This could cause a county that used bar code readers to read the VVPAT to produce erroneous vote totals.

The report presents several scenarios in which these weaknesses could be exploited to affect the correct recording, reporting, and tallying of votes.

The report stresses that the researchers were under severe time pressure to complete their initial review quickly, and as a result, they are confident that a large number of additional exploits exist for all three systems.  There are other blog posts covering more details here, here, and here.

Posted July 31, 2007 by padraic2112 in politics, security, tech

Data at Rest, Data in Transit, Data in Use   3 comments

Interesting thread on the FDE (Full Disk Encryption) mailing list about the ins-and-outs of securing data in three states. I recommend the mailing list; so far the signal-to-noise ratio is > 1.

My commentary on the thread was related to classifying data by state; I’ll replicate it here:

I’ve been following this thread, and I think it’s gone a little out of focus. I like the original idea, but if you’re going to talk about classifying data in those three states (at rest, in use, in transit), you need to define exactly what you mean by those terms.

Robert (who kicked off the thread) said:

However, Data at Rest is almost by definition completely useless. Generally speaking, at least in most enterprise environments, data is worthless unless it can be shared with someone else, and that implies Data in Transit. And that includes data being physically transported on a USB flash drive, as well as transmitted electronically.

Right at the beginning of the thread, we have ambiguity as to what “Data at Rest” means. If Robert is including physical transportation of media as part of the classification of “Data in Transit” (edited from the original typo), then… well, the term “Data at Rest” itself can really be regarded as a useless category: you can always pick up a disk and carry it away… effectively (from a security analysis standpoint), the class of data which can be regarded as “at rest” is the null set.

Similarly, what is the difference between “Data in Transit” and “Data in Use”? We all seem to have some sort of assumed idea about what those states are, but there’s no rigor attached to it… if data is being read off of media by some process, being transferred to the application layer in order to be presented to a user, is that “Data in Use” or “Data in Transit”? It’s certainly changing states, which could imply that it’s “in Transit”… I could go on, but you get my point.

We’re lacking the base context. So, being obsessed by classification, I’ll propose one.

“Data at Rest” is data recorded on storage media. This data can be regarded as “secure” if and only if it is protected by strong encryption (where “strong encryption” is defined as “encryption requiring a computationally infeasible amount of time to brute-force attack”) AND the key is (a) not present on the media itself, (b) not present on the node associated with the media, and (c) of sufficient length and randomness to be functionally immune to a dictionary attack.
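As a minimal sketch of that definition, here is what encrypting a file so it qualifies as “at rest” might look like with the OpenSSL command-line tool. The filenames and passphrase are hypothetical, and a reasonably current OpenSSL is assumed (the -pbkdf2 flag appeared in 1.1.1):

```shell
# Encrypt a file with AES-256 so the ciphertext on disk is useless
# without the key.  The passphrase is taken from a variable here purely
# for illustration -- in practice it must come from the user or a
# token, never from the media itself.
PASSPHRASE="correct horse battery staple"   # hypothetical

openssl enc -aes-256-cbc -salt -pbkdf2 \
    -pass "pass:$PASSPHRASE" -in secrets.txt -out secrets.txt.enc

# Decryption requires presenting the same passphrase again:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass "pass:$PASSPHRASE" -in secrets.txt.enc -out secrets.txt.dec
```

Note that this only satisfies conditions (a) and (b) above if the passphrase genuinely never touches the media or the node; condition (c) is up to the passphrase itself.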

“Data in Use” is all data not in an at-rest state that is on only one particular node in a network (for example, in resident memory, swap, processor cache, or disk cache). This data can be regarded as “secure” if and only if (a) access to the memory is rigorously controlled (the process that accessed the data off of the storage media and read the data into memory is the only process that has access to the memory, and no other process can either access the data in memory or man-in-the-middle the data while it passes through I/O), and (b) regardless of how the process terminates (by successful completion, killing of the process, or shutdown of the computer), the data cannot be retrieved from any location other than the original at-rest state, requiring re-authorization.

“Data in Transit” is all data being transferred between two nodes in a network. This data can be regarded as secure if and only if (a) both hosts are capable of protecting the data in the previous two classifications and (b) the communication between the two hosts is identified, authenticated, authorized, and private, meaning no third host can eavesdrop on the communication between the two hosts.

Looking at these three classifications, here’s your vulnerabilities:


Protecting Data at Rest:

You must either (a) encrypt the entire contents of the storage media or (b) have complete knowledge of how any system or user organizes data when writing to the storage media, so that you can encrypt the data that needs to be protected. (a) is FDE. (b) can be accomplished by any one of a number of other solutions, but is very, very difficult, because even if you know how the system stores everything, you don’t know (or have to enforce through restriction) how the user may store something (you must disable his/her ability to store anything “sensitive” on the media in a location that is not encrypted). Furthermore, (c) you must enforce strong keys/passwords and (d) you must prevent the user from storing the password on the media.

Finally, remember, (e) for detachable media, including laptop hard drives, the USER is considered the “node associated with the media”, so really, your data can’t be considered secure, because the user is the node, and the user has the key. (Unless, I suppose, you have the ability to revoke the key remotely, preventing Disgruntled Joe from taking a laptop out and then quitting with a copy of your code base already in his possession).

By far, (c)/(d)/(e) are going to be the hardest. A suitably strong password that prevents a dictionary attack is going to be burdensome to the user to retain, so they’re either going to forget it, or write it down and stickynote it to the monitor, etc. The only way to mitigate this risk effectively is to *limit access to the data in the first place* – people look at FDE as a “silver bullet” to allow them to say, “We can now allow our vice president to take a copy of the financial database home on his laptop, because it is encrypted, so we don’t have to worry if the laptop is stolen”, but that assumes that (c)/(d)/(e) aren’t problems, which is screwy. Sensitive data shouldn’t leave the house, people. If the VP wants access to the data because it makes his life easier, say “No, you need to be in the office to get access to that,” or make sure ahead of time that everyone at the CEO/Board of Directors level knows that you have *no real data protection* – your data is only as secure as everyone is trustworthy. And while I may trust a particular worker to not read data to a corporate rival over the phone, I simply don’t trust any number of workers > 2 to *not put their password on a sticky note on the screen of their laptop*.


Protecting Data in Use:

This is basically impossible in today’s OS market… anyone who claims they have “secure data in use” is full of baloney. The best you can do here is mitigate the attack vectors. If you use FDE, you solve some of the problems because the swap space is encrypted, which closes one attack vector; or you can get rid of swap altogether (and make sure you’re not using NVRAM). However, if you look at the various ways that Data in Use can be mishandled, virtually all of the major vulnerabilities are exploitable at the OS level, which is something that you’ve more or less outsourced to your OS vendor. Your only mitigation here is to lock down the OS as much as you possibly can (including using FDE to protect the OS files at rest!), and this is more often way more trouble than it is worth, given that even if you could cover all of your bases, it doesn’t protect you from Kevin Mitnick. From a cost/benefit analysis, aside from taking basic steps to secure an operating system, you’re wasting money – locking down Windows to the point of near un-usability isn’t going to protect you from a zero-day IE exploit.
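As one concrete (Linux-specific) illustration of the swap problem mentioned above: decrypted pages written out to unencrypted swap survive the process, and sometimes the reboot. Checking for and disabling swap is straightforward, assuming root access for the latter:

```shell
# Swap is a classic way "data in use" leaks onto unencrypted media:
# decrypted pages get written to disk behind the application's back.
# On Linux, see whether any swap devices are currently active:
cat /proc/swaps

# Disabling swap entirely (as suggested above) requires root:
#   swapoff -a
# and removing the swap entries from /etc/fstab to make it stick.
```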

The number one way to prevent OS-level exploits is to (1) use a web proxy at your border and (2) disallow all attachments via email. Anybody who can successfully sell #2, please let me know how you did it. If you can’t do those two things, though, spending more than a minimal effort locking down the host OS is largely a waste of time.


Protecting Data in Transit:

Here’s where S/MIME and SSL and IPSec and all that good stuff comes in. Actually, next to protecting Data at Rest, protecting Data in Transit is probably one of the easier tasks to accomplish at the present time, except for the fact that both hosts have to be able to protect the Data in Use, and we illustrated in the previous paragraph how hard that is. Yes, you can man-in-the-middle data in transit in many, many instances in today’s networked world, but we already have many of the technologies to mitigate this; we just don’t deploy them properly.


Looking at all of the above, it should be obvious to everybody that you can’t claim your data is “secure”. So, what you need to do is decide what constitutes “reasonably secure” and shoot for that; and that is an organizationally-dependent classification. There is no industry-wide Best Practice available here.

Assuming you are using a hardware token to provide two-factor authentication, hopefully it has a big red light on it to let you know when it is being used for encryption or decryption. And hopefully you log off of the token as soon as you have finished encrypting a file, and likewise whenever the screen-saver locks.

Although I agree that this would be awesome if used properly, it’s simply not ever going to be used properly. People won’t notice if their hardware token flashes red. They don’t notice when their browser doesn’t have the SSL lock icon. Even if they do notice, taking a set of users bigger than a few, most of them aren’t going to care. You can’t solve this problem with education or training.

Unfortunately, really large files can become rather cumbersome to deal with, and particularly the .pst files created by Outlook – some of which can grow to 4 GB. So archive your e-mail religiously to keep the working set small, and use s/mime for all your important correspondence.

Also great tips, which unfortunately are going to fail for any reasonable number of users.

Finally, plan ahead. File formats change, disk crashes occur, encryption hardware gets lost or broken, and your wife might need to access your income tax returns if you run into a tree some night.

Absolutely. Also, remember, your data is only as secure as your backups. If you’re busting your chops protecting your data, you should be busting your chops equally to protect your backups, whatever they are.

Edited to add (07-2010): Bruce has a recent post up that is related.

Posted July 26, 2007 by padraic2112 in hardware, security, software, tech

100 Gbps Internet – 802.3ba   Leave a comment

Ars Technica reports that the Higher Speed Study Group has settled on a specification for 802.3ba and submitted it to IEEE for formal approval, which will support both 40Gbps and 100Gbps speeds. You can see the Project Authorization Request here.

The IEEE standards process is highly entertaining, since it represents the most colossal collision of agendas at the tangent point between the academic, business, and technical spheres. I wish the process could be a bit less entertaining and more focused, but the current iteration is preferable to the alternative. However, the projected 2010/11 ship date mentioned in the Ars article is getting close to too late.

In 2000, we were purchasing 6-10 GB hard drives. Now, you can get 1 TB desktop drives. Prior to 1990, people’s data files were mostly plain text. As the web became popular, pictures and audio files became more common; by 2000 I’d see a GB or so of photos and .mp3s on someone’s computer frequently. In the last five years, the trend has accelerated alarmingly – now people fill up their 80 GB iPods not just with photos and music, but television shows and movies. My own .mp3 collection, (ed note- ripped at 386 bit rate from my own 700+ CD collection, completely legally) fills up about 60 GB of data space and the home server has another 20 GB or so of photos. Digital cameras used to be VGA (.3 megapixels, or 640×480), now they’re WQUXGA (9.2 megapixels, or 3840×2400) or better. In 2003, my wife’s digital camera snapped photos that averaged out between 300 and 600 KB, now with the new N80 SLR camera, the raw photos are between 2 and 2.5 MB, and unlike the old digital cameras, you can snap a dozen pictures in seconds, trying to get the perfect shot. You may pick out one of those dozens to re-size for your personal web page, but you’re going to keep the others lying around.

In fact, the limitations now in what people store aren’t on the technical side, they’re on the legal side. It’s difficult to easily encode NTSC television shows without using proprietary equipment (yes, I know people that do it, but it’s not point-click-store-move-share). HDTV is going to be more difficult, simply because broadcasters are ornery when it comes to protecting what they believe to be their rights over their intellectual property. Ripping your own DVDs to another video format also can be done, but again, it’s a pain. Blu-Ray and HD-DVD movies are huge, but media companies are going to make it difficult/annoying to get those movies off of those discs.

People are still going to do it, though… and if nothing else, the standard home digital video camera is going to be producing HD recordings as HD camcorders become more popular. People aren’t going to shell out enough money for a sweet HDTV plasma/LCD setup at home to watch HDTV over cable or DirecTV and be happy with the quality of the current digital video recorders, especially now that you can get HD camcorders that do 1080i for under $1,000.

Digressions aside, what was a perfectly “normal” expected amount of digital data for people to store up in 1993 fit on a few 1.44 MB floppies. In 2003, it was a few gigabytes. In 2007, we’re into the hundreds of gigabytes. At this rate, by 2010, multi-terabyte home data stores are going to be not uncommon.

And if there is one thing that is known to be true, it’s that people want to share their data with other people. If someone wants to share home HD movies with grandma, right now it takes forever over ADSL. Broadband connectivity in the US is wretched compared to the rest of the plugged-in world, which is still astonishing to me. 100 Mbps should be common by now. It takes more than a day to download the Mandriva 2007.1 DVD .iso off of bittorrent; why is our intra-continental network still so dang slow? (ed note – I actually know why it’s so slow, I’m just griping).
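The arithmetic behind that gripe is easy to sketch. The figures below assume a raw 1 TB transfer and ignore protocol overhead entirely:

```shell
# Seconds needed to move 1 TB (8 * 10^12 bits) at two line rates,
# using integer shell arithmetic and ignoring all protocol overhead:
echo $(( 8 * 10**12 / (100 * 10**6) ))   # at 100 Mbps: 80000 s, about 22 hours
echo $(( 8 * 10**12 / (100 * 10**9) ))   # at 100 Gbps: 80 s
```

At 2007-vintage ADSL uplink speeds, the same transfer stretches into weeks, which is the real problem for sharing home HD video.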

Posted July 25, 2007 by padraic2112 in networking, news, tech

Cool Software – OTR   Leave a comment

Off the Record Messaging.  A plugin for Gaim Instant Messenger or Trillian that supports a nice set of security-related functionality.  A geek-level description of the protocol is here, describing the AKE and the encryption algorithms in the back-end.

Installing is easy on Fedora Core (via a number of repositories) or Mandriva.

Posted July 23, 2007 by padraic2112 in security, software, tech

Large Numbers   1 comment

I was idly parsing the blog on XKCD’s site, and came across this post concerning large numbers. Reading it led me to this page, which is quite an excellent summation of how cool really big numbers are (in terms a non-mathematician can more or less understand) and also gives a pretty good basic introduction to the concepts of cardinal and ordinal infinities.

My favorite paragraph:

At this point in the discussion (or usually sooner) it becomes apparent that there is additional knowledge and assumptions “outside the system”. An effort is made to identify these, define them precisely and add them into the quantity N. After doing this, it is soon discovered that the resulting formal system itself depends on things outside itself, and so on. I have encountered many expositions, discussion threads, etc. over the years, that begin with an optimistic determination to formalize the problem and quantify exactly how large numbers can be derived from first principles; they all have ended up somewhere in this jungle of abstraction. Here is a relevant quote:

I have this vision of hoards of shadowy numbers lurking out there in the dark, beyond the small sphere of light cast by the candle of reason. They are whispering to each other; plotting who knows what. Perhaps they don’t like us very much for capturing their smaller brethren with our minds. Or perhaps they just live uniquely numberish lifestyles, out there beyond our ken.

Edited to add: Apparently some people don’t understand how cool this is. Then again, looking at the Digg feed quoted by that blogger, lots of people do understand how cool this is, too.

Posted July 19, 2007 by padraic2112 in math, web sites

Cool Software – Guiffy   Leave a comment

This looks very useful.  It’s a cross-platform, GUI-based “diff” tool.

Posted July 16, 2007 by padraic2112 in freeware, software, tech

Bad Security 201 – More On Remote Display   Leave a comment

Related to this post is the question… “What’s the best way to get a graphical interface on a remote machine?”

If the remote machine is Windows, you run Remote Desktop Connection (from a Windows box) or rdesktop (from a Linux box) and call it a day.

If the remote machine is Linux, barring some lower-level protocol solution like IPSec, you basically have the following options:

  • Native X-forwarding
  • X-forwarding over SSH
  • VNC
  • VNC tunneled through SSH
  • Something like FreeNX (see below)


Native X-forwarding

Uses an unencrypted channel, and thus any traffic is wide open for stealing. Since it’s unencrypted, and since the local X server only has to render the GUI info piped from the remote machine, it’s pretty fast, relatively speaking, although there is no compression, so you can see performance issues if you don’t have good network bandwidth. But since it’s unencrypted, it’s pretty much suicidal on an open network. Anyone who does this ought to be shot. It’s about 1000 times worse than doing remote admin using telnet, since (in addition to handing away control of the destination box) you’re potentially compromising your local machine. Brrr.


X-forwarding over SSH

Uses an encrypted channel, preventing eavesdropping, at a cost of performance, since everything has to pass through the encryption. Security-wise, still pretty horrid since many applications on a remote machine don’t work properly unless you force the connection to run “trusted” (this is generally stinky and ought not to be done).

If the remote machine is compromised, you’re handing over your local desktop to the compromised machine when you connect to it. If the remote machine is reasonably trustworthy, well, this may be an acceptable risk, but unfortunately if/when you tell anyone how to do this they’re going to connect to *untrustworthy* boxes (like their own machines, or machines set up elsewhere by people who don’t know what they’re doing), so I generally consider this to be a bad idea. You see lots of posts about X-forwarding over SSH on the web; few of them mention the risks involved.

Anyone who does this ought to go into sysadmin time-out, or be shot if they’re connecting to untrusted machines.



VNC

VNC has a DES-encrypted challenge-response for the login process, but a 56-bit key and an 8-character password are pretty trivial to crack. In addition, all subsequent traffic between the hosts is in the clear, due to performance issues, so you have the same snooping problems you do with native X-forwarding. I have professional experience with VNC sessions being brute-forced and subsequently compromised, with the compromise escalated because the user in question had sudo.
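To put those numbers in perspective, shell arithmetic is enough to show how small both keyspaces are by modern standards:

```shell
# Size of a 56-bit DES keyspace:
echo $(( 2**56 ))   # 72057594037927936, about 7.2 * 10^16

# Size of an 8-character password drawn from the 95 printable ASCII
# characters -- even smaller than the DES keyspace:
echo $(( 95**8 ))   # 6634204312890625, about 6.6 * 10^15
```

Either number is within reach of a determined brute-force effort, which is why the DES challenge-response buys so little.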

Anyone who does this also ought to be shot, as per Native X-forwarding. I’ve talked about this till I’m blue in the face, and people still do it, and it makes me nuts.



VNC tunneled through SSH

Setting up an SSH tunnel and then piping all your traffic through it solves the “in the clear” issue of using native VNC. There are performance issues, since VNC itself has cruddy compression, and tunneling everything through SSH can bog things down. Since VNC does not require the local X server to talk to the remote machine, this is much better than X-forwarding over SSH, as you’re not treating the remote machine as a trusted X client. In addition, VNC runs as a user-level process, so if there is a hack in the wild, the worst thing that can happen is that someone hijacks the user account on the local machine; whereas if you’re doing X-forwarding, the local X server (running as root) is exposed to input from the remote client, and I’m not terribly keen about that.

VNC over SSH, in my experience, is so “performance-wise” annoying that it is a barely acceptable solution. Their compression algorithms are just plain bad.

Note, if you’re going to do this, you need to make sure that the VNC Server only accepts connections from localhost, using a firewall of some sort, else the Internet Bots will eventually crack in.
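Put together, the tunnel described above looks something like the following; the hostname, username, and display number are hypothetical:

```shell
# Forward local port 5901 to the VNC server listening on the remote
# machine's loopback interface (display :1 = TCP port 5901).
# -N: run no remote command; -f: drop to the background after auth.
ssh -f -N -L 5901:localhost:5901 user@remotehost.example.com

# Then point the viewer at the local end of the tunnel, so all VNC
# traffic rides inside the encrypted SSH connection:
vncviewer localhost:1
```

Combined with the localhost-only restriction on the VNC server, this keeps the unencrypted VNC protocol off the wire entirely.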



FreeNX

FreeNX uses SSH as a backend for encryption, but adds compression and caching, and is supposed to make remote graphical connections usable. I’ve heard good things, but I’ve never actually tried it (I only connect to Linux boxes remotely using a terminal), so I can’t give a comprehensive review. Reading the documentation leads me to believe that if someone actually needs remote GUI, this is probably the only real, current alternative that isn’t full of gaping ugly security holes or (potentially) horrid performance issues. You may want to explore it (or some other alternative similar to it) as a solution.

ed: Coworker Dave has used FreeNX, reports that it works as advertised, and does remote-to-local audio and video forwarding functionally as well. FreeNX is available in many repositories, and apparently comes with openSUSE and Mandriva at least. FreeNX is also part of the FreeBSD ports collection (/usr/ports/net/freenx).


Remote Desktop Protocol

RDP solves pretty much all of the above, but the remote box needs to run Windows, since there isn’t currently an RDP server for X (and, since RDP is proprietary to Microsoft, one is unlikely to be forthcoming). Which is too bad, since RDP is functional, supports high levels of encryption, and even (most recently) supports server keys, so you have a destination-host verification channel like SSH does.

Posted July 12, 2007 by padraic2112 in security, software, tech

I Burn The Feed   Leave a comment

Just set up a FeedBurner site.  If you want an RSS feed of this blog, check it out here.

Posted July 11, 2007 by padraic2112 in newsflash

Cool Software – WinGrep   Leave a comment

I’m ecstatic. I love grep. I wish I had grep on Windows (other than through Cygwin). Now I can get grep for Windows. Can’t wait to try it out.
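For anyone who hasn’t had the pleasure, here’s the sort of thing grep makes trivial; the directory and pattern below are hypothetical:

```shell
# Recursively search a tree for a pattern, case-insensitively,
# printing the file name and line number with each matching line:
grep -rin "password" ./conf/
```

Having that one-liner available natively on Windows, without dragging in all of Cygwin, is the appeal.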

Posted July 11, 2007 by padraic2112 in freeware, software, tech

Bad Security 201 – Remote X Sessions over SSH   2 comments

There is a lot of misleading information on the web regarding the potential security problems of using ssh with X-forwarding.

People don’t seem to understand the implications of the documentation. So, here’s the skinny. (technical note: this is written using OpenSSH 4.6.1p1, your mileage may vary with older versions).

OpenSSH (again, remember we’re talking about 4.6.1p1 here) supports the X Security Extension specification. In implementation, this means that the command ssh -X now by default treats the remote machine as an “untrusted” X client, and the command ssh -Y treats the remote machine as a “trusted” X client (actually, to be precise, it appears that the ssh -Y command bypasses the normal X trust mechanism entirely). We’re not going to talk about X’s security mechanism here, for the most part, except in the context of remote connections.

However, this command line behavior depends upon your ssh_config. In ssh_config, you have

  • ForwardX11
  • ForwardX11Trusted

By default, in OpenSSH, these two beasties are “no”. This is *not* the case in most deployed versions of OpenSSH, as packaged by your favorite distribution. In FreeBSD, they’re “no” (in OpenBSD, I’d imagine they’re set to “no”, since Theo de Raadt is unlikely to override the default configuration of his most famous software package in his most famous BSD). In Mandriva 2007.1 (using the “medium” msec configuration for workstations) and Fedora Core 6, they’re both set to “yes”.

If ForwardX11Trusted is set to “yes”, then the commands ssh -X and ssh -Y are functionally equivalent. If ForwardX11 and ForwardX11Trusted are both set to “yes”, then the command flags are not only equivalent, they’re unnecessary… that is

ssh user@host command = ssh -X user@host command = ssh -Y user@host command

If ForwardX11 is set to “yes” and ForwardX11Trusted is set to “no”, then

ssh user@host command = ssh -X user@host command <> ssh -Y user@host command

So, what precisely does this mean? Well, you have to read the Xsecurity specification, and sit back for a while with a whiteboard and do some flowcharting to really get at the meat of this.

When you use SSH to do X-forwarding, what you’re actually doing is issuing a command on a remote machine, but you’re telling that command to send the GUI output back to your local machine’s X server for display (this makes the documentation difficult to read, because your common understanding of “client” and “server” here is going to be confused). The X Security Specification essentially says, “clients that are regarded as untrusted aren’t allowed to do certain things”. This means that (if you’re following the spec) you can connect to a remote machine using X forwarding and issue a command, and if the results of the command violate the “certain things” principle, you’re going to get an error instead of a result.

Side note – there are interesting problems here, since apparently the development crews of lots of client software packages don’t understand the X Security spec, so you get instances of odd behavior when you try and run a software application on a remote machine with X forwarding, where some things just don’t work right. This is apparently why Fedora Core 6 flips the ForwardX11 and ForwardX11Trusted to “yes”, in the interests of making things work.

So, supposedly, if you’re not using ForwardX11Trusted and you’re not using ssh with the -Y option, the application you launch on the remote machine can’t do evil things like take over your X server or keylog or take screenshots of your local display, etc. On the other hand, if you are using ForwardX11Trusted or you’re using ssh with the -Y option, you are bypassing the X Security Specification, which means that yep, that remote machine can do oodles of bad things to your local box.
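If you want the conservative behavior pinned down regardless of what flags anyone types, it can be set in ssh_config. A minimal fragment, matching the stock OpenSSH defaults discussed above, would be:

```
# ~/.ssh/config or /etc/ssh/ssh_config: refuse X forwarding outright,
# trusted or not, no matter what -X / -Y flags are given.
Host *
    ForwardX11 no
    ForwardX11Trusted no
```

Per-host exceptions can be carved out with a more specific Host block above the wildcard, if you must forward to a box you actually trust.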

Now, to make sure nobody misunderstands my position here, I generally think X forwarding is a bad idea. There are lots and lots of reasons why the SSH manpages say things that roughly translate into the vernacular as, “You ought not to load this gun and point it at your foot.” I think X forwarding to untrusted machines is a very, VERY bad idea, regardless of whether or not you’re using -X or -Y or what your ssh_config is… your X server on your local machine is running as root, not as a user-level process. Hooking that up to something running on an untrustworthy box is just foolhardy. Logging into a potentially compromised box via ssh is a bad idea, but at least there your ssh process is owned by your uid, so if you’re careful and paranoid you can limit badness. Somewhat. Remember that cascade failures are still abundant and can cause seemingly harmless things to break in really bad ways.

But don’t think just because you’ve read the OpenSSH materials that running ssh -X is going to provide you some level of protection…

Posted July 9, 2007 by padraic2112 in security, software