Archive for the ‘software’ Category

ASUS Blu-Ray Recorder and Power2Go

I recently installed an add-on Blu-Ray Recorder on my Dell desktop machine here at work, and I had some troubles.  You might too.

The physical installation went fine, although they didn’t include a SATA cable in the box with the drive, which I found annoying.  If you’re buying one of these, keep in mind you need to pick up a cable as well unless you’re ditching your existing optical drive.

When I went to install the Power2Go 7.0 software that was pre-packaged with the drive – so that I could actually burn Blu-Ray discs – I got a popup error:

Error 1327.Invalid Drive Y:

“Weird”, thought I.  I have Offline Folders set up on this desktop pointing my documents folder at a server, and that folder is mapped as drive Y.  Well, it can occasionally happen that software developers hard-code a drive letter in their installation (bad practice, boo!) for use as a virtual drive, so I disconnected the mapped drive and tried again.  Still no dice.

It turns out that if you have Offline Folders set up, the Power2Go installer borks even if the drive is not connected.  Turning off Offline Folders isn’t enough to fix it, because the Offline Folders setup leaves registry keys behind even after you turn it off.  You actually have to run the Registry Editor and remove the legacy entries for the drive mappings to get the installer to stop freaking out.  Be cautious when you run the Registry Editor, as you can brick your machine.

Click Start, and then click Run.  In the Open field, type regedit, and then click OK. The Registry Editor opens.

  1. In the Registry Editor, locate the following registry key:
    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders
  2. In the right pane, note the values in the Data column of each entry.  Find the legacy entry for your Offline Folders drive mapping and delete it.
  3. Repeat for the following registry key:
    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
  4. Close the Registry Editor and run the installation again.
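If you’d rather see exactly which values still point at the phantom drive before you start deleting things, you can dump the key with reg export and sift through it.  Here’s a rough sketch in Python (the drive letter, the export filename, and the “plain string values only” assumption are mine, adjust to taste):

```python
# Scan a .reg export of the Shell Folders keys for values that still
# point at the old Offline Folders drive letter (Y: in my case).
# Export first with:
#   reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" shell_folders.reg
# Note: this only handles plain string values; hex(2)/REG_EXPAND_SZ
# entries in the export won't match the pattern below.

import re

def find_stale_entries(reg_text, drive_letter="Y"):
    """Return (value_name, data) pairs whose data references the drive."""
    stale = []
    # .reg string value lines look like: "Personal"="Y:\\Documents"
    pattern = re.compile(r'^"(?P<name>[^"]+)"="(?P<data>.*)"$')
    for line in reg_text.splitlines():
        m = pattern.match(line.strip())
        if m and m.group("data").upper().startswith(drive_letter.upper() + ":"):
            stale.append((m.group("name"), m.group("data")))
    return stale
```

Anything this flags is a candidate for deletion in regedit; I’d still eyeball each one rather than delete blindly.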

Posted September 13, 2012 by padraic2112 in hardware, software, tech

Folksonomy of Programmers

I don’t program for a living.  However, I know a bunch of programmers, as part of my job I talk shop with Computer Science people, and programming languages interest me from the standpoint of logical systems.

I was chewing the fat with one of those aforementioned Computer Science folk this morning, and I offered a classification system for programmers that he found entertaining.  Since I needed something to write about to get me back into the blogging gig, here it is for your consumption, with some embellishment.

Class One: The Directionless Hack

How to recognize them:

These people don’t know much about anything.  They’re the type who learned how to program by reading “Java for Dummies”, or by hammering away at their computer to get something specific done (like, building the web site for their band or their buddy’s motorcycle shop). Edited to add: as Corey points out in the comments, you can have a degree in Computer Science, from a reputable university, and still be a directionless hack.  How you learned what you know is less important than what you haven’t learned, and “just getting this next thing done” is a very common approach to formal education, too.

In and of themselves, self-taught programmers aren’t necessarily a bad lot (many of the below classes started here), but these yahoos haven’t got an enormous amount of intellectual curiosity about how programming works, and their “self-taught” skills stop precisely at the place where their current problem ends.  They generally keep solving the same problems over and over, failing to realize that the problem they’re trying to solve is something someone else already solved a couple of decades ago.

They program almost entirely in Perl, PHP, or JavaScript, because that’s what they can hack together with a minimal amount of abstract work, and they run everything over the web that they can.  They’ve built at least one database in their lifetime, and anyone who understands normalization will scream in abject horror upon seeing the schema.  Generally, they need complete access to everything to get anything done, because they can’t explain what they’re doing while they do it, at least in part because they’re not sure what they’re doing.  Within a very short period of time after they leave their code up and running, it will collapse in a way that only the original developer can untangle.  They don’t use version control; or, if they do, a typical change comment will be “Checking in some changes”.  Comments in the source code will likewise be either nonexistent or contain no useful information.  They don’t work well with systems administrators, DBAs, or networking folks.

Common statement: “Well, it works for me.”

Class Two: The Directed Hack

How to recognize them:

These are the people who know they need to get something done, but figured out somewhere along the line that it might be a good idea to pick up at least a couple of books about programming concepts instead of just “howto” manuals.  If they graduated with a CS major, they paid enough attention in their theory classes to grasp what “layer of abstraction” means, although they may not be able to explain it.  Like the directionless hack, they usually need full access to most things to get anything done; however, unlike the directionless hack, they’ll have learned enough about at least some things to know they don’t want to mess with them any more.  If they have more than 5 years of experience, they’ll have some comments in their code (precisely at the location where most of the bugs are), because they occasionally want to go on vacation.  They program in whatever language is currently en vogue wherever they are, or they will default to Perl or Python.  They will have an opinion on what the best language is, but the strength of their opinion will be directly correlated with whether they’ve worked primarily on writing their own code (in which case they will prefer an unstructured language) or on supporting the code of someone else (in which case they will prefer a much more rigorously structured language).  Depending upon the stage of their professional development, they will work well with systems administrators, database administrators, or networking personnel, but never all three at the same time.  These people actually make up the bulk of systems administrators, industry-wide.  Programming editors vary wildly.

Common statement, in comments: “# I know this sucks, but it works.  Do it yourself if you don’t like it.”

Class Three: The Mercenary Professional

How to recognize them:

They know Java, ASP.NET, and/or C#.  They’re at least conceptually familiar with SOAP and/or Ajax.  They hate whichever database they’ve relied upon that had the least competent DBA, and like whichever database they’ve relied upon that had the most competent DBA.  They want version control, they want detailed specifications, they don’t want to have more than two meetings with the customer.  They regard most of what they learned in college (if they were a CS major) as useless and unnecessary.  These people want to get stuff *done*, and they don’t want another phone call about it afterward.  They regard systems administrators and networking staff as necessary evils.  They will love good DBAs, and set fire to bad ones.  They will have a love-hate relationship with XML.  They will likely prefer Waterfall development, and use NetBeans or Visual Studio IDE.

Common statement: “File that bug report with the maintenance team,” or “That’s not in the spec.”

Class Four: The Cabalist

How to recognize them:

They have a very, very strong opinion on which language is teh best evar, and the list of candidates includes Lisp, C, Modula-3, and Haskell.  They will be able to write up at least six different logical diagrams of any project before writing any code… although they probably won’t write up logical diagrams because they can move from one layer of abstraction to another, on the fly, in their own head (or at least, they think they can).  Their first approach to most software projects will be to suggest a complete re-write in a language that doesn’t suck.  They will hate most programming languages, including any language that is proposed by anyone on their programming team that isn’t a Cabalist or a Professional.  Good Cabalists will work well with good DBAs, but bad Cabalists will annoy the hell out of them by offering suggestions on how to improve the database.  They will often argue with systems administrators or networking staff about technical limitations (of the systems or network) as if problems with deviation from RFCs are the technical staff’s fault.  They will likely prefer some sort of agile programming methodology, and have a strong preference for their particular favorite incarnation.  Cabalists typically will use Emacs (and have a very strong preference for a particular incarnation of emacs), and sneer at any IDE.

Common statement: “Only a complete idiot would do it that way.”

Class Five: The Theoretician

Closely related to The Cabalist, the Theoretician goes one step further.  They will have done one of the following: written a major software program in Assembly, written their own compiler in a language *other* than Assembly “just to do it”, written their own programming language, installed a C compiler on their HP calculator, contributed a driver or a chunk of the kernel to the Linux, FreeBSD, or NetBSD projects, or installed and run Plan 9 on their home network un-ironically.  These people are nearly useless in a team environment, as nobody else can understand them except a Cabalist, and Cabalists and Theoreticians rarely get along.  However, they also can be the sort of Free Electron that can re-write an entire application over a weekend.  Almost all Theoreticians use vi.

Common statement: “Yeah, I wrote about that in the 2001 issue of IEEE Interactions between Compilers and Computer Architectures.”

Class Six: The Weary Wise One

These guys and gals used to be one of the above classes, but have passed through programming Nerdvana into Enlightenment.  They will have opinions, but no longer care about the strength of those convictions.  They will re-write the entire code base, but only if necessary, or hack it if that’s the best way to get the job done.  They will cheerfully reuse another programmer’s code (even bad code), or buy an off-the-shelf component if that’s legitimately the best way to get the job done nearly on time, basically on budget, and with quality good enough to make everyone happy.  They will hate most operating systems, programming languages, and programming methodologies equally.  Typically they will prefer vi or emacs over an IDE, but will use an IDE if it gets the job done.

Common statement: “This is still better than my dot-com days.”

Posted November 17, 2010 by padraic2112 in humor, software, tech, Uncategorized

Shorter Answer, Ivan

I just read Ivan Ristić’s slides for his talk on “How to Render SSL Useless“, found via Luke O’Conner’s blog.  Thanks, Luke!

(Spoiler, here’s the shorter answer: if you use SSL/TLS, you’re probably not using it for the right reasons and you’re probably not getting the level of security you think you’re getting, because you’re probably doing it wrong.)

Ivan’s points boil down to this: SSL/TLS, by itself, is secure.  It’s all these implementation details that render it insecure in practice.  Ivan then offers eleven areas where SSL is “broken” in practice.

Here’s my issue with the slides: half of them don’t describe problems with SSL at all, and the other half describe behavior that’s built into the design of SSL itself.

Let’s go through the slides by point.

Ivan’s first contention is that self-signed certificates are bad.  Ivan argues that they’re insecure, they teach users to ignore warnings, and that it’s cheaper to get a “real” certificate than to use a self-signed one anyway.

Well, a self-signed certificate is certainly differently secure than one signed by a root CA, but whether it’s less secure or insecure is a completely different question (trusted authorities and exploitation scenarios deserve their own post, so I’ll leave it at this for now — edited to add — thank you, Ed Felten, now I don’t need to write this up).  The second contention is just silly: users don’t need to be trained to ignore warnings, they do it already.  The last is at best incomplete.  It requires a certain level of skill to deploy a service that relies upon a self-signed certificate, so arguing that “you have to maintain it” should be counted as part of the cost is mostly pointless.  You have to maintain any certificate, whether you sign it yourself or pay Verisign to sign it for you.  If I have to pay Bob the Apache Wizard to maintain my site and Bob knows how to generate a self-signed cert, it’s going to be cheaper for me to have Bob sign the cert than to pay Verisign to do it, because Bob is going to get his salary (or his packaged SLA payment) either way.

Ivan’s second contention is that private certificate authorities are bad.  The logic follows mostly along the lines of the previous point… it’s better for you to pay someone else to do this for you than it is to do it yourself.  Now, he has something of a point here.  Building a CA isn’t the same as self-signing a certificate; it takes a higher degree of knowledge to build the thing properly.  I would imagine that there are a number of CAs out there that are unnecessary and could easily be covered under one of the existing root CAs.  However, there are any number of completely legitimate reasons for running your own CA, and in any event I don’t think one-off CAs represent a big threat to the overall infosec domain.

Oh, and against both previous points: for-profit root CAs have issued insecure certs before, why should we trust them?

Points 3, 4, 8, 9, and 10 are all basically the same point: if your site needs to be encrypted some of the time in transmission, it really needs to be encrypted all of the time, period.  This is a good point (really should be a single point with examples, though), and I’m more or less with Ivan on this one, although I understand why it isn’t always the case.

Point 7 is that SSL sometimes isn’t used at all when it should be.  Not sure why this belongs on the list; that’s not a problem with SSL implementation, per se.  And I personally haven’t seen an unencrypted site that handles sensitive data in a long while, so I don’t know how germane it is anymore.

Point 11, and to a lesser extent 5, aren’t so much problems with SSL as they are problems with the couplings between SSL & DNS, pushed through the lens of user expectations.  DNS has had its own problems.

Finally, point #6 (using an EV certificate, as opposed to a normal SSL certificate) illustrates the problem I have with computer security engineering professionals.

Now, I haven’t seen the talk and I haven’t read any of Ivan’s blogging (I should, and I’m adding it to my blogroll now), so I can’t say that this is fair, but just reading the slides, here’s how I interpret the underlying context of this talk:

“SSL is totally secure, if you are using it in the totally most secure way and no other way, because we designed it to be totally secure if you use it in the totally most secure way.  Oh, but we also made it so that you could use it in all of these other ways, but DON’T DO THAT because you ruin our perfect design by using it in the non-perfect way!”

There’s a reason why I switched my research focus from infosec to disaster/crisis management, and this is it.  Information systems security designers have a tendency to draw a box in their head, and design a system that is secure inside that box.  If you use the tools they provide within the boundaries of that box, you’re golden, and if you don’t, you’re probably screwed.  But that’s not on them because they can only design out to the edges of the box.

The problems with this approach are that most systems don’t fit inside that box, the box itself often sits on top of a completely insecure table, and often the box itself has lots of little holes in it that are punched into it for various reasons.

Ignore those reasons!  Don’t use that functionality!  It’s bad!  But it’s necessary, that’s why we put it in there!  But you’re probably not doing it right, and it’s not necessary for you, so just pay someone else to do it!

If setting up your own CA is bad, then why is it good to have multiple root CAs?  Shouldn’t there be just one?  (no)

If EV certificates are the best, why do CAs offer regular certs?  (because)

If using incomplete certs is a problem, then why is it possible to generate an incomplete cert in the first place?  (because not all certs are certifying the same thing)

Heck, if self-signed certs are bad, then why do you have the ability to generate them in the first place?  (because in most practical cases, you’re looking for session security, not authoritative identification).
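That last distinction, session security versus authoritative identification, shows up right in the standard tooling.  Here’s a quick sketch with Python’s ssl module (my illustration, not anything from Ivan’s talk): the default context demands a CA-vouched identity, while the relaxed one keeps the encrypted session and drops the identity check, which is roughly the posture a self-signed cert gets you in practice.

```python
import ssl

# Default posture: encrypt the session AND authenticate the server
# against the trusted root CAs.  This is the box-approved usage.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname is True

# Self-signed-cert posture: the session is still encrypted end-to-end,
# but we no longer demand a CA-vouched identity.  You keep session
# security; you lose authoritative identification.
# (check_hostname must be disabled before verify_mode can be relaxed.)
relaxed = ssl.create_default_context()
relaxed.check_hostname = False
relaxed.verify_mode = ssl.CERT_NONE
```

A better posture for a known self-signed cert is pinning it directly with load_verify_locations, which gets you identification without a root CA, but the relaxed context above is the common case in the wild.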

Posted February 16, 2010 by padraic2112 in information science, security, software, tech, web sites

Interesting Mockup-Maker

Balsamiq.  This probably makes building quick mockups a lot easier than using Illustrator or Visio.  There’s a free download; I’m going to check it out…

Posted May 5, 2009 by padraic2112 in software, tech

Password Accepted

I’ve been absent from the blogosphere for a few days for two main reasons: I’ve been digging through piles of academic journals looking for references, and the new back end web 2.0 dashboard for wordpress is about as responsive as a pregnant whale.  I’ll pick up the pace this week, I think.

Some current news: your browser is insecure.  You probably knew that already, but it’s worse than you think.  Results of a Chapin Information Services study, from Ryan Naraine on ZDNet’s blog:

That nifty password management feature in your favorite Web browser could be helping identity thieves pilfer your personal data.

Yes, Virginia, Security *is* Important in Design!

Opera scores the best, and they got a whopping 33%, failing 14 tests and passing 7.  F-MINUS.
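For the curious, that 33% is just Opera’s pass rate over the 21-test battery:

```python
# Opera's score on the CIS password-manager tests: 7 passed, 14 failed.
passed, failed = 7, 14
total = passed + failed            # 21 tests in the battery
score = round(100 * passed / total)  # 33 (percent) -- best in class, somehow
```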

Posted December 16, 2008 by padraic2112 in security, software, tech

Installing Windows XP on a Dell Inspiron 1420 laptop

My earlier post on how to roll-back-a-Dell-Inspiron-518-to-XP gets quite a few hits, so I’m adding this one for a different model.  Unlike the previous post, this one concerns a laptop, and requires a few additional steps.

To perform this install, you will need:

  • a USB floppy drive
  • a USB flash drive
  • an XP installation CD
  • a working network connection
  • patience

Boot your laptop into Vista, then launch the Control Panel, and make a note of the following devices if they’re different from what I have listed here:

  • Video Device (in my case, this is an Intel Mobile 965, XP driver available here, file name R181739.exe)
  • Intel Mobile Chipset (XP driver available here, file name R153997.exe)
  • Ricoh Chipset – media card (XP driver available here, file name R141246.exe)
  • Modem (in my case, this is a Conexant HDA D330 MDC V.92 Modem, XP driver available here, file name R167368.exe)
  • Modem Utility – optional (XP version of the utility available here, for that Conexant modem, file name R148605.exe)
  • Network Devices (in my case, this is a Broadcom Netlink Fast Ethernet, XP driver available here, file name R155246.exe)
  • Wireless Devices (in my case, this is a Dell Wireless 1395 WLAN Mini-Card, XP driver available here, file name Dell_multi-device_A17_R174291.exe)
  • Bluetooth Devices (in my case, this is a Dell Truemobile 355 Bluetooth, XP driver available here, file name R127314.exe) – this one is tricky, there’s no link to it on the Inspiron 1420 page.
  • Audio Devices (in my case, a Sigmatel 92xx, XP driver available here, file name R171789.exe)
  • Dell Touchpad (the default XP driver will work, but there is added functionality you can get with the Dell driver, XP driver available here, file name R165804.exe)

Then connect your laptop to the internet, and download all of those files, saving them to your USB flash drive.  You’ll need those later.  If you have devices other than these (there are a lot of different configurations for the 1420), you may need to find the XP drivers for those devices on the Dell Support website for the Inspiron 1420.  Note, however, that if you miss something this is not a terribly big deal, as long as you get the wireless or wired network drivers correct, you can always connect to the Dell Support website at that link *after* you’ve installed XP and find the driver for your mystery device.
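If you want to sanity-check that everything made it onto the flash drive before you reboot, a throwaway Python sketch works; the filenames come from my list above (yours may differ), and the drive path is whatever your flash drive mounts as:

```python
import os

# Driver package filenames from the device list above (my 1420's config;
# substitute your own if your hardware differs).
DRIVERS = [
    "R181739.exe", "R153997.exe", "R141246.exe", "R167368.exe",
    "R148605.exe", "R155246.exe", "Dell_multi-device_A17_R174291.exe",
    "R127314.exe", "R171789.exe", "R165804.exe",
]

def missing_drivers(flash_drive_path, needed=DRIVERS):
    """Return the driver files not yet copied onto the flash drive."""
    present = set(os.listdir(flash_drive_path))
    return [f for f in needed if f not in present]
```

An empty list back means you’re clear to wipe the machine and start the install.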

Then connect your USB floppy drive (you’ll also need a floppy, btw), and download the XP mass storage driver for your laptop from the Dell Support website.  Run the executable, unpack the driver files to c:\temp\intel, and then copy the contents of that directory onto your floppy (iaahci.inf, iastor.sys, and the rest of the driver files; you don’t need the text files).  Then open your CD tray, insert your Windows XP installation CD, close the tray, and reboot your laptop.

At the BIOS loading screen, hit “F12” to pull up the boot order – the default is to boot from the hard drive.  Boot from the CD drive.  In a few seconds you’ll see “Hit any key to boot from CD…”, hit the keyboard (not too hard), and then the XP installation will begin.  At the bottom of the screen you’ll see “hit F6 to add a storage driver”, HIT F6.  The XP installation will load a few drivers, and then ask you if you want to add a storage driver.  Hit “S” to load the mass storage driver.  This will read the iastor file(s) off of the floppy drive, and prompt you with four options for mass storage drivers, two desktop drivers and two mobile ones.  Unless you’ve chosen two hard drives as an option, you want the Mobile AHCI driver, not the RAID driver (you’ll get an error if you choose the RAID driver and you’ll need to start over).

Assuming you’ve gotten this far (it could fail if the floppy drive or disk is broken, and you’ll have to create a new floppy from inside Vista and start over), you’ll move on to the next step of the installation.  Blow away all of the existing partitions, unless you want to keep the Dell Diagnostic partition (it’s the smallest Fat32 one).  Then install XP following your normal XP installation guide (there’s a ton available on the Internet, I’m not going to write up a specific one here today).  After the installation is complete boot into XP, connect your flash drive, and install the XP drivers for all the devices that you downloaded above… then enable the firewall (if your XP installation disk is pre-SP2), connect the XP laptop to the Internet, and download the four gajillion XP patches and update your laptop.

Posted October 24, 2008 by padraic2112 in hardware, OS, tech, Windows

Our Concept of “Intellectual Property” Is Seriously Broken, Exhibit A

ZDnet reports that Microsoft’s patent application has been approved as U.S. patent 7,415,666.  We’ll pause for the inevitable joke about numerology and Bill’s relationship with the netherworld.

The patent in question?  “A method and system in a document viewer for scrolling a substantially exact increment in a document, such as one page, regardless of whether the zoom is such that some, all or one page is currently being viewed”.

In other words, Microsoft has patented “Page Up” and “Page Down”, a “method and system” for which has existed since I first started playing around with computers over thirty years ago.

Posted October 13, 2008 by padraic2112 in news, software

Why Is There A Watermelon There?

Scott Aaronson over at Shtetl-Optimized describes a project for the summer of 2009 that I find very interesting.

The web app — tentatively called “Worldview Manager” — is intended to help people ferret out hidden contradictions in their worldviews.  Think of a kindly, patient teacher in a philosophy seminar who never directly accuses students of irrationality, but instead uses Socratic questioning to help them clarify their own beliefs.

The problem of hidden contradictions in people’s thought processes is a huge one.  I’ve mentioned here on my own blog (and on countless others in comment threads) that the failure of the U.S. educational system to teach logic is a major hole in brain training.  You see the consequences everywhere, but they are perhaps most blatantly obvious whenever you come across a discussion board or comment thread involving politics.

Training people how to think rationally is desperately needed, but this idea has some interesting implications itself in attacking the problem from the other end.  The biggest weakness I can see immediately is that the exams will be difficult to structure properly (something Scott acknowledges himself in the post).  In the comment thread, Gareth pointed out these two exams that follow the same idea that Scott has:

They’re somewhat simplistic, and I find myself critical of some of the assumptions (you can read the details in the comment thread of Scott’s post), but the FAQ does acknowledge some of my criticisms, and any way you slice it they’re still interesting.  Take ’em and post your results in the comments here (for the record, I scored “7% tension” on the philosophy health test and “2 hits” on the God exam).

I think it would be a fun project to work on, if you’re a CS student you should check it out.

Posted October 10, 2008 by padraic2112 in philosophy, research, software, web sites

A Nod of Praise

WordPress uses the Akismet spam detector to keep spam posts off of hosted blogs.  Akismet is developed by the same crew that develops the WordPress software itself.

I’ve been blogging here since March 23rd, 2007.  Since then, I have had exactly three comments show up in my “to be considered valid” comments queue that were likely spam (in fact, they were largely meaningless comments with a blog-linked poster name, so not exactly spam, just people trying to drum up a couple extra links)… and exactly one comment that showed up in my Akismet spam comments queue that was a real comment.  There are 499 published comments on this blog at the time of this writing.

That’s a damn fine “false positive” and “false negative” rate for any spam scanner.  I commend Matt and crew.
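To put a number on it, here’s a back-of-the-envelope, assuming (my assumption, not a published stat) that the 499 published comments plus the one rescued real comment are the whole universe of legitimate comments.  The false negative rate can’t be computed without knowing how much spam Akismet caught in total, but the false positive rate pencils out like so:

```python
# Akismet's track record on this blog, per the numbers above.
missed_spam = 3     # likely spam that landed in the moderation queue
flagged_real = 1    # real comment that landed in the spam queue
published = 499

legit_total = published + flagged_real   # 500 legitimate comments seen
false_positive_rate = flagged_real / legit_total
print(f"false positive rate: {false_positive_rate:.2%}")  # prints "false positive rate: 0.20%"
```

One real comment wrongly flagged out of five hundred.  Not bad at all.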

Posted October 1, 2008 by padraic2112 in software, tech, web sites

Oopsy Daisy.

From RedHat:

Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action. While the investigation into the intrusion is on-going, our initial focus was to review and test the distribution channel we use with our customers, Red Hat Network (RHN) and its associated security measures. Based on these efforts, we remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk. We are issuing this alert primarily for those who may obtain Red Hat binary packages via channels other than those of official Red Hat subscribers.

In connection with the incident, the intruder was able to sign a small number of OpenSSH packages relating only to Red Hat Enterprise Linux 4 (i386 and x86_64 architectures only) and Red Hat Enterprise Linux 5 (x86_64 architecture only). As a precautionary measure, we are releasing an updated version of these packages, and have published a list of the tampered packages and how to detect them at

Man, would I love to see how package signing occurs at Red Hat.  I’m going to guess that they’re doing it wrong.

Basically, someone’s managed to get a trojaned SSH package signed by the RH signing authority.  Since they were (apparently) unable to get the compromised package into the Red Hat Network, all RHEL customers that use RHN for their updates should be okay.

However, if you use… say… CentOS in your enterprise, it’s probably a good idea for you to take a long hard look at your package repository.  You can’t rely on “hey, signature checks out!” to verify trustworthiness.
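Checking your repository against Red Hat’s published list of tampered packages is straightforward, for what it’s worth.  A sketch, with a placeholder hash standing in for the real list:

```python
import hashlib

# SHA-256 hashes of the known-tampered packages.  The value here is a
# placeholder for illustration; use the list Red Hat actually published.
TAMPERED_SHA256 = {
    "deadbeef" * 8,  # fake 64-hex-char hash, NOT a real indicator
}

def is_known_tampered(path):
    """Hash the package file and check it against the bad list."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in TAMPERED_SHA256
```

Note the caveat from above still applies: this only catches the packages Red Hat knows were tampered with.  A clean result is necessary, not sufficient.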

This is one of those security announcements that is of small immediate practical impact, but worrisome in implications.  How does RH sign their packages?  How did this occur?  How do we know it won’t occur again?  I expect the answers to those questions are (a) we’re not going to tell you (b) we’re not going to tell you and (c) trust us, nothing really bad happened this time, right?  Slashdot thread.

Full disclosure time, boys.  Who screwed up?

Here’s an interesting blog post detailing… well, not much.

The risks mean we’ve had to be really careful who has signing privileges with the legacy key and how the key signing is handled.

The new key, in contrast, was created in a hardware cryptographic device which does not allow the unprotected key material to be exported. This means we can give authorised signers the ability to sign with the key, but no one can ever get access to the key material itself. This is an important distinction. If for example a current authorised signer switches roles and is no longer responsible for package signing we can instantly revoke their rights and know that they no longer have the ability to sign any more packages with that key.

Two immediate possibilities spring to mind: someone was able to socially engineer a signer into signing a package, or the process has some level of automation in it, and the attacker was able to inject the bad package somewhere in the automation.  Either way, it illustrates the point that cryptography isn’t generally the hardest part of security; process is the sticky wicket.
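You can see the point even in a toy.  The sketch below uses an HMAC as a stand-in for Red Hat’s real GPG-based signing (my simplification, purely for illustration): verification tells you that the key signed the package, and says nothing about whether the thing fed to the key deserved signing in the first place.

```python
import hmac, hashlib

SIGNING_KEY = b"pretend this lives in an HSM"

def sign(package: bytes) -> bytes:
    # The key never inspects WHAT it's signing; that's the process's job.
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()

def verify(package: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(package), signature)

good = b"openssh package contents"
trojan = b"openssh package contents + backdoor"

good_sig = sign(good)
# The crypto works fine: tampering AFTER signing is always caught...
assert verify(good, good_sig)
assert not verify(trojan, good_sig)

# ...but if the attacker gets the trojan INTO the signing pipeline,
# the resulting signature is exactly as valid as any other.
trojan_sig = sign(trojan)
assert verify(trojan, trojan_sig)
```

Which is exactly why “hey, signature checks out!” told CentOS users nothing useful last week.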

Posted August 22, 2008 by padraic2112 in linux, news, OS, security, software, tech