Over the course of the last million years of computer history (that’s the last 40 years, in calendar history), the IT world has gone from running all of an organization’s computing needs on some big giant machine maintained by a couple of guys wearing ties and rubber-soled sneakers to running distributed computing centers, using tens or hundreds or thousands of nodes. The ubiquity of hot-swappable pizza-box style servers, with their slightly-better-than-consumer-grade hardware components, enabled corporations like Google, Yahoo, etc., to scale up their clusters easily. Studies rolled in from various sources showing that replacing a few giant boxes with scads of cheaper machines enabled companies to save bags and bags of money… there’s even an acronym for it (RAIS – Redundant Array of Inexpensive Servers), which as everyone in the geek world knows is the first sign that you’re really entering the world of “best practices”. Distributed computing has become its own area of study in the IT world, with researchers trying to figure out the most efficient way of leveraging commodity off-the-shelf hardware. People have written doctoral dissertations on various methods of making these clusters work.
It looks like we’re starting to hit the plateau on ROI, however. Power costs are one issue: not just on the watt-per-CPU basis, but also the cost of running gigantic air handlers to keep what is essentially a stack of hair dryers blasting away 24/7 from raising the ambient temperature to the point where delicate silicon chips eat their own brains. Another factor, from an interoperability standpoint, is that you can now buy what is essentially a mainframe computer without also buying into a proprietary operating system.
Sure, all of these ROI studies rely upon base assumptions that may not be universally valid. However, this is just a reminder that “bang-for-your-buck” is a constantly moving target, and that what makes sense strategically today might not make sense tomorrow.