This website is a great resource for the not-yet-a-power-user crowd, and a pretty good resource for even the power-user crowd. I recommend browsing through it.
Archive for August 2007
Tech.co.uk reports that 1TB optical disks the size of CDs/DVDs have been developed by a company called Mempile in Israel. While obviously it will be a while before these (or something like them) become consumer-grade products, muse a bit on that 1TB number and read this report about broadband connections in the US versus some news today from Finland.
I’ve written about broadband access in the US before. To summarize what the people over at CommunityWireless.org already know, and Bob Cringely has written about before, it stinks on ice. There are some pretty major differences (both political and geographic) between Finland and the U.S., admittedly. Our problem isn’t just that it’s difficult for a U.S. citizen to get broadband connectivity faster than 3Mbps while you can pretty easily nab 20Mbps in France or a whopping 61Mbps in Japan. Our problem is that we’re behind the curve, and in spite of the fact that we’ve recently moved up marginally in some of the standings, the leaders are not only already faster than we are, they’re also getting faster faster than we are. Google makes headlines for rolling out 802.11b/g coverage in Mountain View; in the meantime, Samsung has pocketed a nice package of patents rolling out WiBro in South Korea. The WiBro/WiMAX story in South Korea doesn’t look like an astounding smash at first glance, but those OFDM patents represent leader-of-the-pack development that will undoubtedly pay big dividends in the future.
In 2002, I had DSL coverage at 768Kbps and my hard drive at home was 20GB. In 2007, I have 1.5Mbps, and my hard drive is 250GB. My data rate has doubled in five years while my data storage has gone up 12.5 times. Already I rely heavily on sneakernet and various data storage media to move bits around. In another five years, we’ll be relying more on bike messengers and FedEx to move our data around *physically* overnight, because the staggering amounts of data people acquire simply won’t be able to fit through an electronic transfer.
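To put some numbers on that divergence, here's a quick back-of-the-envelope sketch: how long would it take to push an entire hard drive through each year's broadband connection, assuming an ideal, sustained transfer? The drive sizes and link speeds are the ones from this post; everything else is simple arithmetic.

```python
def transfer_days(drive_gb, link_mbps):
    """Days needed to move drive_gb gigabytes over a link_mbps link,
    assuming the link runs flat-out with no overhead."""
    bits = drive_gb * 1e9 * 8           # drive capacity in bits
    seconds = bits / (link_mbps * 1e6)  # ideal, sustained transfer
    return seconds / 86400              # seconds -> days

print(round(transfer_days(20, 0.768), 1))   # 2002: 20GB over 768Kbps -> 2.4 days
print(round(transfer_days(250, 1.5), 1))    # 2007: 250GB over 1.5Mbps -> 15.4 days
```

Two and a half days in 2002, more than two weeks in 2007 – and that's the best case, ignoring protocol overhead and the fact that nobody gets to saturate their uplink all day. No wonder sneakernet still wins.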
Just about everyone knows about the Skype outage of last week, caused by a combination of high volume and client machines rebooting due to Microsoft’s Patch Tuesday. Skype was quick to issue a mea culpa making a point of absolving Microsoft of any blame, which was awfully decent of them in light of how many companies throw blame around rather than admit to design flaws. Although there was a significant amount of public discomfort with the outage, for the most part it appears that most users aren’t going to abandon the low-cost service over reliability concerns.
Now, there’s plenty of blog traffic talking about this incident, just like the recent Gmail outages. Someone parsing the titles of the blog entries would notice a larger percentage of negative responses than positive ones. This isn’t much of a surprise, really, since generally speaking people are more likely to blog about stuff they find irritating – so even though the blogosphere seems generally pretty angry about the Skype outage, the overall population of Skype users is taking it in stride. But I digress – I’m not going to blog about Skype’s design problems (or lack thereof) or the problems with Microsoft’s Patch Tuesday policies here. What I find interesting about this outage is how it represents a perfect example of Consumer IT in the Workplace.
Skype itself is not designed as what I would call an enterprise-level service. The infrastructure supporting the service is essentially entirely P2P, which means that subscribers depend not upon some central Skype-run cluster of machines, but upon the machines of the userbase itself. As this incident shows, there are design limitations implicit in that model, and Skype’s Terms and Conditions reflect them. That doesn’t mean businesses can’t use Skype as a communications method (indeed, the low cost can be a major benefit to small companies and startups), but the limitations of the service need to be taken into account, and the IT manager needs to keep track of the hidden costs to determine whether the business is actually saving money.
I’m sure that during the Skype outage there were high-priority emails and cell phone calls and text messages sent from Skype users to their organizational IT support. This is going to put stress on your IT shop, particularly when “first contact” trouble reports may consist of something as nebulous as “the Internet isn’t working” or “my computer isn’t working”. Time is going to be spent tracking down what’s actually broken and why, and it may take a significant amount of time before someone on your IT staff figures out that this is a known outage with the *service*, instead of a problem with your organization’s computers or network or internet service provider. If you’re the IT manager, you need to log this sort of information – Skype isn’t as “low cost” as the monthly billing amount may lead your executives to believe. Like I said in the last post on Consumer IT, this doesn’t mean that using this technology is a bad idea (you may well still be saving a tidy sum of money), but if you’re not reviewing these costs, summarizing them, and reporting them to your non-IT brethren, the rest of the organization will never understand them.
(There is an additional consideration of expectation here, which I’ll get into some other day).
From the article:
In fact, end users are so confident in their ability (and authority) to bring whatever Web-based applications and consumer devices they want into the workplace, 31% of them claim to circumvent IT altogether, according to a recent survey from Boston-based Yankee Group Research Inc. Only 13% said IT had complete control over their PCs.
Some interesting tidbits of knowledge are buried in this paragraph. Of course, I don’t know the details of the study, so I’m not sure where these surveyed end-users work. However, assuming the study has some general level of rigor, it says that at roughly 87% of the institutions out there (100% minus the 13% reporting complete IT control), workstations aren’t locked down. There are lots of potential reasons for this, and although the security wonk in me shudders at this figure, the practical part of me agrees with this quote later in the article:
Holbrook is advocating an approach similar to Garbidakis’. He said IT has to find a way to get control of consumer technology adoption. He said locking down PCs won’t work. Ignoring consumer technology is just a mistake. And expecting IT to take on full responsibility for consumer technology is also doomed to failure because there’s so much of it.
Completely “locking down” a PC (as opposed to just performing basic security operations) is a gargantuan task for any reasonably-sized organization, and given the general shabbiness of the OS market and the implications that has for your data security, it’s an ongoing task with little practical return. Of course, your general workstation security plan should take into account the sensitivity of data accessed by the user, but on the whole, “locking down” a PC generally is only worth the effort for a limited number of use cases.
Holbrook is correct that IT has to find a way to get some level of control over consumer technology adoption, and he’s also more or less correct that the Iron Grip of security doesn’t work. What the article doesn’t say is what I mentioned in this post a few days ago: you can’t solve a people problem with technology. Make no mistake, this is a people problem; the fact that it involves technology is only relevant in making it a people problem the IT department needs to solve.
At some places I’ve worked, I’ve had to deal with a fairly open computer-use policy. Users could (and did) do quite a bit of mangling to their own workstations. Unlike the survey respondents, however, I’ve only very rarely been surprised by what those users have done to their machines – most of the time I have a very good idea what they’re doing and why. I haven’t accomplished this, for the most part, by using techie monitoring tools. I’ve accomplished it by walking around and talking to people… people in HR, people in Accounting, people in BizOps, people, people, people. If you don’t talk to people all the time, you can’t get in their shoes. You’re never going to understand what their problems actually are, only the symptoms that they report to you when something breaks or they need something new (which, all too often, aren’t even close to the full story you need to provide a solution).
“But Pat,” you may argue, “this just doesn’t scale. You’re big on scalability; how can the CIO or VP of technology walk around and talk to people in an organization of 2,000 people? What about 10,000 or 50,000?” Good point. Ground-pounding is still necessary (in my opinion) even for the bigwigs and muckity-mucks, but of course you can’t get a full picture by yourself when you have that many people relying upon your IT department. The obvious answer here is delegation: if you’re in charge of an IT department, you have to enable your middle managers and front-line worker bees and soldiers to do end-user reconnaissance. You need “people-people” in your IT organization, not just gear-heads. Don’t promote just the most technical or most business-savvy people to middle management; also promote those with charisma and empathy. Reward your troops for interacting with their customers. Give out Starbucks cards to your desktop support people and ask them to take end-users out for coffee and chew the fat. Form IT Steering Committees that include people who *do* work, instead of just those people who *manage* work.
The End-User isn’t just your customer; (s)he is also an integral part of how IT works in your organization. You need to understand them.
Related to this earlier post:
You’ll note the Historical Trends scenario in Figure ES-1 (page 13) showing that if data centers continue on “business as usual” power consumption trends for the next four years, we’ll hit an annual electricity use of 120+ billion kWh/year. Yes, that’s 120 billion with a “b”, or about five times the energy consumption required to power data centers in 2000 at the height of the dot-com boom. Given the fact that electricity costs will be rising, we’re talking some serious motivation for organizations to cut down cluster power consumption per unit of processing power.
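To make that figure a bit more concrete, here's a hedged back-of-the-envelope on what it implies. The 120 billion kWh/year projection and the "five times 2000 levels" ratio are from the report; the $0.10/kWh electricity price is purely my assumption for illustration, not a figure from the report.

```python
# Figures from the report's Historical Trends scenario
projected_kwh = 120e9            # projected annual data-center use, kWh/year
ratio_vs_2000 = 5                # "about five times" the 2000 level

# Implied 2000 baseline, per the report's own ratio
implied_2000_kwh = projected_kwh / ratio_vs_2000
print(f"{implied_2000_kwh / 1e9:.0f} billion kWh/year in 2000")

# Assumed average commercial electricity rate (my assumption, USD/kWh)
price_per_kwh = 0.10
annual_cost = projected_kwh * price_per_kwh
print(f"${annual_cost / 1e9:.0f} billion/year electricity bill")
```

Even at a modest assumed rate, that's a roughly $12 billion annual electricity bill for the industry, before you count cooling inefficiencies – plenty of motivation to chase performance per watt.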
Every six months or so, some newspaper or financial magazine runs an article reporting on a study showing massive financial losses to business from employees “playing around” on the Internet. Using computers for something other than business has been referred to as “cyberloafing” or “cyberslacking” (there’s a Wikipedia article on the topic). These articles are usually dragged out when someone wants to justify spending IT dollars monitoring their employees. The target in today’s Sydney Morning Herald is Facebook.
From the article:
Richard Cullen of SurfControl, an internet filtering company, estimates the site may be costing Australian businesses $5 billion a year. “Our analysis shows that Facebook is the new, and costly, time-waster,” he said.
The report calculates that if an employee spends an hour each day on Facebook, it costs the company more than $6200 a year. There are about 800,000 workplaces in Australia.
Now, aside from an obvious conflict of interest in the source here (a company that sells filtering software may not be the most unbiased source), this analysis is lacking in detail and seems laughably basic. $6,200×800,000=$4.96 billion (Australian), sure. However, this assumes one worker per workplace wastes an hour per day, and, given that there are actually about 246 work days in the year (365 days – 104 weekend days – 15 vacation/holiday days = 246), that each slacker makes roughly $25/hr.
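The arithmetic behind the headline number is easy enough to reproduce. This sketch just replays the quoted figures ($6,200 per employee, 800,000 workplaces) against the working-days count above to show the assumptions baked into the $5 billion claim.

```python
# Working days per year: strip weekends and vacation/public holidays
work_days = 365 - 104 - 15               # = 246

annual_cost_per_worker = 6200            # quoted cost of 1 hr/day on Facebook
hourly_wage = annual_cost_per_worker / work_days   # implied wage, ~$25/hr
print(round(hourly_wage, 2))

workplaces = 800_000                     # quoted number of Australian workplaces
total = annual_cost_per_worker * workplaces
print(total)                             # 4,960,000,000 -> the "$5 billion"
```

In other words, the headline figure only holds if every single workplace in Australia employs exactly one $25/hr Facebook addict – which is precisely the kind of assumption the rest of this post picks apart.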
Let’s say that this is accurate, for the sake of argument. That is, you’re running a business in Queensland, and you fit this analysis… somewhere in your organization there’s a person making $25/hr who is spending an hour a day on Facebook. Think for a minute about what you’re actually losing, and what the costs are of remediation. First, employee time is not a fixed asset. If I hire someone to perform a set of job functions, and there is slack time during the day when those functions are not required, there is no “loss” if the employee is surfing around Facebook (or Digg or Google or whatever is the current Internet-time-waster-meme). If, on the other hand, this slack time does not exist, either the employee is falling behind (and I’m losing business) or someone else is doing the employee’s job. In either of these cases, however, the manager/supervisor of the employees in question ought to know that they have an unreliable employee. Blocking Facebook isn’t going to turn your slacker into a completely productive employee – they’ll find something else to waste an hour a day doing. Your real problem here isn’t your slacker employee, it’s your manager who isn’t managing or your supervisor who isn’t supervising.
Next, think about the hidden benefits of the “slacker activity”. If someone is surfing Facebook, they may actually be getting access to resources that make it easier for them to do their job. Aside from the obvious benefits to your recruiters of surfing a social networking site, think of the advantages to your IT staff of reading newsgroups or email lists or blogs. Think of the time your employees save by surfing the web between tasks to find a plumber or electrician for a problem at home; there may not be a direct benefit to you as an employer here, but getting a fix arranged for that burst pipe at home enables them to concentrate on their job, making them more productive.
Certainly, there are security issues with allowing your employees unfettered access to the Internet. Kevin Mitnick has written several books on the hazards of seemingly innocent information when it is accessible by social engineers, as have numerous other authors. There are other employee-employer issues, like sexual harassment claims if one of your employees is surfing porn at work and offending an officemate. There are indeed reasons to take steps to protect your institutional network from web sites hosting malicious content. When it comes to making your employees maximize their work hours, however, you can’t fix a people problem with a technical solution.
Manage your managers!
I generally find Wikipedia to be an eminently useful resource, but the open-edit nature does lead to occasional questions regarding peer review. Tools like Wikiscanner go a long way towards providing additional levels of credibility (and identifying the lack thereof) for the site. This should be incorporated directly into Wikipedia itself.