Facilities Design II

A follow-up on an earlier post about building machine rooms.

One of the additional difficulties in designing high-capacity server space is the problem of heat transfer. I’m not a mechanical engineer (and really, most people who read this blog aren’t interested in learning a couple of college semesters’ worth of thermodynamics and practical HVAC engineering concepts, not to mention industry standards). In a nutshell, if you’re trying to cool down something that’s really hot, you have limited options.
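
To put some rough numbers on that, here’s a back-of-the-envelope sketch in Python. The 1.08 sensible-heat rule of thumb for air and the 20°F temperature rise are my own illustrative assumptions, not figures from any vendor.

# Back-of-envelope airflow needed to carry away a rack's heat load.
# Assumes the standard sensible-heat rule of thumb for air near sea level:
#   q [BTU/hr] ~= 1.08 * CFM * delta_T [deg F]
# Illustrative numbers only, not vendor specifications.

KW_TO_BTU_HR = 3412  # 1 kW of IT load dissipates roughly 3,412 BTU/hr of heat

def cfm_required(rack_kw, delta_t_f=20.0):
    """Airflow (CFM) needed to absorb rack_kw at a given air temperature rise."""
    btu_hr = rack_kw * KW_TO_BTU_HR
    return btu_hr / (1.08 * delta_t_f)

for kw in (5, 17, 25):
    print("{:2d} kW rack, 20 deg F rise: ~{:,.0f} CFM".format(kw, cfm_required(kw)))

# A 25 kW rack works out to roughly 3,900 CFM, which is far more air than a
# raised-floor tile or two can deliver. That gap is why close-coupled cooling
# products exist at all.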

There are a few products out there that attempt to help you solve this problem. Emerson/Liebert produces a contained server enclosure called the XD, in two configurations (25kW and 17kW). Rittal Corporation produces a modular refrigerator/server cabinet system called the LCP+ that can be configured in a variety of ways. Of course APC has its own solution called InfraStruXure that also handles cooling in an integrated fashion.

None of these solutions is low cost, on the face of it. On the other hand, buying a pair of XDs (or a row of LCP+ units with the Rittal cabinet enclosures) is much more practical for the purposes of chilling a few racks (or even a smallish datacenter) than trying to retrofit an existing building. Using one of these solutions makes it pretty easy on your facilities manager: bolt the sucker to the structure of the building, hook up a 3/4″ chilled water pipe and return, and run a big power circuit into the room. This (as expensive as it may be) is probably still going to be significantly cheaper than trying to build out a small datacenter in a converted closet in the back corner of your leased office space. And you can take it with you when your lease is up, which you probably won’t bother to do if you’ve stuck a 35-ton refrigeration unit in a retrofitted room.

We’re building a new building at my place of employment, and I’ve been working on fitting 25 racks with a design parameter of 25kW per rack into a space that’s about 1,000 square feet. We’ve looked pretty exhaustively at both the Rittal and the Emerson packages, and all other things being equal, here’s my considered opinion.
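
For scale, here’s the same sort of back-of-envelope arithmetic applied to the room above. These are my own rough numbers, not anything out of the engineering drawings.

# Rough totals for the room described above: 25 racks x 25 kW in ~1,000 sq ft.
# Illustrative arithmetic only; the real sizing belongs to a mechanical engineer.

RACKS = 25
KW_PER_RACK = 25.0
FLOOR_SQFT = 1000.0

KW_TO_BTU_HR = 3412        # 1 kW ~= 3,412 BTU/hr
BTU_HR_PER_TON = 12000     # 1 ton of refrigeration = 12,000 BTU/hr

total_kw = RACKS * KW_PER_RACK                  # 625 kW of heat at full load
total_btu_hr = total_kw * KW_TO_BTU_HR          # ~2.13 million BTU/hr
tons = total_btu_hr / BTU_HR_PER_TON            # ~178 tons of cooling
watts_per_sqft = total_kw * 1000 / FLOOR_SQFT   # ~625 W per square foot

print("Design heat load : {:.0f} kW ({:,.0f} BTU/hr)".format(total_kw, total_btu_hr))
print("Cooling required : ~{:.0f} tons of refrigeration".format(tons))
print("Heat density     : ~{:.0f} W per square foot".format(watts_per_sqft))

Something in the neighborhood of 625 W per square foot is several times what a conventional CRAC-and-raised-floor room is typically designed around, which is a big part of why close-coupled products like the ones above end up on the table at all.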

If you’re building a data center from scratch, both of these solutions are pretty damn good, and they both have minor advantages and disadvantages in their design. In my opinion (again, no engineer here), the Emerson product is slightly better engineered not to fail as a standalone unit, while the Rittal product is designed to fail more modularly and gracefully. What that means for your organization depends on your reliability requirements. Both of them let you fit high-power compute clusters into a very small space. The Emerson product is less flexible than the Rittal product since it is self-contained, but the flip side is that the Emerson product is much easier to add on in smaller increments; if you’re planning on adding compute capacity at a rate of one rack every six months, it’s easier to buy one Emerson 25kW unit every six months than to buy a row of Rittal cabinets with LCP+ units every year and a half. On the other hand, if four years pass and you need to upgrade your cooling capacity because you just swapped out 42 1U dual-core 2.6GHz machines for 42 1U ten-core 4.8GHz machines, you can buy one or two more LCP+ units and tack them onto your enclosure. You can’t really do that with the Emerson solution.
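
To make the increment argument concrete, here’s a small sketch of the two purchasing patterns over four years of growth. The size of a Rittal “row” purchase (three racks / 75 kW of LCP+ capacity) is a made-up figure for illustration only, not a vendor number.

# Illustrative comparison of purchase increments over four years, adding one
# new 25 kW rack every six months.  The per-purchase "row" size of 75 kW is
# an assumption invented for this sketch, not a Rittal specification.

RACK_KW = 25

def installed(month, increment_kw, interval_months):
    """Cooling capacity purchased so far, buying one increment per interval."""
    purchases = month // interval_months + 1
    return purchases * increment_kw

for month in range(0, 49, 6):
    demand = (month // 6 + 1) * RACK_KW
    per_rack_units = installed(month, increment_kw=25, interval_months=6)
    row_purchases = installed(month, increment_kw=75, interval_months=18)
    print("month {:2d}: demand {:3d} kW | per-rack buys {:3d} kW | row buys {:3d} kW"
          .format(month, demand, per_rack_units, row_purchases))

# The per-rack pattern tracks demand exactly; the row pattern front-loads
# capacity that sits idle for a year or more at a time.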

They both work in particular scenarios, depending upon your maximum available power, your chilled water supply, and the number of machines you want to power up (and how often you want to replace them, or add more). I have a slight preference for the Rittal units because the heat exchanger is on the side instead of the bottom. With the Emerson XD solution, your 42U rack is elevated about 14″, which means racking stuff up at the top of the rack requires a lift or a platform; or an employer willing to violate OSHA regulations and an employee strong enough to lift a 100 lb server over his or her head. On the other hand, if square footage is your constraint, you can fit more XDs in the same space that you’d have to dedicate to the LCP+ solution. Both sets of sales guys were excellent, friendly, and fairly responsive when it came to getting me information.

If you’ve landed here from a blog search looking for more information about these sorts of solutions, drop me a comment.

Posted April 1, 2008 by padraic2112 in facilities, hardware, tech, work

4 responses to “Facilities Design II”

  1. Hi,

    I am an engineer, and a few things not mentioned here are the efficiency of the cooling units, the cost of operation, the cost of installation, ease of maintenance, the ability to maintain specific temperature tolerances across the entire face of the rack with dynamically changing heat loads (such as those seen with power-on-demand features enabled in new blade server chassis), and the ability to realize the installed and designed capacity of a cooling unit.
    Efficiency is a function of how much energy is used to remove the heat from a space. The highest efficiencies come from closely coupling the cooling to the rack and eliminating, or at least greatly reducing, the possibility of air recirculation and/or short circuiting from the supply of the cooling unit to the return.
    Total elimination of short circuit paths is the only way to guarantee the realization of the total capacity of a cooling unit.
    Efficiency across a dynamic heat load is best achieved with variable speed fans; after all, the blower motors are the largest electrical energy load in any cooling unit, even more than the compressors.
    Chilled water type systems provide the highest energy efficiencies in medium to large computer rooms with high heat densities.
    I have looked carefully at all these options from XD/XV, Rittal, and APC, and in my opinion, which many HVAC mechanical engineers and contractors I know share, APC offers the lowest-cost unit and the ability to deploy and, more importantly, RE-DEPLOY units, add capacity as needed, and provide cooling unit redundancy at the rack level and the row level. Redundancy even at the pipe level. The In-Row RC units are the easiest to service, maintain, and replace, and they run about $7000 each. Several engineering firms I work with here in the Washington DC area have looked closely at these high density cooling solutions from APC, as well as the Rittal and Liebert systems. A couple of divisions of govt agencies I know of, the Veterans Benefits Administration and NIH agencies within the US Dept of Health and Human Services, have standardized on APC’s In-Row cooling for regional service centers. APC In-Row RC units are being installed now at the National Library of Medicine, where they are doing 12-28 KW per rack.
    Have you talked to an APC engineer? With their hot aisle containment system and chilled water In-Row RP or In-Row RC you can get the highest efficiency, lowest cost of ownership, and LEED points for the new design space, and achieve the highest level of redundancy with the smallest footprint. It’s easy to design for redundancy with APC’s In-Row, and they can be used with anyone’s rack system, no matter how tall the rack. Who wants to be forced to decide what business services they compromise when one of their cooling units fails? Pretty small footprint per ton, and you don’t have to provide space for a refrigerant pump package.
    We recently modelled 42 KW per rack with dynamic loading, and took power in 44 high-density racks from 3 to 33 KW in 2.8 minutes. APC’s In-Row was able to maintain the air inlet temperatures within 3 degrees. (Temperature rate of change is a design parameter for the air intake of blade servers.) I think they are the easiest to design with and maintain, and they give the data center manager and facilities manager the easiest thing to work around, work on, and work with after the contractors are gone. CFOs really like the cost savings, and the EPA and DOE really like the energy efficiency.

  2. Chuck ->

    > I’m an engineer

    > I have looked carefully at all these options from XD/XV,
    > Rittal, and APC, and in my opinion

    Generally speaking, it’s a good idea when commenting on blogs to offer a disclaimer about your place of employment; otherwise people read your comment, say to themselves, “Hm… this guy might have some good points here”, but then google your name and say…

    “Oh, he works for APC, and he didn’t mention that. Hmm.”

    http://forums.isxusergroups.com/profile.jspa?userID=7
    http://www.linkedin.com/pub/4/509/384

    Doing it from a company machine sort of belies the possibility that you’re doing this on your own time, too:

    OrgName: American Power Conversion Corp.
    OrgID: APCC
    Address: 132 Fairgrounds Rd.
    City: West Kingston
    StateProv: RI
    PostalCode: 02892
    Country: US

    NetRange: 159.215.0.0 – 159.215.255.255
    CIDR: 159.215.0.0/16
    NetName: APCC1
    NetHandle: NET-159-215-0-0-1
    Parent: NET-159-0-0-0-0
    NetType: Direct Assignment
    NameServer: APCCORP.APCC.COM
    NameServer: NS1-AUTH.SPRINTLINK.NET
    NameServer: NS2-AUTH.SPRINTLINK.NET
    NameServer: NS3-AUTH.SPRINTLINK.NET
    Comment:
    RegDate: 1992-06-10
    Updated: 2001-08-28

    RTechHandle: ZA101-ARIN
    RTechName: American Power Conversion Corp.
    RTechPhone: +1-401-789-5735
    RTechEmail: is.security@apcc.com

    # ARIN WHOIS database, last updated 2008-10-08 19:10
    # Enter ? for additional hints on searching ARIN’s WHOIS database.

    If you don’t want to be accused of bias, you should probably disclaim your affiliation at the beginning, or your credibility is going to be suspect. I did, in fact, look at APC equipment. It’s not bad stuff at all, but there are a number of reasons why I chose not to go that route, most of which have to do with the base use case for the room that I have.

  3. Oh snap. Astro that turf, APC.

  4. Oh, I work for APC, but I’d say the same thing even if I didn’t. I also am quoting engineers who work for the firms GHT, Hankins Anderson, and others, who don’t work for APC. Sorry guys, I don’t blog, so I don’t know the rules :) I’ll certainly remember it in the future. I hope your cooling design performs to your expectation.
