Data center facilities pro

ARCHIVED. Please visit our new blog at: http://itknowledgeexchange.techtarget.com/data-center/

Feb 23 2009   8:03 PM GMT

Data center high density vs. low density: Is there a paradox?



Posted by: Mark Fontecchio
Tags:
data center cooling
Data center power

Over at CIO, there is an article about a so-called data center power-density paradox. According to Michael Bullock, the CEO of a data center design consultancy called Transitional Data Services, if you don’t beware the power-density paradox, “it will ensnare you in an unappetizing manner.”

OK, so what is it? Bullock argues that as you increase the power density in your data center, “your efforts to free up space in your data center could boomerang, creating an even greater space crisis than you had before.”

Drilling down, the paradox says that as you use more dense equipment (which places greater demands on power and cooling), you will quickly reach an inversion point where more floor space is consumed by support systems than is available to your IT equipment – typically between 100 and 150 watts per square foot. This translates into greater capital and operational costs, not the reductions you were hoping to achieve.

How much space will you need?  At a power density of about 400 watts per square foot, plan to allocate about six times your usable data center space for cooling and power infrastructure.  So before you embrace high-density as a quick fix to your space problem, make sure you have adequate room to house the additional power and cooling infrastructure, sufficient raised-floor space to handle the increased airflow demands of hotter-running boxes and, of course, sufficient available power to operate the hungry systems and their support gear. If any of these resources are unavailable or inadequate, your data center will not support the increased power density. And you will have wasted your time and money.
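To put rough numbers on that claim, here is a quick Python sketch that simply treats Bullock's ratios as given - the roughly 100 W/sq ft inversion point and the 6x multiplier at 400 W/sq ft are his figures, not anything I have measured:

    # Rough footprint math using Bullock's rule-of-thumb ratios, treated here
    # as given rather than independently verified.
    SUPPORT_SPACE_RATIO = {
        100: 1.0,  # near the inversion point: support space roughly equals IT space
        400: 6.0,  # Bullock's figure: about 6x the usable IT space for power/cooling
    }

    def total_footprint(it_space_sqft, watts_per_sqft):
        """Usable IT space plus the support space the ratio implies."""
        return it_space_sqft * (1 + SUPPORT_SPACE_RATIO[watts_per_sqft])

    # Example: 1,000 sq ft of IT space at 400 W/sq ft implies about 7,000 sq ft overall.
    print(total_footprint(1000, 400))  # 7000.0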

Let’s drill down for real, though. Say you decide your data center needs more processing capacity: you want to expand to 1,024 processing cores, which works out to 256 quad-core processors. Should you use a power-dense design, such as blade servers, or spread that processing power out among 1U rack servers?

Hewlett-Packard’s c7000 BladeSystem enclosure is your blade server design. In a 42U rack, you can fit four c7000s, each of which can hold 16 HP ProLiant BL2x220c G5 server blades, for a total of 64 quad-core Intel Xeon 5400 processors. That adds up to 256 quad-core processors in a single rack. Each c7000 chassis demands 6 x 2,400 watts of power, or 14,400 watts. Multiply that by four chassis and you have 57.6 kilowatts of power in a single rack holding 256 quad-core chips.

Now, let’s use a spread-out design with HP’s DL100-series rack servers. A DL160 G5 rack server is 1U and, as configured here, holds one quad-core Intel Xeon 5400 processor. So it will take 256U, or about six 42U racks, to reach the same processing power as the single rack of blades. Each DL160 server demands 650 watts of power, so 256 of them demand 166.4 kilowatts of power.

To sum up:

  • Power-dense design (blade servers): 1,024 processing cores in 42U of space, drawing 57.6 kilowatts of power
  • Less power-dense design (1U rack servers): 1,024 processing cores in 256U of space, drawing 166.4 kilowatts of power
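
To make the arithmetic explicit, here is the same comparison as a quick Python sketch. The numbers are the nameplate figures quoted above (which, as the comments below point out, overstate real-world draw), and the four processors per blade simply follow from the 64-processors-per-chassis count:

    # Back-of-the-envelope comparison using the nameplate figures quoted above.

    # Blade design: four HP c7000 enclosures of BL2x220c G5 blades in one 42U rack
    chassis_per_rack = 4
    blades_per_chassis = 16
    procs_per_blade = 4            # 64 quad-core processors per chassis / 16 blades
    watts_per_chassis = 6 * 2400   # six 2,400-watt power supplies (nameplate)

    blade_procs = chassis_per_rack * blades_per_chassis * procs_per_blade
    blade_space_u = 42
    blade_power_kw = chassis_per_rack * watts_per_chassis / 1000

    # Rack-server design: 1U HP DL160 G5, one quad-core processor per server
    servers = 256
    rack_procs = servers
    rack_space_u = servers         # 256U, roughly six 42U racks
    rack_power_kw = servers * 650 / 1000

    print(f"Blades:     {blade_procs} processors, {blade_space_u}U, {blade_power_kw:.1f} kW")   # 256, 42U, 57.6 kW
    print(f"1U servers: {rack_procs} processors, {rack_space_u}U, {rack_power_kw:.1f} kW")      # 256, 256U, 166.4 kW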

According to this, there is no power-density paradox. If you use power-dense equipment, you will use less space and less power.

Now, I realize that cooling a single rack of blade servers would be a ridiculously difficult chore, and would take a lot more effort than cooling a single rack of 1U servers. But comparing one blade rack to one rack-server rack isn’t apples to apples; the equivalent-processing comparison is one blade rack against six racks full of rack servers.

Bullock’s point is not lost. If you have a rack of 1U servers, don’t expect to be able to convert that rack to blade servers and keep the same level of power and cooling infrastructure you presently have. It won’t happen. But that’s a comparison of more processing power to less processing power. Comparing designs with equivalent processing power yields no paradox, at least on the power side of the equation.

The cooling side of the equation is a different story, and can be complicated by factors such as airside economizers, which can cool less-dense data centers but can’t cool a 57.6 kW rack. For example, if you spread your IT equipment out enough, you might be able to eliminate mechanical chillers altogether. That could cut down not only on space, but on cost as well (which is what matters in the end). Also, your raised floor might be able to cool six racks of 1U servers with normal CRAC units, but you might need to convert to overhead or liquid cooling to cool a single 57.6 kW rack properly.
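
To put a rough number on that difference, here is one more sketch using the standard sensible-heat approximation (BTU/hr ≈ 1.08 x CFM x delta-T in °F). The 20-degree delta-T and the nameplate loads are my own assumptions for illustration, not figures from Bullock or HP:

    # Airflow needed to remove a rack's heat load, using the standard
    # sensible-heat approximation: q [BTU/hr] = 1.08 * CFM * delta_T [F].
    WATTS_TO_BTU_HR = 3.412

    def required_cfm(load_watts, delta_t_f=20.0):
        return load_watts * WATTS_TO_BTU_HR / (1.08 * delta_t_f)

    print(f"{required_cfm(57_600):,.0f} CFM for the single 57.6 kW blade rack")        # ~9,100 CFM
    print(f"{required_cfm(166_400 / 6):,.0f} CFM per rack when spread over six racks") # ~4,400 CFM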

In any event, the issue is not as simple as Bullock makes it out to be. Power-dense equipment will not always lead to more data center power and cooling equipment. Oftentimes, it will lead to less when matched up against a comparable rack-server design.

7  Comments on this Post

 
  • MBullock
    Thanks Mark, I agree with your points and don't see them as contradicting my original blog or the power density paradox white paper it is based on (http://transitionaldata.com/insights/TDS_DC_Optimization_Power_Density_Paradox_White_Paper.pdf). Unfortunately it is difficult to cover everything in a 500-word blog, and my intent was just to create awareness for those who have less grasp of the thermodynamics involved. Yes, higher-density systems will pay dividends (recovered space and more efficiency per CPU) until you bump up against the limits of currently available power and cooling. Once you approach those limits, there are some steps you can take to incrementally improve the situation - some of which are covered in the white paper (ultrasonic humidification, VFDs, economizers, etc.). There are also some more exotic approaches that do not have the same ROI. The primary point I was trying to make is simply that as you increase density, at some point you will need more power and more cooling to continue to expand. And as you do, this will cause your total space to increase as you account for additional support space - more CRACs, larger-capacity backup generators, fuel, etc. If your CRACs must be positioned on the raised floor, this will also decrease your usable space. My firm, TDS (http://www.transitionaldata.com), understands the impact that high density has on power/cooling efficiency - and usable space. We recently earned one of our clients a $454,000 rebate check from NSTAR, which I'll write about in my next blog on CIO.com (http://advice.cio.com/blogs/data_center_expert).
  • Est412
    Mark, you've said: "... Each c7000 chassis demands 6 x 2,400 watts of power, or 14,400 watts..." Don't half of the six chassis PSUs work as hot spares? So the real maximum power demand of a c7000 chassis is 7,200 W? Or am I wrong? P.S. Sorry for my English...
  • Mark Fontecchio
    Est412: I'm honestly not sure of the answer to your questions. I've put in a request to HP to get it, and as soon as I know, I'll let you know. Thanks, Mark
  • Shambruch
    I see a couple of issues with this article. First, the power loads given for the IT equipment are nameplate ratings, meaning these are the numbers the manufacturers are forced to publish to meet UL requirements and are not indicative of the unit's steady-state power draw even at peak load. To determine realistic cooling requirements, a derated figure should be used. Secondly, when comparing one rack of blade servers to six racks of standard 1U servers, some thought must be given to the fact that the rack of blade servers will not stand in isolation. In other words, if you implemented a rack of blade servers, you wouldn't leave the next five racks empty in order to offset the power draw from the one rack. Therefore we can assume that the data center will fill up with exponentially increasing heat loads due to the consolidation the blade systems provide. This lends support to Mr. Bullock's theory of diminishing returns based on the footprint consumed by increasing power and cooling infrastructure.
  • Shambruch
    EST412: Yes, you are mostly correct. The unit requires only three of the six power supplies to be operational in order for it to run; however, all six are running during normal operation and consuming power in a load-balancing scheme. At the end of the day, a more accurate figure for this blade chassis running at peak load would be 6-7 kW, not 14 kW. The nameplate rating used in the article shows maximum possible transient current draw (like the initial momentary burst required to start up the system) and not maximum steady-state operational current draw, which is sometimes less than half of the nameplate rating.
  • TonyHarvey
    EST412: You are correct. The c7000 actually load-shares in an N+N configuration, so all power supplies will be active, but three of the power supplies are effectively reserved for redundancy. So the maximum theoretical power consumption is 2,400 x 3, or 7,200 W. But as Shambruch correctly points out, this is just the power supply capacity, not the actual power usage of that enclosure. The actual power usage of the enclosure is very dependent on what is installed in it - as an example, a c7000 could have 1-16 blades, each blade could have 1-2 CPUs and anywhere from 1 to 18 DIMMs. All of this can make a huge difference. In shameless self-promotion mode, I have started a series of posts about this on the HP Eye On Blades blog (http://www.communities.hp.com/online/blogs/eyeonblades/default.aspx).
  • Mark Fontecchio
    Thanks to TonyHarvey for his response, which is the same one that I got. I also have a few more details. When you order the BladeSystem, HP can configure the system with fewer than six power supplies if the configuration wouldn't require all of them to be installed. Maximum density is often less than 5 kW, or about one-third the amount I had said in my initial post. HP also has a feature that allows the end user to set a power cap on the enclosure. If anything, these power figures go against the power-density paradox, but four chassis of fully stocked blade servers could still have a maximum of 20 kW of power consumed, and that is still a massive cooling challenge, as Bullock has said.
