Posted by: Mark Fontecchio
Data center cooling, Data center power
Over at CIO, there is an article about a so-called data center power-density paradox. According to Michael Bullock, the CEO of a data center design consultancy called Transitional Data Services, if you don’t beware the power-density paradox, “it will ensnare you in an unappetizing manner.”
OK, so what is it? Bullock argues that as you increase the power density in your data center, “your efforts to free up space in your data center could boomerang, creating an even greater space crisis than you had before.”
Drilling down, the paradox says that as you use more dense equipment (which places greater demands on power and cooling), you will quickly reach an inversion point where more floor space is consumed by support systems than is available to your IT equipment – typically between 100 and 150 watts per square foot. This translates into greater capital and operational costs, not the reductions you were hoping to achieve.
How much space will you need? At a power density of about 400 watts per square foot, plan to allocate about six times your usable data center space for cooling and power infrastructure. So before you embrace high-density as a quick fix to your space problem, make sure you have adequate room to house the additional power and cooling infrastructure, sufficient raised-floor space to handle the increased airflow demands of hotter-running boxes and, of course, sufficient available power to operate the hungry systems and their support gear. If any of these resources are unavailable or inadequate, your data center will not support the increased power density. And you will have wasted your time and money.
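To make that claim concrete, here is a quick back-of-the-envelope sketch. The 1,000 square feet of IT space is a made-up example, and the 6x support-space ratio is Bullock's figure at roughly 400 watts per square foot, taken at face value:

```python
# Rough sketch of Bullock's space claim (assumed numbers, not from any
# sizing tool): at ~400 W/sq ft, power and cooling infrastructure is said
# to need roughly six times the usable IT floor space.

it_space_sqft = 1_000           # hypothetical usable IT floor space
power_density_w_per_sqft = 400  # target power density
support_ratio = 6.0             # Bullock's figure at ~400 W/sq ft

it_load_kw = it_space_sqft * power_density_w_per_sqft / 1_000
support_space_sqft = it_space_sqft * support_ratio
total_space_sqft = it_space_sqft + support_space_sqft

print(f"IT load: {it_load_kw:.0f} kW")
print(f"IT space: {it_space_sqft} sq ft")
print(f"Support space: {support_space_sqft:.0f} sq ft")
print(f"Total footprint: {total_space_sqft:.0f} sq ft")
# -> 400 kW of IT load in 1,000 sq ft would need ~6,000 sq ft of support
#    space, or ~7,000 sq ft overall, if you take the 6x figure at face value.
```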
Let’s drill down, though, for real. Let’s say you decide your data center needs to process more. As an example, let’s say you need to expand your data center so that you have 1,024 processing cores, which you calculate as 256 quad-core processors. Should you use a power-dense design, such as blade servers, or spread that processing power out amongst 1U rack servers?
For the blade server design, take Hewlett-Packard’s c7000 BladeSystem enclosure. In a 42U rack, you can fit four c7000s, each of which can hold 16 HP ProLiant BL2x220c G5 server blades, for a total of 64 quad-core Intel Xeon 5400 processors per enclosure. That adds up to 256 quad-core processors in a single rack. Each c7000 chassis demands 6 x 2,400 watts of power, or 14,400 watts. Multiply that by four chassis and you have 57.6 kilowatts of power in a single rack holding 256 quad-core chips.
Now, let’s use a spread-out design with HP’s DL100-series rack servers. A DL160 G5 rack server is 1U and holds one quad-core Intel Xeon 5400 processor. So it will take 256U, or just over six 42U racks, to match the processing power of that single rack of blades. Each DL160 server demands 650 watts of power, so 256 of them demand 166.4 kilowatts.
To sum up:
- Power-dense design: 1,024 processing cores on blade servers take up 42U of space and draw 57.6 kilowatts of power
- Less power-dense design: 1,024 processing cores on 1U rack servers take up 256U of space and draw 166.4 kilowatts of power
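For anyone who wants to check the arithmetic, here is a minimal sketch using the same figures as above. The power numbers are the power-supply ratings cited in this post, not measured draw:

```python
# Quick sanity check of the two designs above. The per-chassis and per-server
# power figures are the ones used in this post (HP c7000 with 6 x 2,400 W
# supplies, DL160 G5 at 650 W); treat them as nameplate assumptions.

CORES_NEEDED = 1_024
CORES_PER_PROC = 4
procs_needed = CORES_NEEDED // CORES_PER_PROC   # 256 quad-core processors

# Blade design: four c7000 enclosures in one 42U rack, 64 processors each
blade_procs = 4 * 64                            # 256 processors
blade_space_u = 42
blade_power_kw = 4 * 6 * 2_400 / 1_000          # 57.6 kW

# Spread-out design: one processor per 1U DL160 G5 server
rack_space_u = procs_needed * 1                 # 256U, just over six 42U racks
rack_power_kw = procs_needed * 650 / 1_000      # 166.4 kW

print(f"Blade design: {blade_procs} processors, {blade_space_u}U, {blade_power_kw:.1f} kW")
print(f"Rack-server design: {procs_needed} processors, {rack_space_u}U, {rack_power_kw:.1f} kW")
```

Both power figures come from power-supply ratings rather than measured draw, so the absolute numbers are generous for both designs, but the gap between them is the point.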
According to this, there is no power-density paradox. If you use power-dense equipment, you will use less space and less power.
Now, I realize that cooling a single rack of blade servers would be a ridiculously difficult chore, and would take a lot more effort than cooling a single rack of 1U servers. But that blade rack is doing the work of six racks full of rack servers, so comparing it to just one rack of 1U boxes isn’t an apples-to-apples comparison.
Bullock’s point isn’t entirely lost, though. If you have a rack of 1U servers, don’t expect to convert that rack to blade servers and get by with the same power and cooling infrastructure you presently have. It won’t happen. But that’s a comparison of more processing power to less processing power. Comparing designs with equivalent processing power yields no paradox, at least on the power side of the equation.
The cooling side of the equation is a different story, and can be complicated by factors such as airside economizers, which can cool a less-dense data center but can’t cool a 57.6 kW rack. If you spread your IT equipment out enough, for example, you might be able to eliminate mechanical chillers altogether. That could cut not only space but cost as well (which is what matters in the end). Likewise, your raised floor might be able to cool six racks of 1U servers with ordinary CRAC units, but you might need to convert to overhead or liquid cooling to handle a single 57.6 kW rack properly.
In any event, the issue is not as simple as Bullock makes it out to be. Power-dense equipment will not always lead to more data center power and cooling equipment. Oftentimes, it will lead to less when matched up against a comparable rack-server design.