Data center facilities pro

March 12, 2009  11:38 PM

Should AFCOM rename it the Data Center Vendor World conference?

Mark Fontecchio

Attendance at this year’s spring Data Center World conference was lighter than in past years, which was expected.

The economy has affected turnout at tradeshows this year. Last week I was at the Share mainframe user group conference in Austin, and that had lower attendance as well. Usually Data Center World in Las Vegas brings in the crowds, but that wasn’t the case as much this year.

Vendors I spoke to said traffic was steady, but definitely lighter. I heard that there were 700-800 end users at the show, but more than 900 exhibitor personnel. Whenever the vendor presence is larger than the end user presence, that’s not a good sign. Then again, Data Center World has always had a large vendor turnout, with dozens of booths on the show floor. Which isn’t a bad thing, as long as you’re getting end users to those booths with intentions to buy.

Speaking of vendors, I noticed there were more vendor-led sessions at Data Center World than in past shows. Some of them were good — Richard Sawyer from EYP Mission Critical Facilities (part of HP), for example, gave a great talk on data center staffing — but others were not. Even the keynote by Sun’s chief technology officer Greg Papadopoulos, though very interesting, was seen by some data center users I talked to as a veiled sales pitch for Sun’s cloud computing technology. I will say that I thought Papadopoulos did what he could to make his talk about the cloud computing trend, and not Sun products. But it’s inevitable that end users will feel the way they did when Papadopoulos had slides in his presentation that mentioned Solaris, Java and xVM.

Nathan Montgomery, data center operations manager for Brinks Home Security, told me he went to an all-day tutorial on Monday that was supposed to be about building and expanding your data center. But he said it was more like a vendor “show and tell,” and was upset enough to seek a refund from AFCOM (the tutorials were an extra $200 fee). To AFCOM’s credit, they did refund him that fee.

Even AFCOM’s board of directors is vendor heavy. It has about a dozen members, but only one of them is an end user. The rest are vendors and consultants. So here’s to hoping that AFCOM reaches out more to the end user community. If they can get more of them on the board of directors, maybe that could lead to more of them giving end user-focused sessions, which is what Montgomery and most other users are looking for.

March 12, 2009  11:32 PM

Can you afford data center CFD?

Mark Fontecchio

Computational fluid dynamics has become a popular way for data centers to analyze the air flow in their facilities and determine if the cold and hot air is going where it should. But for smaller businesses, CFD is simply out of their price range.

There are different tiers of affordability out there, however. The cheapest I’ve seen is CoolSim, which at its most basic level, costs $7,500 a year. The other two major data center CFD vendors — TileFlow and Future Facilities’ 6Sigma — go for about twice and four times that, respectively.

Another unique aspect of CoolSim is its client-server, services-based delivery model. You build your data center model in a basic desktop application, then export that model to a file and send the file to CoolSim, which crunches the numbers and sends back a report. With TileFlow and Future Facilities, you do the crunching in-house.

“We deliver it as a SaaS model,” said Paul Bemis, president of the Concord, N.H.-based company. “Since it’s client server, you only pay for what you need.”

But the most important aspect of CFD modeling is accuracy, and with a product like CoolSim, there’s a question around whether you get what you pay for. Pete Sacco, president of engineering and consulting firm PTS Data Center Solutions, has said that cheaper CFD modeling tools such as CoolSim simply aren’t as accurate as the Future Facilities product, which his company uses.

Bemis acknowledges that the Future Facilities software produces more detailed results, but says that a lot of data centers don’t really need or want that much detail. And obviously he disputes claims that CoolSim results could be inaccurate. But as with any product out there, it’s all about caveat emptor. If you do your due diligence, you can quickly find out for yourself what is the right product at the right price.

March 12, 2009  11:25 PM

Hot aisle/cold aisle and chaos air distribution

Mark Fontecchio

For many years, hot/cold aisle design has been a de facto standard in data center design. But at least one person thinks it was a temporary, and perhaps suboptimal, fix to the problem, and that the real culprit is chaos air distribution.

Hot/cold aisle design is the concept of aligning IT equipment racks in rows so that fronts face fronts and backs face backs. That way, cool air can come up through a raised floor or down from overhead into the cold aisle, enter the front of the servers to cool them, and exhaust as hot air into the hot aisle. The layout helps prevent the mixing of hot and cold air, which leads to wasted cooling.

But letting that cold and hot air roam free in the hot and cold aisles leads to unpredictability, said Carl Cottuli, a VP of product development for Wright Line. Like a toddler who needs discipline, air in a data center needs better direction so it doesn’t go off and do whatever it feels like. That’s chaos air distribution.

“The real problem all along was not the arrangement of racks, but reliance on chaos air distribution,” Cottuli said at the AFCOM Data Center World conference in Las Vegas this week.

Cottuli said the solution is to contain your hot or cold aisles to guide the air directly to the servers or out of the room. If you can contain both, all the better.

Now, Cottuli does have a specific interest in this. Wright Line builds and sells server cabinets, ceiling plenums and cold aisle containment products that fight this so-called chaos air distribution. But the idea of hot/cold aisle containment is not Wright Line’s alone — a lot of vendors are selling aisle containment products, and a lot of users have bought these products or built their own custom aisle containment systems.

March 10, 2009  5:55 PM

Sun using SuperNAP data center facility for cloud computing

Mark Fontecchio

Sun Microsystems chief technology officer Greg Papadopoulos gave some insight today into Sun’s cloud computing venture while speaking at the AFCOM Data Center World show in Las Vegas.

He said Sun will use Switch Communications’ SuperNAP facility for some aspects of the infrastructure. Sun made a splash in the cloud computing industry earlier this year by acquiring a company called Q-layer that focuses on internal private clouds.

The SuperNAP data center that Sun will occupy is built on slab and engineered around overhead cooling and hot-aisle containment. Switch claims power densities of around 1,500 watts per square foot, or about triple the industry standard for air cooling. The hot exhaust air enters the contained hot aisle, rises into a ceiling plenum and circulates outside the IT equipment room through a bank of huge heat exchangers. Then it recirculates back into the room as cool air.

So it appears that Sun will be using the cloud to provide the cloud.

“We have thousands of nodes and petabytes of storage there,” Papadopoulos said.

He added that with Sun entering the cloud computing market, it may have to change its slogan from “The network is the computer” to “The network is your computer.”

February 24, 2009  4:56 PM

Top ten industry demands from Data Center Pulse

Matt Stansberry

A new data center user group, built on LinkedIn profiles and spearheaded by executives from Sun Microsystems and VMware, held its first in-person gathering last week at Sun’s Santa Clara, Calif., headquarters.

Data Center Pulse (DCP) garnered over 675 online end-user supporters from 400 companies around the world, though only around 30 members showed up in person last week.

The goal of the inaugural DCP conference (more like a BarCamp or unconference) was to develop a list of goals and demands for data center vendors and industry groups.

While the founding members of DCP didn’t know what the group’s future would hold or what shape the organization should take, they did have one clear goal: to shape the industry through data center owner and operator feedback.

These are my interpretations of the group’s top ten goals and demands. They are subject to clarification, as they bubbled up from working group discussions just 30 minutes before they were announced:

1. Align the data center industry organizations (AFCOM, The Green Grid, The Uptime Institute and ASHRAE) under a single international umbrella organization that could speak with one voice for the data center community; bring competing organizations to sit at the same table and collaborate; and to curate a body of data center standards.

2. Develop a data center certification, requiring new data centers to meet certain efficiency criteria, like the fuel efficiency standards on vehicles. It would be a consistent baseline to measure efficiency and drive improvement.

3. Come up with a standard definition of the “data center stack” from top to bottom.

4. Update or dump the Uptime Institute Tier Levels. See Mark Fontecchio’s recent story for more on this topic.

5. Demand that data center infrastructure vendors develop more modular products. Stop the fixed, over-provisioned designs. Users want plug-and-play data center capacity.

6. The members want an objective way to perform peer-to-peer data center efficiency comparisons: a standard measurement protocol for comparing your PUE against Google’s and Microsoft’s. Healthy competition drives efficiency.

7. Users want a common communication standard to monitor all layers of the power delivery system, connecting building management and IT systems.

8. Standardize conductive (liquid) cooling. Encourage ASHRAE to finish and publish a standard on liquid cooling technology. People want to get rid of air.

9. Push vendors to develop higher-voltage (480/277-volt) servers, allowing users to eliminate one transformer loss and drive up efficiency.

10. Create a repository: A neutral location to house and present data center information. Design best practices, specific server hardware configuration load measurements versus nameplate data, and user-generated vendor evaluations.

Data Center Pulse is gaining a ton of momentum very quickly, and may in fact be able to bring some of these changes to fruition. How do these ten points match up with your data center demands?

February 23, 2009  8:03 PM

Data center high density vs. low density: Is there a paradox?

Mark Fontecchio

Over at CIO, there is an article about a so-called data center power-density paradox. According to Michael Bullock, the CEO of a data center design consultancy called Transitional Data Services, if you don’t beware the power-density paradox, “it will ensnare you in an unappetizing manner.”

OK, so what is it? Bullock argues that as you increase the power density in your data center, “your efforts to free up space in your data center could boomerang, creating an even greater space crisis than you had before.”

Drilling down, the paradox says that as you use more dense equipment (which places greater demands on power and cooling), you will quickly reach an inversion point where more floor space is consumed by support systems than is available to your IT equipment – typically between 100 and 150 watts per square foot. This translates into greater capital and operational costs, not the reductions you were hoping to achieve.

How much space will you need?  At a power density of about 400 watts per square foot, plan to allocate about six times your usable data center space for cooling and power infrastructure.  So before you embrace high-density as a quick fix to your space problem, make sure you have adequate room to house the additional power and cooling infrastructure, sufficient raised-floor space to handle the increased airflow demands of hotter-running boxes and, of course, sufficient available power to operate the hungry systems and their support gear. If any of these resources are unavailable or inadequate, your data center will not support the increased power density. And you will have wasted your time and money.

Let’s drill down, though, for real. Let’s say you decide your data center needs to process more. As an example, let’s say you need to expand your data center so that you have 1,024 processing cores, which you calculate as 256 quad-core processors. Should you use a power-dense design, such as blade servers, or spread that processing power out amongst 1U rack servers?

Hewlett-Packard’s c7000 BladeSystem enclosure is your blade server design. In a 42U rack, you can fit four c7000s, each of which can hold 16 HP ProLiant BL2x220c G5 server blades, for a total of 64 quad-core Intel Xeon 5400 processors. That adds up to 256 quad-core processors in a single rack. Each c7000 chassis demands 6 x 2,400 watts of power, or 14,400 watts. Multiply that by four chassis and you have 57.6 kilowatts of power in a single rack holding 256 quad-core chips.

Now, let’s use a spread-out design with HP’s DL100-series rack servers. A DL160 G5 rack server is 1U and holds one quad-core Intel Xeon 5400 processor. So it will take 256U, or about six 42U racks, to reach the same processing power as the single blade rack. Each DL160 server demands 650 watts of power, so 256 of them demand 166.4 kilowatts of power.

To sum up:

  • Power-dense design: 1,024 processing cores using blade servers use up 42U of space and 57.6 kilowatts of power
  • Less power-dense design: 1,024 processing cores using 1U rack servers use up 256U of space and 166.4 kilowatts of power
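The arithmetic can be checked in a few lines. This is just a sketch of the comparison above; all figures come from the post itself, not from vendor spec sheets:

```python
# Compare rack space and power draw for 1,024 cores (256 quad-core CPUs)
# under a blade design vs. a 1U rack-server design.

CORES_NEEDED = 1024
CORES_PER_CPU = 4
cpus_needed = CORES_NEEDED // CORES_PER_CPU          # 256 quad-core CPUs

# Blade design: HP c7000 chassis, 16 BL2x220c blades = 64 CPUs per chassis
chassis_needed = cpus_needed // 64                   # 4 chassis fit in one 42U rack
blade_space_u = 42
blade_power_kw = chassis_needed * 6 * 2400 / 1000    # 6 x 2,400 W supplies per chassis

# 1U design: HP DL160 G5, one quad-core CPU and a 650 W supply per server
rack_space_u = cpus_needed * 1                       # 256U, about six 42U racks
rack_power_kw = cpus_needed * 650 / 1000

print(blade_space_u, blade_power_kw)                 # 42 57.6
print(rack_space_u, rack_power_kw)                   # 256 166.4
```

The dense design wins on both axes here, which is the point of the comparison that follows.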

According to this, there is no power-density paradox. If you use power-dense equipment, you will use less space and less power.

Now, I realize that cooling a single rack of blade servers would be a ridiculously difficult chore, and would take a lot more effort than a single rack of rack servers. But that would be comparing a single blade server rack to six racks full of rack servers. It’s not an apples-to-apples comparison.

Bullock’s point is not lost. If you have a rack of 1U servers, don’t expect to be able to convert that rack to blade servers and provide the same level of power and cooling infrastructure as you presently have. It won’t happen. But that’s a comparison of more processing power to less processing power. Comparing designs of equivalent processing power yields no paradox, at least on the power side of the equation.

The cooling side of the equation is a different story, and can be complicated by factors such as airside economizers, which can cool less-dense data centers but can’t cool a 57.6 kW rack. So as an example, if you spread your IT equipment out enough, then maybe you could eliminate mechanical chillers altogether. That could not only cut down on space, but on cost as well (which is what matters in the end). Also, your raised floor might be able to cool six racks of 1U servers with normal CRAC units, but you might need to convert to overhead or liquid cooling to cool a single 57.6 kW rack properly.

In any event, the issue is not as simple as Bullock makes it out to be. Power-dense equipment will not always lead to more data center power and cooling equipment. Oftentimes, it will lead to less when matched up against a comparable rack-server design.

February 19, 2009  2:07 PM

Local official wants to build data center powered by methane

Mark Fontecchio

The mayor of Chicopee, Mass., a city of about 60,000 people in the western part of the state, wants to use landfill gases to power a proposed data center in his city.

According to a story in the Springfield Republican on city Mayor Michael D. Bissonnette being upbeat about Chicopee, he says that he has reached out to Dow Jones — the company that publishes The Wall Street Journal and other publications — to build “a national data center here with the possibility of generating electricity from the methane at the landfill.” Dow Jones has an office in Chicopee already.

“Keeping what is here is important for both the city and the regional economy,” said Bissonnette, according to the story.

Though rare, using methane to power corporate facilities is not unprecedented. Fujifilm announced in 2007 that it would use methane from a nearby landfill to help power its manufacturing plant in Greenwood, S.C. Last year Google said it would pay for the building of a greenhouse that would reuse methane. The greenhouse is near a data center it’s building in western North Carolina, and paying for it would allow Google to win carbon credits so it could claim environmental friendliness.

February 18, 2009  6:18 PM

HP only one of big four to be off Uptime’s “Green 100”

Mark Fontecchio

Edit: Uptime made a mistake in leaving HP off the list. The corrected Uptime Green 100 list includes them. So I guess they’re green after all! I’m working on scheduling a talk with some HP execs about how exactly they’ve been green.

I saw today that The Uptime Institute announced its “Global Green 100,” which it described in a press release on the Green 100 as companies that “have showed an exemplary commitment to improving their data center operations, not only reducing their carbon footprints, but also realizing significant financial savings.”

Dell, IBM and Sun Microsystems are on the list. Hewlett-Packard is not. I’m not really sure what that means, but I found it interesting that HP was the only one of the big four server manufacturers to not be part of the list. Other big IT companies on the list include Cisco, EMC, Microsoft and Oracle.

HP did undertake a massive data center consolidation project a couple years ago, but I’m not sure what they’ve done since on the energy-efficiency front.

February 13, 2009  10:31 PM

Green Grid Postmortem: Successes and the work ahead

Matt Stansberry

This column was contributed by Deborah Grove of Grove Associates.

Day one of The Green Grid Technical Forum was largely a brag about how much work was done in 2008, and the evidence is truly impressive. I saw how the work done in 2007 paid off the following year because the infrastructure was in place to make white papers, partnerships, outreach and collaboration happen in what seemed to be effortless, well-designed information dissemination.

The Green Grid introduced new interactive tools under development, including the following:

  • Free Cooling Map Web tool;
  • PUE Calculator tool; and
  • Power Configurations Calculator, an online efficiency estimator tool.

The Free Cooling Map, based on weather data, was designed to show where in the U.S. it is possible to obtain free cooling for data center economizers (fresh air or evaporative cooling). Future extensions to Europe and Asia-Pacific are planned. One map was designed for fresh-air (dry bulb) cooling and the other for evaporative (wet bulb) cooling. Upcoming features will let you enter your data center’s zip code and see how many hours of free cooling you can expect.

The PUE Calculator Tool is designed to accurately compute power usage effectiveness in a consistent manner. The tool will compare data center container designs as well as brick-and-mortar data centers. The measurement system will include power transfer switches, uninterruptible power supply, power distribution units, cooling towers, condenser chiller pumps, fire suppression, security systems, servers and more. The remaining controversy is over air movers (fanless servers) because rack-based air movers could be considered either IT or facility load, depending on your point of view.

Pam Lembke of IBM presented on the Power Configurations Calculator, which allows users to compare efficiency curves on power distribution topologies and create their own topologies based on their own power distribution equipment configurations.

Andrew Fanara, director of the data center energy efficiency program for the Environmental Protection Agency, said he is very pleased and positive about the work done by the Green Grid and expects it to be very beneficial if changes in public policy drive up electricity rates.

Jim Pappas of Intel Corp. said that the level of participation from other industry groups, such as the Storage Networking Industry Association and the American Society of Heating, Refrigerating and Air Conditioning Engineers, or ASHRAE, is unprecedented. “Work group volunteers clearly understand that they are there not to get their names known, but rather to get some engineering collateral that helps you get your job done,” Pappas said.

Paul Scheihing of the Department of Energy (DoE) said he has worked with trade groups for many years and gives The Green Grid an “A” for its rapid and high-quality work. A Memorandum of Understanding between The Green Grid and DoE for a 10% energy savings commitment across the industry illustrates that they are serious about making progress.

What’s next for The Green Grid 2010?
Who will be on the podium next year who was not represented this year at The Green Grid Technical Forum? The U.S. Green Building Council, the Ethernet Alliance, the Distributed Management Task Force or the Standard Performance Evaluation Corporation? The networking and storage industries are large energy consumers. Will they be at the table?

The Green Grid’s Data Center 2.0 strategy is to integrate software, networking, storage and facilities. Software companies can no longer fly under the radar. We need to bring them into the conversation along with the hardware and infrastructure teams. The invite is out. If you can contribute with your knowledge of service-oriented architecture, Open Applications Group standards or additional software platforms that are energy-aware, there is a seat at the table of The Green Grid.

Of course, a lot of the discussion on Data Center 2.0 is still like unbaked bread, with a mushy, doughy consistency. Enterprise applications, for example: What is the right software performance metric? The intelligence in instrumentation is available, but the work groups have to understand what to measure.

Green IT and the economy
The shrinking marketplace was discussed at length in conversations with colleagues from the vendor community. Most of the people I spoke with were fairly optimistic in the face of delayed new sales, believing that our industry will find a way through this recession by exhibiting solid management skills. Perhaps the pessimists didn’t make it to this meeting, or maybe it’s a mark of America’s positive spin that we aren’t discussing the downsizing of the market from the podium. When sales forecasts drop so dramatically, shouldn’t we address them, even from the podium at a technical forum?

The last comment aside, I was pleased to have interacted with so many bright and pleasant people who are doing all they can to move the conversation about data center energy efficiency into the 21st Century.

February 11, 2009  2:41 PM

Measuring data center performance

Mark Fontecchio

There has been plenty of talk around the green data center and data center energy efficiency, and one of the players has been The Green Grid, a nonprofit group focused on the topic. The Green Grid came out with power usage effectiveness (PUE), which compares total facility power to IT equipment power.

But in the end, what matters is what your data center does, not how much energy it consumes. You might have a PUE of 1.1, but if your servers just sit there idly all day long, who cares? Data centers are built to perform work, and if they don’t do that, energy consumption doesn’t mean squat. Re-enter The Green Grid.
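For reference, PUE itself is just a ratio, which a few lines make concrete (the wattage figures below are invented for illustration):

```python
# PUE = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to IT gear; anything above
# that is overhead for cooling, power distribution and so on.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(1650.0, 1500.0))   # 1.1: only 10% overhead for power and cooling
```

Note that PUE says nothing about what the IT load is actually accomplishing, which is exactly the gap the proxies below try to fill.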

Over the last year or so, there has been discussion over defining data center performance compared to energy consumed, often referred to as a data center’s useful work or data center productivity. The Green Grid has now come out with eight different proposals for “proxies,” which the group describes as approximations for comparing data center production to data center energy consumption. It compares them to the stickers in car lots that claim a certain miles-per-gallon rating, right down to the warning that “your mileage may vary.”

And before companies start comparing each other’s data center production, a warning from The Green Grid:

Comparisons between data centers would be valuable in a marketing or evaluation sense, but it is unlikely that any proxy for data center productivity will be comparable across multiple data centers. Rather, the primary use for a proxy will continue to be an indication of improvement over time for a single data center, and very constrained comparisons between data centers that perform the same function.

The data center production proxies

Proxy 1: User-defined measure of useful work divided by energy consumption. This proposal defers to the user to define useful work in a data center. That could be the number of emails sent, or the number of database queries handled. Whatever the case, it’s up to the user to define and measure it.

  • Pro: User gets to define useful work
  • Con: That definition could vary from application to application and server to server, making an overall measure of the data center difficult
  • Measured in tasks per kilowatt-hour (kWh)

Proxy 2: Green Grid-provided measure of useful work divided by energy consumption. Through The Green Grid, Intel Corp. will provide a software development kit with an application programming interface that you can install on a subset of your IT equipment. It will report data from software running on those servers, which can be converted to useful work and compared to energy consumption for that subset. That number can then be extrapolated for the entire data center.

  • Pro: Provides a standard way to measure useful work across applications
  • Con: Requires download and running of external software
  • Measured in tasks/kWh

Proxy 3: Sample workload divided by energy consumption of a subset of servers. The Green Grid says it will provide a bunch of sample workloads that users can run on a subset of servers. The user decides which sample workload best describes what the overall data center does. The workload is run to get a measurement of work completed. That is divided by energy consumption and extrapolated for the entire data center.

  • Pro: Similar to current benchmarking tests in the data center
  • Con: Mixed-workload data centers might not benefit as much, and sample workloads must be made to work on as many server platforms as possible
  • Measured in tasks/kWh

Proxy 4: Bits per kilowatt-hour. Add the total number of bits coming out of all outbound routers, and divide by energy consumption.

  • Pro: Easy to set up and measure, with an easy-to-understand result
  • Con: Uncertainty about whether all bits are created equal
  • Measured in megabits/kWh

Proxy 5: Server utilization using SPEC’s CPU benchmark. Measure CPU utilization over a period of time with the existing CPU benchmark from the Standard Performance Evaluation Corp. (SPEC), and divide by energy consumption.

  • Pro: Easy to implement, schedule and understand
  • Con: Benchmark is not available on all server platforms, and only measures the CPU utilization of the application in question, and not underlying framework such as the operating system and systems management tools
  • Measured in jobs/kWh

Proxy 6: Server utilization using SPEC’s power benchmark. Same as the previous proxy, but this time using the SPEC power benchmark, which measures performance compared to power for a server.

  • Pro: Easy to implement, schedule and understand
  • Con: SPEC power results depend on manufacturers publishing updated measurements for their server products
  • Measured in power-weighted jobs per kWh

Proxy 7: Compute units per second trend curve. Group your servers by the year purchased. A server produced in 2002 counts as one million compute units per second, and that value scales up or down by a factor of seven for every five years between a server’s purchase year and 2002. Add up the total compute units and divide by energy consumption.

  • Pro: No software needed and no benchmarks to run
  • Con: Bias toward newer servers and a lack of comparison of different servers released in the same year
  • Measured in millions of compute units per kWh
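Proxy 7’s trend curve is easy to sketch. The server inventory and energy figure below are invented for illustration; only the 2002 baseline and the factor-of-seven-per-five-years scaling come from the proxy’s description:

```python
# Proxy 7 sketch: rate each server by purchase year, sum the compute
# units, and divide by energy consumed over the measurement window.

BASELINE_YEAR = 2002
BASELINE_UNITS = 1_000_000       # compute units/sec for a 2002-vintage server

def compute_units(purchase_year: int) -> float:
    # Scale by 7x for every five years newer (or 1/7x for older).
    return BASELINE_UNITS * 7 ** ((purchase_year - BASELINE_YEAR) / 5)

# Hypothetical inventory: {purchase_year: server_count}
inventory = {2002: 10, 2007: 20}
total_units = sum(n * compute_units(y) for y, n in inventory.items())

energy_kwh = 500.0               # hypothetical energy over the window
proxy7 = total_units / 1e6 / energy_kwh   # millions of compute units per kWh
```

The bias the “Con” bullet mentions is visible here: the 2007 servers each count seven times as much as the 2002 boxes, regardless of how the individual models actually perform.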

Proxy 8: Operating system workload efficiency. Calculates the number of operating system instances and compares that number to the power being used at that time.

  • Pro: Provides good high-level estimate of efficiency and utilization
  • Con: Not as granular as some might want (i.e., what if the operating systems aren’t even running applications?)
  • Measured in operating system instances per kWh

Wow, that’s a lot of proxies! Let us know which one or ones you like, and which ones you think are stupid. The Green Grid plans to digest comments from its members and the data center user crowd for at least a few months, and then decide sometime after that which one it favors. The group has even set up an online survey on the proxies, so you can let The Green Grid know which you prefer.
