I found out about “Power IT Down Day” this morning from Ken Oestreich, the director of product marketing at Cassatt Corp., who wrote about it yesterday in his Fountainhead blog. Oestreich wrote that, although it’s a good idea, Power IT Down Day is “missing the boat” because its sponsors stress PC power management but not server power management. He goes on:
At the time of this writing, the official website at HP showed over 2,700 participants, saving an estimated 35,000 kWh. But here’s a sobering statistic: At a recent Silicon Valley Leadership Group Energy Summit, Cassatt piloted its server power management software. The organization using the software operated a number of its own data centers — and the case study findings showed that if this software were used enterprise-wide, the annual savings could be 9,100 kWh for this enterprise alone.
You’ll never guess what Cassatt does. That’s right! It makes server power management software.
But the fact remains that Oestreich has a point. In a report to Congress last year, the U.S. Environmental Protection Agency recommended server power management as one way to reduce data center energy use. Industry groups such as The Green Grid and The Uptime Institute recommend the same.
The good news is that data centers are listening. In our own purchasing intentions survey last year, only 18% of respondents said they were using power-down features on their servers, with another 13% saying they planned to do so within the year. Those numbers have since jumped: according to our new survey results from this year, 31% have implemented power-down features, and another 22% say they plan to this year.
So I guess data center managers have gotten the hint. Maybe “Power IT Down Day” doesn’t have to be extended after all…
Microsoft had already said it was looking in the area for a suitable data center location. The facility is expected to be comparable in size to the one Microsoft is building in San Antonio, Texas, and to employ about 75 high-tech workers at about $70,000 a pop.
Iowa has become a popular place lately for data center facilities, with Google already there building in Council Bluffs, which is right next to Omaha, Neb., about 120 miles west of Des Moines (which in turn is about 300 miles west of Chicago, for those unfamiliar with the Midwest). Why is it popular?
It also doesn’t hurt that Iowa is thirsty for data centers to the point of offering financial incentives to two of the largest companies in the country (Microsoft and Google) to go there. Do those incentives benefit both sides? Not everyone agrees.
The perimeter patrol is an integral part of our data center operations designed so our staff can constantly monitor and control the data center’s status and its operational readiness. Each patrol takes around an hour to perform and is a top-to-bottom inspection of our facility and server environment.
The list provides a great guideline for ensuring your data center’s health.
Link via Rich Miller’s Twitter feed.
The IT efficiency-focused group has published a new paper on the “productivity indicator.” Christian Belady, the principal power and cooling architect at Microsoft who was the driving force behind the PUE/DCIE metric, edited the paper and said it should be used as “a communication tool” between various members of a company – IT workers, data center facility folks and company executives.
“What this does is give you a quick visual of how you’re doing, especially if you’re communicating up to executives,” he said.
The paper suggests building a radial graph with five “spines,” with each spine representing a metric:
The paper doesn’t say how to come up with each of these numbers, but there are tools and software out there to get the data points for each of them (see the definition of storage utilization, for example). And if you are still trying to figure out how to measure, say, network utilization, you can still plot a graph using the productivity indicator, just with fewer spines. Here is a sample picture of a productivity indicator radial graph:
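A radial graph like the one described can be sketched in a few lines of matplotlib. The metric names and values below are illustrative assumptions for the sake of the example, not figures from the Green Grid paper:

```python
# Sketch of a "productivity indicator" radial (radar) graph.
# Metric names and all values are invented for illustration.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import numpy as np
import matplotlib.pyplot as plt

metrics = ["DCiE", "Server utilization", "Network utilization",
           "Storage utilization", "Data center utilization"]
values  = [0.55, 0.40, 0.30, 0.62, 0.70]   # hypothetical current readings
targets = [0.70, 0.55, 0.45, 0.70, 0.80]   # hypothetical 12-month targets

# One angle per spine; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]
values += values[:1]
targets += targets[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, label="Current")
ax.fill(angles, values, alpha=0.25)
ax.plot(angles, targets, linestyle="--", label="Target")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.set_ylim(0, 1)
ax.legend(loc="upper right")
plt.savefig("productivity_indicator.png")
```

Dropping a metric simply means one fewer spine — the same code draws the square or triangle versions mentioned below.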
Belady and John Tuccillo, a Green Grid member from APC, said businesses can add target lines if they want as well. They could have targets for six months out, a year, and 18 months out. Companies can use it for whatever data center metrics they’re actually using. So if it’s not a pentagon, it might be a square or a triangle with four or three data points, respectively. In which case it might look like this:
Companies can also break down one of those categories, such as data center utilization, into a more detailed radial graph all its own, such as this:
They emphasized that this isn’t something companies should use to compare themselves to other companies. Instead, it’s a way for businesses to assess their existing energy situation and set target goals for themselves down the road.
“Different companies have different risk thresholds. A business may say, ‘You know what? My storage utilization, because of my business plan, should only be at 50%,’” Tuccillo said. “One of the strengths of this tool is that it allows for the end user to weigh the spines to what their business practice is.”
“We bid out the price to buy some of these thermal mapping products,” Porter said. “The systems start around $1,000 and cost around $100 per sensor. We’re able to deploy our system for well under $25 per sensor, including bus and reader. The software is fully supported in the Linux kernel so we don’t have to write any drivers. When I told management how much it would cost it didn’t take me long to get them to fund the project.”
Porter says Core NAP customers are interested in high-density server configurations, and modern blade servers can throw off hot/cold-aisle setups, so thermal mapping is critical to staying on top of customers’ density demands.
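Porter doesn’t name the sensor hardware, but his mention of in-kernel driver support and a cheap bus-and-reader setup fits something like 1-Wire DS18B20 temperature probes, which the Linux w1 driver exposes as plain sysfs files. A hypothetical sketch of reading such a bus — the sensor choice and paths are assumptions, not details from Porter’s setup:

```python
# Hypothetical sketch: reading DS18B20 1-Wire temperature sensors through
# the Linux w1 sysfs interface (/sys/bus/w1/devices/<id>/w1_slave).
# 1-Wire is an assumption; Porter's post doesn't specify the sensor bus.
import glob

def parse_w1_slave(text: str) -> float:
    """Parse the two-line w1_slave format and return degrees Celsius.

    Line 1 ends in YES when the CRC check passed; line 2 carries the
    raw reading in millidegrees after 't='.
    """
    lines = text.strip().splitlines()
    if not lines[0].strip().endswith("YES"):
        raise IOError("CRC check failed")
    t_pos = lines[1].find("t=")
    return int(lines[1][t_pos + 2:]) / 1000.0

def read_all_sensors() -> dict:
    """Map sensor ID -> temperature for every probe on the bus."""
    readings = {}
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        sensor_id = path.split("/")[-2]
        with open(path) as f:
            readings[sensor_id] = parse_w1_slave(f.read())
    return readings
```

With per-sensor cost that low, scattering dozens of probes across racks and plotting the readings as a heat map is mostly a matter of labeling which sensor sits where.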
“Ken [Brill of The Uptime Institute] has some valid points; clearly there needs to be more clarity and refinement in the definition to make it a rock-solid benchmark. PUE is a ‘living metric’ that the industry, and in particular the Green Grid, is working on. But with all of the issues that are in the process of being resolved, here are three basic facts:
1) All metrics can and will be gamed regardless of the crispness of their definition. Show me a metric and I can come up with a way to game it.
2) Companies that are measuring PUE are improving their PUE over time, so they are improving their efficiency.
3) Companies that are not measuring are likely not improving, so these companies will be at a competitive disadvantage.
Microsoft is a company that is measuring (since 2004) and improving our PUE benchmarking against ourselves. There is no reason any other company cannot do this. Comparison with other companies is useful but less important to us as long as we demonstrate continuous improvement in our PUE. We hope that the issues people have with PUE for external benchmarking will be cleaned up in time but we do not plan on waiting until then for continuous improvement in our own operations.”
Here are a few key points to consider in the ongoing evolution of PUE:
Gaming PUE is going to happen
A lot of data center providers have included PUE ratios in press releases lately, many of them incredibly low. Rich Miller at Data Center Knowledge says he’s seen it before. “That’s pretty much what happened with the Uptime Tier System, which set forth a four-tier rating system for data center reliability. Data centers began describing themselves as equivalent to ‘tier three-plus’ or even ‘tier five.’”
PUE will need to evolve into a dynamic quality control metric
Dave Ohara at GreenM3 has a great explanation of how data center pros should use PUE in a dynamic way. “What helped me to think of PUE as a dynamic number is to think of it as quality control metric. The quality of the electrical and mechanical systems and their operations over time are inputs into PUE. As load changes and servers will be turned off the variability of the power and cooling systems influence your PUE. So, PUE can now have a statistical range of operation given the conditions. This sounds familiar. It’s statistical process control.”
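Ohara’s framing maps directly onto textbook statistical process control: sample PUE over time, establish control limits, and flag readings that drift outside them. A minimal sketch of that idea — the hourly meter readings here are invented for illustration:

```python
# Sketch of treating PUE as a process-control metric rather than a single
# snapshot number. All readings below are hypothetical.
import statistics

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# Hypothetical hourly meter readings (kW) across a day of varying load.
facility_kw = [520, 540, 600, 640, 660, 630, 580, 545]
it_kw       = [300, 315, 360, 390, 400, 380, 345, 318]

samples = [pue(f, i) for f, i in zip(facility_kw, it_kw)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)

# Shewhart-style control limits: flag hours where PUE drifts out of range,
# which would point at a cooling or electrical problem rather than a bad number.
upper, lower = mean + 3 * sigma, mean - 3 * sigma
out_of_control = [s for s in samples if not lower <= s <= upper]
print(f"PUE mean={mean:.2f}, range=[{min(samples):.2f}, {max(samples):.2f}]")
```

The point is that a healthy facility has a PUE *band*, not a PUE; a reading outside the band is an alarm, and a narrowing band over time is the improvement signal.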
Standards and training needed on how and when to measure PUE
Data center managers getting started with a PUE measurement program need some guidance — where, when and how do you take the most meaningful measurements? Microsoft’s Mike Manos and Belady have put together an excellent PUE Strategy post on their blog, The Power of Software. This checklist takes PUE newbies from measuring by walking around with a clipboard to data center chargeback. The Uptime Institute’s Pitt Turner has a great webcast on how to measure PUE on UPS and PDU equipment. The next step will be to get everybody doing this in the same way — which is where ASHRAE TC 9.9 comes in. The organization supports PUE and announced plans to develop a publication that would standardize PUE measurement methodology in November 2007, but no word so far on the progress of that project.
Brill said he’s seen companies talking about a PUE of 0.8 — which is physically impossible: PUE is total facility power divided by IT power, and since the total includes the IT load itself, the ratio can never fall below 1.0. “There is a lot of competitive manipulation and gaming going on,” Brill said. “Our network members are tired of being called in by management to explain why someone has a better PUE than they do.”
If you’re going to compare your PUE against another company, you need to know what the measurement means. “You need to know what they’re saying and what they’re not saying,” Brill said. “Are you going to include the lights and humidification system? If you’re using free cooling six months of the year, do you report your best PUE?”
Brill conceded that The Green Grid’s PUE whitepaper has gained traction in the industry, spurring more action and debate than any other efficiency effort so far. But Brill takes issue with the measurement’s use of the term “power.” According to Brill, the fundamental problem with PUE is that it’s a snapshot in time. Power, by definition, is a spot measurement, Brill said; power over time is energy. So power is measured in kilowatts, while energy is measured in kilowatt-hours.
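Brill’s objection — and the free-cooling question above — comes down to simple arithmetic: a spot power ratio from the best month can look far better than the energy-weighted ratio over a full year. A worked example with invented numbers (a site with free cooling in six cooler months and compressor cooling in six warmer ones):

```python
# Worked example of the power-vs-energy distinction in PUE.
# All monthly figures are hypothetical.
best_spot_pue = 1.3  # spot "power" PUE reported in a free-cooling month

it_mwh    = [700] * 6 + [800] * 6   # IT energy per month (MWh), higher in summer
month_pue = [1.3] * 6 + [2.0] * 6   # monthly average PUE, worse when compressors run

# Energy-based PUE weights each month by the kWh actually consumed.
facility_mwh = [e * p for e, p in zip(it_mwh, month_pue)]
annual_pue = sum(facility_mwh) / sum(it_mwh)

print(f"best-month spot PUE: {best_spot_pue}")
print(f"annual energy PUE:   {annual_pue:.2f}")
```

Here the annual energy-based figure lands around 1.67 — noticeably worse than the 1.3 a best-month spot reading would let you claim, which is exactly the gaming room Brill is worried about.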
Proponents of PUE like Microsoft’s Christian Belady have advocated measuring PUE over time, but Brill said that is not expressed explicitly in the standard.
I think it’s a bit of a stretch to assume C-Level execs are even aware of PUE (let alone calling data center staff out on the carpet about it).
I recently wrote an article about a data center manager who made huge efficiency improvements at a massive facility, saving hundreds of thousands of dollars through engineering projects. I asked him what his CIO thought about the data center efficiency he was achieving, and he told me the CIO had no idea. He’d never actually met the CIO…
Nonetheless, Brill makes a very important point. The first goal of PUE is to establish a ratio to improve on internally. But the larger goal is to use the metric to compare data centers — as a benchmark against competitors, or as a way to compare various configurations, geographical locations, and technologies. Without standardization, comparative measurements will be meaningless.
Are your executives measuring you against competitors’ PUE? We’d like to hear from you.
“We have not seen extreme measures being taken by IT organizations, such as hiring freezes, but we do expect to see enterprises take a more conservative and ‘wait-and-see’ approach to staffing for the rest of 2008,” said Gartner research vice president Lily Mok in a recent report.
Nonetheless, data center facility manager jobs are still in high demand. The New York Times reported on it recently, and I’m still getting emails from Google’s recruiters asking if I know anybody looking for a job with the Google facility engineering team.
So what gives? Is there a data center facilities job gap on the horizon? Is working knowledge of Ohm’s Law and computational fluid dynamics protection against a down economy?
LinkedIn already rents space in two of Equinix’s Silicon Valley area locations. The release didn’t say how much space LinkedIn would be renting in Chicago, or which Chicago data center it would be in. Equinix now has three data centers totaling about 500,000 square feet of space in the Chicago area.
Equinix’s newest location in the Chicago area is located in a northwestern suburb called Elk Grove Village. We went on a video tour of that Equinix facility earlier this year.