Data center facilities pro

ARCHIVED. Please visit our new blog at: http://itknowledgeexchange.techtarget.com/data-center/


June 11, 2009  1:40 PM

EPA releases documents on Energy Star for storage



Posted by: Mark Fontecchio
Data center power, Energy Star, Green data center

The federal Environmental Protection Agency has now started down the road to developing an Energy Star spec for storage equipment. Late last month, the EPA released the first version of its Energy Star spec for servers, and it continues to work on making enterprise data centers more efficient. (At least for now, it seems that most data center managers are indifferent to Energy Star for servers.)

This week the EPA released a framework document for Energy Star storage equipment. Among other things, it looks like the spec will cover direct attached storage (DAS), network attached storage (NAS) and storage area networks (SAN), as well as hard disk, tape, optical, solid state, hybrid, and bladed storage. The EPA has a dedicated Energy Star site for storage if you want to check it out.

June 8, 2009  8:46 PM

Energy Star for servers: Unexpected impact on data center efficiency



Posted by: Jbeltran
Data center power, Energy Star, Green data center

This blog post was written by SearchDataCenter.com contributor Julius Neudorfer.

It’s out! After several years, the Environmental Protection Agency (EPA) has released the first version of the Energy Star specification for servers. Now, how fast can we adapt?

It was created through a solid collaborative effort between government and major equipment vendors. The spec addresses many areas of power usage and waste in the power supply (and redundant power supplies) for servers. Until now, there has been no standard for server power supply efficiency — the existing Energy Star program covered PCs but exempted servers. Most server manufacturers have been voluntarily improving their power supply and server energy efficiency, but few published their complete specifications. The spec also addresses standby power and efficiency at less than full load. In fact, it calls for a minimum of 85% efficiency at 50% of rated load, and 82% at only 20% of rated maximum. This is an extremely important step forward, since many servers normally run with dual power supplies, each one loaded at only 20-30% of its maximum rating due to load sharing (under 2N, they normally never operate above 50%). The spec also calls for the second (redundant) power supply to have a static loss of 20 W or less. This is a major improvement over the fixed losses found in typical servers with dual power supplies.

The spec covers more than just power supplies. It even limits the idle power of hard drives to 8 W and memory to only 2 W per gigabyte. (Note: There are some limitations on this, but it is part of the requirements.)

Power management is required! Moreover, the spec mandates that idle servers must draw much less power than existing servers and that power management must be enabled when shipped. Until now, most manufacturers shipped servers with power management disabled, and most IT shops never used it or chose not to enable it. In fact, the Energy Star spec calls for a base server with one CPU to draw only 55 W at idle. That is about one-third or less of what a typical single-CPU server draws today, which can be 150-200 W at idle.
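To put those numbers together, here is a minimal sketch of a pass/fail check against just the thresholds quoted above (85% efficiency at 50% load, 82% at 20% load, a 20 W cap on the redundant supply's static loss, 8 W idle per hard drive, 2 W per gigabyte of memory, and 55 W idle for a base single-CPU server). The function and sample values are illustrative, not taken from the spec itself.

# A rough check against the thresholds quoted above; not the full Energy Star spec.
MIN_EFF_AT_50_PCT_LOAD = 0.85    # power supply efficiency at 50% of rated load
MIN_EFF_AT_20_PCT_LOAD = 0.82    # power supply efficiency at 20% of rated load
MAX_REDUNDANT_PSU_LOSS_W = 20.0  # static loss allowed for the second (redundant) supply
MAX_IDLE_DISK_W = 8.0            # idle power per hard drive
MAX_MEM_W_PER_GB = 2.0           # idle power per gigabyte of memory
MAX_IDLE_1CPU_SERVER_W = 55.0    # idle draw for a base single-CPU server

def meets_quoted_thresholds(eff_50, eff_20, redundant_loss_w,
                            idle_disk_w, mem_w_per_gb, idle_server_w):
    """True only if every threshold quoted in the post is met."""
    return (eff_50 >= MIN_EFF_AT_50_PCT_LOAD and
            eff_20 >= MIN_EFF_AT_20_PCT_LOAD and
            redundant_loss_w <= MAX_REDUNDANT_PSU_LOSS_W and
            idle_disk_w <= MAX_IDLE_DISK_W and
            mem_w_per_gb <= MAX_MEM_W_PER_GB and
            idle_server_w <= MAX_IDLE_1CPU_SERVER_W)

# A hypothetical single-CPU server that clears every bar:
print(meets_quoted_thresholds(0.88, 0.83, 15.0, 6.5, 1.8, 52.0))  # True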

However, the spec excludes blade servers! Due to the complexity of blade chassis, power supply options and server blades from different vendors, this first standard wisely decided not to further delay the release date to include blade servers. (This is still being worked on and is expected to be addressed later this year.)

I predict the specification will have unexpected effects on data center efficiency in the future. While at first blush all this should save energy in the data center, this is just the first of many mandated server and IT equipment energy-efficiency regulations that will impact the relatively flat power curve of data centers. Until recently, most servers and IT equipment drew a substantial fraction of their maximum power even while idle. As these new servers begin to replace older equipment, the effect will be felt and seen in the IT power load, which will vary much more widely than it does today. This means the UPS and cooling loads will become much more dynamic and will require more responsive, scalable infrastructure systems to operate efficiently (on demand) at peak power and cooling loads as well as at low loads.

The data center infrastructure of the future will need to be responsive to continuous and rapid changes in power demands, and especially to moving and changing cooling loads, as IT equipment powers up and down in different areas of the floor. As this new paradigm evolves, the “smart” data center of the future will use advanced power management systems to interactively broker and negotiate power requests (and perhaps charges and rates) from IT equipment to the UPS, intelligent PDUs and, most importantly, intelligent cooling systems, which will need to adapt to varying heat loads while trying to operate efficiently.

Stay tuned as this specification develops. As they say in the auto business, “Your actual mileage may vary.”


June 2, 2009  3:38 PM

Data center pro Michael Manos draws a comic strip, so I do too



Posted by: Mark Fontecchio
Data center humor

Michael Manos, the former data center pro at Microsoft who is now at Digital Realty Trust, just wrote a blog admitting his childhood dream of being a comic strip artist. So he gives it a try here. It’s data center related:

Not bad, although I don’t think Manos would be offended if I told him he should keep his day job. Actually, Manos inspired me to do one of my own. Here goes (mine isn’t data center related, but it’s in color!):


June 2, 2009  3:10 PM

CA to buy some Cassatt assets



Posted by: Mark Fontecchio
CA, Cassatt, Data center power management

Last month we reported that Cassatt Corp., a data center energy management software company, was nearing bankruptcy. The CEO said the company had been unsuccessful in finding any suitors.

That has changed, as CA announced this morning that it would buy some of Cassatt’s assets, including patents, technology and staff. According to the CA press release:

Cassatt’s Rob Gingell, executive vice president of Product Development and Chief Technology Officer, and Steve Oberlin, Chief Scientist and co-founder, have joined CA, along with their team of developers, engineers, and other key employees. In addition, CA has acquired several Cassatt patents and patent applications, as well as other intellectual property.

Cassatt’s founder and CEO Bill Coleman will not be moving to CA — it’s unclear why.

Cassatt made its name by selling software that controlled a server’s power consumption by putting it to sleep when it wasn’t processing any work, though it was also trying to make a name for itself in the cloud computing space. Just yesterday we wrote about Cassatt’s recent user survey on data center energy efficiency, which had some bright points and sore spots.

Terms of the deal were not disclosed. It will be interesting to see whether CA will incorporate the server-sleeping technology into its own systems management software, but from the looks of the press release, it seems like CA will use the technology mostly to push some kind of cloud- or utility-based computing infrastructure. Our guess is that CA will probably run into the same issues Cassatt did if it tries to encourage users to shut down unused servers. The mentality against it within IT is just too strong. We’ll see.

Illuminata analyst Gordon Haff has a piece on the deal, going into some detail about why he thinks Cassatt was doomed — mainly because it was a small company trying to sell cutting-edge software to big companies. Not often is that a recipe for success:

Automation technologies such as Cassatt’s address very real problems. But they’re tough for a small company to sell for a couple of reasons.

The first is that they remain on the leading edge of the adoption curve. Large IT departments are indeed handing off more and more operations to their management software. But relinquishing control of data center operations has long been a slow and incremental process.

The second is that automation software is primarily interesting at large scale. If you only have 10 servers, you probably don’t feel a pressing need to automate. It’s when you have a thousand servers and you can’t run things manually any longer that you are most driven to turn to software for help.

But adopting a management platform for large swaths of a data center is a big commitment and requires a level of trust that enterprises are more likely to place in a CA, Hewlett-Packard, or IBM than they are in a start-up–however great the products.

Jay Fry, a Cassatt marketing guy, weighs in on the CA-Cassatt deal, addressing Haff’s points by saying that an acquisition by CA makes adoption of Cassatt’s technologies by large companies more likely. Unlike Coleman, Fry will be moving over to CA.


June 1, 2009  6:03 PM

Cassatt survey on data center efficiency: Some great findings



Posted by: Mark Fontecchio
Data center power management, Green data center

Data center power management software and cloud computing company Cassatt Corp. has a great survey out now on users’ perceptions of data center efficiency. What makes the survey especially powerful is that it’s in its second year, so the results can be compared with last year’s.

Jay Fry from Cassatt has a couple of blog posts on the survey, and he takes a detailed look at how this year’s results differ from last year’s (check out our own report on Cassatt’s survey last year). I won’t go into all the details of the posts, each of which is worth your time to read, but here’s a quick bullet list of some of the findings:

Good

  • More companies have a corporate “green” initiative
  • More people know how efficient their data center is, and are measuring it
  • More companies have a data center efficiency program in progress
  • The IT/facilities gap is shrinking

Bad

  • Fewer companies can justify shutting off servers when they’re not being used
  • Users are still going primarily to vendors to get information on data center efficiency (as opposed to the media or industry groups like The Green Grid and The Uptime Institute)

Check out Fry’s first post and second post for all the details.


May 29, 2009  12:06 PM

Syracuse University data center to be powered by microturbine generators



Posted by: Mark Fontecchio
Data center power

Syracuse University and IBM are teaming up on a new data center that will be off the utility’s power grid, instead relying on natural gas to power microturbine generators and supply both electricity and cooling to the facility.

The cost of the 6,000-square-foot data center is estimated at $12.4 million, with The New York State Energy Research and Development Authority pitching in $2 million. The server infrastructure will be a mix of IBM boxes — blades, Power-based machines, and z10 mainframes — and the data center will also use IBM’s Rear Door Heat Exchanger, which is a chilled water door on the back of server racks.

Perhaps most interesting about the project, however, will be its use of microturbine generators to power the facility. I spoke yesterday to Roger Schmidt, a distinguished engineer at IBM who was a part of this project, about those turbines. He said the plan is to have 12 of the microturbine generators, each with a power capacity of 60-65 kilowatts, for a total of about 750 kilowatts of maximum power to the facility. Schmidt estimated that when the facility first gets up and running, it will only need 150-200 kilowatts, so it will be able to grow into the capacity. The data center will also be running uninterruptible power supplies (UPSes) and will be tied into the electrical grid as backups.
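The capacity math is simple but worth spelling out. Here is a quick sketch using only the figures Schmidt quotes (12 units at 60-65 kilowatts each, roughly 750 kilowatts total, and an initial load of 150-200 kilowatts); the utilization percentages are just derived from those numbers.

# Back-of-the-envelope capacity math from the figures quoted above.
units = 12
kw_per_unit_low, kw_per_unit_high = 60, 65

total_low_kw = units * kw_per_unit_low    # 720 kW
total_high_kw = units * kw_per_unit_high  # 780 kW
print(f"Installed generating capacity: {total_low_kw}-{total_high_kw} kW")

nominal_total_kw = 750                    # the ~750 kW figure cited
initial_low_kw, initial_high_kw = 150, 200
print(f"Initial utilization: {initial_low_kw / nominal_total_kw:.0%}"
      f" to {initial_high_kw / nominal_total_kw:.0%}")  # roughly 20% to 27%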

The microturbine generators come from Capstone MicroTurbine, one of the biggest manufacturers of these machines. Other companies that make them include Kawasaki and Solar, a division of Caterpillar. Schmidt explained how they work:

“So we bring in natural gas, which in the U.S. is pretty prevalent.  Basically we run that to a turbine, which you burn to create energy that rotates the turbine wheel. That ties into a generator that drives electricity similar to a power plant to power up the data center.”

Schmidt added that the waste heat from the turbine can then be used in two ways: to cool the data center and then for heating buildings elsewhere on campus.

How can waste heat be used for cooling? An adsorption chiller puts the heat through a thermodynamic cycle that, according to Schmidt, converts it into cooling energy. Generating both electricity and useful heat is typically known as cogeneration. Schmidt called this setup “trigeneration” because the generator not only creates electricity, but also creates useful heat that can be used in two ways — to cool the data center and to warm campus buildings.
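As a rough illustration of that energy flow, here is a minimal sketch. The electrical efficiency, chiller coefficient of performance, and heat split below are assumed round numbers for illustration only; none of them come from Schmidt or the article.

# Illustrative trigeneration energy flow: fuel in, electricity out, and waste
# heat reused for cooling (via a heat-driven chiller) or campus heating.
# All efficiency numbers here are assumptions, not figures from the article.
fuel_input_kw = 1000.0          # assumed thermal input from natural gas
electrical_efficiency = 0.30    # assumed microturbine electrical efficiency
chiller_cop = 0.7               # assumed coefficient of performance for the chiller
share_of_heat_to_chiller = 0.6  # assumed split between cooling and campus heating

electricity_kw = fuel_input_kw * electrical_efficiency
waste_heat_kw = fuel_input_kw - electricity_kw
heat_to_chiller_kw = share_of_heat_to_chiller * waste_heat_kw
heat_to_campus_kw = waste_heat_kw - heat_to_chiller_kw
cooling_kw = heat_to_chiller_kw * chiller_cop

print(f"Electricity: {electricity_kw:.0f} kW, cooling: {cooling_kw:.0f} kW, "
      f"campus heating: {heat_to_campus_kw:.0f} kW")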

“I only know of a few other data centers using this technology of cogeneration,” Schmidt said. “It’s kind of a unique technology and really hasn’t been applied to data centers.”


May 26, 2009  6:11 PM

The $1 billion data center?



Posted by: Mark Fontecchio
Data center construction

Data Center Knowledge cites local media reports on speculation that Apple is building a massive data center that could cost as much as $1 billion.

Most of the massive data centers out there from Microsoft and Google have price tags around $500 million, so $1 billion would be a big jump. What makes the cost so high? Who knows. If it happens, this thing is likely to be monstrously large.


May 21, 2009  6:40 PM

Building data centers in Afghanistan



Posted by: Mark Fontecchio
Container Data Center, data center cooling

A couple weeks ago I got the chance to spend the morning with Paul Brenner, who works in the high-performance computing department at the University of Notre Dame. Brenner is spearheading a project to build a containerized data center next to a local municipal greenhouse so that, during winter months, the heat from the servers can be piped into the greenhouse to warm it up. Check out the Notre Dame greenhouse data center story (there’s a cool video).

Another thing I learned from Brenner when hanging out with him is that he is an engineering officer in the U.S. Air Force Reserves, and actually just returned from an overseas deployment in Afghanistan a few weeks ago. While there, Brenner helped build data centers.

Obviously it’s not an ideal place, and Brenner had to do a lot of improvising. A few things complicated his mission. First, because it’s the military, a lot of information is siloed, with only select people able to access it. Not only do different branches of the military want their own data centers and their own servers, but divisions within each branch want close control of their IT assets. As a result, a lot of the data centers there are hodgepodge and small, consisting of a rack here or a rack there.

Brenner mentioned how some of the major IT vendors out there, such as IBM and HP and Sun Microsystems, have tried pitching their containerized data centers as a suitable option for military operations. But Brenner said that even in ideal conditions, deployment time is measured in months. In many cases, Brenner doesn’t have that much time.

So he made do. Oftentimes he would take a bunch of household air-conditioning units and daisy-chain them together, which he said actually led to a good deal of cooling redundancy. It’s all about adjusting to conditions, and when you’re overseas serving your country in a barren desert land, you do whatever you can to keep the computers running.


May 20, 2009  7:10 PM

Facebook spending big money on data center real estate



Posted by: Matt Stansberry
Data Center

Rich Miller at Data Center Knowledge did some great research into how much money the social networking giant Facebook is spending on data center real estate each year. The majority of that spending is with Digital Realty Trust and DuPont Fabros. At this point the company does not plan to build its own data centers. For more info on Facebook’s data center growth, check out this Facebook video.


May 4, 2009  11:18 PM

Uptime Institute to open up data center tier standards



Posted by: Mark Fontecchio
data center availability, data center tier standards, Uptime Institute

The Uptime Institute plans to open up its data center availability tier standards, with two programs geared toward end users and design engineers.

The Uptime tiers have become the de facto standard for availability in the data center industry. The system includes four tiers that escalate in availability as the number increases, with Tier 4 being completely fault tolerant. Uptime has tried to rein in the standards, as many data centers have claimed a certain tier availability without official certification from Uptime. On the other side, some have questioned the relevancy of the tier standards, saying that putting them to practical use can be as difficult as solving the Da Vinci Code.

The first program Uptime will announce tomorrow is the Owners Advisory Committee, a program that could lead to changes within the decade-old tier system itself. The committee will consist of data center end users who are Uptime Institute Site Uptime Network members, and will make annual recommendations on how to update the tier system.

Currently there are about 30 companies that have agreed to be part of the committee, Uptime Institute officials said. They expect there to be many more. Hank Seader, an Uptime consultant who helped develop the tier system, mentioned two issues in particular that he expected to pop up in the committee’s infancy. First, he expects there to be a further refining of Tier 1 that differentiates a non-redundant data center from a server closet or a desktop with a bunch of servers on it. Next, he expects the group to recommend changes that better define what components should be redundant in a Tier 2 data center.

The committee will make recommendations primarily through a Web-based forum. From that forum issues will emerge, and then the committee will make a formal recommendation to Uptime through a voting process.

“Right now the idea is that they’ll make the recommendation, and it will be the current tier certifying authorities deciding how it will go into the standard,” Seader said.

There is more information at Uptime’s Owner Advisory Committee site. Though it doesn’t cost extra to be a part of the committee, you must be a Site Uptime Network member to join. Membership costs $12,000 a year, according to the Site Uptime Network’s call for new members.

Second, Uptime will announce tomorrow an accreditation course on the tier standards for certified engineers. The two-day course will be held quarterly around the country, with the first two scheduled for September in Denver, and the third in Dallas, likely in December. The course will cost $5,000. There’s more information at the Accredited Tier Designer site.

Seader said these will be detailed, technically heavy courses on practically applying the tier standards to real data centers. There will be seven sessions: overview, mechanical infrastructure, electrical infrastructure, ancillary systems (such as water and backup fuel, for example), common disqualifying design omissions, a hands-on group exercise, and the exam.

Julian Kudritzki, vice president of development and operations for Uptime’s professional services division, said the courses will give accredited engineers an advantage as they’ll have an “enhanced understanding of the tier classification standards, and it will be a competitive differentiator for them when they respond to RFPs that have a clear tier design goal aspect to them.”

Perhaps just as importantly, Uptime is moving toward opening the certification process up to outside engineers. Currently only members of Uptime’s professional services division can certify data centers as having a certain tier level. Seader said he foresees a time when engineers who have taken the accreditation course could certify data centers with a certain tier level.

“I anticipate that in three to five years – closer to three – that there will be tier certification authorities outside of The Uptime Institute,” Seader said.

I’m going to try to talk to Site Uptime Network members as well as Uptime critics and have a more fleshed-out story later this week, so stay tuned.

