Major data center real estate company Digital Realty Trust has released the results of a new data center study showing that most large companies are ready to expand their data center footprints.
The research was done for Digital by Campos Research and Analysis. Some findings:
- 83 percent of respondents are planning data center expansions in the next 12 to 24 months
- 36 percent of respondents have definite plans to make those expansions during 2010
- The need for additional power is the top reason for data center expansions, rising from fifth place on last year’s survey to first place this year
- Data center and IT budgets are both projected to increase by 8 percent in 2010, up from 7 percent and 6 percent, respectively, last year
- Of those planning to expand, 70 percent are planning large projects of at least 15,000 square feet or 2 MW or more of power
- 83 percent of respondents with definite plans to expand in 2010 plan to do so with a partner that specializes in data center design and construction or data center leasing.
- Of those expanding, 53 percent plan to do so by leasing from a wholesale data center provider
There were also some data center energy efficiency results from the survey:
- 76 percent of respondents now meter their power use
- The number of companies that meter power down to the PDU level increased by 29 percent over last year
- The average reported power usage effectiveness (PUE) rating for respondents’ data centers is 2.9
- One in six respondents report PUE ratings of less than 2.0 for their facilities
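For readers unfamiliar with the metric, PUE is simply total facility power divided by the power that actually reaches the IT equipment, so lower is better and 1.0 is the theoretical ideal. A quick sketch with made-up numbers (not taken from the survey):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 2,900 kW of total draw supporting 1,000 kW of IT
# load yields the survey's average PUE of 2.9 -- 1,900 kW of overhead
# (cooling, power distribution losses, lighting) per 1,000 kW of IT work.
print(pue(2900, 1000))   # 2.9
print(pue(1800, 1000))   # 1.8, the kind of sub-2.0 figure one in six report
```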
The Web-based survey, conducted in January, polled 300 “IT decision makers” at large North American companies with annual revenues of at least $1 billion and/or at least 5,000 employees.
Christian Belady, the Microsoft data center power and cooling expert who helped devise the power usage effectiveness (PUE) metric that has become the industry standard for measuring data center energy efficiency, is changing jobs within the company.
According to a blog he posted yesterday, Belady will move to Microsoft Research and into something called the “Extreme Computing Group.” Belady explains:
Cloud computing has made mining and developing the “right” opportunities even that much more important. We need to think about how we tie together the complete ecosystem of the software stack, the IT, the data center and the grid today and what efficiencies we can drive from our research and development for the future. For those of you that know me – this is the kind of opportunity that makes me salivate. There aren’t many people around tasked with this kind of challenge and this is the opportunity I have been given in the evolution of my career at Microsoft.
Belady is also a member of SearchDataCenter.com’s data center advisory board.
Forrester analysts Rachel Dines and Doug Washburn put a damper on the excitement around the possibility of running a data center on the Bloom Box.
Bloom produces a new fuel-cell technology that runs on natural gas and can serve as an alternative to drawing power from the electric grid. According to Dines and Washburn, Google reported 98% uptime for its Bloom installation. In other words, one nine of reliability. The gold standard is five nines: 99.999% uptime. The gap may not sound like much, but it amounts to more than seven days of downtime a year, which is nothing to sneeze at when you’re running mission-critical applications.
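To see where the seven-day figure comes from, convert each availability percentage into downtime per year; the sketch below assumes a 365-day year:

```python
def downtime_days_per_year(availability_pct: float, days: int = 365) -> float:
    """Days of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * days

one_nine = downtime_days_per_year(98.0)      # about 7.3 days down per year
five_nines = downtime_days_per_year(99.999)  # about 5.3 minutes per year
print(round(one_nine - five_nines, 1))       # 7.3 -- the gap Forrester flags
```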
Second, it isn’t cheap. The boxes, which cost up to $800,000 each, produce power at about 8 cents per kilowatt-hour. That’s competitive with some utility prices in California and the Northeast, but definitely not with rates in Nowhere, Iowa (or Nowhere, Washington or Nowhere, Somewhere Else), where many big companies are siting their data center facilities. That said, if grid electricity prices continue to rise, Bloom becomes more attractive.
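To put the 8-cent figure in context, here’s a rough annual cost comparison for a single 100 kW Bloom box running around the clock. The cheaper utility rate is an illustrative assumption, not a quoted price:

```python
LOAD_KW = 100            # one Bloom "energy server"
HOURS_PER_YEAR = 8760    # 24 * 365

def annual_energy_cost(rate_per_kwh: float) -> float:
    """Yearly cost of running the full load continuously at a given rate."""
    return LOAD_KW * HOURS_PER_YEAR * rate_per_kwh

bloom = annual_energy_cost(0.08)        # $70,080 at Bloom's ~8 cents/kWh
cheap_grid = annual_energy_cost(0.045)  # hypothetical low-cost rural rate
print(bloom - cheap_grid)               # ~$30,660/yr premium per box
```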
The only real plus to the whole thing is being able to boost your company’s green IT credentials. That’s all well and good, and important too, but not necessarily at the top of a data center manager’s priority list. The Forrester analysts continue:
(F)or now the uptime and economic shortfalls of the Bloom Box will be a turn off to most data center managers. However, when these fuel cells start to come down in price and become more reliable, Forrester expects to see these as a significant technology in the next generation data center.
As it turns out, Google actually is using a Bloom Energy box to power an “experimental” data center at its main campus, according to a Reuters story.
Bloom has exploded (or I guess, bloomed) onto the data center industry in the past week, largely due to a “60 Minutes” segment in which Bloom Energy’s founder unveiled the company’s technology. Bloom produces a new fuel-cell technology that runs on natural gas and can serve as an alternative to drawing power from the electric grid. The machines (pictured above), which Bloom calls “energy servers,” cost up to $800,000 and provide 100 kilowatts of electricity. Some major companies — Google, eBay, Bank of America, Walmart — already have them installed.
The 60 Minutes show said that Google is using a Bloom box to power one of its data centers. Then this week, Rich Miller at Data Center Knowledge reported a different story:
It turns out that’s not quite correct. “These fuel cells aren’t powering any off-site data centers,” said a Google spokesperson. “Instead, Bloom fuel cells are powering a portion of Google’s energy needs at our headquarters right here in Mountain View. This is another on-site renewable energy source that we’re exploring to help power our facilities. We have a 400kW installation on Google’s main campus. Over the first 18 months the project has had 98% availability and delivered 3.8 million kWh of electricity.”
Then today, however, Reuters reported that indeed, Google was using it for a data center:
Google founder Larry Page said he was a big supporter of the technology. The search giant was Bloom’s first customer in July 2008 and uses the fuel cell to power a building on its main campus in Mountain View, California, a facility that includes an experimental data center.
“I would love to see us having a whole data center running on this,” Page said.
So Google might not be running a production data center on the Bloom box right now, but I’d be willing to bet that it will be sooner rather than later.
The federal Environmental Protection Agency (EPA) is developing an Energy Star specification for uninterruptible power supplies (UPSes), which are widely used in data centers.
The EPA has extended its reach into the data center industry considerably in the past few years: it has developed an Energy Star spec for servers and is in the process of developing specs for data center facilities and storage equipment. Now it’s looking at UPSes. Here’s an excerpt from the announcement letter the EPA sent out earlier this month:
EPA recently conducted a scoping effort to evaluate UPS products for inclusion in the ENERGY STAR program. EPA reviewed market data and held many discussions with product manufacturers, industry associations, and other parties. As a result of this effort, EPA found the following:
- UPS products are common throughout the enterprise data center, small office, and home office/entertainment markets, and represent an opportunity for ENERGY STAR to expand upon its energy efficiency experience with power supplies and data center IT equipment.
- National energy-savings potential compares favorably to that from other ENERGY STAR product categories.
- Several test standards, including IEC 62040-3 and CAN/CSA-C813.1-0, serve the UPS market. There is opportunity for ENERGY STAR to expand upon these established standards in the development of a standard evaluation method for UPS energy efficiency.
- Efficiency improvements are not likely to have a negative impact on product performance.
- A labeling program will help consumers identify the most efficient UPS solution for their specific application.
The letter also lays out a development timeline:
- February 16 – EPA distributes Specification Framework Document for review
- March 24 – Stakeholder meeting to discuss framework
- April 02 – Comments due on Specification Framework
- Ongoing – Draft specifications and comment periods
- 4Q-2010 – Target Specification finalization and effective date
This week I talked to Carol Sherman, data center director for the state of Michigan, about California’s data center consolidation plan. In a story I wrote about the California consolidation, I raised the question of whether the project is feasible, especially in the tight timeline Governor Arnold Schwarzenegger wants.
A quick summary of the project: California has about 400,000 square feet of data center space in some 400 facilities. Schwarzenegger wants that cut 25% by this July, and 50% by July 2011.
Sherman thinks it’s doable, and people should listen: not only has Michigan completed a data center consolidation of its own, merging 38 data centers into three, but it was Teri Takai, then Michigan’s CIO and now the California CIO, who led the project.
“I think she can, absolutely,” Sherman said. “I would say it was a large project for us, but I would do it again at the drop of a hat. We definitely saved a lot of money and got our data centers in a lot better shape, and she’ll do the same there.”
Sherman said one of the biggest issues of the consolidation was overcoming agencies’ anxiety about losing ownership of their servers. Michigan’s solution was to invite all the agency directors to one of the IT department’s “professionally run” data centers, then show them their own facilities and give them a risk assessment.
“They couldn’t get in line fast enough to have their stuff moved,” she said.
The IT department also gave the agencies free server hosting for a year, so it wouldn’t be a budget issue for them. In fact, the IT department was saving other agencies money during that first year, because they didn’t have to take on the operations and maintenance costs of running the servers themselves.
However, the Michigan data center consolidation took four years. California’s consolidation is about 10 times as large, with a shorter projected timeline. Sherman said that once the agencies signed on, they still had to schedule their migrations. There is a lot of time involved in the migrations, she added, including a lot of planning and time for doing the actual moves.
That said, because California is such a big state, perhaps it has a lot more low-hanging fruit it can pick to reach those 25% and 50% goals. It all remains to be seen, and is something to keep an eye on for sure.
The Uptime Institute has published the program for its May symposium, which will take place May 17-18 in New York City. The roster reads like a who’s who of the data center industry, including Christian Belady, Mark Bramfitt, Jonathan Koomey, Mike Manos, Neil Rasmussen, Werner Vogels and Robert “Dr. Bob” Sullivan.
Data center real estate giant Digital Realty Trust has announced that it has completed its first customer agreement for its POD Architecture Services, with a Fortune 100 financial services company.
The customer’s name wasn’t revealed, but its data center plans were: the first phase includes 30,000 square feet of raised floor space and almost 3.4 megawatts of IT load. The project is ongoing and expected to be complete in 2010.
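Those two numbers imply a fairly dense build-out. Dividing the IT load by the raised-floor area gives the design’s power density; this is a back-of-the-envelope figure, since both inputs are approximate:

```python
it_load_watts = 3_400_000   # almost 3.4 MW of IT load
raised_floor_sqft = 30_000  # first-phase raised floor space

# Power density = IT load / floor area, in watts per square foot.
density = it_load_watts / raised_floor_sqft
print(round(density))  # ~113 W per square foot
```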
Digital’s POD Architecture Services is something of a middle ground between colocation and building a data center on your own. Digital lends its expertise and resources to help a company build and operate its own data center.
“The value proposition for customers is clear: by using our POD Architecture Services the customer is able to complete large datacenter projects, faster, more cost effectively and with lower risk than if they take a purely do-it-yourself approach,” Chris Crosby, senior VP of corporate development at Digital, said in a statement.
Digital first introduced the service in August.
Mike Manos, one of the more well-known and well-respected figures in the data center industry, is making another job change.
It was less than a year ago that Manos left Microsoft’s data center team, where he was general manager of its data center services division, to become a senior vice president at Digital Realty Trust. He summed up the move in a post on his LooseBolts blog:
In the end, my belief is that it will be companies like Digital Realty Trust at the spearhead of driving the design, physical technology application and requirements for the global Information Utility infrastructure. They will clearly be situated the closest to those changing requirements for the largest amount of affected groups. It is going to be a huge challenge. A challenge, I for one am extremely excited about and can’t wait to dig in and get started.
Manos wrote that in May. Now it’s January, and he is moving on, saying he “decided to leave the company to focus a bit more on some personal work/life balance issues.” We can only speculate about what he means.
Then yesterday, Manos wrote another blog post announcing that he will be working for Nokia as its vice president of service operations, describing the role:
In this role I will have global responsibility for the strategy, operation and run of infrastructure aspects for Nokia’s new cloud and mobile services platforms.
It appears Nokia is working to challenge big players like Google and Apple in the cloud computing space (whatever that means), and Manos will be a big part of building up Nokia to get there.
A group of five data center hosting companies has joined forces to aid the relief effort following the massive earthquake in Haiti. The effort is called Hosting for Haiti.
It’s not entirely clear what the five companies – Rackspace, Peer1, The Planet, GoGrid and ServInt – are doing, but it appears they’ve come together to encourage people, particularly their customers, to donate.