Tell me about your new role at CRG West so far.
Savageau: Since we’ve been rapidly growing our footprint, I spend half my time talking to customers and half talking to our operations staff. As we continue to build out our data centers, we have a lot more flexibility to design for the 200-watt-per-square-foot world we’re dealing with today for utility compute farms. The cloud computing community is growing rapidly, and we’re building facilities to meet those requirements.
What role is cloud computing playing in your planning?
Savageau: I’ve been a grid fanatic for years, ever since SETI@home, bringing large numbers of distributed CPUs together to solve problems. I think it’s important to encourage the growth of cloud computing into our data centers. Cloud computing is a marriage of grid computing and SaaS, and we had better be thinking about attracting cloud computing companies (or even doing it ourselves in the future).
The ability to have elastic computing as close to the zero-latency points as possible is important.
What’s the relationship between cloud computing and latency issues?
Savageau: For example, in New York or Chicago, we’d like to have the ability to create a zero-latency cross-connection point, a cloud environment where trading companies can do business at an exchange point. For traders, latency and lost transactions are synonymous with lost money. You have to talk about latency in fractions of milliseconds. Companies losing transactions are losing money.
Or in the entertainment industry, we’re talking about video on demand. The fewer hops that occur, and the less delay you put between the origination point and the end user’s eyeballs, the better the experience. Lowering latency is going to be critical in a digital world.
How is cloud computing changing your data center infrastructure strategy?
Savageau: The mechanical side is easier to deal with than the switching side. It’s mostly dealing with the watts per square foot. If we look at what server deployments look like for cloud companies, we’re talking about putting in 25 racks of Verari blade servers, which will require us to have 250-amp, three-phase power.
We’ve learned a lot over the past few years with regard to deploying high-density rooms. Now we understand that 100 amps of 208 V, three-phase power is the mechanical design that meets most customer requirements. But building high density is build-to-suit.
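The capacity behind those feeds follows the standard formula for balanced three-phase power. The quick sketch below is just an illustration (not a CRG West figure), and the usable kilowatts in practice would be lower once power factor and code-mandated derating are applied.

```python
import math

def three_phase_kva(volts: float, amps: float) -> float:
    """Apparent power of a balanced three-phase feed: sqrt(3) * line voltage * line current."""
    return math.sqrt(3) * volts * amps / 1000.0

# The 100 A, 208 V feed described above as meeting most customer requirements
print(f"100 A @ 208 V: {three_phase_kva(208, 100):.0f} kVA")   # ~36 kVA

# The 250 A feed cited for the 25-rack blade server deployment
print(f"250 A @ 208 V: {three_phase_kva(208, 250):.0f} kVA")   # ~90 kVA
```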
Believe it or not, build-to-suit gives companies a really great opportunity to start thinking green. If you’re not thinking green, you’re hemorrhaging money and having a negative impact on the environment. Thinking green on deployment and building for the best efficiency is a religion with us now.
On the data center deployment side, strategies like cold aisle containment and extraction of heat into sealed plenums can be a huge factor in how much it costs for you and your customer.
How is cold aisle containment working out in your data centers?
Savageau: We’re doing cold aisle containment in all of our new deployments. We looked at hot aisle containment, but we prefer to spend our energy cooling the intake side of the servers. The primary consideration is providing cool intake air to the servers.
When you walk into the data centers, the cold air is completely sealed. The server doesn’t care how much heat is coming off the back end. If you concentrate your efforts on the cold air, you’re going to have a much happier server. Cold aisle containment can reduce the electrical draw of cooling systems by up to 25%. That makes the customer really happy.
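To put that 25% figure in perspective, here is a back-of-the-envelope calculation with assumed load numbers (illustrative only, not CRG West’s actual figures): trimming the cooling draw by a quarter takes a meaningful bite out of the whole facility’s power bill.

```python
# Assumed, illustrative load figures (not CRG West's actual numbers)
it_load_kw = 1000.0        # servers, storage, network gear
cooling_kw = 500.0         # chillers, CRAC fans, pumps
other_overhead_kw = 100.0  # power distribution losses, lighting

total_before = it_load_kw + cooling_kw + other_overhead_kw
total_after = it_load_kw + cooling_kw * 0.75 + other_overhead_kw  # cooling draw cut by 25%

print(f"Facility draw: {total_before:.0f} kW -> {total_after:.0f} kW")        # 1600 -> 1475
print(f"Overall saving: {(total_before - total_after) / total_before:.1%}")   # ~7.8%
```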
Microsoft’s goal in 2008 was to shake up the data center community in a big way, from Mike Manos’ announcement at AFCOM that Microsoft would be deploying containerized data centers to Christian Belady’s “data center in a tent” experiment with a PUE of 1.0. Mission accomplished.
These guys are pushing the envelope like no one else in the industry — rabble-rousing at ASHRAE TC 9.9 meetings, calling out vendors, and blogging about it every chance they get. They’re scaring people who have built their reputations and businesses on traditional data center design — and I don’t just mean the people selling chillers and raised flooring. These engineers are mad scientists, thumbing their noses at decades of conventional wisdom.
You can read Microsoft’s proposal for yourself at Mike Manos’ blog, but the basic concept is this: data center trailers with a minimal building envelope, using unconditioned outside air to cool servers. The servers will run on outside air at temperatures ranging from 10 to 35 C and 20 to 80% relative humidity. “For this class of infrastructure, we eliminate generators, chillers, UPSs,” Belady wrote on the blog.
Here is a video:
Video: Microsoft Generation 4.0 Data Center Vision
The applications these servers are supporting have built-in failover, which Microsoft calls “geo-redundancy.” If a server (or a batch of servers) dies, the application automatically shifts over to another batch of servers, and Microsoft technicians replace the failed hardware on a maintenance schedule.
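As a rough illustration of what that failover looks like from the application side, here is a toy sketch; the endpoints and logic are hypothetical, not Microsoft’s actual geo-redundancy implementation. The client simply walks an ordered list of regions and uses the first one that answers.

```python
import urllib.request
import urllib.error

# Hypothetical regional endpoints, for illustration only
REGIONS = [
    "https://us-west.example.com",
    "https://us-east.example.com",
    "https://eu-west.example.com",
]

def fetch_with_failover(path: str, timeout: float = 2.0) -> bytes:
    """Try each region in order and return the first successful response."""
    last_error = None
    for region in REGIONS:
        try:
            with urllib.request.urlopen(region + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this region (or its servers) is down; move on to the next
    raise RuntimeError(f"all regions failed: {last_error}")

# Usage: the caller never needs to know which data center actually served the request.
# body = fetch_with_failover("/status")
```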
For applications that demand higher redundancy, Microsoft will build more robust infrastructure. But thanks to its chargeback program, Microsoft’s business units will be less likely to adopt the more expensive, higher-redundancy configurations if the bare-bones approach proves itself.
Microsoft doesn’t want you to put your data center in a tent. If you want to run big iron, have redundant components, pay big bucks for people to babysit your servers and keep them cool, that’s your business.
But they do want you to know that they plan to run all of their data centers at 1.125 PUE by 2012.
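For context, PUE (power usage effectiveness) is total facility power divided by IT equipment power, so a PUE of 1.125 means just 12.5 watts of cooling and distribution overhead for every 100 watts that reach the servers. The arithmetic below is purely illustrative.

```python
# PUE = total facility power / IT equipment power (illustrative arithmetic)
it_kw = 1000.0  # assumed IT load
for target_pue in (2.0, 1.5, 1.125):
    overhead_kw = (target_pue - 1.0) * it_kw
    print(f"PUE {target_pue}: {overhead_kw:.0f} kW of overhead per {it_kw:.0f} kW of IT load")
```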
That’s what Power Loft is doing today, aiming to scale its facilities up to 300 watts per square foot, which works out to about 10 kW per rack, according to Coakley.
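A quick back-of-the-envelope check (these numbers are mine, not Coakley’s) shows how the two figures relate: at 300 watts per square foot, a 10 kW rack implies roughly 33 square feet of gross floor area per rack, i.e. the cabinet footprint plus its share of aisles and clearance space.

```python
# Back-of-the-envelope check with the figures quoted above (my arithmetic, not the article's)
watts_per_sqft = 300
kw_per_rack = 10

sqft_per_rack = kw_per_rack * 1000 / watts_per_sqft
print(f"Implied gross area per rack: {sqft_per_rack:.1f} sq ft")  # ~33 sq ft
```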
Now the company is building a 200,000-square-foot facility in Manassas, Va. Coakley is hoping that construction, which started last year, will be done in the first quarter of 2009.
“We have not yet leased,” he said. “We’re still in the second phase of construction and activity looks good. I’m curious to see if we do get the high-density crowd like we were hoping. I’m pretty confident that we can operate more efficiently than anything I’ve ever been involved in.”
What’s the benefit for customers? Well, some want that level of high density to accommodate high-density equipment of their own, such as blade servers. Others simply want the ability to start at 150 watts per square foot and go up from there, without having to lease more floor space. That is something Power Loft is offering.
EYP, which Coakley has worked with since about 2002, is assisting Power Loft with the design and construction, and will continue to do the same with future facilities. EYP is now a division of Hewlett-Packard, having been acquired last year. Power Loft is one of the dozens of new data center services customers that HP is touting.
For its Manassas facility, all of the power and cooling infrastructure is on the first floor, and all of the data center space is on the second floor. The company is also building a “power transmission backbone” that can be expanded with either AC or DC power. DC power has become a point of interest for some data centers because of the possibility of saving energy by eliminating some of the conversions from AC to DC and back again.
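The energy case for DC distribution comes down to multiplying per-stage conversion efficiencies. The sketch below uses assumed, illustrative efficiency figures (not Power Loft’s measurements) to show how skipping a conversion stage or two adds up.

```python
# Illustrative per-stage efficiencies (assumed, not Power Loft's measurements)
def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff
    return eff

# Typical AC path: double-conversion UPS (AC to DC, then DC back to AC), then the server's AC-to-DC power supply
ac_path = [0.95, 0.95, 0.92]
# DC path: one rectification stage, then DC delivered straight to the server
dc_path = [0.96, 0.95]

print(f"AC distribution: {chain_efficiency(ac_path):.1%} end to end")  # ~83%
print(f"DC distribution: {chain_efficiency(dc_path):.1%} end to end")  # ~91%
```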
Power Loft has combined eight 20-ton CRACs into a single, massive 160-ton air handler in hopes of handling the cooling load more efficiently. The design is LEED-certified and includes a green roof — what Coakley called a “Chia Pet roof.” It also has vines on the outside of the building and will use waterside economizers to help save on power costs in the cooler months.
Coakley said that after Manassas, Power Loft hopes to expand to San Antonio, Colorado and Atlanta, and then possibly tackle the overseas market after that. Power Loft is using Total Site Solutions, a Columbia, Md.-based company, to maintain the data center facilities once they’re built.