Data center facilities pro


January 19, 2010  11:46 AM

Data Center Pulse and The Green Grid team up

Mark Fontecchio

Data Center Pulse, a data center end user group, has formed a collaboration with The Green Grid, a nonprofit data center efficiency group.

Data Center Pulse has 1,350 data center end user members in 55 countries, so the collaboration could fill what has been a visible hole in The Green Grid's membership: the dearth of end users. It will give The Green Grid quick access to that end user base to validate its work, most likely through end user surveys, and Data Center Pulse members will now have a more direct path into Green Grid work.
The Green Grid has made inroads in developing its relationship with users, forming an end user advisory council from eight companies – AT&T, ADP, eBay, Nationwide, Strato, The Walt Disney Company, Tokyo Electric, and Verizon. The council will now be the major conduit between the Data Center Pulse community and the rest of The Green Grid.

January 5, 2010  1:24 PM

A monkey is better than Jim Cramer at picking data center stocks

Mark Fontecchio

In October, Jim Cramer, the host of “Mad Money” on CNBC, predicted that data center stocks would tumble, in part because the Intel Nehalem chip would do so much more inside a server.

Rich Miller at Data Center Knowledge took issue with it and called Cramer “dumber than a bag of hammers” when it came to data centers. As it turns out, Miller was right. He’s been tracking data center stocks since then and found that many actually had double-digit increases — Internap, Akamai, Equinix, Switch and Data, and Rackspace. Only one, Savvis, had a double-digit decrease since then.

Miller’s take on it all:

To be sure, data center stocks won’t go higher forever, particularly after the stellar runup for the sector during 2009. But if you’re interested in understanding the industry and its future, take the time to understand the factors that are driving the demand for data center space, and how technology is impacting these facilities and their design. As we noted in our October post, Nehalem processors will allow companies to do more with less, but they’re not going to empty out all the data centers.

In the comments, someone wrote that “(w)hile this particular rant of his is about your specific industry, his predictions in general are around 48% good. (That % came from a study someone did over the past 3-5 years, if memory serves.) Point being, don’t get all bent out of shape about it.”

I found that kind of funny, because at that rate, a monkey would be a better predictor than Jim Cramer. Check it out: Take a monkey and put two pieces of paper in front of it, one that says “buy” and one that says “sell,” and follow the advice on whichever piece of paper the monkey chooses. Over the long run, you’ll have a success rate of 50%, better than Jim Cramer. You’re welcome!
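The 50% figure is just the long-run hit rate of a fair coin flip, which can be sanity-checked with a quick simulation (the function name and seed below are illustrative, not from any study):

```python
import random

def monkey_picks(n_calls: int, seed: int = 0) -> float:
    """Simulate a monkey picking 'buy' or 'sell' at random for n_calls
    predictions, where each direction is equally likely to be right.
    Returns the fraction of correct calls."""
    rng = random.Random(seed)
    correct = sum(rng.choice([True, False]) for _ in range(n_calls))
    return correct / n_calls

# Over many picks, the hit rate converges on 0.50 -- a couple of
# points above the 48% attributed to Cramer in the comment.
print(f"monkey hit rate: {monkey_picks(100_000):.3f}")
```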


November 30, 2009  2:11 PM

Helsinki to be heated using recycled data center heat

Mark Fontecchio

Reuters has a story today on a plan to recycle data center heat to help warm Helsinki, the capital of Finland.

The data center will be located underground, below a city cathedral. The heat will be directed into the city’s heating system, which according to the story includes “water-heated pipes” used to warm homes in Helsinki.

The data center is expected to be up and running in January. The company building and operating the data center is Academica, a local IT services firm. The heat from the facility will be enough to warm the equivalent of about 500 homes.

Reusing data center waste heat has gained traction in recent years, as operators look for ways to put the heat to good use after it has passed through the servers. Check out “Companies reuse data center waste heat to improve energy efficiency” and “Data center air recycling saves cash-strapped greenhouse.”


November 18, 2009  3:00 PM

Microsoft’s new data center container, designed for fresh air

Mark Fontecchio

Microsoft has unveiled a new container-based data center at the Windows Professional Developers Conference in Los Angeles this week. The container looks to be moving toward an even more open architecture in terms of eschewing raised floors, traditional cooling and even roofs. Take a look:

[kml_flashembed movie=”http://www.youtube.com/v/e8fPzpKDTEQ” width=”425″ height=”350″ wmode=”transparent” /]

This is part of what Microsoft calls its “Generation 4” data center design. It’s the next step up from its container-based data center in the Chicago area, the grand opening of which we attended a couple months ago. The Generation 4 design appears to take in fresh air from outside the container, through filters and into the cold aisle, rather than relying on traditional cooling.

Hat tip to Data Center Knowledge for pointing us to this video.


November 3, 2009  5:26 PM

FBI InfraGard program key to data center physical security

Matt Stansberry

By John Parkinson, Contributor

According to one data center pro, the FBI’s InfraGard program is key to his data center’s physical security. The FBI InfraGard program began as a way for government and private industry to share information about cyber threats. After the 9/11 attacks, the program’s mandate widened to include physical security threats to all of the nation’s critical infrastructure, including businesses, academic institutions, state and local law enforcement agencies, and other participants.

InfraGard has chapters aligned geographically with FBI field office territories throughout the U.S. These chapters meet regularly to share information on the latest threats and solutions, according to Noel Rojas, vice president of corporate security at the colocation company Terremark, and a member of InfraGard.

Rojas says this type of cooperation between government, private industry, and academia has never existed at this level before, and its importance cannot be overstated. “It is probably one of the most significant trends, if not the most significant in security, that has come about in the last few years,” states Rojas.

In addition to meetings and online forums, InfraGard has a secure Web site where the latest threat advisories are posted. One benefit of the site is that members can receive alerts on mobile communication devices about situations that need immediate attention, says Rojas. For example, if there is a regional or national alert, members would receive an email on their BlackBerry advising them to sign in to their accounts to view the message.

Related to this is the ongoing trend of the federal government outsourcing its traditional duties and responsibilities, including the custody of its confidential data. “It is estimated that 85% of the nation’s critical infrastructure is owned and operated by private industry,” says Rojas.

With so much data and infrastructure out of the federal government’s hands, the hope and the plan is that programs like InfraGard can open up the communication channels between the public and private sectors and keep the latter in the loop about potential threats and their vulnerabilities.

Click here to sign up for InfraGard and find your local chapter.


October 19, 2009  10:01 PM

Mark Bramfitt leaves PG&E, and a gap in data center utility relations

Matt Stansberry

Northern California utility Pacific Gas and Electric Co. made headlines by finding novel ways to offer rebates for energy efficiency in the data center. The company’s programs have become a model for other utilities in the state and around the world. Mark Bramfitt headed up PG&E’s data center energy initiatives until last Thursday, when he announced he was leaving the utility.

John Sheputis, CEO of Fortune Data Centers and a recipient of nearly $1 million in PG&E data center efficiency rebates, was at the Silicon Valley Leadership Group Data Center Energy Efficiency Summit when Bramfitt broke the news.

“The reaction from the crowd was impressive… and for good reason. Mark is an excellent speaker, a very well known entity in the valley, and among the most outspoken people I know of regarding the broader engagement opportunities between data centers and electricity providers,” Sheputis said. “No one has done more to fund efficiency programs and award high tech consumers for efficient behavior.”

Bramfitt and PG&E started focusing on data center utility rebates and incentives in 2006. According to Bramfitt, PG&E’s data center program saved three megawatts of load in 2007 and more than doubled that in 2008, saving seven megawatts. But getting IT pros to jump through the hoops to get the rebates was a difficult task.

“The effort Mark was undertaking was critical and immensely difficult,” said Mark Thiele, Director Business Operations R&D at VMware and co-founder of Data Center Pulse. “I sincerely hope that PG&E hires someone to pick up where Mark left off. However, if his replacement is to be successful, I think s/he will have to do more to bring the public and government agencies into the fight. It’s possible that getting new blood, as well as renewed enthusiasm for the effort will be just what the doctor ordered.”

“PG&E had the leading program in the U.S. to reward end users for their efficiency choices, but it was rather difficult to obtain the rebates,” said Dean Nelson, data center director at eBay. “Many companies found the effort not worth the return. I think Mark Bramfitt had some very good successes and all the right intentions, but lacked the future support within PG&E to continue. I believe it is crucial that the utilities (all of them) interface with their corporate customers, and each other, to decrease consumption and optimize efficiencies. I hope that PG&E has plans to fill this position and/or refocus the program to make it simpler for data center users to take advantage of it. They have momentum and should continue. Killing it would be a mistake.”


August 24, 2009  4:15 PM

Data center earns nearly $1 million in utility rebates from PG&E

Matt Stansberry

Hosting and wholesale data center provider Fortune Data Centers walked away with a $900,000 check from Northern California energy utility Pacific Gas & Electric for making energy efficiency investments in its new San Jose data center.

Fortune opened the new data center for business in April 2009, reclaiming a former manufacturing site. After phase I of construction, the 78,000-square-foot facility has 43,000 square feet of IT space, and the company is claiming a PUE of 1.37, which, if correct, would mean the facility’s mechanical infrastructure is highly efficient compared with the industry average of 2.0. See Mark Fontecchio’s article for more on evaluating public data center PUE claims.
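PUE is simply total facility power divided by IT power, so the gap between 1.37 and 2.0 is easy to put in concrete terms. A minimal sketch (the 1,000 kW IT load below is an illustrative assumption, not a figure from Fortune):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. A PUE of 1.0 would mean every watt drawn
    at the meter reaches the IT gear."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# At the claimed PUE of 1.37, a hypothetical 1,000 kW IT load implies
# 1,370 kW at the meter; at the 2.0 industry average it implies
# 2,000 kW -- roughly 630 kW of overhead avoided.
print(pue(1370, 1000))  # 1.37
print(pue(2000, 1000))  # 2.0
```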

The utility rebate came from PG&E’s High Tech-IT Facility program, which offers incentives for California data centers to reduce energy demand.

According to Fortune Data Centers CEO John Sheputis, the energy savings came from optimizing the cooling system and purchasing the most efficient uninterruptible power supply systems possible. Sheputis said the PG&E payments offset about half the cost of the premium for implementing high efficiency infrastructure.

PG&E engineers were very involved in the design and commissioning of the project. “PG&E has a commissioning agent to make sure you’ve installed the energy efficient devices and to make sure that it’s working,” Sheputis said. “They have to see the equipment operating. We get payments as the measures are brought online. They’re not allowed to just go around handing out checks.”

While $900,000 is a lot of cash, it’s not the largest data center utility rebate so far. NetApp scored $1.4 million from PG&E for its data center efficiency measures. Sheputis golfs with NetApp data center execs, and joked that “They’re dissing me for not getting as big a check as they did.”


August 20, 2009  2:36 PM

More on UPS battery monitoring

Mark Fontecchio

In our story on extending UPS battery life, I wrote about how monitoring battery capacity is crucial. Well, Jim Reed, the president of battery monitoring company Power Data Systems in Boonton, N.J., wrote to add a few things to that list. Here’s what he wrote:

Hi Mark, we read your article “Four Ways to Extend Data Center UPS Battery Life.” While we agree with everything that you suggested, you neglected to mention anything about using permanent online battery monitoring systems with ohmic measurement capability, which provide accurate data on the health of the battery string down to the cell or jar level. This data will provide advance warning of batteries that are starting to exhibit a problem that will eventually lead to an early death. The IEEE has written a standard for battery monitoring, IEEE Std 1491, the “IEEE Guide for Selection and Use of Battery Monitoring Equipment in Stationary Applications.”

Another reader wrote in regarding my point that mixing batteries of different ages or internal resistances can prompt batteries to die more quickly. George Pederson, the business development manager at Rockaway, N.J.-based battery monitoring company BTech Inc., said that doesn’t mean you have to replace the whole battery string whenever one battery needs replacing. He elaborated:

This is not true; it is not the new battery that unbalances the string. It is typically because the battery unit that failed was not identified sooner and had already done the damage to the other units. There is one caveat: the replacement unit has to be fully charged. A lot of times the service company will simply take a unit straight from the battery manufacturer and put it in service without a refreshing charge to ensure that the unit is fully charged, and this will result in an unbalanced string.

Quite simply, if the job is done correctly, there is no reason why users shouldn’t get the full five years from a high-rate battery.

Thanks to both Jim and George for writing in.


August 6, 2009  8:28 PM

Future Facilities updates data center CFD tool

Matt Stansberry

London-based Future Facilities released Version 5 of its data center computational fluid dynamics software, 6SigmaDC. Large data centers and facility design consultants (including Emerson/Liebert’s data center consulting arm) use the tool to model data center cooling.

New features in Version 5 include:

- Modeling internally cooled cabinets, so they can be registered for inventory purposes and accounted for in the power system as well as for power scaling needs.
- Modeling equipment weight, to assess whether a given layout will breach floor loading limits.
- Cable penetrations can now be attached to the raised floor, to account for cases where they are installed before racks and cabinets.
- Infrastructure outside the facility can now be included in graphical views, and animations can be generated to show how airflows develop.

While data center pros rave about the tool (6SigmaDC won silver in our 2007 Products of the Year), it comes at a steep price: up to three times the cost of the competing TileFlow data center CFD software, according to data center design consultant Pete Sacco.


July 27, 2009  4:53 PM

Eaton pushes 400/230V unit, but is it ready for (U.S.) primetime?

Mark Fontecchio

Last week, Eaton Corp. announced a new uninterruptible power supply (UPS) with 400/230V power distribution, which is uncommon in U.S. data centers.

In most data centers, power comes from the utility at 480 volts and then gets stepped down to a 208/120V distribution at the server level. With the Eaton product, the power comes from the grid at 480 volts and gets stepped down to the 400/230V distribution, which is commonly used in Europe, Asia and South America. According to an Eaton whitepaper, the 400/230V distribution is 4% more efficient than the traditional UPS that distributes at 208/120V.
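Both voltage pairs describe the same balanced three-phase wye arrangement: the line-to-line voltage is the line-to-neutral voltage times the square root of 3, which is where 208V comes from 120V and (nominally) 400V from 230V. A quick check of that relationship:

```python
import math

def line_to_line(line_to_neutral_v: float) -> float:
    """In a balanced three-phase wye system, line-to-line voltage
    equals line-to-neutral voltage times sqrt(3)."""
    return line_to_neutral_v * math.sqrt(3)

# 120V line-to-neutral gives the familiar 208V line-to-line;
# 230V gives ~398V, rounded up to the nominal "400V" rating.
print(round(line_to_line(120)))  # 208
print(round(line_to_line(230)))  # 398
```

The efficiency gain Eaton cites comes from eliminating a transformation stage, not from the arithmetic itself; the function above just shows why the two voltage pairs always travel together.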

“It only took us about 100 years to get to where the rest of the world has been for decades,” Peter Sacco, president of data center engineering firm PTS Data Center Solutions, wrote in an email to me. “The technical advantages of 400V are undeniable, and I believe it is an outstanding improvement over our 208/120V standard, as well as better than DC power distribution.”

But hold on.

“However, don’t buy a new electric razor just yet,” Sacco advised. “Until there is broad acceptance by the power distribution industry, don’t expect to see it widely utilized just yet.”

