First, I have to tell you that the Tier Levels have nothing to do with energy savings. In fact, a true Tier 4 (of which there are very, very few) is likely to be less energy-efficient than a Tier 3, simply because all the redundant systems are usually operating well below their optimum efficiency ranges. I have had conversations with leaders of the Uptime Institute on this issue, and the conclusion has been that if the business demands of an organization can truly justify the enormous expense of a Tier 4 facility (capital costs, maintenance and IT systems), they can also justify any added energy costs associated with it. This is not to say that a Tier 4 facility can’t be designed with energy-saving measures, because it certainly can and should. But the same measures, equally well applied to an identically sized Tier 3 Data Center, would likely result in less energy consumption.
But to the point of your question: your mention of covering holes in the floor and installing blanking panels in unused equipment spaces is certainly an important first step. Likewise, filling any gaps between cabinets to create continuous rows should help. (See my Blog Article “<a href="http://searchdatacenter.techtarget.com/general/0,295582,sid80_gci1149289%20,00.html">Block Those Holes</a>”).
You might even consider putting old cabinets or baffle plates at the ends of rows to minimize air recirculation around the ends. This assumes, of course, that your cabinets are already arranged in a Hot Aisle/Cold Aisle configuration. If not, that would really be your first step, but it requires significant planning and a lot of effort to rearrange cabinets in a working Data Center.
Without knowing the size or layout of your Data Center, it’s hard to make other specific suggestions, but some other important considerations are:
<ul>
<li>Remove cabinet doors, or replace them with high-flow doors (generally about 68% open area). Get rid of cable blockages at the backs of cabinets – particularly the “folding wire managers” that are so popular but often block exhaust air from the servers. Equipment is designed to cool itself properly if you supply enough air to it at the right temperature and don’t restrict its free passage through the hardware.</li>
<li>Clean out old under-floor cable and reorganize the good stuff to minimize obstacles to air flow.</li>
<li>Raise Air Conditioner Return Air Set Points to 75°F, or even higher if you can. Check temperatures near the tops of all cabinets to make sure your air supply and distribution are good enough to maintain a maximum of 77°F at these levels. If not, and if you can’t improve the air flow to the higher-temperature cabinets, then unfortunately you will probably need to lower the set point until you can. (If you can’t properly cool upper-cabinet equipment even at much lower settings, you have much bigger problems and will need competent professional advice.)</li>
<li>Set relative humidity to 45%, or even 40% if your climate and facility will allow it without ESD problems.</li>
<li>Shut off any hardware that is no longer actually being used. (Many Data Centers have equipment still running that they don’t realize no one is using any longer.)</li>
<li>Activate the energy-saving features on servers. Most newer servers have this capability, but they usually come from the factory deactivated, and most tend to remain that way. It can make a big difference in energy usage.</li>
<li>If you haven’t yet started to think about consolidation and virtualization, it might be worthwhile to begin investigating. There is a lot involved, and it’s not the “magic bullet” solution to everything, but many Data Centers have been able to eliminate large numbers of servers that were wasting enormous amounts of energy running at 15% or 20% utilization.</li>
<li>Install motion-sensing timers to turn lights off when no one is in the Data Center. This saves energy and reduces heat load.</li>
</ul>
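The set-point advice above is really a simple decision rule: raise the return-air set point to 75°F only if every top-of-cabinet reading stays at or below 77°F, and otherwise fix the airflow (or back the set point down) first. A minimal sketch of that check, using hypothetical cabinet names and temperature readings:

```python
# Thresholds from the advice above; the readings themselves are hypothetical.
TARGET_SETPOINT_F = 75.0      # proposed return-air set point
MAX_TOP_OF_CABINET_F = 77.0   # maximum acceptable temperature at cabinet tops

def audit_cabinet_temps(readings):
    """Return the cabinets whose top-of-cabinet reading exceeds the limit."""
    return [name for name, temp_f in readings.items()
            if temp_f > MAX_TOP_OF_CABINET_F]

# Hypothetical survey of top-of-cabinet temperatures (°F)
readings = {"row1-cab3": 74.5, "row2-cab1": 78.2, "row3-cab7": 76.9}

hot_cabinets = audit_cabinet_temps(readings)
if hot_cabinets:
    print(f"Improve airflow (or lower the set point) for: {hot_cabinets}")
else:
    print(f"Safe to raise the return-air set point to {TARGET_SETPOINT_F}°F")
```

In practice you would feed this from whatever monitoring your facility already has, but the logic is the same: one bad cabinet is enough to block the change.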
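To get a feel for why consolidating lightly loaded servers matters, a back-of-the-envelope calculation helps. All of the numbers below (server counts, per-machine wattages) are hypothetical assumptions for illustration, not measurements:

```python
HOURS_PER_YEAR = 8760

def annual_kwh(count, avg_watts):
    """Annual energy draw in kWh for `count` machines at `avg_watts` each."""
    return count * avg_watts * HOURS_PER_YEAR / 1000.0

# Assumed: 40 servers idling at 15-20% utilization, ~300 W each,
# consolidated onto 6 busier virtualization hosts at ~450 W each.
before = annual_kwh(count=40, avg_watts=300)
after = annual_kwh(count=6, avg_watts=450)

print(f"Before: {before:,.0f} kWh/yr, after: {after:,.0f} kWh/yr, "
      f"saved: {before - after:,.0f} kWh/yr")
```

Even with conservative assumptions the savings are large, and remember that every watt removed from the IT load also removes cooling load on top of it.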
Of course, there are big factors such as raised floor height, air conditioner placement, return air paths, “free cooling”, UPS configurations, etc. that can make substantial differences in energy usage. But making changes to these systems requires major engineering, brings operational disruption and risk exposure, and carries significant expense, which is why really “going green” usually means building a new Data Center.