I think it’s safe to say the concept of “Power IT Down Day” is extremely positive, but I have to agree with Oestreich here. While installing power-management features on PCs is nice on the surface (a similar program is run by BigFix: http://blogs.eweek.com/masked_intentions/content/green_computing/federal_government_puts_the_bigfix_in_to_go_green.html), the larger issue for businesses is the growing energy consumption of datacenters (5 million kW consumed in 2005 in the U.S., expected to grow 76% by 2010). Oestreich makes the sobering point in his blog that if the software used for Power IT Down Day were deployed enterprise-wide it would save 9,100 kWh – but something as simple as switching from AC-powered to DC-powered IT equipment would yield much larger benefits. An AC-powered datacenter requires 5-7 conversions (AC to DC) and transformations (higher voltage to lower voltage) between the utility and the IT equipment (which runs on DC power), losing energy as heat at each step. In other words, studies have shown that a typical AC power path is roughly 43%-72% efficient, compared to a Validus DC power solution at 84%-90%. This can equate to cost and overall energy savings of roughly 50%. Just more food for thought for next year’s “Power IT Down Day.”
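As a rough sanity check on those figures, here is a back-of-the-envelope sketch. Only the 43%-72% and 84%-90% efficiency ranges come from the text above; the 1,000 kW IT load is a hypothetical round number.

```python
# Rough illustration of the AC-vs-DC efficiency comparison quoted above.
# Efficiency ranges are from the text; the 1,000 kW IT load is hypothetical.

def utility_draw_kw(it_load_kw, path_efficiency):
    """Power drawn from the utility to deliver it_load_kw to the IT gear."""
    return it_load_kw / path_efficiency

it_load = 1000.0  # kW of actual IT load (hypothetical example)

ac_worst = utility_draw_kw(it_load, 0.43)  # worst-case AC path, ~2326 kW
ac_best = utility_draw_kw(it_load, 0.72)   # best-case AC path, ~1389 kW
dc_best = utility_draw_kw(it_load, 0.90)   # best-case DC path, ~1111 kW

# Comparing the best DC path to the worst AC path gives roughly 50% savings:
savings = 1 - dc_best / ac_worst  # ~0.52

print(f"AC worst case: {ac_worst:.0f} kW from the utility")
print(f"DC best case:  {dc_best:.0f} kW from the utility")
print(f"Savings: {savings:.0%}")
```

The ~50% figure quoted above corresponds to the widest gap (worst AC path vs. best DC path); comparing the best AC path to DC still shows a clear, if smaller, advantage.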
I think the most important message is that you should not do one thing or the other, but look at the bigger picture. There is a lot to do in numerous areas, some small, some big. And lots of the small things (the low-hanging fruit) cost little or nothing and are easy to implement. Yet there are many ICT and facility departments that have never looked at these opportunities. Things like:
- Measure your energy consumption and find out where it goes (cooling, electrical equipment, ICT, etc.). You will be surprised what insight into consumption can do to motivate and identify improvements.
- Raise the [B]inlet[/B] temperature of the DC to 24°C (75°F). The outlet temperature is allowed to be higher than that (as long as the outlet air is not used again for cooling).
- Make sure that your racks are placed front to front and back to back. Make sure that cold air only comes up in the cold aisle (front to front), so close all other holes in the floor and place the perforated tiles accordingly.
- Use blanking panels in the rack for unfilled Us.
- Use ‘brushes’ for holes in the floor where cables come up but air is not supposed to.
- Critically look at what is on your UPS and take off anything that is not critical. Also check whether the airco equipment is on it: because of UPS losses (usually between 5% and 20%) you continuously lose energy. So if the airco is on the UPS, take it off. Either you have backup generators that start within a short time, so you will not miss the cooling for that small timeframe, or you don’t, and the site will go down anyway if the power does not come back in time (taking the cooling off the UPS actually increases the time you have).
- Insulate (tape or board up) windows that bring in heat from the sun.
- Make the DC a dark room and only turn the lights on when you or your colleagues enter it. Computers don’t need light to operate.
- Permanently turn off comatose servers.
- Temporarily turn off servers that are not doing anything at the moment (this could even be manual, but preferably tooled). This could be at night, in the weekends, or what about the server that is only used for the monthly run?
- Start consolidating and virtualizing. This will need some investment, but there is usually no need to do it big bang. Just start with one server, then add another that you already needed to replace, and so on. This gives you the chance to set up the virtual environment, build up experience, etc. The biggest investment is not the server you need to start with (you could even take one you already own), but the need to design, set up and test the virtualized platform and to train your people. The rest of the investments can be paid from the savings on the servers you have just virtualized. Be aware that some software costs can increase (e.g. the virtualization software itself), and some license models (e.g. databases) are more expensive on virtualized platforms than on physical machines; others may go down. In general, with server consolidation you save roughly $5 on hardware, software and staff for every $1 you save on energy.
- Move storage to a central solution (assuming most of you already have something like central storage).
- Turn off unused storage devices (if a server has 4 drives but you only use 2 for the system software, disable the other 2).
- Free orphaned or allocated space that is unused.
- Delete data that no longer needs to be kept.
- Offload archive data to tape.
- Compress data that is not performance-critical.
- Eliminate applications that are no longer in use (clean up the portfolio).
- Use the power-save options already available on PCs and laptops (though a tool offers a more complete solution, the basics are there and easy to roll out).
- Reduce local printers, copiers and scanners; provide centrally positioned printer/copiers per floor.
- If you are already using SBC (server-based computing), look at thin clients as an option when PCs need replacement or when buying new ones.
- Measure and publish energy usage.
- Make somebody in the MT (management team) responsible.
- Have ICT and facilities work together (may sound like a novel idea, but it actually helps).
- Look at the small things in your day-to-day operations, e.g. procurement (look at energy consumption and cost when you buy new equipment) and change management (how does this new project affect my energy usage, and does it fit my cooling capacity?).
- Ask your staff to get involved and come up with ideas.
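Two of the points above, the UPS losses and the consolidation rule of thumb, can be put into rough numbers. Only the 5%-20% leakage range and the $5-per-$1 ratio come from the list; the 50 kW cooling load and the $10,000 energy saving are hypothetical examples.

```python
# Back-of-the-envelope numbers for two of the tips above. The leakage range
# and the savings ratio come from the text; both loads are hypothetical.

HOURS_PER_YEAR = 24 * 365  # 8760

def ups_leakage_kwh(load_kw, leakage_fraction, hours=HOURS_PER_YEAR):
    """Extra kWh burned per year by routing a load through the UPS."""
    return load_kw * leakage_fraction * hours

# Hypothetical 50 kW cooling load kept on the UPS:
low = ups_leakage_kwh(50.0, 0.05)   # ~21,900 kWh/year wasted
high = ups_leakage_kwh(50.0, 0.20)  # ~87,600 kWh/year wasted

def consolidation_savings(energy_savings_dollars, ratio=5.0):
    """Total yearly savings: the energy saving plus ~5x that on HW/SW/staff."""
    return energy_savings_dollars * (1 + ratio)

# Hypothetical $10,000/year energy saving from virtualization:
total = consolidation_savings(10_000)  # $60,000/year overall

print(f"UPS overhead on cooling: {low:,.0f}-{high:,.0f} kWh/year")
print(f"Consolidation: $10,000 energy saving -> ${total:,.0f} total")
```

Even at the low end of the leakage range, taking the cooling off the UPS saves a meaningful amount of energy every year, for essentially no investment.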
Most of these things are relatively easy to execute and do not require large investments. Some will be more applicable to your situation than others, but just implementing several of these optimizations will probably save you 10%, 20% or more on your monthly energy bill and have the tree-huggers cheer you when you pass them in the hallway (no pun intended here).
Incidentally, most of them also free up floor space and create room to grow again in power and cooling consumption. They will probably save you, or at least delay, future investments in the facilities area that you would otherwise have had to make.
If you have any questions or remarks, mail me….
Aernoud van de Graaff
(sorry about the English, not my native language)