A recent Ponemon Institute study revealed that 95% of organizations represented by the 450+ U.S.-based data center professionals surveyed have experienced an unplanned data center outage within the past two years.
For some perspective, a data center outage can cost thousands of dollars per second of downtime.
The numbers just don’t add up.
And the people running these facilities seem to know it. Of the respondents to the Ponemon study:
- only 37% stated they have ample resources to bring the data center back up in the event of an unplanned outage.
- a mere 32% believe they’ve used best practices in data center design and redundancy to maximize availability.
How can I keep my data center from being doomed?
As member Carlosdl pointed out, money is anything but a minor concern when issues such as unplanned outages or overwhelming data growth present themselves: “Well, yes, money tends to be an obstacle when solving situations (such as data growth) that weren’t seen as problems in the past implies the acquisition of new tools/services. But maybe we wouldn’t need new tools or services to manage data growth if an appropriate data retirement policy would have been set in the past.” He poses the question: are there seemingly innocuous practices we treat as normal today that will require real investment to fix in the future?
Perhaps something like virtual machine sprawl, born of the haphazard, policy-free creation of virtual machines for testing applications, will prove an obstacle for future data center administrators. Since free solutions are the exception rather than the rule, data center admins need to step up their game and get creative as well as cautious; even a simple inventory audit, like the sketch below, can catch sprawl before it becomes tomorrow’s budget line.
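There’s no off-the-shelf fix for sprawl, but a bare-bones audit beats none at all. Here’s a minimal sketch in Python, assuming your platform can export a VM inventory to a CSV with name, owner, and last_powered_on columns; the file name, the test-VM naming convention, and the 90-day threshold are all illustrative assumptions, not features of any particular product.

```python
# Hypothetical VM sprawl audit: flags test/dev VMs that look abandoned.
# Assumes an inventory export "vm_inventory.csv" with columns
# name, owner, last_powered_on (ISO dates) -- all illustrative.
import csv
from datetime import datetime, timedelta

INVENTORY = "vm_inventory.csv"             # assumed export from your platform
TEST_PREFIXES = ("test-", "dev-", "tmp-")  # naming convention is an assumption
STALE_AFTER = timedelta(days=90)           # illustrative retirement threshold

def parse_date(d):
    return datetime.strptime(d, "%Y-%m-%d")

def main():
    now = datetime.now()
    with open(INVENTORY, newline="") as f:
        for row in csv.DictReader(f):
            name = row["name"]
            # Only consider VMs that follow the test/dev naming convention.
            if not name.startswith(TEST_PREFIXES):
                continue
            idle = now - parse_date(row["last_powered_on"])
            # Flag VMs idle past the threshold as retirement candidates.
            if idle > STALE_AFTER:
                print(f"{name}: owner {row['owner']}, idle {idle.days} days "
                      f"-- retirement candidate")

if __name__ == "__main__":
    main()
```

The script matters less than the policy it encodes: agree on naming conventions and retirement thresholds before the test VMs get created, and the audit becomes a formality instead of an archaeology project.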
Caution? We don’t need no stinkin’ caution!
The truth is, when you’re wandering the million-square-foot showroom floor at the latest IT conference, shiny new products and promises can be enticing. But every new application or process you adopt comes with risks, and policies must be firmly in place before deployment.