Posted by: Nlyte Software
by Mark Harris, Vice President of Marketing and Data Center Strategy for Nlyte Software
Over the past year, the NSA has come under intense public scrutiny over its intelligence-gathering and data-mining practices, and as if that weren't bad enough, a recent set of articles published in the Wall Street Journal and Computerworld, among others, reported that the very data center in Utah built to support these activities was literally melting down, with some dramatic and high-profile failures. As taxpayers, we should step back, consider billion-dollar data center projects like this one, and understand why they fail. Forget what the NSA is doing with these data centers for a minute and ask: how could it invest almost a billion and a half dollars of public funds in a project that fails so miserably on day one? Most concerning, the same group chartered to deliver the Utah data center is also tasked with delivering another $900 million data center in Maryland.
In a nutshell, the NSA simply does not understand the technology and business of hyper-scale data centers. It lacks state-of-the-art experience in hyper-scale computing, and it has chosen to go it alone rather than follow the best practices pioneered by the companies that build such centers, like Facebook, Apple and Google. Somehow the NSA embarked on a path of using public funds to build these mega-data centers without a fundamental understanding of what has changed in technology over the past dozen years, or of how to manage these investments over time. It appears the NSA built these data centers as larger versions of the simple ones it built a dozen years ago: the new centers apparently were designed without provision for dense, highly utilized technology such as blade chassis, virtualization, hybrid terabyte disk drives, huge in-memory databases and software-defined switches. The NSA seems to have built these centers on an old data center model, with no strategic thinking about modernization or capacity planning. In the deployment phase, the Utah center was populated with vast amounts of the latest and densest gear, and as the center grew, the agency simply brought in more power to accommodate the unexpected demand. The ill-defined power infrastructure literally melted and flamed as various loads were applied.
Data centers today are highly dense and dynamic in nature. The amount of processing demand, the location of that demand within the data center and the physically deployed technologies all change over time. Whereas a rack in 2002 may have consumed just two kilowatts, its modern dense equivalent in 2014 may consume ten to fifteen times that amount when fully utilized. To make matters worse for poorly designed data centers, processing capacity is not directly reflected by the number of physical devices installed; it may vary dramatically throughout each day. What has happened over the last ten years is a separation of capacity from control. Virtualization does this for servers, and software-defined techniques do it for storage and networks. This means the link between physical assets and their business value changes over time. Because processing, storage and networks are virtualized, hardware can be refreshed or retired at will, without impacting applications. Hardware is simply added or removed, and these remarkable capacity-abstraction technologies handle the re-provisioning and re-initialization of new devices, bringing them into service quickly. But the same abstraction capabilities can wreak havoc on a data center designed for a simpler model of static computing, in which capacity and control were tightly connected. This is exactly what got the NSA into trouble, and it is still forcing rolling outages and overall capacity reductions. The most recent estimate to fix the data center resource issues at the Utah facility exceeds $100 million.
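To make that power math concrete, here is a minimal back-of-the-envelope sketch in Python. Only the two-kilowatt legacy rack and the ten-to-fifteen-times multiplier come from the figures above; the 500-rack floor is a hypothetical assumption for illustration.

```python
# Back-of-the-envelope rack power comparison (illustrative figures only).

LEGACY_RACK_KW = 2.0            # typical fully loaded rack circa 2002
DENSITY_MULTIPLIERS = (10, 15)  # a 2014 dense rack may draw 10-15x that
RACK_COUNT = 500                # hypothetical floor size, purely for illustration

# What a 2002-style design would provision for this floor.
provisioned_kw = RACK_COUNT * LEGACY_RACK_KW

for mult in DENSITY_MULTIPLIERS:
    demand_kw = RACK_COUNT * LEGACY_RACK_KW * mult
    shortfall_kw = demand_kw - provisioned_kw
    print(f"{mult}x density: demand {demand_kw:,.0f} kW vs "
          f"provisioned {provisioned_kw:,.0f} kW "
          f"(shortfall {shortfall_kw:,.0f} kW)")
```

Even this crude model shows a legacy design falling short by megawatts once the floor fills with dense gear, which is exactly the kind of mismatch that melts under-provisioned power infrastructure.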
The NSA is just one highly publicized example of the need to actively plan and manage data centers according to modern best practices, from inception and throughout their long lifespans. Not only do the devices housed within a data center have practical resource requirements; each also has its own lifecycle and value over time. These lifecycles demand change if the center is to remain cost-effective. Abstraction allows that change to happen easily, but tools to manage it must be deployed so that data center operators know what to expect, what to do next, and what the impacts might be. As stewards of publicly funded projects, the NSA and all other government agencies would be well served to look at the hyper-scale architectures used by their commercial counterparts, consider which pieces of those designs make sense to incorporate, evaluate what new tools could better reflect the physical infrastructure lifecycle, and then plan capacity all the way from the cement up.
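As a rough illustration of what such lifecycle tooling tracks, here is a minimal sketch, assuming a hypothetical asset record with an install date, a service life and a rated power draw; the fields, names and dates below are invented for illustration and do not reflect any particular product's schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    """Hypothetical asset record; fields are illustrative, not a vendor schema."""
    name: str
    installed: date
    service_life_years: int
    rated_kw: float

    def due_for_refresh(self, today: date) -> bool:
        # An asset past its planned service life is a refresh candidate.
        age_years = (today - self.installed).days / 365.25
        return age_years >= self.service_life_years

# A tiny invented fleet, checked against a 2014 planning date.
fleet = [
    Asset("blade-chassis-01", date(2008, 6, 1), 5, 8.0),
    Asset("storage-array-02", date(2012, 3, 15), 7, 4.5),
]

today = date(2014, 1, 1)
for asset in fleet:
    status = "refresh" if asset.due_for_refresh(today) else "in service"
    print(f"{asset.name}: {status}, draws up to {asset.rated_kw} kW")
```

A real deployment would track far more (location, power chain, maintenance contracts), but even this skeleton shows how operators can know what to retire next and what power headroom a change frees up.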