Vendor Tech Talk

Nov 5, 2013  5:05 PM GMT

The Software-Defined Data Center and Data Center Information Management



Posted by: Nlyte Software
Tags: Data Center, DCIM

By Mark Harris, vice president of marketing and data center strategy at Nlyte Software

Ours is a remarkable, interconnected world, where mobile devices are now more plentiful than people and the expectation is that anyone can access any information at any time. The concept of instant gratification has never been so pronounced. And this isn’t limited to our personal lives; it extends to business just as much. Inside most corporations, remotely accessed applications are now the key to running the business, so the demands on the data center, and on the company’s use of cloud services, are growing rapidly. Much of this capacity growth is being addressed through the dynamic abstraction of computing inside data centers, in the cloud, or in any combination of these services.

Together, these services have become a critical component of any company’s fiscal livelihood. In short, our lives are being quickly transformed by access to information at any time, night or day, through mobile portals, which are driven by a great deal of back-end technology that is itself transforming to account for dynamic capacity and its underlying cost structures. While the front-end portal devices are becoming ubiquitous and highly available, when the tightly managed back-end services that support them falter, business stops.

The current trend towards addressing this need for robust, dynamic capacity is to virtualize the data center infrastructure across the server, storage and networking domains and to span private and public clouds at the same time. This creates what is commonly referred to today as a Software-Defined Data Center (SDDC). An SDDC allows capacity to be added or removed without the knowledge of the users or applications. These dynamic data centers provide computing as a utility rather than as a rigid structure, and in fact each service may be delivered differently from moment to moment.
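To make that abstraction concrete, here is a minimal sketch, in Python, of the capacity-pool idea behind an SDDC: applications consume from an aggregate pool, and hosts can join or leave that pool without the consumers ever knowing. The class, host names and core counts are illustrative assumptions, not any particular product’s interface.

```python
class CapacityPool:
    """A toy pool of compute capacity hidden behind a single aggregate view."""

    def __init__(self):
        self.hosts = {}                      # host name -> cores contributed

    def add_host(self, name, cores):
        self.hosts[name] = cores             # capacity grows transparently

    def remove_host(self, name):
        self.hosts.pop(name, None)           # capacity shrinks transparently

    @property
    def total_cores(self):
        return sum(self.hosts.values())


pool = CapacityPool()
pool.add_host("esx-01", 32)
pool.add_host("esx-02", 32)
print(pool.total_cores)    # 64: applications only ever see the aggregate
pool.remove_host("esx-01")
print(pool.total_cores)    # 32: the change is invisible to the workload layer
```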

The good news is that this abstraction, done properly, drives your overall computing costs down. For instance, with each virtualized server (guest) instantiated on a physical (host) server, that host provides additional application computing capacity with no need to purchase a new piece of hardware. With virtualization, the devices themselves are utilized at a much higher rate than previously seen. The bad news? In an increasingly virtualized data center, your success is even more susceptible to problems that arise from a lack of visibility into physical devices and from the complexity of power and cooling load fluctuations.
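As a rough illustration of the consolidation math, the sketch below compares a fleet of dedicated, lightly used servers with the same workloads packed as guests onto shared hosts. All of the figures are hypothetical planning assumptions chosen for the example.

```python
# Hypothetical consolidation math: workloads that each sat on a dedicated,
# mostly idle server are packed as guests onto shared hosts.
WORKLOADS = 100                  # applications, each formerly on its own server
AVG_CORES_USED = 2               # average demand of one workload, in cores
HOST_CORES = 32                  # capacity of one physical host
TARGET_UTILIZATION = 0.70        # planning headroom left on each host

usable_cores_per_host = HOST_CORES * TARGET_UTILIZATION
guests_per_host = int(usable_cores_per_host // AVG_CORES_USED)
hosts_needed = -(-WORKLOADS // guests_per_host)          # ceiling division

before_util = AVG_CORES_USED / HOST_CORES                # one workload per server
after_util = (WORKLOADS * AVG_CORES_USED) / (hosts_needed * HOST_CORES)

print(f"Before: {WORKLOADS} servers at {before_util:.0%} average utilization")
print(f"After:  {hosts_needed} hosts at {after_util:.0%} average utilization")
```

With these assumed numbers, one hundred dedicated servers idling at about 6 percent become ten hosts running at roughly 60 percent, which is the cost reduction the abstraction makes possible.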

Put on your structural engineering “hard hat” and ask yourself this about your current data center: is it built on a solid, scalable, well-understood and well-managed hardware foundation, or is it instead vulnerable, resting precariously on the assumption that there is enough physical capacity to handle whatever loads are placed on it at any point? Remember that historically, when data center services failed, most users could still continue the majority of their work, since their local devices contained significant computing capability of their own. In the traditional computing model of years past, data center failures were inconvenient but not catastrophic. With the new paradigm of always-on, connected portal access to back-end computing services, data center failures stop business. Read on to understand more of what can be done to assure that back-end services remain available.

Today’s Data Center Challenge
Let’s consider what’s now occurring within the data center environment. As we know, the demands upon business applications and their information access are growing dramatically, while the actual floor space available for computing is not. At the same time, the economics of computing are driving the need to reduce all costs. As a result, data centers are being updated with much higher-capacity, higher-density equipment. As more servers are “crammed” into a rack, each rack draws more power and thus generates more heat, which requires more cooling (and even more power) per square foot. Virtualization is layered on top of this highly dense structure. In roughly half of the data centers today, virtualization is being used to push the utilization of this dense hardware to unprecedented levels. All of this is driving the need for a well-managed and actively planned data center infrastructure. Devices need to be placed in service quickly, maintained accurately, and then decommissioned when their value declines. It’s really about lifecycle management. There is simply no room for low-performance or aging equipment in this new high-density structure.
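A back-of-envelope calculation shows why density compounds the power and cooling problem: essentially every watt of IT load becomes heat that the cooling plant must remove. The rack build-out, per-server draw and footprint below are assumed values for illustration only.

```python
SERVERS_PER_RACK = 40            # dense 1U build-out (assumed)
WATTS_PER_SERVER = 450           # average draw under load (assumed)
RACK_FOOTPRINT_SQFT = 30         # rack plus its share of aisle space (assumed)
BTU_PER_HOUR_PER_KW = 3412       # 1 kW of IT load produces about 3,412 BTU/hr of heat

rack_kw = SERVERS_PER_RACK * WATTS_PER_SERVER / 1000
heat_btu_per_hour = rack_kw * BTU_PER_HOUR_PER_KW
watts_per_sqft = rack_kw * 1000 / RACK_FOOTPRINT_SQFT

print(f"IT load per rack: {rack_kw:.1f} kW")
print(f"Heat to remove:   {heat_btu_per_hour:,.0f} BTU/hr per rack")
print(f"Power density:    {watts_per_sqft:.0f} W per square foot")
```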

Are you managing the lifecycle of your data center assets, or are they sitting ghost-like, taking up precious space and power? How accurately are you able to plan and forecast your data center’s capacity? Are you executing fiscal asset planning that takes into account capital depreciation cycles and the resulting opportunity for technology refreshes? Do you have repeatable processes and operations to consistently execute all of the above? Do you know how much any compute transaction costs your business today, and will cost tomorrow? In the abstracted data center, where failure can paralyze business, these questions demand your attention, and every data center manager and operator needs to consider whether their core foundations are ready for the transformations now underway.
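On the last question, here is one hedged way to think about cost per compute transaction: amortize a server’s capital cost over its depreciation cycle, add its power cost, and divide by the work it performs. Every input below is a hypothetical assumption; your own figures will differ.

```python
SERVER_PRICE = 8000.0            # capital cost of one server (assumed)
DEPRECIATION_YEARS = 3           # straight-line refresh cycle (assumed)
AVG_POWER_KW = 0.45              # average draw including cooling overhead (assumed)
POWER_COST_PER_KWH = 0.12        # utility rate (assumed)
TRANSACTIONS_PER_SECOND = 200    # sustained application throughput (assumed)

HOURS_PER_YEAR = 24 * 365
annual_capital = SERVER_PRICE / DEPRECIATION_YEARS
annual_power = AVG_POWER_KW * HOURS_PER_YEAR * POWER_COST_PER_KWH
annual_transactions = TRANSACTIONS_PER_SECOND * 3600 * HOURS_PER_YEAR

cost_per_million = (annual_capital + annual_power) / annual_transactions * 1_000_000
print(f"Approximate cost per million transactions: ${cost_per_million:.2f}")
```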


The Challenges for SDDCs

The challenge for the owners and operators of Software-Defined Data Centers is that in today’s world, resources are finite. Long gone are the days when structures were overbuilt, oversized, overprovisioned and overcooled. In that world, data center capacity was a discussion about which active devices to choose; the underlying structure, because it was overbuilt, was effectively infinite. Enough headroom existed that new applications and new requirements would never come close to consuming all of the space, power and cooling available.

In the SDDC, abstractions exist across the board that allow work to be moved or migrated from place to place in real time. Instances of servers can be started or moved dynamically. While this dynamic capability sounds good at first, the consumption of resources underneath also changes, and it is precisely these resources that are no longer infinite. It is quite conceivable that the movement of workloads in a data center could trigger catastrophic failures associated with power and/or cooling overloads.

As abstraction takes hold, the need for active management of the physical layer grows. In short, the adoption of SDDC technologies requires the deployment of DCIM to assure that the physical, logical and virtual layers are coordinated.
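As a sketch of the kind of coordination DCIM data makes possible, the check below refuses a workload migration that would push a target rack past its power budget. The budget, headroom factor and readings are hypothetical; a real DCIM suite would supply these values from its own inventory and monitoring.

```python
RACK_POWER_BUDGET_KW = 12.0      # breaker/PDU limit for the target rack (assumed)
SAFETY_HEADROOM = 0.90           # never plan beyond 90% of the budget (assumed)

def migration_is_safe(current_rack_draw_kw, vm_expected_draw_kw):
    """Return True only if moving the workload keeps the target rack within budget."""
    projected = current_rack_draw_kw + vm_expected_draw_kw
    return projected <= RACK_POWER_BUDGET_KW * SAFETY_HEADROOM

# Example: the target rack already draws 10.2 kW and the incoming guest's share
# of its host is estimated at 0.9 kW, so this move would be rejected.
print(migration_is_safe(current_rack_draw_kw=10.2, vm_expected_draw_kw=0.9))   # False
```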

A Must-Have for the Software-Defined Data Center: DCIM
The solution for managing the foundation of your data center business is Data Center Infrastructure Management (DCIM), defined by Gartner as the integration of IT and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. It extends from the physical facilities up to and including the virtualized instances of capacity. DCIM is a purpose-built suite of software intended to manage the physical components of a data center, the devices, the resources and so on, over their full lifecycles. Today’s DCIM suites can be tightly integrated with the other management fabrics already in place. Most importantly, modern DCIM suites are strategic extensions to the corporate IT management fabric.
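To illustrate what lifecycle management of a physical asset can look like in data terms, here is a small, hypothetical sketch of an asset record that moves from planning through installation, service and decommissioning. The states and fields are assumptions for the example, not any vendor’s schema.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleState(Enum):
    PLANNED = "planned"
    INSTALLED = "installed"
    IN_SERVICE = "in service"
    DECOMMISSIONED = "decommissioned"

@dataclass
class Asset:
    asset_id: str
    model: str
    rack: str
    rack_unit: int
    rated_power_watts: int
    state: LifecycleState = LifecycleState.PLANNED

    def transition(self, new_state):
        """Record a lifecycle change so capacity plans stay in step with the floor."""
        self.state = new_state

server = Asset("SRV-0417", "2U compute node", rack="A-12", rack_unit=30, rated_power_watts=750)
server.transition(LifecycleState.INSTALLED)
server.transition(LifecycleState.IN_SERVICE)
print(server.asset_id, server.state.value)
```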

The current trend of virtualization is here to stay. When virtualization spans the server, networking and storage components, it creates a Software-Defined Data Center, and the SDDC must be built upon a solid foundation of actively managed resources. DCIM suites are the means to actively manage the physical aspects of the data center in the context of virtualization, assuring that the dynamic changes associated with abstraction are planned for. Regardless of the architecture deployed for virtualized servers, storage and networking, it is the physical infrastructure that supports your business, and without it, your business stops.
