Posted by: Matt Stansberry
Capacity Planning, Data Center, IT Asset Management
I recently spoke with Kenneth Gonzalez, leader of Symantec’s Data Center Transformation Services team, about how data center managers can get rogue business units to give up their crappy old servers and how to make data center costs explicit to internal end users. This is an excerpt of that conversation.
In a recent data center paper you produced, you talk about decommissioning legacy servers to optimize data centers. So how do you get rid of them? Experts estimate that up to 30% of the servers in a given data center aren’t doing any work. It seems like business units like to hang onto these things, putting them under desks, in the Test-Dev lab, or in closets.
Ken Gonzalez: Asset management is a huge challenge for IT. It ends up being a very manual exercise because organizations grow by accretion. Going back to find what you have is so overwhelming that very few organizations ever start. The scope is too huge. It requires rigor and operating practices that a lot of organizations aren’t willing to take on with a vengeance.
This is an asset management problem. IT needs to know where assets are going when they unplug them. Are they being sent to an organization that will responsibly dispose of the equipment? IT managers that don’t track this could be cutting their own throats. You should be able to bring in a more space-saving, energy-efficient asset in its place. The IT team needs to be responsible for maintaining positive control over the assets under its charge.
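As a rough illustration of the "positive control" idea Gonzalez describes (this is a hypothetical sketch, not anything from Symantec's services; all names and fields are invented), an asset record could tie decommissioning to a recorded disposal vendor, so nothing is retired without knowing where the hardware went:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Asset:
    """One tracked piece of hardware and where it currently lives."""
    asset_id: str
    location: str                          # rack, closet, or "under a desk"
    status: str = "active"                 # "active" | "decommissioned"
    disposal_vendor: Optional[str] = None  # who took the hardware away
    decommissioned_on: Optional[date] = None

def decommission(asset: Asset, disposal_vendor: str) -> Asset:
    """Retire an asset only when a responsible disposal vendor is recorded."""
    if not disposal_vendor:
        raise ValueError("cannot decommission without a disposal vendor")
    asset.status = "decommissioned"
    asset.disposal_vendor = disposal_vendor
    asset.decommissioned_on = date.today()
    return asset

# A legacy server found in the Test-Dev lab gets retired with a paper trail.
old_server = Asset(asset_id="SRV-0042", location="Test-Dev lab")
decommission(old_server, disposal_vendor="CertifiedRecyclerCo")
print(old_server.status)  # decommissioned
```

The point of the guard clause is the process, not the code: retirement is only allowed once accountability for disposal exists, which is what keeps the freed rack space and power budget real.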
In order to get people to change behavior, it often boils down to money: showing users how much it costs to deliver an IT service. Some folks, like The Uptime Institute and Vernon Turner at IDC, have recommended chargeback. Does that work in these situations?
Gonzalez: Chargeback is one model, but a lot of organizations are against trying it. Many don’t know how to price it. Applications aren’t one-size-fits-all: some applications don’t do a whole lot but use a lot of resources. Organizations are reluctant to come up with a cost model.
The notion of a service catalogue is pretty popular: being able to charge what it costs to deliver a service. The intent of the service catalogue is to clearly communicate to your customer what services you can provide and the most effective way for you to deliver them. You produce a standard profile of the services you offer. If a customer needs something that doesn’t fit the standard catalogue, you have to go through someone to see what resources the project will take. You expose that detail to the customer, and there is a forecasting and capacity-planning benefit that comes with the approach.
If the demand for power and computing resources continues to outstrip IT’s ability to provide capacity in a cost effective way, are companies going to turn to cloud computing and other outsourced options?
Gonzalez: I think that is an important component that private organizations are going to have to confront at some point. Some services are going to have to move into the cloud, either as software as a service (SaaS) or infrastructure as a service. Whether a company moves an application into the cloud will primarily revolve around the criticality of the services, the security of the data, and how long it would take to recover if something happened. The issues that need to be worked through now are business issues. Dealing with the technical issues first is putting the cart before the horse.
Right now we’re just getting an initial level of awareness. You could call it “Utility Computing Part 2.”
Do you have a data center question or comment for Ken? Leave a comment.