Looking for the heart to proceed with your stumbling virtualization plans? The brains to figure out how to deal with unchecked proliferation? The courage to tell your C-levels that no, Amazon EC2 isn’t the answer to all of life’s problems?
Then Interop was the right place to be this week, as vendors pitched their roadmaps and panelists promised access to a sane (or at least saner) way to transition from your current server architecture to a more scalable, agile, cloud-like infrastructure without the security risks or loss of control.
And while the focus was on achieving the benefits of the cloud, the strategies presented were often grounded in virtualization fundamentals, paving a more organic path on the way to the fabled Oz of IT.
The quest for cloudiness begins, in large part, because of virtualization’s limitations, at least as it is commonly implemented. Server virtualization was an early no-brainer thanks to drastic CapEx savings: Instead of buying three servers, each running at roughly 50% utilization at any given time, you could buy two and migrate the third, virtualized, onto whichever physical machine was under the least load at the time.
Or better yet, stuff 50 servers onto one high-end machine and save not only on server costs, but also on server room space, power, and cooling. The savings were immediate and tangible.
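The back-of-envelope math behind that consolidation pitch can be sketched in a few lines. All the figures here are illustrative assumptions, not numbers from the article:

```python
# Hypothetical consolidation math: 50 lightly loaded physical servers
# consolidated onto one large virtualization host. All prices and
# wattages below are assumed for illustration only.
physical_servers = 50
cost_per_server = 5_000     # assumed CapEx per commodity server, USD
watts_per_server = 400      # assumed average draw per physical box
big_host_cost = 60_000      # assumed cost of one high-end host, USD
big_host_watts = 2_500      # assumed draw of the consolidated host

capex_saved = physical_servers * cost_per_server - big_host_cost
power_saved_kw = (physical_servers * watts_per_server - big_host_watts) / 1000

print(f"CapEx saved: ${capex_saved:,}")         # $190,000
print(f"Power saved: {power_saved_kw:.1f} kW")  # 17.5 kW
```

Even with generous assumptions about the cost of the big host, the one-time hardware savings are large and easy to show a CFO, which is exactly why this first stage of virtualization sold itself.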
But the time spent managing these devices didn’t decrease alongside the original savings. So while some ongoing savings remained (reduced cooling and power, for example), virtualization stalled out in terms of the benefits it could offer. A virtualized environment also presented a host of new challenges that IT hadn’t run into before, including the difficulty of troubleshooting a server when you might not even know where it physically resides, or of monitoring machine-to-machine traffic effectively and efficiently.
“We were wildly successful, and we didn’t figure out all these problems,” said Jerry McCloud, director of marketing at VMware. “We [now] see all the issues, we know all the issues, and quite frankly, we have all the issues.”
Click your heels three times
Fortunately, even as virtualization was finding wide acceptance, top minds were working on ensuring that the new tools didn’t just pile new problems on top of the old ones.
Andi Mann, VP of marketing with CA and author of Visible Ops Private Cloud, said that companies need to work to evolve their virtualization activities along a sensible path.
1. Consolidation: The first stage of a virtualization strategy, in which hardware is consolidated using basic virtualization technologies. This is where you see the CapEx gains mentioned earlier.
2. Optimization: Start using infrastructure automation tools so that the system tunes itself periodically, enforcing best practices and sensible deployments throughout your virtualized infrastructure without manual intervention.
3. Orchestration: A deeper automation of a wider variety of tasks, including the ramp-up of new services and servers. This is where business agility benefits begin to kick in, cutting deployment times and costs while reducing human error.
4. Dynamic IT: Welcome to Oz, where new services are up and running in days and where IT can sit contentedly back, watching the glorious green uptime fruits of its labors.
Does this virtualization road map make sense to you? Do you see your virtualized efforts becoming more cloud-like, on purpose or by coincidence? Is this just more marketing buzz as entrenched players push back against the likes of Amazon? We’d love to hear your take, below in the comments or directly to me via e-mail.