Bricks (standard rack-mount servers) are a known entity. You’ve been buying and deploying them forever. You are comfortable with bricks.
Blades (blade servers) are cool. They’re small. They look really neat in your rack. Your vendor really wants you to buy blades.
The question is, which platform is right for your virtual infrastructure? I’m here to say that, for the majority of environments, blades are the right answer. Why’s that, you ask? Well, it’s really pretty simple. Virtualization is all about simplifying your environment. It’s about having consistency in platform and in process/procedure. It’s about rapid provisioning and rapid recovery from failures. It’s all about commoditizing your IT infrastructure to enhance support to your line of business applications.
There has been a lot of mud slinging and FUD raising among virtualization vendors lately as the quest to rule the virtualization space continues.
One vendor will release information about its product, comparing performance, pricing or features to another vendor’s, with the other vendor firing back with its own response shortly thereafter. With all this going on, whom should you believe if you are in the market to adopt a virtualization solution in your own environment?
Comparisons by the vendors themselves are always biased; after all, they want you to buy their product and not a competitor’s. Performance comparisons between vendors — even those run by third parties — don’t always tell the whole story and can be difficult to interpret.
I’ve been involved in many data center virtualization projects and one thing that continues to amaze me is that, far too often, the senior management team has no way of knowing if the project is a success. Oh sure, the manager of the data center is thrilled! He’s got only 10% of the systems he had before the project began — but what does that mean to the CxO? Absolutely nothing!
The CxO doesn’t care how many servers are in the data center. He doesn’t care if those servers are running below 10% utilization. He couldn’t care less that there are now only 500 network cables rather than the original 15,000. He doesn’t even really care that the cost of power and cooling has been slashed by 75%. In the grand scheme of things, all those savings amount to a few decimal points in the overall corporate budget. If you want to get your CxO excited, you need to demonstrate that you’re favorably impacting something that he does care about — his line of business applications.
VMware and its supporters have made like Mr. Burns and released the hounds on Microsoft.
The object of their ire is the “Top 10 VMware Myths” video, which features two Microsoft execs trying (sometimes a little too hard) to show why Hyper-V is better than VMware. Viewers on Microsoft’s own site called the video “embarrassing” and said “we deserve better than this,” but that criticism pales in comparison to reaction on other blogs.
The award for Most Thorough Response goes to VMware’s Mark Chuang, who posted a 2,500-word rebuttal on the VMware Virtual Reality blog. Check out some of these zingers:
Microsoft has taken its virtualization marketing push and kicked it up a notch — BAM! — but it may be doing more harm than good.
The company’s new “Microsoft Mythbusters: Top 10 VMware Myths” video aims to dispel VMware’s claims about how good its own products are and how much Hyper-V stinks up the joint. Some viewers see it differently, however. They say Microsoft is in no position to talk and that smugness-tinged videos like this one are “embarrassing.”
I have to admit that I have been less than kind when it comes to Oracle’s virtualization software and licensing policies; I’ve written articles about its stubborn refusal to support customers who use VMware, user frustrations with its licensing policies, and its unsubstantiated performance claims about Oracle VM being three times faster than other server virtualization software.
But the newly released Oracle Enterprise Manager 10g Release 5 (10gR5) includes a VM Management Pack for Oracle VM that gives users competitive features like high availability, lifecycle automation and application relationship management, making Oracle VM a more attractive virtualization option.
Oracle’s Xen-based hypervisor runs on x86-64 Intel- and AMD-based systems and can support any operating system that runs on those platforms. Oracle officially certifies Linux and Microsoft Windows as guest OSes. The Oracle VM management tool, Oracle VM Manager, is a Web-based interface that manages virtual server pools and performs tasks like live migrations.
The Oracle VM Management Pack 10gR5 gives users a way to manage their physical and virtual environments from one console. Features include diagnostics that pinpoint whether a problem stems from an application component, a virtual machine or a physical resource, as well as built-in configuration management that lets IT track application relationships and analyze configuration changes.
The new management pack also lets you assign virtualization-specific policies and automate deployment through Oracle VM Templates for packaged applications, middleware, databases and Oracle Enterprise Linux. There are also lifecycle automation features for testing, deployment, patching and maintenance, including automated patching of operating systems and Oracle software running inside guest VMs.
Lastly, Oracle VM users can now get high availability with new features that allow for server pooling, automatic load balancing and server failover. Many analysts say high availability is an absolute necessity when it comes to virtualization, so it’s great that Oracle decided to add that feature.
Perhaps I should start considering Oracle VM a contender in the virtualization market, especially against Microsoft Hyper-V, which doesn’t even have live migration yet. Or maybe not. Either way, it is an option.
If you run a ton of Oracle apps and you want to give Oracle VM a shot, it is free to download from Oracle’s website. Oracle VM support costs $599 per two-socket server and includes access to software and updates through the Unbreakable Linux Network plus 24×7 support. Oracle VM Premier Support costs $1,797 per two sockets for three years, and includes network access plus 24×7 support.
Ever since I learned last summer that VMware had leased a massive 100,000 square feet of data center space along the Columbia River in East Wenatchee, Wash., I’ve assumed that the company would eventually become a cloud computing provider, and not just a provider of the underlying cloud infrastructure software. Turns out I was wrong.
At a lovely dinner for press and analysts at L’ecrin in Cannes tonight, I had the pleasure of speaking with VMware’s CIO Tayloe Stansbury, who assured me that the company has no designs on becoming a cloud provider. Not that they didn’t consider it. But the more they thought about it, the more they came to the conclusion that it would be reckless to compete with their partners (folks like Savvis, T-Systems and Terremark), Stansbury said.
So what are they doing with that ginormous data center in East Wenatchee? Testing and development, pure and simple. “It’s one of the ironies of being VMware that we have to develop our code on physical hardware,” Stansbury said. To ensure that ESX can effectively virtualize workloads on any x86 system, VMware’s R&D must test the code against every conceivable server and storage platform. These days, all that hardware is being shipped to East Wenatchee, where it runs on electricity that costs two cents per kilowatt hour, versus the 20 to 30 cents per kilowatt hour VMware pays for power in places like Palo Alto and Cambridge, Mass.
So I stand corrected (it won’t be the first time). Going forward, I hope VMware decides to share further details about its new data center.
This week’s launch of RNA Networks and its memory virtualization technology may not mean much for VMware administrators yet, but give it a couple years, and the technology could have broad implications for how you buy and configure your virtual host servers.
The idea behind RNA’s product, RNAmessenger, is to decouple the memory in a server and put it in a resource pool that several machines can draw on in times of need. The technology consists of a driver that gets installed in a server, plus control software that runs on an appliance.
For now, RNA is targeting applications like hedge fund programmed trading, 3D rendering and oil and gas reservoir modeling — classic high-performance computing (HPC) applications with high-volume, low-latency requirements. But fast forward a couple of years, and another possible use case for the technology is to distribute memory across hosts in a virtual cluster, said Frank Tycksen, RNA vice president of engineering.
Virtualization hosts, like a lot of high-performance platforms, tend to run out of memory long before they run out of CPU power, he explained. But memory, unfortunately, tends to be prohibitively expensive. Thus, rather than buy additional memory, wouldn’t it be preferable if you could tap into the excess memory of another host in your resource pool?
That could alleviate some of the pressure to purchase servers stuffed to the brim with expensive RAM. “Our goal is to help you become CPU-bound, rather than memory-bound,” said Tycksen.
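To make the pooled-memory idea concrete, here is a toy sketch of the concept Tycksen describes: hosts with spare RAM donate it to a shared pool, and a memory-starved host borrows from that pool instead of failing an allocation. All class and method names here are illustrative assumptions for this sketch — this is not RNA’s actual API, and a real implementation would serve borrowed pages over the network at higher latency than local RAM.

```python
# Toy model of memory pooling across hosts. Illustrative only; not RNA's API.

class Host:
    def __init__(self, name, local_gb):
        self.name = name
        self.local_gb = local_gb   # free local RAM, in GB
        self.borrowed_gb = 0       # RAM currently borrowed from the pool

class MemoryPool:
    """Aggregates spare RAM that hosts have donated to a shared pool."""
    def __init__(self):
        self.available_gb = 0

    def donate(self, host, gb):
        """A host with headroom lends some of its RAM to the pool."""
        host.local_gb -= gb
        self.available_gb += gb

    def allocate(self, host, gb):
        """Satisfy an allocation from local RAM first, then the pool."""
        if host.local_gb >= gb:
            host.local_gb -= gb
            return "local"
        if self.available_gb >= gb:
            self.available_gb -= gb
            host.borrowed_gb += gb
            return "pooled"    # in reality: remote memory, slower than local
        raise MemoryError(f"{host.name}: no local or pooled RAM for {gb} GB")

pool = MemoryPool()
idle = Host("idle-host", local_gb=64)
busy = Host("busy-host", local_gb=4)

pool.donate(idle, 32)                 # idle host lends 32 GB to the pool
print(pool.allocate(busy, 2))         # fits in local RAM -> "local"
print(pool.allocate(busy, 16))        # exceeds local RAM -> "pooled"
```

The point of the sketch is the last line: an allocation that would otherwise force you to buy a bigger box is instead satisfied from another host’s idle RAM, which is exactly the “CPU-bound rather than memory-bound” goal Tycksen describes.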
But interested parties should not hold their breath; “this is future technology,” warned Andy Mallinger, RNA’s vice president of marketing.
You may have noticed by now that our SearchServerVirtualization blog has a different look and feel. That’s because we’ve migrated it to IT Knowledge Exchange (ITKE), a TechTarget IT user forum with a bunch of features that weren’t available to us before. For a quick rundown of ITKE’s features, I’ve invited Brent Sheets, TechTarget’s ITKE community manager, to describe it. So without further ado, here’s Brent:
Welcome to our new blog location on IT Knowledge Exchange.
I’d like to take a moment to introduce you to some of our new blog features and also some of the features on ITKE.
Instead of a long list of categories, we now have a tag cloud. Click any topic in the tag cloud, and you’ll see only posts on that topic. The tag cloud is dynamic, so the more a tag is used, the larger and darker it will appear. This helps you quickly see the most popular topics.
You’ll also notice we’ve integrated more of our related editorial content in the right-hand sidebar. If you’re on a post about a specific topic and wish to know more after reading the post, be sure to browse the links in the sidebar.
We always appreciate your sharing our content on social networking sites and we’ve increased the number of bookmarking tools from four to 43. If you enjoy a post, please be sure to share.
Near the top of the page, you’ll see a row of tabs. You can click the IT Blogs tab to find dozens of technology blogs, both user-generated blogs and TechTarget editorial blogs. You can even request your own blog.
There is also a tab labeled IT Answers, where you can ask your own IT question and have it seen by thousands of IT Knowledge Exchange members. So be sure to pose your own virtualization question, browse thousands of virtualization answers or help out a fellow IT pro by answering a question.
Thank you for stopping by, and be sure to bookmark our new blog location and visit the server virtualization section on IT Knowledge Exchange.
Last week, Citrix Systems discussed Project Independence and its plan to develop a Xen bare metal client hypervisor for Intel’s Centrino and Core 2 Duo chips, the same chips that power the world’s desktops and laptops. Now, the company has announced that it is joining hands with venture capital firms Highland Capital Partners and Flybridge Capital Partners to take a minority stake in Virtual Computer of Westford, Mass.
You may remember reading in this blog about Virtual Computer, whose NxTop PC management suite relies on a — surprise! — Xen client hypervisor. But don’t think for a minute that Citrix is paying Virtual Computer to do its development dirty work. “We’re not doing the investment in VCI so that they can build our client hypervisor for Intel,” said Andy Cohen, Citrix senior director of strategic development. Rather, the investment has more to do with the relative dearth of Xen experts in this world. “There are only so many really smart Xen guys in the world,” Cohen said, and Virtual Computer’s CTO Alex Vasilevsky, formerly of Virtual Iron, is one of them. Citrix’s “Xen guys,” meanwhile, include its vice president of special products Ian Pratt and CTO Simon Crosby, both formerly of Cambridge University and XenSource. Thus, the focus of the investment will be on “getting some really smart guys around the table.”
But Dan McCall, Virtual Computer president and CEO, acknowledges that VCI has a wealth of expertise in building a hypervisor for the wild-and-woolly world of client computers. Unlike servers, “PCs are complicated devices,” McCall said, that support a bewildering number of graphics and network cards, USB devices and the like, “and all of these different chips and technologies need to be virtualized.” VCI’s job, therefore, “is to make sure that the [virtualized] PC runs as well as it possibly can.”
However, it’s “a little too soon to know” exactly which elements of the joint Citrix-Virtual Computer hypervisor will go back into the open source Xen hypervisor, and which will stay proprietary, said Citrix’s Cohen. “There are a number of strategic questions about what goes into the Xen open source hypervisor, and what part remains commercial,” Cohen said.
For its part, Virtual Computer hasn’t given up hope on its own NxTop PC management suite. “Our goal is to help Citrix get a ubiquitous Xen-based hypervisor out there,” McCall said. That done, “there’s a whole bunch of intellectual property that is uniquely ours,” he said — for example, NxTop’s provisioning and patching, integrated backup and persistent end-user personalization technologies.
The hypervisor itself is less important, McCall said. “As we built out the product, we always intended to be able to use other hypervisors. So far, we’ve used the iTunes/iPod model where you can control both ends of the user experience, but if someone else’s hypervisor comes around, we’ll plug into it.”