April 6, 2009 2:32 PM
Posted by: Colin Steele
Tags: Microsoft Hyper-V
Microsoft has taken its virtualization marketing push and kicked it up a notch — BAM! — but it may be doing more harm than good.
The company’s new “Microsoft Mythbusters: Top 10 VMware Myths” video aims to dispel VMware’s claims about how good its own products are and how much Hyper-V stinks up the joint. Some viewers see it differently, however. They say Microsoft is in no position to talk and that smugness-tinged videos like this one are “embarrassing.”
March 4, 2009 7:57 PM
Posted by: Bridget Botelho
Tags: Oracle Enterprise Manager, Oracle VM, server virtualization
I have to admit that I have been less than kind when it comes to Oracle’s virtualization software and licensing policies; I’ve written articles about the company’s stubborn refusal to support customers who use VMware, user frustrations with its licensing policies, and its unsubstantiated performance claims that Oracle VM is three times faster than other server virtualization software.
But the newly released Oracle Enterprise Manager 10g Release 5 (10gR5) includes a VM Management Pack for Oracle VM that gives Oracle VM users competitive features like high availability, lifecycle automation and application relationship management, making it a more attractive virtualization option.
Oracle’s Xen-based hypervisor runs on x86-64 Intel- and AMD-based systems and can support any operating system that runs on those platforms; Oracle officially certifies Linux and Microsoft Windows as guest OSes. The Oracle VM management tool, Oracle VM Manager, is a Web-based interface that manages virtual server pools and performs tasks like live migration.
The Oracle VM Management Pack 10gR5 gives users a way to manage their physical and virtual environments from one console. Some features include diagnostics of whether a problem is due to an application component, a virtual machine or physical resource issue and built-in configuration management that gives IT a way to track application relationships and analyze configuration changes.
The new management pack also lets you assign specific policies for virtualization, automated deployment through Oracle VM Templates for packaged applications, middleware, database, and Oracle Enterprise Linux. There are also lifecycle automation features for testing, deployment, patching and maintenance capabilities, including automated patching of operating systems and Oracle software running inside the guest VMs.
Lastly, Oracle VM users can now get high availability with new features that allow for server pooling, automatic load balancing and server failover. Many analysts say high availability is an absolute necessity when it comes to virtualization, so it’s great that Oracle decided to add that feature.
Perhaps I should start considering Oracle VM a contender in the virtualization market, especially against Microsoft Hyper-V, which doesn’t even have live migration yet. Or maybe not. Either way, it is an option.
If you run a ton of Oracle apps and want to give Oracle VM a shot, it is free to download from Oracle’s website. Oracle VM support costs $599 per two-socket server and includes access to software and updates through the Unbreakable Linux Network plus 24×7 support. Oracle VM Premier Support costs $1,797 per two sockets for three years and includes network access plus 24×7 support.
February 25, 2009 2:14 AM
Posted by: Alex Barrett
Tags: Columbia River, data center, East Wenatchee, Tayloe Stansbury, VMworld Europe
Ever since I learned last summer that VMware had leased a massive 100,000 square feet of data center space along the Columbia River in East Wenatchee, Wash., I had assumed that the company would eventually become a cloud computing provider, not just a provider of the underlying cloud infrastructure software. Turns out I was wrong.
At a lovely dinner for press and analysts at L’ecrin in Cannes tonight, I had the pleasure of speaking with VMware’s CIO Tayloe Stansbury, who assured me that the company has no designs on becoming a cloud provider. Not that they didn’t consider it. But the more they thought about it, the more they came to the conclusion that it would be reckless to compete with their partners (folks like Savvis, T-Systems and Terremark), Stansbury said.
So what is VMware doing with that ginormous data center in East Wenatchee? Testing and development, pure and simple. “It’s one of the ironies of being VMware that we have to develop our code on physical hardware,” Stansbury said. To ensure that ESX can effectively virtualize workloads on any x86 system, VMware’s R&D teams must test the code against every conceivable server and storage platform. These days, all that hardware is being shipped to East Wenatchee, where it runs on electricity that costs two cents per kilowatt-hour, versus the 20 to 30 cents per kilowatt-hour VMware pays for power in places like Palo Alto, Calif., and Cambridge, Mass.
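For a sense of scale, here is a back-of-the-envelope comparison. The per-kilowatt-hour rates come from Stansbury's remarks; the 1 MW constant load is a purely hypothetical figure for illustration.

```python
# Back-of-the-envelope annual power cost at two electricity rates.
# The 1 MW load is hypothetical; the rates are from the article.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Annual electricity cost for a constant load at a flat rate."""
    return load_kw * HOURS_PER_YEAR * rate_per_kwh

load_kw = 1_000  # hypothetical 1 MW constant draw

wenatchee = annual_power_cost(load_kw, 0.02)  # $0.02/kWh
palo_alto = annual_power_cost(load_kw, 0.25)  # mid-range of $0.20-$0.30/kWh

print(f"East Wenatchee: ${wenatchee:,.0f}/yr")  # $175,200/yr
print(f"Palo Alto:      ${palo_alto:,.0f}/yr")  # $2,190,000/yr
```

At those rates, the same hypothetical load costs more than ten times as much in Palo Alto as it does in East Wenatchee.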
So I stand corrected (it won’t be the first time). Going forward, I hope VMware decides to share further details about its new data center.
February 4, 2009 6:17 PM
Posted by: Alex Barrett
Tags: RNA Networks
This week’s launch of RNA Networks and its memory virtualization technology may not mean much for VMware administrators yet, but give it a couple years, and the technology could have broad implications for how you buy and configure your virtual host servers.
The idea behind RNA’s product, RNAmessenger, is to decouple the memory in a server, and to put it in a resource pool that can be accessed by several machines in times of need. The technology consists of a driver that gets installed in a server, plus control software that runs on an appliance.
For now, RNA is targeting applications like hedge fund program trading, 3D rendering, and oil and gas reservoir modeling — classic high-performance computing (HPC) applications with high-volume, low-latency requirements. But fast-forward a couple of years, and another possible use case for the technology is to distribute memory across hosts in a virtual cluster, said Frank Tycksen, RNA vice president of engineering.
Virtualization hosts, like a lot of high-performance platforms, tend to run out of memory long before they run out of CPU power, he explained. But memory, unfortunately, tends to be prohibitively expensive. Thus, rather than buying additional memory, wouldn’t it be preferable to tap into the excess memory of another host in your resource pool?
That could alleviate some of the pressure to purchase servers stuffed to the brim with expensive RAM. “Our goal is to help you become CPU-bound, rather than memory-bound,” said Tycksen.
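In spirit, the idea looks something like the toy sketch below. This is my own conceptual illustration of a shared memory pool, not RNA's actual implementation, which operates at the driver level rather than as application code.

```python
# Toy illustration of pooled memory: hosts donate spare RAM to a
# shared pool, and a memory-bound host borrows from it instead of
# being upgraded. A conceptual sketch only, not RNA's design.

class MemoryPool:
    def __init__(self):
        self.available_mb = 0
        self.loans = {}  # host name -> MB currently borrowed

    def contribute(self, mb: int) -> None:
        """A host donates spare memory to the pool."""
        self.available_mb += mb

    def borrow(self, host: str, mb: int) -> bool:
        """A memory-bound host tries to borrow from the pool."""
        if mb > self.available_mb:
            return False
        self.available_mb -= mb
        self.loans[host] = self.loans.get(host, 0) + mb
        return True

    def release(self, host: str) -> None:
        """Return everything a host borrowed back to the pool."""
        self.available_mb += self.loans.pop(host, 0)

pool = MemoryPool()
pool.contribute(8_192)        # host A donates 8 GB of idle RAM
pool.borrow("host-b", 4_096)  # host B borrows 4 GB under load
print(pool.available_mb)      # 4096 MB still available in the pool
```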
But interested parties should not hold their breath; “this is future technology,” warned Andy Mallinger, RNA’s vice president of marketing.
February 3, 2009 2:33 AM
Posted by: Alex Barrett
You may have noticed by now that our SearchServerVirtualization blog has a different look and feel. That’s because we’ve migrated it to IT Knowledge Exchange (ITKE), a TechTarget IT user forum with a bunch of features that weren’t available to us before. For a quick rundown of ITKE’s features, I’ve invited Brent Sheets, TechTarget’s ITKE community manager, to describe it. So without further ado, here’s Brent:
Welcome to our new blog location on IT Knowledge Exchange.
I’d like to take a moment to introduce you to some of our new blog features and also some of the features on ITKE.
Instead of a long list of categories, we now have a tag cloud. Click any topic in the tag cloud, and you’ll see only posts on that topic. The tag cloud is dynamic, so the more a tag is used, the larger and darker it will appear. This helps you quickly see the most popular topics.
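Tag-cloud sizing is typically just a linear mapping from usage counts to font sizes; a minimal version (my own sketch, not ITKE's actual code) might look like this:

```python
# Minimal tag-cloud weighting: map each tag's usage count to a font
# size so heavily used tags render larger. A sketch, not ITKE's code.

def tag_sizes(counts: dict, min_px: int = 12, max_px: int = 32) -> dict:
    """Linearly scale tag counts into a font-size range (pixels)."""
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero if all counts match
    return {
        tag: round(min_px + (n - lo) / span * (max_px - min_px))
        for tag, n in counts.items()
    }

sizes = tag_sizes({"VMware": 40, "Hyper-V": 15, "Xen": 5})
print(sizes)  # VMware renders largest, Xen smallest
```

A real implementation would also map counts to a color ramp for the "darker" effect, but the principle is the same.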
You’ll also notice we’ve integrated more of our related editorial content in the right-hand sidebar. If you’re on a post about a specific topic and wish to know more after reading the post, be sure to browse the links in the sidebar.
We always appreciate your sharing our content on social networking sites and we’ve increased the number of bookmarking tools from four to 43. If you enjoy a post, please be sure to share.
Near the top of the page, you’ll see a row of tabs. You can click the IT Blogs tab to find dozens of technology blogs, both user generated and TechTarget editorial blogs. You can even request your own blog.
There is also a tab labeled IT Answers, where you can ask your own IT question and have it seen by thousands of IT Knowledge Exchange members. So be sure to pose your own virtualization question, browse thousands of virtualization answers or help out a fellow IT pro by answering a question.
Thank you for stopping by, and be sure to bookmark our new blog location and visit the server virtualization section on IT Knowledge Exchange.
January 26, 2009 2:38 PM
Posted by: Alex Barrett
Tags: client hypervisor, desktop virtualization, Virtual Computer
Last week, Citrix Systems discussed Project Independence and its plan to develop a Xen bare metal client hypervisor for Intel’s Centrino and Core 2 Duo chips, the same chips that power the world’s desktops and laptops. Now, the company has announced that it is joining hands with venture capital firms Highland Capital Partners and Flybridge Capital Partners to take a minority stake in Virtual Computer of Westford, Mass.
You may remember reading in this blog about Virtual Computer, whose NxTop PC management suite relies on a — surprise! — Xen client hypervisor. But don’t think for a minute that Citrix is paying Virtual Computer to do its development dirty work. “We’re not doing the investment in VCI so that they can build our client hypervisor for Intel,” said Andy Cohen, Citrix senior director of strategic development. Rather, the investment has more to do with the relative dearth of Xen experts. “There are only so many really smart Xen guys in the world,” Cohen said, and Virtual Computer’s CTO Alex Vasilevsky, formerly of Virtual Iron, is one of them. Citrix’s “Xen guys,” meanwhile, include vice president of special products Ian Pratt and CTO Simon Crosby, both formerly of Cambridge University and XenSource. Thus, the focus of the investment is on “getting some really smart guys around the table.”
Dan McCall, Virtual Computer president and CEO, acknowledged that VCI has a wealth of expertise in building a hypervisor for the wild-and-woolly world of client computers. Unlike servers, “PCs are complicated devices,” McCall said, that support a bewildering number of graphics and network cards, USB devices and the like, “and all of these different chips and technologies need to be virtualized.” VCI’s job, therefore, “is to make sure that the [virtualized] PC runs as well as it possibly can.”
However, it’s “a little too soon to know” exactly which elements of the joint Citrix-Virtual Computer hypervisor will go back into the open source Xen hypervisor and which will stay proprietary, said Citrix’s Cohen. “There are a number of strategic questions about what goes into the Xen open source hypervisor, and what part remains commercial,” he said.
For its part, Virtual Computer hasn’t given up on its own NxTop PC management suite. “Our goal is to help Citrix get a ubiquitous Xen-based hypervisor out there,” McCall said. That done, “there’s a whole bunch of intellectual property that is uniquely ours,” he said, pointing to NxTop’s provisioning and patching, integrated backup and persistent end-user personalization technologies.
The hypervisor itself is less important, McCall said. “As we built out the product, we always intended to be able to use other hypervisors. So far, we’ve used the iTunes/iPod model where you can control both ends of the user experience, but if someone else’s hypervisor comes around, we’ll plug in to it.”
January 16, 2009 4:07 PM
Posted by: Bridget Botelho
Tags: virtual machine, virtualization strategies
A small virtual appliance company in Portsmouth, N.H., called vKernel first grabbed my attention last year with its virtualization management software, and it has done so again with a new online virtualization community called Compare My VM.
The site gives users a way to anonymously compare their virtual machine (VM) configurations, by application category, with those of their peers to see how others are allocating resources and, ideally, to take something useful back to their own environments.
vKernel founder and CEO Alex Bakman came up with the Compare My VM idea to help IT pros learn from one another about allocating resources for specific application VMs.
“How to properly allocate resources in a virtual environment is still a trial and error process. Simply using the same allocations of a physical server when virtualizing it can quickly lead to resource capacity issues caused by either over or under allocations,” said vKernel’s communications director, Christian Simko. “Ultimately, users can come to the site to learn how to ‘right size’ VMs so that they can drive higher VM densities without impacting performance.”
Because Compare My VM is set up as a community site, visitors are more apt to share with and learn from their peers than to have a product vendor tell them how and what to do, Simko explained.
So far, Compare My VM has around 300 submissions. Users typically enter their VM info either because they think their VM setup is da bomb or because they need some help, which is why vKernel added a peer-to-peer ranking system to the site, Simko said.
“One person may think their setup for an MS SQL VM supporting X number of users is allocated just perfectly,” but it might not look so hot when viewed outside the four walls of that user’s data center. “We give others a chance to rank what they think is the right way, much like how blog sites give others the ability to rank stories,” Simko said.
As is vKernel’s style, the site is designed so that it is simple to navigate and submit information to, allowing users to find similar profiles and compare them.
“It is a tool to help admins learn, share and improve,” Simko said. “VKernel has only set up the framework of this site; we are not populating it or dictating how people should be doing things. It’s purely a community tool.”
I encourage you to check out the free CompareMyVM.com site and anonymously compare your VM resource allocation profiles with those of your peers. You will either feel pretty good about what you are doing or really bad – and in that case, you’ll probably learn something.
January 9, 2009 12:59 PM
Posted by: Alex Barrett
Tags: virtual desktop infrastructure
When it comes to the desktop, it’s clear that virtualization has a huge role to play. But is the desktop best served by VMware’s server-based virtual desktop infrastructure (VDI) model? Some people don’t think so.
At Virtual Computer, a new startup in Westford, Mass., the thinking is that for desktops, the virtualization layer belongs directly on the client, in the form of a bare-metal hypervisor. There, the hypervisor brings management benefits like simplified provisioning and patching of images, but without the mobility and performance limitations of VDI, said Doug Lane, Virtual Computer’s director of product marketing and management.
When VMware announced its intention this fall to deliver a client hypervisor for “offline VDI,” the company tacitly acknowledged VDI’s shortcomings, according to Lane. Meanwhile, VMware remains focused squarely on delivering the desktop from the server.
“With VMware, offline VDI is the niche case,” he said. But when Virtual Computer looks out at the enterprise, it sees a preponderance of laptops and thick clients. “Our model starts there, and we think that server-hosted desktops are the niche case.”
To that end, Virtual Computer is developing NxTop, a PC management suite whose name is pronounced “nextop.” It consists of a Xen-based bare-metal hypervisor called NxTop Engine, optimized for laptop-class hardware, that runs Windows virtual machines. Those VMs are managed by the NxTop Control console, from which administrators can configure and provision images, set up access and protection policies, and the like. NxTop is currently in beta and is scheduled to ship by the end of the first quarter of 2009.
Without planting a stake in the ground to validate one strategy over another, Gartner senior research analyst Terry Cosgrove agreed that there are several issues with hosted virtual desktops (Gartner-speak for VDI). “Hosted virtual desktops are an immature, adolescent technology” that won’t be ready for mainstream use for a number of years, he said. In the meantime, “there’s a place for alternative architectures to achieve the same thing – centralized management and control, but that gives users some autonomy.”
Cosgrove also said that several stealth-mode startups working on VDI alternatives will emerge over the next couple of months. There is also speculation that Microsoft and/or Citrix are developing client hypervisors of their own, and questions about which tack laptop OEMs like Dell and Lenovo will promote. One thing is clear, though: With laptop sales now exceeding desktop sales, those OEMs “are highly motivated to have a solution that will not prohibit the sales of laptops,” Cosgrove said.
December 12, 2008 5:23 PM
Posted by: Alex Barrett
This week, when VMware announced a partnership with Hewlett-Packard to integrate its Lab Manager with HP’s Business Technology Optimization software (more specifically, HP Operations Orchestration), it showed that the company realizes it’s not an all-virtual world — yet — and that there are large pockets of physical systems not under its direct control.
“Lab Manager is a great tool from the point that you already have a physical box with ESX, storage and networking installed,” said Bogomil Balkansky, the senior director of product marketing at VMware. “From there, developers can self-deploy all these virtual configurations. But without that, Lab Manager can do nothing for you.”
To that end, the integration between VMware Lab Manager and HP’s orchestration software aims to offer “one seamless process to do all this [provisioning] from the same place,” he said, enabling the provisioning of bare metal, in addition to virtual, resources.
The target market for the Lab Manager/HP Orchestration suite, to be delivered sometime in 2009, will be the same as the target market for Lab Manager today, namely large independent software vendors (ISVs) and “nonsoftware companies that nevertheless develop a lot of software in-house, for example, telcos and banks,” Balkansky said.
VMware also plans to OEM HP’s Discovery and Dependency Mapping (DDM) Inventory software for use in a new VMware product to be announced in 2009.
The HP deal marks the third time in four months that VMware has partnered with one of the big four systems management companies (HP, BMC, CA and IBM). In September, BMC and VMware said they would collaborate on integrating VMware’s Lifecycle Manager with BMC’s Atrium Orchestrator (formerly Run Book Automation) and Remedy IT Service Management, such that joint customers could make change requests or initiate automation processes from either Lifecycle Manager or BMC products. Then, just last month, CA announced that it would OEM and resell VMware’s Stage Manager as part of its Data Center Automation suite.
“A core tenet of our virtualization management strategy is to integrate our products with the larger systems management offerings,” Balkansky said. That approach should appeal to “larger companies that aspire to a single pane of glass” while at the same time giving them the benefit of “the feature-rich products our tools provide,” he said.
This all seems logical enough, but one question I have is whether there is customer demand for these integrations. Frequently, such product integrations are the result of customers clamoring for them, but at least in the case of the HP/VMware partnership, a request for a customer reference came up short. “The idea of a single pane of glass resonates very well,” Balkansky said, “but honestly we haven’t solicited quotes and validation given that the integration hasn’t happened yet.”