Posted by: Alex Barrett
Microsoft, Uncategorized, Virtualization, Virtualization platforms
I just returned from Microsoft’s WinHEC conference in Los Angeles, where I traveled in hopes of getting some juicy virtualization news. Alas, Microsoft dropped its virtualization bombshell last week – that it will eliminate key features from the Windows Server Virtualization beta that will ship with Longhorn — so no news was to be had, per se.
But even if Microsoft doesn’t have much to show in the way of a shipping, competitive hypervisor, one thing I came away with is that the company is clearly paying extremely close attention to virtualization, and has its best minds on the job.
Speaking in a session called Virtualization Technology Directions, Mike Neil, Microsoft general manager in the Windows Server Group, said virtualization stands to be one of the key “scale-up” apps to run on a next generation of servers featuring multi-core processors, and thanks to 64-bit operating systems, terabytes of memory.
One positive aspect of being late to market with Viridian (the codename for Windows Server Virtualization) is that Microsoft can develop it to take advantage of hardware virtualization assist technologies going into chips — CPU virtualization in Intel VT and AMD-V, but also newer features like MMU virtualization, DMA remapping, interrupt handling and I/O virtualization. “The software layer for virtualization doesn’t go away, but we do see the hardware taking on some capabilities that will make the virtualization layer thinner — and hopefully more robust as well,” Neil said.
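As an aside, if you’re curious whether your own hardware exposes these CPU virtualization extensions, on Linux the kernel advertises them as the `vmx` (Intel VT) and `svm` (AMD-V) feature flags in `/proc/cpuinfo`. Here’s a minimal sketch of a checker — my own illustration, not anything from Neil’s talk, and the `virt_extension` / `host_virt_extension` names are mine:

```python
from typing import Optional

def virt_extension(flags: str) -> Optional[str]:
    """Given a space-separated CPU flags string, return which hardware
    virtualization extension it advertises, if any."""
    flag_set = set(flags.split())
    if "vmx" in flag_set:   # Intel VT-x
        return "Intel VT"
    if "svm" in flag_set:   # AMD Secure Virtual Machine
        return "AMD-V"
    return None

def host_virt_extension(path: str = "/proc/cpuinfo") -> Optional[str]:
    """Check the running host (Linux only): scan the first 'flags' line."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return virt_extension(line.split(":", 1)[1])
    return None

if __name__ == "__main__":
    print(host_virt_extension() or "no hardware virtualization assist detected")
```

The flag only tells you the silicon supports the extension; firmware can still disable it, which is one more reason the hypervisor can’t assume hardware assist is always there.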
Neil, as a former employee of Connectix, the first virtualization company Microsoft acquired, recalled how “we didn’t have any hardware assist, and spent a lot of our time working around the limitations of x86 that made it very difficult to virtualize.” VMware, I’m sure, was in the same boat.
Taking advantage of hardware assist capabilities is what Microsoft calls “enlightenment,” which Neil further described as “an intimate arrangement between the kernel and the hypervisor.” But don’t think Microsoft wants to invite everyone into this cozy party. “We’re not trying to drive this as a standard,” Neil said. Other companies, notably VMware, are espousing paravirt-ops, but not Microsoft.
That’s not to say that hardware assist solves everything — it doesn’t, not by a long shot. Some of the challenges Neil called out include the shift to network storage, increased data rates and ever-expanding storage requirements. When it comes to memory and I/O bandwidth, “there are changes that are going to need to occur,” Neil said. Today, “the straw that we’re sipping the data through is too thin.”
Another area of concern Neil cited is security. “It’s unfortunate that we’re in a situation where malicious software is a business. Hackers aren’t doing this because they are pranksters or it’s fun, but because they make money off it.” It stands to reason, therefore, that “the hypervisor, as the lowest-level piece of software on a system,” is a place where people are going to look to compromise a system. “If I can get at the hypervisor, I can get at all the VMs,” Neil said. Furthermore, today, virtual machines do not know if they’ve been “hyperjacked,” he said; “understanding the layer of software beneath [the VM] will become more and more important.”