The Windows Server Notebook

Dec 10, 2010 10:47 PM GMT

Microsoft and VMware memory battle is the same song and dance



Posted by: Bcournoyer
Tags: Hyper-V, Server Virtualization, VMware, Windows Server 2008 R2

“Dynamic Memory for Hyper-V is Microsoft’s answer to VMware’s memory overcommit technology for ESX.”

“Dynamic Memory is in fact not the same as memory overcommit, and now that you mention it, VMware’s stuff is still way better.”

If you’ve paid any attention at all to server virtualization news this past year, you’ve likely heard some variation of those two statements. Ad nauseam. Over and over again.

The funny thing is, Dynamic Memory isn’t even officially out yet; the release candidate is available, but SP1 for R2 won’t officially ship until early next year. Yet ever since news of the feature first broke back in the spring, IT folks have debated the degree to which it will put Microsoft virtualization on equal footing with VMware.


There’s no denying it’s taken a while for Microsoft to add more control over VM memory allocation to its virtualization platform. Some said it was because the company simply hadn’t figured it out yet. Others took more technical views, opining that the presence of Address Space Layout Randomization (ASLR) in Windows Server 2008 was slowing things down. As for Microsoft itself, the company traditionally downplayed the importance of a memory overcommit feature and questioned the performance impact of having one.

But no matter how Microsoft spun it, improved memory management for Hyper-V remained high on IT pros’ wish lists. And while it wasn’t quite ready for the first Hyper-V R2 launch (it’s been reported that it was originally intended for that release), Hyper-V will soon come with Dynamic Memory functionality for providing VMs with more RAM, in total, than is physically available on the host machine. End of story.
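To make the overcommit idea concrete, here is a minimal sketch in Python with invented numbers (none of these figures come from the article): the VMs' configured maximums add up to more than the host's physical RAM, and the hypervisor's job is to shuffle memory between guests so the promise holds.

```python
# Minimal sketch (hypothetical numbers) of what "overcommitting" host memory means:
# the sum of the VMs' configured maximums exceeds the RAM physically installed.

host_physical_gb = 32                      # physical RAM on the host
vm_max_gb = {"web01": 8, "web02": 8,       # per-VM configured maximums
             "sql01": 16, "dc01": 4}

committed_gb = sum(vm_max_gb.values())     # 36 GB promised to the guests
ratio = committed_gb / host_physical_gb

print(f"Committed {committed_gb} GB against {host_physical_gb} GB physical "
      f"(overcommit ratio {ratio:.2f}:1)")
# The hypervisor makes this work by reclaiming memory from idle guests
# (ballooning, page sharing, and so on) and handing it to guests that need it.
```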

Except, of course, it’s not.

The same folks who criticized Microsoft for not embracing the over-commitment of memory are now hollering about how Dynamic Memory is still not up to snuff. The other side has fought back with claims that Microsoft’s is the better approach and VMware’s memory overcommit is trouble waiting to happen.

In a recent article, Mike Laverick notes that Microsoft’s memory management approach is actually much closer to Citrix’s than to VMware’s. He also links to a video in which Microsoft’s Ben Armstrong describes how Dynamic Memory works. In the video, Armstrong (who maintains Microsoft’s Virtual PC Guy’s Blog) acknowledges the differences between the two vendors’ approaches to memory allocation:

I always find it interesting when you have two companies like Microsoft and VMware, both out there saying things that seem to conflict, with one company saying ‘This is the way to do it’ and the other company saying ‘No, this is the way to do it.’

While Armstrong jokingly references some of the back-and-forth often heard from both sides, he explains that in his eyes, a lot of the differences arise from simply having different ways of achieving the same goal:

Something I always try to do when I look at different technologies is that I go in with the assumption that the other people are just as smart as I am. You know, they’re not morons – they know what they’re doing. And that kind of leaves two possibilities. The first one (which I always hope isn’t the case) is that they know something we don’t know. The other one, which is actually more often the case, is just that they’re viewing the problem in a different way… and given this different view, a different solution seems more attractive.

According to Armstrong, Dynamic Memory is designed the way it is because Microsoft, being Microsoft, has a deep understanding of how Windows memory management works, and is therefore well placed to build memory management on top of that “guest OS knowledge.” VMware’s memory overcommit, by contrast, takes what he described as a “black box” approach that intentionally avoids gathering memory information from the guest operating system.
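Armstrong’s “guest OS knowledge” versus “black box” contrast is easier to see side by side. The sketch below is illustrative Python only, not vendor code; the Guest class, the buffer percentage and both rebalance functions are invented for this example. One routine resizes each VM based on what its guest OS reports it is using, while the other reclaims memory purely from host-level pressure, without consulting the guests.

```python
# Illustrative sketch only -- not vendor code. The class, numbers and function
# names are invented to contrast the two philosophies Armstrong describes.

from dataclasses import dataclass

@dataclass
class Guest:
    name: str
    allocated_mb: int   # memory currently assigned to the VM by the host
    committed_mb: int   # what the guest OS reports it is actually using

def rebalance_guest_aware(guests, buffer_pct=20):
    """'Guest OS knowledge' style: ask each guest what it needs and resize
    its allocation toward that figure plus a small buffer."""
    for g in guests:
        g.allocated_mb = int(g.committed_mb * (1 + buffer_pct / 100))

def rebalance_black_box(guests, host_free_mb, needed_mb):
    """'Black box' style: never ask the guests anything; when the host is
    under pressure, reclaim the shortfall evenly from all VMs (in real
    hypervisors via ballooning, page sharing or host-level swapping)."""
    shortfall = needed_mb - host_free_mb
    if shortfall <= 0:
        return
    per_guest = shortfall // len(guests)
    for g in guests:
        g.allocated_mb -= per_guest

guests = [Guest("vm1", 4096, 1500), Guest("vm2", 4096, 3500)]
rebalance_guest_aware(guests)
print([(g.name, g.allocated_mb) for g in guests])   # shrinks vm1, grows vm2's headroom

rebalance_black_box(guests, host_free_mb=512, needed_mb=1024)
print([(g.name, g.allocated_mb) for g in guests])   # takes 256 MB back from each VM
```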

The Microsoft side obviously feels its concept is better, and the VMware side feels the same about memory overcommit. For his part, Laverick said that the performance risks of VMware’s memory overcommit apply just as much to Dynamic Memory. He added that, no matter what Microsoft says, VMware users are mostly very satisfied with memory management for ESX and will likely see little reason to switch to Microsoft’s approach.

And once again, here we are. Windows Server 2008 R2 SP1 is still a month (or more) away, but the memory management back-and-forth already seems like old news. That doesn’t mean the SP1 release won’t provide more ammo for the memory management militia. Then again, there’s always the cloud to give the two sides something new to argue about. Oh, the possibilities.

For more information on Microsoft Hyper-V and other server virtualization topics, visit SearchWindowsServer.com.
