ZDNet UK just posted some screenshots from the SCVMM 2012 demos last week, for anyone interested in a few visuals of the interface in action.
ORIGINAL POST 1/25/2011
Those who enjoyed Microsoft’s cloud push last year are really going to like 2011. Many of the technologies that have been discussed over the past year are slowly beginning to see the light of day, starting with the next version of System Center Virtual Machine Manager (SCVMM).
During a live meeting this week, Microsoft product manager Kenon Owens demoed the yet-to-be-released SCVMM 2012 (previously dubbed “v.Next” before TechEd Europe in November). Microsoft is positioning SCVMM as a key component for organizations looking to build private clouds, and Owens broke down some of the ways the software can be used to control infrastructures and services in a cloud-based environment.
The infrastructure part is based on what Microsoft is calling fabric management. Senior program manager Shon Shah described fabric management at TechEd Europe as “taking a bare-metal machine and provisioning it to be a standalone Hyper-V host or even a Hyper-V host cluster. It also involves configuring the storage and network, which is new in (SCVMM 2012).”
The idea is to allow administrators to take the server, network and storage aspects of their physical resources and specify how those resources are allocated in the cloud. For example, an admin can create various “clouds” in SCVMM, then specify how much access each cloud has to their physical resources (networking, load balancing, storage capacity, etc.). Owens described the whole process as “really just taking logical servers and then passing them up into the clouds.”
Once the infrastructure is in place, services can then be pushed out to admins and users based on Active Directory and identity specifications. Owens said admins will be able to assign group permissions to specific clouds, limit quotas for what certain users can access, and so on — straight from the SCVMM console. “These services are more than just the OS images, but the apps inside it, like SQL Server, etc.,” he explained.
These kinds of definitions are becoming commonplace in any cloud-focused presentation from Microsoft these days. Senior product manager Ian Carlson said the company plans to fight the inevitable cloud fatigue currently washing over IT by being very clear as to how Microsoft defines cloud computing and everything in it. He implied that much of the confusion and frustration is rooted in how different vendors have different definitions for different words, which has exacerbated some of the cloud pushback from IT professionals.
“We are trying to come up with a firm taxonomy of the terms we are using,” Carlson said. “So when we talk about something like an appliance, people know that we mean something very specific.”
A few more tidbits of note:
For more information on System Center Virtual Machine Manager and Microsoft virtualization, visit SearchWindowsServer.com.
Microsoft’s Michael Kleef just posted a short update on his personal TechNet blog about a new discovery involving Hyper-V R2 SP1. Apparently, the latest version of Hyper-V will now support up to 12 virtual machines per logical processor, up from the previous max of eight, but only for Windows 7 SP1 guests.
In other words, admins can host up to 12 guest operating systems per logical processor on a Hyper-V host as long as each of those guest OSes is running Windows 7 SP1. Otherwise, the ratio of VMs/logical processor remains at 8:1.
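To make the ratio concrete, here is a minimal sketch of how the supported-density math works out. The function name and the example host configuration are mine, purely for illustration; this is not an official Microsoft sizing tool.

```python
def max_vms_per_host(logical_processors, all_win7_sp1_guests):
    """Sketch of Hyper-V R2 SP1's supported VM density.

    The supported ratio is 12 VMs per logical processor when every
    guest runs Windows 7 SP1; otherwise it remains 8:1.
    """
    ratio = 12 if all_win7_sp1_guests else 8
    return logical_processors * ratio

# A host with two quad-core CPUs and Hyper-Threading enabled exposes
# 2 * 4 * 2 = 16 logical processors.
print(max_vms_per_host(16, all_win7_sp1_guests=True))   # 192
print(max_vms_per_host(16, all_win7_sp1_guests=False))  # 128
```

Note that these are support limits, not performance guarantees; whether a given host can actually sustain that many guests depends on memory and I/O as much as processor count.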
The concept of “VMs per logical processor” is a sort of confusing one, however, as most admins are more familiar with VMs per core. One commenter on Microsoft’s requirements and limits page for Hyper-V even pleaded that the company change its lingo, noting that “while [logical processors] might be technically correct, what really matters is virtual processors per physical cores. This is a much easier concept to grasp.”
For his part, Kleef attempted to clear up the confusion in his post. He noted that “[Microsoft doesn’t] really support VMs/core…we actually support VMs/logical [processors]”, explaining that the term core implies “physical core support”, while logical processors “can be either physical or multi-threaded cores.”
Of course, he also added to the confusion by writing “Hyper-V SP1 increases support for VM/core” right in the title of the blog post, but still…
For more information on the first service packs for Windows 7 and Server 2008 R2, visit SearchWindowsServer.com.
“Dynamic Memory is in fact not the same as memory overcommit, and now that you mention it, VMware’s stuff is still way better.”
If you’ve paid any attention at all to server virtualization news this past year, you’ve likely heard some variation of those two statements. Ad nauseam. Over and over again.
The funny thing is, Dynamic Memory isn’t even officially out yet; even though the release candidate is now available, SP1 for R2 won’t officially ship until early next year. But ever since news of the feature first broke back in the spring, IT folks have debated the degree to which it will put Microsoft virtualization on equal footing with VMware.
There’s no denying it’s taken a while for Microsoft to add more control over VM memory allocation to its virtualization platform. Some said it was because the company simply hadn’t figured it out yet. Others took more technical views, opining that the presence of Address Space Layout Randomization (ASLR) with Windows 2008 was slowing things down. As for Microsoft itself, the company traditionally downplayed the importance of a memory overcommit feature and questioned the performance impact of having one.
But no matter how Microsoft spun it, improved memory management for Hyper-V remained high on IT pros’ wish lists. And while it wasn’t quite ready for the first Hyper-V R2 launch (it’s been reported that it was originally intended for that release), Hyper-V will soon come with Dynamic Memory functionality for provisioning VMs with more RAM than is physically available on a host machine. End of story.
Except, of course, it’s not.
The same folks who criticized Microsoft for not embracing the over-commitment of memory are now hollering about how Dynamic Memory is still not up to snuff. The other side has fought back with claims that Microsoft’s is the better approach and VMware’s memory overcommit is trouble waiting to happen.
In a recent article, Mike Laverick notes that Microsoft’s memory management approach is actually much more similar to that of Citrix than VMware. He also links to a video with Microsoft’s Ben Armstrong describing how Dynamic Memory works. In the video, Armstrong (who maintains Microsoft’s Virtual PC Guy’s Blog) acknowledges the differences between the two vendors’ approaches to memory allocation:
I always find it interesting when you have two companies like Microsoft and VMware, both out there saying things that seem to conflict, with one company saying ‘This is the way to do it’ and the other company saying ‘No, this is the way to do it.’
While Armstrong jokingly references some of the back-and-forth often heard from both sides, he explains that in his eyes, a lot of the differences arise from simply having different ways of achieving the same goal:
Something I always try to do when I look at different technologies is that I go in with the assumption that the other people are just as smart as I am. You know, they’re not morons – they know what they’re doing. And that kind of leaves two possibilities. The first one (which I always hope isn’t the case) is that they know something we don’t know. The other one, which is actually more often the case, is just that they’re viewing the problem in a different way… and given this different view, a different solution seems more attractive.
According to Armstrong, Dynamic Memory is designed the way it is because with Microsoft being Microsoft, they have a better understanding of how Windows memory management works. Therefore, they are better suited to build memory management on top of that “guest OS knowledge” as opposed to VMware, where memory overcommit takes what he described as a “black box” approach that intentionally avoids gathering memory info from the guest operating system.
The Microsoft side obviously feels its concept is better, and the VMware side feels the same about memory overcommit. For his part, Laverick said that the performance risks of using VMware overcommit are the same when using Dynamic Memory. He also added that no matter what Microsoft says, VMware users are mostly very satisfied with memory management for ESX, and will likely see little reason to switch to Microsoft’s approach.
And once again, here we are. Windows Server 2008 R2 SP1 is still a month (or more) away, but the memory management back-and-forth already seems like old news. That doesn’t mean the SP1 release won’t provide more ammo for the memory management militia. Then again, there’s always the cloud to give the two sides something new to argue about. Oh, the possibilities.
For more information on Microsoft Hyper-V and other server virtualization topics, visit SearchWindowsServer.com.
VMware’s vCenter Update Manager for vSphere is designed for this exact same purpose, but it doesn’t work with Microsoft’s VM management product, as vCenter Server is required to run the tool. Microsoft’s VMST is similarly platform-specific, with System Center Virtual Machine Manager (SCVMM) 2008 or R2 being a prereq. You are also required to have either WSUS 3.0 SP1/SP2 or some version of System Center Configuration Manager 2007 to apply the updates.
VMST 3.0 supports Windows Server 2008 R2 and Windows 7 (naturally), and the Windows Task Scheduler can be used to schedule servicing jobs, according to Microsoft.
Earlier this month, VirtualizationAdmin.com published an extremely detailed breakdown by Janique Carbone of the installation process for VMST 2.1, which has now been completely replaced by this latest version. I’d expect the process has changed somewhat with 3.0, but the piece does include a lot of configuration tips and how-tos, so it still might be worth a read. Users also have the option to upgrade from version 2.1 if it’s already installed.
For more information on the latest server virtualization tools and utilities, visit SearchServerVirtualization.com.
As everyone knows, VMworld 2010 took place this week in San Francisco. Like last year, Microsoft’s booth did not provide any demos for Hyper-V, as the company claimed VMware’s expo agreement prevents directly competing products from being showcased (which VMware countered and Citrix found a way around).
Microsoft did make its presence felt in other creative ways, however.
The tactic getting the most attention (both negative and positive) is a full-page ad in USA Today where Microsoft’s corporate VP Brad Anderson expresses doubt over VMware’s ability to deliver a “complete cloud computing environment.” And while company reps weren’t standing outside the conference hall handing out copies, my colleague Beth Pariseau writes that issues were delivered to the hotel rooms of all VMware attendees.
Here are a few choice excerpts from the ad:
“But with the arrival of cloud computing, signing up for a 3-year virtualization commitment may lock you into a vendor that cannot provide you with the breadth of technology, flexibility, or scale that you’ll need to build a complete cloud computing environment.”
“Not only is Microsoft’s server virtualization solution approximately one-third the cost of a comparable solution from VMware, but also a recent Microsoft study of 150 large companies showed those running Microsoft virtualization spent 24% less on IT labor on an ongoing basis.”
“Most importantly, as you build out the next generation of your IT environment, we can provide you with scalable worldwide public cloud computing services that VMware does not offer.”
Our friend Greg Shields posted the ad in its entirety on Tuesday, asking folks if they thought it was a smart move. While the comment section wasn’t exactly on fire, one unimpressed reader did urge Microsoft to spend more time improving their technologies and less time on timely ad campaigns.
On a smaller scale, the company posted a video this week featuring Microsoft and Citrix reps reacting to the opening keynote. The footage is of Microsoft general manager Mike Neil, along with Simon Crosby and Harry Labana of Citrix, standing outside the show discussing VMware’s stance on desktop virtualization and the cloud. The best line comes from Labana, who sarcastically declares “Windows is dead!” before later questioning how serious VMware truly is about the desktop. You can view the full video below:
[kml_flashembed movie="http://www.youtube.com/v/SbqGMgXu850" width="425" height="350" wmode="transparent" /]
Of course, things always get a little ugly between the two vendors around this time of year, and when compared to past antics, a full-page ad could be considered rather tame. At the same time, plenty of other Microsoft technologies not named “Hyper-V” were highlighted throughout the week. Microsoft demoed Windows Azure at its expo booth, and some of the most popular sessions were reportedly geared toward virtualizing Exchange Server and Microsoft Active Directory environments.
Were you at VMworld 2010? What did you think of the companies’ tactics this year? Sound off in the comments below.
For more news and analysis from VMworld 2010, visit SearchServerVirtualization.com.
At the time of the first article (which predates Hyper-V), questions about I/O bottlenecks, security and even Microsoft’s questionable support of DC virtualization made the whole concept seem somewhat dicey. The recommendation was that while it was possible (Microsoft did have “how to” documentation, after all), virtual domain controllers should really only be implemented on a limited, non-critical basis.
Obviously, virtualization is a lot more popular today than it was back then, and continued technological advances have made it arguably the driving force in today’s IT market. So of course everyone is on board with virtualizing DCs now, right? Wrong.
I was reading a thread recently on the subject, and the debate is as heated as ever before. People were basically falling into three camps:
Those who think domain controller virtualization is a great idea. (Not good, mind you, great.) The one opinion these folks seem to share is the importance of following the recommended best practices to a T. Not virtualizing all of your DCs and leaving some physical is also a common suggestion, though not something everyone finds necessary. (I believe Microsoft recommends two physical DCs per domain.)
The second camp is made up of people who simply say, “Thanks, but no thanks.” Questions involving security, backups and high availability abound, or the planning/configuration process is just too much of a hassle.
The last camp is made up of people who are all for domain controller virtualization, but can’t seem to agree on the right way to do it:
“Don’t keep all your virtual DCs on the same host machine!”
“No way, that defeats the whole point!”
“Don’t virtualize FSMO roles!”
“Why the heck not?”
You get the idea. One thing that’s clear is that DC virtualization is getting more popular. But while those who have done it successfully appear set to never look back, others remain reluctant to take the plunge.
What are your thoughts on domain controller virtualization? Do you fall in the pros or the cons camp? Share your thoughts in the comment section below.
For more information on virtualization, Active Directory and more, visit SearchWindowsServer.com.
As of R2, Hyper-V supported up to 384 VMs per server. That number dropped to 64, however, if those virtual machines were running in a cluster. Microsoft has updated Hyper-V to allow clustered nodes to also support a maximum of 384 virtual machines. The total number of VMs per cluster has also been increased to 1,000 (up from an initial limit of 960).
Jeff Woolsey, Microsoft principal group program manager for server virtualization, said during a conference call that additional testing was required to increase these limits, but that increasing the number of supported VMs in a cluster was a priority based on customer feedback. “Now people can really cram as many VMs as they want onto a single server and get maximum density,” he said.
The maximum number of nodes per cluster remains at 16. IT pros can decide for themselves how many VMs to run per node, a number that will vary depending on how many nodes they have in their cluster. For example, a three-node cluster (two active, one failover) can run up to the maximum 384 VMs per node for a max total of 768 VMs in the cluster. A five-node (four active, one failover) cluster, however, could have each node running 250 VMs to reach the max of 1,000.
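The per-node arithmetic above can be sketched in a few lines. This is an illustrative calculation of the limits as described in this post (the function name is mine, not a Microsoft tool): capacity is bounded both by the per-node maximum across active nodes and by the cluster-wide cap.

```python
def cluster_vm_capacity(total_nodes, failover_nodes,
                        max_per_node=384, max_per_cluster=1000):
    """Sketch of the updated Hyper-V R2 clustered VM limits.

    Only active nodes count toward capacity, and the result is
    capped at the cluster-wide maximum of 1,000 VMs.
    """
    active_nodes = total_nodes - failover_nodes
    return min(active_nodes * max_per_node, max_per_cluster)

print(cluster_vm_capacity(3, 1))  # 768  (2 active nodes * 384)
print(cluster_vm_capacity(5, 1))  # 1000 (cluster cap; 250 VMs per active node)
```

The second example shows why the cluster-wide cap matters: four active nodes could in theory host 1,536 VMs at 384 apiece, but the 1,000-VM ceiling kicks in first.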
Confused? Microsoft provides a table to help clear up some of the bewilderment (and hopefully give your calculator a break). The takeaway here, however, is that the number of clustered VMs that Hyper-V can support has gone up, and the update is already available for R2 (i.e. it’s not part of SP1).
Woolsey said the company isn’t finished upping these limits. “Scalability is one of those areas that’s never really done. So it’s something that you’ll always see us work on, so we can continue to see those scalability numbers climb,” he said.
One Microsoft MVP I spoke to said he’s pleased with the continued improvements that have been made to Hyper-V, both for R2 and since. “They are working hard, there’s no doubt,” he said. “And [Hyper-V] is really coming along.”
He also noted that the new dynamic memory improvements will help with the capacity planning process for this increase in clustered VM support. He stressed, however, that the company is still behind VMware when it comes to other memory management features, such as bubble memory capabilities.
For more information on Microsoft Hyper-V R2, visit SearchWindowsServer.com.
Judging by the number of times I heard some variation of this, it’s clear that this is something that Microsoft is determined to drive home. In fact, the point was reinforced several times during the opening keynote, where the focus was on extending the data and tools that IT professionals use on premises to a cloud computing environment.
While this was a major topic at Microsoft Management Summit (MMS) 2010 in regards to System Center, it was even more so at TechEd, where Active Directory was added to the mix. I sat down with Microsoft’s Justin Graham not long after the keynote, and while we spent a good amount of time discussing what to expect from Windows 7 and Server 2008 R2 SP1, we also chatted a little about Microsoft’s plans for AD in the cloud and the company’s overall strategy.
Below are a few snippets from that conversation.
On extending Active Directory to the cloud:
“Microsoft is really the only company that is uniquely positioned to help customers with this choice, and to get through all of the stages. No one has the history we have when it comes to identity management and Active Directory.
Active Directory has been huge in the identity management space for 10 plus years now. And we are working on that identity extending into the cloud, so that when a customer does want to make that leap to Azure, [he or she doesn’t] have to worry about a brand new, completely different identity model. They can use their existing investments with their on-premise identity and just stretch that into the cloud without having to worry about anything.
One of the things customers can do today to start to prepare themselves is to take a really hard look at Active Directory Federation Services (AD FS) 2.0, and really look at deploying that and understanding that, so when the next generation of AD comes along with Azure, they can start to make that connection and it will be very simple and seamless for them.”
On customer interest in private cloud environments:
“That’s all about virtualization being the road into the cloud. So the more you virtualize, and the more you orchestrate, and the better you manage your environment and think about your data center as a set of services and things that you deliver to your users, and the more you have management and all the underlying virtualization pieces working to orchestrate that — the easier it’s going to be, and the more benefit they are going to get out of the private cloud.”
On Microsoft’s long-distance live migration capabilities:
“It’s just going to make things a lot easier for users. When you think about fault tolerance and high availability, of course you would want to make sure that you can fail things over to a number of different areas. And of course if you can stretch it, then that gives customers more flexibility. So I think that that’s even a better story when you start to think about the private cloud, and it’s just going to give [users] that additional plus.”
For more from Microsoft TechEd 2010, visit SearchWinIT.com.
There is some basic stuff included here, but a lot of it is really insightful — especially the sections on ensuring a successful migration and specific “gotchas” to be aware of.
First things first. Why are people so excited about Hyper-V?
Greg Shields: Hyper-V has got a lot of really compelling features, and there’s been a lot of hype surrounding it over the last few months or so since Windows Server 2008 came out. It took another four to six months for Hyper-V to come out, so there’s been a lot of people waiting for it. What’s interesting about Hyper-V is that when it arrived on the scene, even as a 1.0 release, it pretty much came out in force. It’s a very, very strong first release coming out of Microsoft.
One thing I’ll tell you is that one of the reasons why Hyper-V took so long [to come out] after the release of Windows Server 2008 is that I think Microsoft recognized that in order for them to have a really good virtualization solution, it needed to be absolutely “bomb proof”. So they spent an extra four to six months testing the release code to make sure Hyper-V was going to be at those levels. So what we’ve got now is an extremely fast virtualization platform, and some would argue it’s even faster than other technologies from places like VMware. It’s very easy to use, and even if you’re an IT generalist you’re going to be able to pick it up very quickly and easily.
It also comes at basically no cost. It’s effectively free so organizations that wouldn’t normally do virtualization are going to be able to because it doesn’t cost them anything.
How does Hyper-V compare with VMware ESX?
Shields: Well I’ll tell you this, and this is not the standard Microsoft party line because if you ask Microsoft they’ll tell you its parity with them right now. I’m not going to say it’s parity at this point, and that’s because the partner ecosystem for Hyper-V just isn’t there yet. You know ESX has a substantial partner ecosystem for backup, replication and management technology, and all that stuff that wraps around the virtualization platform to make it useful for a lot of organizations, and that’s just not there yet [with Hyper-V]. It’s coming; but it’s just not there yet.
Hyper-V also has one sort of key limitation at this point that’s going to shy some people away, and that’s live migration. It takes a little bit more time to complete a live migration [with Hyper-V]. It’s not instantaneous like ESX is. And some people are just genuinely afraid to deploy Microsoft technology when it first comes out; you know, the old “Wait until it hits service pack 1” concept.
But for organizations that have actually deployed it — and I’ve done two production deployments here in consulting engagements on Hyper-V, one actually three weeks after Hyper-V was released — they have just been pleasantly surprised at how it works. Like with one particular engagement, this group was trying to use a different virtualization platform, I showed up at the door and within six hours we had a clustered Hyper-V instance up and running with Terminal Services virtual machines. So it’s just painless to install and painless to use.
Can you go into a little more detail about Hyper-V’s live migration capabilities?
Shields: Sure, so live migration is the idea that I have a virtual workload, a virtual machine, and it is running off of Server A. I recognize that there is something wrong with Server A or I want to move it around for load balancing purposes. In doing that, ESX has this technology called VMotion, and VMotion has been around for a long, long time. What it allows — without delay and without any loss of service – is for the processing of that virtual machine to move from Server A to Server B, and nobody will notice the difference.
Now Microsoft has a similar capability — which they call quick migration — that can do that. The only difference is that with this version of Hyper-V, the motioning part of it takes somewhere between six and 60 seconds to complete. Now you’re not going to see a reboot of the server when this happens. You’re going to see the server go into a pause state like you’re used to seeing with virtual machines in other ways. But it goes into a pause state, moves to the other node, and un-pauses.
Now for some environments, this is not something they can have. They can’t handle having that six to 60 seconds of downtime. For a lot of environments though, especially smaller businesses and mid-market organizations, [they might ask themselves] “What is that [six to 60 seconds] compared to having things virtualized?” That’s not too bad.
So this version of Hyper-V available today has that little bit of delay that we don’t see with ESX. Now what’s interesting is that the technical reasons behind the problem are relatively trivial to fix. In fact, Microsoft has already fixed it in the version of Hyper-V that will be released with — and I think I can say this — Windows Server 2008 R2. With that release, Hyper-V’s live migration will be effectively the same as VMotion.
So what do people absolutely need to know to ensure a successful migration?
Shields: Well a familiarity with Microsoft technology first of all! Actually, if you are considering deploying Hyper-V today, the one thing you have to recognize is that the Hyper-V role that may be available on your Windows Server 2008 instance is really not the correct version of Hyper-V. You actually need to download a number of updates to update Hyper-V to the RTM code. [Microsoft KB article] 950050 is that specific one that updates you to the RTM code, but there are a number of other patches that you need to apply to your Hyper-V instance to really get it ready to go. And that’s probably the biggest “gotcha” right now, making sure you have that right cocktail of patches in order to get that server up and running.
Here are a couple more numbers, and you might want to take some notes on this. KB 950050 is a big one. You’ll also need the Hyper-V management console on your desktop, which is KB 952627. If you’re going to use Hyper-V in a clustered environment, you’re going to need KB 951308. And then there are two more that you want to install to update Hyper-V to work with System Center Virtual Machine Manager, which are KB 956589 and 956774.
So there’s a whole cocktail of patches that you’re going to need to get it ready to go. Just knowing what those are is the big limiting factor at this point.
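Since knowing the list is the hard part, a simple checklist script can help. The sketch below is purely illustrative (the function name and the sample installed set are mine); on a real host you would gather the installed hotfix IDs from something like `wmic qfe get HotFixID` rather than hard-coding them.

```python
# The Hyper-V RTM "patch cocktail" from the interview, as a checklist.
REQUIRED_KBS = {
    "KB950050",  # updates the Hyper-V role to the RTM code
    "KB952627",  # Hyper-V management console for the desktop
    "KB951308",  # needed for clustered Hyper-V environments
    "KB956589",  # System Center Virtual Machine Manager integration
    "KB956774",  # System Center Virtual Machine Manager integration
}

def missing_kbs(installed):
    """Return the required hotfixes not present in the installed set."""
    return sorted(REQUIRED_KBS - set(installed))

# Hypothetical example: a host with only the first two patches applied.
installed = {"KB950050", "KB952627"}
print(missing_kbs(installed))  # ['KB951308', 'KB956589', 'KB956774']
```

Running something like this against each Hyper-V host before go-live is one way to make sure the whole cocktail is in place.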
What are some other “gotchas” to be aware of for people looking to use Hyper-V?
Shields: Probably the one these days is going to involve the backups of your virtual machines, and even ESX saw this in the early days because traditional backups are pretty easy. These days you throw a client onto your computer and it goes and sucks up all the files and the 10,000 files that make up your computer and puts it on tape somewhere. Well, the image level backup, entire server backup or single file backup that comes to mind when we think of virtualization is … well it is a relatively new technology. VMware has been doing it for a while now, but Hyper-V again is still a 1.0 release, so there are some gotchas associated with Hyper-V and backups.
Now let me frame that. I’m going to tell you what the gotchas are, but I’m also going to tell you what the benefits are, and the major benefit here is that Hyper-V integrates with VSS [volume shadow copy service]. So whereas some other virtualization platforms may not do the proper quiescing of your virtual machine files so that they restore properly, Hyper-V is going to do that with no problem whatsoever. This is really, really good for your domain controllers and your Exchange and SQL Server databases — those sorts of transactional databases that would otherwise have problems at restore.
So that’s the good point. With that though there are a couple of gotchas, and I’ll just sort of list some of the ones you need to be aware of. If you’re going to use Windows Server Backup, be aware that there’s a special registry key that you need to set to enable the VSS support. That’s just if you’re using Windows Server Backup. If you’re using another type of backup tool, make sure that it is VSS-aware. A lot of them are these days, but they need to be VSS-aware so they can back up the virtual machines and quiesce them the way they are supposed to.
Never use dynamic disks — though these days you should never use dynamic disks anyway. Also be careful with your snapshots: when Hyper-V does a backup using VSS, it leverages snapshots, and if you have two or more snapshots in place, the backup will actually fail.
Also be careful with the use of network attached storage. If you are directly connecting in storage via pass-through or a direct iSCSI connection, that will not get backed up if you do an image-level backup, so you may need to do that separately. And also be aware of any network attached storage at all, making sure that storage is up and running before starting the backup. [Otherwise it’s] not going to get backed up.