Virtualization administrators are in a unique situation where older operating systems (OSes) can potentially “live forever” in the virtual world. While we may not wish to enable older OSes to remain in our environments indefinitely as virtual machines (VMs), situations arise where we need to do just that.
Recently, I had a situation where an older OS had been dropped from an installable guest toolkit: in this case, VMware Tools. The older operating system, Windows 98, was removed from the VMware Tools installation with the release of VMware Server 2.0. While the need for a Windows 98 virtual machine is rare, it does exist.
To solve the immediate problem, I installed VMware Tools from the .ISO image that ships with version 1.0.3 of VMware Server, which I had been using on another host system. Once installed, the older tools are flagged as "out of date," as expected, but the basic driver optimizations are present on the guest VM.
At first this dilemma did not seem like much of an ordeal, but it started an important thought process. While Windows 98 is the first platform removal from a guest toolkit that I have observed directly, I don't expect Windows NT or 2000 guest OSes to be far from the chopping block of supported platforms.
One way to prevent this issue is to hold onto the tools installation media for each hypervisor platform. VMware Tools, XenTools and Hyper-V Integration Services all exist as virtual CD-ROM .ISO images that you can keep for re-installation on another guest VM. Also keep in mind that support can disappear from the host side as well, so check which guest operating systems each hypervisor release supports.
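If you decide to keep such an archive, a little structure helps when you come back to it years later. Here is a minimal Python sketch of the idea; the directory layout and function name are my own illustration, not part of any vendor tooling:

```python
import shutil
from pathlib import Path

def archive_tools_iso(iso_path, hypervisor, release, archive_root):
    """Copy a guest-tools ISO into a per-hypervisor, per-release archive
    directory so it survives removal from later product versions."""
    iso = Path(iso_path)
    dest_dir = Path(archive_root) / hypervisor / release
    dest_dir.mkdir(parents=True, exist_ok=True)  # build the folder tree as needed
    dest = dest_dir / iso.name
    shutil.copy2(iso, dest)  # copy2 preserves timestamps along with contents
    return dest
```

Called as, say, `archive_tools_iso('windows.iso', 'vmware-server', '1.0.3', '/archive/guest-tools')`, it files each ISO under the hypervisor and release it shipped with, which makes it obvious later which toolkit version still supported a given guest OS.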
As you might expect, configuring an environment this way may be met with some skepticism, since it diverts resources. It may therefore be worth placing this type of guest workload on a free hypervisor such as VMware Server, or on a similar lower tier of virtualization and storage. Keeping a flat-file backup (.VMDK or .VHD) of the VM is a good idea as well.
While this situation is less than ideal for truly obsolete guest operating systems, the rare instance may arise where archiving toolkits can prove very beneficial.
I was just a teenager visiting family in France when I saw my first Minitel, France Telecom’s widely distributed teletext terminal for looking up phone numbers, viewing train schedules, and perusing naughty (!) message boards. While it looks hopelessly archaic now, in those pre-Web days, it was très cool.
Now I hear France Telecom is at it again through its subsidiary Orange Business Services. But this time, instead of targeting every French man and woman, it’s targeting small and medium-sized business (SMB) users with hosted IT services based on virtual desktop infrastructure (VDI) and low-cost access terminals (i.e., thin clients).
Judging from its website, the OBS Forfait Informatique seems to be based on Citrix XenDesktop, and starts at 99€ (about $125) per user, per month for a basic Microsoft Office pack. Virtual desktops can be accessed from existing desktops, or if you’d rather, OBS will subsidize a thin client from Wyse Technology, much in the same way cell phone carriers will give you a phone when you enter into a long-term contract. Tarkan Maner, Wyse CEO, tells me that Australian carrier Telstra is engaged in a similar project with Google to offer IT services to SMBs.
The idea that cable, telephone and other service providers might someday start offering hosted desktop services isn’t exactly novel (it’s certainly a logical progression), but it is nevertheless an interesting development. How long can it be before the France Telecoms, Verizons and Comcasts of the world set their sights back on regular consumers, and offer virtual desktops as a monthly subscription, along with phone, cable and internet? Now that would be très très cool.
I received an email the other day from Wayne, Pa.-based SunGard Availability Services outlining some “essential” steps for addressing virtualization security challenges. In the email, the company urges users to take certain measures, including installing security software, to make sure their virtual machines (VMs) are safe from security threats.
There are many virtualization security products on the market today, yet reports of major real-world VM security breaches are practically nonexistent. In fact, the largest virtualization vendor, VMware Inc., asserts that its software is completely secure, possibly even more secure than physical machines.
And even though the majority of VM security breaches I’ve heard about were hypothetical, performed by scientists through demonstrations or at hacker conventions, not in real data centers, I still receive a steady flow of press releases and product announcements addressing VM security issues.
So now, when I see security vendors warning users about unnamed threats they need to prepare for, I am reminded of the U.S. Homeland Security Threat Level warning system.
Unfortunately, there are no published criteria for the threat levels of the Homeland Security system, so there is no way to tell whether the current threat level is accurate. And by the way, the threat levels have never been green or blue.
Because of this, the system can be manipulated by government officials. For example, during the Presidential election of 2004 when Republican President George W. Bush was running against Senator John Kerry, the Homeland Security Threat Level was bumped up, prompting some academics to speculate this was done by the Bush administration to scare voters into re-electing him. If so (and we will never know), it worked.
Unfortunately, decisions based on fear are usually not well thought out.
But I haven’t heard of any 9/11-style attacks on virtual infrastructures, and the virtualization users I speak with aren’t convinced they have anything to worry about. What gets people to buy virtualization security software is that haunting “what if” question that makes everyone default to the “better safe than sorry” mantra. After all, there is no harm in taking proactive steps to protect against the unknowns – just in case.
For instance, according to this article on the security benefits and risks of virtualization, “the [virtualization] drawback is based on fear of threats that aren’t around today but could become serious problems in the future.” Natalie Lambert, a security analyst with Cambridge, Mass.-based Forrester Research, continues in the article:
“One big concern is about what could happen if a flaw were found in a hypervisor, which would give attackers access to thousands of desktops sitting on a virtual server…That’s not a reality today, but it’s certainly a fear for the future.”
And as SunGard said in its email, “With many organizations focusing on virtualization benefits, they must also examine core risks before it is too late – meaning security needs to be built in from the start.”
It is why we buy life insurance and car insurance and fire insurance for our homes. (Those damn what ifs and their expensive safeguards).
So, for the paranoid among us, check out SunGard’s suggestions for securing your virtual infrastructure here. As they say, better safe than sorry, right?
Integration Services is Microsoft Hyper-V’s guest driver package, installed inside virtual machines to optimize drivers for the virtual environment and provide the best guest experience. Here is a rundown of what you want to know about Integration Services when getting started with Hyper-V.
- Integration Services are installed via virtual CD – For default installations, the C:\windows\system32\ path of the Hyper-V server contains the guest.iso file. This virtual CD provides the installation of Integration Services and is launched from the Action menu on the virtual machine as shown below:
- Integration Services are native on some platforms – Selected releases of Windows Vista and Windows Server 2008 are Integration Services aware and do not need to be installed specifically.
- New Services and the Control Panel – Integration Services shows up as Hyper-V Guest Components in the Control Panel and installs five Windows-based services: Hyper-V Data Exchange Service, Hyper-V Guest Shutdown Service, Hyper-V Heartbeat Service, Hyper-V Time Synchronization Service and Hyper-V Volume Shadow Copy Requestor. Each of these services runs in Task Manager as vmicsvc.exe.
- Ease of Use – The installation of Integration Services permits the full use of Hyper-V Manager through remote desktop connections. Without this installation, interaction with the guest VM within Hyper-V Manager through a remote desktop from a different system will not permit mouse use.
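Since the five service names above are well defined, a quick audit of a guest is easy to script. The sketch below assumes you have already pulled the installed-service names out of the guest (for example, with a service query tool); the function name and approach are illustrative, not a Microsoft-provided check:

```python
# The five Windows services installed by Hyper-V Integration Services.
EXPECTED_SERVICES = {
    "Hyper-V Data Exchange Service",
    "Hyper-V Guest Shutdown Service",
    "Hyper-V Heartbeat Service",
    "Hyper-V Time Synchronization Service",
    "Hyper-V Volume Shadow Copy Requestor",
}

def missing_integration_services(installed_services):
    """Return the Integration Services components not found in the guest's
    installed-service list (names are matched exactly)."""
    return sorted(EXPECTED_SERVICES - set(installed_services))
```

An empty result means the guest has all of the expected components; anything returned is a component worth investigating before assuming Integration Services installed cleanly.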
Installing the hypervisor driver packages, such as VMware Tools, VirtualBox Guest Additions or Hyper-V Integration Services, is always a wise decision to optimize the experience for guest systems. The configuration and setup of Integration Services is very light, and it can be managed much like the other driver packages. More information on Hyper-V can be found on the Microsoft website.
Today, as I read coverage of the VMware Mobile Virtualization Platform, or MVP, on the virtualization.info blog, I noticed a confusing tidbit. In his post, Alessandro Perilli states, “The fact that customers can run a Windows XP virtual machine [VM] on their phones doesn’t mean that it’s usable.”
While I totally agree with Perilli’s skepticism about the usability of Windows XP on a phone, I have to quibble with one point. In fact, VMware MVP won’t enable customers to run Windows XP on their phones at all.
Correct me if I’m wrong, but as a hypervisor for the ARM processor, MVP can’t run a Windows XP virtual machine, because XP is designed to run on an x86 processor, even if it is virtualized. (It can, however, run a Windows CE VM.)
If MVP presented an x86 emulation layer, it would be a different story. But that’s not the story VMware told me.
The more I think about it, the more I like Citrix’s proposed ICA client for the iPhone. While the Citrix approach doesn’t do much for embedded device manufacturers, it gets at end users’ goal of accessing their desktops and all the data that resides on them in an elegant, secure fashion. And if the effusive comments on the Citrix blog are any indication, I’m not alone in thinking that way.
Last week I noticed that the Payment Card Industry’s Data Security Standard (PCI-DSS) was updated on October 1, 2008, from version 1.1 to 1.2. PCI-DSS is a security standard set forth by a consortium of the major credit card companies and is designed to protect cardholder data. As a result, any company that accepts credit cards must comply with it.
About six months ago I wrote that the PCI-DSS standard did not specifically address virtual environments, and instead focused only on servers and networks directly involved with cardholder data. In other words, the specification dictates what must be done to secure a server that may store or process cardholder data, but if that server happened to be a virtual guest, the host server would not be considered in the scope of the specification. Consequently, you could secure a virtual guest all you want, but if you do not properly secure the host server, the guest can easily be compromised regardless of how well it was secured.
I downloaded the summary-of-changes document that lists everything that changed between versions 1.1 and 1.2, anxious to see whether parameters for virtual host servers had finally been added. Across the 14 pages of changes, there was still no mention of virtualization technologies. Surprised by this, I searched the entire 72-page version 1.2 specification for the word “virtual” and found only one instance: in the phrase “virtual private network.”
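For anyone who wants to repeat that spot check on a future revision, the search itself is trivial to script once the PDF has been exported to plain text (the extraction step is assumed here, and the function name is my own):

```python
import re

def count_term(text, term):
    """Case-insensitive, whole-word count of `term` in a document's text.
    The word-boundary anchors keep 'virtual' from matching 'virtualization'."""
    pattern = r"\b" + re.escape(term) + r"\b"
    return len(re.findall(pattern, text, re.IGNORECASE))
```

Running something like `count_term(spec_text, "virtual")` against the extracted specification text makes it easy to verify a claim like the one above, and to diff future versions of the standard for new virtualization language.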
I am puzzled as to why they would continue to ignore virtualization. After all, isn’t just about every company virtualizing in some fashion these days? Are the people who write the specification simply unaware of what virtualization is, and that it has a direct impact on their regulations? Or are they just trusting that we are all securing our virtual hosts properly and there is no need to address them? If that’s the case, they have misplaced a critical amount of trust, as I am sure a great many virtual environments are not properly secured. Ignoring virtualization completely greatly reduces the effectiveness of their efforts to secure environments that deal with cardholder data. It’s essentially fortifying everything within a castle, but leaving the front gate open.
It wouldn’t require a great deal of effort for them to address virtual hosts. A number of security specifications for virtual hosts already exist, such as cisecurity.org’s for VMware’s ESX. Let’s hope that they wise up and address virtualization in the next update of the specification. Until then, their efforts to protect cardholders are not complete. I just hope that my credit card data is not lying on a virtual machine somewhere that resides on an insecure host server that is ripe for the picking. After all, why try to hack a single virtual machine when you can instead hack into a whole host and gain access to all the VMs and their data?
Good virtualization administrators owe it to themselves to survey the field. In today’s virtualization climate, this includes looking at Microsoft’s Hyper-V. Various Hyper-V configurations, including those on Windows Core installations, will need Hyper-V Manager added somewhere to manage, run and configure virtual machines (VMs). Administrators evaluating Hyper-V may have trouble getting started on this step, so let’s go through the required actions to get rolling with Hyper-V.
For Core installations, including the recent free version of the hypervisor, Hyper-V Manager cannot be run locally. One easy way to manage the remote hypervisor is to add Hyper-V Manager to a separate Windows Server 2008 installation. This specific step is the most difficult part of getting started with Hyper-V: knowing where to add Hyper-V Manager is not intuitive, especially for Core installations.
Windows Server 2008 is still somewhat young for widespread data center adoption, so the initial configuration may take a moment to figure out. Hyper-V Manager exists as a feature of Windows Server 2008; to add it, run Server Manager, select Features, click Add Features, expand Remote Server Administration Tools, expand Role Administration Tools, and select Hyper-V Tools. This option is shown in the figure below:
Hyper-V Manager is now installed and ready for use on the server. It is important to note that the Hyper-V Manager feature is not included with the Hyper-V role; they are separate items from the Server Manager perspective. Servers with the Hyper-V role can then be added to Hyper-V Manager on the local installation. This takes a little thought from a permissions standpoint, and it is best done through an Active Directory domain for distributed permissions.
For more information on Hyper-V Manager, check out the Microsoft website.
Converting a system with a large amount of locally attached storage can be a challenging task given the time required to perform the conversion. Here are a few tricks I’ve found that can help ease the pain on these types of conversion tasks.
- Private network: Creating a private network between the physical host and the virtualization host provides two benefits. The main one is that conversion traffic is isolated from the rest of the network; the other is that there is no risk of a user or process connecting to resources and making changes during the conversion. The downside is that the hosts may need temporary network configuration to set up the private link.
- Direct LUN mappings: For virtualization platforms that allow guest VMs to access a LUN directly, mapping the existing LUN to the new VM can be much easier than performing a lengthy conversion of large data volumes that are already on a storage area network (SAN) and mapped to the physical system. Here is a blog post with a little more detail on that topic.
- Housekeeping: If there is junk on the physical system, does it need to be converted to the virtual environment, which may have more expensive storage? Clean up the candidate’s file system, and perform obvious tasks like emptying the Windows Recycle Bin. This allows for a more accurate re-sizing of the drives during the conversion.
- Agent backup and restore: For standard file volumes, such as a file server, it may make more sense to convert only the system drive and perform an agent-based restore to the virtual machine for the additional volumes. This does not necessarily reduce the overall conversion time, but it saves time within a tool like VMware Converter.
- Get a good time estimate: If you have to convert the large storage system as-is, make sure you have a baseline of roughly how many GB can be converted per hour. A good way to get one is to convert a representative candidate system of about 100 GB and use that as a multiplier for your environment. Many factors are in play, such as network speed and traffic, virtualization platform, storage systems (on both ends), and the conversion mechanism used. The baseline allows a good estimate of any downtime that needs to be coordinated for the selected workload.
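The baseline-multiplier tip boils down to simple arithmetic, which is worth writing down so the estimate is repeatable. A small Python sketch with illustrative numbers (the function name is mine):

```python
def estimate_conversion_hours(total_gb, baseline_gb, baseline_hours):
    """Scale a measured baseline (e.g. a ~100 GB test conversion) to
    estimate conversion time for a larger system. Real throughput varies
    with network load, storage on both ends, and the conversion tool,
    so treat the result as a planning figure, not a promise."""
    rate_gb_per_hour = baseline_gb / baseline_hours  # observed throughput
    return total_gb / rate_gb_per_hour
```

For example, if a 100 GB test conversion took two hours, `estimate_conversion_hours(500, 100, 2)` estimates a 500 GB system at 10 hours, before padding the maintenance window for coordination and surprises.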
These tricks can make converting a large amount of storage a little less daunting. What tricks have you employed to tackle physical systems with large amounts of storage when converting them to virtual machines? Leave a comment below and let us know.
A successful VDI pilot is a critical step toward embracing desktop virtualization technology when migrating from traditional desktops. In this video blog, Rick Vanover discusses how to go about creating a pilot to obtain optimal results without engaging vendors in the pre-sales capacity.
[kml_flashembed movie="http://fr.youtube.com/v/KHt_0z-gdyE" width="425" height="350" wmode="transparent" /]
A recent report from IDC claims that Microsoft’s market share in the virtualization arena grew drastically in the second fiscal quarter of 2008 because of the release of Hyper-V. While the accuracy of the report is questionable, as pointed out by one blogger, it does raise the question: Do customers really care about market share?
One common misconception is that market share makes one product better than another. The product with the greatest market share is often, but not always, the best one; just because a product is popular doesn’t mean it’s better than its competitors (take Internet Explorer versus Firefox or Opera as an example). In this specific case, however, VMware does have both the better and the more popular product. Microsoft’s recent market share increase is due in great part to the excitement generated by Hyper-V’s release rather than to Hyper-V being better than VMware ESX.
According to a recent Gartner report, VMware has an 89% market share and is the clear leader in the management/automation, maturity/stability, security and ISV support categories. The one area where it gets low marks is price, which in my opinion is not a big deal because if you look at value instead of price VMware would also get high marks.
Purchasing one product over another simply because of market share is not smart shopping. Someone looking to virtualize should carefully consider all of the available products before choosing one. This includes evaluating them, issuing requests for proposals (RFPs), reading product reviews and talking to others who use the products before finally making an informed decision on which product is best.
Would you buy a particular car brand simply because it was the most popular? Probably not. You would look at features, price, reviews, take a test drive and do whatever else you can to find more information before choosing the car that works best for you.
So is market share important to you and would it influence your decision to choose a virtualization product? Let us know in the comments below.