Last week I noticed that the Payment Card Industry’s Data Security Standard (PCI-DSS) was updated on October 1, 2008, from version 1.1 to 1.2. PCI-DSS is a security standard set forth by a consortium of the major credit card companies and is designed to protect cardholder data. As a result, any company that accepts credit cards must comply with it.
About six months ago I wrote that the PCI-DSS standard did not specifically address virtual environments, and instead focused only on servers and networks that are directly involved with cardholder data. In other words, the specification dictates what must be done to secure a server that may store or process cardholder data, but if that server happened to be a virtual guest, the host server would not be considered within the scope of the specification. Consequently, you could secure a virtual guest all you want, but if you do not properly secure the host server, the guest could easily be compromised regardless of how well it was secured.
I downloaded the summary-of-changes document that listed all of the changes made from version 1.1 to 1.2, anxious to see whether they had finally added requirements for virtual host servers. Out of the 14 pages of changes, there was still no mention of virtualization technologies. Surprised by this, I searched the entire 72-page version 1.2 specification for the word “virtual” and found only one instance, in the phrase “virtual private network.”
I am puzzled as to why they would continue to ignore virtualization. After all, isn’t just about every company virtualizing in some fashion these days? Are the people who write the specification simply ignorant of what virtualization is and of its direct impact on their regulations? Or are they just trusting that we are all securing our virtual hosts properly and there is no need to address them? If that’s the case, then they have misplaced a critical amount of trust, as I am sure there are a great many virtual environments that are not properly secured. Likewise, ignoring virtualization completely greatly reduces the effectiveness of their efforts to secure environments that deal with cardholder data. It’s essentially fortifying everything within a castle but leaving the front gate open.
It wouldn’t require a great deal of effort for them to address virtual hosts. A number of security specifications for virtual hosts already exist, such as cisecurity.org’s for VMware’s ESX. Let’s hope that they wise up and address virtualization in the next update of the specification. Until then, their efforts to protect cardholders are incomplete. I just hope that my credit card data is not lying on a virtual machine somewhere that resides on an insecure host server that is ripe for the picking. After all, why try to hack a single virtual machine when you can instead hack into a whole host and gain access to all the VMs and their data?
Every good virtualization administrator owes it to themselves to survey the field. In today’s virtualization climate, this includes looking at Microsoft’s Hyper-V. Various Hyper-V configurations, including those on Windows Core installations, will need to add Hyper-V Manager to manage, run and configure virtual machines (VMs). Administrators evaluating Hyper-V may have trouble getting started on this step, so let’s go through the required actions to get rolling with Hyper-V.
On Core installations, including the recent free version of the hypervisor, Hyper-V Manager cannot be run locally. One easy way to manage the remote hypervisor is to add Hyper-V Manager to a separate Windows Server 2008 installation. This specific step is the most difficult part of getting started with Hyper-V: knowing where to add Hyper-V Manager is not intuitive, especially on Core installations.
Windows Server 2008 is still somewhat young for widespread adoption in the data center, so the initial configuration may take a moment to figure out. Hyper-V Manager exists as a feature of Windows Server 2008. To add it, run Server Manager, select Features, click Add Features, expand Remote Server Administration Tools, expand Role Administration Tools, and select Hyper-V Tools. This option is shown in the figure below:
Hyper-V Manager is now installed and ready for use on the server. It is important to note that the Hyper-V Manager feature is not included with the Hyper-V role; they are separate items from the Server Manager perspective. Servers with the Hyper-V role can then be added to Hyper-V Manager on the local installation. This takes a little thought from a permissions standpoint, and it is best done through an Active Directory domain for distributed permissions.
For more information on Hyper-V Manager, check out the Microsoft website.
Converting a system with a large amount of locally attached storage can be a challenging task given the time required to perform the conversion. Here are a few tricks I’ve found that can help ease the pain on these types of conversion tasks.
- Private network: Creating a private network between the physical host and the virtual host provides two benefits. The main benefit is that conversion traffic is isolated from the rest of the network traffic; the other is that there is no risk of a user or process connecting to any resources and making changes during the conversion. The downside is that the host may require some temporary special configuration to create the private network.
- Direct LUN mappings: For virtualization platforms that allow guest VMs to access a LUN directly, mapping the LUN can be much easier than performing a lengthy conversion of large data volumes that are already on a storage area network (SAN) and mapped to a physical system. Here is a blog post with a little more detail on that topic.
- Housekeeping: If there is junk on the physical system, does it need to be converted to the virtual environment, which may have more expensive storage? Clean up the candidate’s file system, and perform obvious tasks like emptying the Windows Recycle Bin. This allows for a more accurate re-sizing of the drives during the conversion.
- Agent backup and restore: For standard file volumes, such as those on a file server, it may make more sense to convert only the system drive and perform an agent-based restore of the additional volumes to the virtual machine. This does not necessarily reduce total conversion time, but it does reduce the time spent in a tool like VMware Converter.
- Get a good time estimate: If you have to convert the large storage system as-is, make sure you have a baseline for about how many GB can be converted per hour. A good way to establish this is to convert a representative candidate system of about 100 GB and use that rate as a multiplier for your environment. Many factors come into play, such as network speed and traffic, virtualization platform, storage systems (on both ends), and the conversion mechanism used. A baseline allows for a good estimate of any downtime that needs to be coordinated for the selected workload.
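To turn a measured baseline into a downtime estimate, the arithmetic is straightforward. Here is a minimal sketch; the 2.5-hour baseline and 1.2 TB volume are hypothetical numbers, so substitute your own measurements:

```python
# Estimate a conversion window by scaling a measured baseline rate.
# Assumption: a 100 GB test conversion took 2.5 hours (40 GB/hour).

def conversion_hours(volume_gb, baseline_gb, baseline_hours):
    """Scale a measured baseline conversion rate up to a larger volume."""
    rate_gb_per_hour = baseline_gb / baseline_hours
    return volume_gb / rate_gb_per_hour

# A hypothetical 1.2 TB file server, using the 100 GB baseline above:
estimate = conversion_hours(1200, baseline_gb=100, baseline_hours=2.5)
print(f"Estimated conversion window: {estimate:.1f} hours")  # 30.0 hours
```

In practice you would pad the result to cover snags like a dropped network connection or slower storage on one end.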
These tricks can make converting a large amount of storage a little less daunting. What tricks have you employed to tackle physical systems with large amounts of storage when converting them to virtual machines? Leave a comment below and let us know.
A successful VDI pilot is a critical step toward embracing desktop virtualization technology when migrating from traditional desktops. In this video blog, Rick Vanover discusses how to go about creating a pilot to obtain optimal results without engaging vendors in the pre-sales capacity.
A recent report from IDC claims that Microsoft’s market share in the virtualization arena grew drastically in the second fiscal quarter of 2008 because of the release of Hyper-V. While the accuracy of the report is questionable, as pointed out by one blogger, it does raise the question: Do customers really care about market share?
One common misconception is that market share makes one product better than another. The product with the greatest market share is often the best product, but this isn’t always the case. Just because a product is popular doesn’t mean it’s better than its competitors (take Internet Explorer versus Firefox or Opera as an example). In this specific case, however, VMware does have the better and more popular product. Microsoft’s recent market-share increase is due in great part to the excitement generated by Hyper-V’s release rather than to Hyper-V being better than VMware ESX.
According to a recent Gartner report, VMware has an 89% market share and is the clear leader in the management/automation, maturity/stability, security and ISV support categories. The one area where it gets low marks is price, which in my opinion is not a big deal, because if you look at value instead of price, VMware would get high marks there as well.
Purchasing one product over another simply because of market share is not smart shopping. Someone looking to virtualize should carefully consider all of the available products before choosing one. This includes evaluating them, issuing RFPs, reading product reviews and talking to others who are using the products before finally making an informed decision on which product is best.
Would you buy a particular car brand simply because it was the most popular? Probably not. You would look at features, price, reviews, take a test drive and do whatever else you can to find more information before choosing the car that works best for you.
So is market share important to you and would it influence your decision to choose a virtualization product? Let us know in the comments below.
Many IT departments feel the squeeze from the current economic crisis and have seen their budgets slashed. When times are tough you must get creative, and the best way to do that is to utilize products that won’t cost you a dime. Can’t afford new ESX licenses right now? Why not recycle some of that older hardware with one of the free hypervisors? Or better yet, take one of your big servers that only runs one application and install ESXi so you can run other applications concurrently. Let’s go over some free products that you can download and use in your VMware environment.
VMware Server – Version 2.0 has lots of new features and can be installed on several versions of Windows, Linux and almost any hardware.
VMware ESXi – The entry-level edition of VMware’s enterprise-class hypervisor; the installable version installs on bare metal on a variety of supported and unsupported hardware.
VMware Player – A great tool for starting up virtual machines without installing a full hypervisor on your system.
The VMware appliance marketplace has hundreds of free appliances that span a variety of categories. Appliances range from simple firewalls to enterprise monitoring systems to full-blown Web and database packages (LAMP). You can run these appliances with VMware Player or import them into ESX/Server/Workstation and run them there.
Free management and reporting tools:
Embotics v-Scout – A free, agentless tool for tracking and reporting on virtual machines in VMware VirtualCenter-enabled environments.
Hyper-9 – This soon-to-be-released free search-based reporting tool is a great addition to every administrator’s toolbox. Watch for its release around the end of the year. If you are interested in participating in a beta version of this tool, drop me an email. Not all beta requests will be approved and the company is looking for feedback if you do participate.
RVTools – A handy little tool that displays a multitude of information about your virtual machines.
Solarwinds VM Monitor – A free management tool that monitors ESX hosts and virtual machines.
VMotion Info – A free utility that gathers system and CPU information from your hosts and puts it in a single overview to check for VMotion compatibility.
VM Explorer – A management tool that eases management, backup and disaster recovery tasks in your VMware ESX Server environment.
MCS StorageView – A utility that displays all of the logical partitions, operating systems, capacity, free space and percent free of all virtual machines on ESX 3.x or VirtualCenter 2.x.
ESX HealthCheck – A script that collects configuration information and other data for ESX hosts and generates a report in HTML format.
Free administration tools:
Putty – A must-have utility for every administrator to remotely SSH into their ESX hosts.
Veeam FastSCP – A great SSH file transfer utility application.
WinSCP – Another speedy SSH file transfer utility application.
KS QuickConfig – Designed to reduce the time needed to deploy and configure VMware ESX servers as well as eliminate inconsistencies that can arise with manual operations.
VP Snapper – A free utility that lets you revert to multiple VM snapshots at once rather than one-by-one.
VMware Converter – VMware’s free application that lets you perform physical-to-virtual and virtual-to-virtual operations.
vmCDconnected – A handy utility that scans all virtual machines in your infrastructure and shows if they have a CD connected to any of them. After scanning you can disconnect all of the CDs with a click of a button.
CPU Identification Utility – VMware’s free utility that displays CPU features for VMotion compatibility, EVC and 64-bit VMware support.
VMTS Patch Manager – A great ESX host-patching application for those who don’t have Update Manager.
Free backup utilities:
VISBU – A free backup utility that runs from the Service Console and provides VMDK-level backups of any VM in storage that is accessible by the host.
VM Backup Script – A backup script to perform hot backups of your virtual machines.
Free storage utilities:
Openfiler – A free, open source, browser-based storage appliance that supports NFS and iSCSI. It can be downloaded as an ISO file to install on a server or as a VMware appliance to import to an ESX host. It’s a great way to get more shared disk in your environment: turn physical servers into network-attached storage servers, or use the appliance to turn the local disk on your ESX hosts into shared disk.
Xtravirt Virtual SAN – A free solution that turns local disk space on your ESX hosts into shared VMFS volumes to avoid purchasing costly storage area network disk space.
Free security tools:
Tripwire ConfigCheck – A free utility that rapidly assesses the security of VMware ESX 3.0 and 3.5 hypervisor configurations compared to the VMware Infrastructure 3 Security Hardening guidelines.
Configuresoft Compliance Checker – A free tool that provides a real-time compliance check that can analyze multiple VMware ESX host servers at a time. Also provides detailed compliance checks against both the VMware Hardening Guidelines and the CIS benchmarks for ESX.
If you know of any other free tools that you use in your VMware environment, feel free to list them in the comments section of this post.
I’m always looking for ways to explain virtualization to the nontechnical people in my life, and just came across a really good analogy, courtesy of Luke Kanies, the author of the Puppet system administration tool:
A virtual machine is to the host as an egg is to the carton.
Actually, this is what Kanies really said:
The truth is that [VM] images make a lot of things a lot easier, but when it all comes down to it, VMWare is great for managing the outside of a box. I’ve been told this is a horrible analogy, but the way I think of it is, all of these virtual machine systems — they’re really good at producing and managing eggs, you know these self contained, sealed eggs of functionality. But they’re not very good about getting inside the system. They can’t get inside the egg and manage what’s going on there.
That’s so true. For now, VMware doesn’t do anything to help you get at the whites (the OS) or the yolk (the application), to say nothing of the yucky membrane between the two.
Much like egg cartons and their contents, virtualization doesn’t discriminate on the basis of color: You can have brown eggs (Linux), white eggs (Windows), green eggs (Solaris) — but you can’t have ham. Nor can you have ostrich eggs (z/OS) or goose eggs (Unix) — they just wouldn’t fit in the carton.
I also like this analogy because of the implicit 12:1 egg-to-carton consolidation ratio. Although, from what I’ve been hearing, most folks have graduated from regular cartons and moved on to those scary 5X6 trays.
Props to the Lone (not Lonely) Sysadmin, Bob Plankers, for pointing out this article. Incidentally, he really likes Puppet.
In prior posts, I mentioned that determining how the expiration of a virtual machine (VM) will be managed and implemented is just as important as deciding to have an expiration date. I have been using Embotics V-Scout since its release in early September of this year, and it is one of the quickest and easiest ways to get started with VM expiration dates for free.
Depending on the technology climate, the concept of a VM expiration date may or may not be received well by internal IT teams such as developers. I have taken the stance that test-and-development systems should have an expiration date. The expiration date can be extended, of course, but what matters is that extension is a defined process. Certain VMs, such as a QA environment for a live system, will not have an expiration date; these can be managed in the same fashion as well.
In my experience thus far, I’ve found that the process makes the requesting development teams a little more aware of the system footprint. There has been little resistance to the concept of an expiration date, and it is well communicated from the virtualization team to the requesting groups. Using V-Scout, the procedural steps are to define the owner of the VM as well as the expiration date. When a VM is provided to the requesting group, I generate the V-Scout inventory report. The inventory report is then saved and sent to the requesting group as a way to clearly identify the definition of the VM in the virtual environment. The report includes the operating system version, amount of memory, owner name and email address, and expiration date, among other details. With this information, the report adds an element of service credibility to the virtualization administrator.
Since I have been using the expiration date, the requesters of virtual machines have been proactive in letting me know when a VM needs to be extended. I don’t mind accommodating that request, as I’m trying to avoid a long list of systems that in four years nobody remembers anything about. This proactive request for an extension is very welcome and stems from a few other small practices that accompany the VM expiration date. The most noticeable of these is an automatically scheduled email that reminds the requester that the VM is due to expire in one week. Another is a scheduled task in VMware VirtualCenter to change the power state of the VM due to expire. Lastly, there is another scheduled email that reminds me to remove the VM from the virtual environment storage and Windows Active Directory.
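The reminder step described above boils down to a simple date check. Here is a minimal sketch of that logic; the VM names, owners, dates and the one-week window are illustrative assumptions, not V-Scout’s actual mechanism, and in practice the inventory would come from a tool’s report rather than a hard-coded list:

```python
from datetime import date, timedelta

# Hypothetical VM inventory with owner and expiration date.
vms = [
    {"name": "dev-web01", "owner": "dev-team@example.com", "expires": date(2008, 11, 14)},
    {"name": "qa-db02",   "owner": "qa-team@example.com",  "expires": date(2009, 3, 1)},
]

def due_for_reminder(vms, today, window_days=7):
    """Return the VMs whose expiration date falls within the reminder window."""
    cutoff = today + timedelta(days=window_days)
    return [vm for vm in vms if today <= vm["expires"] <= cutoff]

# On each scheduled run, notify owners of VMs expiring within a week:
for vm in due_for_reminder(vms, today=date(2008, 11, 10)):
    print(f"Reminder to {vm['owner']}: {vm['name']} expires {vm['expires']}")
```

A real deployment would send mail and hook into the power-off task instead of printing, but the windowed date comparison is the core of it.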
These small practices, combined with a tool that fits your needs, allow an expiration date to be implemented without resorting to more expensive lifecycle or lab management products. V-Scout is a free download from the Embotics website.
Bucking market trends, VMware posted “another solid quarter,” said CEO Paul Maritz on a conference call with investors, with revenues of $472 million for the third quarter of 2008. That represents an increase of 32% over the third quarter in 2007, which sent the stock soaring over 20% in after-hours trading.
But whatever growth the sales team managed to eke out, it wasn’t because of increased enterprise license agreements (ELAs), which grant customers the right to deploy unlimited VMware licenses across an organization. Instead, customers have turned their backs on those.
“The trend that we’ve seen is that customers that aren’t necessarily ready to pull the trigger on an ELA, but still have needs for their planned deployment,” said Mark Peek, VMware’s CFO. “So they make a smaller transactional purchase,” he said.
In fact, Maritz said that beginning in Q208 VMware had seen hesitancy concerning ELAs, although most of its customers had come around. Asked if customers were “dropping out” of ELA discussions, Maritz responded that discussions were “rolling from quarter to quarter.” Of the ELA deals that slipped from Q208, “we were able to close 90% of those deals in Q3,” Maritz said.
How does VMware explain its customers’ reticence? Competition from Microsoft or the economy? The latter, Maritz said. This fall, “uncertainty set in in a big way. Customers are adopting a buy-as-they-go approach instead of [purchasing] for the long term with an ELA,” he explained.
Microsoft’s entry into the market, meanwhile, has barely registered. “We did not see any major losses to Microsoft. We did see a couple of customers indulge in bake-offs. But by and large, those worked to our favor,” Maritz said. In short, “Microsoft is still behind in terms of product roadmap, and I don’t see them catching up until the next 12 to 24 months.”
Whatever the competitive pressures, Maritz remained cautiously optimistic about VMware’s financial prospects. “Virtualization remains at the top of [companies’] priorities going forward because it can bring levels of efficiency only more important in the forthcoming economic environment,” he said. “I expect VMware will be one of the companies that weathers this storm well, and we will emerge from it stronger and able to take advantage of new opportunities,” he said.
The relationship between the virtualization administrator and storage administrator can sometimes be less than cordial, yet it can become less contentious when approached in a certain manner. One way that administrators can work more closely with storage teams is to identify a nomenclature for shared storage resources in the virtual environment. This is critical because as virtual environments grow, storage management becomes an extremely important part of a successful implementation.
In larger environments where the storage and virtualization server teams are separate groups, there are a number of ways to organize a standardized nomenclature of a logical unit number (LUN). For example, the virtualization admin may only care about the size of the LUN and possibly the tier of performance. The storage administrator is concerned about those factors and more. As I’ve mentioned in the past, the process of getting down to details in the storage environment is critically important as the infrastructure grows. As time goes on, we simply can’t refer to the “500 GB LUN” or “the last one you gave me” anymore.
Recently I had an opportunity to work with an entirely different storage environment than I’m used to. As expected, the nomenclature question arose, and I saw it as a great opportunity to hammer out a crude yet effective specification for a LUN nomenclature. The situation involved a VMware environment connected to a Fibre Channel storage area network (SAN) with a number of storage devices presenting disks to the VM environment. The basic objective of a standardized nomenclature is that both parties can determine which LUNs are available and the basic information about them. The figure below shows the end result of this ad-hoc policy:
Determining a nomenclature for storage allocations will vary widely across environments. Designations such as which tier the storage lies upon, whether it is development or live-system storage, or which virtualization cluster owns the storage can all be elements of the process. Furthermore, if you’re dealing with a storage management system, the naming requirements may differ from those of the directly attached storage in the example above. The end result for all parties is a far more orderly process and a much better understanding of how storage is allocated in the virtual environment. In addition, the virtualization team will be able to communicate more clearly with the storage team about specific resources in use.
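As a concrete illustration, a nomenclature like the one described might encode tier, environment, owning cluster, size and a sequence number into the LUN name. The exact fields and format below are hypothetical, not the specification from the figure:

```python
def lun_name(tier, env, cluster, size_gb, seq):
    """Build a standardized LUN name, e.g. 'T1-PRD-CL01-0500-03'.

    tier:    storage tier (1 = fastest)
    env:     'PRD' for live systems, 'DEV' for development
    cluster: owning virtualization cluster
    size_gb: LUN size in GB, zero-padded for easy sorting
    seq:     sequence number to keep names unique
    """
    return f"T{tier}-{env}-{cluster}-{size_gb:04d}-{seq:02d}"

print(lun_name(1, "PRD", "CL01", 500, 3))    # T1-PRD-CL01-0500-03
print(lun_name(2, "DEV", "CL02", 1000, 12))  # T2-DEV-CL02-1000-12
```

The point is less the specific fields than that both teams can read tier, ownership and size straight out of the name instead of asking about “the 500 GB LUN.”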