A common VMware Communities question is how to P2V or convert a system from within a demilitarized zone (DMZ) to a virtual machine (VM) running within an ESX host that will be part of the DMZ virtual network.
P2V works by imaging the physical host within the DMZ and transferring that image to the administrative/management network attached to the service console (management appliance) of the VMware ESX(i) host. This in essence crosses security zones and could connect the hostile DMZ to the ‘in need of protection’ virtualization management network. Access to this network from the DMZ could be disastrous.
One solution is to perform the P2V migration in stages.
- Create the DMZ virtual network within your virtual infrastructure.
- Get your security team to bless a laptop/workstation for work within the DMZ. Ensure this laptop/workstation has enough removable storage to contain the resultant VM or VMs of the physical servers you wish to convert. Use your P2V tool to convert the physical server and store the resulting VM on the removable media.
- Disconnect the removable media and bring it to your secure administrative network.
- Connect the removable media to a workstation within the administrative network. Ensure this connection is read-only for the moment if possible.
- Virus Scan the removable media, but note a VMDK can give false positives; you are really looking for anything that may be hidden from view.
- Use VMware Converter to import the VM or VMs into the virtual infrastructure ensuring they are connected to the proper virtual network.
- Power on the VM with the network disconnected and fix any issues that are caused by the P2V migration, such as the need to remove hardware agents, and fix anything that needs to be fixed.
- Reboot the VM with the network connected.
The P2V migration is now complete, and at no point did the hostile DMZ touch the management network. The key is to power on the VM only once it is within a safe, isolated environment and to check for viruses and worms that may live within your DMZ.
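One lightweight safeguard during the staged transfer is to checksum the converted image inside the DMZ and verify it again on the administrative side before importing. A minimal sketch, assuming sha256sum is available on both workstations; the file name and paths are hypothetical stand-ins, not VMware tooling:

```shell
# Hypothetical sketch: checksum the converted image inside the DMZ and
# verify it again on the administrative side before importing it.
# A small dummy file stands in for the real VMDK here.
workdir=$(mktemp -d)
vmdk="$workdir/dmzserver1.vmdk"
dd if=/dev/zero of="$vmdk" bs=1024 count=16 2>/dev/null

# Inside the DMZ, just before detaching the removable media:
sum_dmz=$(sha256sum "$vmdk" | awk '{print $1}')

# ...media is carried to the administrative network...

# On the admin workstation, with the media mounted read-only:
sum_admin=$(sha256sum "$vmdk" | awk '{print $1}')

if [ "$sum_dmz" = "$sum_admin" ]; then
    echo "checksum OK - safe to import"
else
    echo "checksum mismatch - do not import" >&2
fi
```

A mismatch here tells you the image was altered or corrupted in transit, which is exactly the point of staging the migration.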
Beginning with VirtualCenter 2.5 Update 2, VMware has provided the ability to pass your currently logged-in Windows domain credentials to VirtualCenter using the VMware Infrastructure Client so you no longer have to log in to VirtualCenter separately. To do this you can create a special shortcut for VirtualCenter on your workstation as outlined below:
1. Create a new shortcut on the desktop of the PC that you want to setup single sign-on for VirtualCenter.
2. In the Create Shortcut Wizard, click Browse and navigate to the location of the VpxClient.exe program and click OK. (By default it is located in C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\)
3. After the full path is in the Location field, append -passthroughAuth -s <VirtualCenter Server hostname> to the end of the line, where <VirtualCenter Server hostname> is the hostname or IP address of the VirtualCenter instance you want to connect to.
4. Click Next and give a name for the shortcut and then click Finish. Once you double-click on the newly created shortcut you will be logged into VirtualCenter using your currently logged-in Windows credentials.
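For example, the resulting shortcut target would look like the line below (vc01.example.com is a placeholder for your own VirtualCenter server's hostname or IP address):

```
"C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\VpxClient.exe" -passthroughAuth -s vc01.example.com
```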
If you choose to use this convenient feature, make sure you take precautions to prevent someone from accessing an unlocked workstation and connecting to VirtualCenter using your credentials.
If you log in to shared workstations, make sure you log out when done. If you do not, all someone has to do is open a browser, access the default page on any ESX or VirtualCenter server to download and install the VMware Infrastructure Client on that workstation, and they can log in to VirtualCenter as you and access anything you have access to.
Anytime you leave your workstation, make sure you lock it; you can make a shortcut for doing this in one mouse click or use the Windows+L keystroke combination. Also make sure your workstation is set to automatically lock after no more than 15 minutes of idle time as protection in case you forget to lock it when you walk away.
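The one-click lock shortcut mentioned above can simply point at the standard Windows command for locking the workstation:

```
rundll32.exe user32.dll,LockWorkStation
```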
For more information on this feature, including how to change the default Security Support Provider Interface (SSPI) that is used, see VMware KB article #1006611.
When it comes to finding technical information on VMware products there are a number of obvious sources of information such as the official documentation and books but there are also some not so obvious sources that contain tons of great technical information.
VMware’s Knowledge Base
First there is VMware's online knowledge base. The volume and quality of the information it provides continues to amaze me. Typically, users treat vendor knowledge bases as a tool for resolving problems in their environments, since many known problems are documented there. VMware's knowledge base, however, offers much more than that: it also contains how-to articles, troubleshooting tips, sample configurations, best practices and so on. Here's just a sample of some of the great content published there in the last 10 days:
• Configuring the speed and duplex of an ESX Server host network adapter
• Cisco Discovery Protocol (CDP) network information via command line and VirtualCenter on an ESX host
• Sample Configuration – ESX connecting to physical switch via VLAN access mode. External Switch VLAN Tagging (EST Mode)
• Troubleshooting SCSI Reservation failures on Virtual Infrastructure 3 and 3.5
• Advanced Configuration options for VMware High Availability
• Advisory for advanced VMkernel parameter NFS.LockDisabled
• Sample Configuration – Network Load Balancing (NLB) Multicast mode over routed subnet – Cisco Switch Static ARP Configuration
• VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines – ESX 3.X
• Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter
They even have a knowledge base article on how to search the knowledge base. I encourage you to frequently check out the knowledge base as new documents are added daily. You can also subscribe to a weekly digest by editing your VMware account preferences and setting up your email subscriptions.
Next there is a new website VMware launched called VI:OPS that contains many documented proven practices from both VMware itself and its customers. The information is organized into several different zones: Strategy, Applications, Security, Management and Availability. Dozens of great documents providing some fabulous information are already there.
There is a wealth of information available on the VMworld website from all the great sessions each year. Only attendees can access the current year's sessions, but anyone can register for a free account and access previous years'.
You may have used the VMTN community forums for posting technical questions, but did you know there is a whole separate documents section where users can create technical documents to share their information and tips with other users? There are hundreds of documents already created there, with some great tips from both other users and VMware employees.
While VMware Tools guest OS upgrades are an inconvenience in the life of a VMware ESX and VMware Infrastructure 3 (VI3) administrator, they need to be done. VMware Tools is a set of drivers designed to greatly improve console interaction, but it also includes drivers such as disk and network devices for most operating systems. As new versions of ESX become available, the VMware Tools in each guest operating system need to be upgraded as well. Running an old version of VMware Tools gives basic functionality, but each virtual machine (VM) will eventually need to be upgraded.
The good news is that there are a few ways to approach getting the tools out to the guest VMs. Here are a few from my bag of tricks that you can use:
Install through Windows Group Policy: For 100% virtualized Windows environments within an organizational unit (OU), this is easy for the one-time installation. This can be applied to a computer accounts OU that contain VMware-based virtual machines.
Interactive or automatic install on guest console: This option allows the tools to be upgraded from the VMware Tools .ISO that is mounted and run on the guest VM. Note that the automatic upgrade may require an automatic reboot of the VM.
Run VMware Tools upgrade from a UNC path. If you copy the contents of the mounted .ISO of VMware Tools to a shared path on the network, Windows-based guest VMs can run the install from that central location. This can save the work of adding permissions for junior administrators to have device mount permissions in VMware, and you can control the access entirely in the operating system.
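If you go the UNC-path route, the installer can also be run silently. A sketch, assuming the Windows VMware Tools installer's InstallShield/MSI switches and a hypothetical \\fileserver\vmtools share holding the copied .ISO contents:

```
\\fileserver\vmtools\setup.exe /S /v "/qn REBOOT=ReallySuppress"
```

REBOOT=ReallySuppress is a standard MSI property that holds off the automatic reboot so you can schedule it yourself.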
One caveat is that installations or upgrades of VMware Tools usually cause an interruption in network connectivity while the network driver is updated. Depending on how you are connected to the guest and how disconnected sessions are handled, the install may be cancelled mid-stream; this can happen with Remote Desktop sessions to Windows systems in particular.
Lastly, this challenge of updating VMware Tools is not unique to VMware ESX Server. The same challenge exists with VMware Workstation, VMware Server and other products, and may be solved in the same fashion.
The latest version of ESX, Version 3.5 Update 3 has just been released to complement the release of VirtualCenter 2.5 Update 3 that was released a month ago. The new features and supported devices included in this release are listed below. One of the nicest new features is an experimental feature that enables you to recover VMDK files that were deleted. This feature is a new ESX Service Console command (not supported yet with ESXi) and is designed to recover VMDK files if they are deleted or if a VMFS volume is deleted or corrupted. (Almost like having a recycle bin for ESX!). Until now, if you accidentally deleted a VM and its disk file it was gone and almost impossible to recover.
According to the Compatibility Matrix, ESX 3.5 Update 3 will only work with VirtualCenter 2.5 Update 2 and Update 3; it will not work with earlier versions, so upgrade your VirtualCenter first if necessary. You can read the full release notes for this version here and download it here.
New features and supported devices:
• Increase in vCPU per Core Limit — The limit on vCPUs per core has been raised from 8 (or 11 for VDI workloads) to 20. This change only raises the supported limit but does not include any additional performance optimizations. Raising the limit allows users more flexibility to configure systems based on specific workloads and to get the most advantage from increasingly faster processors. The achievable number of vCPUs per core will depend on the workload and specifics of the hardware. It is expected that most deployments will remain within the previous range of 8-11 vCPUs per core. For more information, see VI3 Performance Best Practices and Benchmarking Guidelines.
• HP BL495c support — This release adds support for the HP Blade Server BL495c with all Virtual Connect and IO Options allowing 1 or 10Gb connection to the network (upstream) and 1Gb connections only to the servers (downstream).
• Newly Supported NICs — This release adds support for the following NICs:
• Broadcom 5716 1Gb, Broadcom 57710 10Gb Adapters, Broadcom 57711 10Gb Adapters at 1Gb speed only
Note: iSCSI/TOE hardware offloads available with these adapters are not supported by VMware with ESX 3.5.
• Newly Supported SATA Controllers— This release adds support for the following SATA controllers:
• Broadcom HT1000 (supported in native SATA mode only with SATA hard drives and Solid State Disk devices)
• Intel ICH-7 (supported in IDE/ATA mode only with SATA CD/DVD drives)
Note: Storing VMFS data stores on drives connected to these controllers is not supported
• Newly Supported Guest Operating Systems — Support for the following guest operating systems has been added by VMware during the ESX 3.5 Update 3 release cycle:
• Solaris 10 U5
• Ubuntu 8.04.1
• RHEL 4.7
• Internal SAS networked storage controllers — This release adds experimental support for Intel Modular Server MFSYS25 SAS Storage Control Modules (SCMs). For known issues with this platform and workarounds, see SAS Link and Port Failovers with the Intel Modular Server Running Update 3 and Later Versions of ESX 3.5 and ESXi 3.5 (KB 1007394).
• Interrupt Coalescing (IC) for Qlogic 4Gb FC HBAs — Introduced in this release, the feature reduces CPU utilization (and CPU cost per IO) and improves throughput of IO intensive workloads by generating a single interrupt for a burst of Fibre Channel frames, when received in a short period of time, rather than interrupting the CPU each time a frame is received. The feature is enabled by default.
• Experimental Support for the VMDK Recovery Tool — This release adds support for the VMDK Recovery tool, a script intended to help customers to recover VMFS/vmdk data stores from accidental deletion of VMFS/vmdk data store or physical disk corruption. For more information, see VMDK Recovery Tool (ESX 3.5 Update 3) ( KB 1007243).
• Small Footprint CIM Broker — Updated SFCB to version 1.3.0
• IBM SAN Volume Controller — SVC is now supported with Fixed Multipathing Policy as well as MRU Multipathing Policy.
Understanding virtual machine networking intricacies can be difficult. You might wonder how network traffic routes between two virtual machines (VMs) that are both located on the same host server: does the traffic go out onto the network at all? The answer implies that by assigning certain VMs to the same host, vSwitch and port group you can increase network speed and reduce latency, but you'll need to understand how traffic routes between VMs on ESX hosts first.
A vSwitch on an ESX host is basically software that is contained in the memory of the host servers that connect virtual machines (VMs) with physical NICs. Here are a few scenarios that cover how network traffic is routed in different situations between two VMs on the same host server:
Different vSwitches, same port group and VLAN – VM1 is connected to vSwitch1 and VM2 is connected to vSwitch2. In this example the VMs are plugged into separate vSwitches on the same host server. Network traffic between VM1 and VM2 goes from a physical NIC on vSwitch1 to a physical switch that it is connected to and then back to a physical NIC on vSwitch2 and then to VM2.
Same vSwitch, different port group and VLAN – VM1 is connected to vSwitch1, Port Group A. VM2 is connected to vSwitch1, Port Group B. In this example the VMs are plugged into the same vSwitch on the same host server. Network traffic between VM1 and VM2 goes from a physical NIC on vSwitch1 to a physical switch that it is connected to and then back to a physical NIC on vSwitch1 and then to VM2.
Same vSwitch, same port group and VLAN – VM1 is connected to vSwitch1, Port Group A and VM2 is connected to vSwitch1, Port Group A. In this example the VMs are plugged into the same vSwitch and the same port group on the same host server. Network traffic between VM1 and VM2 never leaves the host server and does not go to the physical NICs on the host server and thus never travels on the physical network.
Because network traffic between VMs on the same host, same vSwitch and same port group does not leave the host, it can be advantageous to configure VMs that have a lot of network traffic between them in this manner (for example, a Web server and an application server, or an application server and a database server). Doing this will result in increased network speed and reduced network latency between the VMs. If you use VMware Distributed Resource Scheduler (DRS), you might also consider creating an affinity rule to ensure that the VMs stay on the same host.
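The three scenarios above boil down to a simple rule, sketched here as a toy shell function (purely illustrative, no VMware APIs involved): traffic stays inside the host only when both VMs share the same vSwitch and the same port group.

```shell
# Toy model of the routing decision described above; purely illustrative.
# Prints "internal" when traffic never touches the physical network,
# "physical" when it must exit through a physical NIC and switch.
route_between() {
    vswitch_a=$1; portgroup_a=$2; vswitch_b=$3; portgroup_b=$4
    if [ "$vswitch_a" = "$vswitch_b" ] && [ "$portgroup_a" = "$portgroup_b" ]; then
        echo internal
    else
        echo physical
    fi
}

route_between vSwitch1 PGA vSwitch2 PGA   # different vSwitches: physical
route_between vSwitch1 PGA vSwitch1 PGB   # different port groups: physical
route_between vSwitch1 PGA vSwitch1 PGA   # same vSwitch and port group: internal
```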
Replacing or upgrading a SAN is no trivial task. There are a few tried-and-true steps to take when replacing a SAN which I’ll outline in this blog post, including a key step to the process that will ensure a successful switch.
I recently upgraded from an HP MSA 1000 to an IBM DS3400 because I wanted to improve performance and lower my overall energy costs. One of the reasons I decided to replace my old SAN is that it is much cheaper to run a single 2U device than the three devices of the old SAN. In addition, I dropped from 42 drives to 12 drives while gaining storage. At a minimum, my SAN power costs should drop to a third of the original. I have also gone from 2.5 TB to 3 TB of storage; not a huge increase in capacity, but an increase nonetheless.
I will know next month if my energy cost reductions have been realized and will report back then.
The steps for replacing a SAN are not all that tricky, but there was a single gotcha that could be avoided with careful planning. To replace a SAN, follow these steps:
1. Plug the new SAN into your existing fabric. Luckily I had a pair of unused fibre connections and GBICs available; otherwise this would have been another expense and a delay until the cables and GBICs arrived.
2. Find a system on which to install the management console. For the IBM DS3400, I chose my VirtualCenter and VMware Consolidated Backup (VCB) server as the management console for the SAN. There are two methods to manage the IBM DS3400: in-band, over the Fibre Channel fabric, or out-of-band, using Ethernet; even a VM would suffice, provided its networking can reach the SAN. Management software exists for both 64-bit Linux and Microsoft Windows.
3. Create the LUNs on the new SAN. This is a good chance to correct any problems you may have with the LUN configuration on the old SAN. I did a one-to-one mapping, except I slightly increased the size of the LUNs.
4. Present the LUNs to your VMware ESX host(s) and VCB server(s).
5. Rescan the storage adapters for new LUNs using the VMware Infrastructure Client (VI Client) for the first VMware ESX host. Once this is completed, you can then add as many Virtual Machine File Systems (VMFSs) as required.
6. Rescan the storage adapters for new LUNs and VMFSs using the VI Client on all the other ESX hosts.
7. Employ Storage VMotion via the VI Client to migrate VMs from one LUN to another. This works if you have the patience to move all the VMs one by one. If not, you can instead copy the VM files manually, but that requires powering off all the VMs and editing the VMX file of each migrated VM to change the location of its virtual disk files (there are scripts that will do this for you). Use of Storage VMotion does not require any VM downtime. Be sure to move all files from the LUNs in use.
8. For a LUN with an RDM (mine was a Linux file server), use Storage VMotion to move any VMDKs related to the VM, then map the new RDM to the VM. You will have to reboot the VM to complete this step. Next, create a new filesystem on the new RDM and mount it. Then copy all the files from the old RDM to the new RDM. I used the following command to copy everything from /files to /files2:
- rsync -ravlpog /files/ /files2 (the trailing slash on the source copies the directory's full contents, including hidden files that a * glob would miss)
9. Then I modified the mount point for /files within /etc/fstab to point at the correct new location. Finally, I powered off the VM, deleted the old RDM from the VM and powered it back on, picking up the new data.
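For reference, the /etc/fstab change amounts to pointing the /files mount point at the new RDM's device node. The device names below are hypothetical examples; yours will differ:

```
# old entry, backed by the old SAN's RDM (commented out)
# /dev/sdb1   /files   ext3   defaults   1 2
# new entry, backed by the new RDM
/dev/sdc1     /files   ext3   defaults   1 2
```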
Here is the gotcha. I missed it, but it will be extremely useful for you (and me) going forward: remove the old SAN's LUNs from each VMware ESX host. If you miss this step, then when you finally disconnect the old SAN the ESX hosts will go into a state of constantly attempting to fail over the old LUNs. This will spew massive failures into the log files, and if it happens there is no recourse but to reboot the VMware ESX hosts.
Now the SAN has been replaced. With the exception of dealing with any RDMs, it is possible to migrate to a new SAN without any downtime.
VMware ESXi may have a smaller footprint than VMware ESX, but the pro-security theory behind the skinny ESX version may be defunct given the lack of ability to create a Defense in Depth strategy around the hypervisor. As is, I suggest you consider ESXi a safe hypervisor only when behind a firewall.
VMware touts ESXi as being more secure by having less of an attack footprint, but it is missing the most important feature of modern operating systems: The ability to build a strategy to protect the system from those reaching it and users gaining access to those things to which they are not authorized, currently known in the IT world as Defense in Depth.
Defense in Depth is more than just the availability of a packet filtering firewall, a la VMware ESX’s iptables-based firewall. It is the ability to control when and from where users can log in, and how they can access or view information on the system as well as audit all actions within the system for later perusal or immediate notification of unauthorized access to data or the system. Defense in Depth often starts with the use of a directory service as a centralized management point for all users.
VMware ESXi v3.x is missing all of these capabilities. Directory services are not supported within the VMware ESXi management appliance, there is no ability to audit actions that take place while on the management appliance, there is no control of when or how a user can access the appliance, and most importantly there is no built in firewall.
All of this raises the question: does ESXi's smaller footprint really offer a more secure hypervisor? From the network-facing view, its attack surface is limited, but not as much as you would expect. What a user can do or access once on the system is also limited, but again not as much as you would expect.
Network daemons almost the same as ESX, minus default SSH
From the network perspective, all the normal daemons that are available for VMware ESX are also available for VMware ESXi: vmware-hostd daemon, cimserver, time daemon, and webAccess. What is missing by default is SSH access, which most ESXi users enable immediately. The ability to start most other services on the system is also missing. In other words it has nearly the same network daemons running by default that ESX does.
The major difference is that you can no longer log directly into the ESXi box without first degrading your security, in other words without enabling the dropbear SSH server. This does not need to happen and, according to VMware, should not.
Management of ESXi is performed either by direct access via the Remote Command Line Interface (RCLI), the VMware Infrastructure Client (VI Client) and VMware webAccess, or by going through VirtualCenter. Each of these uses SSL to encrypt and protect all traffic between the workstation and the ESXi host. These are the same tools you can use to manage ESX and, as such, they share all the same weaknesses and strengths. All administrative access via these tools is performed through the vpxuser account, which in turn runs many commands as the root user. This is no different from what ESX does. If you go through VirtualCenter, however, you can gain the benefits (and disadvantages) of using a directory service, but this is not the case when going direct to ESXi.
Possible split-brain authentication
The largest security difference, as discussed above, is that there is no Defense in Depth, and that once you break the shell by enabling SSH you run into possible split-brain authentication and authorization that did not exist before. This implies that unprivileged users can gain access to data they do not own and should not be able to access or even see.
Lastly, since ESXi has no Defense in Depth, its management appliance belongs behind a firewall of its own. This is a step backward in my opinion, and hopefully will be fixed in future releases!
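In the meantime, the firewall in front of the ESXi management interface can be as simple as a Linux gateway restricting forwarded traffic to the management ports. A rough sketch of the firewall configuration, assuming a hypothetical admin subnet of 10.10.10.0/24 and the default webAccess/VI Client ports (80, 443 and 902); adapt it to your own topology:

```
# Illustrative only: default-deny forwarding, then allow the admin
# subnet to reach the ESXi management interface, plus return traffic.
iptables -P FORWARD DROP
iptables -A FORWARD -s 10.10.10.0/24 -p tcp -m multiport --dports 80,443,902 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```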
Google and Microsoft are forming mega data centers with low energy costs and serious tax advantages by using renewable energy solutions. Each has negotiated extremely low energy costs directly with the energy providers and has created practically a zero-carbon footprint. The costs for a data center are not all about energy, although energy is a major expense. What would happen if/when a bank or the federal government bought into the cloud? Surely security in a cloud computing infrastructure would need to be top of the line, and thereby expensive.
The idea is that Microsoft and Google will get their energy at extremely low per-watt prices, and eventually, since they are using renewable energy sources, they may receive a few credits back from the energy providers as they sell off their excess. They could even get state and federal subsidies and tax breaks by using renewable resources.
But what does this really mean to the virtualization world? Who will actually use these mega data centers? I imagine part will be for Microsoft and Google themselves, but they plan on selling or renting space within their clouds for applications and services. They may sell quite a bit to the low-hanging fruit of startups and other SMBs who cannot afford all the modern equipment, but will they be able to sell to others?
What about security?
The big question in my mind is whether you can trust either Microsoft or Google from a security perspective. Cloud computing security is still up in the air. Would my bank use these new clouds? Would the federal government?
If my bank uses it, I imagine there will be extremely tight security. How much will I have to pay for this level of security? Would it increase the cost so much that the benefits of low-cost energy go by the wayside? Security implementations cost, sometimes heavily.
Google and Microsoft may be able to reduce cloud computing energy costs, but at what other cost? If I gain part of a host due to virtualization, how much of my data is commingled on the storage devices and network paths? If a badly configured host is in use, can another company see and gain access to my data? How is this protected within the cloud? In the current virtualization world, this requires dedicated resources, which are more expensive than shared resources.
Simply put, we can not ignore security going forward. What is the security and privacy guarantee — not to mention the real cost of use once these concerns have been addressed?
VMware Server 2.0 has a fundamentally different interface compared to the thick client that was used in the 1.0 product. Along with the different interface comes a logging mechanism for access to the web interface, VI Web Access. This is one of the key new display features of VMware Server 2.0. For more information on the look and feel of the new VI Web Access, be sure to check out this SearchVMware.com tip.
Simply put, VI Web Access is a purpose-built web engine for access to VMware Server configuration and console access. For both Windows and Linux hosts, one log file is kept for the web interface, in the following default locations:
Windows hosts: C:\Documents and Settings\All Users\Application Data\VMware\tomcat-logs
Linux hosts: /var/log/vmware/WebAccess
For my Linux installation, there is one log file, called proxy.log. The log file is relatively easy to read; however, I recommend an enhanced text viewer such as NoteTab Light, as there are many lines per server-side event in this log. The line below shows an authentication failure to VI Web Access:
[2008-10-23 00:13:34,523,http-8308-2,RequestProcessor] Error processing action request /action/login : [InvalidLogin] Login failed due to a bad username or password.
By default, the VI Web Access log only shows authentication issues, session timeouts and other errors that occur. This log is separate from the other VMware Server logs, as they are generally separated by process. More information on the VMware Server product can be found in the online user's guide.
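Because entries like the one above are plain text, standard tools are enough to pull failed logins out of proxy.log. A hypothetical sketch (the sample entries below mirror the format shown above; point the grep at the real log path for your host):

```shell
# Hypothetical sketch: count failed-login events in a VI Web Access
# proxy.log using standard tools. A sample log stands in for the real
# file at /var/log/vmware/WebAccess/proxy.log.
log=$(mktemp)
cat > "$log" <<'EOF'
[2008-10-23 00:13:34,523,http-8308-2,RequestProcessor] Error processing action request /action/login : [InvalidLogin] Login failed due to a bad username or password.
[2008-10-23 00:15:02,101,http-8308-2,RequestProcessor] Session timed out.
EOF

# Failed logins are flagged with the [InvalidLogin] marker
failures=$(grep -c 'InvalidLogin' "$log")
echo "failed logins: $failures"
```

Fed into a cron job, a check like this gives you a crude brute-force alarm on the VI Web Access interface.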