I have been evaluating the VMware Server 2.0 beta on Windows and CentOS Linux systems since its release in November 2007. The second beta of Server 2.0 was released in March, and both versions support adding remote datastores for guest virtual machines. Adding datastores can be a great way to store your lower-priority virtual machines without consuming large amounts of storage on your VMware Server system.
I have used the 1.0x versions of VMware Server for years on testing and development systems. With version 2.0, I have started to use remote datastores for storing test systems. Datastores are storage locations in VMware Server 2.0. You can add NFS datastores to Linux-based installations of VMware Server; Windows installations use Microsoft Server Message Block (SMB) shares instead.
From a performance and configuration perspective, a non-local datastore is not ideal for live production systems. But for situations like mine, where a large number of infrequently used virtual machines run from VMware Server, remote, slower-speed disk suits the need very well. This storage configuration may also be a good fit for archiving certain virtual machines, such as the build system for a project that has gone into support mode.
In my example, I used a network-attached storage (NAS) device, which can provide some of the cheapest storage available. NAS default configurations usually offer Windows file-and-print sharing as a native file-serving option. After the initial installation, adding datastores from the “add datastore” command in the web interface is straightforward.
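As a sketch of the storage-side prerequisite for a Linux-based installation: the NAS (or any Linux box acting as one) has to export a share over NFS before the web interface can attach it as a datastore. The export path, hostname and options below are hypothetical and will vary with your particular NAS:

```shell
# Hypothetical NFS export for a VMware Server 2.0 datastore.
# Grant the VMware host read/write access; no_root_squash lets the
# host's root user own the virtual disk files it creates there.
echo '/exports/vmstore  vmhost.example.com(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra   # re-read /etc/exports and publish the export
```

On the Windows side, the equivalent step is simply sharing a folder over SMB. With the export in place, the “add datastore” command only needs the server name and the exported path.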
Once this is done, a virtual machine can be added to the local inventory. When virtual machines run from a remote datastore, the disk I/O is executed on the remote system, while the CPU, network and memory functions remain on the local host. There is definitely a performance hit from this configuration, but for archival purposes it frees up your more capable storage for your most frequently used virtual machines.
Administrators should also note an important caveat before pointing a VMware Server 2.0 beta system at a remote datastore holding all of your existing VMware Server 1.0x guests: the virtual machines must be upgraded before they are visible in 2.0 from a remote datastore. Once upgraded to virtual machine version 6, the guests can no longer be used by 1.0x systems unless those hosts are themselves upgraded to VMware Server 2.0.
The following section illustrates how to add a virtual machine to the inventory after upgrading:
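The web interface handles this with its “add to inventory” command; as a rough scripted equivalent, the vmrun utility that ships with VMware Server 2.0 can register a .vmx file against the host’s web service. The host URL, credentials, datastore name and guest path here are all hypothetical:

```shell
# Register an upgraded guest with the VMware Server 2.0 inventory.
# "[nas-store]" is the hypothetical datastore name assigned in the web UI;
# 8333 is the default port for the Server 2.0 web service.
vmrun -T server -h https://localhost:8333/sdk -u root -p 'secret' \
    register "[nas-store] centos-test/centos-test.vmx"
```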
This can save time by possibly eliminating the need to copy large virtual machine files back and forth over your network.
When attending a vendor-sponsored conference, you expect that most – or all – of the sessions will focus on that vendor’s products along with partner products. So, I wasn’t all that surprised to find that to be the case at VMware Inc.’s Virtualization Forum 2008 in New York, NY on May 8.
But I felt a tad cheated when I left.
You see, the agenda I received from VMware prior to the event looked great. There would be analysis from Gartner’s Distinguished Analyst, Thomas Bittman, about virtualization technology adoption and trends. Also, Quest Diagnostics was scheduled to talk about their specific pain points and challenges and how virtualization saved the day. Good, and good.
Other sessions included Introduction to Server Virtualization, Datacenter Management and Automation, Business Continuity, Application Virtualization and Remote and Branch Office Management.
The agenda I received in my inbox before the event also included three afternoon sessions scheduled from 2:15 to 3:15, each billed as a Platinum Sponsor/Customer Story; the latter was of most interest to me.
So, I got up at 5 a.m. on Forum day and drove from Rhode Island to New York City – a lovely four-and-a-half-hour trek with traffic.
I registered, got my Conference Guide, ID badge, marketing package full of VMware ads, and headed to hear Bittman talk. It was interesting enough. Then Quest Diagnostics did a quick talk about their VMware implementation. Cool.
Then lunch. Which was free. Which is awesome.
After that, I hit the Datacenter Automation and Management session, where John Suit, CTO and co-founder of Fortisphere, discussed how their product, Fortisphere Virtual Foresight, helped Duke University Hospital set policies for virtual machines. Then John Brock, senior product marketing manager at VMware, stepped up to bat to talk about all the wonderful things Virtual Infrastructure can do for IT.
Now for the 2:15 session – Platinum Sponsor/Customer Story. Much to my chagrin, the Customer Story portion of those sessions was not included in the eight-page guide. All I saw listed were Platinum Sponsor EMC Corp., Platinum Sponsor HP, Platinum Sponsor Dell/EqualLogic and Platinum Sponsor NetApp. Brow furrowed, I flipped through the pages looking for the customer stories. Were the times changed? Confused, I asked the nice VMware marketing representative if VMware had mistakenly omitted the customer stories from the guide.
“No, there won’t be any more customer stories this afternoon,” she said.
“Oh, but the schedule I received last month had customer stories as part of the afternoon sessions,” I said.
“The sponsors will probably talk about some customer stories,” she said.
What the *%#@?
But, I was not too surprised by this program change. It happened at the last VMware event I attended as well.
The VMware Virtualization Seminar Series in Providence, RI in February was the same way – customer story on the schedule, but not at the podium.
I complain about this not because of the horrid commute I made expecting to hear from more than one real user, but because I can safely assume that, like me, data center managers want to hear about virtualization implementations from their peers – the folks who manage virtual environments in real data centers every day. You know, the people who don’t have the word “marketing” in front of their names. I’m sure that attendees would have liked perspective from the system engineers and architects who can give honest, detailed descriptions of pitfalls to avoid, the real costs and ways to deploy virtualization successfully.
I understand users also want to learn about products that can solve their IT issues, and I am sure that some of the information vendors provide is very useful. But, VMware, I beg of you – please include real users in your future seminars, and if you don’t, then don’t put them in your agendas.
VMware may be king of the virtualization mountain now. But it should beware of bridge-building competitors.
Chris Wolf, an analyst with the Burton Group, warned that Novell Inc. was similarly at the top of the world in the late 1980s with its NetWare network operating system, which filled a key gap in Microsoft’s products. Microsoft responded with Windows NT, a weaker alternative to NetWare but “good enough” and stronger over time, featuring a Gateway Services tool with just enough interoperability to make it easy to move data between the two systems.
But that “gateway” eventually became a floodgate, siphoning off Novell NetWare customers, who now had an easy way to migrate to Microsoft NT and a motive for doing so: Microsoft had a much larger package of software solutions while Novell’s NetWare was just a single point solution. Farewell, Novell NetWare.
Fast forward to 2008. VMware is the undisputed leader in virtualization, the hottest thing in the software market. And as part of its interoperability measures, Microsoft’s new System Center Virtual Machine Manager will have extenders to VMware. Microsoft will also launch its far more modest Hyper-V virtualization software at the giveaway price of $28 per server this summer. But, clearly, Microsoft will be working at furious speed to make it more competitive.
VMware, like Novell NetWare, is a point solution, and Microsoft, even more than in the 1980s, is a giant ecosystem with an overwhelming share of the global software market.
“This is a great strategy for Microsoft,” Wolf said. “It’s providing just enough interoperability [with VMware] to give some management with the goal of facilitating migration. And when users get comfortable with those tools, they will slowly migrate over to Microsoft.
“It’s exactly the same runbook as Microsoft ran against Novell,” Wolf said. “It’s pretty eerie.”
You may not have used the maps feature within the VMware Infrastructure Client, because maps of large environments can become difficult to decipher. Even so, the maps view has become an important way to visualize certain elements of the overall configuration of an ESX environment. One of the more useful mapping views for illustrating relationships is the virtual machine (VM) to datastore map. This shows how many virtual machines are contained in each datastore, as well as the virtual machines that have connections to multiple datastores.
To use the maps view to show the correlation between VMs and datastores, select the maps button from the main toolbar and then deselect all options except “VM to Datastore”. Then select the datastores, hosts, clusters or resource pools you wish to have represented in the map. It is best to construct your diagram based on the storage configuration; so, if your storage is organized per datacenter, diagram from the datacenter level down. Once you click the “Apply Relationships” button, a map is drawn of all the selected datastores. You can zoom out of the map by pressing the “-” key or by holding [Ctrl] and scrolling the mouse wheel. Below is a sample map of a datacenter and the VMs connected to each datastore:
In this example, there are three datastores without a VM assigned within the map. These are the local datastores on the ESX servers in this environment, so it can be quickly determined whether a VM is on local disk. Depending on the environment, locally stored VMs may be prohibited if you use VMotion or certain VMware DRS configurations. This visibility presents an opportunity to spot variance from your standards. For example, let’s say I build an environment of Windows Server 2003 VMs with 32 GB of storage assigned to each. The shared storage resources are provisioned in increments of 320 GB, so nine VMs fit comfortably in each logical unit number (which becomes the datastore through the Fibre Channel interface) while leaving room for snapshots or .ISO files. If I see a datastore with a very large number of VMs attached, I can reasonably assume that those VMs are very small. Or, where a VM is connected to multiple datastores, I can look into why that’s the case if it isn’t the norm. The most common example of a VM attached to multiple datastores is a CD-ROM device mapped to an .ISO image.
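The sizing logic above is simple enough to sanity-check with shell arithmetic; the 320 GB and 32 GB figures come from the example, and keeping one VM’s worth of headroom for snapshots and .ISO files is my own rule of thumb:

```shell
lun_gb=320                                # capacity of each LUN/datastore
vm_gb=32                                  # storage assigned per VM
max_vms=$(( lun_gb / vm_gb ))             # 10 VMs would fill the LUN exactly
planned_vms=$(( max_vms - 1 ))            # plan for 9, keeping one slot free
headroom_gb=$(( lun_gb - planned_vms * vm_gb ))
echo "fit=${planned_vms} headroom=${headroom_gb}GB"   # prints fit=9 headroom=32GB
```

A datastore in the map with far more than nine VMs attached is therefore an immediate flag that those guests deviate from the 32 GB standard.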
VMware, Inc. recently made two announcements surrounding its Virtual Desktop Infrastructure (VDI) product: a new certification program for thin client devices, and a suite of services to help implement and manage virtual desktops.
VDI is desktop virtualization software that replaces traditional PCs with virtual machines (VMs) deployed from and managed in the data center. This presents a number of potential benefits: all of the information on a desktop VM is protected from disaster and theft; thousands of VMs can be updated from the data center without touching actual desktops; and employees can log into their virtual machines remotely.
VMware is not alone in the desktop virtualization space, however. Several vendors offer desktop virtualization products, including Sun Microsystems, Inc., Citrix Systems, Inc. and Pano Logic, Inc.
VMware’s certification program is based on the company’s open standards. Virtual desktop users can expect a consistent experience when using VMware certified thin client devices.
After thin client devices have been certified, they will be listed on the VMware Certified Compatibility Guide. The devices listed in the Guide will have passed VMware’s testing criteria for interoperability and quality assurance.
VMware’s other announcement today is a set of new Professional Services offerings that provide best practices and guidance from virtualization experts. Here is a rundown of what these services include:
*Virtual Desktop Infrastructure Jumpstart: A VMware Certified Professional will train up to five staff in setting up VMware products, provide knowledge transfer and discuss best practices of deployment.
*Application Virtualization Jumpstart: VMware Professional Services offers training on running any version of any application on a single OS without conflict.
*Plan and Design for VMware Virtual Desktop Infrastructure and Application Virtualization: Begins with assessment and analysis of the customer’s objectives and existing infrastructure. VMware Professional Services then builds a blueprint for VDI and/or Application Virtualization deployment.
*Remote Office/Branch Office (ROBO) Services Acceleration Kit: Helps simplify the process of optimizing customers’ remote and branch offices using VDI.
The list price for Jumpstarts in North America ranges from $6,000 to $13,500. For education classes, it’s $2,995 for the four-day class, according to a VMware spokesperson.
Bogomil Balkansky, Senior Product Marketing Manager at VMware, gave the keynote address at the VMware virtualization forum a couple of weeks ago in New York City. His presentation focused on how virtualization must evolve into a new computing platform now that it is considered a mainstream technology. As a result of the centralized management capabilities that VMware affords, data centers can build entire automated, virtual architectures that outperform traditional data center infrastructures.
The most striking thing about his presentation to me, however, was his slide referencing the Redmond Magazine 2008 Editor’s Choice award. Redmond gave top honors to VMware ESX for most reliable platform. Taking second was the IBM mainframe. In fact, Redmond said that “the least stable part of ESX is usually the administrator. The code is virtually bomb-proof.”
I shared this slide with my friend and colleague Mark Fontecchio who covers the mainframe, among other things, on SearchDataCenter.com. He shared this information on the Server Specs blog and here is what he had to say about the award:
Please keep in mind that this is a magazine focused on the Microsoft IT community, not the IT community as a whole. So for the mainframe, which doesn’t run Windows (yet), to even make it on this list is something. I’m pretty sure the mainframe was the only non-Microsoft related product that placed in any category.
That’s fair. But where it gets interesting is in the user comments. One user had this to say:
Our shop has had more crashes and outages with VMware than ever on the mainframes. We’ve been running multiple mainframe LPARs for decades and have NEVER had an outage due to a failure of PR/SM. By the way, we’ve never experienced an operating system crash or hardware failure either.
He goes on to discuss how “toy” systems continually have hardware and OS failures, and how their repair techs practically work at his facility.
Another reader commented that he has “not seen a VMware server stay stable & secure for more then a week. IBM VM can do this without breaking a sweat & run 100 times more servers than any WINTEL machine.” To this, Balkansky would respond with one of his slides showing 1000 days of uptime for one VMware VM.
So, who has the last laugh? IBM recently reported strong sales of their System z mainframe line, largely due to the recent enterprise class z10. But VMware also reported strong sales in the first quarter of 2008. The truth is that there really isn’t a showdown between VMware and the mainframe . . . yet.
For the time being, the rift is about hardware. Mainframes are more powerful and more reliable than x86 boxes; that’s just the truth of the matter. But in terms of software environment, we may have an interesting battle brewing.
Will VMware eventually lose its market leadership position among hypervisor vendors? Several articles I have read recently speculate that, with offerings from Microsoft, Citrix and a handful of others, VMware’s days at the top are numbered. Many reason that competition will ultimately force VMware to lower prices, because so many options mean the hypervisor will no longer be specialized technology, but instead a commoditized offering companies can get from anyone and everyone. Another argument is that VMware’s current pricing is unattractive to small and medium-sized businesses (SMBs). The consensus among analysts is that the virtualization opportunity is still relatively untapped for SMBs, and that the competition has the advantage on price.
Maybe it’s because I just spent a week at the VMware Partner Exchange in San Diego and I am full of the VMware “Kool-Aid”, but it appears to me that VMware has a pretty good strategy, focus and direction for staying ahead of the competition. While other vendors are still perfecting and marketing their hypervisors, VMware is talking about automation and management of the virtual data center with products like Site Recovery Manager, Lab Manager, Stage Manager and Lifecycle Manager. Secondly, VMware is “winding up” its partners by providing incentives in the form of margins, programs and intellectual collateral. You did not have to attend the Partner Exchange to realize this. VMware’s recent acquisitions, new product betas and announcements, and public communications have shown this for some time now.
If hypervisor competition is really just about the hypervisor – or, more specifically, about consolidating multiple physical servers onto a single virtualization host – then I have to agree that VMware has some legitimate challengers. VMware ESXi (previously ESX 3i) and the free VMware Server, however, continue to be well positioned to compete for the “I just want to squeeze as many guests as possible on a host” business. Let’s face it: VMware established this market several years ago with the ESX 2.x product, and this is where most of the competition is entering today.
As far as the untapped market, if the hypervisor is truly all that SMBs want or can afford then VMware has it covered. Dell appears to have set the market pricing for the embedded hypervisor offerings just last week, and surprise, ESXi is the cheapest option! For $99 extra you can order new hardware pre-installed with the VMware hypervisor. Assuming all hardware manufacturers follow with similar competitive pricing, don’t be surprised if ESXi quickly becomes the most frequently used virtualization host in the data center – SMB or Enterprise.
During a session on desktop virtualization at the VMware virtualization forum in New York last week, it became clear that many hurdles still hinder adoption of what Gartner called, back in 2004, the next disruptive PC technology.
How much does desktop virtualization really save?
Kicking off the session was NEC departmental servers director Ken Hertzler, who went through the usual sponsor sales pitch mixed with some moderately interesting statistics in hopes of making the case for desktop virtualization. When businesses hand out laptops to employees, they are actually making a much bigger investment than the few hundred dollars for the machine: NEC puts management costs somewhere around $4,500 over a three-year period, and those costs are rising. Of course, Hertzler identified desktop virtualization as the key to reducing those costs.
Mark A. Margevicius, research director at Stamford, Conn.-based Gartner, agreed that businesses can save by deploying virtual desktops. Quantifying exactly how much, however, can be sticky. “On average, our customers save two to 12% from a TCO perspective,” he said. Measuring total cost of ownership makes it difficult to pin down savings. For example, how do you measure how much you save in PC uptime? Or, to put it another way, how often do non-virtual PCs go down?
But it isn’t just about the cost of maintaining remote workers. The security risks are often enormous. Hertzler cited a local banking firm that estimates the cost of a lost or stolen laptop in the neighborhood of $50,000 when it’s all said and done. Although Margevicius wasn’t surprised by this number, he said that every organization measures these expenses differently. “You could argue that losing the laptop you gave to the janitor to play solitaire with would result in a high cost,” he said.
For Margevicius, it’s the distinction between capital costs and TCO that people need to pay attention to. Most customers get hung up on capital costs, i.e., the investment it takes to get things going. What many people are realizing, however, is that desktop virtualization is a shared resource, and many of the savings come in forms like higher levels of redundancy.
Hardware requirements coming to the forefront
After throwing out some scary numbers, Hertzler ended with a demonstration of NEC’s desktop virtualization server designed specifically for use in VMware virtual desktop environments. The crowning feature of the server is its fault tolerance and automatic failover capability. In fact, Hertzler had been running his entire presentation on a virtual machine hosted on one of these very servers.
He asked a volunteer to unplug the power and, to the surprise of absolutely no one, the server automatically failed over to the backup with no interruption to the single VM, which was running an instance of PowerPoint and what looked like Windows Explorer. Sweetening the deal was the nearly unnoticeable 30 seconds before he could open his applications back up. Very impressive.
But it wasn’t the server technology, per se, that Margevicius was concerned about; it was storage. “Most people at minimum take for granted the amount of storage a PC has. People expect 80, 100, 250 gigs of local storage as part of the platform,” he said. It goes back to the question of capital costs: “How much storage do you allocate in your data center [for local storage on virtual PCs]?” This, again, is a question of how much capital you want to invest in your virtual infrastructure.
Are we ready for desktop virtualization?
Gartner still sees maturity as a major hurdle. Although Margevicius acknowledged the progress made by VMware, Citrix and Microsoft, components such as management software and brokering technology still need to be addressed and improved.
The question of maturity is also raised in terms of scalability. Most deployments are still in the area of 100 or so virtual desktops, and many of those are still in pilot or testing. Although he couldn’t release names, Margevicius said he knows of a handful of people who’ve “gotten religion” over virtual infrastructure and who plan to move into the 500-1,000 virtual desktop range, with the end goal of an entirely virtual desktop architecture.
Palo Alto-based VMware, Inc. made three product announcements today: its disaster recovery software, VMware Site Recovery Manager, will be available for orders next week; VMware Stage Manager will begin shipping May 19; and both products will also be available as part of two new management and automation software bundles from VMware starting May 19.
Site Recovery Manager
VMware Site Recovery Manager, which provides integrated management of disaster recovery (DR) plans with VirtualCenter, offers automated DR plan testing, failover and recovery.
Jon Bock, senior product marketing manager for VMware, said that Site Recovery Manager should allow users to implement DR plans where they could not do so before.
“Because of cost and resources required for DR, disaster recovery has only been done for mission critical applications, but virtualized workloads can be protected with minimal cost and effort,” Bock said.
More and more companies have started virtualizing mission critical workloads, as VMware has been quick to point out.
The company presented a number of case studies showing companies like Milwaukee-based Johnson Controls Inc., which uses virtual machines (VMs) for “almost everything” including its Microsoft SQL database, EMC Corp.’s Documentum, and Active Directory, with success.
Companies virtualizing those types of apps should use DR capabilities from software like VMware’s to protect those applications during disaster.
Midvale, Utah-based Burton Group analyst and virtualization expert Chris Wolf explained in a tip recently that Site Recovery Manager makes disaster recovery planning and execution simple.
“With Site Recovery Manager, you can automate your disaster recovery plan with software, initiate that plan with a mouse click, and pre-program the sequence in which VMs are brought online at a disaster recovery site,” Wolf wrote. “During the course of this year, I expect other vendors to offer similar technologies as well.”
Stage Manager

VMware Stage Manager automates the process of moving application environments through release stages (integration, testing, staging and user acceptance) before they are released into production.
Stage Manager, which was highly acclaimed during its beta phase, is also managed through VMware Infrastructure 3. It aims to reduce time spent configuring hardware and to prevent the virtual machine sprawl that commonly occurs when virtual machines are scattered across the data center for staging, Bock said.
New management and automation bundles
VMware has designed two new software bundles that package the management and automation products now available. The VMware IT Service Delivery Bundle includes all of VMware’s IT lifecycle automation products – VMware Lifecycle Manager, VMware Lab Manager and VMware Stage Manager – and is priced at $2,995 per two processors.
The VMware Management and Automation Bundle includes all of the above plus the disaster recovery product VMware Site Recovery Manager, and is priced at $3,995 per two processors.
Both software bundles will be available from VMware distributors, resellers and OEMs beginning May 19.
Yesterday I was a speaker at “Virtualization: Getting from Pilot to Production.” During my second session I claimed that you could VMotion a virtual machine (VM) that uses a Raw Device Mapping (RDM) to access a raw logical unit number (LUN). Two audience members challenged this claim, saying that they had previously run into a scenario where it was not possible to VMotion a VM that makes use of RDMs. I was sure I was right, and they were positive they were correct. It turns out we were *all* spot on. You can VMotion a VM that uses RDM as long as the RDM is configured in virtual compatibility mode. When you map a SAN LUN using a RDM, you choose between two modes of operation: physical and virtual. Per VMware documentation:
Virtual mode for an RDM specifies full virtualization of the mapped device. It appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden. Virtual mode allows customers using raw disks to realize the benefits of VMFS such as advanced file locking for data protection and snapshots for streamlining development processes. Virtual mode is also more portable across storage hardware than physical mode, presenting the same behavior as a virtual disk file.
Physical mode for the RDM specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software. In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized, so that the VMkernel can isolate the LUN for the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode is useful to run SAN management agents or other SCSI target based software in the virtual machine. Physical mode also allows virtual-to-physical clustering for cost-effective high availability.
Additionally, you can also VMotion a VM with an RDM that uses N_Port ID Virtualization (NPIV), as long as the RDM is in virtual compatibility mode.
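For reference, the compatibility mode is chosen when the mapping file is created. From the ESX service console, vmkfstools uses -r for virtual mode and -z for physical mode; the device path and datastore names below are hypothetical and will differ in your environment:

```shell
# Virtual compatibility mode (-r): the mode that keeps VMotion available,
# and allows VMFS features such as snapshots on the mapped LUN.
vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/dbvm/dbvm-rdm.vmdk

# Physical compatibility mode (-z): passes SCSI commands through to the
# device, for SAN management agents and similar in-guest software.
vmkfstools -z /vmfs/devices/disks/vmhba1:0:3:0 \
    /vmfs/volumes/datastore1/dbvm/dbvm-rdmp.vmdk
```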
So there you have it. The audience members were right. My memory is not as shot as I thought it was, and everyone is happy.