Looking for an alternative to VMware ESX, Xen, or Microsoft Hyper-V? SWsoft, soon to be renamed Parallels, announced the public beta of its Parallels Server hypervisor today, just a few weeks after the hypervisor entered private beta.
In addition, Apple and Parallels have worked out a deal to allow Parallels Server to run on the new Apple Xserve and Mac Pro hardware, which use the latest generation of Intel chips featuring Intel Virtualization Technology for Directed I/O (Intel VT-d). Parallels has promised experimental support for VT-d systems in the beta and full support when Parallels Server becomes generally available.
This announcement makes Parallels Server the first bare-metal hypervisor to run on Apple hardware. Apple shops will be able to run Mac OS X Leopard as well as Windows, Sun Solaris and Linux virtual machines on their high-end Apple hardware. Parallels Server is also notable in that it can be installed either on bare metal à la VMware ESX Server or as a “lightweight hypervisor” running on top of a host operating system. At installation, users can choose how they want to deploy Parallels Server.
Last fall, Apple paved the way for this announcement when it altered its end-user license agreement (EULA) to enable the virtualization of Mac OS X. Of course, Parallels Server also runs on non-Apple x86 hardware, although Apple prohibits Mac OS X from running there.
According to Parallels, Parallels Server beta includes the following features:
- support for more than 50 different varieties of x86 and x64 guest operating systems;
- remote control of the virtual machines via the Parallels Management Console;
- support for up to 64 GB of RAM on the host computer;
- two-way symmetric multiprocessing (SMP) support in virtual machines, with four-way SMP planned for the final version;
- multi-user access to the same virtual machine;
- support for ACPI (Advanced Configuration and Power Interface) in virtual machines;
- open, fully scriptable APIs for customized management; and
- full support for Intel VT-x and experimental support for Intel VT-d.
For more information or to participate in the beta, visit the Parallels Web site.
Of late we’ve written a lot about VMware pricing (see VMware pricing draws large enterprises’ ire, and VMware costs inspire Virtual Iron purchase), which in turn prompted a conversation with VMware Certified Professional (VCP) and former customer Michael Tharp, who is now a server virtualization practice lead at Mainland Information Systems, a VMware Authorized Consultant (VAC) in Calgary, Alta.
Tharp has an interesting perspective on VMware: Given the value that most firms derive from it, VMware is “fairly priced, or perhaps even a bit underpriced.”
Back in the day when Tharp worked in IT operations, he was stunned by the low cost of VMware ESX Server. “I would have paid twice as much for VMware without blinking an eye,” he said.
Today, as a VAC, “The worst ROI I’ve ever seen was for a customer with just 14 servers that needed to buy a new SAN [storage area network]. They saw ROI in less than a year and saved $180,000 over three years,” he said. “That’s still pretty compelling in my book.” Shops with more servers and a pre-existing SAN tend to see dramatically better numbers.
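The kind of ROI arithmetic Tharp describes can be sketched as a toy calculation. All of the inputs below (the 7:1 consolidation ratio, the per-server run cost, the project cost) are illustrative assumptions, not figures from Tharp's engagements; only the 14-server starting point comes from his example.

```python
# Toy server-consolidation ROI sketch. All figures are illustrative
# assumptions, not numbers from any actual customer engagement.

def consolidation_savings(physical_servers, consolidation_ratio,
                          cost_per_server_per_year, project_cost):
    """Return (hosts_needed, three_year_net_savings) for a simple
    consolidation scenario."""
    # Hosts needed after consolidation, rounded up.
    hosts = -(-physical_servers // consolidation_ratio)
    retired = physical_servers - hosts
    # Annual run cost avoided by retiring servers (power, cooling,
    # maintenance), minus the one-time virtualization project spend.
    three_year_gross = retired * cost_per_server_per_year * 3
    return hosts, three_year_gross - project_cost

# Assumed: 14 servers consolidated 7:1, $5,000 per server per year in
# run costs, $60,000 for licenses, services and a new SAN.
# Gross savings: 12 retired servers * $5,000 * 3 years = $180,000.
hosts, net = consolidation_savings(14, 7, 5000, 60000)
print(hosts, net)  # 2 hosts, $120,000 net over three years
```

Even with a SAN purchase folded in, the retired servers' avoided run costs dominate within a few years, which is why the payback periods Tharp cites are so short.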
But setting its pricing so low was "a mistake" on VMware's part, Tharp argued. "It gave them no wiggle room," a situation that manifests today as a "reluctance to negotiate any kind of discount."
This intransigence may rub some potential customers the wrong way, but for the time being, Tharp sees no valid alternatives. When it comes to VMware competitors like Citrix Systems' XenServer or Virtual Iron, "I don't see the value and don't think I could find enough customers to make it worth the effort," he said.
Systems admin Michael Gildersleeve wishes for 100% uptime and wonders whether virtualization will bring him closer to or further from that goal. It seems to him that virtualization options only cover one server at a time. "What if I need to do an OS update or patch, or what if some critical hardware fails?" he asks. In that case, he feels a bit more comfortable with a cluster than with virtualization.
Gildersleeve is evaluating high-availability options for virtual machines. VMware’s High Availability (VMware HA) is on his list, but he’s not sure whether that product will work well with his legacy software. He’s also not sure whether HA is as mature and robust as other products on the market.
I’m answering his call for more information. I hope that you will too, either by commenting on this post or emailing me at email@example.com.
Gildersleeve works for a company that has a Progress database running on a Unix server. Hundreds of Windows clients and Web applications are attached to that database and server through Progress Brokers via service file ports. “I need to provide 365 by 24 by 7 uptime,” Gildersleeve said. “With our new Web business, East and West Coast facilities, and vendors managing our stock and replenishment, we need to be available all of the time.”
He wants to run his database across at least two servers, in a setup akin to Oracle Real Application Clusters. He continued:
This would allow me to upgrade the OS, reboot a server or take a server down for maintenance without affecting the database or the users. So far I have only found solutions that will give me a two- to five-minute downtime between switching from one server to another.
Yes, Gildersleeve has looked a little at server virtualization. He’s evaluating server virtualization options and VMware HA to see whether he can reduce the downtime to nil.
What I have seen so far is that if I upgrade my Progress app to v10 (Progress OpenEdge) and then move to two Integrity servers running High Availability, then if one server fails or we need to do maintenance on a server, we can manually switch to the second server. The problem with this is that my users will feel the switch, because I will need to bring one server down. They will need to log out and back in to the app, or whatever needs to be done to bring the standby server into production mode.
Gildersleeve is willing to evaluate Sun Microsystems options, if they are truly viable for running Progress. Microsoft operating systems are out of the question.
In his evaluations, Gildersleeve has come up with a lot of questions, and he's looking for advice from HA experts. Can you provide some advice and share your experiences by commenting on this post or emailing me at firstname.lastname@example.org?
If you haven’t seen Mike DiPetrillo’s latest blog post, “VMware Patch Tuesday,” it’s definitely worth a few minutes of your time. Mike’s post contrasts patch management on the ESX hypervisor with that of competing platforms. I think the picture DiPetrillo paints is much darker than reality (at least for Windows hosts), since a given Windows Server 2003 host will not require every available patch (many are service-specific) and not all updates require a reboot. Patch reboot requirements will diminish further in Windows Server 2008 thanks to hot-patching support.
That being said, Mike’s latest post is about much more than VMware’s patch management strategy. Instead, consider it the start of the VMware Offensive. In 2007, VMware for the most part smiled and waved at its competition. That’s not going to be the case in 2008. Citrix, Microsoft, Novell, SWsoft, Sun, Oracle, and Virtual Iron all have plans to chip away at VMware’s market share, and rather than ignoring its competitors, I expect VMware to be much more aggressive in highlighting what makes its approach to virtualization different from the competition.
The virtualization world was very excited about the release of VMware ESX 3.5 and VirtualCenter 2.5 last week. But should everyone jump onto the new platforms right away? I say no. To be fair, I have already performed a limited set of upgrades this week, and more are planned over the next few weeks. Yes, there was a beta process. Yes, VMware generally publishes good software. Yes, I know these are not Microsoft products. But here is why I say no to jumping onto an upgrade immediately for virtualization products:
By its nature, virtualization has a scope far beyond a single system. With a single ESX server hosting upwards of 30 virtual machines, any issue is amplified. So, as with anything critical, a test environment is a must. The test environment may be a small number of ESX servers hosting noncritical virtual machines, so you can accept any risks that arise during your upgrade.
Cover Your Bases
Be sure you are able to execute all scenarios with great confidence before proceeding with the upgrades. One example I will deal with soon: I have a large number of critical virtual machines hosted on ESX 3.0.2. If I take one server into maintenance mode and upgrade it to ESX 3.5, can I migrate virtual machines from the 3.0.2 systems to the 3.5 systems online without issue? I have already upgraded VirtualCenter to 2.5, which was a good starting point for my 3.0.2-to-3.5 upgrades. VMware has published release notes listing known issues with ESX 3.5 and VirtualCenter 2.5 that are a good starting point for planning your migration path through the ESX and VirtualCenter versions.
Upgrade or New Install? You Decide
ESX is released as a full/new install (a CD ISO) or as an upgrade (a tar file). I will personally go for the new install rather than the upgrade. This is because I find the ESX install quite straightforward, and an ESX host can be rebuilt in very little time. With the rebuild process so quick, and with most management and configuration elements handled from VirtualCenter, ESX demands unusually little build time.
Old School Wait and See
Many people offer the old adage “Wait six months before upgrading,” or some other variable time frame, when core updates or service packs become available. The idea is to let other people “work out the bugs” in the software before you have to deal with them. There is little basis in virtualization for this logic, but many people have adopted it as an update policy. ESX 3.5 also brings new core functionality, such as Storage VMotion, that I am very excited about. Ultimately this is your call, but the best advice is to get informed about the product releases and their known issues instead of starting a blind installation.
Despite the impending holiday and our stubbornly long shopping list, SearchServerVirtualization.com and SearchVMware.com keep pumping out the virtualization content. Highlights of the week include the following:
- An extremely well-read article on how VMware pricing may be alienating large enterprise customers. We’d love to hear from you if you’ve had similar experiences, or if, on the contrary, you’re A-OK with how VMware prices its products and negotiates its enterprise license agreements.
- More about storage — specifically, some thoughts on sizing logical unit numbers (LUNs) for virtual machines; and an Ask the Expert question on who supports iSCSI best: VMware, Citrix/XenSource or Microsoft?
- A beta tester’s guide to deploying Microsoft’s new Hyper-V; as expert Anil Desai finds, “the system requirements for Hyper-V are far from ordinary,” requiring the latest 64-bit, virtualization-enabled chips. That may keep all but the most advanced test labs from taking this baby out for a spin.
- The benefits of virtual desktop infrastructure (VDI), by Barb Goldworm, who describes one IT shop that virtualized 800 desktops onto 14 IBM BladeCenter machines accessed by thin clients, cutting power consumption from 80,000 watts to 17,000 watts, a savings of almost 80%!
- Scott Lowe answering what inquiring minds really want to know: What exactly is a connection broker, and what does it do? You read it here first.
- Last but certainly not least, expert Craig Newell discusses how to configure Ethernet network interface cards on your ESX host, whether it’s a tower, rack or blade server. Word to the wise: A VMware host needs more Ethernet ports than your average server.
One last thing: A story on SearchDataCenter.com finds that, despite growing industry concern about saving power, no one actually shuts down servers over the holidays. Or do they? If you’re planning on using VMware’s new Distributed Power Management (DPM) next week and shutting down some systems, let us know.
I recently received a press release from London-based TechNavio, the creator of a Web-based information and research tool, that outlines the top five virtualization trends. Here they are, along with my own thoughts on these trends:
1. Business process automation.
TechNavio’s take. “Virtualization is expected to speed up the wider movement toward business process automation and remote collaboration. The TechNavio findings appear to indicate that the market in general is expecting a major investment in this area within the next two to three years.”
My thoughts. On the subject of business process automation, if TechNavio means “scripting,” I can agree with this trend. SearchServerVirtualization.com contributor Andrew Kutz has received a few questions from readers about automation, which suggests that there are plenty of other IT pros with similar questions. Also, he increasingly writes tips about scripting for X or Y, often concerning disaster recovery or hot backups. Most recently I’ve seen questions about scripting virtual machines (VMs) to power on and off at a certain time.
Food for thought. If scripting VMs advances, what will happen to the number of system admins and data center managers needed to run a data center? Perhaps all you IT programmers should slow down the scripting process before you script yourself right out of a job!
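To make the power-on/power-off scripting idea concrete, here is a minimal sketch of time-based VM scheduling on an ESX 3.x host, as it might be run from cron on the service console. The `.vmx` path and the schedule are hypothetical; `vmware-cmd` is the service-console utility ESX 3.x ships for VM power operations, but check your own version's syntax before relying on it.

```python
# Minimal sketch: power VMs on and off by time of day on an ESX 3.x
# host. The .vmx paths and schedule below are illustrative only.
import subprocess
from datetime import time

# VM config file -> (power-on time, power-off time).
SCHEDULE = {
    "/vmfs/volumes/datastore1/batch01/batch01.vmx": (time(19, 0), time(5, 0)),
}

def should_run(now, start, stop):
    """True if `now` falls inside the window, including windows that
    wrap past midnight (e.g. 19:00 to 05:00)."""
    if start <= stop:
        return start <= now < stop
    return now >= start or now < stop

def enforce(now):
    """Start or stop each VM according to its schedule."""
    for vmx, (start, stop) in SCHEDULE.items():
        op = "start" if should_run(now, start, stop) else "stop"
        # e.g. vmware-cmd /vmfs/volumes/.../batch01.vmx stop
        subprocess.run(["vmware-cmd", vmx, op], check=False)
```

Run `enforce(datetime.now().time())` from a cron job every few minutes and the schedule enforces itself; the window logic is the only subtle part, since overnight windows (start later than stop) have to wrap around midnight.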
On the subject of remote collaboration, I definitely agree with TechNavio. I wrote an article on emerging client-side desktop virtualization technologies. In response, I received comments from readers who said that they had found a surprising number of companies exploring client-side virtual desktop infrastructure (VDI) technologies for implementation in 2008. I think it’s high time for VDI; just consider the number of stolen or misplaced laptops, or CDs containing personal information that have gone missing in the mail. I don’t know about you, but identity theft certainly isn’t on my holiday wish list. And I would certainly appreciate company investment in this kind of technology, considering how mobile technology has become.
2. Network-delivered computing.
TechNavio’s take. “Virtualization is also expected to boost the move toward network delivered computing or what is being termed PC-over-IP. This in turn will place vendors such as Cisco, NEC and Sun at the heart of the market, but interestingly leaves the door open for a host of innovative start-ups.”
My thoughts. I would agree here as well. My aforementioned article discusses vThere, which focuses primarily on providing client-side virtual desktops via its own (i.e., third-party) servers, to which a client notebook connects when opening the virtual desktop. During interviews, my subjects all mentioned the trend of software vendors moving toward delivering their software via virtual machine. We have already seen a few virtualization companies provide beta versions of new software via VM. As virtualization adoption continues to grow, I can easily see all kinds of independent software vendors providing their products as virtual machine downloads.
3. Legacy applications and virtualization.
TechNavio’s take. “As application virtualization speeds up applications development and maintenance, or ADM, vendors have a real opportunity to grow into a new market defined as optimizing legacy applications for virtualization.”
My thoughts. We haven’t focused much on application virtualization on SearchServerVirtualization.com and SearchVMware.com, so I don’t have an informed opinion on this subject. Readers, do you?
4. Small and midsized businesses (SMBs).
TechNavio’s take. “The biggest long-term opportunity for virtualization vendors lies in the SMB space, specifically end-to-end solutions that allow SMBs to outsource and virtualize their entire network.”
My thoughts. I disagree here. Clearly, there is opportunity and space for virtualization in the SMB market, but to say it’s the biggest long-term opportunity? That’s a stretch. I doubt that larger businesses, once virtualized, will stop virtualizing. A more accurate statement would be that virtualization vendors should target SMBs to further extend virtualization’s reach.
5. Labor market and skills.
TechNavio’s take. “As the market for server virtualization heats up, finding people with the right skills is set to get harder. With this environment TechNavio predicts that there will be increased opportunities for IT services companies as well as for IT staffing solutions providers.”
My thoughts. I don’t know whether I agree that finding people with the right skills will become more difficult; it depends on IT workers and their drive to stay on top of the certifications that prove their worth. (Cough, the VMware Certified Professional (VCP) exam, cough, cough.) And whenever technology advances, desired skill sets change, so this prediction isn’t all that impressive. As for increased opportunities for IT services companies: yes. It’s easier to go to a business and say, “Get me a sys admin with a VCP stamp of approval!” than it is to shuffle through résumés looking for VCPs. And those who have the right credentials will find themselves in increasing demand, so stay on top of what you’re worth salary-wise given the move toward virtualizing mission-critical servers. Just because your current company doesn’t realize your worth doesn’t mean that Company Y, which has more virtualized servers and a greater need for virtual environment management experience, doesn’t.
TechNavio’s press release also included a quote after these “top five trends.” S. Chand, co-founder of Chicago-based Infiniti Research, who conducted the research for this report, said, “Currently the biggest beneficiaries of server virtualization are the enterprise users whose businesses tend to be dependent on running compute-heavy, high-availability, application-intensive data centers. These include: ISPs, hosting and managed service providers, banks’ trading divisions, gaming, online retailers and the like.”
So if you are looking to get the most (read: more money) from your virtualization experience, check job offers with companies that deal with these types of services.
VMware has made two significant news announcements of late: the long-awaited availability of ESX 3.5, and SAP’s blessings for its applications running within VMware virtual machines.
What we failed to mention, or at least to highlight, was the release of VirtualCenter 2.5, which now supports up to 200 hosts and 2,000 virtual machines (VMs); other new features are listed here. In our defense, John Gilmartin, VMware’s senior manager of product marketing, said that the 2.5 release was largely about enabling some of the new features of the Virtual Infrastructure 3 (VI3) suite, such as Update Manager and Distributed Power Management. “Sure, there was some work underneath that ensured the scalability was in place, but primarily [2.5] is about enabling these new features,” Gilmartin said.
Taken together, these announcements underscore just how far VMware has come as an enterprise software vendor. Contrast that with the big news from the latter half of the week: Microsoft’s unveiling of the Hyper-V beta. No huge surprises there, unless, of course, you count the fact that it was released ahead of schedule. We’re eagerly awaiting reviews of the latest Hyper-V; if you were among its downloaders, don’t be shy, tell us what you think by leaving a comment.
VMware, are you listening? Microsoft’s Hyper-V hypervisor, formerly known as Viridian, is available for public beta, the company announced today. It can be obtained by downloading Windows Server 2008 RC1 Enterprise from http://www.microsoft.com/ws08eval.
The beta is shipping ahead of schedule; Microsoft had previously promised it for the first quarter of 2008.
This is not the public’s first glimpse of Hyper-V; Microsoft released a Community Technology Preview (CTP) of Hyper-V in September. With this beta, Microsoft is demonstrating additional features, notably Quick Migration, high availability, the Server Core role and Server Manager integration. The final version of Hyper-V is still scheduled for release within 180 days of the “release to manufacturing” (RTM) of Windows Server 2008, currently scheduled for Feb. 27, 2008.
Writing on the Windows Server Division Weblog, Mike Neil, general manager of virtualization strategy, also revealed that Windows Server 2008 Standard Edition will grant one virtual instance of the OS, which, he pointed out, is not the case with Windows Server 2003 Standard Edition.
According to the post, specific new features of Hyper-V include the following:
- Quick Migration and cluster high availability;
- Default integration with Windows Server management;
- Support for running Hyper-V with Server Core in the parent partition;
- Volume Shadow Copy Service, or VSS, support;
- VHD tools support (compaction, expansion and inspection);
- Hyper-V Microsoft Management Console (MMC)-only installation: the Hyper-V MMC can be installed on Windows Server 2008 without installing the complete Hyper-V role, enabling remote management of Hyper-V servers;
- Support for up to four virtual SCSI controllers per VM, with 255 VHDs per controller;
- Support for multiple network adapters per VM;
- Support for up to 64 GB of memory per VM; and
- Guest support for Windows 2003 x86/x64 and Windows 2008 x86/x64 (support for other guest OS platforms will arrive before the Hyper-V RTM).
A webcast detailing the new Hyper-V beta is scheduled for next week, on Tuesday, Dec. 18, at 12:30 PST; interested parties can register here.
More to come.
Planning your virtual environment is critically important, and it should not be taken on without the proper resources. Sure, you have IT colleagues giving you ideas and making you wish your virtual environment were where theirs is now, but make sure you select the correct solution. And if you are new to virtualization, chances are you will need help doing that. Take, for example, VMware’s Plan and Design Services for the planning and design phases of your virtualization implementation. It is not a free service, but you will have a VMware Certified Professional (VCP) available to assist with your pre-implementation decisions. That cost can be nothing compared with the cost of a solution that is undersized, oversized, or otherwise unable to meet your performance and functionality expectations.
As the other virtualization platforms mature, these services will become more available there as well. And as virtualization technology moves toward commodity status, with management tools leading the way in value, the planning and design phases will only become more important.