July 24, 2007 3:24 PM
Posted by: Marcia Savage
Virtualization strategies
IT professionals may wear many hats in their organizations, but we tend not to be known for our fashion sense. To assist in that area, I’d like to cover one of the latest styles in virtualization: the return of the thin client. Case in point: see Alex Barrett’s coverage of HP’s acquisition of thin-client vendor Neoware, Inc.: Virtualization informs HP’s Neoware Acquisition. It’s time for traditional fat desktops to start becoming even more self-conscious. Of course, thin clients never really went away – they’ve been around since the popularization of “network computing,” which started in the late ’90s. Lest any of you commit a social faux pas while strutting down your data center’s loading ramps, I wanted to point out some of the issues that prevented the predicted takeover of thin clients:
- Cheaper desktops: Reducing hardware acquisition costs was a goal for thin client proponents. As desktop computers hit the sub-$500 range, however, the cost advantages of thin clients became far harder to justify.
- Fatter apps and OS’s: A while ago, I heard someone ask the most pertinent question I’d heard in years: “Is hardware getting faster faster than software is getting slower?” The answer, my friends, seems to be “no”. As hardware gets more capacity, OS’s and applications tend to swallow it up like a supermodel at a salad bar.
- Single points of failure: Thin clients (and their users) rely on centralized servers and the network that allows access to them. Failures in these areas mean major downtime for many users.
- The application experience: Remote desktop protocols could provide a basic user experience for the types of people who use a mouse to click on their password fields when logging on to the computer. Single-task users adapted well to this model. But what about the rest of us? I’d like the ability to run 3-D apps and use all of my keyboard shortcuts. And I’d like to be able to use USB devices such as scanners and music players.
- Server-side issues: Server-side platforms from Citrix, Microsoft, and other vendors had limitations on certain functionality (such as printing).
So, is it possible for these super-skinny client computers to address these issues? I certainly think so. Server and network reliability have improved over the years, providing a solid foundation. Thin clients are inexpensive, and server-side hardware and software have gained usability features. For example, Windows Server 2008’s Terminal Services provides the ability to run specific applications (rather than the entire desktop) over a remote connection. And multi-core processors that support large amounts of RAM help enable scalability. Overall, thin clients are cheap dates, they’re more readily available, and they’re less needy than in the past. What IT admin wouldn’t like that? Only time will tell if this relationship will last.
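As a rough sketch of how that per-application publishing looks from the client side: a RemoteApp-style .rdp file asks the terminal server for a single program rather than a full desktop session. The server name and application alias below are illustrative placeholders, not a tested configuration:

```
full address:s:ts01.example.com
remoteapplicationmode:i:1
remoteapplicationprogram:s:||wordpad
remoteapplicationname:s:WordPad
```

Launching a file like this opens just that application in its own window on the client while it actually executes on the terminal server; the `||alias` form refers to an application alias the administrator has published on the server.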
Oh, and one last fashion tip: Don’t throw away your old fat clients just yet. Like so many other fads, they may be back in style sooner than you think. Order a slice of cheesecake and think about that!
July 23, 2007 7:19 PM
Posted by: Jan Stafford
Virtualization strategies
Why choose server virtualization?
Burton Group analyst Chris Wolf shared some good advice about consolidating servers with virtualization in our recent interview. Here are some quick tips gleaned from our conversation, along with links for more information and some questions for you about these topics.
Making a pitch
Make these key points when pitching server consolidation via virtualization to upper management:
- Virtualization is a means to running fewer physical servers and thus consuming less power in the data center.
- With fewer physical servers, hardware maintenance and upkeep costs go down.
- Virtualization increases server availability via dynamic failover enacted at the virtual machine level. So any application can support high availability, which is a big difference between virtualization and traditional clustering solutions.
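To put rough numbers behind the first two bullets, a back-of-the-envelope model is often all the pitch needs. This sketch is illustrative only: the wattage, utility rate, and per-server maintenance figures are assumptions to replace with your own.

```python
# Rough consolidation-savings model; every default value here is an assumption.

def consolidation_savings(physical_servers, consolidation_ratio,
                          watts_per_server=400, cost_per_kwh=0.10,
                          maintenance_per_server=500):
    """Estimate hosts remaining and yearly savings after P2V consolidation."""
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    servers_removed = physical_servers - hosts_needed
    kwh_saved = servers_removed * watts_per_server / 1000.0 * 24 * 365
    power_savings = kwh_saved * cost_per_kwh            # dollars per year
    maintenance_savings = servers_removed * maintenance_per_server
    return hosts_needed, power_savings + maintenance_savings

hosts, yearly = consolidation_savings(100, consolidation_ratio=10)
print(hosts, round(yearly))  # 10 76536
```

Even with conservative inputs, the removed-server count does most of the work in the argument, which is exactly the point Wolf makes above.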
(Have you made this pitch? What did you say? What were the results? Let me know in the comments below or by writing to email@example.com.)
Converting multi-purpose servers to VMs
Watch out. This is tricky territory, says Wolf.
“When I have multi-purpose servers, I generally want to take each application or service on that server that I need and run it as its own VM instance. So, in those cases, you are better off manually reprovisioning those services as separate virtual machines again; because in a dynamic failover environment, the VM itself is the point of failover. So, if I have a multi-purpose server, if I am looking at failover, every application on that server is going to be off-line for the period of the failover. If I have a single application per virtual machine, if the VM fails over now, only a single application would be down.”
(Wolf talks more about this process in the interview. Has anyone out there tackled multi-purpose server-to-virtualization conversions? If so, please share your experiences with me at firstname.lastname@example.org.)
Physical-to-virtual (P2V) migration
There are several approaches, says Wolf. Some common practices that work in small environments — such as manually staging a VM and migrating the data and relying on a backup product to help with the migration — are not a good fit for larger data center environments. When migrating many servers, use a product designed for that job to do a hot clone of a virtual machine.
“Not only does it let me move each VM in a live state, I can schedule when the VMs get converted so I can do a conversion during off-business hours.”
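The scheduling half of that quote is simple to reason about. This sketch is purely illustrative (the window, job list, and clone duration are assumptions; real conversion tools have their own schedulers): it queues each server into the next off-business-hours slot.

```python
# Illustrative sketch: assign P2V hot-clone jobs to off-business-hours slots.
from datetime import datetime, timedelta

BUSINESS_START, BUSINESS_END = 8, 18  # assume 8am-6pm are business hours

def next_off_hours(start):
    """Return the next moment at or after `start` outside business hours."""
    if start.hour < BUSINESS_START or start.hour >= BUSINESS_END:
        return start
    return start.replace(hour=BUSINESS_END, minute=0, second=0, microsecond=0)

def schedule(servers, start, minutes_per_clone=45):
    """Give each server a conversion slot, pushing work out of business hours."""
    slots, t = [], start
    for name in servers:
        t = next_off_hours(t)
        slots.append((name, t))
        t += timedelta(minutes=minutes_per_clone)
    return slots

plan = schedule(["web01", "db01", "mail01"], datetime(2007, 7, 23, 16, 0))
for name, when in plan:
    print(name, when.strftime("%H:%M"))
```

Run at 4:00 PM, the first clone is deferred to 6:00 PM and the rest follow in 45-minute slots; a production scheduler would also handle windows that spill past midnight.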
More P2V info can be found here:
SSV’s P2V news and expert advice;
Measuring the success of your server consolidation project.
Got other good P2V links or advice? Let me know: email@example.com.
July 23, 2007 3:16 PM
Posted by: Barney Beal
Virtualization management
Virtualization platforms
Virtualization strategies
Article after article and post after post have compared and contrasted Xen, VMware, Viridian, and a host of other virtualization technologies, with opinions on performance, management tools, implementations, etc., etc. in abundant supply. Inevitably when it comes to Xen, the story comes full circle with some sort of declaration about “data center readiness.” The definition of “ready for the data center” is quite subjective, of course, based largely on the author’s personal experience, skills, and their opinion of the technical capabilities of those managing this vague “data center” to which they are referring.
Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file – a common task for any system administrator. Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available and/or in-development for the highly capable Xen engine, at varying degrees of maturity.
And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, “where is virtualization heading” and “which of these technologies has the most long term viability?”
Where is virtualization technology heading?
To even the most passive observers it has to be obvious that virtualization is here to stay. What may not be so obvious are the trends, the first being integrated virtualization. Within a year, every major server operating system will have virtualization technology integrated at its core. Within a few short years, virtualization functionality will simply be assumed – an expected capability of every server class operating system. As it is with RHEL now, administrators will simply click on a “virtualization” checkbox at install time.
The second trend is in the technology, and that is the “virtualization aware” operating system. In other words, the operating system will know that it is being virtualized, and will be optimized to perform as such. Every major, and even most minor, operating systems either have or will soon have a virtualization aware core. Performance- and scalability-sapping binary translation layers and dynamic recompilers will be a thing of the past, replaced by thin hypervisors and paravirtualized guests. Just look at every major Linux distro, Solaris, BSD, and even Microsoft’s upcoming Viridian technology on Windows Server 2008, and you can’t help but recognize the trend.
Which of these technologies has the most long term viability?
Since we now know the trends, the next logical step is to determine which technology to bet on, long term. Obviously, the current crop of technologies based on full virtualization, like KVM and VMware (it’s not a hypervisor, no matter what they say), will be prosperous in the near term, capitalizing on the initial wave of interest and simplicity. But, considering the trends, the question should be, “will they be the best technology choice for the future?” The reality is that, in their current state and with their stated evolutionary goals, full virtualization solutions offer little long-term viability, as integrated virtualization continues to evolve.
And which technology has everyone moved to? That’s simple – paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or will be Xen compatible in a very short time.
Of course, those with the most market share will continue to sell their solutions as “more mature” and/or “enterprise ready” while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution. It reminds me of the ice farmers’ response to the refrigerator – rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good.
So then, is Xen ready for the “data center?”
The simple answer is – that depends. As a long time (as these things go, anyway) user of the Xen engine in production, I can say with confidence that the engine is more than ready. All of the functionality of competing systems, and arguably more, is working and rock solid. And because the system is open, the flexibility is simply unmatched. Choose your storage or clustering scheme, upgrade to a better one when it becomes available, use whatever configuration matches your needs – without restriction. For *nix virtualization, start today.
For Windows virtualization, the answer is a bit more complex. Pending Viridian, the stopgap is to install Windows on Xen with so-called “paravirtualized drivers” for I/O. Currently, these are only available using XenSource’s own XenServer line, but will soon be available on both Novell and Red Hat platforms (according to Novell press releases and direct conversations with Red Hat engineers). While these drivers easily match the performance of fully virtualized competitors, they are not as fast as a paravirtualized guest.
Of course, you could simply choose to wait for Viridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any, conversion cost when Viridian goes golden. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64-bit guests, and 32-bit paravirtualized guests on 64-bit hosts.
So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving. Do you go with XenSource’s excellent, yet more restrictive tool set, a more open platform such as Red Hat or Novell, or even a free release such as Fedora 7? That depends on your skills and intestinal fortitude, I suppose. If you are lost without wizards and a mouse, I’d say XenSource is the way to go. For the rest of us, a good review of all the available options is in order.
What about that “long term?”
So we know that virtualization aware operating systems are the future, but how might they evolve? Well, since we know that one of the key benefits of virtualization is that it makes the guest operating system hardware agnostic, and we know that virtualization aware guests on hypervisors are the future, then it seems reasonable to conclude that most server operating systems will install as a paravirtualized guest by default, even if only one guest will be run on the hardware. This will, by its very nature, create more stable servers and applications, facilitate easy to implement scalability, and offer improved performance and manageability of platforms.
As for my data center, this is how we install all our new hardware, even single task equipment – Xen goes on first, followed by the OS of choice. We get great performance and stability, along with the comfort of knowing that if we need more performance or run into any problems, we can simply move the guest operating system to new hardware with almost no down time. It’s a truly liberating approach to data center management.
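For readers who haven’t seen it, the per-guest definition behind this “Xen goes on first” approach is just a short text file. Everything below (names, paths, sizes) is an illustrative example in the Xen 3.x config style, not a recommendation:

```
# /etc/xen/web01.cfg -- illustrative Xen 3.x guest definition
name       = "web01"
memory     = 512                      # MB of RAM for the guest
vcpus      = 1
bootloader = "/usr/bin/pygrub"        # boot the guest's own kernel
disk       = ['phy:/dev/vg0/web01,xvda,w']
vif        = ['bridge=xenbr0']
on_crash   = 'restart'
```

`xm create web01.cfg` boots the guest, and `xm migrate --live` to another host is the near-zero-downtime move to new hardware described above.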
July 20, 2007 3:30 PM
Posted by: Jan Stafford
Virtualization security
Beware of hackers attacking virtual machines (VMs) via the hypervisor or virtual switch. These two avenues of attack will probably pose the most problems to IT security managers in virtualized data centers, Burton Group analyst Chris Wolf told me in a recent interview.
Here are some quick takes from that interview, offered as a heads-up about security and management issues one might face with virtual machines. At the end of this post, I’ve put in some links to other resources on virtualization security.
It’s not so easy to compromise each operating system (OS) living within VMs on a server; but an attack on the underlying hypervisor layer in a virtual environment wouldn’t be too hard to accomplish. Such an attack can take down or limit access to several VMs in one fell swoop, Wolf said. Even worse, the hacker could introduce his own virtual machine to a network without the administrative staff knowing about it.
There’s no silver bullet for protecting the hypervisor. The best practice is, of course, keeping it up to date with patches and software updates.
As for virtual switches, Wolf said:
“Not every virtual switch provides the layer of isolation that it should in comparison to a physical switch. Hardware-assisted virtualization is starting to do a lot to provide more hardware-level isolation between virtual machines, but as of today you really have isolation at the address level, but no isolation currently in terms of memory, and that is something that is coming with forthcoming virtualization architectures.”
Chris Wolf offers more advice on data protection and server virtualization management in this webcast. (It requires registration.)
You’ll find more VM security tips in the article by SearchServerVirtualization.com resident expert Anil Desai on VM security best practices.
Ed Skoudis and Tom Liston give a detailed rundown on Thwarting VM Detection in this white paper. I found it in a post on Stephen R. Moore’s blog. Thanks, Stephen.
If you’ve had any problems with or can offer any advice on virtualization security, please sound off here, or write to me at firstname.lastname@example.org.
July 20, 2007 3:02 PM
Posted by: Ryan Shopp
Links we like
Microsoft Virtual Server
Virtualization platforms
Virtualization strategies
I was cruising the Web just now, trying to find some interesting blogs that aren’t chock-full of code that an associate editor simply does not understand. I clicked on Roudy Bob’s blog (see our blogroll for his link) and lo and behold, my boredom was alleviated!
To read the following analogy of the virtualization game and the boardgame RISK from the source, visit RoudyBob’s blog.
“I somewhat miss the days when virtualization was at the fringe of the market and just about everything that came along was new and exciting. Now, it’s a high-stakes game – with hundreds of millions (if not billions) of dollars of software and services to be had for the company that plays it right. Along with maturity comes incremental, conservative product releases aimed to grow cautiously while nurturing the existing customer base. Also involved now is the politics and strategy of mergers and acquisitions – not the typical fare for your standard geek. The more I thought about my last post, the more I realized what we’re seeing in the market today is a lot like the RISK game most of us played as a kid. Take for example, the game board:
Microsoft, VMware, SWsoft, XENSource and other smaller players are trying to carve out their piece of the total virtualization pie. The company that claims the most territory (share of the market) wins. Sure, it’s probably a bit of an obvious analogy to make – but it does provide a little different perspective on things.
“Let’s say for the sake of argument that the virtualization RISK map is laid out like this:
“North America – Data Center Virtualization
South America - Development and Test
Africa - Virtual Infrastructure Management (a.k.a., utility computing)
Europe – Linux Virtualization
Asia – Virtualized Desktops
Australia / Pacific Rim – OS X Virtualization
“Each time we observe the likes of Microsoft and VMware (EMC) opening the war chests to dole out large sums of money for smaller companies doing interesting things, the map shifts a little more in the favor of one or the other. New entrants also shake up the dynamics of the map.
“Take the Microsoft acquisition of Softricity for example – having the ability to virtualize applications on the desktop would significantly advance Microsoft’s position in the Virtualized Desktops arena – a place that has seen little traction to date. Previously, VMware’s ACE product was really the only large player in that game. When VMware acquired Akimbi this month, they definitely made a further push in two areas they are already strong in – Development and Test as well as Virtual Infrastructure Management.
“Continuing the RISK analogy, then, which players occupy the most territory and where should a company like Microsoft (amazingly the underdog, for once…) focus its efforts?
“I think it’s safe to say that the North American continent, er, the Data Center Virtualization space is occupied in a big way by VMware. The fact that they were first to market with an enterprise-class virtualization product (ESX Server) made it easy to make headway in IT organizations who made the early move to virtualization. The ESX Server product is fairly well positioned to satisfy companies’ urge to consolidate and rationalize their physical servers onto virtual machines. Microsoft’s Virtual Server product, despite the company’s efforts, has made little progress in getting into these larger-scale virtual machine environments. Remember, though, that the first player to advance isn’t always the winner.
“Development and Test is a different story. I think Microsoft has an amazing opportunity to leverage the Windows platform and its broad developer tools offering to really win this part of the market. And, if you want my opinion, that’s a much better strategy for going after Data Center Virtualization than trying to fight an uphill battle against ESX Server. A large presence in this space and some strategic offensive moves to the north (remember the analogy, right!?) could turn the tide away from VMware. Everyone is eagerly anticipating the release of the Windows-based hypervisor due sometime after “Longhorn”. But in a year and a half, the market will have likely left Microsoft behind. I think it’s a very large bet on their part that will most likely not pay off.
“Virtual Infrastructure Management is where all of the major players (and other folks like Altiris, BMC, Acronis, etc.) seem to be focusing these days. And rightly so. Being able to manage a large virtualized infrastructure easily and bring the concept of “utility computing” to reality is a guaranteed way to differentiate yourself. Again, I think VMware has the early lead as its VMotion and VirtualCenter solutions have helped them to garner mindshare in this area. But, products like System Center Virtual Machine Manager, Systems Management Server and Operations Manager from Microsoft give that company at least a way to make inroads.
“This is undoubtedly the biggest portion of the virtualization market (the greatest customer need) and would be the place where I would choose to play if I were an up-and-coming company that wanted to focus on the space. The reason management is so appealing is that there are all sorts of interesting problems to solve – management, monitoring, backup, restore, provisioning, auditing, asset management, etc. And for the most part, they’re problems that customers are willing to spend some money to address. Startups can grow quickly by providing something customers need and folks like VMware, Microsoft and SWsoft can easily differentiate themselves from one another by leveraging the management “story” around virtualization.
“Linux Virtualization, analogous to the Europe of RISK, is where companies like SWsoft with their Virtuozzo product and XENSource with their Xen product have dominated. Sure, VMware Workstation and VMware Server both run on Linux and the ESX Server hypervisor is based on it. But, in terms of catering to the needs of the open source community and the requirements of large-scale hosting providers running Linux, the Virtuozzo and Xen products have the most traction. SWsoft used their Virtuozzo for Linux product as a foothold into the broader Windows market when it released Virtuozzo for Windows. And Xen is scrambling to provide Windows guest OS support based on the new virtualization support in the latest generation of Intel processors. Your starting position on the game board doesn’t dictate the outcome, just the strategy.
“The biggest untapped market for virtualization has to be leveraging virtualization as part of the end user experience on the desktop. VMware’s ACE product was the first to focus on this, but no one company – even VMware – has seemed to get any traction. The potential opportunity for an interesting solution to problems like mobile workforce empowerment, workstation security, etc. is enormous. The sheer numbers dictate that a successful solution could yield impressive financial returns.
“Ironically, Microsoft is probably best positioned to do something in this space and hasn’t. There are plans for providing VirtualPC capabilities to enterprise Vista customers but in reality this is just more of the same. What if users could run their browser in a seamless window running as part of a background virtual machine that was isolated from the corporate network? What if the applications and user data for a workstation PC were somehow virtualized so that users could move easily between different pieces of hardware? These are some of the possibilities that Microsoft could start to address by leveraging its Windows monopoly on the desktop and the pervasiveness of centralized management solutions like Active Directory and Group Policy. And their “innovation” in this area is to bundle a couple of licenses together and call it Virtual PC Express.
“Lastly, there’s the OS X Virtualization market. In the game of RISK, completely occupying Australia is one way to gain an advantage early – leveraging the additional armies provided by controlling the entire continent. As far as virtualization is concerned, I don’t think owning the Mac market is going to yield any huge advantage in areas like Data Center Virtualization or Virtual Infrastructure Management. It’s still an interesting space – especially with the switch to Intel-based Macs. What was once dominated by Microsoft’s Virtual PC product is now up for grabs again with products like Parallels Workstation for OS X gobbling up earlier adopters who bought new Intel-based machines and want to virtualize Windows. Apple may also have a play here as well if rumors are true that they are looking to integrate virtualization into the next version of the OS X operating system.”
Well done, Roudy Bob!
June 29, 2007 5:31 PM
Posted by: Jan Stafford
Editor: This is a post by Simon Crosby, Xen Project leader. In the first sentences he refers to a previous blog posting on this site.
I wanted to re-phrase some key points from my earlier blog posting (which I have withdrawn), because I failed to tease out and succinctly articulate the core argument, and in doing so unintentionally aroused the ire of some in the community. Thanks to those who offered feedback — you were right, and I stand corrected. Let me try to get it right.
Novell’s announcement of its Windows driver pack for the Xen hypervisor implementation in SLES is interesting because it both challenges the existing business models of the Linux distros and offers them previously inaccessible opportunities through the delivery of mixed-source offerings.
When Linux was just Linux, and not capable of virtualizing other operating systems, the concept of the Linux OSV Supporting the OS and all open source components in the app stack that they deliver as part of the distro was straightforward. The business model of the major distros is based on their ability to Support (that is, take a phone call from their customer, and deliver fixes where necessary) any of the technology they deliver in their product (whether or not they originally developed it). An open source product philosophy enables them to develop, debug, and build expertise in the entire stack that they deliver.
But with virtualization as an integral component of the distro (whether Xen, KVM or one of the other open source virtualization technologies), Linux is only one (arguably the key) component of the stack, and when a different OSV’s product is virtualized on Linux (Windows, perhaps, or another open source OS), two new opportunities emerge: First, a Linux OSV can extend its value proposition to its customers by offering to Support other open source OSes virtualized; and second, by adding to their offerings the requisite closed source add-ons such as the Novell Windows Driver Pack for closed source OSes, the distros can artfully deliver high value mixed-source offerings that “price to value”, and protect themselves from the kind of discounting attack that Oracle used on Red Hat.
Both Novell and Sun have announced their intention to support their customers’ use of other open source operating systems virtualized on their implementation of the Xen hypervisor. Thus, one might expect Novell to see new business opportunities to support competitive Linux distros on SLES, and in so doing give customers a migration path to SLES as an OS while leveraging SLES and the hypervisor to virtualize existing competitive Linux installations in use by the customer.
The fact that Linux, BSD and OpenSolaris source code is available to the virtualization vendor, and the fact that the key vendors and communities behind those OSes work within the context of the Xen project to develop a common open source “standard” hypervisor, means that from a virtualization perspective at least, all are compatible with the same hypervisor ABI, hopefully reducing any support complexity. Thus far, only Red Hat has maintained a steady focus on RHEL alone, and possibly future Windows support.
The possible move by the Linux OSVs toward the delivery of mixed-source offerings is extremely important. Upcoming releases of NetWare and OES to run on SLES/Xen give Novell an important opportunity to price to value, specifically because the mixed-source nature of the combined product contains IP, and the market is good at determining the value of such things. Contrast this with the traditional open source business model, in which there is no IP, but the vendor markets a high value brand (such as RHEL or SLES) and associated service offering.
This is vulnerable to attack by lower labor cost and/or competitive offerings – a problem that the mixed-source offering does not seem to me to have.
It is a specific goal of the Xen Project to develop an open source “engine” that can be delivered to market by multiple players, in multiple products. Virtualization of closed source OSes forces (in the case of Windows) the delivery of closed source value-added components that are not part of the core hypervisor itself. The value-added components that vendors must add to the “engine” in order to deliver a complete “car” to their customers allow them to differentiate their products, and give customers choice. By contrast, had it been the Xen project’s goal to deliver a complete open source “car,” there would be no value proposition for the different vendors seeking to add virtualization to their products, and it would put Xen in conflict with the Linux OSVs — some of the most important contributors to the project.
The nutshell: I think that Xen has pioneered a new model of open source business – one which uses open source as a reference standard implementation of a component of the offering, but which stops short of a whole product. This encourages multiple vendors to contribute, because adopting that model allows them to add value to the final product and be compensated for it.
June 26, 2007 3:24 PM
Posted by: cwolf
, Virtual machine
, Virtualization management
, Virtualization platforms
, Virtualization security
, Why choose server virtualization?
A couple of weeks ago I spoke with Alex Barrett for what I thought was a talk on the direction of the server virtualization landscape. Our conversation resulted in her article “Xen virtualization will catch up to VMware in 2008.” After reading the article, I was a little surprised at how some of my words were quoted out of context and wanted to offer my take on the virtualization market and its future direction.
VMware’s Role in Shaping the Future
Many of VMware’s competitors have based their product development road map on VMware’s VI3 feature set. When I state that Xen platforms can catch up to VMware’s VI3 features by mid-2008, I mean just that. By this time next year, several Xen vendors will offer mature dynamic failover (comparable to VMware HA) and live migration (comparable to VMotion) solutions. In doing so, Xen platforms will offer the features that today’s enterprise environments are demanding. Virtual Iron has been very aggressive with their development roadmap, and XenSource is working hard as well.
Still, in order to “catch up,” one would have to assume that VMware is sitting on their hands, which of course is far from the case. So will the Xen vendors have caught up to VMware next year? I don’t think so. Will they offer the features and maturity that allow them to be viewed as an alternative in the enterprise? Yes.
However, looking into my crystal ball, I see the next generation of VMware’s virtual infrastructure architecture once again raising the bar. VMware’s ESX hypervisor will have a smaller footprint and improved security. Features that are important in the enterprise, including dynamic VM failover and backup, will see significant improvements. You should also see the complexity of storage integration reduced. Technologies such as N_Port ID Virtualization (NPIV) and the proliferation of iSCSI will significantly ease VM storage integration and failover.
I also expect to see more leadership from VMware in the following areas:
- Virtual network security, including monitoring and isolation
- Storage virtualization – development of consistent standards and best practices for integration between server and storage virtualization platforms
- Centralized account management and directory service integration (this is one of my VCB pet peeves)
- Virtual desktop management
Keep in mind that many VMware Workstation features often find their way into ESX as well, so you should expect some of the new Workstation 6 features to play a part in the next ESX Server product release. Record/replay is one of my favorite new features and has numerous uses for testing, troubleshooting, and security auditing.
We should all expect VMware, as the market leader, to continue driving innovation in virtualization, and I don’t expect that to subside.
Virtualization and Security
Security has been getting much more attention lately and will continue to do so in coming years. My recent article “Virtual Switch Security” outlined some of the current weaknesses regarding Layer 2 traffic isolation in some virtual switches. Virtual switches need to improve their default isolation as well as their manageability. Port mirroring is an important feature in virtual switches and will be needed for integration with intrusion detection and prevention systems. However, administrators need to be able to control port mirroring within a virtual switch and, in turn, enable or disable port mirroring on specific ports as needed. VLAN integration is and will remain a concern for virtual switches, and vendors that do not offer 802.1Q VLAN support will remain at a disadvantage.
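To make the point concrete, here is a toy model of the per-port mirroring control I’d like virtual switches to expose. This is not any vendor’s actual API; all class and method names here are invented for illustration.

```python
# Illustrative model only -- not a real vendor API. Sketches per-port
# mirroring control: mirroring is off by default and enabled only on
# the specific ports an administrator selects for IDS monitoring.

class VirtualSwitchPort:
    def __init__(self, port_id, vlan=None):
        self.port_id = port_id
        self.vlan = vlan          # 802.1Q VLAN tag, if any
        self.mirrored = False     # isolated by default

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = {}

    def add_port(self, port_id, vlan=None):
        self.ports[port_id] = VirtualSwitchPort(port_id, vlan)

    def set_mirroring(self, port_id, enabled):
        """Enable or disable mirroring on one specific port."""
        self.ports[port_id].mirrored = enabled

    def mirrored_ports(self):
        """Ports whose traffic would be copied to an IDS sensor."""
        return [p.port_id for p in self.ports.values() if p.mirrored]

# Mirror only the DMZ-facing port to the IDS; leave the rest isolated.
vswitch = VirtualSwitch("vSwitch0")
vswitch.add_port("vm-web", vlan=10)
vswitch.add_port("vm-db", vlan=20)
vswitch.set_mirroring("vm-web", True)
print(vswitch.mirrored_ports())  # only vm-web is mirrored
```

The design point is that mirroring should be opt-in and per-port, not an all-or-nothing switch-wide setting.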
Intrusion detection is becoming more of a concern for numerous organizations, and the uptake of virtualization support by many security ISVs is evidence of that. For example, Catbird’s V-Agent can be used to quickly add an IDS to existing virtual networks.
Hypervisor security is naturally important as well. If you would like to see some of the issues out there today, take a look at Harley Stagner’s excellent article on preventing and detecting rogue VMs. The blue pill attack has also received considerable interest. For more information on blue pill, take a look at Joanna Rutkowska’s presentation “Virtualization – the other side of the coin.”
The security concerns relating to virtualization are no scarier than what we already see with existing operating systems and applications. While security concerns should not prevent you from implementing virtualization, you cannot ignore security either. Hypervisors and management consoles (such as the ESX console, which uses a Red Hat-based kernel) still must be managed and updated like all other server operating systems.
You should expect virtualization vendors to obtain Common Criteria EAL certification for their respective platforms to validate the security of their architectures.
At the moment, standards are more on my wish list than an actual prediction. I’m hopeful that we will see a common virtual hard disk format within the next two to five years. A common format could provide virtual machine portability among all server virtualization platforms and make it considerably easier for ISVs to package and deploy virtual appliances. Administrators would be free to choose their preferred virtualization platform and run virtual machines on it regardless of which virtualization engine packaged a particular VM.
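For illustration, here is a small sketch of the kind of format juggling a common standard would eliminate. The magic values are the published on-disk signatures for VMware sparse VMDK (“KDMV”), Microsoft VHD (the “conectix” footer cookie), and QEMU qcow; the function itself is just a toy.

```python
# Hedged sketch: identifying a virtual disk image by its magic bytes.
# Today every platform has its own container; with a common format,
# tools (and virtual appliance ISVs) wouldn't need checks like this.

def detect_disk_format(path):
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(b"KDMV"):
        return "vmdk"            # VMware sparse extent magic
    if head.startswith(b"conectix"):
        return "vhd"             # dynamic VHDs carry a footer copy up front
    if head.startswith(b"QFI\xfb"):
        return "qcow"            # QEMU copy-on-write image
    return "unknown"
```

A converter between formats is more involved (headers, grain/block tables, sparse allocation), which is exactly why a single standard format would be such a win.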
Management standards would also go far in easing virtualization deployments and management. Common APIs for management and backup would allow any third-party management or backup tool vendor to support all major virtualization platforms. With industry support of the DMTF System Virtualization, Partitioning, and Clustering (SVPC) Working Group, standardized virtualization management can become a reality.
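To show why common APIs matter to third-party vendors, here is a hypothetical sketch: one abstract driver interface, per-platform implementations underneath, and a backup routine written once against the interface. The interface and driver names are mine for illustration, not from any DMTF specification.

```python
# Sketch of what a standardized management API could enable. A backup
# ISV codes against the abstract interface once; each platform vendor
# supplies a driver. All names here are invented.

from abc import ABC, abstractmethod

class VirtualizationDriver(ABC):
    @abstractmethod
    def list_vms(self): ...
    @abstractmethod
    def snapshot(self, vm_name): ...

class ESXDriver(VirtualizationDriver):
    def list_vms(self):
        return ["web01", "db01"]          # stub: would call the ESX SDK
    def snapshot(self, vm_name):
        return f"esx-snapshot:{vm_name}"

class XenDriver(VirtualizationDriver):
    def list_vms(self):
        return ["app01"]                  # stub: would call the Xen API
    def snapshot(self, vm_name):
        return f"xen-snapshot:{vm_name}"

def backup_all(driver):
    """Written once by the backup vendor, works on any compliant driver."""
    return [driver.snapshot(vm) for vm in driver.list_vms()]

print(backup_all(ESXDriver()))
print(backup_all(XenDriver()))
```

Without such a standard, every backup and management vendor re-implements this per platform, which is the state of the market today.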
Application and OS virtualization, fueled by vendors such as SWsoft, Sun, DataSynapse, and Trigence, will continue to add to the virtualization mix in the enterprise. Down the road, application virtualization can significantly ease application deployment by allowing ISVs to package their applications in virtualized containers, greatly reducing deployment complexity. These technologies run alongside server virtualization deployments today, and it’s likely that they will be deployed within server virtualization frameworks down the road.
Much work still remains in aligning the non-virtualized industry with the virtualized world. Both application and OS vendors need to be clear on their virtualization licensing terms, offering licensing models that support virtualization based on either physical or virtual resources. Hybrid licensing that includes terms for virtualization and restrictions on relocating VMs to other physical resources impedes virtualization adoption and adds unnecessary confusion. In 2005, Microsoft added a needed jolt to virtualization by being the first vendor to define product licensing in support of server virtualization. Today it needs to go further and set the gold standard for licensing of operating systems and applications inside virtual environments. That model should be clear and concise, with simple terms for virtual machines and without limits on portability. “Buffet” style licensing that provides for unlimited VMs on a physical host is ideal as well. Choices and rules are good, but let’s not get carried away. In terms of licensing, less is more. If Microsoft gives us a simple licensing model, many other industry vendors will follow.
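A quick back-of-the-envelope comparison shows why the licensing model matters as VM density grows. The prices below are invented placeholders, not anyone’s actual list prices.

```python
# Toy comparison of per-VM licensing versus "buffet" per-host licensing.
# All dollar figures are made-up placeholders for illustration only.

def per_vm_cost(vms, price_per_vm=500):
    """Pay for every virtual machine individually."""
    return vms * price_per_vm

def per_host_buffet_cost(hosts, price_per_host=3000):
    """Pay once per physical host, unlimited VMs on each."""
    return hosts * price_per_host

hosts, vms_per_host = 4, 10
vms = hosts * vms_per_host
print(per_vm_cost(vms))            # 20000 under per-VM licensing
print(per_host_buffet_cost(hosts)) # 12000 under buffet licensing
```

The gap widens as consolidation ratios climb, which is why buffet-style terms encourage virtualization adoption while per-VM terms quietly penalize it.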
Virtualization’s future holds plenty of promise, and we’ll all be the beneficiaries of that promise.
June 20, 2007 12:41 PM
Posted by: Alex Barrett
No, Apple hasn’t made any announcements about virtualization for the iPhone, but all this reporting about VMware ESX Lite jogged my memory of a conversation I had recently with XenSource CTO and founder Simon Crosby. While talking about Xen 3.1, Simon mentioned that since Xen is an open-source project, some developers in the consumer electronics space have adopted it as the basis of an embedded hypervisor for your cell phone or MP3 player.
Yes, even down on a mobile device, virtualization has a role to play. For security reasons, Crosby explained, electronic device manufacturers typically use multiple chips to perform different functions — one processor to run the real-time operating system, another for graphics, a third for sound, and so on. And just like in contemporary servers, all those chips are wildly underutilized. Enter virtualization. By running an embedded hypervisor on a single CPU, each process can run logically isolated from the others, while allowing the manufacturer to cut down on the number of chips in the device.
“I call it PDA consolidation,” Crosby joked.
The benefits of integrating virtualization into consumer electronics are similar to the benefits IT managers derive from server virtualization: better utilization of hardware equals less hardware. In consumer devices, that translates into smaller, lighter devices with better battery life that cost less to manufacture and, therefore, cost less for consumers to buy. Cool.
This got me curious about who is actually doing this. A simple Google search gave me a couple of leads. Last year, LinuxDevices.com reported on a company called Sombrio Systems developing a “Xen Loadable Module” (XLM), but the company appears to have fallen by the wayside. However, Trango Virtual Processors, based in Grenoble, France, seems to be actively involved in embedded virtualization. According to its web site, just this week the company announced a version of its TRANGO Hypervisor for the latest generation of ARM processors. With TRANGO, ARM processors gain the ability to run up to 256 virtual processes, each executing a “rich operating system (such as Linux or Windows CE), a real-time operating system (RTOS), or a standalone driver or application.” I have no idea how far along they are in this process, or when virtualization-enhanced mobile devices might hit the market, but it certainly sounds promising.
June 20, 2007 10:31 AM
Posted by: Joe Foran
Links we like
, Virtual Iron
, Virtualization security
Virtualization.info, Gridtoday, the eGenera website, and a number of other sources reported that eGenera has received a patent for an all-in-one N+1 tiered disaster recovery solution that combines grid technology and virtualization to provide a hardware-neutral disaster recovery product that encapsulates your entire data center. It’s an impressive product because it can greatly improve DR and perhaps make DR more accessible to smaller businesses, but it’s not patent-worthy. It smacks of a way to stifle competition and generate revenue via patent suits rather than product sales. Or it may be that the patent itself is just pointless. I’m not sure which case is more true, honestly. It all depends on whether it’s challenged, and how.
DISCLAIMER: I’m not a lawyer. In fact, I don’t ever even want to be a lawyer. I’m happy as an IT Director and SysAdmin, and don’t want to ever be a source of legal advice. The below is informed opinion, not legal advice. Tell it to the judge.
Under the recently relaxed “obviousness” rule that governs patents, a patent is useless if the idea behind it is only an obvious improvement over an existing idea. eGenera’s patent seems to fall squarely under that rule, judging from the language in its patent application. Just for kicks, I read it, and will quote it. Under the quotes I’ll comment on what this means, in my not-so-humble and not-so-attorney opinion.
It starts out with a SCREAMING cry of obviousness in section 1…
“A method of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising: generating a specification that describes a configuration of processing resources of the primary site; providing the specification to a fail-over site having a configurable processing platform capable of deploying processing area networks in response to software commands; using the specification to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. “
That, my friends, is commonly known in the IT field as a failover cluster. The link even defines the N+1 method that eGenera is using in its product. The short of it: you have multiple boxes on a network that are mirrors of one another. One fails, another takes over its role. There’s usually hardware or software in between that keeps things synchronized and detects the failure. This part of the patent is worded to be host-, network- and processor-inclusive, which would be obvious because most clusters are situated on networks, don’t necessarily need to run the same processors, and are hosts. The “big” improvement is in the use of the term “site” – the product is meant to restore an entire data center’s configuration. Per the press release, this means that if you have four data centers and one disaster site, and any one data center fails, the disaster site takes on the complete configuration of the failed site (i.e., all nodes, network configurations, etc.). This is a huge step forward in disaster recovery, but it’s not patent-worthy because there are a zillion ways to do this.
Here’s one – If you put 100% of your data center(s) onto VMware’s VI3 with shared storage, and had a huge WAN pipe between each site to cover the overhead, you would have this “network in a box” N+1 approach because ESX provides virtual network switching and virtual guest machines, without having to worry about the value of N except in scalability terms. The same is true of most Xen-based products, like Virtual Iron. I’ve been doing this for years on a much smaller scale. If my main data center drops off the face of the earth, I’ve got all of my critical systems in VMware, with snapshots and backups of the machines and the storage they’re on, as well as configuration of the virtual switches. If the worst happens and my data center goes down, my staff and I drive over to a remote office, restore, and have exactly what eGenera is talking about – a complete restoration of all configurations at a remote data center. The method – backing up virtualized systems. The process – recovery to virtualized systems. It’s not as slick as an automated system, but we’re getting to the point that eGenera talks about in its patent (thanks to an Open Source project called Openfiler and some extra bandwidth on our WAN to handle sending snapshots between sites rather than driving tapes around).
Soon, the site backups will be done automatically over WAN links, meaning that when something fails, I switch over to the DR network and everything comes back online from the virtual machine snapshots/backups. It won’t be long after that until we automate that switchover and have exactly what eGenera is describing. It’s been a long process because that deep a level of DR isn’t a critical requirement for the business, but it was obvious where we wanted to go, and obvious how we needed to get there – through the use of virtualized host and network environments. This brings me to the next few sections:
“2. The method of claim 1, wherein the act of using the specification to generate commands to deploy processing resources is in response to the receipt of a fail-over condition.
3. The method of claim 1, wherein the act of generating includes specifying data specific to device configuration at the primary site.
4. The method of claim 1, wherein the act of generating includes specifying management information relevant to the primary site. “
Summary – we’re patenting how we’re going to do what we claim, how we’re going to encode the configuration, and how we’re going to send notifications when it kicks off. All irrelevant if the concept of the patent is obvious. These are also all obvious in themselves – any virtualized system has to have information on the configuration of the underlying virtualized hardware. Any cluster sends notification on the failure of a node. Any mirrored port has awareness of the cache in its partner port. Outside of the patent office, and in the IT office, this stuff goes without saying.
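In code terms, the “specification” these claims describe is nothing more exotic than configuration-as-data: capture the primary site as a data structure, then replay it at the standby site when a failure fires. A minimal sketch, with all names and numbers hypothetical:

```python
# Hedged sketch of "failover from a specification": describe the primary
# site's processing area networks (PANs) as plain data ahead of time,
# then deploy that description at the standby site on a fail-over
# condition. Every name here is made up for illustration.

def generate_spec(site):
    """Capture the primary site's processing networks as plain data."""
    return {"pans": [dict(pan) for pan in site["pans"]]}

def deploy_from_spec(spec):
    """Stand up equivalent resources at the failover site."""
    return [f"deployed {pan['name']} ({pan['nodes']} nodes)"
            for pan in spec["pans"]]

primary = {"pans": [{"name": "web-pan", "nodes": 4},
                    {"name": "db-pan", "nodes": 2}]}

spec = generate_spec(primary)        # done ahead of time
actions = deploy_from_spec(spec)     # run when the fail-over fires
print(actions)
```

That’s the whole pattern: serialize the configuration, ship it, replay it. Which is exactly why it reads as obvious to anyone who has scripted a DR runbook.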
Next up, section 5. This is identical to section 1, except for the last few words: “wherein the specification describes all of the independent processing area networks.” There’s no significant difference as to obviousness here – I’m only quoting the parts of the patent that differ from one another. This is a matter of scale: rather than a “subset of the plurality” (English: part of the whole), this is a “master file” of the entire environment being monitored. It adds grid technology into the mix, another obvious case for virtualization.
Section 6 changes the plurality part to “a minimum configuration of processing resources at the primary site”, which is just saying that the system will use the most efficient (i.e., minimal) configuration possible to get the job done. Duh. Do I have the same number of VMware hosts at my remote sites? No, I don’t. I don’t even always have the same builds or even the same versions! Do I have all of the same configurations? No. Can I really bring up 100% of my data center at a remote site? Sure. And eat a performance bullet on the servers.
So what would I do? I would bring up critical systems only – Active Directory, DNS, Email, mission-critical apps. My Jabber server would stay down. My dev environment would stay down. I would run the minimal configuration I need to get the business working. Can it get any more obvious than “if I don’t have the all the resources I need, I’ll get by with what I have, the best that I can”?
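That “minimum configuration” logic is a few lines of code: rank your systems by how critical they are, and bring them up in order until the DR site runs out of capacity. A hedged sketch, with made-up VM names, priorities, and capacity numbers:

```python
# Toy "minimum configuration" planner for a capacity-constrained DR
# site: recover VMs in priority order until capacity is exhausted.
# Lower priority number = more critical. All data is invented.

def plan_recovery(vms, capacity_ghz):
    """vms: list of (name, priority, ghz). Returns names to bring up."""
    plan, used = [], 0.0
    for name, prio, ghz in sorted(vms, key=lambda v: v[1]):
        if used + ghz <= capacity_ghz:
            plan.append(name)
            used += ghz
    return plan

vms = [("active-directory", 1, 2.0), ("email", 1, 4.0),
       ("erp", 2, 4.0), ("jabber", 5, 1.0), ("dev-env", 9, 6.0)]
print(plan_recovery(vms, capacity_ghz=10.0))
```

With 10 GHz of standby capacity, the critical trio comes up and Jabber and the dev environment stay down, precisely the "get by with what I have" behavior the patent dresses up in claim language.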
The seventh section tacks this part onto the end: “…wherein the primary site includes a configurable processing platform having a pool of processors and wherein the act of generating includes specifying information descriptive of how the processors are pooled at the primary site.” A pool of processors is also known as a computing grid. The section goes on to describe having a documented system for how that grid works, and tying that to the application. This is truly obvious. If you have a system, you document it. If you have an automation system, it’s documented, and it uses documentation on how the system it automates functions. This sort of thing has been around forever.
On a non-grid level, Detroit has been outsourcing human labor to robots using this exact methodology for decades. On grids, this is how they work… regardless of distance. Each node is aware of each other node, and so the grid has an internal documentation of the pool of resources that are available.
Section 8: “The method of claim 7 wherein the act of generating includes specifying data to describe logical partitioning of processors and interconnectivity into logical processing networks.” This is the virtualization component. In virtualized systems and virtualized DR products like VMware’s VMotion, this is a core component of how the systems provide fault tolerance. The service console knows what virtual machines are out there, and what host systems are out there. It has descriptive information about the resources, how they’re pooled, and how they’ll be moved in the event of failure. eGenera’s idea is obviously a small improvement to the process, applying it to a virtualized grid concept, but it’s not a huge leap forward (again). Virtualized grids have already been in the works for some time. See here and here.
Section 9 states:
“A system of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising; a computer-readable specification that describes a configuration of processing resources of the primary site; a configurable processing platform capable of deploying processing area networks in response to software commands; logic to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. ”
In other words, the systems to the methods described above.
Section 10 is similar, switching plurality for totality like sections 1 and 5 did. So they’re going to build a computer system to do what DR specialists and Virtualization specialists have been doing for some time now, only under a commercial brand. Seems obvious to me.
The next sections reference art that I won’t reprint or link to here, as they’re not very original. In fact, the art is as obvious as the concept for this patent. I won’t need the art to describe the obviousness of some of what is printed in the text. Here’s my favorite example so far:
“To date, a considerable body of expertise has been developed in addressing disaster recovery with specific emphasis on replicating the data. Processor-side issues have not received adequate attention. To date, processor-side aspects of disaster recovery have largely been handled by requiring processing resources on the secondary site to be identical to those of the first site and to wait in standby mode.”
Processors have not received adequate attention because in virtualized environments, they are largely irrelevant as long as you’re not mixing widely different types (such as AMD and Intel). You do not need to maintain identical processors, or quantities of processors, or anything like this. I can restore the virtual machines running on my Intel dual core Xeon servers on my Intel single core Xeon machines with a great deal of flexibility amongst processor family types. Does it matter if one is 2.8GHz and another is 1.6GHz? Not really. The processors at my remote sites aren’t sitting in standby mode, either. They’re running apps on the local servers. They are live, running, and chugging along. They’re ready to load up more virtual machines and take over the load at any time.
So, considering the giant logical fallacy presented here, I’m left wondering if there’s even a need for this patent. I could get REALLY brave and open up a huge pipe to the remote sites and run shared storage and VI3 over my WAN… assuming I had unlimited funds for a 1+ gigabit WAN pipe, and then I could get away with having no other process beyond VMware’s built-in recovery with VMotion, VCB, and HA.
And yes, I recognize that not everything can be virtualized, but in all honesty, what eGenera proposes is no less disruptive to a data center than virtualization itself. The rest of the document gets into specifics and details that are very patent-sounding: detailed diagrams, how the parts of the product will work, a definition of PAN (processor area network, as opposed to personal area network), how control nodes manage the environment, etc.
Here’s the summary: we’re going to build a set of interconnected boxes that will virtualize your environment down to the tiniest level. Then, when something fails, we’ll load up resources at a remote site and make it all come back online.
Can it get any more obvious than this? It seems like eGenera is using patents to block competition. It strikes me that the folks at eGenera collectively went, “Oh, I have an idea to improve this and this and this. It seems kind of logical, but we should patent it so nobody else with the same idea can compete with us without licensing from us.” It’s a great product, but it uses existing technology and existing ideas about how to use technology to provide a product that is already out there, just not in a commercial package. I personally don’t think the patent will stand up to challenge, given the recent changes to patent law.