The Virtualization Room


July 20, 2007  3:30 PM

Virtualization security advice: Hypervisors, switches are vulnerable

Jan Stafford

Beware of hackers attacking virtual machines (VMs) via the hypervisor or virtual switch. These two avenues of attack will probably pose the most problems to IT security managers in virtualized data centers, Burton Group analyst Chris Wolf told me in a recent interview.

Here are some quick takes from that interview, offered as a heads-up about security and management issues one might face with virtual machines. At the end of this post, I’ve put in some links to other resources on virtualization security.

It’s not so easy to compromise each operating system (OS) living within VMs on a server; but an attack on the underlying hypervisor layer in a virtual environment wouldn’t be too hard to accomplish. Such an attack can take down or limit access to several VMs in one fell swoop, Wolf said. Even worse, the hacker could introduce his own virtual machine to a network without the administrative staff knowing about it.

There’s no silver bullet for protecting the hypervisor. The best practice is, of course, keeping it up to date with patches and software updates.

As for virtual switches, Wolf said:

“Not every virtual switch provides the layer of isolation that it should in comparison to a physical switch. Hardware-assisted virtualization is starting to do a lot to provide more hardware-level isolation between virtual machines, but as of today you really have isolation on the address base-level, but no isolation currently in terms of memory, and that is something that is coming with forthcoming virtualization architectures.”

Chris Wolf offers more advice on data protection and server virtualization management in this webcast. (It requires registration.)

You’ll find more VM security tips in the article by SearchServerVirtualization.com resident expert Anil Desai on VM security best practices.

Ed Skoudis and Tom Liston give a detailed rundown on Thwarting VM Detection in this white paper. I found it in a post on Stephen R. Moore’s blog. Thanks, Stephen.

If you’ve had any problems with or can offer any advice on virtualization security, please sound off here, or write to me at jstafford@techtarget.com.

July 20, 2007  3:02 PM

Virtualization is a real life game of RISK (A fun analogy)

Ryan Shopp

I was cruising the Web just now, trying to find some interesting blogs that aren’t chock-full of code that an associate editor simply does not understand. I clicked on Roudy Bob’s blog (see our blogroll for his link) and lo and behold, my boredom was alleviated!

To read the following analogy between the virtualization market and the board game RISK from the source, visit RoudyBob’s blog.

Enjoy.

“I somewhat miss the days when virtualization was at the fringe of the market and just about everything that came along was new and exciting. Now, it’s a high-stakes game – with hundreds of millions (if not billions) of dollars of software and services to be had for the company that plays it right. Along with maturity come incremental, conservative product releases aimed at growing cautiously while nurturing the existing customer base. Also involved now are the politics and strategy of mergers and acquisitions – not the typical fare for your standard geek. The more I thought about my last post, the more I realized what we’re seeing in the market today is a lot like the RISK game most of us played as a kid. Take, for example, the game board:

“Microsoft, VMware, SWsoft, XENSource and other smaller players are trying to carve out their piece of the total virtualization pie. The company that claims the most territory (share of the market) wins. Sure, it’s probably a bit of an obvious analogy to make – but it does provide a little different perspective on things.

“Let’s say for the sake of argument that the virtualization RISK map is laid out like this:

“North America – Data Center Virtualization
South America – Development and Test
Africa – Virtual Infrastructure Management (a.k.a. utility computing)
Europe – Linux Virtualization
Asia – Virtualized Desktops
Australia / Pacific Rim – OS X Virtualization
“Each time we observe the likes of Microsoft and VMware (EMC) opening the war chests to dole out large sums of money for smaller companies doing interesting things, the map shifts a little more in the favor of one or the other. New entrants also shake up the dynamics of the map.

“Take the Microsoft acquisition of Softricity for example – having the ability to virtualize applications on the desktop would significantly advance Microsoft’s position in the Virtualized Desktops arena – a place that has seen little traction to date. Previously, VMware’s ACE product was really the only large player in that game. When VMware acquired Akimbi this month, they definitely made a further push in two areas they are already strong in – Development and Test as well as Virtual Infrastructure Management.

“Continuing the RISK analogy, then, which players occupy the most territory and where should a company like Microsoft (amazingly the underdog, for once…) focus its efforts?

“I think it’s safe to say that the North American continent, er, the Data Center Virtualization space is occupied in a big way by VMware. The fact that they were first to market with an enterprise-class virtualization product (ESX Server) made it easy to make headway in IT organizations who made the early move to virtualization. The ESX Server product is fairly well positioned to satisfy companies’ urge to consolidate and rationalize their physical servers onto virtual machines. Microsoft’s Virtual Server product, despite the company’s efforts, has made little progress in getting into these larger-scale virtual machine environments. Remember, though, that the first player to advance isn’t always the winner.

“Development and Test is a different story. I think Microsoft has an amazing opportunity to leverage the Windows platform and its broad developer tools offering to really win this part of the market. And, if you want my opinion, that’s a much better strategy for going after Data Center Virtualization than trying to fight an uphill battle against ESX Server. A large presence in this space and some strategic offensive moves to the north (remember the analogy, right!?) could turn the tide away from VMware. Everyone is waiting with eager anticipation for the release of the Windows-based hypervisor, due sometime after “Longhorn”. But in a year and a half, the market will have likely left Microsoft behind. I think it’s a very large bet on their part that will most likely not pay off.

“Virtual Infrastructure Management is where all of the major players (and other folks like Altiris, BMC, Acronis, etc.) seem to be focusing these days. And rightly so. Being able to manage a large virtualized infrastructure easily and bring the concept of “utility computing” to reality is a guaranteed way to differentiate yourself. Again, I think VMware has the early lead as its VMotion and VirtualCenter solutions have helped them to garner mindshare in this area. But, products like System Center Virtual Machine Manager, Systems Management Server and Operations Manager from Microsoft give that company at least a way to make inroads.

“This is undoubtedly the biggest portion of the virtualization market (the greatest customer need) and would be the place where I would choose to play if I were an up-and-coming company that wanted to focus on the space. The reason management is so appealing is that there are all sorts of interesting problems to solve – management, monitoring, backup, restore, provisioning, auditing, asset management, etc. And for the most part, they’re problems that customers are willing to spend some money to address. Startups can grow quickly by providing something customers need and folks like VMware, Microsoft and SWsoft can easily differentiate themselves from one another by leveraging the management “story” around virtualization.

“Linux Virtualization, analogous to the Europe of RISK, is where companies like SWsoft with their Virtuozzo product and XENSource with their Xen product have dominated. Sure, VMware Workstation and VMware Server both run on Linux and the ESX Server hypervisor is based on it. But, in terms of catering to the needs of the open source community and the requirements of large-scale hosting providers running Linux, the Virtuozzo and Xen products have the most traction. SWsoft used its Virtuozzo for Linux product as a foothold into the broader Windows market when it released Virtuozzo for Windows. And Xen is scrambling to provide Windows guest OS support based on the new virtualization support in the latest generation of Intel processors. Your starting position on the game board doesn’t dictate the outcome, just the strategy.

“The biggest untapped market for virtualization has to be leveraging virtualization as part of the end user experience on the desktop. VMware’s ACE product was the first to focus on this, but no company – not even VMware – has seemed to get any traction. The potential opportunity for an interesting solution to problems like mobile workforce empowerment, workstation security, etc. is enormous. The sheer numbers dictate that a successful solution could yield impressive financial returns.

“Ironically, Microsoft is probably best positioned to do something in this space and hasn’t. There are plans for providing VirtualPC capabilities to enterprise Vista customers, but in reality this is just more of the same. What if users could run their browser in a seamless window running as part of a background virtual machine that was isolated from the corporate network? What if the applications and user data for a workstation PC were somehow virtualized so that users could move easily between different pieces of hardware? These are some of the possibilities that Microsoft could start to address by leveraging its Windows monopoly on the desktop and the pervasiveness of centralized management solutions like Active Directory and Group Policy. And their “innovation” in this area is to bundle a couple of licenses together and call it Virtual PC Express.

“Lastly, there’s the OS X Virtualization market. In the game of RISK, completely occupying Australia is one way to gain an advantage early – leveraging the additional armies provided by controlling the entire continent. As far as virtualization is concerned, I don’t think owning the Mac market is going to yield any huge advantage in areas like Data Center Virtualization or Virtual Infrastructure Management. It’s still an interesting space – especially with the switch to Intel-based Macs. What was once dominated by Microsoft’s Virtual PC product is now up for grabs again, with products like Parallels Workstation for OS X gobbling up early adopters who bought new Intel-based machines and want to virtualize Windows. Apple may have a play here as well, if rumors are true that they are looking to integrate virtualization into the next version of the OS X operating system.”

Well done, Roudy Bob!


June 29, 2007  5:31 PM

New open source business models based on Xen

Jan Stafford

Editor: This is a post by Simon Crosby, Xen Project leader. In the first sentences he refers to a previous blog posting on this site.

I wanted to re-phrase some key points from my earlier blog posting (which I have since withdrawn) because I failed to tease out and succinctly articulate the core argument, and in doing so unintentionally aroused the ire of some in the community. Thanks to those who offered feedback — you were right, and I stand corrected. Let me try to get it right.

Novell’s announcement of its Windows driver pack for the Xen hypervisor implementation in SLES is interesting because it both challenges the existing business models of the Linux distros and offers them previously inaccessible opportunities through the delivery of mixed-source offerings.

When Linux was just Linux, and not capable of virtualizing other operating systems, the concept of the Linux OSV Supporting the OS and all open source components in the app stack that it delivers as part of the distro was straightforward. The business model of the major distros is based on their ability to Support (that is, take a phone call from their customer, and deliver fixes where necessary) any of the technology they deliver in their product (whether or not they originally developed it). An open source product philosophy enables them to develop, debug and develop expertise in the entire stack that they deliver.

But with virtualization as an integral component of the distro (whether Xen, KVM or one of the other open source virtualization technologies), Linux is only one (arguably the key) component of the stack, and when a different OSV’s product is virtualized on Linux (Windows, perhaps, or another open source OS),  two new opportunities emerge: First, a Linux OSV can extend its value proposition to its customers by offering to Support other open source OSes virtualized; and second, by adding to their offerings the requisite closed source add-ons such as the Novell Windows Driver Pack for closed source OSes, the distros can artfully deliver high value mixed-source offerings that “price to value”, and protect themselves from the kind of discounting attack that Oracle used on Red Hat.

Both Novell and Sun have announced their intention to support their customers’ use of other open source operating systems virtualized on their implementation of the Xen hypervisor.  Thus, one might expect Novell to see new business opportunities to support competitive Linux distros on SLES, and in so doing give customers a migration path to SLES as an OS while leveraging SLES and the hypervisor to virtualize existing competitive Linux installations in use by the customer.

The fact that Linux, BSD and OpenSolaris source code is available to the virtualization vendor, and the fact that the key vendors and communities behind those OSes work within the context of the Xen project to develop a common open source “standard” hypervisor, means that from a virtualization perspective at least, all are compatible with the same hypervisor ABI, hopefully reducing any support complexity. Thus far, only Red Hat has maintained a steady focus on RHEL alone, plus possible future Windows support.

The possible move by the Linux OSVs toward the delivery of mixed-source offerings is extremely important. Upcoming releases of Netware and OES to run on SLES/Xen give Novell an important opportunity to price to value, specifically because the mixed-source nature of the combined product contains IP, and the market is good at determining the value of such things. Contrast this with the traditional open source business model, in which there is no IP, but the vendor markets a high value brand (such as RHEL or SLES) and associated service offering.

This is vulnerable to attack by lower labor cost and/or competitive offerings – a problem that the mixed-source offering does not seem to me to have.

It is a specific goal of the Xen Project to develop an open source “engine” that can be delivered to market by multiple players, in multiple products. Virtualization of closed source OSes forces (in the case of Windows) the delivery of closed source value-added components that are not part of the core hypervisor itself. The value-added components that vendors must add to the “engine” in order to deliver a complete “car” to their customers allow them to differentiate their products, and give customers choice. By contrast, had it been the Xen project’s goal to deliver a complete open source “car,” there would be no value proposition for the different vendors seeking to add virtualization to their products, and it would put Xen in conflict with the Linux OSVs — some of the most important contributors to the project.
The nutshell: I think that Xen has pioneered a new model of open source business – one which uses open source as a reference standard implementation of a component of the offering, but which stops short of a whole product.  This encourages multiple vendors to contribute, because adopting that model allows them to add value to the final product and be compensated for it.
 


June 26, 2007  3:24 PM

Virtualization Today and Tomorrow

Chris Wolf

A couple of weeks ago I spoke with Alex Barrett regarding what I thought was a talk on the direction of the server virtualization landscape. Our conversation resulted in her article “Xen virtualization will catch up to VMware in 2008.” After reading the article, I was a little surprised at how some of my words were quoted out of context and wanted to offer my take on the virtualization market and its future direction.

VMware’s Role in Shaping the Future

Many of VMware’s competitors have based their product development road maps on VMware’s VI3 feature set. When I state that Xen platforms can catch up to VMware’s VI3 features by mid-2008, I mean just that. By this time next year, several Xen vendors will offer mature dynamic failover (comparable to VMware HA) and live migration (comparable to VMotion) solutions. In doing so, Xen platforms will offer the features that today’s enterprise environments are demanding. Virtual Iron has been very aggressive with their development roadmap and XenSource is working hard as well.

Still, in order to “catch up,” one would have to assume that VMware is sitting on its hands, which of course is far from the case. So will the Xen vendors have caught up to VMware next year? I don’t think so. Will they offer the features and maturity that allow them to be viewed as an alternative in the enterprise? Yes.

However, looking into my crystal ball, I see the next-generation VMware virtual infrastructure architecture once again raising the bar. VMware’s ESX hypervisor will have a smaller footprint and improved security. Features that are important in the enterprise, including dynamic VM failover and backup, will see significant improvements. You should also see the complexity of storage integration reduced. Technologies such as N_Port ID Virtualization (NPIV) and the proliferation of iSCSI will significantly ease VM storage integration and failover.

I also expect to see more leadership from VMware in the following areas:

  • Virtual network security, including monitoring and isolation
  • Storage virtualization – development of consistent standards and best practices for integration between server and storage virtualization platforms
  • Centralized account management and directory service integration (this is one of my VCB pet peeves)
  • Virtual desktop management

Keep in mind that oftentimes many VMware Workstation features find their way into ESX as well. So you should expect some of the new Workstation 6 features to play a part in the next ESX Server product release. Record/replay is one of my favorite new features, and it has numerous uses for testing, troubleshooting and security auditing.

As the market leader, VMware should be expected to continue providing leadership in virtualization innovation, and I don’t expect that to subside.

Virtualization and Security

Security has been getting much more attention lately and will continue to do so in coming years. My recent article “Virtual Switch Security” outlined some of the current weaknesses regarding Layer 2 traffic isolation in some virtual switches. Virtual switches need to improve their default isolation as well as manageability. Port mirroring is an important feature in virtual switches and will be needed for integration with intrusion detection and prevention systems. However, administrators need to be able to control port mirroring within a virtual switch and in turn enable or disable port mirroring on specific ports as needed. VLAN integration is and will remain a concern for virtual switches and vendors that do not offer 802.1Q VLAN support will remain at a disadvantage.
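To put the manageability point in concrete terms, here is a minimal Python sketch of the kind of per-port policy an administrator should be able to express. The class names and fields are purely hypothetical, not any vendor's switch API: an 802.1Q VLAN tag per port plus an opt-in mirroring flag that an IDS integration could consume.

```python
from dataclasses import dataclass, field

# Hypothetical model of the per-port controls discussed above; not a real
# vendor API. It only illustrates what "enable mirroring on specific ports,
# with 802.1Q VLAN isolation" looks like as configuration data.

@dataclass
class PortPolicy:
    vlan_id: int = 0              # 802.1Q tag; 0 = untagged
    mirror_enabled: bool = False  # feed this port's traffic to an IDS sensor

@dataclass
class VirtualSwitch:
    name: str
    ports: dict = field(default_factory=dict)  # port name -> PortPolicy

    def set_vlan(self, port: str, vlan_id: int) -> None:
        self.ports.setdefault(port, PortPolicy()).vlan_id = vlan_id

    def enable_mirroring(self, port: str) -> None:
        # Mirroring is opt-in per port, so an IDS sees only what admins allow.
        self.ports.setdefault(port, PortPolicy()).mirror_enabled = True

    def mirrored_ports(self) -> list:
        return [p for p, pol in self.ports.items() if pol.mirror_enabled]


if __name__ == "__main__":
    vswitch = VirtualSwitch("vSwitch0")
    vswitch.set_vlan("web-vm", vlan_id=10)   # web tier on VLAN 10
    vswitch.set_vlan("db-vm", vlan_id=20)    # database tier isolated on VLAN 20
    vswitch.enable_mirroring("web-vm")       # mirror only the exposed tier to the IDS
    print(vswitch.mirrored_ports())          # ['web-vm']
```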

Intrusion detection is becoming more of a concern for numerous organizations, and the uptake of virtualization support by many security ISVs is evidence of that. For example, Catbird’s V-Agent can be used to quickly add an IDS to existing virtual networks.

Hypervisor security is naturally important as well. If you would like to see some of the issues out there today, take a look at Harley Stagner’s excellent article on preventing and detecting rogue VMs. The blue pill attack has also received considerable interest. For more information on blue pill, take a look at Joanna Rutkowska’s presentation “Virtualization – the other side of the coin.”

The security concerns relating to virtualization are no scarier than what we already see with existing operating systems and applications. While security concerns should not prevent you from implementing virtualization, you cannot ignore security either. Hypervisors and management consoles (such as the ESX console, which uses a Red Hat-based kernel) still must be managed and updated like all other server operating systems.

To validate the security of their architectures, you should expect virtualization vendors to obtain EAL certification for their respective platforms.

Standards

At the moment, standards are more on my wish list than an actual prediction. I’m hopeful that we will see a common virtual hard disk format within the next 2-5 years. Doing so could provide virtual machine portability amongst all server virtualization platforms and make it considerably easier for ISVs to package and deploy virtual appliances. Administrators would be free to choose their preferred virtualization platform and run virtual machines on that platform regardless of the virtualization engine that may have packaged a particular VM.
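Just to make that wish concrete, here is a tiny Python sketch of what a vendor-neutral VM descriptor might look like. The field names and file suffix are invented for illustration; they do not correspond to any real or proposed format.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical vendor-neutral VM descriptor -- invented fields, not a real or
# proposed standard. The point: if every platform could consume the same
# description plus a common disk format, an ISV packages an appliance once
# and the customer picks the hypervisor.

@dataclass
class PortableVM:
    name: str
    vcpus: int
    memory_mb: int
    disks: list  # paths to disk images in the (wished-for) common format

    def to_manifest(self) -> str:
        """Serialize to a manifest any compliant platform could import."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    appliance = PortableVM(name="log-analyzer", vcpus=1, memory_mb=512,
                           disks=["log-analyzer-disk0.vdisk"])
    print(appliance.to_manifest())  # same manifest, whatever engine packaged it
```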

Management standards would also go far in easing virtualization deployments and management. Common APIs for management and backup would allow any third-party management or backup tool vendor to support all major virtualization platforms. With industry support of the DMTF System Virtualization, Partitioning, and Clustering (SVPC) Working Group, standardized virtualization management can become a reality.
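As a rough illustration of why common APIs matter, consider this short Python sketch. The interface and adapter names are my own invention, not DMTF SVPC definitions or any vendor SDK; the inventory and snapshot calls are stubs.

```python
from abc import ABC, abstractmethod

# Hypothetical vendor-neutral management interface -- illustrative only.

class VirtualizationPlatform(ABC):
    @abstractmethod
    def list_vms(self) -> list:
        """Return the names of VMs on this platform."""

    @abstractmethod
    def snapshot(self, vm_name: str) -> str:
        """Return an identifier for the snapshot, whatever the back end is."""

class ESXAdapter(VirtualizationPlatform):
    def list_vms(self) -> list:
        return ["web01", "db01"]          # stand-in for a real inventory call
    def snapshot(self, vm_name: str) -> str:
        return f"esx-snap-{vm_name}"

class XenAdapter(VirtualizationPlatform):
    def list_vms(self) -> list:
        return ["build01"]
    def snapshot(self, vm_name: str) -> str:
        return f"xen-snap-{vm_name}"

def backup_everything(platforms: list) -> list:
    # A third-party backup tool written once against the common interface
    # works unchanged no matter which hypervisor sits underneath.
    return [p.snapshot(vm) for p in platforms for vm in p.list_vms()]

if __name__ == "__main__":
    print(backup_everything([ESXAdapter(), XenAdapter()]))
```

Whether a standard ends up looking anything like this is beside the point; what matters is that the tool vendor codes against one interface instead of one per hypervisor.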

Emerging Architectures

Application and OS virtualization, fueled by vendors such as SWsoft, Sun, DataSynapse, and Trigence, will continue to add to the virtualization mix in the enterprise. Down the road, application virtualization can significantly ease application deployment by allowing ISVs to package their applications in virtualized containers, greatly reducing application deployment complexity. These technologies run alongside server virtualization deployments today, and it’s very likely that they will be deployed within server virtualization frameworks down the road.

Much work still remains in aligning the non-virtualized industry with the virtualized world. Both application and OS vendors need to be clear on their virtualization licensing terms, with licensing models that support virtualization and are based on either physical or virtual resources. Hybrid licensing that includes terms for virtualization and restrictions on relocating VMs to other physical resources impedes virtualization adoption and adds unnecessary confusion. In 2005, Microsoft added a needed jolt to virtualization by being the first vendor to define product licensing in support of server virtualization. Today it needs to go further and set the gold standard for licensing of operating systems and applications inside virtual environments. That model should be clear and concise, with simple terms for virtual machines and without limits on portability. “Buffet”-style licensing that provides for unlimited VMs on a physical host is ideal as well. Choices and rules are good, but let’s not get carried away. In terms of licensing, less is more. If Microsoft gives us a simple licensing model, many other industry vendors will follow.

Virtualization’s future holds plenty of promise, and we’ll all be the beneficiaries of that promise.


June 20, 2007  12:41 PM

Virtualization on your iPhone?

Alex Barrett

No, Apple hasn’t made any announcements about virtualization for the iPhone, but all this reporting about VMware ESX Lite jogged my memory of a conversation I had recently with XenSource CTO and founder Simon Crosby. While talking about Xen 3.1, Simon mentioned that since Xen is an open-source project, some developers in the consumer electronics space have adopted it as the basis of an embedded hypervisor for your cell phone or MP3 player.

Yes, even down on a mobile device, virtualization has a role to play. For security reasons, Crosby explained, electronic device manufacturers typically use multiple chips to perform different functions — one processor to run the real-time operating system, another for graphics, a third for sound, and so on. And just like in contemporary servers, all those chips are wildly underutilized. Enter virtualization. By running an embedded hypervisor on a single CPU, every process can run logically isolated from one another, while allowing the manufacturer to cut down on the number of chips in the device.

“I call it PDA consolidation,” Crosby joked.

The benefits of integrating virtualization into consumer electronics are similar to the benefits IT managers derive from server virtualization: better utilization of hardware equals less hardware. In consumer devices, that translates into smaller, lighter devices with better battery life that cost less to manufacture, and therefore cost less for consumers to buy. Cool.

This got me curious about who is actually doing this. A simple Google search gave me a couple of leads. Last year, LinuxDevices.com reported on a company called Sombrio Systems developing a “Xen Loadable Module” (XLM), but the company appears to have fallen by the wayside. However, Trango Virtual Processors, based in Grenoble, France, seems to be actively involved in embedded virtualization. According to their web site, just this week the company announced a version of its TRANGO Hypervisor for the latest generation of ARM processors. With TRANGO, ARM processors gain the ability to run up to 256 virtual processes, executing a “rich operating system (such as Linux or Windows CE), a real-time operating system (RTOS), or a standalone driver or application.” I have no idea how far along they are in this process, or when virtualization-enhanced mobile devices might hit the market, but it certainly sounds promising.


June 20, 2007  10:31 AM

Why the eGenera Patent is Dangerous

Joe Foran

Virtualization.info, Gridtoday, the eGenera website, and a lot of other sources reported that eGenera has received a patent for an all-in-one N+1 tiered disaster recovery solution that combines grid technology and virtualization to provide a hardware-neutral disaster recovery product that takes your entire data center and encapsulates it. This is an awesome product because it can greatly improve DR and perhaps make DR more accessible to smaller businesses, but it’s not patent-worthy. It smacks of a way to stifle competition and generate revenue via patent suits rather than product sales. Or it may just be that the patent itself is pointless. I’m not sure which is more true, honestly. It all depends on whether it’s challenged, and how.

DISCLAIMER: I’m not a lawyer. In fact, I don’t ever even want to be a lawyer. I’m happy as an IT Director and SysAdmin, and don’t want to ever be a source of legal advice. The below is informed opinion, not legal advice. Tell it to the Judge.

Under the recently relaxed “Obviousness” rule that governs patents, a patent is useless if the idea behind it is only an obvious improvement over an existing idea. eGenera’s patent seems to fall squarely under that rule, judging from the language in their patent application. Just for kicks, I read it, and will quote it. Under the quotes I’ll comment on what this means, in my not-so-humble and not-so-attorney opinion.

It starts out with a SCREAMING cry of obviousness in section 1…

“A method of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising: generating a specification that describes a configuration of processing resources of the primary site; providing the specification to a fail-over site having a configurable processing platform capable of deploying processing area networks in response to software commands; using the specification to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. “

That, my friends, is commonly known in the IT field as a failover cluster. The link even defines the N+1 method that eGenera is using in its product. The short of it: you have multiple boxes on a network that are mirrors of one another. One fails, another takes over its role. There’s usually hardware or software in between that keeps things synchronized and detects the failure. This part of the patent is worded to be host-, network- and processor-inclusive, which would be obvious because most clusters are situated on networks, don’t necessarily need to run the same processors, and are hosts. The “big” improvement is in the use of the term “site” – the product is meant to restore an entire data center’s configuration. In the press release, this means that if you have four data centers and one disaster site, and any one data center fails, the disaster site takes on the complete configuration of the failed site (i.e., all nodes, network configurations, etc.). This is a huge step forward in disaster recovery, but it’s not patent-worthy because there are a zillion ways to do this.

Here’s one – If you put 100% of your data center(s) onto VMware’s VI3 with shared storage, and had a huge WAN pipe between each site to cover the overhead, you would have this “network in a box” N+1 approach because ESX provides virtual network switching and virtual guest machines, without having to worry about the value of N except in scalability terms. The same is true of most Xen-based products, like Virtual Iron. I’ve been doing this for years on a much smaller scale. If my main data center drops off the face of the earth, I’ve got all of my critical systems in VMware, with snapshots and backups of the machines and the storage they’re on, as well as configuration of the virtual switches. If the worst happens and my data center goes down, my staff and I drive over to a remote office, restore, and have exactly what eGenera is talking about – a complete restoration of all configurations at a remote data center. The method – backing up virtualized systems. The process – recovery to virtualized systems. It’s not as slick as an automated system, but we’re getting to the point that eGenera talks about in its patent (thanks to an Open Source project called Openfiler and some extra bandwidth on our WAN to handle sending snapshots between sites rather than driving tapes around).
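To show how un-exotic that workflow is, here's a rough Python sketch of the replicate-and-restore loop I just described. Everything below (the file names, the .snapshot suffix, the JSON switch config) is invented for illustration, not any product's actual on-disk format.

```python
import shutil
import tempfile
from pathlib import Path

# Illustrative sketch of the workflow described above: copy VM snapshots and
# the virtual-switch configuration to a recovery site, then decide which
# machines to bring up there. Layout and names are assumptions for the example.

def replicate_site(primary: Path, recovery: Path) -> list:
    """Copy every snapshot plus the switch config to the recovery site."""
    recovery.mkdir(parents=True, exist_ok=True)
    copied = []
    for item in sorted(primary.glob("*.snapshot")):
        shutil.copy2(item, recovery / item.name)
        copied.append(item.name)
    config = primary / "vswitch-config.json"
    if config.exists():
        shutil.copy2(config, recovery / config.name)
    return copied

def restore_plan(recovery: Path, critical=("ad", "dns", "mail")) -> list:
    """Pick only the critical systems -- the 'minimum configuration' idea."""
    return [s.stem for s in sorted(recovery.glob("*.snapshot"))
            if s.stem.startswith(critical)]

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        primary, recovery = Path(tmp, "primary"), Path(tmp, "recovery")
        primary.mkdir()
        for name in ("ad01", "dns01", "jabber01"):
            (primary / f"{name}.snapshot").write_text("fake snapshot data")
        (primary / "vswitch-config.json").write_text("{}")
        replicate_site(primary, recovery)
        print(restore_plan(recovery))  # ['ad01', 'dns01'] -- jabber stays down
```

A few dozen lines of glue, in other words, not a patentable invention.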

Soon, the site backups will be automatically done over WAN links, meaning that when something fails, I switch over to the DR network and everything comes back online from the virtual machine snapshots/backups. It won’t be long after that until we automate that switchover, and have exactly what eGenera is describing. It’s been a long process because that deep a level of DR isn’t a critical requirement for the business, but it was obvious where we wanted to go, and obvious how we needed to get there – through the use of virtualized host and network environments.

This brings me to the next few sections:

“2. The method of claim 1, wherein the act of using the specification to generate commands to deploy processing resources is in response to the receipt of a fail-over condition.

3. The method of claim 1, wherein the act of generating includes specifying data specific to device configuration at the primary site.

4. The method of claim 1, wherein the act of generating includes specifying management information relevant to the primary site. “

Summary: we’re patenting how we’re going to do what we claim, how we’re going to encode the configuration, and how we’re going to send notifications when it kicks off. All irrelevant if the concept of the patent is obvious. Also, these are all obvious in themselves – any virtualized system has to have information on the configuration of the underlying virtualized hardware. Any cluster sends notification on the failure of a node. Any mirrored port has awareness of the cache in its partner port. Outside of the patent office, and in the IT office, this stuff goes without saying.

Next up, section 5. This is identical to section 1, except for the last few words: “wherein the specification describes all of the independent processing area networks.” There’s no significant difference as to obviousness here – it’s just that the parts of the patent which differ from each other are referenced. This is a matter of scale – rather than a “subset of the plurality” (English: part of the whole), this is a “master file” of the entire environment being monitored. It adds grid technology into the mix, another obvious case for virtualization.

Section 6 changes the plurality part to “a minimum configuration of processing resources at the primary site”, which is just saying that the system will use the most efficient (i.e., minimal) configuration possible to get the job done. Duh. Do I have the same number of VMware hosts at my remote sites? No, I don’t. I don’t even always have the same builds or even the same versions! Do I have all of the same configurations? No. Can I really bring up 100% of my data center at a remote site? Sure. And eat a performance bullet on the servers.

So what would I do? I would bring up critical systems only – Active Directory, DNS, email, mission-critical apps. My Jabber server would stay down. My dev environment would stay down. I would run the minimal configuration I need to get the business working. Can it get any more obvious than “if I don’t have all the resources I need, I’ll get by with what I have, the best that I can”?

The seventh section tacks this part onto the end: “…wherein the primary site includes a configurable processing platform having a pool of processors and wherein the act of generating includes specifying information descriptive of how the processors are pooled at the primary site.” A pool of processors. Also known as a computing grid. And it goes on to describe having a documented system for how that grid works, and tying that to the application. This is truly obvious. If you have a system, you document it. If you have an automation system, it’s documented, and it uses documentation on how the system it automates functions. This sort of thing has been around forever.

On a non-grid level, Detroit has been outsourcing human labor to robots using this exact methodology for decades. On grids, this is how they work… regardless of distance. Each node is aware of each other node, and so the grid has an internal documentation of the pool of resources that are available.

Section 8: “The method of claim 7 wherein the act of generating includes specifying data to describe logical partitioning of processors and interconnectivity into logical processing networks.” This is the virtualization component. In virtualized systems and virtualized DR products like VMware’s VMotion, this is a core component of how the systems work to provide fault tolerance. The service console knows what virtual machines are out there, and what host systems are out there. It has descriptive information about the resources, how they’re pooled, and how they’ll be moved in the event of failure. eGenera’s idea is obviously a small improvement to the process, applying it towards a virtualized grid concept, but it’s not a huge leap forward (again). Virtualized grids have already been in the works for some time. See here and here.

Section 9 states:

“A system of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising; a computer-readable specification that describes a configuration of processing resources of the primary site; a configurable processing platform capable of deploying processing area networks in response to software commands; logic to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. ”

In other words, the system counterpart to the methods described above.

Section 10 is similar, switching plurality for totality like sections 1 and 5 did. So they’re going to build a computer system to do what DR specialists and Virtualization specialists have been doing for some time now, only under a commercial brand. Seems obvious to me.

The next sections reference art that I won’t reprint or link to here, as they’re not very original. In fact, the art is as obvious as the concept for this patent. I won’t need the art to describe the obviousness of some of what is printed in the text. Here’s my favorite example so far:

“To date, a considerable body of expertise has been developed in addressing disaster recovery with specific emphasis on replicating the data. Processor-side issues have not received adequate attention. To date, processor-side aspects of disaster recovery have largely been handled by requiring processing resources on the secondary site to be identical to those of the first site and to wait in standby mode.” 

Processors have not received adequate attention because in virtualized environments, they are largely irrelevant as long as you’re not mixing widely different types (such as AMD and Intel). You do not need to maintain identical processors, or quantities of processors, or anything like this. I can restore the virtual machines running on my Intel dual core Xeon servers on my Intel single core Xeon machines with a great deal of flexibility amongst processor family types. Does it matter if one is 2.8GHz and another is 1.6GHz? Not really. The processors at my remote sites aren’t sitting in standby mode, either. They’re running apps on the local servers. They are live, running, and chugging along. They’re ready to load up more virtual machines and take over the load at any time.

So, considering the giant logical fallacy presented here, I’m left wondering if there’s even a need for this patent. I could get REALLY brave and open up a huge pipe to the remote sites and run shared storage and VI3 over my WAN… assuming I had unlimited funds for a 1+ gig WAN pipe, and then I could get away with having no other process beyond VMware’s built-in recovery with VMotion, VCB, and HA.

And yes, I recognize that not everything can be virtualized, but in all honesty, what eGenera proposes is no less disruptive and impactful to a data center than virtualization itself. The rest of the document gets into specifics and details that sound very patent-like: detailed diagrams, how the parts of the product will work, a definition of PAN (processor area network, as opposed to personal area network), how control nodes manage the environment, etc.

Here’s the summary: We’re going to build a set of interconnected boxes that will virtualize your environment down to the tiniest level. Then when something fails, we’ll load up resources at a remote site and make it all come back online.

Can it get any more obvious than this? It seems like eGenera is using patents to block competition. It strikes me that the folks at eGenera collectively went “Oh, I have an idea to improve this and this and this. It seems kind of logical, but we should patent it so nobody else with the same idea can compete with us without licensing from us”. It’s a great product, but it uses existing technology and existing ideas about how to use technology to provide a product that is already out there, just not in a commercial package. I personally don’t think the patent will stand up to challenge, given the recent changes to patent law.


June 20, 2007  10:29 AM

The End of the Appliance As We Know It, And I Feel Fine

Joe Foran

I am writing this little op-ed piece in lieu of a full-blown obituary. Why? The market speaketh, and it declared hardware-based appliances dead. Like dinosaur-dead, dead. Sure, some of the specialists may survive, just like the Crocodile and the Shark have managed to keep evolving and avoid the big-rock-hit-earth-make-dino-dead era.

From the Metaphoric Journal-Register:

Appliance, Mr. Hardware B., passed away on June 19th, 2007. Mr. Appliance was renowned for his uncanny abilities to both create controversy and save money. During his career he was the muscle behind most modern network equipment, many network security services, the complete setup of numerous small and home office businesses, and a host of other specialized IT functions. His ability to reduce cost and complexity is duly noted and many have expressed great appreciation for his efforts. While many did not agree with his one-device-one-task approach, his fame and popularity continued to rise even in conflict. He is survived by one child, a Ms. Virtual Appliance. Said Ms. Appliance in her eulogy: “My father was of great service, and it is with great pride that I take up his mission. I promise to provide the public with the same services, the same muscle, and the same fiscal attention. Furthermore, I plan to take his vaunted career one step further and sever my ties to proprietary equipment. I know Dad would have been proud of this decision, which will give greater economic and administrative freedom to you, my beloved supporters.”

Why am I, at the risk of sounding like the world’s biggest (well, you can insert your own word here), being so haughty as to declare the hardware appliance dead? Because hardware is mattering less and less in the commodity server market, and it’s bleeding over into the commodity appliance market. Hardware appliances were great – they did one task (or one category of tasks) very well, had minimal overhead, and were often cheaper than a full-blown server-and-software solution. Who uses their own server-based routers? Not many people. And yet Cisco has gone quite far in undocking much of IOS from the hardware, a move that (among other things) is good for virtualization. If you’re looking for a security appliance, you could buy a Symantec hardware appliance, or you could download any number of similar appliances from VMTN. Inboxer makes an email archival hardware appliance. They also make a virtual appliance. Need a NAS or iSCSI SAN? These even come in virtual appliance flavors like the Openfiler appliance – and they’re great for taking those old hanging-chad JBOD storage arrays off their legacy hosts, linking them up on a single host, and converting to centralized storage. Zeus’ network traffic monitoring hardware appliances are now available as virtual appliances. The list of links goes on and on and on, and it shows an interesting trend: hardware appliances are giving way to virtual appliances across most of the market.

And like the big-rock-hits-earth scenario, it’s happening fast. It’s not flashy like a meteor strike, but it’s just as quick – a few short years and hardware appliances will take a backseat to the virtual appliance mammals. It’s cheaper for vendors to work on the software and not have to integrate it onto hardware that can change from revision to revision, and it’s as easy for customers to deploy and manage virtual appliances as it was to do the same with their dinosaur cousins. There’s even an extra layer of manageability with virtual appliances, since you can manage the hardware. A huge boon – business continuity. There’s built-in DR/BC in virtual appliances that you just don’t get in hardware. Wanna be ready for DR? Ok, get all of those hardware appliances duplicated. Or take snapshots of your virtual appliances. Which is easier? Which is less expensive? Which is more rapidly verifiable?

What about the performance hit? In all but the most demanding cases, such as a core switch or the load balancer for the storage arrays of a Fortune 500’s ERP systems, the 10% degradation in performance caused by virtualization is of minimal importance. These, then, are the crocs and sharks of the new era – highly specialized, long-term survivors that will continue to proliferate when the rest of the market enters the long sleep of the virtual asteroid impact.

Mitchell Ashley’s blog on the same subject takes a similar look here, and even uses some of the same analogies I use (I was quite grumpy when I found that mine wasn’t a very original thought, but such is what it is).

A few more links for the weary web-traveler:


June 19, 2007  2:54 PM

Get a VMware job and increase your salary by 115%

Ryan Shopp

Alessandro Perilli, SearchServerVirtualization.com contributor and owner of virtualization.info, has a fantastic virtualization-related jobs section on his site. At the time of this post positions are mostly for VMware gurus, but a few are for sales. Locations range from coast to coast in the US.

Check out the virtualization.info job board now — but not while your boss is watching… ;)

Another great site if you’re looking for virtualization-related jobs is Indeed.

If you’re still not convinced you should switch jobs, vi411.com has a post from January 22, 2007 that says VMware salaries are 115% higher than average data center salaries.


June 18, 2007  2:34 PM

Linux users: Xen, VMware, or Virtual Server?

Ryan Shopp

In Monday’s newsletter column, I included a question to our Linux users: Do you prefer Xen, VMware or Virtual Server, and why?

It’s only Monday afternoon, but I’ve gotten some interesting responses. Chris, the CIO for Oxford Archaeology: Exploring the Human Journey, wrote:

In response to your question, we prefer VirtualBox, which offers a degree of flexibility that only VMware VI3 gets close to. Without the entry costs! We currently are working with VMware Server, and a lack of a Linux client for VMware VI3, along with its MS SQL dependency, prevented a planned migration to VI3. VirtualBox is the young upstart on the block; the list of features that it is currently lacking in comparison with ESX grows shorter at an alarming rate; it is cross-platform, independent of hardware extensions (but can benefit from them), high performance, and remarkably quick to get to grips with.

David of Code No Evil, LLC wrote:

I prefer VMware because it’s a non-free commercial product with support. Microsoft, for example, doesn’t even list VPC 2K7 as a product on its support site. As for Xen, I’m rarely a proponent of the OSS community. As for VMware, my current support case just became known bug #154399. Nice to know that VMware was willing to admit a fault in their platform and intends to fix it.

I asked him for clarification on the bug. Here’s what he said:

I am running Vista x64 on a Mac Pro. My intent was to run XP off the hard drive from my old machine (a Dell Precision 340) in a USB enclosure using VirtualPC 2K7. VPC crashed every time I attempted to access the virtual drive (mapped to the physical drive). Support is non-existent for VPC 2K7 because Microsoft doesn’t even list it as a product at the support website. I even reached out to the “Virtual PC Guy”, but he was no help either. At this point, I figured that I should try VMware Workstation. At least if it didn’t work, I could open a support incident and I’d get some help. Well, long story short, there is a permissions issue that, despite going back and forth with VMware tech support (in India, no less), was irresolvable even in VMware Workstation. The support overall was not bad. A few times I had to send an extra email to get them to wake up, but all in all it was satisfactory. The rep even called me because the issue became too difficult to talk about on the phone. Now, the real test is to see how long it will be before a fix is released. I would gather that it will be soon because this bug precludes anyone from using a VMware virtual drive instance mapped to a physical drive on Vista. I would, as a developer, classify this as a critical defect.

Richard of OnX Enterprise Solutions Inc. wrote in suggesting Virtuozzo.

Chris, a system architect, wrote in with his preference for Xen:

I prefer Xen as it’s free on Red Hat 5 or SuSE 10 for Linux environments. EMC ESX rocks, though, if customers can afford it, with its small Red Hat Linux kernel and the various tools for both Linux and Windows environments. MS Virtual Server is better for test labs and with Microsoft platforms. I had a very bad experience with MS Virtual Server and NetWare systems, although that’s another OS.

Possibly, with improvements in MS Virtual Server, there might be a point where, if it’s free with MS licenses and if MS really does support Red Hat and especially SuSE underneath it, it could move up within the server marketplace.

More to come. In the meantime, what are your thoughts, readers?


June 18, 2007  11:06 AM

Virtualization community responds to ESX Lite rumor

Alex Barrett

In case you missed it, SearchServerVirtualization.com reported last week that VMware is developing an embedded “ESX Lite” hypervisor. And while VMware may have opted not to comment about it, the virtualization community has plenty to say on the topic. For one, Bob Plankers, over at The Lone Sysadmin thinks that ESX Lite could save him money on server hardware:

So you have an ESX server that doesn’t need local disk. That saves you $300 for a RAID controller and about $300 per 15K RPM 146 GB disk. For my RAID 1 + hot spare configurations that’s $1200. No moving parts equals theoretically better reliability, though flash drives have a limit to the number of read/write operations they can do over their lifetime. Also very little power consumption, and very little heat. Without all the extra heat from the disks you can reduce the number of fans in the chassis, which further reduces the price and power draw.

I, for one, totally agree with this assessment. Spinning disk drives inside a server are a major bummer. Since the vast majority of ESX instances are already SAN-attached, why not go all the way and ditch the internal boot drives?

The flipside, said Fred Peterson, a system administrator writing on the VMTN message board, is that an ESX Lite appliance could not be reused like general purpose hardware:

Once it becomes “outdated” it has to be tossed; you wouldn’t be able to re-use it as a test Windows box or Linux box or something. While not a bad thing, its life span would have to be pretty good to justify the upfront cost.

Over at MindSecure, a blog about “information security, virtualization, application delivery and storage,” ESX Lite is paired with Citrix Ardence, an OS streaming application, to positive effect.

Embedding ESX Lite in the hardware and using Ardence to stream the operating system would allow for complete hardware abstraction at the server and desktop level as well as the ability to remove spinning disk from servers and desktops, use solid state storage strictly on these devices, reduce storage utilization by using Ardence shared images, reduce cooling costs in the data center by using less disk, and many other advantages which these two solutions provide when paired together.

Scott Lowe on his blog says that ESX Lite has interesting competitive implications:

It’s smart because it derails Microsoft’s attempts to marginalize the hypervisor by bundling it with the operating system (via Windows Server Virtualization, aka “Viridian”). It’s smart because it expands the hypervisor market in new directions that no one else has yet tapped, helping VMware retain mindshare about its technical leadership and innovation. It’s smart because it’s the hardware vendors that have the most to lose via virtualization, and by partnering with them you remove potential future opponents.

But it’s a strategy that has its risks, he points out: namely, if the embedded hypervisor doesn’t perform as well as regular ESX, or if VMware loses visibility by going too deep under the hood.

Meanwhile, rumor has it that the original story has some inaccuracies in it, but like the old advertising saying (“I know half my advertising dollars are wasted – I just don’t know which half!”), without official word, I can’t speculate as to what’s right and what isn’t. An obvious possibility is that Dell is not participating with ESX Lite, or that the effort is not limited to just Dell. My gut tells me the latter is closer to the truth. Any thoughts are appreciated.

