I chatted with server-based computing expert Brian Madden the other day, and we got on the topic of VMware Inc.’s long-term viability as a company. Unlike VMware’s stockholders, Madden believes that the company won’t always be the behemoth it is today, despite the massive changes it has spurred in the IT industry.
Today, VMware’s strength is its “first-mover advantage,” according to Madden. But the same lead that has enabled VMware to enjoy an overwhelming market share, and a several-year technological advantage over its competitors, might also come back to haunt it, Madden said.
“Look, Amazon wasn’t the first online bookseller, eBay wasn’t the first online auction house, Internet Explorer wasn’t the first Web browser,” he said.
The list of second-mover companies that have rapidly eclipsed the pioneers is substantial. (Then again, the list of companies with first-mover advantage that made it, so to speak, is probably also pretty long.)
With mounting pressure on VMware from all sides, Madden thinks the company’s days are numbered. “VMware is going to be a footnote in the history of IT,” he predicted, “albeit an important footnote, no doubt about it, because of the way that they’ve changed the industry. But in the long run, I think our kids will be talking about VMware in their history classes.”
We continue our first look at Hyper-V technology in Windows Server 2008. While Windows Server 2008 has been released, Hyper-V is still in pre-release form and is scheduled for official release later this year. For now, we’re really looking at a beta version of Microsoft’s new virtualization technology.
All interface-driven tasks are done through the Hyper-V Manager MMC snap-in, which is added when the Hyper-V role is added to the Windows Server 2008 system. My evaluation of Hyper-V takes place in a full install of Windows Server 2008 Standard, x64. After populating Hyper-V with a few virtual machines, a nice feature caught my eye: a display of each virtual machine’s uptime. The Hyper-V Manager snap-in is shown below:
Creating a virtual machine
Creating a virtual machine is wizard-driven within the Hyper-V Manager. The wizard asks the following questions while creating a new virtual machine:
- Virtual machine name – This name is used within the management interface; it is not the DNS name or computer name of the virtual machine.
- Storage location – Hyper-V has a default location for the virtual machines, whether that be local or remote storage. A different location can be specified during the wizard.
- Memory amount – Hyper-V gives a default of 512 MB of RAM, which can be changed in the wizard.
- Networking – The network assigned to the virtual machine.
- Virtual hard disk selection and size – Where the virtual hard drive is placed on the file system. Hyper-V offers a generous default of a 127 GB virtual hard disk for the virtual machine, but does not allocate that footprint entirely on the file system up front.
- Installation options – Boot media can be assigned from a physical optical drive or an .ISO image file.
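That last point about the 127 GB default is dynamically expanding (thin-provisioned) storage. You can see the general idea with a plain sparse file; this is only a sketch of the concept, since Hyper-V’s dynamic .vhd format adds headers and a block allocation table on top of it:

```python
import os
import tempfile

# Sketch: a "thin-provisioned" disk is a file whose apparent size is
# larger than the space actually allocated on the filesystem. The size
# here stands in for the 127 GB wizard default.
APPARENT_SIZE = 100 * 1024 * 1024  # 100 MB for the demo

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(APPARENT_SIZE - 1)  # jump to the end without writing data
    f.write(b"\0")             # one real byte pins the file length
    path = f.name

st = os.stat(path)
apparent = st.st_size            # what the guest would see
allocated = st.st_blocks * 512   # what the host filesystem really used

print(f"apparent: {apparent} bytes, allocated: {allocated} bytes")
os.remove(path)
```

On a typical filesystem the allocated figure is a tiny fraction of the apparent size, which is exactly why a 127 GB virtual disk doesn’t eat 127 GB of your host’s storage on day one.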
Once the virtual machine is created, you can turn it on; the virtual BIOS splash screen appears and the virtual machine starts up, as shown below:
At this point, the new virtual machine is ready to run and have an operating system installed.
I hope to answer all of your questions about this virtualization platform. Please post them in the comments section below. And don’t forget to come back to the SearchServerVirtualization blogs for my next entry where I look at the Hyper-V manager and how it will fit in the enterprise.
Edit: The command-line interface in the core edition of Windows Server 2008 is not PowerShell but a core prompt: different from the normal CMD, but nowhere near as functional as PowerShell.
Just about every large hardware vendor can bundle support services for virtualization platforms with your equipment and software purchases. This may seem like a good idea, but let us take a moment to identify the mechanics of the support services that you are expecting.
The types of support services I am referring to are your telephone support options for the virtualization management software, virtualization host software systems, conversion tools and any databases that may be involved in your virtual environment. There are generally two different approaches to how the support options can be executed. These are shown below:
Virtualization vendor providing support
This is the more traditional approach, where your support services are purchased from and executed by the virtualization vendor. Quality of support and resolution time should both be better when dealing directly with the maker of the virtualization products. This support may be more expensive than the alternative described below, but it may also be the only fit if you run a mixed server environment. If the virtual host systems, the management software and the database engine all run on different brands of servers, it may be a better idea to get the software support directly from the virtualization vendor.
If you are unsure about which way to go in this regard, based on recent experience I would nudge you toward support programs operated directly by the virtualization vendor.
Hardware vendor providing support
This would be where IBM, HP, Dell or any other top-tier server hardware vendor is your point of contact for support on all of your virtual environment components. For core questions related to the virtualization components, the natural path is an escalation to the virtualization vendor. The unfortunate consequence of this setup is increased resolution time, due to the extra steps required when support cases originate with the hardware vendor.
This option may offer slightly better pricing to the end customer, as the top-tier hardware vendors’ aggregate sales drive volume pricing, and the package appears as a complete solution. It is also convenient, since it can generally be rolled into a large implementation as part of the same purchase process. Probably the strongest positive of this model is that issues determined to be part of the hardware environment will be resolved more quickly, with all parties already involved.
Okay, now what?
Why has this been laid out here? I just want you to be sure of what you are getting into with your support engagements and to align your expectations with the stated deliverables. If you purchase your support through your hardware vendor, you may not be able to call VMware, Citrix, Microsoft or whoever provides your virtualization platform directly. This may add frustrating time spent on basic troubleshooting first, or even introduce quality-of-support issues.
I have been evaluating Windows Server 2008 since the recent release of the base product to retail sale. The highly anticipated virtualization hypervisor, Hyper-V, is not part of the commercial product currently available, but Microsoft plans to make it available within six months of the initial product release. In this first installment of a series of SearchServerVirtualization blogs, I’ll go through my steps as I take a look at the beta implementation of the Hyper-V environment.
Evaluation installations of Windows Server 2008 with the beta Hyper-V are available for download from Microsoft. Installation of the base operating system is indistinguishable from the current retail versions. If you are going to evaluate the virtualization platform, or are even remotely considering a Microsoft virtualization implementation, start with Microsoft’s release notes and make sure you have adequate hardware available for the environment. The Hyper-V release notes outline specific system models and configuration items that need to be enabled to permit operation of the hypervisor, and they will give you an idea of the operating environment requirements.
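The key hardware requirement is a 64-bit CPU with hardware virtualization extensions (Intel VT or AMD-V) enabled. One quick way to sanity-check a candidate box before committing to the install is to boot a Linux live CD and inspect the CPU flags; a small sketch of that check (the release notes remain the authoritative source):

```python
import os

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if /proc/cpuinfo lists Intel VT ('vmx') or AMD-V ('svm').

    Note: the flag being present means the CPU supports the extension;
    it can still be disabled in the BIOS, which this check can't see.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

if os.path.exists("/proc/cpuinfo"):  # e.g. when run from a Linux live CD
    with open("/proc/cpuinfo") as f:
        print("hardware virtualization flag present:", has_hw_virt(f.read()))
```

Even with the flag present, double-check the BIOS setup screen, since several vendors shipped machines of this era with VT disabled by default.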
How does Hyper-V fit into Windows?
The Hyper-V hypervisor exists as a server role in the Windows Server 2008 installation. In my lab scenario, I will be using Windows Server 2008 Standard Edition, 64-bit with the full installation (instead of the core installation, which is explained later). For the Hyper-V beta, 64-bit processing is required. Adding the Hyper-V role works like adding most other roles in the Windows Server configuration, through the Server Manager as shown below:
The Hyper-V implementation on Windows Server 2008 differs from other enterprise virtualization products in that the virtualization engine may exist alongside other roles. For example, you would not want to make your VMware ESX server a file server, or install a networking role like DNS or DHCP on it. Those platforms are purpose-built environments only, whereas Hyper-V on Windows Server 2008 will integrate with other roles should you want that in your virtual environment. This configuration would be a full install of Windows Server 2008.
The alternative is the core installation of Windows Server 2008, where you can install Hyper-V and Windows Server 2008 without a Start Menu and Windows Explorer environment. The core installations of Windows provide only a command-line interface and specified server roles, including Hyper-V.
Now that I have your attention, I need to keep you on the edge of your seats until next week, when I’ll have some exposure to virtual machines running on Hyper-V. Check back here for more on the beta Hyper-V!
IT pros are split on the potential impact of VMware ESX Server 3i and on the importance of new bells and whistles, such as Hewlett-Packard Co.’s plug-and-play deployment capabilities and support from other major hardware vendors.
Last week, VMware and HP announced that at the end of March, VMware ESX 3i will be packaged on 10 models of HP ProLiant servers. So do embedded hypervisors like ESX 3i represent the next stage of the virtualization evolution?
Of course VMware seems to think so, saying the integrated offering will provide “greater speed and simplicity for customers new to virtualization, as well as increased capacity expansion for customers who already use VMware’s data center virtualization and management suite, VMware Infrastructure 3 (VI3).”
Will this optimism translate into increased virtualization in the enterprise? VMware and virtualization expert Andrew Kutz thinks that the exclusivity of the plug-and-play capability of 3i on HP is a stretch:
Plug-and-play is another no-win for 3i. The plug-and-play functionality of 3i is as artificial as its simplified management. VMware asserts that independent hardware vendors (IHVs) will be able to ship servers with 3i directly to the customer, where the customer can simply plug the box into the network and storage, boot it, and presto: installation complete. That’s fantastic! But I can order a server from an IHV with ESX 3 pre-installed on it today. The difference is that VMware has added this data center plug-and-play functionality exclusively to its 3i product. There is no reason that it cannot work with 3.0 or 3.5 as well. This is just another example of a company trying to promote a new product with features that do not have to be exclusive; they are exclusive only because someone decided they should be.
While Kutz believes that 3i is a significant step up, he says on SearchVMware.com that “ESX 3i is simply an evolution, not a revolution.”
The biggest change between ESX 3i and its predecessors (ESX 3.0, 3.5) is that with 3i, agents cannot be installed on a host. Erik Josowitz, the vice president of product strategy at Austin, Texas-based Surgient Inc., a virtual lab management company, says that for independent software vendors, “VMware’s roadmap for virtualization management runs through VirtualCenter.” Putting 3i on solid state “sends a clear signal that VMware doesn’t want people installing on the host anymore,” according to Josowitz. He notes that “from a security standpoint, it’s a good thing,” since it locks down the partition that used to be available under the host, thus keeping out any applications that might weaken a system. But now, organizations that want to work with blended images will need to architect their tech support to talk through VirtualCenter rather than a host agent.
While the solid-state product promises plug-and-play deployment of VMware’s thin hypervisor product on HP’s ProLiant servers, some analysts are still saying, “Don’t believe the hype about 3i.” Citing problems with monitoring and scaling of 3i, the ToutVirtual blog complains that 3i is “a complete disappointment” at general release: combine “this weak infrastructure design issue with the fact that you can not get any realistic information out of the hardware state of a 3i server,” and VMware ESX 3i is “dead on arrival.”
But SearchServerVirtualization.com expert Rick Vanover begs to differ. Vanover holds ProLiant servers in high esteem, and if ESX 3i is good enough for HP, then it’s good enough for him:
I’ve worked on many different types of servers, and I think the ProLiant servers are superior. The big reason is that the ProLiant BL blade series do not have a competitor to the Insight Control Environment. Further, the Integrated Lights-Out 2 Standard Blade Edition (or iLO) is a better management interface compared to its competition. If VMware takes HP as a partner (or at least as their first partner) for an ESX 3i supported platform, I would choose it in a heartbeat.
But does it really matter that 3i is overhyped? Major vendors now put 3i inside their servers. This reduces the need for major evaluation and opens the door for IT shops to choose servers with “3i inside” and use it when and how they want.
What do you think? Leave us a comment below or send us your thoughts.
Oracle doesn’t officially support its products running on VMware, but it will happily support you if you virtualize on its Xen variant, Oracle VM. But at least one large Oracle PeopleSoft customer with which I spoke recently refuses to play along and will maintain its VMware status quo.
“We looked at Oracle VM, but it’s where VMware was two or three years ago,” said the systems administrator, who asked that he and his organization not be named.
Not only did this systems administrator find Oracle VM to be technically inferior to ESX Server, but he also didn’t want the burden of having a second virtualization environment to run and manage. “We’d rather not do that,” he said.
The other alternative — to switch from PeopleSoft to a competing product that’s supported on VMware — isn’t an option. “Our investment in PeopleSoft is too great,” he said, and “in the grand scheme of things, running it on dedicated hardware is a drop in the bucket.”
It’s a shame, he said, because in the past six months, his group has begun actively virtualizing not only “the low-hanging fruit” but increasingly more production workloads. “This whole Oracle-hating-VMware thing has really put a crimp in our style,” he said. Meanwhile, the organization’s CIO has approached Oracle and told the vendor, “We’d like to [virtualize Oracle applications], but with terms that don’t involve unilateral demands that we use only your software,” the administrator reported. As of yet, there’s been no word back from Oracle, but as far as he’s concerned, there’s no deal to be made.
“[Oracle] can either change their mind, or we’ll keep buying physical hardware. We’re not moving to Oracle VM.”
OK, I had a foul-up in my last post about my physical-to-virtual (P2V) migration journey. I used Vizioncore’s vConverter in my file server P2V migration, and it didn’t work. I then combined PlateSpin’s company name with Vizioncore’s product name (i.e., “PlateSpin vConverter”), as if there were some merger from the great beyond, rather than doing the simple thing and actually editing my own posts for accuracy. It’s all been corrected now, but for full disclosure purposes, there was indeed a company name and product name mix-up.
Now, that said, I’ve used both products on other OEM boxes and they went just fine, so take it for what it is: a singular experience and the nature of blogging (and working with editors… that “no thanks to” line was not mine).
I seem to have a hard time with company names… I previously used incorrect capitalization for eGenera (it’s actually Egenera) some time back, and I often refer to Openfiler as OpenFiler.
Back to the tale… The domain controller from hell – a Windows 2000 server with OEM disk drivers, OEM RAID controller management tools, OEM disk management tools, and OEM network management tools. A machine nearly as old as my oldest child, one that shipped with Windows 2000 before there were service packs. It has had patches, service packs, driver roll-ups, OEM driver updates, and (probably) chocolate sauce slathered onto it in its lifetime.
It has also had so many applications added and removed that I think I could actually hear grinding from worn spots in the registry and creaking from the drive platters. Needless to say, this DC was spiraling down into the vortex at the end of usefulness. That’s not to say it wasn’t a great machine for a long time, but with full disks and a gurgling CPU, the poor thing was about done. It still doesn’t beat the teenagers I decommissioned early in 2007 – a pair of Novell 4.11 boxes that were old enough to drive (well, with a Learner’s Permit anyway). Gotta love longevity.
First order of business: a little cleanup. Backup. Backup again to a second location. Remove IIS (it’s not used anymore). Remove an extraneous antivirus management console (a competing product is in use, this one’s just dead weight). Dump the temp files. Wipe out randomly saved setup files for applications like Acrobat Reader. Compress what I can. Defragment.
Finally, enough free space is there to support the VMware Converter agent. Let’s try that and see how it goes (often, it’s the only tool I need). Failure. Hours of waiting are gone, even though the conversion hit 100% and claimed success. Turns out there’s an invisible OEM partition sitting out there that the OEM tools don’t show, and said partition has hosed the boot device order on the new virtual machine (VM). What do I see? The Blue Screen of Death (BSOD), pointing me to INACCESSIBLE_BOOT_DEVICE.
Not a huge deal, right? Just edit the boot.ini, right? No need to worry about that missing partition, right? Nope. Sure, I try to repair it by mounting to a good VM and going into the disk to edit the ini file. No luck. OK, let’s get rid of the driver. Now we can see the partition. Done. Let’s try again.
Failure. Are we sensing a pattern here? Same BSOD. Just like the first time, Converter goes 100% and the box BSODs on boot again. So, now that the disk management tools are no longer hiding the OEM partition, I edit boot.ini to get rid of the partition, make sure that the partition is unchecked in Converter, and try again. It succeeds!
Kind of. It’s slower than molasses in the Minnesota winter, that kind of winter where all you want to do is sit inside by the fire and let the good folks out at — sorry, for a minute there I was channeling my inner Garrison Keillor. I’m back. It’s drivers.
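Backing up a step: the boot.ini surgery above is mechanical enough to script. Once a hidden partition vanishes, every partition() number above it in the ARC paths shifts down by one. A rough sketch of that rewrite (a hypothetical helper, not a supported tool; always keep a copy of the original boot.ini):

```python
import re

def drop_partition(boot_ini: str, removed: int) -> str:
    """Shift partition() numbers in boot.ini ARC paths down by one
    for every partition numbered above the one that was removed.

    E.g. with the hidden OEM partition at partition(1) gone, the old
    partition(2) system volume becomes partition(1).
    """
    def renumber(match: re.Match) -> str:
        n = int(match.group(1))
        if n > removed:
            n -= 1
        return f"partition({n})"

    return re.sub(r"partition\((\d+)\)", renumber, boot_ini)

line = r"multi(0)disk(0)rdisk(0)partition(2)\WINNT"
print(drop_partition(line, removed=1))  # partition(2) becomes partition(1)
```

This only fixes the pointer; if the real volume layout and the virtual disk disagree, you still get the BSOD, which is exactly what the unchecked-partition step in Converter is for.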
The OEM RAID drivers are still in there, but they are easy to strip. And it even boots up again and runs. There’s no network though. OEM NIC drivers get the strip next, but still no network (as expected). Reinstalling the VMware tools to replace the drivers doesn’t help. Next step is to shut down the VM, remove the NIC, boot again, and then add in a new NIC.
Hosed. Now the machine boots up, but it won’t let me log on. The OS is toast, and I’m not whipping out the recovery CD.
Time to pull out another product: Vizioncore’s vConverter, an acquisition from Invirtus that’s stable, robust, and more feature-rich than VMware’s offering. I redo the P2V with this tool. Same problems with the boot screen.
And there it sits for a day, in limbo, while I spend some time on Google, TechTarget, and VMware’s websites.
Finally, it all comes together… AD is corrupt. Somewhere in all that stripping of drivers, I’ve whacked Active Directory. OK, let’s fix that: start over. Whack the VM. Re-backup. Run DCPROMO and demote the server so that it’s no longer a domain controller.
Time to P2V – I used vConverter again, but edited the VM before boot so that there’s no NIC. I boot it, remove all the OEM drivers, then add in the NIC. It boots. It runs. It flies. No need to robocopy. All the apps are in place and running. It just hums along happily and serves its purpose.
Murphy’s law: Whatever can go wrong, will.
After I wrote a short article on VMsafe last week, I received feedback from Burton Group analyst Chris Wolf, who was at the VMworld conference in Cannes, France. His comments weren’t included in the story, but they put things in perspective, so here they are:
“VMsafe is a very important technology in my opinion, as it changes how virtual environments are secured. Today, security appliance virtual machines (VMs) typically monitor other VMs by connecting to them over a virtual switch. The result is virtual network monitoring that resembles physical network monitoring,” Wolf said. “The current model is fine until VMs begin to dynamically move across a virtual infrastructure. Dependent security appliances always have to follow their associated VMs as a VM is relocated. This can complicate the live migration and restart processes.”
“With VMsafe, you would typically configure a security appliance per physical host, such as an [intrusion prevention system] virtual appliance. The security appliance vendor would leverage policies to determine what to monitor (such as by VM, virtual switch, subnet, etc). With VMsafe, the appliance can connect to any virtual switch by accessing it through the hypervisor; you no longer have to configure a special promiscuous port group for network monitoring,” Wolf said. “With security configured at the host level with no direct attachment to virtual networks, VMs can move freely about the virtual and physical infrastructure and still have their related security policies enforced.”
Wolf continued, “VMsafe also provides the framework for offloading many security activities to special-purpose security VMs, including roles such as antivirus monitoring. As we move to an automated or dynamic data center, having special-purpose security appliances that are capable of enforcing security policies at the hypervisor level can ease security management in an environment that will be constantly changing. Sure, it’s possible to enforce security policies with special purpose network-based appliances, but such configurations would be substantially more complex to deploy and manage than comparable solutions based on VMsafe technology.”
You may be considering new blade servers for your virtual host environments, and you are not alone. Gone are the days of the perception that blade servers have less horsepower than their general-purpose counterparts. I recently attended a local virtualization user group meeting, and we talked at length about some new blade server products. Here are some takeaways on what virtualization administrators need to know about the new blade products:
Processor and memory inventory
The newest blade servers can run four sockets of four cores each in one blade, and one model in particular that was favorably discussed is the HP ProLiant BL680c series. This is great for virtualization implementations with an incredibly small footprint. With the BL680c, each blade can house up to 128 GB of RAM. ESX 3 and Microsoft Windows Server 2008 are supported for virtualization implementations on this series of blades. One important note on the HP blade series is the Virtual Connect product for network connectivity. Fellow TechTarget contributor Scott Lowe covers this well in a recent tip.
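To put those numbers in perspective, here is a back-of-the-envelope estimate of VM density per blade. The per-VM RAM figure and the overhead reservation are assumptions for illustration; size them from your actual workload profiles:

```python
# Rough consolidation estimate for a single BL680c-class blade.
# Only the 128 GB ceiling comes from the spec above; the other two
# numbers are assumptions, and memory is treated as the only limit.
blade_ram_gb = 128           # max RAM per blade, per the spec above
hypervisor_overhead_gb = 8   # assumed reservation for host + overhead
ram_per_vm_gb = 2            # assumed average guest allocation

vms_per_blade = (blade_ram_gb - hypervisor_overhead_gb) // ram_per_vm_gb
print(f"~{vms_per_blade} VMs per blade before CPU becomes the limit")
```

With those assumptions you land around 60 guests per blade on memory alone, which is why the RAM ceiling, not the socket count, is usually the first thing to check on a blade spec sheet.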
You have to love the small footprint
With the momentum of virtualization migrations not slowing, the small footprint is very welcome in crowded data centers. The BL680c can pack 80 hosts of the spec above into one 40U rack with four enclosures! Using general-purpose servers would take at least double the space to get the same number of virtual hosts.
Given the very small footprint of the blade server, there are some limitations to connectivity. While the BL680c excels in most areas, it is limited to only three expansion interfaces for additional networking and Fibre Channel connectivity. Most implementations, however, will be able to meet their connectivity requirements from the available options.
A smaller issue may be power sources. Blade servers generally take different power sources than standard general-purpose servers. The trade-off is that when feeding a blade enclosure from an L15-30P outlet, you may not need a power distribution unit (PDU), since the PDU may take the same L15-30P interface. So do some planning to make sure the correct power sources are available.
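As a worked example of that power planning, here is the usable capacity of a single L15-30P feed. The voltage and derating are assumptions to check against your own facility (L15-30P is a 30 A three-phase locking connector, commonly fed at 208 V line-to-line, with the usual 80% continuous-load derating):

```python
import math

# Assumptions (verify against your electrical specs): 208 V
# line-to-line, 30 A per phase, 80% continuous-load derating.
volts = 208
amps = 30
derate = 0.8

# Three-phase power: sqrt(3) x V(line-to-line) x I, then derate.
usable_watts = math.sqrt(3) * volts * amps * derate
print(f"usable capacity: {usable_watts / 1000:.1f} kW per feed")
```

That works out to roughly 8.6 kW per feed, which you then compare against the enclosure's worst-case draw to decide how many feeds (and whether a PDU) each rack needs.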
The current generation of blade servers is a serious contender for virtualization hosts, and the small footprint only makes the case more compelling. Now that blades offer performance specs comparable to their traditional server counterparts, we should consider them for the host hardware environment.
Based on the volume of media coverage, the explosive growth of iSCSI storage area network (SAN) companies and my own experiences, I think it’s safe to say that NFS and iSCSI SANs have begun to prove their worth as valid storage platforms with VMware’s Infrastructure 3, and both are also a lot cheaper than 2 Gb and 4 Gb Fibre Channel SANs. This has made them very interesting to those in the lower tier of purchasing power, namely the many small and medium-sized companies that can’t afford the initial acquisition costs (the bulk of my clientele, by the way). That brings in the great equalizer: open source NFS and software iSCSI solutions. Given a choice, I prefer iSCSI, because it can support VM partitions.
One of these OSS iSCSI packages is called IET, the iSCSI Enterprise Target, and it’s rolled into a Linux distro called Openfiler, a distribution focused on being a storage platform. But before we all rush out and use Openfiler for all our needs, note that IET itself has had some problems supporting VMotion, so it’s earned a bit of a bad rap. In fact, there are often wails of protest about how IET is unsupported by the good folks at VMware because of the way two SCSI commands are structured and how that can hose a VMotion move. Luckily, there’s some good news on that front.
Truth be told, the good news is only half-good. IET’s release 0.4.5 addresses those problems, and the better news is that this update is rolled into the latest patches for Openfiler (which has a self-update option, just like any other distro). There’s a nice link in a blog here. The reason it’s only half-good: IET is still not supported, patch or no patch. Still, I’ve been using it in the lab, and it works just fine for me. I’ve done 1, 2, and 10 VMotion moves by manipulating DRS, both manually and automatically, and haven’t hit a snag yet. That said, it isn’t supported.
Hitting an Openfiler target with VI3.5 is easy. First, install Openfiler, ESX and VirtualCenter and configure them. I will skip most of the details of basic setup, as it’s superfluous. Openfiler’s install guide is here, but the Petri IT Knowledgebase has a specific article that covers it soup-to-nuts here. I’ll post another below that’s a little shorter and to the point. You’ll need to create the virtual machine on the local disk of the ESX host (otherwise, what’s the point?) and configure it to take up just enough, but not too much, of that precious internal disk space. In my example, I built a virtual machine with two virtual disks (you can do it with one, if you want). The trip from an unallocated lump o’ disk to iSCSI-enabled volumes went like this:
Part One: Disk Management
Set up your physical storage from the get-go. During setup, I chose NOT to do anything to the second drive, which is to say I didn’t let Disk Druid touch it at all. When Openfiler boots and you have your basic settings done (things like NTP and the admin password), go to the Volumes button and click on Physical Storage Mgmt. From there, create the extended and logical partitions you will need to get your iSCSI volumes up. The steps, once Openfiler is up, look a little like this:
The basic setup. System, and nothing else:
Creating the Extended Partition:
Creating the Logical (Physical) Partition:
The Partitions on the Second Drive:
During creation, I ran into a problem on both machines I did this build on: in each case, I had created the logical partition, but Openfiler didn’t see it. I had to delete it and create it again.
Part 2: Creating the LUN
First up, enable the server to act as an iSCSI Target. This is done from the Services tab, under Enable/Disable. In this example, I’m only using iSCSI, but if you wanted to enable NFS, this is where you would do so. Mine looked like this:
Then, go back to Volumes, under Volume Group Mgmt, and create the iSCSI volume group. In this example, I’m using the whole disk, but you don’t have to.
The before and after looks like this:
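Behind those screens, Openfiler’s volume group management is driving Linux LVM. If you prefer to see (or script) roughly what the GUI is doing, the equivalent CLI steps can be assembled like this. This is a sketch: the device and volume names are examples only, and running LVM commands against the wrong disk is destructive:

```python
# Sketch of the LVM commands behind Openfiler's volume group screens.
# Device and volume names are made-up examples. The default just
# prints the commands; pass execute=True at your own risk.
import subprocess

def lvm_commands(partition="/dev/sdb1", vg="san", lv="vol0", size="100%FREE"):
    return [
        ["pvcreate", partition],                 # mark the partition for LVM
        ["vgcreate", vg, partition],             # the "volume group" step
        ["lvcreate", "-l", size, "-n", lv, vg],  # the volume itself
    ]

def run(commands, execute=False):
    for cmd in commands:
        print(" ".join(cmd))
        if execute:
            subprocess.run(cmd, check=True)

run(lvm_commands())
```

I used the whole disk here (`100%FREE`), matching the choice above, but the extent count is where you would carve out less.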
Next, head over to Create New Volume (one tab to the left). Creating a volume is, like the rest of the process, relatively straightforward. I made one volume comprising the entirety of my one volume group. Having chosen the name “san” (how creative of me) earlier in the process, I kept the same simplicity for the volume I created.
After that, you are forwarded to the List of Existing Volumes tab. Congratulations, you have an iSCSI SAN. Now, about configuring that SAN…
Part 3: Security
There are two ways to lock down your SAN. One is via the network, allowing only certain hosts or networks access to the SAN. From the List of Existing Volumes tab, select the Edit link under the Properties column. You will then be presented with a screen called Volume’s Properties giving you a number of options for securing the LUN. I won’t dwell too much on the network access restrictions, as my screen captures will be different from other networks. The long and short of it is that in the General tab there is a sub-tab called Local Networks. You can create individual hosts and subnets there, and then allow or deny access to these networks on the Volume’s Properties tab. Using MS-CHAP to lock down who can connect to your target is a much smarter plan.
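If you go the CHAP route with an IET-based target like Openfiler, the setting ultimately lands in the target configuration as an IncomingUser entry. A sketch of generating that stanza (the target name, user, secret and LUN path here are made-up examples, and Openfiler normally writes this for you from the web UI):

```python
def ietd_target_stanza(iqn, chap_user, chap_secret, lun_path):
    """Build an iSCSI Enterprise Target config stanza with CHAP.

    All argument values used below are examples. Note that Microsoft's
    iSCSI initiator expects a CHAP secret of 12-16 characters.
    """
    return "\n".join([
        f"Target {iqn}",
        f"    IncomingUser {chap_user} {chap_secret}",
        f"    Lun 0 Path={lun_path},Type=fileio",
    ])

print(ietd_target_stanza(
    "iqn.2008-03.com.example:san.vol0",  # example IQN
    "vmware", "secret123456",            # example CHAP credentials
    "/dev/san/vol0",                     # example LVM volume path
))
```

On the ESX side, the same user and secret go into the software iSCSI initiator’s CHAP settings, so only hosts that know the secret can log in to the target at all.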
Part 4: Getting VMware to See Your LUN
This is extremely straightforward and is no different from any other iSCSI implementation. The only thing I’ll mention here is something left out of many, many documents: open the ports for iSCSI on your ESX hosts! In 3.5, it looks like this:
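Once the firewall ports are open, a quick sanity check from any machine on the storage network is whether the target answers on TCP 3260 (the iSCSI default) at all; a minimal sketch, with a made-up target address:

```python
import socket

def iscsi_port_open(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI port succeeds.

    This only proves the network path and firewall rules; it says
    nothing about CHAP authentication or LUN visibility.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical Openfiler address):
# print(iscsi_port_open("192.168.1.50"))
```

If this fails, fix the network or firewall before touching VirtualCenter; if it succeeds but the LUN still doesn’t appear, look at CHAP credentials and the target’s access restrictions instead.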
There are a lot of considerations to address about what you keep there, chiefly around reliability and performance. For example, if you put virtual machines on an iSCSI SAN that lives on a physical ESX host, then when that physical host dies, it takes the SAN with it, along with all those virtual machines. Another consideration is networking. The long and short of it is this: if you have a whole lot of unused disk space on your ESX hosts, you can use Openfiler to put stuff there. As to what stuff, that’s entirely up to you. I personally prefer .iso files and other junk that doesn’t need to go anywhere or do anything, and isn’t going to hurt me when something fails. Of course, you could also use NFS, which Openfiler supports.
The end result: Openfiler gets a solid 8 pokers from me. I’d give it a ten for being a soup-to-nuts, full-featured distro that I can use at almost any client site for almost any purpose, and if this were a storage column, it would remain a ten. It drops back because they haven’t yet earned a nod from VMware, which is something I would have expected them to pursue, considering their growth and media attention lately. What makes it frustrating is that Xinit Systems, the company behind the project, makes storage appliances. Since they’ve been producing Openfiler since 2003, I think it’s about time they got on the move with this!