Ever since I learned last summer that VMware had leased a massive 100,000 square feet of data center space along the Columbia River in East Wenatchee, Wash., I’ve assumed that the company would eventually become a cloud computing provider, and not just a provider of the underlying cloud infrastructure software. Turns out I was wrong.
At a lovely dinner for press and analysts at L’ecrin in Cannes tonight, I had the pleasure of speaking with VMware’s CIO Tayloe Stansbury, who assured me that the company has no designs on becoming a cloud provider. Not that they didn’t consider it. But the more they thought about it, the more they came to the conclusion that it would be reckless to compete with their partners (folks like Savvis, T-Systems and Terremark), Stansbury said.
So what is VMware doing with that ginormous data center in East Wenatchee? Testing and development, pure and simple. “It’s one of the ironies of being VMware that we have to develop our code on physical hardware,” Stansbury said. To ensure that ESX can effectively virtualize workloads on any x86 system, VMware’s R&D team must test the code against every conceivable server and storage platform. These days, all that hardware is being shipped to East Wenatchee, where it runs on electricity that costs two cents per kilowatt-hour, versus the 20 to 30 cents per kilowatt-hour VMware pays for power in places like Palo Alto, Calif., and Cambridge, Mass.
So I stand corrected (it won’t be the first time). Going forward, I hope VMware decides to share further details about its new data center.
This week’s launch of RNA Networks and its memory virtualization technology may not mean much for VMware administrators yet, but give it a couple years, and the technology could have broad implications for how you buy and configure your virtual host servers.
The idea behind RNA’s product, RNAmessenger, is to decouple the memory in a server, and to put it in a resource pool that can be accessed by several machines in times of need. The technology consists of a driver that gets installed in a server, plus control software that runs on an appliance.
For now, RNA is targeting applications like hedge fund programmed trading, 3D rendering and oil and gas reservoir modeling: classic high-performance computing (HPC) applications with high-volume, low-latency requirements. But fast-forward a couple of years, and another possible use case for the technology is to distribute memory across hosts in a virtual cluster, said Frank Tycksen, RNA vice president of engineering.
Virtualization hosts, like a lot of high performance platforms, tend to run out of memory long before they run out of CPU power, he explained. But memory, unfortunately, tends to be prohibitively expensive. Thus, rather than buy additional memory, wouldn’t it be preferable if you could tap into the excess memory of another host in your resource pool?
That could alleviate some of the pressure to purchase servers stuffed to the brim with expensive RAM. “Our goal is to help you become CPU-bound, rather than memory-bound,” said Tycksen.
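The idea of becoming CPU-bound rather than memory-bound can be illustrated with a toy placement routine. This is purely a conceptual sketch of pooled memory; the hosts, function name and placement logic below are my own invention, not RNA’s product or API:

```python
# Conceptual sketch only (not RNA's actual software): a memory-bound
# host borrows unused RAM from peers in the pool before anyone has to
# buy more hardware.

def place_workload(hosts, needed_gb):
    """Satisfy a memory request from local free RAM first,
    then from spare capacity elsewhere in the pool."""
    local = max(hosts, key=lambda h: h["free_gb"])
    if local["free_gb"] >= needed_gb:
        local["free_gb"] -= needed_gb
        return f"placed locally on {local['name']}"
    # Borrow the shortfall from peers that still have spare memory.
    shortfall = needed_gb - local["free_gb"]
    donors = [h for h in hosts if h is not local and h["free_gb"] > 0]
    if sum(h["free_gb"] for h in donors) < shortfall:
        return "pool exhausted: buy more RAM"
    local["free_gb"] = 0
    for h in donors:
        take = min(h["free_gb"], shortfall)
        h["free_gb"] -= take
        shortfall -= take
        if shortfall == 0:
            break
    return f"placed on {local['name']} with borrowed memory"

hosts = [{"name": "esx1", "free_gb": 4}, {"name": "esx2", "free_gb": 12}]
print(place_workload(hosts, 10))  # fits in esx2's own free RAM
print(place_workload(hosts, 6))   # no single host has 6 GB left; borrows
```

In the real product, of course, the borrowing happens over a low-latency fabric rather than a Python dictionary, and the latency cost of remote memory is the crux of the engineering problem.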
But interested parties should not hold their breath; “this is future technology,” warned Andy Mallinger, RNA’s vice president of marketing.
You may have noticed by now that our SearchServerVirtualization blog has a different look and feel. That’s because we’ve migrated it to IT Knowledge Exchange (ITKE), a TechTarget IT user forum with a bunch of features that weren’t available to us before. For a quick rundown of ITKE’s features, I’ve invited Brent Sheets, TechTarget’s ITKE community manager, to describe it. So without further ado, here’s Brent:
Welcome to our new blog location on IT Knowledge Exchange.
I’d like to take a moment to introduce you to some of our new blog features and also some of the features on ITKE.
Instead of a long list of categories, we now have a tag cloud. Click any topic in the tag cloud, and you’ll see only posts on that topic. The tag cloud is dynamic, so the more a tag is used, the larger and darker it will appear. This helps you quickly see the most popular topics.
You’ll also notice we’ve integrated more of our related editorial content in the right-hand sidebar. If you’re on a post about a specific topic and wish to know more after reading the post, be sure to browse the links in the sidebar.
We always appreciate your sharing our content on social networking sites and we’ve increased the number of bookmarking tools from four to 43. If you enjoy a post, please be sure to share.
Near the top of the page, you’ll see a row of tabs. You can click the IT Blogs tab to find dozens of technology blogs, both user generated and TechTarget editorial blogs. You can even request your own blog.
There is also a tab labeled IT Answers, where you can ask your own IT question and have it seen by thousands of IT Knowledge Exchange members. So be sure to pose your own virtualization question, browse thousands of virtualization answers or help out a fellow IT pro by answering a question.
Thank you for stopping by, and be sure to bookmark our new blog location and visit the server virtualization section on IT Knowledge Exchange.
Last week, Citrix Systems discussed Project Independence and its plan to develop a Xen bare metal client hypervisor for Intel’s Centrino and Core 2 Duo chips, the same chips that power the world’s desktops and laptops. Now, the company has announced that it is joining hands with venture capital firms Highland Capital Partners and Flybridge Capital Partners to take a minority stake in Virtual Computer of Westford, Mass.
You may remember reading in this blog about Virtual Computer, whose NxTop PC management suite relies on a — surprise! — Xen client hypervisor. But don’t think for a minute that Citrix is paying Virtual Computer to do its development dirty work. “We’re not doing the investment in VCI so that they can build our client hypervisor for Intel,” said Andy Cohen, Citrix senior director of strategic development. Rather, the investment has more to do with the relative dearth of Xen experts. “There are only so many really smart Xen guys in the world,” Cohen said, and Virtual Computer’s CTO Alex Vasilevsky, formerly of Virtual Iron, is one of them. Citrix’s “Xen guys,” meanwhile, include its vice president of special products Ian Pratt and CTO Simon Crosby, both formerly of Cambridge University and XenSource. Thus, the focus of the investment will be on “getting some really smart guys around the table.”
But Dan McCall, Virtual Computer president and CEO, acknowledges that VCI has a wealth of expertise in building a hypervisor for the wild-and-woolly world of client computers. Unlike servers, “PCs are complicated devices,” McCall said, that support a bewildering number of graphics and network cards, USB devices and the like, “and all of these different chips and technologies need to be virtualized.” VCI’s job, therefore, “is to make sure that the [virtualized] PC runs as well as it possibly can.”
However, it’s “a little too soon to know” exactly which elements of the joint Citrix-Virtual Computer hypervisor will go back into the open source Xen hypervisor and which will stay proprietary, said Citrix’s Cohen. “There are a number of strategic questions about what goes into the Xen open source hypervisor, and what part remains commercial,” Cohen said.
For its part, Virtual Computer hasn’t given up hope on its own NxTop PC management suite. “Our goal is to help Citrix get a ubiquitous Xen-based hypervisor out there,” said McCall. That done, “there’s a whole bunch of intellectual property that is uniquely ours,” he said, citing NxTop’s provisioning and patching, integrated backup and persistent end-user personalization technologies.
The hypervisor itself is less important, McCall said. “As we built out the product, we always intended to be able to use other hypervisors. So far, we’ve used the iTunes/iPod model where you can control both ends of the user experience, but if someone else’s hypervisor comes around, we’ll plug in to it.”
A small virtual appliance company in Portsmouth, N.H., called vKernel first grabbed my attention last year with its virtualization management software, and it has grabbed it again with a new online virtualization community called Compare My VM.
The site gives users a way to anonymously compare their virtual machine (VM) configurations, by application category, with those of their peers, see how others are allocating resources and, ideally, take something useful back to their own environments.
vKernel’s founder and CEO, Alex Bakman, came up with the Compare My VM idea to help the IT community learn from one another about allocating resources for specific application VMs.
“How to properly allocate resources in a virtual environment is still a trial and error process. Simply using the same allocations of a physical server when virtualizing it can quickly lead to resource capacity issues caused by either over or under allocations,” said vKernel’s communications director, Christian Simko. “Ultimately, users can come to the site to learn how to ‘right size’ VMs so that they can drive higher VM densities without impacting performance.”
By setting up Compare My VM as a community site, visitors are more apt to share with and learn from their peers than they would be if a product vendor told them how and what to do, Simko explained.
So far, Compare My VM has around 300 submissions. Users typically enter their VM info either because they think their setup is da bomb or because they need some help, which is why vKernel added a peer-to-peer ranking system to the site, Simko said.
“One person may think their setup for an MS SQL VM supporting X number of users is allocated just perfectly,” but it might not look so hot when viewed outside the four walls of that user’s data center. “We give others a chance to rank what they think is the right way, much like how blog sites give others the ability to rank stories,” Simko said.
As is vKernel’s style, the site is designed to be simple to navigate and submit information to, allowing users to find similar profiles and compare them.
“It is a tool to help admins learn, share, and improve,” Simko said. “VKernel has only set up the framework of this site; we are not populating it or dictating how people should be doing things. It’s purely a community tool.”
I encourage you to check out the free CompareMyVM.com site and anonymously compare your VM resource allocation profiles with those of your peers. You will either feel pretty good about what you are doing, or really bad – and in that case, you’ll probably learn something.
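As a rough illustration of what “right-sizing” against peer data might look like, here is a toy comparison against a peer median. The profiles, thresholds and function below are invented for illustration and are not Compare My VM’s actual data or algorithm:

```python
# Hypothetical illustration of right-sizing a VM against peer profiles;
# all numbers and thresholds are made up, not Compare My VM data.
from statistics import median

def compare(mine, peers, label):
    """Flag an allocation that sits far outside the peer median."""
    med = median(peers)
    if mine > 1.5 * med:
        return f"{label}: possibly over-allocated (yours {mine}, peer median {med})"
    if mine < 0.5 * med:
        return f"{label}: possibly under-allocated (yours {mine}, peer median {med})"
    return f"{label}: in line with peers (median {med})"

peer_vcpus = [2, 2, 4, 2, 2]    # imaginary peer MS SQL VMs, vCPU counts
peer_ram_gb = [4, 8, 8, 4, 8]   # imaginary peer RAM allocations

print(compare(8, peer_vcpus, "vCPUs"))    # flags over-allocation
print(compare(8, peer_ram_gb, "RAM GB"))  # in line with peers
```

The point is the same one Simko makes: the community median, not the physical server you migrated from, is the more useful baseline.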
When it comes to the desktop, it’s clear that virtualization has a huge role to play. But is the desktop best served by VMware’s server-based virtual desktop infrastructure (VDI) model? Some people don’t think so.
At Virtual Computer, a new startup in Westford, Mass., the thinking is that for desktops, the virtualization layer belongs directly on the client, in the form of a bare-metal hypervisor. There the hypervisor brings management benefits like simplified provisioning and patching of images, but without the mobility and performance limitations of VDI, said Doug Lane, Virtual Computer’s director of product marketing and management.
When VMware announced its intention to deliver a client hypervisor for “offline VDI” this fall, the company tacitly acknowledged VDI’s shortcomings, according to Lane. Meanwhile, the company is still focused squarely on delivering the desktop from the server.
“With VMware, offline VDI is the niche case,” he said. But when Virtual Computer looks out at the enterprise, it sees a preponderance of laptops and thick clients. “Our model starts there, and we think that server-hosted desktops are the niche case.”
To that end, Virtual Computer is developing NxTop (pronounced “nextop”), a PC management suite. It consists of a Xen-based bare-metal hypervisor called NxTop Engine, which is optimized for laptop-class hardware and runs Windows virtual machines, and the NxTop Control console, from which administrators can configure and provision images, set up access and protection policies, and the like. NxTop is currently in beta and is scheduled to ship by the end of the first quarter of 2009.
Without planting a stake in the ground and validating one strategy over another, Gartner senior research analyst Terry Cosgrove agreed that there are several issues with hosted virtual desktops (Gartner-speak for VDI). “Hosted virtual desktops are an immature, adolescent technology” that won’t be ready for mainstream use for a number of years, he said. In the meantime, “there’s a place for alternative architectures to achieve the same thing – centralized management and control, but that gives users some autonomy.”
Cosgrove also said that several stealth-mode startups working on VDI alternatives will emerge over the next couple of months. There is also speculation that Microsoft and/or Citrix are developing client hypervisors of their own, and questions about which tack laptop OEMs like Dell and Lenovo will promote. One thing is clear, though: With laptop sales now exceeding desktop sales, those OEMs “are highly motivated to have a solution that will not prohibit the sales of laptops,” Cosgrove said.
This week, when VMware announced its partnership with Hewlett-Packard to integrate its Lab Manager with HP’s Business Technology Optimization software (more specifically, HP Operations Orchestration), it showed that the company realizes it’s not an all-virtual world — yet — and that there are large pockets of physical systems not under its direct control.
“Lab Manager is a great tool from the point that you already have a physical box with ESX, storage and networking installed,” said Bogomil Balkansky, the senior director of product marketing at VMware. “From there, developers can self-deploy all these virtual configurations. But without that, Lab Manager can do nothing for you.”
To that end, the integration between VMware Lab Manager and HP’s orchestration software aims to offer “one seamless process to do all this [provisioning] from the same place,” he said, enabling the provisioning of bare metal, in addition to virtual, resources.
The target market for the Lab Manager/HP Orchestration suite, to be delivered sometime in 2009, will be the same as the target market for Lab Manager today, namely large independent software vendors (ISVs) and “nonsoftware companies that nevertheless develop a lot of software in-house, for example, telcos and banks,” Balkansky said.
VMware also plans to OEM HP’s Discovery and Dependency Mapping (DDM) Inventory software for use in a new VMware product to be announced in 2009.
The HP deal marks the third time in four months that VMware has partnered with one of the big four systems management companies (HP, BMC, CA and IBM). In September, BMC and VMware said they would collaborate on integrating VMware’s Lifecycle Manager with BMC’s Atrium Orchestrator (formerly Run Book Automation) and Remedy IT Service Management, such that joint customers could make change requests or initiate automation processes from either Lifecycle Manager or BMC products. Then, just last month, CA announced that it would OEM and resell VMware’s Stage Manager as part of its Data Center Automation suite.
“A core tenet of our virtualization management strategy is to integrate our products with the larger systems management offerings,” Balkansky said. That approach should appeal to “larger companies that aspire to a single pane of glass” while at the same time giving them the benefit of “the feature-rich products our tools provide,” he said.
This all seems logical enough, but one question I have is whether there is customer demand for these integrations. Frequently, these sorts of product integrations are the result of customers clamoring for them, but at least in the case of the HP/VMware partnership, a request for a customer reference came up short. “The idea of a single pane of glass resonates very well,” Balkansky said, “but honestly we haven’t solicited quotes and validation given that the integration hasn’t happened yet.”
Let’s face it: Spam filters are usually asked to do more, not less. But when McColo’s ISPs shut off its Internet service last month, sending global spam volumes plummeting, a lot of spam filtering applications found themselves, well, twiddling their proverbial thumbs.
That’s one more reason spam filtering company SpamTitan can breathe a sigh of relief that it packages its app as a virtual appliance. As volumes of spam go up or down, “you simply add or remove processing power or memory resources, effectively getting a bigger or smaller appliance without having to go back to the vendor,” said Ronan Kavanagh, SpamTitan’s CEO. The process is largely manual, but it’s still more efficient than the alternative.
As an independent software vendor, SpamTitan sees enormous benefits to packaging its software as a virtual appliance rather than as a hardware appliance or as a standalone application, Kavanagh said. “We don’t have to support any hardware. The entire sales cycle can happen online. We can send out evaluation units at no cost to us. The customer can take charge of their evaluation on their own time.” The list goes on.
But Kavanagh said that SpamTitan hasn’t experienced as much adoption of the virtual appliance version of its software as it might have liked. In 2006, it launched its first virtual appliance package on VMware’s Virtual Appliance Marketplace, and today about 50% of the units it sells ship as virtual appliances. The remainder ship as full ISO images, or bootable CDs.
Part of that may have to do with customer size, Kavanagh said. “Some people don’t use VMware, particularly in [small and medium-sized enterprises]. If they have less than 100 users, they tend to have very limited VMware deployments and are just as happy to use the ISO.”
Spam volumes, on the other hand, are very much on the upswing. “Yeah, they’re on their way up again,” Kavanagh said. Oh, well. It was nice while it lasted.
We can talk until we’re blue in the face about universal clients, ubiquitous data access and streamlined image management, but ultimately the question of whether virtual desktops make sense comes down to what IT decisions always come down to: money.
Johnathan, a Server Virtualization blog reader, recently posted a comment on one of my posts detailing the math for a 250-seat virtual desktop infrastructure (VDI)/thin-client implementation, which amounted to a $350 per-desktop capex advantage for VDI, a three-times-faster deployment schedule and troubleshooting times that were orders of magnitude faster (albeit harder to quantify). Not too shabby.
Of course, that was before VMware announced new pricing for its re-branded VDI suite, View 3. At $150 per seat for View Enterprise or $250 for View Premier, capex savings would decrease to $300 or $200 per desktop. That’s assuming you pay list price, which is highly doubtful. But it also doesn’t account for the storage capacity savings you might realize by using View Composer to share desktop images: an average of 70%, according to VMware.
Suffice it to say that assigning ROI dollars to an IT project is a highly personal, subjective affair. And the numbers posted by others are often suspect, as Bernard Golden points out in his article “Virtualization Projections Deserve Scrutiny.” Here, Golden looks into a Butler Group report claiming client virtualization savings of $159,000 for 1,000 desktops, or $159 per desktop, per year. Come to find out, the $159 savings was in energy costs alone. Who knows what the overall cost of the deployment really was?
At any rate, if you’ve done the math on a VDI implementation, and believe that your numbers bear scrutiny, go ahead and post the numbers in the comments section of our blog.
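For readers who want to run their own numbers, here is one way to fold a per-seat license cost and shared-image storage savings into a single per-seat capex figure. The formula and the storage inputs are my own simplification for illustration, not VMware’s math or Johnathan’s:

```python
# Back-of-the-envelope VDI capex sketch. The formula and the storage
# inputs (image size, $/GB) are hypothetical placeholders.

def vdi_capex_per_seat(hw_advantage, vdi_license, image_gb,
                       cost_per_gb, image_sharing_pct):
    """Per-seat capex delta for VDI vs. traditional desktops:
    hardware advantage, minus license cost, plus storage saved
    by sharing a base image across desktops."""
    storage_saved = image_gb * cost_per_gb * image_sharing_pct
    return hw_advantage - vdi_license + storage_saved

# e.g. a $350 hardware advantage, a $150/seat license, a 20 GB image
# at $2/GB of SAN storage, with 70% of image storage shared
print(vdi_capex_per_seat(350, 150, 20, 2.0, 0.70))  # 228.0
```

Swap in your own storage prices and discount-off-list license costs; as the Butler Group example shows, the answer is only as good as what you choose to count.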
There’s a lot of virtual desktop news these days, and before too much time passes, I want to share some tidbits on VDI that I picked up this week and that had never occurred to me before.
- VDI can save you money on software licenses. At least, that’s what I hear from Jeff Cunningham, a network administrator at the Agricultural and Resource Economics department at the University of Maryland, who implemented about 70 virtual desktops for faculty, staff and graduate students. For instance, an individual license for the data analysis and statistical software package Stata runs about $700. In contrast, a 10-seat network license costs the university $2,000, for a savings of $5,000 and budget left over to deliver interesting software to a greater number of students.
- Thin clients can withstand a long power outage. Kunal Patel, the IT director at Nina Plastics, whose VDI project I wrote about earlier this week, told me that during a recent power outage, the company’s regular desktops drained their APC battery backups in less than 10 minutes. Their Pano Logic thin clients, on the other hand, stayed on for four hours. In a similar vein, the University of Maryland’s Cunningham stuck a kilowatt meter on a bank of five Pano devices and a bank of five regular desktops and discovered that the Pano devices consumed one-fourth the power of the regular desktops.
- Some IT managers are skeptical of thin clients’ supposed cost advantages. As an example, check out Basilm’s comments on the Server Virtualization Blog. What about you, dear Server Virtualization Blog readers? Have you done the math on VDI and thin clients? What’s the verdict?
- Big companies need big security. With their strong security and compliance needs, verticals like finance, health care and government are a natural fit for VDI. But in order for them to adopt it, the VDI community needs to support biometric authentication mechanisms, such as fingerprint readers and face recognition software.
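The Stata license math in the first item above reduces to a simple comparison that generalizes to any per-seat-versus-network licensing decision. The function and the pack-based pricing model are my own simplification, not the university’s actual license terms:

```python
# Sketch of the license math described above: individual per-user
# licenses vs. concurrent-use network license packs.

def license_savings(users, individual_price, pack_price, pack_seats):
    """Cost of buying every user an individual license, minus the cost
    of buying enough network packs to cover them all."""
    individual_total = users * individual_price
    packs_needed = -(-users // pack_seats)  # ceiling division
    network_total = packs_needed * pack_price
    return individual_total - network_total

# 10 users: $700 each individually vs. one $2,000 10-seat network pack
print(license_savings(10, 700, 2000, 10))  # 5000
```

On shared virtual desktops the network license stretches further, since concurrent users, not installed machines, are what count.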
That’s all for now, folks. Brace yourself for a lot of news on virtual desktops. Things are about to get interesting 🙂