When two qlever, xany technologies meet, who knows what can happen? On Monday, Qlusters, which makes the open source OpenQRM server provisioning and monitoring software, will add official support for Xen, the open-source virtual machine monitor used by XenSource, Red Hat, Novell and others. Qlusters’ OpenQRM, which founder and CEO Ofer Shoshan likens to Cassatt and IBM Tivoli Orchestrator, but without the configuration management capabilities of an Opsware or BladeLogic, has been shipping since 2002 and has been in production since 2004. Customers number “in the tens,” Shoshan said, but what the Palo Alto, Calif.-based startup lacks in quantity, it makes up for in quality: Network Appliance, Morgan Stanley, Credit Suisse First Boston (CSFB), and TradeWave.
As far as Xen is concerned, Shoshan said the company was supporting it to address some of its customers’ current concerns around virtualization leader VMware: price, but also performance. “What we see in the market is virtualization is still mainly used in test/dev and QA [Ed. note: really?], not so much in production,” Shoshan said. Having more choices should “make people more comfortable.”
It looks like the ERP business is finally starting to catch on to the fact that their customers want to use server virtualization to cut costs, reduce downtime, and do all of those nifty things that come with SV. Novell and SAP seem to have sat down and sorted this out. It’s almost too bad it was Novell (well, the gossip-monger in me thinks it was Novell’s SUSE folks in Germany talking to SAP folks in Germany), because that means somehow this huge opportunity will get mis-marketed and then mis-sold, and some other company (VMware probably) will make the real bank off of it. Sorry Novell, I love ya dearly, but it’s a Fact of Life – you’re the Ted McGinley of technology companies. Anyway… this comes from a press release on Novell’s website:
“WALTHAM, Mass.—13 Mar 2007—Novell today announced that SUSE® Linux Enterprise Server 10 from Novell® with integrated Xen* virtualization technology is now available for SAP* NetWeaver* and mySAP* Business Suite. Jointly tested by Novell and SAP, SUSE Linux Enterprise Server with Xen met or exceeded SAP’s stringent performance requirements for SAP applications in a virtualized environment. Virtualization of the IT infrastructure for SAP deployments can result in enormous advantages for businesses, such as consolidation of workloads onto fewer servers for reduced capital and management costs. With this new validated solution, customers can confidently deploy their SAP applications in a virtualized environment using SUSE Linux Enterprise Server, resulting in a more reliable, flexible and cost-effective platform for mission-critical computing.”
It looks to me like VMware is behind the 8-ball on this. Looking over this 2006 IBM Techdoc, I found that it consists almost entirely of this rather disheartening statement:
“1. Does SAP support VMWARE for non-production?
As documented in SAP Notes # 674851, SAP currently supports VMWare in a non-production environment. This is now supported because of the improvement in storage and performance issues.
2. Does SAP support VMWARE for production?
As further documented in SAP Notes # 674851, SAP has issued a conditional statement of support. You may need to provide SAP access to a system on which VMWare is not running, but on which the error can be reproduced. This is only necessary if the error appears to occur in the layer between the operating system and the virtualizing software.”
Yet there’s more to it than that… it seems like SAP is saying the same thing MS said for years – “We will need a box that isn’t virtual in the event we can’t figure it out, or else we’ll just give up and blame it on the virtual environment”. Digging a bit deeper I found this post in a listserv relating to VMware and SAP:
“I found SAP note 171380-Linux Released IBM hardware :
The basic models listed below were successfully tested for use with SAP software in the LinuxLab and were released for practical operation: …
– eserver xSeries x445 VMware ESX Server 2.1″
And then there’s this from the actual note that a SAP-customer friend of mine was kind enough to snip for me:
“SAP does NOT support the production operation of SAP systems based on the Windows platform. The reason for this, among other things, is that Microsoft itself does NOT support the use of VMWare for MS products. If you still want to use VMWare in production operation, and you require some support, you must give SAP employees access to a system on which VMWare is not operated, but on which the error can nevertheless be reproduced.”
Here’s the rub – the references are to SAP Notes I can’t access because I’m no longer one of their customers, so I don’t know if they’ve been updated. They reference the use of Windows, meaning that the product line they’re issuing the note for is the GSX (now Server) line, not the ESX line. They also date from 2006 and 2004, prior to VI3. Back in the days of working for a Fortune 500 I could verify the currency of that Note and whether it applies to VI3 or just ESX 2.x and lower, but these days… no can do. So, let’s ask the readers – is this still true? If it is, somebody at VMware needs to get a move on and push back on the “conditional support” BS. That 2004 SAP Note that’s referenced might not even really apply – it’s entirely possible the testing was done on the service console, without a thought that VMware isn’t Linux, just the console is. All in all, quite confusing, quite annoying, and quite difficult for SAP customers who are also VMware customers. I bet that’s an awful lot of them.
To the matter at hand of the press release – it’s about darn time that ERP vendors get on board the bandwagon that Microsoft started (by supporting their own ERP on their own virtualization platform). With all the work MS has put into their acquired Great Plains, Navision, and Solomon products, they’re clearly moving in the direction of taking on the big players (all two of them), and part of that strategy has to be the price point as well as the integration “ease” of an all Windows/AD environment. Part of that price point is going to include facts and figures on power consumption, license costs, reduced hardware, and all of the other virtualization-related benefits (namely the incredible ease of BC/DR when working with virtual machines).
Why is this important? Well, I can point to a real-world example. I happen to have it from an eminently reliable source that a mega-gigantic consumer goods company (which will remain nameless because I have a lot of money invested in their stock) has been migrating from Oracle to MS SQL to run their SAP environments as part of a move to cut licensing costs. I also happen to have it from an equally eminently reliable source that there are serious concerns about BC/DR with the SQL servers supporting SAP, and whether or not they can be brought back online at all in the event of a true site-wide disaster. Now add in that most of the IT staff at this large consumer goods company are in the process of being outsourced to one of the big computer companies’ consulting arms, and you can see the disaster waiting to happen. This is exactly where such a large company should be embracing the built-in DR capabilities of VMware, Virtual Iron, or just plain old Xen. Well, ok, I wouldn’t trust a company worth tens and tens of billions to a small company, so VMware it is. If SAP won’t support VMware in production, a company like the one I’m speaking about can make them. They have the huge amount of clout required to get it done, so they should get it done.
Here’s my advice, even if you (and I’m speaking to the mega-consumer goods company here) have to use traditional DR for the SAP boxes – keep moving from Oracle to MS SQL. Save money on licenses. But for Pete’s sake (and you know who you are, Peter), don’t halt the VMware project. Keep it alive. Move the MS SQL boxes to VMware. Replicate between SANs that span multiple sites, at the block level. Keep DR boxes there running VMware, even if you have to keep them cold. If you lose a box in the data center, VMotion the virtual machine to another box. If you lose the site, bring the systems back online from that replicated SAN storage with mere minutes (maybe even mere seconds) of downtime. Save billions of dollars in lost revenue that would otherwise result from SAP being offline. Imagine not producing any soap, or delicious beverages, or any of your other wonderful products that can be bought so affordably and yet make so much money. Now imagine that happens, and that there are documented concerns about the DR reliability of the existing systems, as well as a proposed solution to that problem.
So VMware, whatchya gonna do? If I were Diane Greene, I’d be high-tailing it over to SAP and Oracle to get virtual hardware certified just like IBM, HP, and other vendors do for their physical hardware.
We’re trying to build our reader base, so I added us to Technorati. Fun times.
(Editor’s Note: scrosby is Simon Crosby, CTO of XenSource)
As the enterprise Linux Distros go to market with integrated Xen virtualization, the jockeying for position based on the promise of integrated virtualization is going to heat up.
You may recall that last year, when Novell shipped SLES 10 with Xen 3.0.2 in Q306, they were criticized by Red Hat, who claimed that Xen was nowhere near Enterprise ready. In fact, SLES 10 delivers what it promises: a basic implementation of Xen in Linux that allows SLES 10 to virtualize itself. Most of the flak that Novell received had nothing to do with Xen – for example, their use of the Linux loopback driver (which is known to be flaky) for guest block I/O caused some users a lot of pain, and their storage architecture is awkward – having all disks owned by the host makes it difficult to manage guest storage effectively.
The launch of Red Hat’s RHEL 5 with Xen is a very big deal. Why? Red Hat dominates the Linux market and RHEL 5 will deliver the Xen hypervisor to millions of servers world-wide, with a seven year support commitment. The implementation choices Red Hat has made for Xen will have a tremendous impact not only on the RHEL footprint itself, but also on the RHEL derivatives, such as Asianux, Red Flag, CentOS and (it must be said) Oracle “Unthinkable” Linux.
Fortunately, it appears from the beta that I’ve been running that Red Hat has taken time to understand some of the key subtleties of virtualization. For example, RHEL 5 uses the blktap driver for guest block I/O that XenSource added last summer – a zero-copy, high-performance alternative to loopback that I expect will improve performance and avoid the stability issues seen with Linux loopback. Also promising is the Xen-ready RHEL 4.5, which allows RHEL 5 to virtualize some of the Red Hat installed base, and Red Hat’s plans to integrate the cluster technology and the GFS file system to deliver infrastructure components for high availability based on Xen.
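For readers who haven’t poked at a Xen host, the loopback-versus-blktap choice shows up directly in the guest’s domain config file – the back-end is selected by the prefix on the disk line. Here’s a rough, illustrative sketch (the paths, guest name, and kernel version are all made up, not taken from RHEL 5 or SLES 10):

```
# /etc/xen/rhel5-guest -- illustrative domU config; paths and names are hypothetical
kernel = "/boot/vmlinuz-2.6.18-xen"
memory = 512
name   = "rhel5-guest"
vif    = [ 'bridge=xenbr0' ]

# Loopback-backed file disk (the approach criticized in SLES 10):
#disk  = [ 'file:/var/lib/xen/images/rhel5-guest.img,xvda,w' ]

# blktap-backed disk (zero-copy, the approach described above for RHEL 5):
disk   = [ 'tap:aio:/var/lib/xen/images/rhel5-guest.img,xvda,w' ]
```

Same image file either way; only the one-word prefix decides whether guest I/O is funneled through the flaky loopback driver or through blktap.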
It’s disappointing that Red Hat has elected not to package the Xen project’s implementation of the DMTF CIM standard for virtualization management. Every other virtualization vendor will support the DMTF API, so the virtualization management ecosystem vendors will have to re-jig their products to work with RHEL’s libvirt. Overall, the package still feels very much like Linux that virtualizes more Linux, and Red Hat has been wise not to over-promote what this release will allow users to achieve.
RHEL 5 ships Xen 3.0.3, which I find amusing, because at about the same time as VMware and Red Hat announced a deal, in which (amongst other things) Red Hat will certify RHEL on ESX, VMware published a performance benchmark of Xen 3.0.3 vs ESX, in which they claimed that Xen was nowhere near enterprise ready. Admittedly the VMware “study” used Windows as the guest, but for Linux, Xen 3.0.3 easily outperforms ESX. (We also debunked the VMware study, showing XenEnterprise typically equals or beats ESX for Windows and Linux).
You won’t find many users virtualizing Windows using RHEL 5, but Novell is clearly planning a Windows future for SLES 10 SP1. Living up to its chameleon logo, the SUSE team is surely hoping to get some of the new SLES 10 licensees delivered by Microsoft to use SLES 10 to virtualize Linux and Windows. SLES 10 SP1 will package Xen 3.0.4, which has good support for Windows, and Novell recently announced an arrangement with Intel for help to develop Windows drivers for Xen. Hopefully SUSE will improve their storage management architecture for SP1, and overcome the need to emulate the 16 bit graphical SLES installer on Intel VT, which makes installation annoyingly slow.
Ultimately, the jury is still out on whether the Windows IT Pro wants to use Linux to virtualize Windows, even if the Linux comes for free from Microsoft. My guess is that the answer is “no” – and that the platform virtualization model of ESX and XenEnterprise will continue to dominate at least until the arrival of Microsoft’s Windows Hypervisor. <sales plug> XenEnterprise, through its upcoming VHD support and enhanced Windows feature set, allows Windows admins to achieve high performance consolidation today, and migrate to the Windows Hypervisor whenever it arrives. </sales plug> I’m similarly doubtful that having Windows support will cause RHEL customers to switch to SLES in large numbers, no matter how aggressive the Novell pricing. But the topic of Licensing is worth another blog entry on another day…
Bottom line: the Xen hypervisor is an engine, and not a car. A powerful engine that needs a great gearbox, shocks, tires and the body to go with it. ESX Server is a Bentley, XenEnterprise a Lexus. To paraphrase Diane Greene, OS vendors are not virtualization experts. So we shouldn’t be surprised that the first OSV implementations of Xen felt more like a Ford Fiesta; moreover, the Xen feature set has been improving rapidly. Don’t count the distros out just yet – they are very focused on building value around enterprise-ready Xen virtualization, and RHEL 5 marks a great achievement by Red Hat and the Xen community.
I just had a chat with Veeam Software’s Ratmir Timashev about the release of their first commercial product, Veeam Reporter for VMware Infrastructure 3. The Russian company may already be known to some of you for its free tools — Veeam FastSCP, which administrators can use for backup or to copy large ISO images, and Veeam RootAccess, which allows ESX administrators to gain root access to an ESX server remotely. At any rate, the deal with Veeam Reporter is as follows: it collects data on ESX virtual machines, virtual networks and virtual switches, stores that data, and then displays it as a Microsoft Visio document.
What’s this good for? “This is extremely useful for planning high availability with VMotion,” Timashev told me. There are no other convenient ways to get “a good representation of where you can move your VMs,” he claims.
Timashev admits that Veeam Reporter doesn’t expose any data that you couldn’t find with VMware Virtual Center; the advantage is that you can “conveniently save the data and report on it.” Currently, Veeam Reporter takes a point-in-time image that the administrator must manually launch. For future versions, Veeam is working on automating the change management capability, plus giving admins an easy way to see what has changed.
Other features might include reporting on security permissions and storage, plus storing data in a database rather than Visio. List price for Veeam Reporter is $120 per managed ESX CPU. You can get it here.
While researching P2V migrations, I came across some discussions of problems encountered during the process.
Dave Mast, Geek at Large, learned this lesson about P2V migrations and shared it on his blog:
“This Tuesday we attempted a P2V (Physical-to-Virtual) conversion on our domain controller. It worked, but not as well as I expected. We wound up losing the SYSVOL share (where all your group policy stuff is kept) during the conversion. We still had our physical machine present, so we fired it back up and all was well.”
Mast is prepared for his next P2V experience. “I set up a second domain controller and stuck it in one of the IDFs. Hopefully the domain data will replicate from the new DC to the old, and if not, we can always start from scratch with a new VM.”
Here’s a thread about Exchange problems in P2V migrations. It’s a tale of many crashes.
Another informative P2V migration tale comes from Scott Lowe’s blog. He talks about using VMware Converter. Here’s a tidbit, but you should check out the whole entry:
“The only odd thing we ran into was that Converter refused to log in to VirtualCenter. We tried short hostname, fully qualified hostname, and IP address, and still had zero luck getting Converter to connect to VC. Fortunately, connecting directly to one of the ESX servers using the root account worked without any problems whatsoever, and the overall conversion process took about 1 hour and 20 minutes to move the 12 to 13 gigabytes of data on the source server.”
I’m still looking for P2V migration stories — successes and failures — to help me build a best practices and “beware” guide. Got any tales to tell? Add a comment, please, or send me an email at firstname.lastname@example.org.
For those who enjoy calculating potential ROI, VMware put out a new ROI calculator at http://www.vmware.com/products/vi/roi_calculator.html. In general, I’m not fond of TCO/ROI/FUD calculators, since they’re undoubtedly biased in favor of the company that puts them out, but they have some merit in that they can be the launching point for a REAL investigation into TCO/ROI/SNAFU calculations. The VRC is easy to use, offers plenty of data entry points for plugging in applicable numbers, doesn’t really rely on sketchy figures that can’t be validated, and most of all, it has the option to remove soft-costs from the final tallies.
I like that. I like that a lot. In fact, that’s the main reason I like it.
It’s also done in Flash, which makes it fairly universal across platforms (I live on an Intel MacBook Pro running Camino) with a simple plugin. The ability to remove individual line-items, in addition to ignoring all soft-costs, is pleasant, and the design is simple and elegant – graphical without being gaudy, and efficient without being complicated.
What don’t I like about it? Two things, really. The first I’ve already mentioned… it’s very clearly a sales tool, and that makes it biased by design when it comes to spitting out the final ROI numbers. While the bias isn’t as bad as you might find when comparing HP’s and Dell’s calculators, it’s clearly there. I found that by taking it down to two physical servers, zero shared storage, and half the hourly numbers, there was still positive ROI when I had soft (aka indirect) costs included. It *did* calculate out negative dollar figures when I removed them, but it *didn’t* give me a negative percentage (giving instead a 0% figure). I was able to get a negative with soft costs, but I had to drop many of the numbers down to levels that fall even further outside the VI3 target-demographic range – like downtime costs and DR assumptions. The second thing I didn’t like was the customization page. There’s an option to enter your own figures if you need to (rack size, burdened costs of staff, number of admins per server, etc.), but it’s limited. It doesn’t go into the costs per server, which are hugely variable from company to company and can severely affect the outcomes. Likewise, storage equipment costs weren’t even a field, let alone something customizable, which means the huge differences in NFS, iSCSI, and FC storage aren’t accurately represented.
Those negatives aside, the fact that I was able to manipulate so many figures is a positive thing. So, on a scale of 1 to 10, with 11 being too good but not too loud, I’d give it a 7. I’d give most other tools a 5, if that means anything.
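If you want to sanity-check any of these calculators yourself, the underlying arithmetic is simple enough to sketch in a few lines. This is a generic, hypothetical model – the figures and the formula below are mine, not VMware’s – but it shows how including or excluding soft costs can flip the final percentage:

```python
# Generic ROI sketch -- all figures are hypothetical, not from VMware's calculator.

def roi(savings_hard, savings_soft, investment, include_soft=True):
    """Return ROI as a fraction: (total savings - investment) / investment."""
    total = savings_hard + (savings_soft if include_soft else 0)
    return (total - investment) / investment

investment = 100_000  # hypothetical: VI3 licenses plus shared storage
hard = 60_000         # hypothetical hard savings: power, hardware, licenses
soft = 90_000         # hypothetical soft savings: "avoided downtime" estimates

with_soft = roi(hard, soft, investment)
without_soft = roi(hard, 0, investment, include_soft=False)
print(f"ROI with soft costs:    {with_soft:+.0%}")    # +50%
print(f"ROI without soft costs: {without_soft:+.0%}")  # -40%
```

The same spend goes from a 50% gain to a 40% loss depending entirely on whether the unverifiable soft figures are counted – which is exactly why the option to strip them out of the tally matters.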
From Virtual Iron’s Website:
“Virtual Iron Software (www.virtualiron.com), a provider of server virtualization and virtual infrastructure management software solutions, today announced the general availability of Version 3.5 of its enterprise-class virtualization platform. The new release provides for more simple and cost-effective shared storage to facilitate the mainstream adoption of virtual infrastructure management capabilities such as LiveMigration, LiveRecovery, LiveCapacity and LiveMaintenance of virtual servers. The new release also adds single server installation to simplify deployment and configuration.
Version 3.5 provides comparable capabilities to VMware for one-fifth the cost. It is generally available now. Users can try it for free simply by downloading it at http://www.virtualiron.com/free.
‘Cost and complexity are the two major hurdles to broad adoption of server virtualization inside the enterprise,’ said Chris Wolf, Senior Analyst, Burton Group. ‘During the evaluation process, organizations should strongly consider server virtualization platforms that support automated capacity management and virtual machine failover. Virtualization solutions that reduce complexity while automating virtual machine availability are best positioned for widespread adoption and long term success.’
Virtual Iron is one of only two virtualization vendors with the advanced workload migration capabilities required to support automated capacity management and virtual machine failover. With Version 3.5, Virtual Iron has made these capabilities easier to use for the mainstream market by providing support for simpler and more cost-effective iSCSI-based shared storage. This will facilitate the broad adoption of Virtual Iron’s virtual infrastructure management capabilities including LiveMigration, LiveRecovery, LiveCapacity and LiveMaintenance. With Version 3.5, Virtual Iron is also providing the mainstream market with a server virtualization solution that is easier to install, deploy and configure than comparable alternatives. It combines advanced virtualization and policy-based management to deliver all the capabilities and benefits of primary virtualization use cases.”
This is interesting. This is very interesting. Why? Because it signals a full-on challenge to VMware not from a marketing strategy, not from a licensing-hardball strategy, and not from a completely proprietary strategy. Those are the hallmarks of VMware’s battle with Microsoft. This is the first time a separately available virtualization product, as opposed to one tied to the OS by an OS vendor (such as Red Hat and Novell with their Linux distros) and therefore single-server-oriented, has been released that challenges VMware on its home turf.
Here’s a rough equation of the similarities in the products put out by VMware and Virtual Iron:
- VMware’s ESX / VI3 platforms are bare metal-installed. Virtual Iron 3.5’s virtual services is likewise bare-metal installed.
- VMware’s Virtual Center manages VMware ESX/Server/VI3 environments. Virtual Iron’s equivalent product is called Virtual Manager (personally, I don’t think either name should have passed muster with the folks in marketing, but then again, who likes the folks in marketing?)
- VMware’s Converter is meant to take a physical machine and move it across the divide into the realm of the virtual. Virtual Iron’s LiveMigrate is designed to do the same thing.
What has me intrigued the most is this bit of information from Virtual Iron’s Product Technology Overview paper:
From what I see there, it looks like most of the main features VMware’s VI3 offers are present in Virtual Iron 3.5 (VI3 vs. VI3.5… oh, no, that’s not confusing at all…)
Virtual Iron, according to several reviews I’ve read about the beta product, does lack some of the capabilities of VMware’s VI3 suite, but it’s not missing 80% of them, which is roughly the price difference. There’s even a free version similar to VMware’s Virtual Server (though I’m unsure whether you can manage the free Virtual Iron product with the enterprise edition). It’s also an open-source application (the Xen hypervisor project) wrapped in proprietary software. This has become an increasingly common business model among OSS companies that wish to get valuable free labor and remarkable community-driven insight while still trying to balance the need to live up to the OSS ideals of giving back to the community and the drive for corporate profitability.
So is it hype or huzzah? I’d say at this point it’s a bit of both. Hype because it’s not a solution that’s going to be everything-to-everyone; people aren’t going to be saying “hark-the-herald-angels-sing” because they’ve been delivered from some oppression in the marketspace, or anything like that… despite what the rhetoric coming out of Virtual Iron says. It’s huzzah because it really is a small-to-enterprise business product that will save a lot of money for organizations. It’s hype because it doesn’t have the sort of community built up around it that VMware has (NOT the OSS community, mind you… I’m talking about the user and vendor community), and so the company isn’t offering virtual appliances, it doesn’t have the huge base of expertise in Net-land, and it doesn’t have the sheer volume of vendors supporting it that VMware has. It’s huzzah because Virtual Iron is going to open up the market as it builds these things. As they expand, and expand they will, VMware will be forced to come into line with a competitive market rather than what was a two-player field (VMware, MS, and that’s really it). Enomalous will grow too, and so will other Xen-based products, not to mention SWSoft and their interesting approach to virtualization (but that’s for another time).
It’s time to take this for a spin and see what it can really do… a lot of questions remain. Not so much questions like “Does Virtual Iron have X?”, but rather, “Is Virtual Iron’s X as good as VMware’s X?”. Is Virtual Iron’s virtual networking as robust as that within VMware’s Virtual Infrastructure? How about smooth VMotion-like migration between virtual hosts? Both of these, and more, remain to be seen, but for now, the product looks good, solid, and like a viable alternative for the small-to-medium business market, and perhaps the enterprise as well.
The huge cash infusion, the keeping-mum on a lot of deeper details… I think there’s something else going on aside from just a move by MS to set itself up for IP Power-Plays against Red Hat, Ubuntu, Linspire, etc. etc. I’d have to really re-read the fine print, well, ok, what little there is, but here’s my conspiracy theory of the day –
I think the whole thing reeks of payoff. It’s as if something like this (completely fictitious) conversation happened: “Oh, hello Novell… you found out about those huge chunks of stolen NetWare code in the Windows kernel, you say? You can prove it, you say? Oh, well, I understand your Linux business isn’t doing as well as it could have been. How would you like that to change? Well, yes, you’re quite right. We had been opposing Linux for a long time, but our opposition campaign isn’t doing well at all. So, here’s our offer, and sorry about the bad spelling, but as you can see we get to be non-litigating friends again!”
Complete bunk? Quite possibly. Rampant paranoia? Yes, maybe. But something else still stinks in that agreement.
I am (was) the unofficial Microsoft Entourage support person for my organization. Susie Q. is having a problem with Entourage losing her e-mail? “Call Andrew,” her office-mate would say. Those days are no more. I now use Outlook 2007 on my Mac and let users know that until the next version of Entourage appears, this child of mother earth is not going to suffer the burden of two computers any longer. No more do I have to keep a Windows box on my desk just to manage VI3 or play games. The days of this systems administrator keeping up with the Joneses when it comes to many machines are over — behold the power that is Windows on a Mac. Behold the power that is Parallels.
I know Windows on a Mac is nothing new. People have been doing it since almost this time last year; however, it is only recently that it has been so easy to seamlessly run the latest version of Office or the VI3 client on OS X. As I type this I am also composing a message in Outlook, checking on one of my ESX hosts, and trying to get a 5-man together for the Mana Tombs. With the release of the Core 2 Duo Mac laptops last year as well as the very new Parallels release, I can do all these things on my MacBook Pro, at the same time, and still have honk left over. The recent release of Parallels is the first production release to include their Coherence feature. Coherence lets you hide a VM’s background, so that applications running inside the VM appear to be running as part of the host OS’s (in this case OS X) window manager. And it works very well. Copy and paste is supported between Windows applications and OS X, as well as dragging and dropping files! You can even stick Outlook 2007 and Word 2007 in your OS X dock!
You may be wondering, “But that means he is still running a full version of Windows inside his copy of Parallels? How is that any better than two computers? And how does his laptop have any, what did he call it? Oh yeah, how does his laptop have any of this so called honk left over with a full version of Windows running in the background?” Those are very good questions, thank you for asking.
Yes, I am still running a full version of Windows. It is, however, far more efficient to do so in a VM than on real hardware, because I no longer have to keep up with two computers, a KVM switch, or Synergy and its keyboard/mouse synchronization errors. I have one system to rule them all. Besides, I don’t have enough Tycho and Gabe stickers for all of my boxes; it is good to whittle the number down.
Yes, my laptop still has plenty of honk. It screams. It is so bright with power it out-gleams Mr. Cruise’s teeth from that volleyball scene in Top Gun. Shiny! The reason for this is that I am using a modified, stripped-down version of Windows XP. I stripped all of the unnecessary services and other miscellaneous junk out of Windows using nLite (http://www.nliteos.com/). nLite allows you to create custom versions of Windows without all of the fluff. Before I installed Office inside my VM, it was running in around 100 MB of memory. Not 640K, but not bad. If you are going to go this route, it is my very sincere suggestion that you do not use Vista. It is a resource hog, and the point of running a Windows VM in the fashion I have described is for it to consume as few resources as possible. In fact, Windows has now become not unlike a Java Virtual Machine (JVM). It exists to make other applications possible on top of an existing OS.
You may have caught that I mentioned I am able to play games. Does this mean that the new release of Parallels supports DirectX inside of a VM? No, no it does not. That feature is exclusive to the latest beta of VMware Fusion. However, you can use Parallels to run your BootCamp (http://www.apple.com/macosx/bootcamp/) version of Windows inside of a VM. That way you can boot into Windows bare-metal when you need to and use it inside of OS X the rest of the time. How well does this work? Well, I do not know. I have no need to do this, because the only game I play is World of Warcraft (until Spore comes out) and it runs just fine on my Mac 🙂
I could keep typing, but I need to modify an Active Directory account’s attributes. Pardon me while I load AD Users and Computers from OS X. It’s evil, it’s delicious, it’s fun, and it’s a darn good business reason I can present to my boss (the one I’m married to) that I should be able to buy a quad-core MacPro. Please honey?