One example of the growing criticality of vCenter comes with vSphere 5’s Auto Deploy feature. In certain disaster scenarios, its dependency on vCenter can send users down a ‘rabbit hole’ of availability issues if the environment is not designed correctly, experts say.
Auto Deploy can be used to deliver host and VM configuration information across the network, while an Auto Deploy server manages state information for each host. It stands to be most appealing in large environments where quick deployment of new hosts is a must.
In an environment where vCenter, the vCenter database and the Auto Deploy server are all virtualized in the same vCenter data center, and all hosts and VMs rely entirely on Auto Deploy for their state information, a vCenter outage has cascading effects. Auto Deploy cannot set up vSphere Distributed Switches (vDS) if vCenter Server is unavailable, and a host that can't connect to vCenter Server remains in maintenance mode, so its virtual machines cannot start.
I’ve been looking more deeply into the proposed IETF standard VXLAN of late, but my reading has left me with more questions than answers.
VXLAN, or Virtual eXtensible LAN, submitted to the IETF last fall and talked up at last year's VMworld, is a protocol for tunneling Layer 2 traffic over Layer 3 networks, with the goal of either expanding the available VLAN address space or supporting inter-data-center VM mobility, depending on whom you ask. It's also important to note that VXLAN is, for now, still a proposed standard and remains largely theoretical.
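The core of the proposal is a small header prepended to each Layer 2 frame before it is carried in a UDP datagram across the Layer 3 network. A minimal sketch of that encapsulation, based on the header layout in the VXLAN draft (the helper function itself is illustrative, not from any VMware or Cisco code):

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (per the draft)

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout from the VXLAN draft:
      1 byte flags | 3 bytes reserved | 3 bytes VNI | 1 byte reserved
    The result would then be carried in a UDP datagram across the
    Layer 3 network.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3s3sB", VXLAN_FLAG_VNI, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

# Encapsulate a (dummy) 14-byte Ethernet header into VXLAN segment 5000.
pkt = vxlan_encapsulate(vni=5000, inner_frame=b"\xff" * 14)
```

The 24-bit VNI field is where the address-space argument comes from: roughly 16 million virtual segments, versus the 4,094 usable IDs in a 12-bit VLAN tag.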
Recently, a blog post by Scott Lowe caught my interest by pointing out some key differences between VXLAN and Cisco's Overlay Transport Virtualization (OTV) protocol, which also encapsulates Layer 2 frames in Layer 3 packets.
The following is an expansion on our 2012 predictions piece on virtualization management, which ran on SearchServerVirtualization.com last week.
As server virtualization becomes mainstream, vendors like VMware have to come up with a new return on investment (ROI) proposition to sell to customers, and experts say to be on the lookout for a new sales pitch around automation and operational savings.
“The purpose of automation is to create a new hard-dollar ROI for virtualization, basically opex savings by streamlining operations, as opposed to the old ROI which was capex savings that came from server consolidation,” said Bernd Harzog, analyst with The Virtualization Practice.
As mission-critical apps are virtualized, consolidation ratios won't climb as high, and so "in order for the virtualization freight train to keep rolling down the tracks, VMware needs a second, new incremental hard dollar ROI," he said. "That's going to come from this automation-driven opex savings delivered by vCenter Operations. And that's going to shake up the management software industry in an extremely profound way."
With today’s release of version 4.5 of its vOps management software, Quest subsidiary VKernel is challenging VMware to an infrastructure automation duel.
“[Paul] Maritz from VMware has been discussing automation for the better part of a year,” said Alex Rosemblat, product marketing manager for VKernel. With vOps 4.5, Rosemblat claims, “we’re really starting to deliver on the future vision that VMware has laid out.”
VKernel's first foray into capacity management automation in vOps 4.5 is 'Zombie VM' deletion – the automatic removal of VMDK files the software deems waste. By default, vOps marks a VMDK as waste if it has no connection to anything in vCenter and has been sitting in storage for more than 90 days; users can also customize these criteria.
“What users have had to do in the past, after they receive this list of waste files, is go through and manually find these waste files and delete them,” Rosemblat said. “If you have a couple hundred files, that can turn into several hours’ worth of work. We can do that basically with the click of a button.” In the next release, users will be able to schedule Zombie VM deletions.
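The detection criteria described above reduce to a simple filter, sketched here as a hypothetical helper (the function and field names are illustrative, not VKernel's API): a VMDK is a candidate for deletion if nothing in vCenter references it and it has sat unmodified past the configurable threshold.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the "zombie" criteria described above: a VMDK
# counts as waste if nothing in vCenter references it and it has not
# been modified for a configurable number of days (90 by default).
def find_zombie_vmdks(datastore_files, referenced_paths,
                      idle_days=90, now=None):
    now = now or datetime.now()
    cutoff = now - timedelta(days=idle_days)
    return [f["path"] for f in datastore_files
            if f["path"].endswith(".vmdk")
            and f["path"] not in referenced_paths
            and f["last_modified"] < cutoff]
```

Scanning a datastore inventory against the set of paths vCenter knows about yields the "list of waste files" Rosemblat mentions, which the product then deletes in one step.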
Other new automation features introduced with 4.5 include snapshot auto-merge and more "one-click issue remediation" similar to waste-file deletion, but for memory limit sizing. Previously, vOps could address memory allocated to particular VMs but not memory limits, another vSphere setting that restricts how much of its allocated memory a VM can actually use. This release adds visibility and automated remediation for memory limits to the existing support for memory allocations. vCPU sizing is also supported in this release, in addition to physical CPU resource sizing.
Also new with this release is a reporting workflow designed to cut the time admins spend on the task. An extended custom-URL generation process lets admins send management a link to a real-time view of the environment, rather than repeatedly generating and sending reports.
Finally, vOps 4.5 also includes the ability to forecast how much hardware will be needed to support an environment based on the current growth rate; common trend alarm warnings delivered by default; application type tags and resource sizing configuration groups; plus vSphere 5 and raw device mapping (RDM) support.
“VMware’s strategy for operations management is to combine performance management, capacity management, [and] configuration management…with the self-learning analytics that came with Integrien for the purpose of automating IT operations,” said Bernd Harzog, analyst with The Virtualization Practice. “Quest is coming out of the gate, in terms of a response [to VMware], better and faster than anybody else in the ecosystem.”
But it’s still too early to pick a winner in this race, Harzog said. “Right now we’re on chapter one of a twenty-chapter book.”
With little fanfare, VMware introduced version 2.0 of its chargeback tool last week, and the release notes include more “known issues” than new features.
vCenter Chargeback has been renamed Chargeback Manager and folded into the vCenter Operations suite. According to the release notes issued Nov. 30, version 2.0 contains more than a dozen new features, including support for vSphere 5.0 and vCloud Director 1.5.
VMware has joined Google and Cisco as a new investor in Puppet Labs, according to a press release issued today.
VMware was one of the investors that contributed to an $8.5 million round of funding for the IT systems automation software maker, bringing Puppet’s total funding to $15.75 million since its founding in 2005.
Puppet’s software, available as both an open source offering and in a commercial version called Puppet Enterprise, automates provisioning of VMware and Amazon EC2 instances in enterprise IT environments. IT shops have also used the open-source version of Puppet to increase server-to-admin ratios. Demand for experience with Puppet’s software has grown rapidly in the last year, according to a Wall Street Journal report.
VMware and Cisco “have hands-on, in-production-at-scale experience with Puppet – in some cases, going back several years,” wrote Puppet founder Luke Kanies in a blog post on the funding round.
The first patch for vSphere 5 has been issued, a fix for a bug that was causing long boot times for virtual machines attached to iSCSI storage systems.
The issue affected vSphere 5 virtual machines connected through software-based iSCSI initiators, and occurred, according to VMware’s Knowledge Base, because “ESXi 5.0 attempts to connect to all configured or known targets from all configured software iSCSI portals. If a connection fails, ESXi 5.0 retries the connection 9 times.”
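The KB excerpt implies the delay grows multiplicatively: every configured portal tries every known target, and each failed login is retried 9 times. A back-of-the-envelope estimate (the per-attempt timeout below is my assumption, not a figure from VMware's article) shows how that compounds into the boot times users reported:

```python
# Rough worst-case boot-delay estimate for the iSCSI login bug: every
# software iSCSI portal tries every known target, and a failed login is
# retried 9 times. The 15-second per-attempt timeout is an assumption
# for illustration, not a number from VMware's KB article.
def boot_delay_minutes(portals, unreachable_targets, retries=9,
                       timeout_per_attempt_s=15):
    attempts = portals * unreachable_targets * (1 + retries)
    return attempts * timeout_per_attempt_s / 60

# e.g. 4 portals x 9 stale targets -> 4 * 9 * 10 = 360 failed attempts
delay = boot_delay_minutes(portals=4, unreachable_targets=9)  # 90.0 min
```

Under those assumed numbers, a host with a handful of portals and a few stale or unreachable targets lands right in the 90-minute range described below.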
According to the Knowledge Base article, “VMware is delivering an ISO file for this patch release due to the nature of this issue. This is not common practice and is only done in special circumstances.”
In some cases, this bug led to boot times of up to 90 minutes. Bill Hill, infrastructure IT lead for a Portland-based logistics company, said it took some of his servers that long to boot with the buggy version of ESXi 5.0.
This would be an open-and-shut case of a simple bug fix, Hill said, but it remains a mystery to him, as well as to some VMware insiders, why the original buggy ISO file remains available for download on VMware’s website as of today.
“It’s a frustrating situation,” said Hill. “Why leave a time bomb out there?”
VMware’s PR representatives did not have an official response as of this post.
Site Recovery Manager, vCloud Director and vCenter Operations are seeing strong sales, VMware executives said on the company’s earnings call last night.
VMware reported total revenue for the third quarter of $942 million, an increase of 32% from a year ago. CFO Mark Peek and CEO Paul Maritz said management tools were a strong part of those sales, although they didn’t attach any specific numbers to those tools.
“Much of the increased interest for our management tools is being driven by the build-out of private clouds within our customers’ data centers,” said Peek, according to a transcript of the call.
As the virtualization market matures, functionality that used to live in the underlying infrastructure is steadily being absorbed into virtual machines.
One of the hotter areas for emerging companies of late is software that allows agents inside guest VMs to automate the use of host-based Flash storage as cache in order to boost application performance.
Last week, FlashSoft, a Flash caching company that came out of stealth at VMworld 2011, released a new version of its beta product. Fusion-io has been around longer than most with its solid-state drives, but it, too, is moving into the automated caching space following its acquisition of IO Turbine earlier this year.
This week, another company, Nevex, came out of stealth, also claiming to have built a better Flash caching mousetrap. Like FlashSoft, Nevex's CacheWorks product uses agents inside guest VMs to automate the use of Flash as cache. Also like FlashSoft, Nevex plans to make its software work at the hypervisor level to eliminate the need for guest agents.
Where Nevex says it differs from other offerings is that its software caches at the file level rather than the block level. This means users can select which specific applications to accelerate using Flash, instead of letting an algorithm move hot blocks into the cache. Nevex also integrates with Windows to control which data to promote to DRAM for multi-level caching.
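The contrast between the two admission policies can be sketched in a few lines. This is purely illustrative (class and parameter names are hypothetical, not Nevex's or FlashSoft's code): a file-level policy caches only I/O for paths the admin has selected, while a block-level policy admits whatever blocks its algorithm observes to be hot.

```python
# Illustrative contrast, not any vendor's actual implementation:
# file-level caching admits I/O only for explicitly selected paths,
# while block-level caching admits any block that gets "hot" enough.
class FileLevelCachePolicy:
    def __init__(self, accelerated_paths):
        self.accelerated = list(accelerated_paths)

    def should_cache(self, path):
        # Cache only I/O belonging to applications the admin selected.
        return any(path.startswith(p) for p in self.accelerated)

class BlockLevelCachePolicy:
    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.access_counts = {}

    def should_cache(self, block_id):
        # Admit a block once the algorithm has seen it often enough.
        n = self.access_counts.get(block_id, 0) + 1
        self.access_counts[block_id] = n
        return n >= self.hot_threshold
```

The trade-off Nevex is betting on: the file-level policy gives the admin deterministic control over which workloads are accelerated, at the cost of requiring that selection up front, where the block-level policy adapts on its own but may promote the wrong workload's blocks.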
When SSDs first came on the enterprise IT scene, they sat in the storage area network (SAN), behind a storage controller. With these newer offerings, the vision is Flash used as cache on the host rather than as persistent storage on a back-end array.
In the meantime, with the advent of new technologies, it’s becoming easier to picture the entire enterprise data center infrastructure, from the virtualized network to this type of virtualized storage, running as software inside x86 hosts. I’m reminded of the old Sun tagline, “the network is the computer.” Sun has since been gobbled up by Oracle, of course, but it feels like we’re finally seeing that concept come to fruition.
Microsoft let drop a new tidbit of information about Hyper-V 3.0 in a blog post this week.
Hyper-V Network Virtualization:
Allows you to keep your own internal IP addresses when moving to the cloud while providing isolation from other organizations’ VMs – even if those VMs use the same exact IP addresses
The post is otherwise a rehash of Hyper-V 3.0 features already previewed at the Microsoft Build conference last month, including support for a new virtual switch and scalability improvements.
It's unclear at this point whether this technology is related to the new NVGRE standard proposed by a Microsoft-led consortium within the IETF. But what is clear is that network virtualization is becoming a new battleground for the market's biggest virtualization vendors.
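The isolation property Microsoft quotes above – tenants keeping identical internal IP addresses – is typically achieved in overlay schemes by keying every address lookup on a (tenant ID, customer IP) pair rather than on the IP alone. A minimal sketch of that idea, with a 24-bit tenant ID mirroring NVGRE's Virtual Subnet ID (the mapping class itself is my illustration, not Microsoft's implementation):

```python
# Minimal sketch of overlay address isolation: two tenants can use the
# identical customer IP because lookups are keyed by (tenant_id, ip).
# The 24-bit tenant ID mirrors NVGRE's Virtual Subnet ID; the mapping
# class is illustrative, not Microsoft's implementation.
class TenantAddressMap:
    def __init__(self):
        self._map = {}  # (tenant_id, customer_ip) -> provider_ip

    def register(self, tenant_id, customer_ip, provider_ip):
        if not 0 <= tenant_id < 2 ** 24:
            raise ValueError("tenant ID must fit in 24 bits")
        self._map[(tenant_id, customer_ip)] = provider_ip

    def resolve(self, tenant_id, customer_ip):
        return self._map[(tenant_id, customer_ip)]

m = TenantAddressMap()
m.register(100, "10.0.0.5", "192.168.1.10")  # tenant A
m.register(200, "10.0.0.5", "192.168.2.20")  # tenant B, same customer IP
```

Because the provider address is resolved per tenant, both VMs can carry 10.0.0.5 without ever seeing each other's traffic.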