425 IT geeks invaded Brunswick, Maine, last Thursday for the New England VMware Users Group summer meeting at Brunswick High School. The event started at 10 a.m. with a sponsor showcase, continued with after-lunch sessions and concluded with a lobster bake and clambake at Gritty’s in Freeport, Maine.
“Good performance starts with proper planning and design, configuration best practices and operational awareness,” said Mike Burke, Virtual Infrastructure Practice Director for Virtera, a Connecticut-based consulting firm, during the first session I attended, titled “Getting the Most out of Your ESX Environment – Performance Tuning.”
“Tuning is not a replacement for proper planning and design,” Burke said. “Don’t guess; gather the data.”
Planning trumps tuning in VMware deployments
Performance, Burke said, hinges on examining and understanding how CPU, memory, networking and disk space will work together in a virtual infrastructure. Planning, then, is the key to better ESX performance, and not necessarily tuning.
Generally speaking, more cores are better, so plan on going big. “How does VMware license their product? Per socket. If you can choose a quad-core, it’s a lot more economical,” Burke said, noting that AMD processors are now “dirt cheap” compared to earlier years, with eight-core boxes going for around $2,000.
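Burke's per-socket point can be sketched with a little arithmetic. The license price below is a hypothetical round number, not VMware's actual pricing; the point is only that with per-socket licensing, the license cost spread across cores falls as core counts rise:

```python
# Illustration of why per-socket licensing favors higher core counts.
# LICENSE_PER_SOCKET is a hypothetical figure, not VMware's actual price.
LICENSE_PER_SOCKET = 1000

def license_cost_per_core(sockets, cores_per_socket):
    """Total per-socket license cost divided across all cores in the host."""
    return (sockets * LICENSE_PER_SOCKET) / (sockets * cores_per_socket)

print(license_cost_per_core(2, 2))  # dual-socket, dual-core: 500.0 per core
print(license_cost_per_core(2, 4))  # dual-socket, quad-core: 250.0 per core
```

The same license fee covers twice as many cores on the quad-core box, which is the economics behind "plan on going big."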
“You need to weigh what the workload is against what the goals of virtualizing it are,” Burke said. For example, a four-CPU physical server may not be a good candidate for a server consolidation project if it’s currently using the four CPUs to the fullest extent, because you won’t have a good consolidation ratio on those systems. It may, however, be a candidate if the goal is overall server virtualization instead of consolidation.
Another reason for this type of virtualization is hardware capability. You can’t VMotion from AMD to Intel. Even in servers from the same product line, Burke said, the instruction sets coded into older CPUs won’t look the same, preventing VMotion capability.
More host RAM gives better overall performance, but configure a VM’s memory based on actual need – is the VM really using the full 2 GB, or would that memory be better used elsewhere? As a general rule, though, Burke said “oversubscribing” – provisioning more memory to virtual machines than the physical host server actually contains – is better than not.
A systems administrator can, however, tweak a few advanced variables in a virtual machine’s configuration, or vmx, file. For memory, mem.ShareScanTime resets the time interval for transparent page sharing (TPS), which defaults to 60 seconds. mem.ShareScanGhz adjusts how many memory pages are scanned per 1 GHz of idle CPU, which defaults to 4 MB/sec per 1 GHz. And mem.CtlMaxPercent limits how much memory can be reclaimed by ballooning.
But tweaking these parameters, Burke warned, can detract from virtualizing your systems. “You’re instructing ESX to spend more time processing and looking for memory savings rather than sitting in the background and running the VMs, which diverts resources away from virtualizing guests to handling these configurations on the backend.”
On a per-VM level, you can use the same mem.CtlMaxPercent setting, along with sched.mem.maxmemctl, which caps the maximum memory reclaimed by ballooning (vmmemctl); sched.mem.pshare.enable, which enables or disables memory sharing (TPS); and sched.swap.persist, which lets the vswap file persist after power-off.
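Taken together, the knobs above might look something like this in a virtual machine's .vmx file (host-level settings shown alongside for comparison). The 60-second and 4 MB/sec figures are the defaults quoted above; the other numeric values are illustrative examples, not defaults or recommendations:

```
# Host-level TPS and ballooning settings (defaults as cited above)
mem.ShareScanTime = "60"
mem.ShareScanGhz = "4"
mem.CtlMaxPercent = "50"           # limit ballooning reclamation (illustrative %)

# Per-VM overrides in the virtual machine's .vmx file
sched.mem.maxmemctl = "512"        # cap ballooning for this VM at 512 MB (example)
sched.mem.pshare.enable = "FALSE"  # opt this VM out of page sharing (TPS)
sched.swap.persist = "TRUE"        # keep the vswap file after power-off
```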
New England VMware Users Group summer meeting attendees sound off
Other sessions ranged from technical deep dives on VMware DRS and HA to VDI demonstrations, SRM implementation and general topics, such as managing the virtual data center and minimizing VM sprawl. Some sessions were led by speakers from consulting companies; others were 45-minute vendor sales pitches disguised as learning sessions. Overall, though, attendees had a good experience.
“For me, the sessions are good, but I can learn the same material in a textbook or on a website,” said Lee Pullen of the Maine Education Association. “The largest benefit is the networking, meeting other users and finding out what they’re doing in their own environments. That’s the real value in attending.”
“I found a few of the titles of the sessions misleading,” said State of Maine Office of Information Technology employee Lori Blier. “The last session I was at seemed like it was going to be about getting more out of your virtual infrastructure, which is what it was called. But they mainly talked about virtual desktops.”
Perhaps next year, the organizers could include descriptions of the presentations so that participants know exactly what to expect, Blier suggested. She said that she liked the sessions that discussed managing and using ESX Server, which is what the state of Maine plans to upgrade to pending management approval. They currently use VMware Server in production.
“I was a little disappointed that there were no sessions or vendors that focused on the networking side,” commented Kris Kirby from the Vermont Agency of Natural Resources, who said that’s his main concern with his VMware environment. “I also thought it was going to start a little earlier.”
Chris Harney, a systems engineer from Maine who organizes the New England Users Group meetings, said that feedback from the last session indicated increased interest in virtual desktop information sessions, which is why several of the sessions catered to that track. He also said that if networking came up as a trend on the feedback sheets, it would be voted on by a focus group and most likely included in the program for the January winter meeting, which will be held in Massachusetts.
“We’ve only been doing this for a couple years, and we’re still trying to figure out what works,” Harney said. “This is just a hobby for me, but it’s almost a full-time job. When we first started, we had 35 people. Now we’re looking at almost 500 per meeting.”
In discussions with other administrators, the question of how many hosts to put in a cluster gets a wide range of responses. Although a number of factors are involved, the unfortunate simple answer for every installation is “it depends.” Let’s tour some of the reasoning behind various hosts-per-cluster configurations.
Similar generations of hardware
Basing cluster configurations on similar hardware is a common and logical type of cluster separation, because VI3 does not currently support VMotion migrations between dissimilar processor systems. Another common configuration is to keep a separate cluster for the internal proof of concept – the separation that made the business and internal IT comfortable with VMware virtualization in the first place. The ‘pilot’ equipment generally ends up in this cluster, taking on a less critical role as new resources are added.
HA and DRS configurations and standby capacity
Depending on the granularity of the VMware HA and DRS configuration, more or fewer hosts may be required when considering separation and failures-permitted values. These generally make the difference of one or two hosts in clusters with fewer than ten hosts.
Many administrators implement VI3 as part of the disaster recovery (DR) plan, and there may be a cluster with excess host capacity planned for use in a DR situation. This configuration will either mirror the primary cluster’s host capacity or be at a percentage needed to sustain the DR workload.
Separating development, test, QA, and live workloads
Depending on the process your organization requires for internal systems to go from the conceptual stages to a live workload, separate clusters can map well to these stages. Separating the workloads between clusters is a natural protection from a resource perspective and makes the process more likely to be enforced. By the same token, each cluster should be configured according to its role. For example, the development cluster should not have access to the live network segment or live storage system.
Internal and external-facing systems
Over on SearchServerVirtualization.com, I posted a blog about putting external-facing VMs on the same hosts that hold internal workloads. Within VI3, these workloads would be better suited on a separate cluster. This can lower the risk of a hypervisor vulnerability affecting an internal workload.
Funding situations
Some clusters may be configured and populated exclusively by funding situations. Various large customer projects, consulting entities or departmental chargeback may dictate how the clusters are configured. This granular approach may effectively create a large amount of underutilized capacity, but it may be unavoidable given the financial traits of an organization.
More powerful hosts or more hosts?
This is a tough one to call, as there are benefits either way: a small number of very capable systems or a larger number of less expensive systems. In ESX host terms, the smaller system might be a dual-socket, dual-core box with 32 GB of RAM, while the larger system might be a quad-socket, quad-core box with 128 GB of RAM. Both are good ESX host candidates, and each configuration carries its own upfront cost factors.
What is your strategy on cluster configuration?
As you can see, there are many approaches to this topic. VI3 is quite adaptable to most configurations, but upfront planning remains essential. Share your comments below on your cluster configuration.
It goes without saying that working with a beta incurs a small amount of risk. With VMware Server 2.0 beta 2, I came across a particular situation that caused an issue with a virtual machine (VM). My CentOS Linux system had been running VMware Server 2.0 beta 1 since its initial release. On a separate Windows system, I had been running VMware Server 2.0 beta 2. To archive the VM, I copied it from the beta 2 system to the beta 1 system. After the VM was copied over, I could add it to the virtual machine inventory and modify the hardware inventory to get the processor and RAM configuration correct for the destination system, but I could not power it on.
I had forgotten that the Linux system was still at the beta 1 release. After downloading beta 2, a quick run of the vmware-install.pl file uninstalled beta 1 and installed beta 2. Even then, the virtual machine remained unusable because of the modifications made under the older version. A quick scan of the release notes for beta 2 made no specific mention of this situation, but it makes sense that the older build (#63231) would not be able to power on a VM from the newer build (#84186). Once the CentOS Linux system was upgraded to beta 2 and a fresh copy of the VM was brought over, it powered up without modification.
The gotcha: if I had performed an initial move of the VM instead of a copy, it would have been destroyed!
VMware products generally do well at moving VMs between versions on different host operating systems, as well as at importing VMs into newer products. But exercise a little caution when using the VMware Server 2 beta.
VMware Workstation 6 makes it possible for multiple virtual machines (VMs) to run on a desktop or laptop. An existing physical PC can be converted into a VMware VM, or a new VM can be created from scratch. Each VM then represents a complete PC, including the processor, memory, network connections and peripheral ports, and can run Windows, Linux and a host of other operating systems side-by-side on the same computer.
Here is what’s new in Workstation 6.5:
Record/replay functionality in the integrated virtual debugger. You can deploy your applications in “record” mode directly from Visual Studio to capture an entire virtual machine (VM) execution. Added to the existing Integrated Virtual Debugger plug-in, record/replay lets you reproduce exact VM executions, debug the application during replay and identify defects without leaving the familiar integrated development environment (IDE).
It includes multi-monitor support for Unity, so users can integrate guest applications with host machines across two or more monitors.
VMware has also added support for virtual machine streaming, so users can start using their VMs without waiting for them to completely finish downloading from the Virtual Appliance Marketplace or an HTTP server.
VMware’s ACE is used to provision standardized client PC environments inside secure and centrally managed VMs. Each ACE contains a complete client PC—including the operating system and all applications.
ACE 2.5 users will be able to take advantage of all the new features in Workstation 6.5, plus these new features:
Now there is a Kiosk Mode, so virtual desktops can be deployed to shared physical PCs while protecting the host operating system from misuse and attacks.
There is a Full Screen Toggle Mode, so IT can switch full-screen views between guests or between guest and host operating system via hot keys.
And lastly, the new version includes Pocket ACE Caching, which improves the performance of Pocket ACE by setting a pre-defined maximum cache size.
VMworld is VMware’s annual virtualization conference, held in both the United States and Europe. Each year thousands of users attend to learn and network with fellow VMware users, vendors, customers and employees. There are multiple tracks and over 200 sessions to choose from during the four-day conference. This year there will be seven tracks that focus on the following areas:
o Virtualizing the Desktop
o Building Business Continuity & Disaster Recovery
o Exploring Technology & Architecture
o Planning & Operations in the Datacenter
o Automating the Virtual Datacenter
o Running Enterprise Applications in Virtual Machines
o Virtualization 101
So how do you justify the expense of the conference in addition to travel and lodging expenses to your boss? Here’s some information to help you make a case for attending.
Most IT shops have training budgets for their staff to attend either online or classroom training each year. A typical one-week classroom training course runs around $3,000; VMware’s Deploy, Secure and Analyze course alone is $3,295. Conference registration varies based on when you register: $1,495 until July 11, $1,745 from July 12 to September 13, 2008, and $1,895 onsite. Airfare will typically run between $300 and $500, and hotels in Vegas are anywhere from $50 to $120 a night, depending on how fancy a place you want to stay in.
So the total cost to attend would be approximately $2,500 – comparable to or cheaper than most one-week training classes. You will also get more bang for your buck at VMworld than you would at a VMware training class. With so many great sessions available, you can customize what you want to see and learn. Show your boss the sessions on offer; many may cover a specific pain point in your company (e.g., disaster recovery).
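The back-of-the-envelope math works out as follows, using the early-bird registration rate and midpoint travel figures from above (meals and incidentals excluded, which is roughly the gap up to the $2,500 estimate):

```python
# Rough VMworld 2008 attendance cost, early-bird rate, four hotel nights.
registration = 1495   # early-bird rate, through July 11
airfare = 400         # midpoint of the $300-$500 range
hotel = 4 * 110       # four nights near the top of the $50-$120/night range
total = registration + airfare + hotel
print(total)          # 2335 -- in the ~$2,500 ballpark once meals are added
```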
Many conferences are seen as more of a social event, and yes, it’s in Vegas and you could easily get distracted, but VMworld is an intensive training and networking conference. You will get out of it what you put into it: there is the potential to learn a huge amount from the sessions, gain hands-on experience in the labs, share experiences with other users and meet with vendors whose products may fit a need at your company. Offer to do a write-up on the conference and share the information and experience you gained at VMworld with other users in your company. You might even convince your boss to attend as well; there are many sessions suited specifically to management rather than administrators.
Many VMware engineers also attend the conference; VMworld is a good opportunity to get close to the people who designed and developed the software and discuss any issues, questions or problems you may have in your environment. In addition, the training doesn’t end when VMworld ends: with more than 200 sessions available, it’s usually impossible to see everything you want in only four days. Attendees are given a login to a site with the slides and audio from all the presentations, so they can view sessions after VMworld is over. VMware no longer releases all the sessions to the general public right after VMworld ends; last year it released a very small number of sessions and then a few more each month.
So there are many good reasons to attend; hopefully you can make the case to your boss to go this year. If you do attend, I look forward to seeing you there!
Both Fortune Magazine and virtualization.info are reporting that Diane Greene, co-founder of VMware, has left her position as CEO and been replaced by former Microsoft executive Paul Maritz by way of the EMC board of directors.
Although it was initially unclear whether Ms. Greene left of her own volition, multiple sources are now confirming that she was fired. Speculation holds that her forced departure could be the result of her attempt to VMotion VMware from EMC to Intel; in any case, Ms. Greene’s tenuous relationship with EMC CEO Joe Tucci was well known.
I had the pleasure of meeting Ms. Greene on two separate occasions, and both times I was impressed with her attitude and knowledge. She was well-liked and will be a hard act to follow — no single executive has done more to bring virtualization to the desktop and to the data center.
Trade shows are great – if you have time to attend, have staff to cover while you’re away learning about a new technology, can avoid a summons back to the office during the show, and can find a show in your local area or get budget approval for one that requires flight and hotel reservations.
Enter the virtual trade show (VTS): an online conference conceived to mitigate the above challenges. Last week, sister sites SearchDataCenter.com and SearchServerVirtualization.com hosted an advanced enterprise virtualization VTS. I helped staff the networking lounge and editorial booth, where I had the opportunity to chat with VMware users about two of the virtualization provider’s newest tools, Site Recovery Manager (SRM) and Storage VMotion.
IM chatting with attendees
Conversations ranged from general IT talk (“Anyone use virtual desktops?”) to small talk (“What’s the weather like in Maine?”). Trying to be the friendly host, I said “Good morning” to the room. I immediately got the reply “Good evening” and was subsequently told this particular user was signed on from – literally – the other side of the world.
I ended up chiming in on another user’s question about whether anyone was familiar with VMware Site Recovery Manager (SRM). The respondent said that he was, and I asked him about his experience via private IM. SRM orchestrates your virtual machine disaster recovery (DR) plan in the event that your main data center goes down. It prioritizes which virtual machines (VMs) are brought up at the failover site based on available resources, syncs your VM configurations between the main site and the failover site, and allows for DR plan testing without having to take the system offline. It’s a relatively new addition to the VI3 lineup, having been on the market for four months (at the time of publication).
Our conversation turned to plug-ins, and he raved about Andrew Kutz’s Storage VMotion plug-in. The plug-in adds a user interface to the out-of-the-box product, which operates through a command line interface. The attendee explained that he’s primarily a “Windows guy,” so a graphical user interface makes using Storage VMotion much easier.
Kutz recently released an update to the Storage VMotion plug-in.
“The new release now ignores raw device mapping,” Kutz said. “Previously, if you had a raw device that pointed to a 300 Gig disk, the plug-in would look at it as an actual disk and screw up the disk size map.”
He also removed the majority of VMware’s internal code from the plug-in (except the code that loads the plug-in), replacing it with code based on the VI Toolkit for .NET.
Impressive user interface
The VTS emulates the look of a physical trade show floor, which makes navigation friendlier, though not as intuitive as I would have liked. You can either move around with a clickable navigation bar or point-and-click your way from the main entryway to the desired location – the conference hall, vendor hall, networking lounge or “library,” where you can download PDFs of presentations and various vendor information into a “suitcase” displayed on your personal page.
VTSs are essentially fancy webcast packages displayed in unconventional ways. In this particular show, the topics were “Protecting your Virtual Environment: Backup and Storage,” “Virtual Infrastructure Automation and High Availability Best Practices” and “Virtual Infrastructure Tuning and Advanced Management.” The speaker was displayed on the left side of the screen presenting his slides via streaming video. The slides were displayed on the right hand side. Users could ask questions via a box at the bottom of the screen.
The VTS, if done correctly, has many more plusses than minuses. As long as there is a reliable Internet connection, there’s no need to leave the data center (if you don’t have a reliable connection in your data center, you might think about leaving for good). The content is almost exactly the same as at a physical trade show (that’s how they got the video of the speaker to begin with). And editorial staff can send IT pros direct links to helpful guides that they know of if an IT pro wants to know about, for example, virtual desktop drawbacks.
If any SearchVMware.com readers passed up the opportunity to “attend” a virtual trade show, I suggest you test it out next time a topic of interest comes around. It’s actually fun to use (think AOL in the 90’s minus the “you’ve got mail”) and offers great learning potential and networking opportunities.
An archived version of the advanced enterprise virtualization virtual trade show is available online, short registration required.
A little known feature, called Virtual Machine Failure Monitoring (VMFM), was introduced in ESX 3.5. VMFM offers the ability to leverage HA to monitor VMs for operating system failures, such as blue screens, and have them automatically restarted. Previously, HA would only deal with ESX host failures by automatically restarting VMs on alternate ESX hosts in the event of a problem with the host server.
VMFM also extends HA to monitor VMs through a heartbeat sent every second when using VMware Tools. This new feature is disabled by default and is considered ‘experimental’ by VMware. This typically means it works but it is not officially supported for production use yet. In order for this feature to function properly you must first ensure the following conditions exist:
• ESX hosts are version 3.5
• VirtualCenter is version 2.5
• VMware Tools is installed on VMs and is the latest version
• You have a Cluster configured and HA enabled
To enable it follow the below steps:
• Edit the Settings for your Cluster
• Choose VMware HA and click the Advanced Options button
• Add the following Options and Values
das.vmFailoverEnabled – true (true or false)
das.FailureInterval – 30 (declare virtual machine failure if no heartbeat is received for the specified number of seconds)
das.minUptime – 120 (After a virtual machine has been powered on, its heartbeats are allowed to stabilize for the specified number of seconds. This time should include the guest operating system’s boot-up time)
das.maxFailures – 2 (Maximum number of failures and automated resets allowed within the time that das.maxFailureWindow specifies. If das.maxFailureWindow is -1 (no window), das.maxFailures represents the absolute number of failures after which automated response is stopped and further investigation is necessary)
das.maxFailureWindow – 86400 (Either -1 or a value in seconds. If das.maxFailures is set to a number and that many automated resets have occurred within the specified failure window, automated restarts stop and further investigation is necessary)
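The interaction between das.maxFailures and das.maxFailureWindow can be sketched as a simple throttling check. This is a paraphrase of the documented behavior, not VMware's actual implementation; failure_times is a hypothetical list of timestamps (in seconds) of prior automated resets:

```python
def should_reset(failure_times, now, max_failures=2, failure_window=86400):
    """Allow an automated VM reset only if fewer than max_failures resets
    have already occurred within the trailing failure window.
    A failure_window of -1 means failures are counted over all time."""
    if failure_window == -1:
        recent = len(failure_times)
    else:
        # count prior failures inside the window [now - failure_window, now]
        recent = len([t for t in failure_times if now - t <= failure_window])
    return recent < max_failures

# One blue screen an hour ago: a second automated reset is still allowed...
print(should_reset([0], now=3600))        # True
# ...but after two resets in the same day, automated restarts stop.
print(should_reset([0, 3600], now=7200))  # False
```

With the values above (2 failures, 86,400-second window), a VM that blue-screens repeatedly gets two automated restarts per day before HA gives up and leaves it for investigation.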
I enabled this on a cluster and tested it by simulating a blue screen on a VM running Windows Server 2003, and it worked perfectly: after 30 seconds the loss of heartbeat was detected and the VM was automatically restarted. Currently there are no notification alerts that can be configured for when this occurs, and if you check the events for the VM you will see no evidence of it happening. The only mention of it I found was in the hostd log on the ESX server ([2008-06-26 11:47:22.552 ‘ha-eventmgr’ 3076440992 info] Event 101 : VM1 on Esx1.xyz.com in ha-datacenter is reset). Hopefully this will change in a later version, when the feature is no longer considered ‘experimental’. You can read more about this new feature in a white paper that VMware has provided.
To many of us, some VMware product features and functionalities still seem like magic. Sure, I understand what the products do, but considering that my education and background is in writing and editing (read: not computer science), how they work still remains a mystery.
At least a few pieces of the VMware Infrastructure feature set were explained to me by Alliance Technologies solutions architect and Central Iowa Virtualization User Group (CIVUG) member Sean Clark. In a recent conversation, Clark explained how VMotion works, what Distributed Resource Scheduler (DRS) is and how the competition stacks up. Check out this audiocast if you’re still using VMware Server or just the ESXi hypervisor but are considering moving to the rest of the suite; it provides information to help with your decision.
Performing key maintenance on a VMware Infrastructure 3 (VI3) host OS can be an involved process, as can simply adding a host. Because of the length of time a system may be offline, or because of what exactly is being performed, keeping the hosts in an isolated cluster provides a safe environment for virtually any task. An isolated cluster is not subject to the same DRS and high availability rules that apply to the live cluster. Further, you can reboot the host as needed without wondering whether it will affect the live workload.
With the isolated cluster, the following tasks can be safely performed:
-Version upgrades (ESX 3.0x to 3.5 Update 1)
-Adding a new host to an existing cluster
-Testing network connectivity to various port groups
-Confirming VMware HA and DRS performance within an isolated set of rules
-Importing or configuring new storage types
A good practice is to keep the isolated cluster less visible to the live workload and to name it accordingly within the VMware Infrastructure Client; a name like “TestingCluster” or “ZZUpgradeCluster” distinguishes the collection from the live workload.
The figure below shows such a cluster, named and positioned accordingly, containing one host in maintenance mode for any task better suited to an isolated environment:
It is important to note that this cluster still requires licensing, just as it would in a live workload cluster. More information on creating a VI3 cluster can be found in the VI3 online library.