In vCenter Server you can display views that list your hosts and virtual machines (VMs). These views are useful for displaying the configuration and status of all your hosts and VMs, and they are customizable: you can display or hide predefined columns to control what is shown. But what if you want more information than the predefined columns provide? You can use custom attributes. This feature is unique to vCenter Server and lets you display additional information in the columns or in the Annotations section of the host/VM summary page. For example, you could add columns that display information about the OS or the server's function. You can also sort by custom attribute columns to group related information together.
As mentioned, this is a feature of vCenter Server only, so if you connect to a host directly using the VMware Infrastructure Client (VI Client) you will not see these attributes. They are stored in the vCenter Server database in two tables: the attribute definitions in the VPX_FIELD_DEF table and the attribute values in the VPX_FIELD_VAL table. Because this data is custom, if something happens to your database these attributes will be lost; unlike some other tables, they are not repopulated when hosts are added back to vCenter Server.
You can create these attributes in two different ways: either by selecting Administration and then Custom Attributes from the top menu, or by selecting a host or VM in the left pane, selecting the Summary tab in the right pane and, under Annotations, clicking the Edit link. VMs also have a Notes field built in to the vCenter Server database; this field is not considered a custom attribute. The Notes field is useful for documenting general information about a VM, but custom attributes provide additional fields so you can split information up rather than lumping it all together in the notes.
Now let’s walk through how to add custom attributes.
You can add or remove attributes once you have the Custom Attributes window open. When you add a new attribute, you have a choice of three types: Host, Virtual Machine or Global.
You may not see the Host and Virtual Machine types if you select a machine individually and click the Edit link instead of using the top menu option. Host and Virtual Machine attributes apply only to those object types and will only be displayed for each one; Global attributes are displayed for both.
Once you add the attribute it will be displayed in the Annotations section for the host/VM and will also be available as a column to be displayed in the Host/VM views.
Now that you have your custom attributes defined, you can start setting the values for each VM or host. To do this, first select the host or VM, then click the Edit link in the Annotations section of the Summary tab. You can also set values in the view screens by clicking under the column that you want to update, on the row for the host or VM in question. This method is faster than setting them individually on each host or VM.
If you want to further customize the columns, simply right-click on a column heading and you can check or uncheck the columns that are displayed, including both the predefined columns and your custom columns.
Using custom attributes is a great way of documenting your hosts and VMs inside the VI Client so you know more about them. Oftentimes VM names are not descriptive, so these extra fields can provide more information to anyone who uses the VI Client, and they can help prevent mistakes such as accidentally rebooting the wrong VM.
I read both the Virtualization Review performance testing article and the related commentary with quite a bit of interest. I came across this statement in the article:
The key to making this cross-comparison of the hypervisors is ensuring a consistent environment. For each test, every hypervisor was installed on the same server using the same disk system, processor quantity, and memory quantity. The same hardware was used for each test and all software was installed and configured the same way for each test.
I find this confusing. I interpret it to mean that identical hardware was used for testing, as well as some sort of install of the software. But I need to ask: What software? Are we talking about the contents of the virtual machines, or about the hypervisors? What exact versions of the hypervisors were tested? What the paragraph above does not speak to is the layout of the VMs within the disk system. Did the VMs share a LUN, or did each use its own? Hyper-V often requires one LUN per VM if you want Quick Migration support. Were the other hypervisors' VMs laid out the same way? Since disk I/O is a major factor in performance, I would have expected this to be spelled out in great detail.
The other parameters that I expected to see explained were how the VMs were imported into the hypervisors. Were they stood up as new VMs for each hypervisor? Were the VMs the results of a P2V migration? Or were they imported from a library of VMs? Were paravirtualized drivers installed within the VMs? Just how were the VMs configured and created?
I was looking through the article for several numbers; one I found, the others I did not.
- How many times did the testers run each test? The engineer in me wants to know the math behind the numbers. Perhaps there should have been a graph of runs vs. results — this would help me to determine if caching was involved.
- Were the tests run sequentially? I ask this because disk information is cached at several levels, so we need to know whether the tests were running from cache or directly from disk. If you have one VM per LUN, cache comes into play quite a bit. Several disk I/O tools (iozone, for example) can disable disk caching, but repetitive tests will often live within cache, which is not an accurate disk I/O measurement, since on an average system disk accesses will not be served from cache. In essence, while the tests were supposed to measure just the hypervisor, the disk subsystem is part of the whole. Was disk I/O time included in the equation? It appears that it was, which raises the question: how were the VMs laid out on disk? Were they all using the same LUN? If so, then the LUN got pummeled and the numbers are all suspect.
- Where are the numbers for running this same test on physical hardware? Otherwise how can we know if the CPU, RAM, and disk operations numbers make any sense or not? Where is the baseline for the tests?
The most interesting number to me was the time clock, but without knowing how many tests were run, the numbers are so close that the differences could amount to nothing more than round-off or truncation error. Ever seen a poll with a documented 3% margin of error where the results are within 2% of each other? Statistically, those results are indistinguishable. So where are the statistics behind the time clock values?
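A quick sketch of the statistics I'm asking for, using made-up run times (the values below are invented for illustration): given one benchmark time per line, compute the sample size, mean, and standard error. If two hypervisors' means differ by less than roughly twice their combined standard error, the difference is noise.

```shell
# One run time per line (invented numbers); report n, mean, and standard error.
printf '%s\n' 301.2 298.7 305.4 299.9 302.8 > runs.txt
awk '{ n++; sum += $1; sumsq += $1 * $1 }
     END {
       mean = sum / n
       sd   = sqrt((sumsq - sum * sum / n) / (n - 1))
       printf "n=%d mean=%.1f stderr=%.2f\n", n, mean, sd / sqrt(n)
     }' runs.txt
```

With numbers like these in hand, a claimed difference between hypervisors can be judged against the noise instead of taken at face value.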
What I do like about this test is that it is an unoptimized test: no one went in and tuned the number of spindles per LUN, the SQL implementation, or the other workloads. In other words, the testers did not go out of their way to make anything look particularly good. They instead did exactly what a user would do: installed a workload and let it run. The average user is simply not going to optimize his install; he will just run it and expect the best.
On the other hand, many vendors will optimize everything to garner the best results. They will spend hours if not days tweaking this and that to get the best results. I even know one vendor who put a special bit in their hardware that once toggled would just repeat whatever was in cache on the hardware 10 million times. The vendor got great results but they were not real-world results, as you could never pump data to the hardware that fast through the other layers involved. We need more unoptimized performance tests.
If you are going to run performance tests between hypervisors, take care to ensure your VMs are laid out identically with respect to storage, document the use of any paravirtualized drivers, and run the tests many times so that you can get your statistics right.
You may find that there is just no clear winner, leaving your choice of hypervisor dependent entirely on the advanced features available within each product.
Recently I experienced a VMware HA event in my environment that caused the VMs on the affected hosts to be restarted on other servers. While most of the VMs started OK, a few did not. When I manually tried to start them I received the error “Failed to power on VM – No swap file” and the VMs would fail to start. What happened is that several VMs were left in a zombie-like state because they were not shut down gracefully. Even though their status was displayed as shut down in the VMware Infrastructure Client (VI Client), there was still a process running on an ESX host that prevented them from being started.
In effect, while the VM’s OS was not running, the VM was still in a running state on an ESX host and had a .vswp file out there that could not be deleted. As a result, when another host tried to start the VM, the .vswp file could not be created because the original host still had a lock on it.
To resolve this situation I had to find the host that still had a running process for the VM and forcibly terminate that process. To do so I logged in to the service console of each host and ran the following command: ps auxfww | grep <VM name>. This command returns a list of running processes that contain the name of the VM.
When you run the ps command with the VM name, you will always get at least one result regardless of whether the VM is actually running on the host, because the grep command itself shows up in the result list (the VM name appears on its command line). If the VM is actually running on the host, you will receive two results instead of one. The second result is much longer, spans several lines of text and contains the path to the VM's .vmx file. It also contains the process ID (pid) of the VM, which can be used to forcibly terminate it; the pid is in the second column of the results, right after the username (typically root). In the example below, the first result, with a pid of 25914, is the grep command itself, and the second result, with a pid of 23896, is the running VM.
[root@esx1 root]# ps auxfww | grep win2003-1
root 25914 0.0 0.2 3688 676 pts/0 S 13:17 0:00 \_ grep win2003-1
root 23896 0.0 0.2 2008 864 ? S< Feb13 4:12 /usr/lib/vmware/bin/vmkload_app /usr/lib/vmware/bin/vmware-vmx -ssched.group=host/user -# name=VMware ESX Server;version=3.5.0;licensename=VMware ESX Server;licenseversion=2.0 build-123630; -@ pipe=/tmp/vmhsdaemon-0/vmxd0af4bb011822fc5; /vmfs/volumes/442d541b-cb5a815d-6083-0017a4a9c074/win2003-1/win2003-1.vmx
Now that we know the pid of the VM (23896), we can forcibly terminate it by typing kill -9 23896. You can verify that the VM process has been terminated by running the ps command again; only one result should be returned. Now that the stale process has been stopped, you can power the VM on using the VI Client and you should have no problems this time.
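The lookup-and-kill sequence above can be wrapped in a small helper script. This is just a sketch (the VM name is the hypothetical one from the example); the grep -v grep filter drops the search command itself, as an alternative to eyeballing which of the two results is the grep:

```shell
# Print the pid(s) of processes whose command line contains the given VM
# name, filtering out the grep process that performs the search.
find_vm_pid() {
    ps auxww | grep "$1" | grep -v grep | awk '{ print $2 }'
}

# "win2003-1" is the hypothetical VM name from the example above.
pid=$(find_vm_pid "win2003-1")
if [ -n "$pid" ]; then
    echo "stale VM process found: pid $pid (terminate with: kill -9 $pid)"
else
    echo "no running process for win2003-1 on this host"
fi
```

Once the helper confirms the pid, the kill -9 step is the same as above; double-check the pid before terminating anything.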
Today I finally bid farewell to a mountain of old server hardware, the result of a virtualization project. Dozens of old physical servers had been sitting in a storage room for over a year, all replaced by just four new virtual hosts. These old servers served well in their day but were victims of the inefficiency typical of non-virtualized servers. A computer salvage company came and picked them all up to be shipped off to a computer graveyard, which became the final chapter of this virtualization journey.
It’s amazing to see a data center before and after a virtualization project. My data center is full of empty server racks. Where 18 racks used to be filled with servers in the past, today only two racks are needed to hold what remains. The pile of old servers and an empty data center are strong visual evidence of how virtualization can dramatically change a data center environment.
When trying to make a case for a virtualization project, the higher-ups who make the decisions are often just reading through business cases and studying ROI numbers. They may not have a strong grasp of what virtualization actually is or of the many additional benefits that business cases and ROI documents sometimes do not cover.
Therefore it is important to help them understand what virtualization really is and what it offers. Visual aids can make a strong case for you: seeing before-and-after pictures from other companies that have virtualized their data centers, or even visiting those data centers, can have a dramatic impact and be of great aid in your quest to virtualize. Other visual aids like the following can be equally valuable:
- Seeing a VM move while running from one host to another because of VMotion
- Seeing VMs restarting automatically on other hosts when a hardware failure occurs because of HA
- Seeing clusters automatically load-balancing themselves with VMware DRS
- Seeing a NIC cable disconnected to simulate a NIC failure and causing no impact to the network connectivity of VMs because of vSwitch NIC failover
So when making a case for virtualization in your environment, be sure to include as much visual evidence as possible. Doing so will help the decision makers understand what virtualization is truly about.
VMware recently made a technology preview of vCenter Server for Linux available. I decided to test it out and found some disappointing limitations.
vCenter Server for Linux is really for Oracle
I downloaded the open virtualization format (OVF) appliance and imported it into my VMware ESX host using the VMware Infrastructure Client, and immediately discovered that vCenter Server for Linux will not work with any database other than Oracle. The rest of the limitations are not as big a concern and are generally what you would expect from a technology preview, mostly missing functionality within vCenter for Linux itself. But the lack of support for the databases that come as part of nearly every Linux distribution is a major issue.
This caveat would not be as much of an issue if VMware’s vCenter Server for Linux offered support for PostgreSQL, MySQL, or even SQLite, but requiring an expensive, closed-source Oracle database for a technology preview is extremely limiting and, quite frankly, a decision that suggests vCenter Server for Linux was not really created for Linux administrators, but for the few who have already purchased Oracle. Is this just a ploy that will drive away potential Linux customers? Or is it a ploy by Oracle to tie vCenter for Linux to Oracle and boost Oracle adoption? I do not know, but failing to provide interfaces to standard Linux databases is not the Linux way.
Being the Linux person that I am, I did attempt to get MySQL to work with vCenter Server for Linux. First I had to get open database connectivity (ODBC) to attach to my MySQL database server, which required editing a few files and making the necessary modifications for MySQL support. Once I finished doing that I could connect using ODBC to MySQL, but I still needed a database to make it work.
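For anyone curious, the edits boiled down to registering the driver and a data source with unixODBC, along the lines of the sketch below. Every name, host and path here is a placeholder (the driver library location in particular varies by distribution), and this reflects my experiment, not a supported configuration:

```ini
# /etc/odbc.ini -- data source definition (DSN name, host and database
# are placeholders)
[vcdb]
Driver   = MySQL
Server   = dbhost.example.com
Port     = 3306
Database = vcdb

# /etc/odbcinst.ini -- driver registration (library path varies by distro)
[MySQL]
Description = MySQL ODBC driver
Driver      = /usr/lib/libmyodbc3.so
```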
I then started with the database schema for Microsoft SQL Server and made changes to the schema and database views to allow the database to work with MySQL. Whoever said SQL works everywhere is fooling themselves! After an upgrade to MySQL 5.1 I finally had everything imported. Restarting the VMware vCenter for Linux server and looking at its log files showed the failure: MySQL is not a recognized database, and vCenter for Linux died. This was just a bit surprising to me.
So what are the supported databases? Doing a little research on the daemon itself I discovered that supported databases are Microsoft SQL, Oracle, PostgreSQL, and something called Other. I am not sure what Other is but it is not something standard with Linux. Since PostgreSQL was in the list, I decided to give it a shot.
Once more I hooked up ODBC to my PostgreSQL server and then, starting with the Oracle database schema, ported it to PostgreSQL. After several modifications similar in nature to the ones for MySQL, I had a database schema that would load into PostgreSQL. I restarted vCenter for Linux and noticed it would connect, but returned a very strange error related to the tables. After checking a few logs, I realized it had to be something inside vCenter for Linux that caused the problem, not a PostgreSQL issue. Perhaps it is related to the way PostgreSQL case folds, but my tests show there is nothing we can do within PostgreSQL to make it work with vCenter Server for Linux.
MySQL and PostgreSQL can handle the database creation as well as the port of the stored procedures, but you are limited to UTF-8 characters within all string variables (which would impact Cyrillic and Asian users of vCenter for Linux). vCenter for Linux does not accept the use of MySQL; it flat out denies it. PostgreSQL is not denied, but it will not work as a database for vCenter either, due to other errors. Some people suggest using the free Oracle database that is available, but it has size and other limitations, and then I would have to support yet another database in my environment.
I can only surmise that vCenter for Linux is not a reality for the general GNU/Linux environment but only for enterprise administrators, who may never need vCenter for Linux as they already have Microsoft Windows database servers. I wonder about the target audience for this endeavor. I own an SMB and I am required to run one Windows server just for vCenter. Ideally I wouldn’t have to do this, nor would I want to run Oracle, as Oracle is a very expensive option for an SMB. vCenter is expensive as well, but buying one expensive license instead of two is a savings for many an organization.
VMware: Please implement either MySQL or PostgreSQL support and let me use the GNU/Linux tools I already use!
Having been selected as a vExpert by VMware for 2009, I wanted to comment on what I think being named a vExpert truly means.
First I want to comment on what this award is about. I believe the name vExpert is a bit misleading as it’s not really about being an expert — although the people that have received the award indeed have a high level of expertise using VMware products. Instead, the award is more about giving back to the VMware community. There are many of us that freely share our knowledge and experience working with VMware products with other users and members of the VMware community. In doing this we ask nothing in return. We love VMware products, many of us live and breathe it and we don’t mind giving back and helping others in need so they may benefit from what we know.
Only 300 vExperts were chosen by John Troyer; I don’t envy him, as choosing only 300 worldwide could not have been an easy task. Many of the vExpert class of 2009 are bloggers, VMware User Group leaders, VMware evangelists and VMTN community members. While I don’t know all of them, I know many of them. I’d like to comment on a few, because they are some of the brightest and most giving people I know.
First I’d like to recognize all the members of the VMware Communities forum, known as VMTN. This is a very active, thriving community that offers some of the best support to users in need that I have ever seen. There are a few members of the VMTN community that I would like to specifically acknowledge; these people are my peers and, more importantly, my friends.
This group of people includes (in no particular order) Ken Cline, whose knowledge and experience continue to astound me; Dave Mishchenko, whose dedication to helping others is amazing; Edward Haletky, who knows more about security than anyone I know (and is always happy to prove it); Oliver Reeh, who inspired me to start my website, VMware-land; Steve Beaver, whose passion for virtualization is highly infectious; Jason Boche, who is the biggest VMware geek I know; Tom Howarth, who is determined to keep spelling it “virtualisation” because he’s British, but who sure knows a lot about it; Robert Dell’Immagine, who runs the VMTN communities; and John Troyer, who is VMware’s head blogger, cheerleader and evangelist.
I’d also like to acknowledge a few fellow bloggers from the thriving VMware user community who inspire me on a daily basis to help others and be the best VMware admin that I can be. This includes Eric Sloof, Scott Lowe, Duncan Epping, Rich Brambley, Mike Laverick and my fellow bloggers at TechTarget.
There are many other bloggers and members of the VMTN community whom I haven’t mentioned, but you know who you are and you are all an inspiration to me and also to the rest of the community.
Together, your selfless teaching and dedication to helping others are a credit to you all and a big reason for the continued success and popularity of VMware.
So even though the vExpert award leads one to believe that these people are simply technical experts in their field, I believe the true meaning of being a vExpert, and maybe the more important part, is giving back to the community. To all of you who are VMware technical experts and take the time to freely share your knowledge: you are also vHeroes.
A recent VMware Knowledgebase article has a great matrix for collecting diagnostic information for many VMware products. One product that was missing from the lineup, however, was VMware ESXi, so I thought I would cover the methods for collecting diagnostic information from an ESXi host in addition to covering the collection methods for VMware ESX and vCenter Server.
Typically when you open a support case with VMware, they will want you to generate diagnostic bundles that they can use to diagnose your issue. Collecting this diagnostic information from ESX/ESXi hosts and vCenter Servers consists of packaging together various log and configuration files, as well as command output and performance data, that can be used for troubleshooting purposes. There are two ways to do this: you can use the vm-support script, which is run from the ESX service console/ESXi management console, or you can use the Export Diagnostic Data option in the VMware Infrastructure Client (VI Client).
The vm-support script can be run in the ESX service console, but you can also run it in the ESXi hidden technical support management console. (For information on accessing the ESXi hidden console see How to access the VMware ESXi hidden console.)
When you run the vm-support command without any options, it creates a single .tgz file containing thousands of files that can be extracted and viewed to troubleshoot problems. The size of this file varies with factors such as how many VMs are on your host; a typical size is around 20 MB. Before running the vm-support script you should switch to the /tmp directory, as the .tgz file is created in the directory where you run the script. Once the file is created you can copy it to your workstation using a program like WinSCP and extract it using either the Linux tar command or an application like WinZip. These files are very useful to VMware support for troubleshooting your problem. You can also specify parameters when running vm-support to collect specific information or perform certain tasks.
Below are some of the most commonly used parameters:
- -n – causes no core files to be included in the tar file
- -s – take performance snapshots in addition to other data
- -S – take only performance snapshots
- -x – lists world ids (wid) for running VMs
- -X <wid> – grab debug info for a hung VM
- -w <dir> – set working directory for output files
- -h – displays help for command usage and available options
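Because the bundle is an ordinary gzipped tar archive, unpacking it on a Linux workstation takes one command. The sketch below fabricates a stand-in bundle first so the steps can be followed end to end; a real bundle comes from vm-support and is named along the lines of esx-<date>.<pid>.tgz:

```shell
# Fabricate a stand-in bundle (a real one is produced by vm-support; the
# filename here is illustrative).
mkdir -p stage/var/log
echo "sample log line" > stage/var/log/vmkernel.log
tar -czf esx-support.tgz -C stage .

# Unpack into its own directory and count the collected files.
mkdir -p extracted
tar -xzf esx-support.tgz -C extracted
find extracted -type f | wc -l
```

Extracting into a dedicated directory keeps the thousands of files in a real bundle from spilling into your working directory.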
If you do not want to use the command line to collect this information, you can use the VI Client. You can connect to a single ESX or ESXi host, or generate diagnostic bundles for multiple hosts by connecting to a vCenter Server. Once you connect, select File from the top menu, then Export, and finally the Export Diagnostic Data option. Alternatively, you can click the Administration button, select the System Logs tab and click the Generate Diagnostic Data button. Or, to collect vCenter Server-only diagnostics, you can use the Generate VirtualCenter Server log bundle – FULL option directly on the vCenter Server (Start > All Programs > VMware > Generate VirtualCenter Server log bundle – FULL).
When connected to a standalone host you can only opt to select a directory on your workstation to store the file. When connected to a vCenter Server you can collect diagnostic data from multiple hosts as well as the vCenter server. Once you select the hosts, a task will be created in the VI Client, the vm-support script will run on the hosts you selected, and the resulting .tgz files will be downloaded to your workstation. If you also chose to include the vCenter Server it will create a separate diagnostic bundle for it in a zip file format.
As these diagnostic bundles also contain configuration files, it is a good idea to generate them periodically even if you do not have a problem, so you have a backup of your host configuration. Just be sure to clean up the .tgz files on your hosts afterward to avoid filling up your disk partitions.
A lot has been written about virtualizing Microsoft Exchange, but comparatively little on virtualizing Lotus Domino which is also a very popular email platform used by thousands of companies. I was a Domino administrator for more than 10 years, so I thought I would share some tips for running Domino on virtual hosts.
Like any email server, Domino has very high disk input/output (I/O) activity because of the many activities that occur on the server, including full-text indexing, view updates, mail router activity, multiple agents, maintenance tasks such as fixup and compact, and much more. Additionally, Domino servers tend to have high CPU and memory usage as well as network I/O. So is a Domino server a good candidate for virtualization? In most cases yes, except for the busiest of workloads.
Lotus Domino has been using application virtualization for almost 10 years with its built-in partitioned server technology that allows multiple independent Domino servers to run on a single server and operating system. I’ve personally run up to six separate Domino servers on one server without any issues. So, moving to operating system virtualization is a natural and more efficient progression for running multiple Domino servers on a single server.
Because Domino is similar to Exchange as far as resource utilization and workloads, many of the same methods and best practices that are recommended for Exchange can be used with Domino. However, Domino is a bit different than Exchange and there are a few things you should be aware of when virtualizing Domino.
- Since Domino has very high disk I/O make sure you architect your storage for maximum efficiency. This includes using the fastest RPM drives that you can, configuring your RAID groups to have more drives, adjusting your queue depths and using the largest cache possible on your storage controllers. Additionally, use the fastest storage that you can. Fibre Channel tends to be the fastest but hardware iSCSI is also a good choice (using NFS storage is not recommended).
- Split your disk partitioning into multiple physical LUNs or RAID groups; put your operating system, Domino databases and transaction logs on separate partitions.
- Align your VMFS partitions for maximum throughput and minimum latency; see Aligning disk partitions to boost virtual machine performance for how to do this.
- While you should always take care when assigning more than one vCPU to a VM, Domino is a very multi-threaded application and usually works best with multiple vCPUs, especially for medium to large Domino installations. The more cores available on your host, the better; don't assign a VM four vCPUs if you only have four total cores on your host server.
- Don’t skimp on the memory that you assign to your Domino VM. Domino requires a lot of memory, and the more you provide, the more it has available for caching. Also, do not set memory limits on your Domino VM, and use memory reservations to ensure it is not forced to swap to disk if physical host memory is exhausted.
- You don’t want too many busy Domino servers running on the same ESX host. Because DRS balances VMs based only on CPU and memory usage, not on disk or network I/O, you should create anti-affinity rules in DRS to make sure the Domino servers stay on separate hosts.
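On the partition-alignment tip above, one quick check is the start sector reported by fdisk -lu: with 512-byte sectors, a start sector divisible by 128 sits on a 64 KB boundary. The sketch below runs the check against canned fdisk output so it can be followed anywhere; on a live guest you would pipe the real fdisk -lu listing (the device name is a placeholder) through the same awk script, and note that a boot flag (*) shifts the start sector to the third column:

```shell
# Canned `fdisk -lu` partition lines: the classic default start sector of 63
# is misaligned; 128 lands on a 64 KB boundary.
cat <<'EOF' > fdisk.out
/dev/sda1            63      208844      104391   83  Linux
/dev/sda2        208845     4401809     2096482   82  Linux swap
/dev/sdb1           128      401623      200748   83  Linux
EOF

# On a live system: fdisk -lu /dev/sda | <this awk>  (device is a placeholder)
awk '$1 ~ /^\/dev\// {
    status = ($2 % 128 == 0) ? "aligned" : "NOT aligned"
    print $1 ": start sector " $2 " is " status
}' fdisk.out
```

Misaligned guest partitions have to be recreated with an aligned start sector; the check itself is read-only and safe to run anywhere.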
For more tips on virtualizing Domino on ESX hosts, check out the links below:
- IBM Technote – How to size Domino and Sametime systems for full production loads on VMware ESX Server
- IBM Technote – Troubleshooting performance issues for a Domino server on VMware ESX
- VMworld 2007 presentation (free registration required) – IBM Lotus Domino and Lotus Sametime on VMware Infrastructure 3
- VMworld 2008 presentation (available only to attendees or subscribers) – Best Practices for Virtualizing IBM Lotus Domino with VI3
A few days ago, I posted a blog about a VMware award that was announced in January 2009. This award is known as the vExpert Award, and those who receive it become known as vExperts for the following year.
In a previous blog post with an admittedly lighthearted tone, I congratulated the recipients and asked for more information about the award and the process for receiving one. Reading the few available online resources on vExpert left some of my questions unanswered. (While I did reply in the comment section on the original post, we decided to remove the blog and post this entry instead, because the comment wasn’t visible enough.)
On SearchVMware.com and SearchServerVirtualization.com, we have run several rounds of product awards, and process is always important, so I was naturally curious to see whether there were further criteria available for what comprises a vExpert.
Several community members became upset, however, as the blog post was interpreted by some as a denigration of the vExpert Award or an indication that I didn’t think certain recipients deserved the award. I apologize for writing it in a way that left room for misinterpretation.
The vExpert Award selection process clarified
John Troyer, VMware Communities outreach and vExpert program manager, graciously answered my questions. Because of his answer, in addition to knowing how many awards were given and what the new vExperts receive, I also now know that the vExpert selection process wasn’t based solely on self-nominations. There were internal nominations provided by VMware, and many people nominated others whom they believed should be recognized.
Troyer also said that most of the nominations were indeed highly qualified but that VMware only had 300 spots. The actual recipients demonstrated that they gave their time and effort back to help others, either via blog, user group, or publication. He further commented that the vExpert is not a measure of raw technical expertise, as someone could be well versed in VMware technologies but not qualify as a vExpert, and that it may appear that many bloggers were recognized as vExperts, but that’s because the best virtualization bloggers have self-assembled.
Troyer also mentioned that vExperts would see additional benefits over those already announced (for more details, see the VMware vExpert page).
VMware is to Microsoft as vExpert is to MVP?
Is VMware developing an award that will one day act as the VMware equivalent to the Microsoft Most Valued Professional (MVP) Award?
Currently, the MVP Award program is conducted by eight people, and there are 3,500 MVPs around the world out of 100 million active community members. With 300 vExperts, the vExpert Award program has some room for growth if VMware wants it to become the equivalent of the MVP — but as the vExpert Award is in its first year, there’s plenty of time for development.
For comparison purposes, the MVP website outlines the selection process as follows:
MVP nominees undergo a rigorous review process. Technical community members, current MVPs, and Microsoft personnel may nominate candidates. A panel that includes MVP team members and product group teams evaluates each nominee's technical expertise and voluntary community contributions over the prior year. The panel considers the quality, quantity, and level of impact of the MVP nominee's contributions. Active MVPs receive the same level of scrutiny as other candidates each year.
MVPs receive a certificate and a thank-you gift, as do vExperts.
MVPs also receive complimentary subscriptions to the Microsoft Developer Network and TechNet, access to private MVP newsgroups, and an invitation to the MVP Global Summit at the Washington State Convention and Trade Center in Seattle and at Microsoft’s headquarters.
Will the vExpert Award program evolve to become the equivalent of the MVP Award program? Will there be a vExpert Global Summit near VMware’s headquarters in Palo Alto, Calif.? SearchVMware.com will be watching.
And once again, congratulations to the first round of vExperts.
While I could not attend physically, I have been able to attend this year's VMworld Europe remotely. This has got to be one of the most connected conferences I have had the privilege to virtually attend. Blogs, tweets, chats, videos and emails have been flying around, and thanks to all of it I have had a chance to keep up with all the announcements, sessions, and shenanigans. I even virtually "ran around" looking for the same information I would look for in person, specifically on VMware vShield Zones. The guys on Twitter were a great help in getting me information and pointers to other resources.
Twitter has been the best source of instantaneous information. I may have trouble walking and chewing gum at the same time, but attendees of all types are somehow tweeting in a constant flow. Everyone at VMworld must have a PDA phone!
To get in on the virtual action, check out these resources:
- VMware VMTN Blog with its tweetgrid
Videos have also been a great source of information and are fun to check out:
- Gabe’s Virtual World for some fun videos as he takes on some P.I. work.
- Eric Sloof’s interviews
- You can always search on YouTube for “VMworld Europe”
The blogs have been fantastic, from the moment-by-moment posts written during keynotes to the end-of-day wrap-ups by presenters and attendees. Some to check out are:
Jason Boche, a fellow VMware vExpert 2009 and VMware Communities Round Table panelist, also called in from the VMworld VMware party to give us a brief scoop. Thank you, Jason!
Being remote from VMworld, I felt disconnected at times, as I was waking up six hours after the show day had started, but it was still very easy to keep up. I was able to sit back and really think about the announcements and presentations. So now you may wonder: should you forgo attending VMworld 2009 in San Francisco? Not at all. The press coverage and deluge of information are heady, but at the same time it is hectic at best.
Even so, after having time to think about the announcements, I've concluded that there was quite a bit of VI4 clarification, but there were also some new announcements. First, the clarifications and updates:
- a new name (vSphere) for the VMware Virtual Infrastructure
- more specs for what vSphere 4 can handle in the way of virtual machine hardware (8 vCPUs and 256 GB of memory)
- information on new cluster limits (manage 64 hosts and 4,096 cores)
- information on the amount of memory a single ESX host can contain and use (512 GB)
- another new name (VMware vShield Zones) for the BlueLane product
- more vendors involved in VMware VMsafe
- plus many others
The new items:
- The Virtualization EcoShell Initiative from Quest
- Fluid Operations eCloudManager Provides Open Source VMFS Driver
- Opscheck from Tripwire
- plus many others
The really cool item: I asked whether I could drive over (since the engineers are local) and have them install it on my own Nokia N810….
I know I probably missed some announcements and improvements in all this, but I was only there virtually, and VMworld offers just as much information as any other technical show, if not more. Perhaps the cloud is already here!