I wanted to comment on some recent controversy concerning the Best of VMworld award winners by explaining in detail how the judges decide the winners in each category. I’ve been a judge for two years now, so I know how challenging it can sometimes be to pick the winners.
Let me begin by explaining what we are not doing when judging products. We are not installing each product and testing it out to determine a winner; that’s not what the awards are about. We are not installing product A in a lab and seeing how it performs against product B to try to determine which product is better. In fact, in many cases product A may do something completely different from product B, and a direct comparison is not possible. In the hardware category, for example, the winner was the Cisco UCS and one of the finalists was the Xsigo I/O Director, two very different products that aren’t directly comparable to each other.
I can’t speak for the other judges, but here is what I do when judging. We get the list of nominees about two weeks before VMworld begins. First we pre-judge to make the number of nominees that we have to visit at the show more manageable. There are really no restrictions on who can enter, and oftentimes products get nominated that have very little to do with virtualization. For example, if you have a non-virtualization product that you happen to package as a virtual appliance, giving it only a very weak tie to virtualization, there is a good chance it will not make it past the pre-judging.
When pre-judging, I often have not heard of some of the products in a category, so I have to take extra time to research each product and see if it is worthy of consideration. My method for doing this usually starts at the vendor’s website, where I’ll read through product datasheets, look at any online demos, and sometimes look through the product documentation. Once I’ve determined what the product does, I’ll make the call as to whether it meets enough of the criteria to make it past the pre-judging and whether it has a chance against the other products in the category.
Speaking of criteria, here are the official criteria used to judge products:
- Innovation – Does the product introduce new capabilities or provide significant improvements to standard capabilities? Does it break new ground or redefine the product category?
- Performance – Does the product perform to the degree that it could improve overall data center operation?
- Ease of integration into environment – How easily does the product integrate with other products? Can the product operate effectively in heterogeneous environments? Have vendors certified the product?
- Ease of use and manageability – Is the product easy to install? Are the product’s functions clear and easy to learn and run? Will the product scale smoothly to accommodate growth in an environment?
- Functionality – Does the product deliver as promised? Does the product provide greater or more useful functionality than others in its category? Are there serious functional omissions?
- Value – Does the product represent a cost-effective solution? Can the ROI be easily justified?
- Fills a market gap – Does the product fill needs and solve problems that other products do not?
The big one in my book is innovation, followed by filling a market gap. Innovative products always have a good shot at winning, as do products that solve a problem no other product does. Performance can be a bit misleading: we are not performance testing the products we judge, and we are not doing head-to-head performance comparisons. Performance here is not about raw numbers like IOPS but is meant in more general terms. My interpretation is something like this: a process that normally takes me two hours to complete, product A lets me complete in 15 minutes.
After we pre-judge and remove the products that we feel do not have a chance at winning, we wait until the show to visit vendor booths and find out more about the products that made it past the preliminaries. We only have one day at VMworld to do this (about five hours total), and I spend anywhere from 30–60 minutes with the vendors that I feel have the best chance of winning in their category. (I had a smaller category this year, so in other categories judges may have spent less time at a booth.) While I’m with a vendor I’ll see product demos, ask questions, and find out as much about the product as I can to see if it’s a candidate for winning.
I’m only interested in technical information and am not interested in a marketing pitch, so the vendors usually pair us up with their best technical people. Once I’ve found out enough information, I move on to the next vendor. After we’ve finished, the judges for each category talk it over to determine the winner and finalists. Next we move on to a deliberation meeting with all judges to discuss our picks and hear any feedback from the other judges. Sometimes a judge from another category has good input that helps us decide on a winner.
The bottom line is that judging is just that: a judgment call based on the information the judges learn about each product, not the result of head-to-head lab comparisons. It’s not always easy to name winners, but we do the best we can with the information we have. Sometimes vendors make it easy for us by making great products that are clear winners, but often there are a lot of great products in a category and the decisions are difficult.
This year we had one gracious finalist who did not win the gold acknowledge that the gold winner for that category deserved to win it. That showed a lot of class in my book and helped affirm that we chose the correct winner.
We can only choose one winner; that’s the nature of the beast. We know a vendor’s product is its baby and much like a proud father who wants his child to excel, vendors want their product recognized as the best. Our challenge as judges is to see each product in its best light and then choose a winner based on the criteria of the awards. So before you criticize the winners, the judges, or the process, please take a minute or two to understand the inherent challenge in judging and realize that the judges and editors paired with the judges do take the process very seriously.