Posted by: Brian Gracely
AWS, Cloud Computing, Cloud Management, CloudSpectator, CloudStack, Data Center, Enterprise, Gartner, Mission-Critical Applications, NetworkComputing, Open Source, Performance, Rackspace, SLA
I’ve written before about how Cloud Computing can be confusing (here, here, here). New vendors, legacy vendors, cloudwashing, free software, automation skills to learn, etc. Whenever there is chaos and confusion, many people look for something familiar to give them a sense of direction and proximity to their existing world. And while many pundits like to talk about how Hardware and Software are becoming commoditized, or certain services (such as “Infrastructure as a Service, or IaaS”) are becoming commonplace and non-differentiated, we still have confusion about some of the most basic building-block elements. Let me illustrate this with a couple of examples of activities you might undertake soon.
Lesson 1 – Not all apples are created equal
This past week, a couple of different groups (NetworkComputing, CloudSpectator) attempted to do baseline testing on various IaaS cloud services, in an attempt to compare them in an apples-to-apples format.
In 2013, if someone wanted to compare the cost, performance and features of a given IaaS service, you’d think that this would be a relatively simple task. Just pick a common unit of measure (CPU, RAM, Storage, maybe network bandwidth) and run some tests. Sounds simple enough, right? Think again.
The CloudSpectator report attempted to compare Performance and Price across 14 different IaaS providers. They used an entry-level “unit of measure” (1 VM, 2 vCPU, 4GB RAM, 50GB Storage) and ran their benchmark tests. The results were shown both in terms of raw performance and in a performance/price metric. Across a set of 60+ tests, the results showed that some Cloud providers scored better than others. The results also showed that certain providers were optimized for certain types of tests much more so than for other types of tests. Some of the results were hardware-centric, while cloud architecture or the associated cloud-management software influenced others. Big deal, you might say; that’s to be expected.
But what you might not expect is that not all of the Cloud providers even offered a 2+4 configuration. Some offered 1+4, 4+4 or slightly different variations, without the ability to customize. Still others offered a higher-performance “unit of measure” only on systems with much larger CPU/RAM footprints. So now the arguments started about whether or not the results were skewed because the “correct” platform may not have been chosen for each Cloud provider to deliver optimal test results.
The arguments about whether Price/Performance is a relevant measurement for Cloud offerings are valid. Sometimes the services available matter more to applications than raw performance or infrastructure. Sometimes they don’t. It depends on the application; one size does not fit all. And as we saw, one size isn’t always available to all, so end-users may have to do some re-calculations to compare Cloud services.
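Those re-calculations can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration of the problem described above: every provider name, price, and benchmark score is invented, and the normalization (score per dollar, score per vCPU) is just one possible way to compare offerings that don’t match the 2+4 “unit of measure”.

```python
# Hypothetical price/performance comparison across IaaS offerings.
# Provider names, prices, and benchmark scores are invented for
# illustration only -- they do not reflect real measurements.

offerings = [
    # (provider, vCPUs, RAM GB, $/hour, benchmark score)
    ("ProviderA", 2, 4, 0.12, 1800),
    ("ProviderB", 1, 4, 0.08, 1100),   # no 2+4 size offered
    ("ProviderC", 4, 4, 0.20, 3200),   # smallest size is 4+4
]

def price_performance(score, price_per_hour):
    """Benchmark score per dollar of hourly cost."""
    return score / price_per_hour

def per_vcpu_score(score, vcpus):
    """Normalize raw score by vCPU count to compare unequal sizes."""
    return score / vcpus

for name, vcpus, ram_gb, price, score in offerings:
    print(f"{name}: {price_performance(score, price):8.0f} score/$, "
          f"{per_vcpu_score(score, vcpus):6.0f} score/vCPU")
```

Note that the two normalizations can rank the same providers differently, which is exactly why the “correct platform” arguments start.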
Lesson 2 – Not all Cloud measurements are created equal
In 2013, with Cloud services having been available for many years, you’d assume that someone could just run a standardized set of tests to compare performance. And you’d be wrong. As pointed out by Joe Masters Emison, picking the right tools isn’t simple. And even when two groups choose to use the same toolsets (e.g. Unixbench, used by both InformationWeek and Cloud Spectator), groups are still going to disagree about the methodology, scale and UUT (Units Under Test).
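Part of why methodology arguments never end is that even identical tests on identical hardware produce a spread of results. The toy sketch below is not Unixbench; it just times an arbitrary CPU-bound workload several times and reports the run-to-run variance, illustrating why choices like iteration count and warm-up runs change a published number.

```python
# Toy illustration of why benchmark methodology matters: the same
# workload, timed repeatedly on the same machine, yields a spread of
# results. The workload is arbitrary Python arithmetic standing in
# for a real benchmark kernel.

import statistics
import timeit

def workload():
    # Arbitrary CPU-bound work; not a real benchmark.
    return sum(i * i for i in range(50_000))

# Ten timed runs of the same workload, five iterations each.
runs = [timeit.timeit(workload, number=5) for _ in range(10)]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
print(f"mean {mean:.4f}s, stdev {stdev:.4f}s, "
      f"CV {100 * stdev / mean:.1f}%")
```

A non-zero coefficient of variation on a single machine is the floor; comparing across providers adds noisy neighbors, different hypervisors, and different hardware generations on top of it.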
Lesson 3 – Feature Lists? We Don’t Need No Stinking Feature Lists
In 2013, with the daily bickering between open-source communities about which project has more momentum (as measured by developers, commits, lines of code, attendees at community events), you’d think you could easily compare feature lists. And you’d be wrong. Just for fun, try to find the feature list for OpenStack Havana. Go ahead, I’ll wait. Apparently there were over 400 features added in this release. If you didn’t happen to find it, I’m sure you “checked the code” before coming back, right? Don’t be embarrassed, you’re not alone. Finding the equivalent of a data sheet for Cloud software can be very complicated.
First off, open-source projects like OpenStack and CloudStack aren’t “technically” products, like you might expect to buy from an Enterprise vendor. They are projects, and the code also acts as the core documentation. And those projects don’t have marketing groups that make up “data sheets”. And none of that might matter to you if you’re just grabbing the code (from any open-source project) and starting to hack away. But if you’re an Enterprise IT organization, used to comparing various offerings before making an architectural or buying decision, things might be a little more complicated for you.
Lesson 4 – Getting “value” rarely comes free
Don’t let those first three lessons scare you off. Cloud computing can do some amazing things. CAPEX-to-OPEX shifts on the balance sheet. Provisioning of new applications in minutes instead of weeks. Scaling operations to adjust to business needs. Enabling newfound business agility through technology.
But it’s not free. It’s still very early days, and hence the groups creating innovation (and disruption) don’t always have time to document things or sit on standards bodies. They make stuff that’s easier to consume than in the past. They make stuff that hopefully makes your life easier and brings value to your business. But there’s still some effort that you’ll need to exert in order to get that value. You’ll compare apples to oranges; you’ll have to decide whose testing results you trust; you’ll have to dig a little deeper to find which capabilities help solve your problems.