So I was very excited when IBM officially launched its general purpose public cloud service. It was a validation of the cloud model for the enterprise; it was a chance to see what one of the premier technology vendors on the planet would deliver when it put shoulder to wheel on this exciting new way to deliver IT.
It’s got a growing user base, too: check out the “Profile” screenshot at the bottom. Not only do you get to see your IBM Cloud account, you get to see all your IBM friends, too. As of this writing, IBM’s cloud has 729 users running 2,436 instances, 578 block stores and 666 images.
Turns out it’s pretty much feeling its way along, just as Amazon Web Services (AWS) was 3-4 years ago. It’s…um…not polished, but it works. It’s a true public cloud experience, even if the pricing is logarithmic in scale rather than incremental (it goes from “quite reasonable” to “Oh My God” fairly quickly). You click and provision storage, instances, and so on. But it feels a little raw if you’re used to RightScale, the AWS Management Console and so on. It’s very bare bones at the moment.
It’s also abundantly clear that the IBM Smart Business Cloud - Enterprise (SBC-Enterprise) is exactly the same as the IBM Smart Business Development and Test Cloud. The transition to “enterprise class public cloud” is simply hanging a new shingle on the door. See the screenshots below; they haven’t really finished transitioning the brand on the portal pages, and it’s all over the documentation too. The test and dev cloud and the SBC-Enterprise cloud are one and the same.
But that’s fine by me- if IBM wants to take their dev cloud infrastructure and call it Enterprise, they can do that. I’m not aware of any conceptual reasons for not doing production in a cloud right next to where you do test and dev, besides the expectations for uptime and support.
What’s a little crazy is how much uber-complicated stuff is baked right in, like Rational Asset Manager and IDAM, even though actual use is a little creaky. This highlights the main difference between the approach IBM is taking and the approach AWS took. The very first thing you do on creating your IBM profile and logging in at http://ibm.com/cloud/enterprise is manage users. The last thing you do (or can do; I couldn’t even figure out where to do it) is check your bill and see how much you’ve consumed and how much you’ve paid. That’s almost the reverse of the AWS experience; again, nothing really wrong with that.
That’ll make the lone wolf supernerd webdork garage geniuses a little discomfited, but it’ll make the project managers very happy. Which is probably the point; IBM definitely isn’t courting startups.
Other notable positives: status updates and notifications about the service are baked into the dashboard and appear at the top of the screen. When I was using it, there was a helpful suggestion to pick the Boulder, CO data center because of heavy use at IBM’s flagship cloud DC in Raleigh. The provisioner even automagically put Boulder at the top of the list for me. Does anyone else remember when AWS didn’t even have a status dashboard? I do.
The web portal is reliable, fast and built on solid, modern open standards; no “Choose your browser with care, young padawan” here. Security is superficially reliable: you are forced to create a 2048-bit RSA keypair for your first Linux instance and strongly encouraged to do so for each additional one; the Windows instances enforce complex passwords and you don’t get to be Administrator at provisioning. Only ports 22 and 3389 are open, respectively. After you log in and start monkeying around, of course, you are on your own.
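If you want to sanity-check that posture on a fresh instance yourself, a quick TCP probe does the trick. Here’s a minimal sketch (pointed at localhost only so it runs anywhere; substitute your instance’s public address):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, bad hostname
        return False

# "127.0.0.1" is a stand-in so this sketch runs anywhere; point it at
# a freshly provisioned instance instead. On a new Linux instance you'd
# expect only 22 (SSH) to answer; on Windows, only 3389 (RDP).
host = "127.0.0.1"
for port in (22, 80, 443, 3389):
    print(port, port_open(host, port))
```

Anything beyond 22 or 3389 answering on a brand-new instance would be worth a support ticket.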
For the brief handful of hours I used it, the connection and response time were rock solid and quite acceptable. Disk I/O was fine both on the instances and to the attached block stores. There was only a little weirdness when patching or installing software. Licensing is all brushed under the rug, as it should be. Getting a test website up and running was trivial. Instances start with 60GB local storage and block stores start at 250GB. The REST APIs? Probably OK. I, uh, sort of ignored the API part of it. But they’re there.
However, it’s easy to see this is early days for an end-user service. I got the always-helpful “This is an error message: the message is that there is an error” at random. Provisioning a stock SUSE Enterprise instance reliably took 6-7 minutes each time. The Windows instance took north of 25 minutes every time (on my first try I used the delta to fumble around with SUSE and vi and get VNC going because I have apparently devolved into a helpless monkey without a GUI).
SUSE was no-password SSH, username “idcuser” (short for ‘IBM Dev Cloud User’, I’m guessing). When I created a privileged user and went in through VNC, I couldn’t sudo until I went back to SSH and set a password. idcuser can sudo in SSH without a password but not over VNC, apparently. Which is fine; I’m glad it doesn’t have a default root password, that’s a Good Thing (TM), but I had to figure that out. AWS just gives you a stock SSH session and everyone knows to change the root password first. IBM’s instructions exclude mention of the root password (“sudo /bin/bash/” in SSH instead).
I couldn’t save my modified instances as customized images to boot later: the SUSE image showed up in my repository after several tries but wouldn’t relaunch. I eventually managed to save a working image after a few days. The Windows image just refused to save; I gave up after a half dozen tries. There’s no way to upload my own images, so I’m stuck with a collection of RHEL, SUSE and a lonely Windows Server 2008 for a portfolio. I’d have to waste a lot of time to replicate my own VMs from that, and if I wanted to run any other OS, forget it.
Signing up was pleasantly easy, but I didn’t get access until a (very helpful) IBM rep contacted me and sent me a welcome kit via email. I was also mistakenly signed up for Premium Support Services without asking to be, which is one of those automatic eye-watering cost multipliers I mentioned before. Fortunately IBM has been quite responsive and I’m only getting charged for what I use. Anyone remember when AWS didn’t even have a support response option? I do. Again, IBM is working backward (in this case kind of a good thing).
All of this, most especially the lack of freedom in managing a VM library (couldn’t save, couldn’t upload) and the distinctly unfinished branding effort, puts IBM’s cloud in the “nice idea so far” category. It’s going to take some time and some effort for this to be anywhere close to the capabilities or integrated services AWS has. Only a true enthusiast would want to run anything in production on this. It’s still definitively not an enterprise-class cloud environment for running workloads.
It probably will be in time; unlike AWS, IBM doesn’t have to invent anything. I suspect the real issues at IBM around this are in tying in the appropriate moving parts and adding features with an eye to the end-user experience. IBM is historically not good at that, which is probably why AWS got there first. Amazon is good at that part. Scale matters too: IBM is one of the world’s largest data center operators in its own right; Amazon rents space in other people’s. If IBM ever runs into growing pains, as AWS has, it is going to be on a completely different plane. I’d expect them to screw up the billing before they screwed up operations, for example.
Anyway, check out the screenshots. When I get my invoice at the end of the month, I’ll tell you all how much this cost me. If it was over $10 I’ll be shocked.
UPDATE: Due to a promotion campaign that I was apparently grandfathered into by a few days or a week, which ends June 1, my test-drive for the IBM Cloud was free. A quick login and peek around shows the current number of users at 864, more than 100 up from a month ago, and 2,962 running instances, demonstrating an uptick in active customers but also the fungibility of demand.
They’ve updated the design to scrub away more of the IBM Smart Business Development and Test Cloud branding and I was finally able to launch my saved SUSE image. Progress!
Software configuration management is one of those topics that gives me an instant ice cream headache, and I’m not even doing it!
Pity the poor admin whose job it is to keep track of all the versions of operating systems, applications and firmware and the interdependencies between them all, so as not to hit a conflict when installing or deinstalling something that brings everything grinding to a halt. In a cloud environment, where everything is virtualized, it gets even harder to manage as resources are constantly being moved around.
With the latest version of rPath’s software, dubbed X6, development, QA and release engineers can construct, deploy, configure and repair software stacks via an easy-to-use visual UI, helping to keep control of software configuration.
rPath X6 allows users to visually create a version-controlled blueprint for generating system images, managing changes up and down the stack. And it works across physical, virtual and cloud-based resources.
Here are some shots of the UI. Click on them to get a clearer view.
Microsoft Management Summit 2011 has a theme this year. Naturally it is cloud computing. The premise is that advanced systems management tools for Hyper-V and Windows Server will lead all down the primrose path into private clouds and your data centers, once sprinkled with enough powdered unicorn horn, will turn into Amazon Web Services on steroids or something.
Brad Anderson, VP of management and security at Microsoft, will have a lovely song and dance routine at the keynote around this. It’ll be a work of art, but the message train went straight off the rails when they brought up big-box department store Target (that’s Tar-JHAY to you) as the big win for System Center, which is Microsoft’s answer to VMware vCloud Director and every other virtualization/automation/cloud platform out there. They couldn’t have picked a worse example.
Target has implemented System Center and Hyper-V in its retail stores and collapsed 8 real servers down to two per location, a whopping 75% reduction in footprint. That’s the story. Bravo. Saves on power, maintenance, refresh cycle, blah blah. But this is not a cloud story, not even a little bit. This is the anti-cloud story. Microsoft couldn’t have picked a better example of what the difference between leveraging virtualization and cloud computing actually is than this. All Target did was do everything they’re already doing, but better. They didn’t revamp their infrastructure, they just optimized what they already had.
What is cloud about that? I’m a skeptic, I’ll bash anyone on either side of the aisle in the private cloud/public cloud debates, but this is egregiously misleading.
Here is how the Target story would have been remotely close to cloud computing in descending order of relevance:
1) Done away with all the on-prem servers in favor of a WAN optimizer or three and served up infrastructure and/or services remotely to every store from a central Target-run data center.
2) Found a way to utilize all that spare capacity (did they really just chuck three out of every four servers around the nation? Mental note: check Target dumpsters for salvage) to serve back office and online services needs.
3) Kept an application front end server at each location that served and transmitted application data to and from cloud services (Microsoft Azure? Hello?)
4) Bought an Oracle Exalogic for each of Target’s 1,755 locations to make sure they were getting “enterprise grade cloud” from Uncle Larry. Ok that one was a joke.
But that would be HARD, you see. Any of those options, which would qualify as legit exercises in cloud computing, would require a hell of a lot more work than simply trimming infrastructure. They redecorate the bathrooms every once in a while too, and they aren’t touting “next-gen on-demand potty usability”, are they?
Really revamping how Target does IT operations to actually gain some benefit from the promise of cloud computing would require years of planning and re-architecting and would turn the entire IT organization on its head.
What this use case demonstrates isn’t even a step on the path to cloud computing. It’s a step sideways into the bushes or something. This is what you do WHILE you try and come up with a plan to utilize cloud techniques and services. When and if Target moves to take steps to actually do cloud computing at its stores, everything Microsoft touted today is not even going to be irrelevant- it’s going to be invisible. Systems management won’t even be part of the conversation. Cloud is about services, not systems.
To top it off, the grand irony in this is that Target was a big early customer win for AWS too; it famously defended its retail site from getting overrun on “Cyber Monday” and proved out the concept of elastic scale very nicely. There’s a clear juxtaposition here: public cloud is far enough along that Target uses it for actually doing business. Microsoft’s version of private cloud…isn’t.
UPDATE: This blog has been edited to reflect that Anderson’s keynote has not happened as of this writing. When it does go off tomorrow, we will be very pleased to accept criticism of our out-in-front opinion on it.
UPDATE: This blog post has come under some criticism for misrepresenting how Microsoft is presenting its Target case study. It has been pointed out that the language used in Microsoft’s case study doesn’t include the word cloud, and that during the video presentation in Anderson’s keynote, cloud was not specifically part of the Target segment; i.e., no one has actually said “Target is doing cloud computing here,” and it is therefore unfair to make fun of them for such an out-of-tune sales pitch.
This is unmitigated BS. Despite a technically accurate description of what Target achieved as a beta tester of Hyper-V and SCOM, it was thrown up in everybody’s face unabashed as a big part of Microsoft’s campaign to position itself as part of the cloud computing market.
Judge for yourself:
“Particularly as organizations are contemplating cloud computing, they find comfort in knowing the Microsoft platform can virtualize and manage all kinds of applications — Microsoft’s, a third party’s or home-grown — on a massive scale.”– said Brad Anderson, corporate vice president, Management and Security Division at Microsoft.
Target is front and center on the Microsoft Cloud Case Studies page (screenshot if the page has changed).
Target is top of the list for a Microsoft press kit on cloud computing (screenshot if the page has changed).
The description of Brad Anderson’s Day One MMS 2011 keynote address, which featured the Target case study:
“This keynote will cover Microsoft’s cloud computing offerings. Brad will share how the cloud empowers IT Professionals by showing how to build and manage a private cloud…”
“…today we’re going to focus on the cloud and cloud computing, and really we’re going to focus about how we’re going to empower all of you to take advantage of the cloud. “
“I think what you’re going to see throughout this morning is what we’ve done is make the cloud very, very approachable for you. ”
“…we’re going to share with you over the next hour is the learnings that we’ve had …to deliver what I believe is the most simple, the most complete, and the most comprehensive cloud solution for all of you to use. ”
“So, today, you know, we have Hyper-V cloud out in the world. It’s doing phenomenally well for us. It’s being adopted at rates that we’re just ecstatic with, but it’s a combination of Hyper-V and System Center. “
“So, let’s take a look at what Target’s doing.”
Target isn’t doing cloud computing here. If you’re trying to sell hamburgers, don’t show up holding a hot dog. We all have a rough idea of what cloud computing is by now; deliberately screwing with that idea for the sake of a marketing campaign is just dumb.
One more from Anderson’s keynote:
“I often hear quotes like what you see up here: “What is the cloud? You know, is it just a set of fancy catchphrases that’s kind of lacking technical backing?”
SAN FRANCISCO –– Hewlett-Packard CEO Leo Apotheker outlined his vision for the company today, and the message was loud and clear: it’s all about the cloud. But as always at these big events, the CEO was long on vision and short on detail.
Central to his plans is a Platform as a Service (PaaS) offering that will compete with Microsoft Azure, VMware’s Springsource cloud initiatives and Salesforce.com with Force.com and Heroku, among many other PaaS providers.
“If you want to be in the cloud business you have to cover all of the areas, Infrastructure as a Service, Platform as a Service and I can’t emphasis enough on Platform as a Service,” Apotheker told press and analysts gathered here today. However, he glossed over all the software work that has to be done to build a Platform as a Service.
“The software for our cloud platform will be based on a certain number of technologies, but let’s not get into that right now,” Apotheker said. He added that HP will be launching pieces of this platform in 2011 and 2012 and said that it will come from HP Labs, as well as acquisitions and partners.
He stressed that HP wants a lot more developers working with it to build a higher value software business. But he quickly added that the company’s existing partners, namely Microsoft, will continue to be important partners.
Last July HP announced a partnership with Microsoft in which it would bundle Microsoft’s Azure cloud software as an appliance. However it doesn’t sound like Azure will be the basis of HP’s Platform as a Service offering in the future.
There were lots of questions on how HP plans to catch Amazon Web Services, now a juggernaut in cloud infrastructure services. Some analysts predict that if AWS continues at its current growth rate it will be a $10 billion business by 2016.
“How we will catch up is pretty damn simple,” Apotheker quipped. “We have 25,000 enterprise sales people … and why will our customers buy from us, they want SLAs, they want security, they want a capability that is scalable worldwide, there aren’t many companies that can provide that,” he said.
Yahoo and others have backed away from building out more datacenters due to the capital intensity of the business. Apotheker claimed HP has already made the investment required to offer global cloud services. “From a big investment point of view, quite a lot of it is already there,” he said.
He announced that the company will launch an Open Cloud Marketplace, or app store, that will run on the HP cloud and support software from many different companies.
“In this open environment there will be HP software, but there will also be a lot of non-HP software … at the end of the day we can’t create all of this innovation by ourselves,” he said.
Analytics and security capabilities will be central to HP’s plans overall.
“Security is a huge opportunity here, it is probably the most important application anyone can create in this connected world anyway, so you will see us doing much more in the security side,” he said.
Apotheker took a shot at Symantec, claiming the “point solution providers are having a challenge, not because of us, but because of what they provide.” He said it is easy for smart people to work around “this or that” point product. “Whatever barrier you put at a given point, people will just find another way in.”
He said it is more important to find a solution to secure the entire stack, although he offered few details on how HP might tackle this.
Apotheker resigned as CEO of SAP amid falling sales and a price increase that angered customers. He was in retirement when HP called him to take over the reins from Mark Hurd.
Has hosting and cloud leader Rackspace found a way into the private cloud fray?
A new service offering by Rackspace Cloud Builders will come to you, wherever you may be, and install a bunch of hardware and software that qualifies as cloud; a definite new twist from Rackspace’s usual hosting, co-lo and cloud sales model.
To boot, Rackspace will offer pre-certified arrangements of hardware to go with OpenStack, although it is shy about using that term. At Cloud Connect today, we are told, Dell, Opscode and Rackspace demonstrated ground-up cloud building, which presumably involves some rackable Dell gear and some bootable media.
I must admit to some curiosity: Did every demo start from scratch? Is it cheating if you don’t zero the drives for every “cloud build”? I’d be booting servers off images on my laptop, so presumably it’s only cheating if they changed the BIOS settings.
Rackspace’s new Cloud Builders division consists basically of Anso Labs, acquired last year by Rackspace specifically for its on-prem capabilities. Rackspace claims it can pick and choose from more than 50 companies in the OpenStack project with which to build a private cloud that will be “cost competitive with Amazon,” if customers will only let Rackspace build them a cloud environment.
First of all, that’s ridiculous. Nobody operating a private cloud can compete on cost with Amazon Web Services (AWS). At most, Rackspace might get operating costs down to the point where one could pitch an internal private cloud deployment vs. AWS when security, compliance and governance are taken into account. And companies are not even going to bother to try that until their current infrastructure is fully depreciated.
What this really is, is a way for everybody involved in OpenStack to put up a viable alternative to cloud-in-a-box offerings from premium IT vendors. It’s more of a poor man’s VCE, and it’s silly to pretend otherwise.
The writing’s on the wall for IT operations that are not moving off premises for new infrastructure deployments. It’s going to be dense, converged and virtualization-ready; IBM, HP, VCE and now maybe Oracle have a lock on that business. New buys for big hardware will skip right past “converged” to “cloud,” since it’s functionally the same thing with a new set of management tools at the hardware level.
Those big IT vendors sell support, support, installation, and more support. And it’s not cheap — in fact, it’s downright larcenous. In some cases, an Oracle Exalogic with Sun tools, Linux and Oracle database will run $4 million, plus a support contract that might be 30% of your licensing fees (according to a floor rep last year). Plus, you can’t ever, ever switch after you buy. A fully loaded Vblock is plenty comfortable in that price scale, too.
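To put rough numbers on that, here’s a back-of-envelope sketch using the ballpark figures above; the license split and the three-year window are my illustrative assumptions, not real Oracle pricing:

```python
# Illustrative back-of-envelope using the quoted ballpark figures;
# the license portion and term below are assumptions, not a real quote.
hardware_and_licenses = 4_000_000   # quoted Exalogic figure
license_portion = 1_500_000         # assumed share that is software licenses
support_rate = 0.30                 # support as a fraction of license fees
years = 3                           # assumed ownership window

total = hardware_and_licenses + years * support_rate * license_portion
print(f"${total:,.0f} over {years} years")
```

Even under these charitable assumptions the support line alone adds well over a million dollars before you refresh anything.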
I don’t know about you, but I can fill a rack or two with blades, storage and open source software for a bit less than that kind of cash. I’m betting Rackspace sees the same opportunity (they’re the commodity operations experts, after all).
So let’s not beat around the bush; the service announcement and hardware demos are really about taking a crack at staying alive in big enterprise IT shops, but that’s a harder sell than telling customers they might be able to do what AWS does if they just know the magic words.
Verizon is taking another run at selling SAP CRM as a service, this time with SAP’s Rapid Deployment Solution (RDS) on real cloud infrastructure. A year ago the two companies were selling the full SAP CRM package on traditional hosting. Imagine two old guys at a rave and you’ve got the picture. No one came to that party.
The new offering, available in the US only, is designed to be up and running in eight weeks, a much faster turnaround than traditional SAP installations, which can take months. It’s priced on a per-user, per-month subscription, which Verizon said should work out to about $100 per seat. There’s also an implementation fee of $80,000 to $220,000, depending on which modules the customer wants.
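For scale, a hedged worked example on those numbers (the 200-seat count and low-end fee are my assumptions, not Verizon’s):

```python
# Rough first-year cost under the quoted pricing; the seat count and
# the low-end implementation fee are illustrative assumptions.
seats = 200
per_seat_monthly = 100          # quoted ~$100 per user per month
implementation_fee = 80_000     # quoted low end of the fee range

first_year = seats * per_seat_monthly * 12 + implementation_fee
print(f"${first_year:,}")
```

In this illustration the implementation fee dominates the first year for a mid-sized shop, which is where the comparison with a traditional SAP install gets interesting.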
SAP has other products in its RDS bag including supply chain, product development and HR software that Verizon expects it will also sell as cloud-based services at some point.
It’ll be interesting to watch and see if Verizon gets any traction selling this SaaS offering. It has enormous network reach and is developing cloud infrastructure anyway, so there’s very little financial risk in it selling Software as a Service.
For SAP, building out cloud infrastructure to sell its applications as a service would be super expensive; better to partner with the major network operators and let them sell it for you.
Salesforce.com is the main competition here, with several years’ head start. Can the old guys catch up?
Data center and Internet Business Exchange (IBX) giant Equinix has taken a bold step up the stack in a way that may show what enterprises are expecting as they look at private cloud…and also discomfit some of Equinix’s best customers. Equinix is integrating the compliance management systems of Carpathia Hosting (a long-time Equinix partner) into its own operations to sell directly to its co-lo customers. The company calls the relationship “unique.”
Equinix has been fairly strict in the past about remaining vendor-neutral as a data center floor space provider. It doesn’t push customers onto particular hardware or management systems (beyond racks and power); it just pitches its expertise in interconnection and reliability. But with Carpathia’s compliance tools and advanced virtualized networking capabilities, it’s definitely moving beyond floor space. Equinix says the initial target is highly regulated, ‘complex compliancy’ verticals like federal IT and health care providers, which is what Carpathia specializes in. The company added that customers are asking for this.
Greg Adgate, GM of the Global Enterprise Segment for Equinix, said that users with strict auditing needs wanted more integration at the first layer of infrastructure. They come to Equinix looking to co-locate and own their IT, but as federal guidelines on compliance get stricter, they have a need for advanced GRC tools like the ones Carpathia provides (through a variety of means including other vendors, like Vyatta).
Adgate said that Equinix and Carpathia had worked hard to integrate Carpathia’s compliance expertise, which is largely around documenting processes within the machine environment, into its data center operations. Equinix had deliberately avoided that side until now.
Cloud computing comes in because Carpathia has some interesting offerings designed around delivering compliant infrastructure as a cloud in its hosting business, and it sees a massive, policy-based push to cloud services and cloud platforms in the federal government and health care. U.S. CIO Vivek Kundra wants 20% of the federal IT infrastructure on or in a cloud pretty much now. Legislative mandates (and funds!) are driving adoption of electronic health records (EHRs) and health information exchanges (HIEs).
Carpathia’s not giving up its hosting business. CTO Jon Greaves said the firm has spent the last year building out sophisticated ways to use Equinix’s IBX data centers, where network traffic and connections are traded and exchanged around the world, to make services that operated like clouds but looked like compliant operations. Now Equinix and Carpathia can either sell that technology directly to co-lo users or pitch Carpathia’s managed services as an alternative. Greaves said users can even buy Carpathia’s networking tools and connect to other public cloud providers in secure ways.
Courting cloud computing providers and enterprise investment is a natural evolution of Equinix’s strategy. At some point, said Tier1 analyst Doug Toombs, it had to start delivering more than power and floor space. He said the company’s position as an independent provider still held.
Toombs said the real strength of Equinix is its global network of internet exchanges, which service every major telecom and many others besides, so consumers have a choice on communications providers. That remains staunchly neutral.
Verizon, for instance, pitched its cloud services as enterprise-ready because you could completely avoid the public internet; Verizon could simply put you on a backhaul link if you didn’t want to use the internet for infrastructure. However, if you didn’t like Verizon, too bad. Equinix can now offer a similar enticement, with the advantage of not getting trapped with one telecom.
Competitors Rackspace and Savvis have seen the writing on the wall for a while now, and offer all kinds of tools and management products beyond floor space and uptime, and it shows. Both firms reported double-digit growth in customers and revenue last year; Equinix, with its co-lo pure play, got killed and had to lower its EBITDA guidance.
But how will this sit with Equinix’s service provider customers, especially the cloud providers? Integrating complex compliance tools at the bottom layer of infrastructure means Equinix can offer the enterprise something Amazon Web Services and OpSource (both giant Equinix customers) simply cannot match. Those clouds can maintain certifications; with Carpathia in its operations, Equinix users can meet their own audits. Equinix says it’s not infringing on its customers’ business.
“If you look at the complex compliance market, there really isn’t anyone else doing it so we don’t think we’re taking down any of our customers today. The complexity of it drives a price point that resonates with the price point we bring,” said Greg Adgate. That’s a high price point; Equinix is not cheap.
However, everybody knows that enterprise IT is coming to the cloud and bringing big-money stakes. After all, even if they own their infrastructure, lots of enterprises co-lo and still consider that “in-house infrastructure,” and if they want cloud, they’ll look twice at a floor space operator that can do secure, private cloud-y stuff. “There’s some obvious segments like fed and health care but the market opportunity is much bigger than that,“ said Adgate.
Amazon did not respond to requests for comment about its data center provider. Salesforce.com doesn’t do sensitive infrastructure, so they’re rapidly sinking into the SaaS morass and don’t matter. OpSource also hosts in Savvis, so they’re probably fine.
The Enomaly-powered cloud brokerage service SpotCloud has been chugging along for some time, rolling on-demand virtual machine providers into its fold. It’s set to go live next week, a clearinghouse for cloud computing resources, a bit like a rudimentary RightScale, but unlike RightScale it handles the cash. Sellers pay a brokerage fee and set their own prices based on whatever the hell they feel like.
That’s neat: everyone’s been saying cloud computing is going to make infrastructure a commodity, so along comes the commodities broker to match the bulk producer with the consumer. It’s been tried before; Deutsche Telekom spun out Zimory as a cloud brokerage a few years ago. It wasn’t successful, and it’s withered on the vine into a weird sort of cloud management/infrastructure delivery thingie.
SpotCloud might have a fighting chance, though, mostly because it’s about three years later. Virtualization is much more widespread at the service provider/data center level, the market for public cloud is robust, and so on.
It also has actual capacity to offer. SpotCloud says it’s got a solid 10,000 physical servers committed to the platform from scores of operators all over the world. Right there, that puts it at a comparable scale with Rackspace and Amazon’s cloud environments.
It works the same way those do- login, pick a server, launch it, pay for it. It’s incredibly light on features; ‘bare-bones cloud’ comes to mind. But you want a server somewhere? Get a server. Essentially without lifting a finger, SpotCloud just built out one of the largest contiguous cloud computing environments in the world. It remains to be seen whether or not sellers can offer prices that actually tempt buyers.
That’s where the surprising news comes in. One, SpotCloud now supports VMware; Enomaly CTO and SpotCloud creator Reuven Cohen said they were basically forced into it. Originally SpotCloud only worked with Enomaly-powered clouds, but that had to change, since the overwhelming majority of sellers with capacity to offer were on VMware. “Honestly I was a little surprised…VMware is basically everywhere,” he said.
Cohen said he pitched his platform as free and easy, just park it on servers that are only getting used for SpotCloud anyway and away you go, but VMware was king. The other surprise was that the sellers on SpotCloud aren't all up-to-date, cloud-ready hosters with unused capacity they want to monetize. Cloud computing firms aren't the sellers on the cloud computing brokerage.
“It was really, uh, little guys, other types, guys you would not associate with the cloud,” Cohen said. In other words, the excess virtualized capacity was floating around the hosting world. It’s floating around in data centers and colos of all stripes and sizes, from Joe’s Basement Hosting Shoppe to operators stuck with dead weight investments.
Cohen said the pitch had gone from "Join the cloud, get in on the action" to "Clean out your closets, the server-man is here." One large Australian DC had jumped the gun on a customer project and ended up with $3 million worth of infrastructure that was sitting dark. The client had bailed; they had 2 TB of RAM and servers just sitting there. They signed it up for SpotCloud and at least it's got a chance; any money is better than none on dead weight, and they can just pull out if they find a better use for it. Another example was an LA-based DC that served the entertainment industry.
It's interesting to me that what is, for all intents and purposes, one of the largest true IaaS clouds a) popped up overnight and b) is built out of scrap iron and used tires. That's definitely a validation of the cloud computing model and how far we've come on the platform side. The real test, though, is whether anyone is going to buy enough of it to matter, since a commodity is by definition easy to get hold of.
“It’s been harder finding the buyers,” admitted Cohen. Color me unsurprised.
Back in August we reported that pharmaceutical giant Eli Lilly was looking for cloud providers in addition to Amazon, in search of better support.
Lilly has picked Indiana-based hosting and cloud provider BlueLock, according to a wink and a nudge from Twitter sources close to BlueLock. The hoster is in bed with VMware and the vCloud initiative, providing all the bells and whistles that go along with the vCloud Datacenter service.
Presumably Eli Lilly was willing to pay more for better support than it had been getting at AWS?
We reached out to all parties for comment and got these responses:
“Thanks for contacting us about this topic. Unfortunately, due to conflicting priorities we aren’t able to participate,” said Carole Copeland, corporate communications manager for Eli Lilly, via email.
And this response from BlueLock:
“We will have to kindly decline to comment. It is already public knowledge that Eli Lilly has been in discussion with a few cloud players (BlueLock included), however the discussion and any outcomes are between the companies involved due to competitive and partnership reasons.”
It's unfortunate everyone is keeping tight-lipped on this cloud implementation, as it would be really helpful to understand and learn from what happened here. It's clear that terms of service and SLAs played an important role in Eli Lilly's decision.
A new report from the 451 Group paints a picture of cloud computing poised to spread but bound by limits of infrastructure. It’s still a tiny fraction of the IT market but advanced Asian governments are promoting cloud, with government-run data center and technology parks courting cloud development projects in Malaysia, Singapore, Hong Kong and elsewhere.
Communications and local government initiatives are two major forces driving the deployment of clouds in developing Asia, said Agatha Poon, research manager on global cloud computing for 451. Many parts of Asia are very technologically advanced, often beyond what anywhere in the Americas can boast — South Korea is the most broadband-connected country in the world — but it's localized in small pockets around strategic connection hubs.
There are massive dark (or dim) gaps where communication infrastructure is lacking, underpowered or evolving in weird new directions with mobile communications (like most of India). But, for example, last year Amazon Web Services (AWS) and IBM opened clouds in Singapore. AWS is (probably) in the giant Equinix facility and IBM in Changi Business Park, run by the government's Infocomm Development Authority of Singapore. AWS has an Availability Zone for its cloud there; IBM is doing something mysterious and high concept, naturally. Chinese telecoms are also experimenting with cloud platforms.
Poon says the market is gunning for SMBs who want new services based around cloud infrastructure; the large enterprises there are going to be more conservative, precisely the same pattern cloud is following in the US and Europe. The growth rate looks exponential — the 451 Group predicts the $1 million of cloud spending in 2009 will grow to $17 million by the end of 2011.
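To put "exponential" in perspective, a quick back-of-the-envelope calculation on the 451 Group's own figures: going from $1 million to $17 million over a two-year span implies a compound annual growth rate of roughly 312%.

```python
# Implied compound annual growth rate (CAGR) for the 451 Group projection:
# $1M in 2009 growing to $17M by the end of 2011, i.e. a two-year span.
start, end, years = 1.0, 17.0, 2
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 312% per year
```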
That’s small potatoes, but a significant vote of confidence. Poon said that businesses are going to get more options, but local ICT providers should gird their loins: if they don’t catch up or implement something cloud-like, they’ll get absolutely crushed by the big players in the region.
The big telcos in Asia are moving fast in the cloud space, Poon said, through partnerships and strategic investments. She pointed to South Korean telco KT, which is making an aggressive shift toward offering cloud infrastructure (compute and storage) through a variety of platforms, like Cloud.com and enStratus, and said the tech giants that supply the world with hardware, like Fujitsu and Samsung, were also gearing up to service the cloud market.
It's another interesting demonstration of the way the market for cloud works; it's organic, demand-based, almost biological. It's creeping out along the bright communication hubs, where the activity and the cash and electricity-based nutrition is, and avoiding the leaner pastures.
That’s because cloud computing, properly done, minimizes the fears of risky investment. A provider can go where the action is and drop in a little bit. If it works, they can add some more. There’s no need for anybody, provider or user, to stand up a massive data center operation and just hope for the best. They can just take it, or leave it. As long as there’s a big pipe nearby, of course.