The Troposphere


September 4, 2009  3:27 PM

VMware vCloud Express: Right move, wrong focus

Jo Maitland

VMware is right to introduce a cloud computing service that competes with Amazon EC2. But it is wrong to focus on the ability to buy these services with a credit card. We know of at least one company where punching in a credit card number to buy servers is immediate grounds for dismissal.

vCloud Express, unveiled at VMworld in San Francisco this week, lets companies running VMware software hook up to a hosting provider running a public cloud also based on VMware, for additional compute resources on demand.

vCloud Express competes with Amazon.com's EC2, now famous for the speed at which users can buy and turn on servers, its low cost of entry and the ability to use only what you need, when you need it. But chasing Amazon.com's value proposition of "fast and cheap", which is how VMware CEO Paul Maritz referred to vCloud Express in his keynote, is the wrong focus for enterprise IT.

Yes, IT managers want more agility and lower costs, but most of them won’t touch cloud services with a 10-foot pole, from VMware or anyone else, until they are sure of the security and reliability of these services. That’s where VMware should be putting its effort and focus, not on a simplistic web interface for entering credit card numbers.

The vCloud Express announcement left the 12,000-strong audience at the keynote cold. Finding anyone in corporate IT at the show who had tried or was using Amazon.com EC2 was tough. It's still early days for this stuff, but most people cited the security of their data and workloads in the cloud as a concern. One company we found that is using EC2, Pathwork Diagnostics, said the advantages were less about cost and more about increasing performance. This user said one of the downsides of EC2 was the lack of a job scheduler that works well in a dynamic IP environment.

VMware would be better served by listening to these customers and their problems with managing infrastructure in the cloud than by chasing Amazon's fast, cheap model, which is surely not where the big bucks in cloud computing are going to be anyway.

September 4, 2009  2:08 PM

How to Build a Private Cloud

John Willis

Ubuntu Enterprise Cloud (UEC) is a private cloud that embeds the Eucalyptus cloud on Ubuntu Server. The current release of UEC runs Eucalyptus 1.5 on Ubuntu 9.04 Server. There is a later version of Eucalyptus (1.5.2); however, I didn't try it for this blog post. In this example I installed all of the UEC cloud components on a single system. Typically you would not want to do this; however, it works well as a demo system.

Quick UEC Overview

UEC is made up of three components: the Cloud Controller (eucalyptus-cloud), the Cluster Controller (eucalyptus-cc), and one or more Node Controllers (eucalyptus-nc). The Cloud Controller is the web-services interface and the Web UI server; it also provides resource scheduling and S3- and EBS-compatible storage interfaces. A cluster in UEC is synonymous with an availability zone in AWS. In this release of UEC the Cluster Controller has to run on the same machine as the Cloud Controller. The Cluster Controller provides network control for the defined cluster and manages resources within the cluster (i.e., resources on the nodes). The Cloud Controller and the Cluster Controller together are sometimes referred to as the Front End.

Typically the Node Controller runs on a separate box from the Front End box. In a production environment there will be multiple Node Controllers making up a larger cluster (i.e., your cloud). Each Node Controller runs a KVM hypervisor, and all the Node Controllers in the cluster make up the cloud environment. Running multiple clusters is not really supported in the current release; in future releases of UEC you will be able to run multiple clusters in one environment, with each cluster acting like an availability zone. As I noted earlier, in this blog example I am putting everything on the same box (my laptop). I will point out areas where the configuration would be different in a normal installation of UEC.
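Because the Cloud Controller exposes an EC2-compatible web-services interface, standard EC2 tooling can talk to a UEC cloud once it is up. Below is a minimal sketch using the Python boto library; the Front End address is a placeholder for your own, and the port and path reflect the Eucalyptus defaults as I understand them, so treat the details as assumptions rather than gospel.

    # Connect to the UEC Cloud Controller's EC2-compatible endpoint.
    # Credentials come from the Eucalyptus Web UI; 192.168.1.1 is a
    # placeholder for your Front End's address.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="192.168.1.1")
    conn = boto.connect_ec2(
        aws_access_key_id="your-eucalyptus-access-key",
        aws_secret_access_key="your-eucalyptus-secret-key",
        is_secure=False,
        region=region,
        port=8773,
        path="/services/Eucalyptus",
    )

    # Each UEC cluster shows up as an availability zone, per the
    # overview above.
    print(conn.get_all_zones())

On the single-box demo setup described here, the Front End address is simply the laptop's own IP.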

Building a Private Cloud in One Hour


August 26, 2009  7:06 PM

Amazon VPC: moving the goal posts or playing catch-up?

Carl Brooks

Amazon has announced, in its inimitable bloggy style, a new service to allow users to create virtual private clouds within its data centers.

The new Amazon VPC offering is "virtual" because the networking and the machine images are opaque to the physical infrastructure. It's "private" because, unlike standard EC2 instances, the machines don't have public IP addresses. And it's "cloud," naturally, because you pay $0.05/hour for the service and you can quit whenever you want.

The cloud computing blogosphere was abuzz with the announcement (e.g., here, here, and here). But is Amazon VPC, as these blogs say, really revolutionary, a redefinition of private cloud, and a validation of thinking about public, private and hybrid clouds?

None of the above, I believe. While it's fun to poke holes in an announcement such as this, especially one from acknowledged cloud market leader Amazon, there has to be a street-level view that looks at the reality of what's in the offering and why.

Frankly, Amazon VPC is a terrible virtual private cloud. Network control and management are rudimentary, the VPN is stone-age, users can’t expose clients to the internet and can’t assign them IP addresses. Clearly it is not ready for prime-time, and clearly it is not aimed at Amazon’s existing user base, because they’d all have to uproot their current infrastructures to use it. It is for experimenters who start with requirements that preclude public cloud.

Granted, it’s early days, and changes are in the works, but the kind of technology in Amazon VPC was hashed out with hosting, complex hosting and managed hosting years ago. Compared to what is standard for secure VPN infrastructures in these areas, the Amazon VPC and VPN are decidedly small beer.

Next, arguments that this announcement validates definitions for different types of cloud computing, or somehow affects the current market as it applies to private cloud, are risible. Suppliers don't define a marketplace; they react to it.

Cloud computing is essentially a consumption model: as much as, for as long as, and whenever you like. Cloud underpinnings like virtualization, security and costing models are just a means to an end. It was only natural that when large enterprises saw this new model of self-service and low-overhead management, they would want to try it out in their own data centers.

It’s also natural that those interested in private clouds wouldn’t want to use public clouds — public cloud is antithetical to controlling your IT environment. Hosting providers quickly realized that enterprises wanted fenced off reserves to noodle around with cloud stuff, not open pasture.

Indeed, VMware has been leaping to fill that need since last year, with vSphere and vCloud and hosting partnerships with Rackspace and Terremark, among others.

So Amazon isn't defining the conversation by any means; they're playing catch-up. As it stands, the Amazon public cloud isn't designed to be private, quite the opposite. Amazon VPC is a radical change of pace for Amazon, not for the cloud market, which is rapidly filling up with providers who understand the enterprise cloud market and want to service it. That has never been Amazon's goal.

In the near future (*cough* VMworld *cough*), we'll see products and services that make Amazon VPC look like chopped liver, and it will be abundantly clear that Amazon is just starting to react to a segment of the cloud market it never set out to capture, one that is already well under way and taking off faster than many thought possible.


August 14, 2009  9:52 PM

Party’s over, kids: Microsoft has private cloud all sewn up. In 2010. Maybe

Carl Brooks

Microsoft says it will have the definitive virtualized public/private/platform cloud solution ready to go in a “shrink wrap” package by 2010, and that, by the way, hosters that aren’t fully virtualized will go the way of the dodo. Of course, this may come as a surprise to all the hosters already going great guns with any variety of managed, virtualized and dedicated offerings, including cloud computing models.

Zane Adam, senior director of virtualization at Microsoft, announced the Microsoft model for hosting companies and data centers at Tuesday's Hosting Con 2009 keynote. He said that lowering "human touch" and "fabric management" were the new face of hosting, and that "those that pull the plug [on virtualization and automation] too late will become dinosaurs."

Adam pitched Microsoft's "System Center Solutions" and Dynamic Data Center Toolkit as the provisioning and management glue for Microsoft's new server products. Get on Server 2008 R2 with Hyper-V, he said, download the software kit and away you go: virtualized, managed, cloud-ready. A wonder no one's thought of that before.

Adam was perhaps too farseeing for those at the keynote. Some attendees felt the conversation might be getting a little blurry, a little too fast. That’s not surprising given the audience — rock-ribbed rack-em-and-stack-em hosters — many of whom see an inextinguishable need for physical hosting, even as cloud computing grows.

Adam said the “vNext” version of the Toolkit will complete the vision with dynamic provisioning for virtual machines, application monitoring and “one-click” provisioning by Q1 of 2010.

Microsoft is justly famed for pie-in-the-sky product lines, but there may be some meat to this announcement. Server 2008 R2 will be released this October, and Azure is slated for general availability at the same time. System Center and the toolkit are already out, in crude fashion.

So, hosters, if you were tired of watching Amazon and Rackspace do it for free, or hadn't heard of VMware or Xen, or just started feeling a little antediluvian, all you have to do is wait. Microsoft will have this whole virtualization/cloud thing sewn up tight some time next year.


August 12, 2009  2:06 PM

Azure to blaze the way for hybrid cloud

Carl Brooks

Infosys believes Microsoft is staking out the cloud as the inevitable future of IT — and designing Azure to be a seamless bridge between hither and thither in order to make the transition in small steps for enterprise consumers.

According to Jitendra Pal Thethi, principal architect for Microsoft business intelligence at the Indian IT giant, Microsoft's aim with Azure is to hopscotch right over the Infrastructure-as-a-Service part of the cloud and sell what it already has (software) in the approved cloud fashion: on-demand, scalable and transparent at the hardware level. Why should the baker sell wheat, after all?

Thethi has been involved with Azure since development began, some three years ago. He said that Azure is designed to let developers carve off sections of their projects and put them in Microsoft's cloud without having to re-learn or revamp anything. For example, databases already developed in Microsoft SQL can go right into Azure's SQL Data Service, storage, processing and all, without a hitch.

“This concept [is that] not everything will be in the cloud; not everything will be on-premise—it will be a hybrid world,” he said. Thethi said businesses already using Microsoft for development “can pick off the low hanging fruit” without having to leave their comfy Microsoft environment or design an interface to a non-Microsoft cloud.

“Azure today gives you an on-premise experience….It’s something none of the other cloud providers provide,” he said. Thethi said that cloud computing will fundamentally change development and design, but it’s years away and Microsoft is well aware of that.

“The fact of the matter is that they want to get the ball rolling,” he said, and get developers comfortable with using online services in small ways before thinking bigger. “The entire architecture and development [model] is going to change,” he said, but Microsoft is betting businesses will want to move into the cloud in safe, familiar steps.

Microsoft plans to make Azure as compatible and useful as it can, reasoning that the less developers have to do, the easier it will be for them to make the switch. Some people already call Azure “on-demand Server 2008.”

Furthermore, it should be noted that Microsoft has no real advantages in delivering computing power itself; it neither makes computers nor helps people run them. Hosting companies and data centers do that, and they are already cutting a broad swath in the public cloud market.

So Redmond, by virtue of ubiquity, has the opportunity to carve out the Platform-as-a-Service territory very neatly. It already makes the software that (mostly) everyone is using, it has plenty of spare cash and plenty of big iron on the ranch for users, and it can scoop up subscribers just by being that little bit easier to use, a few cents cheaper than the competition, and willing to let enterprises come in at their own pace.

After all, Microsoft is nothing if not patient. With cloud computing, it has everything to gain here, very little to lose and an audience it doesn’t have to chase. All it has to do is make Azure run, and wait.


July 27, 2009  3:53 PM

Cloud computing surfaces in local politics

Carl Brooks

In a demonstration of cloud computing's increasing stature in the real world, freshman Washington state representative Reuven Carlyle last week called for scrapping a $300 million data center in favor of cloud computing.

“We are deeply troubled by the weakness of the technical and financial support behind this decision, and fear the state is potentially making a $300 million mistake,” Carlyle said in a letter to Governor Christine Gregoire published on his website. Co-written with Representative Hans Dunshee, the letter was first picked up by Pacific Northwest regional news site Crosscut.com.

In a nutshell, the letter calls for a halt to a bond sale to fund the project and a review of existing cloud services, like “Google, Microsoft, Amazon or others as many companies and governments are doing today.” Further, it argues that the trend in outsourcing data and services is a fait accompli and a better use of taxpayer dollars.

Unfortunately, Carlyle’s letter sometimes reads like it was written by a jingo-happy IT vendor. To wit: “How best to efficiently and effectively move away from hardware-centric, expensive, proprietary, silos of data trapped in old databases to open, transparent, flexible, accessible, customer-oriented applications available via the Internet?” he asks.

(I think we’ve all snoozed through that PowerPoint talk, no?)

This is understandable. Carlyle comes fresh from the communications industry, where silos are not filled with grain and budgets are fine-tuned with an axe, as opposed to government, where silos are more than likely filled with grain and budgets are fed like foie gras geese.

Dunshee appears to be a more traditional politician; interestingly, he lists many unions as backers, groups likely to want state construction dollars.

It’s unclear why Carlyle and Dunshee believe the new IT infrastructure would go to waste. What’s notable, however, is that cloud is now commonplace enough that a politician will throw it out there and hold traditional IT up as the poorer model. That’s a long step in discourse from “cutting edge.”


July 22, 2009  8:22 PM

Rackspace opens cloud APIs to the masses

Carl Brooks

Speaking at the open-source hippie lovefest OSCON, Rackspace today made good on last week's report that it would open-source its APIs.

Rackspace released the API specification under the Creative Commons license. Source for the software used by the APIs is under the MIT X11 free software license. Find it at http://github.com/rackspace and start your own cloud.

Speaking from OSCON, Rackspace’s Erik Carlin said the company would maintain the code in traditional style.

“The intention was to open it up — we’d love to get to the point where we have external committers,” said Carlin. Currently, Rackspace is the only committer (an entity that can make final changes to an open source project) for the code that's been released. Carlin said Rackspace wanted to steer a "canonical set of bindings" on top of the project but looked forward to seeing what developers would do with it.

“I hate to create our own interface and add to the [plethora of cloud APIs], but there was nothing we could embrace,” Carlin said. As it stands, the proliferation of both open and closed cloud interfaces has been an impediment to cloud computing adoption, he said.

Going forward, Carlin said he hoped to see standards emerge that would prune out the thicket of cloud technologies and specifications, and said Rackspace will jump all over an open standard when it emerges.

Asked why Rackspace built its interface around webby REST instead of XML-y SOAP, like Amazon's, Carlin said there was a trend toward web interfaces on the front end. Furthermore, there are more aspects to a cloud than just the user screen, he said. For example, issues like competing virtual machine formats and management specs still need to be hammered out.

“APIs are only half the battle,” he said.
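To make the REST point concrete, here is roughly what the first call against the Cloud Servers API looks like, as a minimal Python sketch; the auth endpoint and header names reflect the published 1.0 spec as I read it, and the credentials are placeholders.

    # Authenticate against the Cloud Servers API, then reuse the
    # returned token on every subsequent request.
    import urllib2

    req = urllib2.Request("https://auth.api.rackspacecloud.com/v1.0")
    req.add_header("X-Auth-User", "your-username")   # placeholder
    req.add_header("X-Auth-Key", "your-api-key")     # placeholder
    resp = urllib2.urlopen(req)

    token = resp.headers["X-Auth-Token"]             # send with later calls
    mgmt_url = resp.headers["X-Server-Management-Url"]
    print("authenticated; management URL: " + mgmt_url)

Everything after that handshake is plain HTTP verbs against the management URL, which is the webbiness Carlin is talking about.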


July 17, 2009  5:48 PM

Big Week In The Clouds

John Willis

Most weeks are pretty cloudy for me these days. However, this one was chock-full of exciting stuff. In case you missed any of it, here goes…

Rackspace Cloud API

Rackspace has three cloud offerings: Cloud Files, Cloud Sites, and Cloud Servers. Cloud Sites is their PaaS offering that used to be called Mosso. Cloud Files is, of course, their cloud storage offering. The big question about Rackspace's IaaS offering, Cloud Servers, has been its lack of an API; some people believe that you really can't be called an IaaS unless you have an API to manage the infrastructure. This week Rackspace answered this question.

Link…

Azure Pricing

This week Microsoft announced the long-awaited pricing for its new PaaS offering, Azure. Microsoft announced that its bare-bones Windows service running on Azure will be $0.12 per hour. The big debate this week has focused on comparing the Azure pricing with Amazon's EC2 Windows pricing at $0.125 per hour. The answer is, you really can't compare. First off, Azure is a PaaS that doesn't offer OS-level access, while Amazon is an IaaS that gives you Administrator (root) level access. Secondly, Azure applications can only run as .NET or Win32-based applications. Azure works similarly to Google's PaaS: you can install your application code into the PaaS, but you can't install an already-packaged application. For example, you can't install something like Drupal on Azure, at least not easily. One last point: Amazon EC2 Windows instances run as Windows Server 2003 only. In the end the primary deciding factor will most likely not be price, but the target application.
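For what it's worth, the headline gap is tiny over a month. A quick back-of-the-envelope sketch, assuming one instance running around the clock for 30 days:

    # Monthly cost of one always-on instance at the quoted rates
    azure_rate = 0.12   # USD/hour, Azure compute
    ec2_rate = 0.125    # USD/hour, EC2 Windows instance
    hours = 24 * 30     # roughly one month

    print("Azure: $%.2f/month" % (azure_rate * hours))  # $86.40
    print("EC2:   $%.2f/month" % (ec2_rate * hours))    # $90.00

A spread of $3.60 per instance-month is noise next to the cost of rewriting an application for a different platform, which is the real point.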

Link…

GSA To Build A Store Front To The Clouds

The General Services Administration is planning to launch an online storefront to enable agencies to purchase cloud computing services like Amazon Web Services. Federal CIO Vivek Kundra announced the plan on Wednesday.

Link…

BMC Offers A Deployment Solution For Amazon Web Services

BMC Software announced this week that it is leveraging Amazon Web Services to manage hybrid cloud environments by managing deployments to Amazon's EC2. BMC has had a solid story for behind-the-firewall management ever since its acquisitions of BladeLogic and Remedy. Combining service management solutions with strong provisioning in a cloud environment could make this move exciting.
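To give a flavor of the EC2 provisioning call that a deployment tool like this wraps, here is a minimal sketch using the Python boto library; the AMI ID is a made-up placeholder and the credentials are assumed to be in the environment.

    # Launch one EC2 instance: the kind of call a deployment tool
    # issues under the covers.
    import boto

    conn = boto.connect_ec2()  # reads AWS keys from environment variables
    reservation = conn.run_instances(
        image_id="ami-12345678",   # hypothetical Windows Server 2003 AMI
        min_count=1,
        max_count=1,
        instance_type="m1.small",
    )
    print("launched: " + reservation.instances[0].id)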

Link…


July 14, 2009  3:22 PM

Windows Azure pricing vs. Amazon AWS

Jo Maitland

Microsoft released its pricing for Azure today. It's tough to do an apples-to-apples comparison with Amazon AWS because they are different technical models, but the CPU service seems like it will be cheaper. Keep in mind that you have to explicitly program your application for Azure in order to use that CPU.

Here is the Windows Azure pricing and here is the Amazon AWS pricing.

Comments, thoughts?


July 13, 2009  9:03 PM

The Curious Case of CloudSwitch

John Willis

A few days ago I had a call with Ellen Rubin, one of the co-founders of a new cloud startup called CloudSwitch. CloudSwitch recently closed an $8M Series B round from Commonwealth Capital Ventures. The interesting thing is that they are still in stealth mode and have not yet released a product. They have created an enormous amount of buzz by attracting so much money while still in stealth mode. Is the cloud really this hot, or is there more to this story? I decided to tell their story in pure David Fincher style: I will tell the curious case of CloudSwitch backwards.

  • I am given the green light to talk about CloudSwitch, a new kind of cloud service that is described as a cloud broker service.
  • After almost a year of ongoing discussions with Ellen, I finally get why they call it a switch. They see themselves moving workloads back and forth between the enterprise and the cloud, as opposed to the concept of a cloudburst, which may imply a unidirectional flow.
  • CloudSwitch acquires new office space in Burlington, MA. They now have a good team of developers, good management, and good funding to focus on getting the product ready, and they are now spending time with early customers and partners.
  • In June 2009 they closed an $8M Series B led by Commonwealth Capital Ventures, with existing investors Matrix and Atlas Ventures also participating.
  • They spend a lot of time working with enterprise customers and have successfully completed the pilot phase of development. They are now gearing up for a beta later this year.
  • The arrival of the new CEO, John, prompted a number of venture firms who know him to express interest in doing a preemptive Series B. Although they were not planning to look for additional funding until 2010, they decided this was a great opportunity.
  • They build a core team and are fortunate to be able to bring in John McEleney as their CEO. John was formerly the CEO at SolidWorks and ComputerVision. He grew SolidWorks to over $350M in revenue and into a market leader in the CAD space, and he has a great track record of scaling companies.
  • Ellen pings me again in February 2009 to get me up to speed on what they are doing.  I am very excited about what they are doing.
  • They raised $7.4M in a Series A: the first part in July 2008, and a second part, which added Atlas Venture, in December 2008.
  • They focus on solving some of the main issues blocking enterprise adoption of cloud computing: security, control and integration with the enterprise data center. Their product will be delivered as a software appliance.
  • CloudSwitch is founded by Ellen Rubin and John Considine in the spring of 2008, and they incubate the company at Matrix Partners. They do a ton of research, asking people what they think about their idea.
  • I am contacted by Ellen Rubin, formerly head of marketing at Netezza, in May of 2008.  Ellen asks me what I think about a Cloud Broker appliance startup idea.  I am under no restriction to discuss this idea, other than my word.  I decide not to divulge anything until Ellen gives me the green light.

