The Troposphere


October 14, 2009  7:53 PM

Amazon would like to remind you where the hype started

Carl Brooks

Amazon would like to remind you to thank them for the heightened expectations.

So a Web app running on a telecom service goes belly up, and cloud is pronounced moribund yet again. That seems to be the latest output of the slightly overheated cloud marketing machine this week.

It may be that end users cannot tell Amazon Web Services apart from Gmail (which isn’t their job, really), and the Sidekick/Danger/Microsoft data loss may be one of the most spectacular IT bungles ever made, but none of this is going to register in the real cloud computing markets.

No one stores their email contacts on AWS. Salesforce.com isn’t ever going to let this happen (call me if they do, just sayin’), and Azure, well, isn’t exactly a thing yet and had zero contact with the destroyed data. I would venture that not a single consumer of any of these services even blinked when they heard about the Sidekick apocalypse.

Seriously, who unplugs a light fixture, let alone a SAN running a live database, in a data center without checking that they made a backup? And when did rolling live backups go out of style in the enterprise world? Hell, I’ve put in rolling live backups for companies with 15 employees.

Anyway, Peter DeSantis, VP of EC2, talked to me at length about last week’s cloud-killer du jour, the DDoS attack on Bitbucket.org. Here are a few of his other thoughts on the DDoS, the hype and the possibly incontrovertible fact that without Amazon to raise the bar, we wouldn’t be talking about it at all.

For instance, DeSantis said it would be trivial to wash out standard DDoS attacks by using clustered server instances in different availability zones.

“One of the best defenses against any sort of unanticipated spike is simply having available bandwidth. We have a tremendous amount of inbound transit to each of our regions. We have multiple regions which are geographically distributed and connected to the internet in different ways. As a result of that it doesn’t really take too many instances (in terms of hits) to have a tremendous amount of availability – 2, 3, 4 instances can really start getting you up to where you can handle 2, 3, 4, 5 Gigabytes per second. Twenty instances is a phenomenal amount of bandwidth transit for a customer,” he said.
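To make that concrete, here is a minimal sketch, mine rather than DeSantis’s, of spreading a handful of instances across availability zones, written against the modern boto3 SDK (which postdates this post); the AMI ID, instance type and zone names are placeholder assumptions.

    # Spread identical instances across several availability zones so a traffic
    # spike (or DDoS) has to saturate multiple independently connected zones.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    zones = ["us-east-1a", "us-east-1b", "us-east-1c"]   # placeholder zones
    instances = []
    for zone in zones:
        instances += ec2.create_instances(
            ImageId="ami-12345678",            # placeholder AMI
            InstanceType="m5.large",           # placeholder instance type
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )

    print("Launched", len(instances), "instances across", len(zones), "zones")

The point is less the API than the shape of the defense: an attacker has to fill several pipes at once instead of one.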

The largest DDoS attacks now exceed 40 Gbps. DeSantis wouldn’t say what AWS’s bandwidth ceiling is, but he indicated that a shrewd guesser could look at current bandwidth and hosting costs, and at what AWS makes available, and come up with a good estimate.

“I don’t want to challenge anyone out there, but we are a very, very large environment and I think there’s a lot of data out there that will help you make that case,” he said.

DeSantis said that stories like the DDoS on Bitbucket.org (and the non-cloud Sidekick story) get the attention they do because people have come to expect always-on, easily consumable services.

“People’s expectations have been raised in terms of what they can do with something like EC2. I think people rightfully look at the potential of an environment like this and see the tools, the multi-availability zone, the large inbound transit, the ability to scale out and up and fundamentally assume things should be better,” he said.

In the meantime, DeSantis urges the skeptical to look at the big picture. Things have changed so fast, he said, that people have lost sight of what it used to take to get what Amazon offers:

“A customer can come into EC2 today and if they have a Web site that’s designed in a way that’s horizontally scalable, they can run that thing on a single instance; they can use [CloudWatch] to monitor the various resource constraints and the performance of their site overall; they can use that data with our autoscaling service to automatically scale the number of hosts up or down based on demand so they don’t have to run those things 24/7; they can use our Elastic Load Balancer service to scale the traffic coming into their service and only deliver valid requests.”

“All of which can be done self-service, without talking to anybody, without provisioning large amounts of capacity, without committing to large bandwidth contracts, without reserving large amounts of space in a co-lo facility and to me, that’s a tremendously compelling story over what could be done a couple years ago.”
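For readers who want to see what that self-service workflow looks like in code, here is a rough sketch of the monitoring and scaling pieces DeSantis describes, written with today’s boto3 SDK (which did not exist when this was said); the group names, AMI ID and thresholds are placeholder assumptions, and the load-balancer piece is omitted for brevity.

    # An Auto Scaling group whose size is driven by a CloudWatch CPU alarm:
    # roughly the "monitor, then scale up or down on demand" loop described above.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # What each new host looks like.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",
        ImageId="ami-12345678",                # placeholder AMI
        InstanceType="m5.large",               # placeholder instance type
    )

    # The group itself, spread over two availability zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc",
        MinSize=1,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Add one instance whenever the alarm below fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )

    # Alarm on average CPU across the group.
    cloudwatch.put_metric_alarm(
        AlarmName="web-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )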

October 5, 2009  3:43 PM

Private cloud isn’t a new market

Carl Brooks

Private cloud is a touchy subject these days. Proponents say it’s inevitable; detractors say it’s all marketing. Enterprises, which are supposed to be clamoring for it, are cautious: hearing endless pitches from endless different angles will do that.

“We only have ourselves to blame,” said ParaScale CEO Sajai Krishnan. He said, month over month, more than half the people he pitches to say they’ve come to learn about cloud and cut through the hype. He said that enterprises are hearing about cloud, but what they’re hearing is, ‘do everything a different way’, and that’s not attractive.

Cloud is pitched as easy, cheap, low-investment, cures cancer and feeds the poor, etc., but the reality is that for a large organization that’s not going to use Amazon or Rackspace, private cloud means changing the way your business runs. Maybe for the better, but that’s real work.

Enterprises are told “’Here is cloud infrastructure and now you can have it in-house’ – that’s a big change from something that used to be fairly stovepiped,” said Krishnan. Private cloud enthusiasts promise efficiency, but what the enterprise hears is ‘they want to sell me more stuff to cram in there’.

Krishnan thinks this complexity slows down private cloud adoption. Unless it’s your business, a la Amazon or Rackspace, building or redirecting a data center into a self-service, automated, fully virtualized compute-cycle utility is wearying and expensive work. A company that does it will spend years doing it and years realizing the return. It’s not the technology (you can get a cloud for free on Ubuntu now); it’s the planning and the procedural changes.

It’s all well and good for a Web 2.0 enthusiast to start up a business on his or her laptop with Amazon; convincing 10,000 developers they have to use a new business process is quite another matter. Krishnan says the other pressure is that enterprises are conservative, and what they have is working. It’s not impossible, it’s just a lot slower than many have speculated, he said.

Intuitively, this makes sense. I can’t poke any holes in Krishnan’s reasoning. The timeline for private cloud is going to look a lot more like the infrastructure lifecycle than like a booming new marketplace. So watch those private cloud startup ideas, kids. There’s less room in here than you think.


September 30, 2009  8:59 PM

Why is Larry Ellison afraid of the cloud?

Judith Hurwitz

I just finished watching Larry Ellison’s conversation with Ed Zander at the Churchill Club, a Silicon Valley business and technology forum. While these types of dialogues are not rare in the industry, I found this one particularly insightful. I think we will look back at this conversation as a watershed moment regarding the role of hardware, software, integration, and the cloud.

In case you missed it, the video is on YouTube (I recommend watching it: http://www.youtube.com/watch?v=rmrxN3GWHpM). If you don’t have time, let me summarize the key points that I heard and give you my take.

In essence, with the acquisition of Sun Microsystems, Oracle is hoping to put the right pieces in place to position itself as an equal to IBM in the IT market. Clearly, Oracle likes the software stack that Sun has built, including ownership of Java and a lot of interesting distributed computing technology. And if we are talking about the cloud, Sun has a lot of good technology it picked up through various acquisitions. While many prognosticators assumed that Oracle would sell off Sun’s hardware assets, it is becoming clear that Oracle wants to make good use of Sun’s hardware. On some level, I think this is crazy, since the hardware business has low margins and a complex business model. However, if you listen to Ellison’s talk, it is clear why he wants to keep the hardware. He envisions a world where customers want to buy in a more straightforward way – no complex integrations, no piece parts from hundreds of different vendors.

Clearly, customers do want to have fewer vendors to deal with, but it is not clear that they want 100 percent one-stop shopping. It’s sort of like going back to the good old days of computing in the 1970s: one mainframe, integrated applications, and simplified management.

What Oracle envisions is being able to ship its customers a system that comes bundled with everything: packaged applications, its database, middleware, all the bells and whistles. It would be tuned and configured as a black box. The customer benefit would be that there would be no need for any integration of component parts; it would act like a complete system. There is a clear benefit to Oracle in being able to grab a greater share of wallet from the customer, and a benefit to the customer in not having to worry about so many moving parts.

The only thing that could possibly spoil the vision is cloud computing. Customers looking to a future of cloud computing would increasingly rely on software as a service, platform as a service, and infrastructure as a service to meet many of their computing needs. Increasingly companies are looking to a new generation of applications that leave upgrading software to the SaaS provider.

Larry Ellison decries the cloud because it assumes that there is no middleware, no hardware, no chips, and so on. But, of course, this is not true. There is plenty of hardware and plenty of chips, but they are configured and used differently than in Ellison’s vision. There is plenty of middleware too, and plenty of business services available to customers, but they are offered under the new economic model that the cloud represents.

I think that Ellison is uncomfortable with the cloud because it could have an impact on Oracle’s vision of deeper control over the customer. In time, the cloud could also dramatically impact maintenance revenue.

Of course, the cloud won’t take over the world of computing in the short run. It will evolve over time until more and more computing is based on the cloud. The cloud will have a disruptive impact on the way everything from hardware, software and services is delivered. And if I had to bet on outcomes, Ellison will be plotting a comprehensive cloud strategy just in case.


September 23, 2009  8:53 PM

Net neutrality: the lifeblood of the cloud

Carl Brooks

Cloud providers in the US should thank their lucky stars there’s a new guy at the FCC who is moving ahead with policies that will guarantee net neutrality.

What is net neutrality? Let me explain. No, there is too much. Let me sum up: the telco carriers that built and maintain the physical infrastructure of the internet want to charge more money for service to the biggest consumers, and throttle usage by their prix-fixe customers (home and small businesses) if those users actually try to use the bandwidth they signed up for.

In opposition to this is, naturally, everyone else. Prejudicial network pricing is precisely contrary to the expectations of a market-driven economy: you’re supposed to pay less and less as you buy more and more. It’s predatory and, to say the least, would be a dead weight on the entire online economy. Imagine if HP actually charged more per server sold to its best customers. Now imagine HP was the only server vendor that served your zip code. Sorry for the horrifying thought, all you hardware buyers.

To put the fight in perspective, you could combine Microsoft, Yahoo, Google and Amazon and you’d have a company almost as big as Verizon. And Verizon is only one of four major telecommunications companies in the US.

To date, the telcos haven’t been able to browbeat the FCC into letting them leverage their monopoly into predatory billing, partly because opposition is so stark and partly because there is a vestigial sense that utilities that provide a public benefit ought not to be allowed to victimize the public at large.

The implications of net neutrality for the public cloud are plain: because the business is basically margins-driven, any squeeze from carriers would hamstring providers. Amazon’s cloud success is driven precisely by the fact that using it is easy and costs about the same as running your own server, minus the investment.

If it became more expensive to run a cloud server than a real server, which prejudicial network pricing would assuredly make it, cloud adoption would stumble badly. Little users would stick with hosting; enterprises might still move into private cloud, but there would be no compelling reason for them to put appropriate applications and data in the public cloud.

The true benefits of cloud computing – cheap, elastic and massively parallel computing power at the fingertips of the bright young things in industry and academia – would never be realized, since Comcast or Verizon would be lying in wait to pounce on data-crunching projects and surcharge them.

On the other side, the SaaS explosion would fizzle if Salesforce.com suddenly had to pony up for its millions of users, for instance. Not a single free service out there would stay open a day past the day it had to charge to make up for overage fees, nor would the umpteen start-ups predicated on cloud, both using it and selling it, get off the ground if they had to plan on sharecropping with their telco landlords once the business got popular.

Without net neutrality, in short, cloud would go where its ancestors, utility and grid computing, went: to the backwaters of research or the vast wastes of the enterprise, just part of the gaggle of professional services sold to large corporations. Utility and grid ended up there because they lacked all the things that cloud delivers: speed, ease, availability and economy. Cloud computing is supposed to obscure the infrastructure layer; it needs a level playing field to do that.

So Amazon, Rackspace, Google and all the others should wipe their brows in relief that they’ve got at least three or four years to really let the whole idea take hold and become a mainstay of the economy rather than a sideshow.

That doesn’t leave much time for dilly-dallying.


September 9, 2009  8:59 PM

Measuring the Growth of Cloud Computing

Carl Brooks

As cloud computing has grown in recognition, and the marketplace has started to attract serious cash, some people are beginning to put serious effort into tracking and measuring actual cloud usage. Here’s a small collection of links that show, with some rigor, the state of cloud computing today.

Guy Rosen has the rough cut of usage for public clouds, which finds that among IaaS providers, Amazon EC2 leads the pack, followed by Rackspace, Joyent and GoGrid.

But there are caveats to Rosen’s data. Rosen is only counting websites running in the cloud. The raw data comes from Quantcast, which Rosen has analyzed according to IP location to generate comparisons.
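The general approach is easy to imitate. The sketch below is my own illustration rather than Rosen’s code: resolve each site in a sample, reverse-resolve the address, and attribute the site to a provider by its reverse-DNS suffix. The suffix table and the site list are placeholder assumptions; a real survey would also check published provider address ranges.

    # Attribute websites to cloud providers by the reverse DNS of their addresses.
    import socket

    PROVIDER_SUFFIXES = {
        "amazonaws.com": "Amazon EC2",
        "slicehost.net": "Rackspace/Slicehost",   # illustrative suffixes only
        "gogrid.com": "GoGrid",
    }

    def classify(hostname):
        """Return the provider a site appears to run on, or None."""
        try:
            ip = socket.gethostbyname(hostname)
            rdns, _, _ = socket.gethostbyaddr(ip)
        except OSError:
            return None
        for suffix, provider in PROVIDER_SUFFIXES.items():
            if rdns.endswith(suffix):
                return provider
        return None

    for site in ["example.com", "example.org"]:    # stand-ins for a Quantcast-style list
        print(site, "->", classify(site))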

It’s worth questioning how useful Rosen’s analysis is. Classically, Web servers are a primary use case for cloud computing, but increasingly, data processing stacks, test and dev and similar applications are pitched as potential uses for the public cloud. With Amazon continually making hay over its use by the enterprise, this analysis may be accurate, but it is certainly limited.

Another stab at quantifying the cloud comes from those beloved propeller-headed comp sci types, in a technique they dub “Cloud Cartography.” In the course of analyzing multi-tenancy security vulnerabilities, researchers at the University of California, San Diego and MIT came up with a bone-simple way to coarsely measure actual servers on Amazon’s EC2 cloud. (Hint: it involved a credit card, nmap, wget and Amazon’s DNS servers.) According to their cursory research, the number of responding server instances on EC2 currently stands at 14,054.
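The counting side of the trick can be approximated in a few lines as well. This is a toy sketch of the idea, not the researchers’ method: walk a slice of address space, reverse-resolve each IP, and tally the ones whose PTR records follow the EC2 hostname pattern. The CIDR block is a placeholder, and the real study also probed the hosts directly rather than trusting DNS alone.

    # Count addresses in a block whose reverse DNS looks like an EC2 instance name.
    import ipaddress
    import socket

    def count_ec2_hosts(cidr):
        hits = 0
        for ip in ipaddress.ip_network(cidr).hosts():
            try:
                name, _, _ = socket.gethostbyaddr(str(ip))
            except OSError:
                continue
            if name.endswith(".compute-1.amazonaws.com"):
                hits += 1
        return hits

    # Placeholder test range; a real survey would use a published EC2 block.
    print(count_ec2_hosts("203.0.113.0/28"))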

Cloud Cartography promises to be a very entertaining arms race between cloud providers and the curious, and will doubtless be emulated by others for different sites. I’ll try to keep this space updated as new metrics come around. In the meantime, vendor-neutral suggestions about ways to gauge the state of cloud computing are welcome. Let’s make this a haven for learning what’s really going on.


September 4, 2009  3:27 PM

VMware vCloud Express: Right move, wrong focus

Jo Maitland

VMware is right to introduce a cloud computing service that competes with Amazon EC2. But wrong to focus on the aspect of buying these services with a credit card. We know of at least one company where the act of punching in a credit card number to buy servers is immediate grounds for dismissal.

vCloud Express, unveiled at VMworld in San Francisco this week, lets companies running VMware software hook up to a hosting provider running a public cloud also based on VMware, for additional compute resources on demand.

vCloud Express competes with Amazon.com’s EC2, now infamous for the speed at which users can buy and turn on servers, the low cost of entry and the ability to use only what you need, when you need it. But chasing Amazon.com’s value proposition of “fast and cheap”, which is how VMware CEO Paul Maritz referred to vCloud Express in his keynote, is the wrong focus for enterprise IT.

Yes, IT managers want more agility and lower costs, but most of them won’t touch cloud services with a 10-foot pole, from VMware or anyone else, until they are sure of the security and reliability of these services. That’s where VMware should be putting its effort and focus, not on a simplistic web interface for entering credit card numbers.

The vCloud Express announcement left the 12,000-strong audience at the keynote cold. Finding anyone in corporate IT at the show who had tried or was using Amazon.com EC2 was tough. It’s still early days for this stuff, but most people said concern about the security of their data and workloads in the cloud was an issue. One company we found that is using EC2, Pathwork Diagnostics, said the advantages were less about cost and more about increasing performance. This user said one of the downsides of EC2 was the lack of a job scheduler that works well in a dynamic IP environment.

VMware would be better served listening to these customers and their problems with managing infrastructure in the cloud than chasing Amazon’s fast, cheap model, which is surely not where the big bucks in cloud computing are going to be anyway.


September 4, 2009  2:08 PM

How to Build a Private Cloud

John Willis

Ubuntu Enterprise Cloud (UEC) is a private cloud that embeds the Eucalyptus cloud platform in Ubuntu Server. The current release of UEC runs on Ubuntu 9.04 Server running Eucalyptus 1.5. There is a later version of Eucalyptus (1.5.2); however, I didn’t try that for this blog post. In this example I installed all of the UEC cloud components on a single system. Typically you would not want to do this; however, it works well as a demo system.

Quick UEC Overview

UEC is made up of three components: the Cloud Controller (eucalyptus-cloud), the Cluster Controller (eucalyptus-cc), and one or more Node Controllers (eucalyptus-nc). The Cloud Controller is the Web-services interface and the Web UI server; it also provides resource scheduling and S3- and EBS-compatible storage interfaces. A cluster in UEC is synonymous with an availability zone in AWS. In this release of UEC the Cluster Controller has to run on the same machine as the Cloud Controller. The Cluster Controller provides network control for the defined cluster and manages resources within the cluster (i.e., resources on the nodes). The Cloud Controller and the Cluster Controller together are sometimes referred to as the Front End.

Typically the Node Controller runs on a separate box from the Front End box. In a production environment there will be multiple Node Controllers making up a larger cluster (i.e., your cloud). Each Node Controller runs on a KVM hypervisor host, and all the Node Controllers in the cluster make up the cloud environment. In the current release, running multiple clusters is not really supported; in future releases of UEC you will be able to run multiple clusters in one environment, with each cluster acting like an availability zone. As I noted earlier, in this example I am putting everything on the same box (my laptop). I will point out areas where the configuration would be different in a normal installation of UEC.
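One practical consequence of Eucalyptus mimicking AWS is that ordinary EC2 tooling can talk to the Front End once it is running. The snippet below is a hypothetical sketch using boto3 pointed at a stand-in Cloud Controller endpoint; the URL, credentials and region name are placeholders, not part of UEC itself, and whether a given Eucalyptus release accepts the request signing that current SDKs use is something to verify against your own install.

    # Query a Eucalyptus/UEC Front End through its EC2-compatible API.
    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="http://192.168.1.10:8773/services/Eucalyptus",  # placeholder Front End
        aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
        aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
        region_name="eucalyptus",
    )

    # Clusters show up as availability zones; registered images list as usual.
    print(ec2.describe_availability_zones()["AvailabilityZones"])
    print(ec2.describe_images(Owners=["self"])["Images"])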

Building a Private Cloud in One Hour


August 26, 2009  7:06 PM

Amazon VPC: moving the goal posts or playing catch-up?

Carl Brooks

Amazon has announced, in its inimitable bloggy style, a new service to allow users to create virtual private clouds within its data centers.

The new Amazon VPC offering is “virtual” because the networking and the machine images are abstracted away from the physical infrastructure. It’s “private” because, unlike standard EC2 instances, the instances inside it don’t have public IP addresses. And it’s “cloud,” naturally, because you pay $0.05/hour for the service and you can quit whenever you want.
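For a sense of the offering’s basic shape, here is a minimal sketch using today’s boto3 API, which looks nothing like the 2009 tooling and is only an illustration: an isolated address block with a private subnet, into which instances land with no public addresses unless you add them. The CIDR blocks are placeholders.

    # Carve out an isolated address space and a private subnet inside it.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")            # placeholder CIDR
    vpc_id = vpc["Vpc"]["VpcId"]

    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

    # Instances launched into this subnet get only private addresses; reaching
    # them from a corporate network is the job of the VPN gateway piece.
    print("VPC", vpc_id, "subnet", subnet["Subnet"]["SubnetId"])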

The cloud computing blogosphere was abuzz with the announcement. But is Amazon VPC, as these blogs say, really revolutionary, a re-definition of private cloud, and a validation of thinking about public, private and hybrid clouds?

None of the above, I believe. While it’s fun to poke holes in an announcement such as this, especially when it comes from acknowledged cloud market leader Amazon, there has to be a street-level view that looks at the reality of what’s in the offering and why.

Frankly, Amazon VPC is a terrible virtual private cloud. Network control and management are rudimentary, the VPN is stone-age, users can’t expose clients to the internet and can’t assign them IP addresses. Clearly it is not ready for prime-time, and clearly it is not aimed at Amazon’s existing user base, because they’d all have to uproot their current infrastructures to use it. It is for experimenters who start with requirements that preclude public cloud.

Granted, it’s early days, and changes are in the works, but the kind of technology in Amazon VPC was hashed out with hosting, complex hosting and managed hosting years ago. Compared to what is standard for secure VPN infrastructures in these areas, the Amazon VPC and VPN are decidedly small beer.

Next, the arguments that this announcement validates definitions for different types of cloud computing, or somehow affects the current market as it applies to private cloud, are risible. Suppliers don’t define a marketplace; they react to it.

Cloud computing is essentially a consumption model: as much as, for as long as, and whenever you like. Cloud underpinnings like virtualization, security and costing models are just a means to an end. It was only natural that when large enterprises saw this new model of self-service and low-overhead management, they would want to try it out in their own data centers.

It’s also natural that those interested in private clouds wouldn’t want to use public clouds — public cloud is antithetical to controlling your IT environment. Hosting providers quickly realized that enterprises wanted fenced off reserves to noodle around with cloud stuff, not open pasture.

Indeed, VMware has been leaping to fill that need since last year, with vSphere and vCloud and hosting partnerships with Rackspace and Terremark, among others.

So Amazon isn’t defining the conversation by any means; it’s playing catch-up. As it stands, the Amazon public cloud isn’t designed to be private, quite the opposite. Amazon VPC is a radical change of pace for Amazon, not for the cloud market. That market is rapidly filling up with providers who understand the enterprise cloud business and want to serve it, which has never been Amazon’s goal.

In the near future (*cough* VMworld *cough*), we’ll see products and services that make Amazon VPC look like chopped liver, and it will be abundantly clear that Amazon is just starting to react to a segment of cloud that is already well under way, one it never set out to capture but that is taking off faster than many thought possible.


August 14, 2009  9:52 PM

Party’s over, kids: Microsoft has private cloud all sewn up. In 2010. Maybe

Carl Brooks

Microsoft says it will have the definitive virtualized public/private/platform cloud solution ready to go in a “shrink wrap” package by 2010, and that, by the way, hosters that aren’t fully virtualized will go the way of the dodo. Of course, this may come as a surprise to all the hosters already going great guns with any variety of managed, virtualized and dedicated offerings, including cloud computing models.

Zane Adam, Senior Director of Virtualization at Microsoft, announced the Microsoft model for hosting companies and data centers at Tuesday’s HostingCon 2009 keynote. He said that lowering “human touch” and “fabric management” were the new face of hosting and that “those that pull the plug [on virtualization and automation] too late will become dinosaurs.”

Adam pitched Microsoft’s “System Center Solutions” and Dynamic Data Center Toolkit as the provisioning and management glue for Microsoft’s new server products. Get on Server 2008 R2 with Hyper-V, he said, download the software kit and away you go: virtualized, managed, cloud-ready. A wonder no one’s thought of that before.

Adam was perhaps too farseeing for those at the keynote. Some attendees felt the conversation might be getting a little blurry, a little too fast. That’s not surprising given the audience — rock-ribbed rack-em-and-stack-em hosters — many of whom see an inextinguishable need for physical hosting, even as cloud computing grows.

Adam said the “vNext” version of the Toolkit will complete the vision with dynamic provisioning for virtual machines, application monitoring and “one-click” provisioning by Q1 of 2010.

Microsoft is justly famed for pie-in-the-sky product lines, but there may be some meat to this announcement. Server 2008 R2 will be released this October, and Azure is slated for general availability at the same time. The “System Center” suite and the toolkit are already out, in crude fashion.

So, hosters, if you were tired of watching Amazon and Rackspace do it for free, or hadn’t heard of VMware or Xen, or just started feeling a little antediluvian, all you have to do is wait. Microsoft will have this whole virtualization/cloud thing sewn up tight some time next year.


August 12, 2009  2:06 PM

Azure to blaze the way for hybrid cloud

Carl Brooks

Infosys believes Microsoft is staking out the cloud as the inevitable future of IT — and designing Azure to be a seamless bridge between hither and thither in order to make the transition in small steps for enterprise consumers.

According to Jitendra Pal Thethi, principal architect for Microsoft business intelligence at the Indian IT giant, Microsoft’s aim with Azure is to hopscotch right over the Infrastructure-as-a-Service part of the cloud and sell what it already has – software – in the approved cloud fashion: on demand, scalable and transparent at the hardware level. Why should the baker sell wheat, after all?

Thethi has been involved with Azure since development began, some three years ago. He said that Azure is designed to let developers carve off sections of their projects and put them in Microsoft’s cloud without having to re-learn or revamp anything. Databases already developed in Microsoft SQL Server can go right into Azure’s SQL Data Services without a hitch: storage, processing and all, for example.
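The “no re-learning” pitch is easiest to see from the application side. As a hypothetical illustration (mine, not Thethi’s), a SQL database hosted in Microsoft’s cloud is reached with an ordinary connection string, much as an on-premise SQL Server would be; the driver name, server, database and credentials below are all placeholders.

    # Connect to a cloud-hosted SQL database the same way as a local SQL Server.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=tcp:myserver.database.windows.net,1433;"   # placeholder server
        "DATABASE=mydb;UID=myuser;PWD=mypassword;"
        "Encrypt=yes;"
    )

    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")    # an ordinary T-SQL query, unchanged
    print(cursor.fetchone()[0])
    conn.close()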

“This concept [is that] not everything will be in the cloud; not everything will be on-premise—it will be a hybrid world,” he said. Thethi said businesses already using Microsoft for development “can pick off the low hanging fruit” without having to leave their comfy Microsoft environment or design an interface to a non-Microsoft cloud.

“Azure today gives you an on-premise experience….It’s something none of the other cloud providers provide,” he said. Thethi said that cloud computing will fundamentally change development and design, but it’s years away and Microsoft is well aware of that.

“The fact of the matter is that they want to get the ball rolling,” he said, and get developers comfortable with using online services in small ways before thinking bigger. “The entire architecture and development [model] is going to change,” he said, but Microsoft is betting businesses will want to move into the cloud in safe, familiar steps.

Microsoft plans to make Azure as compatible and useful as it can, reasoning that the less developers have to do, the easier it will be for them to make the switch. Some people already call Azure “on-demand Server 2008.”

Furthermore, it should be noted that Microsoft has no real advantages in delivering computing power itself; it neither makes computers nor helps people run them. Hosting companies and data centers do that, and they are already cutting a broad swath in the public cloud market.

So Redmond, by virtue of ubiquity, has the opportunity to carve out the Platform-as-a-Service territory very neatly. It already makes the software that (mostly) everyone is using, it has plenty of spare cash and plenty of big iron on the ranch for users, and it can scoop up subscribers and users just by being that little bit easier to use, and just a few cents cheaper, than the competition, and by letting enterprises come in at their own pace.

After all, Microsoft is nothing if not patient. With cloud computing, it has everything to gain here, very little to lose and an audience it doesn’t have to chase. All it has to do is make Azure run, and wait.

