The Troposphere


July 13, 2009  4:11 PM

AWS growth info exposed

Steve Cimino

We recently came upon some photos displaying, in fancy picture form, the growth of Amazon Web Services in terms of bandwidth usage and objects stored in Amazon S3. The results are impressive, as you’ll see below:

AWS usage growth

Amazon S3 momentum

With S3 storage almost tripling in a year, not to mention AWS bandwidth usage skyrocketing as well, the future of cloud computing at Amazon seems, as expected, to be very bright indeed.

If anyone out there wants to challenge, confirm or comment on these numbers, we’d love to hear from you.

July 13, 2009  1:09 PM

Infrastructure as Code

John Willis

A few weeks ago I attended Velocity '09 in San Jose, Calif. One of the sessions used a phrase that I had never seen before, and it stung me like a bee. In fact, in my opinion, this new phrase described one of the more dominant themes of the conference. These companies, sometimes called "Internet 10" companies, had figured out something that the enterprise has not been able to figure out for over 30 years. They understand that managing your infrastructure is as important as managing your applications. Fortune 5000 enterprises have always given lip service to this concept. However, they purchase tools first, thinking that is all they need in order to say that their infrastructure is important. They use monitors and event managers that give them a warm and fuzzy feeling that they are doing all the right stuff. On the configuration and provisioning side, they use large monolithic distribution systems to provide software distribution and sometimes configuration. In the enterprise, they also raise their swords called ITIL and COBIT to protect their "as-important-as" peace of mind. This false sense of confidence always reminds me of the CEO who is confident that all employees are treated equally because he put silly motivational posters on all the walls. Meanwhile, his parking spot is the closest one to the door.

At Velocity, companies like Flickr, Twitter, Google, and Myspace were making a subtle point: their gods were not found in the tools they used but in the processes they used. They understood something the enterprise has never quite grasped. That is, if the infrastructure is important to the business, then why not treat it as such? Put your process where your mouth is, not the money you spend on your tools. These companies at Velocity understood that the code that manages your infrastructure is as important as the code that runs your applications. In fact, the Velocity presenters made the point time after time that the infrastructure "code" and the application "code" need to be treated as equals. To that end, some of these companies keep application code and infrastructure code under version control in the same tool. You don't see that in the enterprise, folks! To quote my good friend @littleidea, "WTF?"

What is infrastructure code, and how would you put it in a version control system? Yeah, yeah, sure, sure, infrastructure is all those pesky objects that "Bob" the sysadmin understands but no one else does. Yes Virginia, these objects are the glue that makes our infrastructures work. It is this metadata that is needed to deploy, manage, and configure our infrastructures. In the enterprise they have infrastructure code too. NOT! . . . they have scripts, and tons of them . . . Perl scripts, shell scripts, proprietary macro configuration languages that look like scripts. All of this "wannabe" metadata is scattered across all sorts of buildings and geographies. Some items are embedded in products provided by Tivoli, Microsoft, BMC, and HP. Others are hidden in workload manager tools. Some are managed by the operating system scheduler (e.g., cron). And last but not least, some live in Bob's special directories on some not-so-well-documented file server. Oh yeah, and guess what: virtualized environments are usually managed by another, completely different team.

Infrastructure as Code means managing all the things that appear just after the server comes up. It also means putting a process in place that lets you better manage all these items. The funny thing is that Luke Kanies of Reductive Labs has been preparing me for the acceptance of this concept of "Infrastructure as Code" for over two years now. When I heard the phrase "Infrastructure as Code" at Velocity, I knew exactly what they were talking about. Luke has preached this concept in a couple of my Cloud Cafe podcasts as well as with Michael Coté and me in our IT Management podcast series. Defining the various system configuration files (user IDs, mount points, services, etc.) as objects allows an organization to better manage its infrastructure resources. Reusable objects that can be referenced as code provide an enterprise with an object-oriented model for managing its infrastructure. There is a beautiful analogy here. In the early 1980s we switched our applications from a functional paradigm to an object-oriented model. We still haven't done this with our infrastructure. Are you starting to get my point?
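
To make this concrete, here is a toy sketch in Python (not Puppet's actual DSL or any real tool's API) of what "configuration as objects" might look like: the desired state of a host described as plain data objects that can be checked into version control right next to the application code, then applied by a small engine. The resource types and the apply() helper are illustrative only.

    # A toy sketch: system configuration expressed as reusable objects that can
    # live in version control and be applied to a host. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        uid: int
        shell: str = "/bin/bash"

    @dataclass
    class Mount:
        device: str
        path: str
        fstype: str = "ext3"

    @dataclass
    class Service:
        name: str
        enabled: bool = True

    # The "infrastructure code": a plain-data description of desired state.
    DESIRED_STATE = [
        User("deploy", uid=1001),
        Mount("/dev/sdb1", "/var/www"),
        Service("httpd"),
    ]

    def apply(resource) -> None:
        # A real tool would compare against the host's current state and only
        # change what differs; here we just report what would be enforced.
        print(f"ensure {resource}")

    for resource in DESIRED_STATE:
        apply(resource)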

At the Velocity conference, presentation after presentation pointed out the useful tools that companies are using to implement this "process-first" model. Puppet and Chef are clearly dominant figures in this new IT renaissance. However, they are just the tools, not the process. You would hear things like, "Oh yeah, we use Puppet along with things like Capistrano and Nanite." In fact, one of the vendors at the Velocity conference, ControlTier, had a nice poster describing this whole new stack when it comes to the concept of Infrastructure as Code. They look at the stack as a three-layer model: the lowest layer is the virtual/cloud image or the bare metal; the second is the systems configuration layer, with tools like Puppet, Chef, and cfengine; and the third is the application service deployment layer. Coincidentally, this last layer is their specialty. ControlTier manages the application lifecycle for large enterprise Java applications.
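
A rough way to picture that three-layer model is as a pipeline that runs, in order, against every new node. The sketch below is my own illustration in Python with placeholder function bodies; it is not ControlTier's (or anyone else's) actual tooling.

    # Hypothetical sketch of the three-layer "Infrastructure as Code" stack:
    # each layer runs in order when a fresh node is provisioned.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Layer:
        name: str
        run: Callable[[str], None]  # takes a hostname

    def boot_image(host: str) -> None:
        print(f"[{host}] layer 1: boot the cloud image or bare metal")

    def apply_system_config(host: str) -> None:
        print(f"[{host}] layer 2: apply system config (users, mounts, services)")

    def deploy_application(host: str) -> None:
        print(f"[{host}] layer 3: deploy and start the application services")

    STACK: List[Layer] = [
        Layer("image", boot_image),
        Layer("system configuration", apply_system_config),
        Layer("application deployment", deploy_application),
    ]

    def provision(host: str) -> None:
        for layer in STACK:
            layer.run(host)

    provision("web01.example.com")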

Describing a new concept is always difficult and lends itself to confusion (try Googling "What is a cloud?"). Infrastructure as Code might not be the best way to label this new concept. Quoting the brilliant Andrew Shafer of Reductive Labs, from an ongoing argument that I am having with him on this subject:

Care less about the labels, and more about what it enables. We are moving towards enabling what we don’t have words to describe, so I expect some communication to be clumsy…

Debate or no debate, this is a very exciting time for infrastructure and I look forward to working with some of the key players in this new area.


July 10, 2009  4:09 PM

Microsoft nabs office.com domain for Office Online?

Carl Brooks

Veteran virtual collaboration software vendor ContactOffice has given up its prime post over the office.com domain, possibly in favor of Microsoft, which is rumored to be announcing its Google Docs killer Office Online next week.

According to WHOIS, a brand-protection and "IP investigator" firm called Marksmen now holds the rights to the soon-to-be-eponymous domain. Marksmen, in turn, is known to purchase domains on behalf of Microsoft; it was the unwitting dupe in a little 2007 payola scandal. It doesn't take a Wile E. Coyote to make the leap and assume that come August 1, http://office.com will sport a new coat of pale blue Web 2.0, courtesy of Clippy. (Yes, I know he's retired, but the blood still boils at the very mention of the name, no?)

Reached by phone, ContactOffice spokesman Tom Graham would not comment on the move in any way, so it is unknown how Microsoft convinced the small firm to shuffle out of the limelight. The $60 billion Redmond firm has been known to move aggressively to crush competitors, but it has also made millionaires by buying up technologies and intellectual property as it saw fit.

Google Docs, meanwhile, with its tiny market share, hardly seems to be a competitor to Microsoft Office, but Redmond is clearly planting stakes in every cloud market it can, from reports that it will undercut Amazon Web Services (AWS) with Azure to promoting its Dynamic Data Center Toolkit. (Surely that should convince legions of data center operators to switch to Hyper-V!)


July 8, 2009  8:13 PM

Paglo goes whole hog to lift log fog; this blog agog

Carl Brooks

Cloud-based IT management provider Paglo has busted out with an interesting twist on managing network log files: Google-style browser search, cross-referenced over time. Paglo says it's the first of its kind. It's definitely unique. Here's a sample of the dashboard:

Anyone who's ever had to slog through truckloads of log files will see Paglo's utility instantly. With its intuitive search interface and comprehensive set of data analytics, this screenshot will make admins' mouths water.

Paglo is an example of a strange new beast that we'll probably be seeing a lot more of: a pure cloud-based management tool. To get started, simply download the open source Paglo Crawler to your network and it will start gathering data from WMI or syslogs and feeding it back to an individual index on Paglo's infrastructure.
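
The general shape of that arrangement (an on-premises agent that tails logs and ships them to a hosted index) looks roughly like the sketch below. This is just an illustration of the pattern in Python; the endpoint URL and payload format are made up and are not the Paglo Crawler's actual protocol.

    # Hypothetical log-shipping agent: tail a syslog file and POST batches of
    # lines to a hosted search index. Endpoint and payload are invented.
    import json
    import time
    import urllib.request

    INDEX_URL = "https://logs.example.com/api/index"  # placeholder, not Paglo's

    def ship(batch):
        """POST a batch of log lines to the hosted index."""
        body = json.dumps({"lines": batch}).encode("utf-8")
        req = urllib.request.Request(
            INDEX_URL, data=body, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req)

    def tail(path="/var/log/syslog", batch_size=100):
        """Follow the log file and ship lines as they arrive."""
        with open(path) as f:
            f.seek(0, 2)  # start at the end of the file
            batch = []
            while True:
                line = f.readline()
                if not line:
                    time.sleep(1)
                    continue
                batch.append(line.rstrip("\n"))
                if len(batch) >= batch_size:
                    ship(batch)
                    batch = []

    # tail()  # run as a long-lived agent on the monitored network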

Paglo CTO Chris Waters said the idea for Paglo sprang from seeing the ubiquity of search (everyone knows how to use a search engine, right?) and applying it to the user demand they saw for log management.

"We know there's latent demand in the market for logs," he said. Since Paglo was already collecting, massaging and delivering complex data in real time, adding searchable logs to customers' data was a natural fit. (Click here to read about folks running new kinds of databases in the cloud.)

Aside from its bravura log search tool, Paglo also has a fairly standard set of MSP features that will be familiar to any IT pro, including performance and network monitoring and patch management for Windows. It's thin on other standard features, though, like remote desktop access or remote control. Clearly, its biggest strength is the way it aggregates network information.

Most importantly, though, Paglo requires no initial investment — it's pure pay-as-you-go. More traditional MSPs require onsite hardware and a hefty licensing fee to get started. Waters is banking on making it cheap and easy to get started and on scaling out Paglo's virtualized, hosted infrastructure to keep growing. All of which, naturally, is only practically possible for a small business "in the cloud."


July 2, 2009  7:44 PM

AppEngine follows Rackspace’s lead, barfs all over the place

Carl Brooks

Google's vanguard platform service AppEngine has fallen down, hot on the heels of Rackspace's storied hot mess on Monday.

Follow the real-time nerd outrage on, what else, Twitter.

A Google spokesman, presumably Brett Slatkin, posted to the AppEngine Google Group about the outage, saying the problems had started at approximately 6:30 AM PST.

Very little is known about the infrastructure that underpins AppEngine, but entering "unplanned maintenance mode" and knocking your entire userbase off their data, especially when they have no alternative and no way to mitigate risk by using other platforms, points to a very low-level problem.

AppEngine is still in beta, like lots of other Google services, so they haven't broken any promises, per se, but it's hardly good PR. Google experienced a more serious outage in May.

Rackspace recouped some of its sorely besmirched street cred by promptly and thoroughly communicating information about its outage via Twitter, blogs and direct communication. It remains to be seen if Google will do the same.



Server Error (500) <-----bad juju!


June 30, 2009  7:43 PM

Rackspace falls over in Dallas, tweets the whole thing

Carl Brooks

More than 24 hours after users began reporting that Rackspace hosted services were unresponsive and the main site went dark, Rackspace has possibly set a new record for transparency and accountability, if not customer satisfaction, by tirelessly tweeting the entire episode.

They also ran to update the company blog (how droll, so Web 1.0, right?) and blamed power outages in their Dallas data center.

For additional amusement, see the vultures (er, competitors) flock to #rackspacefail.

An official statement has not been made and a request for comment has gone unanswered to date, so the root of the problem is still to be determined. Amazon's recent calamity was exacerbated by lightning unaccountably penetrating a supposedly world-class data center; it'll be interesting to see if Rackspace's facilities have similar flaws.

UPDATE: Rackspace HAS NOT released its incident report, but it's out in the wild. According to the report, which I won't post but will summarize, since the content is fair game at this point: a mains breaker flipped, and one line of generator backups had an "excitation failure," which means they didn't start up properly. Subsequently, three banks of UPS batteries bled out and slammed a bunch of racks, which means they weren't charging properly or, worse, were underdesigned for the load.

What this means in the simplest possible terms: “Heads Will Roll”. Between this and Amazon’s air-to-ground static electricity adventure, data center types are wagging their grimy, highly redundant fingers as hard as possible at these incidents.

UPDATE: The incident report is now public.


June 30, 2009  3:32 PM

Trawling for ideas in Open Cirrus — HP’s stab at a free cloud

Carl Brooks

HP plans to release its own open source cloud computing platform, according to the Director of HP’s Service Automation and Integration Labs, Chris Whitney. The Open Cirrus project, which HP Labs sponsors along with Yahoo! and Intel, was designed to put together “the ultimate stack of software people can use to build a cloud,” he said.

The project is a collaboration between the IT giants to pool computing resources at different sites into a "testbed" cloud and open it to researchers. Whitney said that Open Cirrus has brought about 300 researchers on board since announcing partnerships with far-flung computer science labs in Russia, Malaysia and South Korea. He said HP has contributed about 10,000 computing nodes (virtual servers), Yahoo! about 3,000 to 4,000, and the other partners are kicking in at least 1,000 nodes each.

"We're definitely envisioning an open-source software stack under the GPL," he said, similar to the ubiquitous LAMP software stack. He said they are experimenting with existing free cloud technologies (like EUCALYPTUS) and also gathering data from HP Labs' own hardware installation, which runs on commodity HP data center servers.

Whitney said his team is also experimenting at a low level with optical fiber communications on server backplanes, as well as with heating and cooling techniques in HP's installation.

At a higher level, researchers are focusing on different applications for Hadoop, like data mining applications and “wide-area Hadoop” — data processing over distant geographical locations. There is a short list of current projects, but more are expected from the new partner sites.

Several open source cloud projects exist already, like Spain's Abiquo, UC Berkeley's EUCALYPTUS (released on Ubuntu 9.10), Canada's Enomaly, the Globus Nimbus project and others. Cloud leaders like Amazon and Rackspace run their clouds on open source technology but do not release that technology publicly. IBM is facilitating an EU-funded project called RESERVOIR, but its goals appear to be strategic rather than practical.

HP’s entry, when and if it arrives, will mark the first open source cloud platform released by a major commercial vendor; certainly something to watch.


June 18, 2009  7:48 PM

Amazon EC2 zap smash: everyone’s cool with it

Carl Brooks

In hindsight, the lightning-strikes-Amazon-data-center story is a tidy little example of a nu-media bubble. Someone should make a graph of the coverage indexed by hysteria, outrage, maniacal prophecy and supposition and tweet it or something.

Having now had a nice talk with real live Amazon people, it seems they are treating it mostly as a public relations problem, and the real issue is transparency.

You see, for Amazon watchers, the Holy Grail is to find out exactly what Amazon's servers are and where they live. But Amazon isn't keen on handing out details, likely because the reality is messy and because they might be making it up as they go along. Those Amazon watchers might want to relax. Sure, Amazon is a going concern, but it doesn't have the kind of scratch or incentive to reinvent the wheel (er, server) the way Google or Microsoft do.

Further hurting Amazon's cause is that most hosting companies are more than happy to tell you what they run. Verizon, for instance, recently boasted about its new "CaaS" hardware. Pricing starts at $250/month, and that's before you fire up a single server.

Amazon is trying to run away from that game and focus on delivery. But after a certain point, people really do care about the nuts and bolts, since unlike semi-durable consumer goods, an EC2 instance is an ongoing concern, and users want to understand how their application is staying up (I know — so last century, right?).

I did have a chance to ask about Amazon’s hush-hush data center facilities. I didn’t get much more than a general admission that “Availability Zones” are usually located in different data centers, and that there are four in the US as of June 9. Amazon was also apparently startled to discover that one facility had electrical exposure to the Great Outdoors. That’s still progress. Hopefully, there’ll be more. I’m waiting, and I know lots of others are as well.


June 15, 2009  5:27 PM

Salesforce.com pushes free version of Force.com

Jo Maitland

Salesforce.com has announced a free version of its cloud development platform, Force.com, in an effort to woo new customers.

Salesforce claims 110,000 business applications are currently running on top of Force.com, which competes against Amazon Web Services, Google App Engine and Microsoft Windows Azure in the cloud application development market.

Offering a limited, free version is a typical ploy by software companies to get users hooked; when they need more functionality, they are forced to buy a subscription. Salesforce.com began this way with a free version of its SaaS CRM product: get the VP of sales to sidestep IT and buy it for themselves, and then they're in, along with all the company's data, and nobody can get them out.

Force.com Free Edition offers:

  • one custom app
  • one website with up to 250,000 page views per month
  • up to 10 custom objects (custom database tables) per user
  • a sandbox development environment to test the app or site before deploying it
  • basic online training
  • a library of sample apps

Companies that need more than one app, support for more than 100 users, or more than one website must upgrade to a commercial subscription to Force.com, which begins at $25 per user per month.

For more on CRM, check out Voices of CRM.


June 11, 2009  10:43 PM

Light and dark in the Cloud: Amazon fails, Rackspace punts

Carl Brooks

In a succinct one-day recap of the mouth-watering prospects of using cloud computing and the legless terror it can engender, website gdgt.com autoscaled its way through gigantic traffic, and Amazon's flagship EC2 service went dark for hours in the night after an electrical storm.

That's right: according to Amazon (mouse over the teeny little 'i's), "A lightning storm caused damage to a single Power Distribution Unit (PDU) in a single Availability Zone." That means that one of its data centers in the US popped a cork and shut down an unspecified number of racks after a lightning strike.

The outage lasted from detection at 6:39 PM PT on 6/10 to full availability at 1:20 AM PT on 6/11. That's nearly seven hours of unexpected downtime, kids. Anyone running real-time applications or large batch jobs when their server got slammed? Any lost revenue/time/work? Let's check the SLA, shall we? Graciously, Amazon will not be charging affected customers for services that went dark.
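
For context, the EC2 service level agreement in force at the time committed to 99.95% uptime over a trailing year. A quick back-of-the-envelope check of the figures above (my arithmetic, not Amazon's):

    # Outage length vs. the downtime allowance implied by a 99.95% annual SLA.
    from datetime import datetime

    start = datetime(2009, 6, 10, 18, 39)  # detection, 6:39 PM PT on 6/10
    end = datetime(2009, 6, 11, 1, 20)     # full availability, 1:20 AM PT on 6/11
    outage_hours = (end - start).total_seconds() / 3600
    print(f"outage: {outage_hours:.2f} hours")  # ~6.7 hours

    allowed_hours = (1 - 0.9995) * 365 * 24     # downtime budget in a year
    print(f"annual allowance at 99.95%: {allowed_hours:.2f} hours")  # ~4.4 hours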

Amazon’s public stance on this so far? A pop-up window on their status page (see above).

On the other side of the coin, the currently quite minimalist gdgt streamed the keynote from Apple's legendary dog-and-pony show, the Worldwide Developers Conference, and incurred the expected Japanese-monster-sized traffic spike. Reportedly, traffic averaged 656 page views per second throughout the event, serving up something like 4.7 million total views by the end.
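
For what it's worth, those two figures hang together; 656 page views per second sustained over a roughly two-hour keynote works out to about 4.7 million views:

    # Sanity check on the reported gdgt traffic numbers.
    rate = 656            # page views per second, as reported
    total = 4_700_000     # total views, as reported
    print(f"{total / rate / 3600:.1f} hours")  # ~2.0 hours of keynote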

gdgt did this on Rackspace's "Cloud Sites," a scalable webhosting platform that starts small and charges users for extra capacity as needed. No word so far on exactly HOW much money they spent or saved, but it is presumably significantly less than if they had planned ahead and bought or rented the capacity they needed. "Significant" in this context means "statistically observable," by the way. Webhosting isn't exactly a super-premium market at this point, so this stunt probably didn't ring up a staggering total.

We did try for comment; look for updates if the gdgt guys come through. Anyway, collective “good job!” for the penny-pinching whiz-bang, boys.

UPDATE: It was a few hundred bucks total, according to Rackspace spokespeople.

Yet despite this cheerful little story of a moveable feast of web delivery, Amazon's memento mori still sits at the head of the table, an uncomfortable reminder that real-world "cloud delivery" means lightning, damaged data centers, and unexpected, unpreventable downtime with no recompense.

