The Troposphere


May 19, 2009  5:13 PM

Day 2 dawns at Enterprise Cloud Summit

Carl Brooks

We’re exploring new frontiers at the Enterprise Cloud Summit, as cloud deckhands try to haul up the sails and ship owners look for value. Overall the mood is good; people are receptive, especially since the order of the day for cloud evangelists is “saving money” and they have examples to prove it.

Probably the biggest news of the day was from Amazon, which announced a new set of AWS services that address performance and monitoring. It’s worth noting that many cloud management services (e.g., RightScale) already offer something similar.

Amazon CTO Werner Vogels also point-blank ducked a question about how Amazon decides which new AWS products to bring to market. That is obviously of interest to the cottage industries springing up around AWS.

Other highlights have included a snappy “Dark Side” talk by Forrester Analyst James Staten, providing welcome relief after other sugar-coated morning sessions.

Then, a panel featuring an honest-to-goodness risk analyst and an insurance wonk outlined the meaning of their pet neologism, “intellectual malpractice,” that is, losing/misusing the citizenry’s personal identity materials. Their opinion is that adding cloud providers into the insurance mix may add hitherto unknown risks, since victims may be able to sue anybody involved in the chain.

Technology has outpaced the law, and it’s a breakout world for litigators, argued Bob Parisi, from the risk advisory firm The Marsh Group. That may prove to be a roadblock for enterprise adopters, he said. Drew Bartkiewicz from The Hartford was unequivocal, however. “Cloud insurance exists,” he said. It’s just going to cost you an unspecified amount of money.

May 14, 2009  8:46 PM

SearchCloudComputing.com is live!

Carl Brooks

Exciting news: the site I’ve been brought on to write and report for launched today! There’s plenty of content there already, including straight news, advice, and expert commentary. Many of the articles already there are backfill (old news) but definitely worth a look, since the site comes close to gathering all of TechTarget’s cloud stories in one place. That’s not an inconsequential resource, and I am fully pimping it out to anyone who reads this blog.

Please do be aware that it is a soft launch; we know there are dead links and some funky items. It’s a work in progress, and we are not making a big noise about it. Comments are welcome on the general look and feel.

As a bonus, I think it looks nice. It’s clean and easy to get around. It’s going to be a blast. A cloudy, cloudy blast.

So that’s nice.

In other news, I finished the Eucalyptus Systems piece. I’ll link to it when it’s officially out in the world.

Another piece, about the so-called “Legal Cloud,” is up. I think it’s newsworthy because it’s a very clear-cut example of what I’m going to call the “Cloud VAR” (you could even say “Cee-VAR.” That’s a cool-sounding acronym, right?). What they’ve done is basically fence off their own little patch of Rackspace (I don’t know to what level, or many of the technical details) as a virtual private cloud, and they are selling cloud resources exclusively to law firms. These guys have industry expertise: CEO Mark Hadfield hails from Workshare Deltaview, which makes software for handling legal documents, and the CTO comes from Joyent. At this point, they are long on concept and short on track record (one customer told me he thought the Legal Cloud was basically non-existent and in early testing), but these guys say they want to be out of beta and open for business in three months (!).

Of course, they can do this because, wait for it, they are using the cloud. If the model works, they just ramp up their Rackspace headroom; if it doesn’t, they simply turn off the tap. No waiting, no planning, nothing but cash over the barrel.

So that’s a heady recipe when venture capital seems mighty eager for cloud companies: take one cloud/IT manager type, one industry wonk and one credit card and Presto! Cloud VAR open for business. “Nimble” seems to be taking on a whole new meaning out here in cloud-land.


May 14, 2009  3:13 PM

Feds get specific on cloud interoperability

Jo Maitland

Anyone hoping to sell infrastructure as a service to the federal government will need to explain how they meet the following criteria for interoperability in a cloud setting:

5.1 Describe your recommendations regarding “cloud-to-cloud” communication and ensuring interoperability of cloud solutions.
5.2 Describe your experience in weaving together multiple different cloud computing services offered by you, if any, or by other vendors.
5.3 As part of your service offering, describe the tools you support for integrating with other vendors in terms of monitoring and managing multiple cloud computing services.
5.4 Please explain application portability; i.e. exit strategy for applications running in your cloud, should it be necessary to vacate.
5.5 Describe how you prevent vendor lock in.

It’s a helpful list for anyone looking to tap into these services.

Here’s the complete RFI


May 12, 2009  8:40 PM

Cloud management startups vie for VC dollars

Carl Brooks

Cloud computing is the fresh word on everyone’s lips; the push started in earnest a year or more ago, when Amazon’s Web Services portal came into its own and the reality of instant-on, pay-as-you-go infinitely scalable servers sank in. Now companies that cover the middle ground between your minute-by-minute rental server and the software that actually makes you money are hot topics.

Needless to say, venture capitalists are hungry for cloud companies. Diego Marino of Spanish startup Abiquo says that VC firms have been beating the door down for attention since Under the Radar 2009, April’s startup showboat conference in Mountain View, CA. “They are knocking,” he said.

The slump in the US economy has probably done more to stimulate cloud interest than any amount of hype: companies large and small that were mulling the choice between buying iron and renting cloud space have had to face up to severe bottom-line pressures. The practically zero time and investment costs of using the cloud, along with growing awareness of its reliability, have tipped the balance, and big enterprise is slowly but surely moving toward using cloud technologies in its own data centers to increase utilization and flexibility.

And thus companies who can help manage, deploy and use all these new technologies are all the rage. Here are five shiny new shops to know about, in alphabetical order:

Abiquo:
Open source cloud development and management software: Abiquo released AbiCloud in April and has $400K in funding, with prospects for more very soon. The company expects to make money by giving away the software to developers and charging for support and development. AbiCloud currently supports VirtualBox, VMware ESX & ESXi, Xen and KVM.

Cloudkick:
Stripped-down, intuitive management tools for EC2 instances in a browser: Cloudkick is angling for admins who want a clever way to keep on top of their cloud. Co-founder Alex Polvi says they started with $20K in seed money from Y Combinator and coded the project in Python in five months. He says the user base is growing fast and that “it’s a logical progression” for users to want to buy premium services once they are hooked on the free ones. Those will be sold just like server instances: by the minute.

enStratus:
Indicative of the current heady state of the marketplace, enStratus started out as product development and new features for Valtira, a marketing platform. Valtira honchos David Bagley and George Reese spun enStratus off when it became clear their tools could serve distributed systems in general; the new company is aimed squarely at companies moving into the cloud. They, too, are optimistic about funding.

Eucalyptus Systems:
The bunch who developed EUCALYPTUS, an open source cloud platform out of UC Santa Barbara, are cashing in by offering to help IT shops start building their own data clouds, fully compatible with EC2 and Amazon’s APIs. They have $5.5 million from Benchmark Capital and the support of well-established players in cloud, like RightScale. They are head-to-head with Abiquo, but the world is big enough for two free cloud-building software projects, right?

Tap In Systems:
Run from, sold from and used from inside Amazon’s cloud, Tap In is an early example of what will eventually be a crowded ecosystem of businesses inside EC2. Built around open source Nagios, Tap In promises support for a growing number of vendor APIs in the cloud and recently announced partnerships with public cloud providers GoGrid and 3Tera. CEO Peter Loh says they have angel money right now and are targeting enterprise users.


May 7, 2009  9:04 PM

Rich Wolski on the difference between data centers and clouds

Carl Brooks

I’m working up an article about EUCALYPTUS and Eucalyptus Systems, cf. my earlier post on’t.

Leaving aside the giggles over nomenclature, I had quite a nice talk with Dr. Rich Wolski, the lead scientist on the original open source project (also with CEO Woody Rollins and the VP of marketing).

Anyway, Wolski had an interesting and quite succinct definition of the differences between a data center employing virtualization in its currently accepted form and a cloud infrastructure, because, on paper, the two share enough common elements that lots of people (and marketers) are happy to fuzz the two together.

I didn’t think that was quite right, and neither does the Good Doctor (ha! like I’m the expert over here). Otherwise, why get excited over cloud? If that were true, it would just be re-packaged old news, and nobody would need to do anything but change the badges and maybe dice up the trim package, if I may borrow from the big book of automobile industry metaphors. But cloud is fundamentally something different, and new, and it’s worth knowing why.

He says it’s down to access and the control structure. The major difference between a data center and a cloud is access: a cloud is set up so anyone can drop a penny in the slot and start up a server or six; in a data center, you ask, someone does it for you, and then hands over the steering wheel.

He said something like, pardon the paraphrasing, “in a data center, virtualization is the grease that lubricates resource management” for the admins; it allows the guy in charge to move his resources around. “It’s a reconfiguration mechanism,” but “in a cloud, [virtualization] is a fence.” It separates and protects resources and lets everyone have their own private playground without knocking over the other kids’ toys.

A subtle difference? Wolski says it’s down to a bedrock set of premises and assumptions that drove the development of the cloud model.

“We tried to look at the cloud paradigm from an analytical perspective,” he explains, and “cloud is an ecommerce model; it’s a transactional model [in a] distributed system.”

Did that sink in? At its most basic, cloud is not about computers. It’s about sales. Start with the premise that you have a product (server instances/CPU time/bit buckets), that you want to sell it to any and all comers over the internet (ecommerce), and that you want to do a lot of it. What you get is “cloud computing,” and logically, it’s no surprise that Amazon pioneered it commercially. They didn’t assume they needed a resource farm and a way to sell access to it; they assumed they had servers and needed to sell those instead.

So that’s interesting to me. Cloud is not a utopian access opium dream; it’s a logical outgrowth of commodity commerce.

UPDATE: Story’s done; look for it soon. One more little nugget from Wolski on defining cloud computing: when he and his crew started thinking about the project in 2007, they found it was “utterly impossible to get a consensus” on what cloud was, so they “decided to sidestep the debate by picking the thing that demonstrably was a cloud — the one thing no one could say was NOT a cloud.” Their answer? Amazon Web Services.

So there you have it – not sure what an elephant is? Look around; you can’t miss one if it’s in the room with you.


May 6, 2009  1:40 AM

Citrix and Amazon hop into mindbending infinite-loop bed together

Carl Brooks

We’re sensing someone had an office pool going on “alliteration they could get away with” for this slug line: “Citrix Announces Citrix C3 Lab Built on Amazon Web Services to Connect Cloud Computing to the Corporate Datacenter”.

Anyways, what the mess of esses and clatter of consonants means is that Citrix, as part of its big show this week, is announcing that its XenApp software is now available for rent from AWS, to run Citrix servers on Amazon’s cloud.

C3 stands for Citrix Cloud Center, their management suite for IaaS providers.

That’s the Citrix Citrix Cloud Center Lab, for those who collect poorly thought out product names.

The brain-pain part comes from the fact that Amazon itself runs its cloud on Xen, so a customer of Citrix Citrix would, technically, be using XenApp to use Xen virtual machines to manage and deploy public cloud resources to their customers as a customer using public cloud resources running on Xen virtual machines.

Hey, that may be great and just what C3 users want and it may work just fine, but it’s a hell of a rabbit hole to send your data down, ontologically speaking. We’re not prejudiced, we’re just trying to keep up.

Also, according to the article published today by SearchEnterpriseDesktop.com News Director Alex Barrett, you’ll be able to do it all from your iPhone. I think they should call the app “DRINK ME,” but no one reads anymore; they’ll never get it.

This is doubtless targeted at developers who want to play with Citrix distributed computing offerings, and the odd IT shop that wants to offer Citrix AND call itself “cloud” and can’t afford its own data center.

Really, though, it’s just another way of proving that Amazon’s cloud model is still top of the heap. Oracle, IBM and now Citrix, among others, are essentially opening Amazon storefronts, only it’s buzzword-friendly grid computing instead of baby clothes and lawn sprinklers. Amazon doesn’t care: they have a two-and-a-half-year head start on everyone else wanting to offer public cloud instances, the best distribution channel extant and plenty of headroom. All they have to do is make space, keep the lights on, let their little playfriends from the middleware/manageware classroom play in their pool, and rake in the dough.

I’ll scrape together some time soon and poke around to see how good a job Citrix has done making this work, and post some screenshots and an update in a day or so. Just a heads-up, ladies and gentlemen: I may never be seen again.


May 4, 2009  8:58 PM

Cloud Security Alliance in favor of open standards

Carl Brooks

Very interesting conversation with Nils Puhlmann, a co-founder of the Cloud Security Alliance; it was originally about the DMTF initiative for open standards but went out and about a bit. Here are three nuggets I was interested to hear about:

Security: Puhlmann is of the opinion that a) as much transparency as possible would have direct benefits for cloud providers, since “If you do everything well, why would you not want to show your customer?”

He feels that customers would actually be more likely to buy into vendors that could show best-practice security under some kind of standards model, since it would free customers from glacially slow and costly audits and testing; an enterprise could buy into a public cloud without a hitch under the right conditions (read: bomb-proof security standards). And b) “within 12 months we will see many things in cloud security that will have completely failed,” either through backing the wrong horse in terms of security model or through market forces that ebb away from a chosen track.

He thinks that consensus on cloud security will emerge in baby steps as the marketplace learns what works and what doesn’t, and what pays off and what doesn’t.

Compliance: Puhlmann says, “What we see for some companies,” that have regulatory oversight, “are compliance rules that rely on the notion that you have complete control over your data,” which, if you are using public clouds, is patently untrue.

But enterprises want to use public clouds, and small and midsize companies that interact with regulated agencies will want to use them; Puhlmann points out that the $19B Electronic Health Records initiative, for instance, is “simply not going to happen without cloud” technologies.

FIM: Puhlmann raises hopes of a universal federated identity model, since, as data gets more and more distributed, “a good federated identity standard would provide the means to track and control who has access to your data across private enterprise and the public cloud.” He believes it’s a problem that remains unsatisfactorily addressed, cloud or no cloud, but that this drive toward IaaS might provoke a more catholic solution.

And the long and short of addressing a lot of these concerns lies in the hands of the agencies that regulate so much of the data about us personally.

Puhlmann thinks that dollars and sense are going to come to a head much quicker than many anticipate, since the poor economic climate is driving an awful lot of fence-sitters off the palings and into the clouds. And then, “economic pressure will become so immense that regulators will have a big lobby standing behind them to force them to act” to catch up to cloud technologies and enact regulations that allow controlled data to exist in public cloud infrastructures.

And who am I to doubt him? A reporter, so of course I doubt him, and this is well-considered analysis, not factual reporting. But from a common-sense perspective, everything he says holds water, and it’s going to be terrifically interesting to see what happens when the cloud rubber meets that federally regulated road.


April 29, 2009  8:47 PM

EUCALYPTUS sprouts business shoots

Carl Brooks

In what is surely a contender for the most complicated backronym of the ages, the Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems, first developed at the Middleware and Applications Yielding Heterogeneous Environments for Metacomputing lab at UC Santa Barbara (MAYHEM, in case you weren’t seeing the silly nomenclature trend), has jumped the public lagoon for the open waters of commerce:

Eucalyptus Systems, formed out of the team that developed EUCALYPTUS, has gone commercial to sell help with their open-source software to cloud-minded types.

EUCALYPTUS (fingers…getting…so… tired) is a set of open-source software tools that allow users to interact with and deploy AWS-type clouds. You can use the software to administer your EC2 images or create your own private data cloud along the same model. Pretty neat, and free, but what happens when AWS decides to re-tool their APIs? Questions soon to be answered.
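To illustrate what “fully compatible with EC2 and Amazon’s APIs” means in practice, here is a minimal sketch using the boto library pointed at a private Eucalyptus front end instead of Amazon. The hostname, port 8773, the “/services/Eucalyptus” path and the credentials are all placeholder assumptions about a default install, not details from the press release.

    # Sketch: talking to a Eucalyptus cloud with the same EC2-style client you
    # would point at AWS. Endpoint, port, path and keys below are assumptions
    # about a default Eucalyptus install, not details from the press release.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="cloud.example.internal")
    conn = boto.connect_ec2(
        aws_access_key_id="YOUR-EUCA-ACCESS-KEY",       # placeholder credentials
        aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
        is_secure=False,
        region=region,
        port=8773,                                      # assumed default front-end port
        path="/services/Eucalyptus",
    )

    # The same calls an EC2 script would make against Amazon:
    for image in conn.get_all_images():
        print("%s  %s" % (image.id, image.location))

Swap the endpoint back to Amazon and the rest of the script is unchanged, which is exactly the compatibility pitch; the open question about AWS re-tooling its APIs is what this whole approach bets on.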

Read the press release here.

Watch this page for updates.


April 4, 2009  1:27 AM

Amazon’s New Elastic Map Reduce

John Willis

Amazon has introduced a new web service, called Elastic Map Reduce, that provides Hadoop as a service. Hadoop is a Java-based framework that implements Map Reduce, a method of programming that gives a program the capability of breaking a job up into hundreds or even thousands of separate parallel processes. The idea is that you can take a simple process (like counting the words in a book) and break it up into multiple running parts (i.e., the Map), then collect them all back into summary counts (i.e., the Reduce). This allows a programmer to process extremely large data sets in a timely manner. Map Reduce was popularized by Google, and Hadoop is used by companies like Yahoo, AOL, IBM, Facebook and Last.fm, to name a few.
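To make the map/reduce idea concrete, here is a minimal, purely illustrative sketch of that word-count example in Python, run in a single process rather than on a Hadoop cluster; the function names are my own, not anything from the Amazon service.

    from collections import defaultdict

    def map_phase(chunk):
        # The Map: turn one chunk of text into (word, 1) pairs.
        return [(word.lower(), 1) for word in chunk.split()]

    def reduce_phase(pairs):
        # The Reduce: collect the pairs back into summary counts.
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    # Pretend each string is a chunk of the book handled by a separate worker.
    chunks = ["the quick brown fox", "the lazy dog", "the fox again"]

    mapped = []
    for chunk in chunks:          # on Hadoop, these map tasks run in parallel
        mapped.extend(map_phase(chunk))

    print(reduce_phase(mapped))   # {'the': 3, 'fox': 2, 'quick': 1, ...}

The point of Hadoop (and of Elastic Map Reduce) is that the loop over chunks is replaced by hundreds or thousands of machines doing the map work at once.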

The new Amazon Elastic Map Reduce service is another pay-as-you-go offering; it starts at $0.015 per hour and can go as high as $0.12 per hour, and it is an additional charge on top of your standard EC2 and S3 usage. For example, if you use Hadoop to read data from S3 and start up 4 EC2 instances, you will be charged the usual EC2 and S3 rates plus an additional charge for the Elastic Map Reduce service. In effect, Amazon is charging you for the setup and administration of Hadoop, as a service.

The Elastic Map Reduce service is extremely simple to set up and run. Basically, you upload your application and any data you wish to process to an S3 bucket. You then create a job that specifies the S3 locations of your input and output data sets and your map reduce program. The current implementation supports writing Hadoop Map Reduce programs in Java, Ruby, Perl, Python, PHP, R, and C++. You also configure the number of EC2 instances you want to run for the Map Reduce job, and you can add advanced arguments and use more complex processing methods if you choose. The AWS Management Console has been updated with a new tab for the Elastic Map Reduce service.

This new service hides all of the system administration complexity of setting up a Hadoop environment, which is not a simple task. Hadoop runs as a cluster of machines with its own file system, HDFS (the Hadoop Distributed File System), a set of worker servers called Datanodes (which store the data blocks and typically also run the processing tasks) and a master server called the Namenode. Here is a diagram of an HDFS environment:

This morning I had an opportunity to test out the new Elastic Map Reduce service using the AWS Management Console. I set up one of their sample jobs to do a word-count Map Reduce. There is an excellent video from Amazon on how to get started here. Here are some screen shots of my testing this morning:

I also had an opportunity to speak to Don Brown, a local Atlanta entrepreneur and founder of twitpay.me, about the significance of the new Amazon Elastic Map Reduce service. Don pointed out two significant aspects of the announcement. First, by creating this new service, Amazon has put a higher level of significance on the use of Map Reduce and Hadoop. Organizations looking to explore new techniques for large data set processing and/or new ways to do data warehousing might start hearing about Map Reduce and Hadoop, and with Amazon’s new service those organizations will start seeing Google search results that show Amazon as a leading player with Hadoop and Map Reduce. That will add instant credibility to the technology. In Don’s words, “Amazon is sort of like the new IBM when it comes to cloud computing . . . you can’t go wrong with Amazon!” All of this should speed up the adoption of Map Reduce as a sort of new data warehouse technology, which Don and I both agree is a good thing. Second, Don suggests that offloading the administration it takes to set up Hadoop is a bargain at $0.015 to $0.12 per hour. He has done a number of Hadoop consulting engagements and says the cost of a consultant to do a Hadoop setup is not cheap.
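As a rough sanity check on that “bargain” point, here is a back-of-the-envelope cost sketch. The $0.10-per-hour small-instance EC2 rate is my own assumption for illustration; only the $0.015 Elastic Map Reduce surcharge comes from the announcement.

    # Hypothetical job: 4 small EC2 instances running for 3 hours.
    EC2_SMALL_PER_HOUR = 0.10    # assumed base EC2 rate, for illustration only
    EMR_SMALL_PER_HOUR = 0.015   # Elastic Map Reduce surcharge quoted above
    instances, hours = 4, 3

    ec2_cost = instances * hours * EC2_SMALL_PER_HOUR
    emr_cost = instances * hours * EMR_SMALL_PER_HOUR
    print("EC2: $%.2f  EMR surcharge: $%.2f  total: $%.2f"
          % (ec2_cost, emr_cost, ec2_cost + emr_cost))
    # EC2: $1.20  EMR surcharge: $0.18  total: $1.38 (plus S3 storage and transfer)

Even if the base rate is off, the shape of the math is the point: the Hadoop-administration surcharge is a small fraction of what the compute itself costs, let alone a consultant.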

We both agree, however, that despite all the hoopla around this new announcement, it is still hard to implement a Hadoop solution and there are not that many experts. Therefore, as exciting as the new Amazon announcement is, it still only gets you halfway there. Learning how to develop code using Map Reduce and Hadoop is a completely different way of thinking than traditional programming paradigms, and most traditional programming shops will have to re-tool to take advantage of the new one. On the upside, all freshman CS students at the University of California, Berkeley are required to learn Hadoop in their first year. All in all, this new announcement, in my opinion, puts Amazon in a class of its own when it comes to “cloud computing.”


April 2, 2009  4:48 PM

In 140 characters tell me why map reduce is important?

John Willis

This morning Lance Weatherby, a local Atlanta venture catalyst, asked a simple question on Twitter: “In 140 characters tell me why map reduce is important please.” When I saw the question, it made me think about the answer. All of us who cover the cloud-o-sphere take terms like map reduce for granted; however, unless you are working directly with cloud computing or you have a BS from Stanford, you probably won’t have a clue why everyone is so excited about them. My initial one-word Twitter answer was parallelism. Then I added a second word: multi-core. I guess my answers were lazy ones compared to others. Here is a list of great answers posted this morning to Lance’s question and my re-tweet…
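For what my two lazy words were driving at, here is a tiny sketch of my own (not one of the tweeted answers): the word count from the Elastic Map Reduce post, with the map step spread across local cores using Python’s multiprocessing module. Purely illustrative, nothing Hadoop-specific.

    from collections import Counter
    from multiprocessing import Pool

    def map_chunk(chunk):
        # Map step: count the words in one chunk, independently of the rest.
        return Counter(chunk.split())

    if __name__ == "__main__":
        # Pretend each string is a slice of a much larger data set.
        chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
        with Pool() as pool:                    # one worker process per core
            partials = pool.map(map_chunk, chunks)
        totals = sum(partials, Counter())       # reduce step: merge the partial counts
        print(totals.most_common(3))            # [('the', 3), ('fox', 2), ...]

Same answer as the serial version; the difference is that the map calls no longer wait on each other, which is why parallelism and multi-core were my knee-jerk answers.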

