The Troposphere


May 4, 2009  8:58 PM

Cloud Security Alliance in favor of open standards; regulatory agencies to feel economic pressure to open up to the cloud

Carl Brooks

Very interesting conversation with Nils Puhlmann, a co-founder of the Cloud Security Alliance. It was originally about the DMTF initiative for open standards but wandered about a bit; here are three nuggets I was interested to hear:

Security: Puhlmann is of the opinion that a) as much transparency as possible would directly benefit cloud providers, since “If you do everything well, why would you not want to show your customer?”

He feels that customers would actually be more likely to buy into vendors that could demonstrate best-practice security under some kind of standards model, since it would free customers from glacially slow and costly audits and testing; enterprises could buy into a public cloud without a hitch under the right conditions (read: bomb-proof security standards). And b) “within 12 months we will see many things in cloud security that will have completely failed,” either through backing the wrong horse in terms of security model or through market forces that ebb away from a chosen track.

He thinks that consensus on cloud security will emerge in baby steps as the marketplace learns what works and what doesn’t and what pays off and what doesn’t.

Compliance: Puhlmann says, “What we see for some companies,” that have regulatory oversight, “are compliance rules that rely on the notion that you have complete control over your data,” which, if you are using public clouds, is patently untrue.

But enterprises want to use public clouds, and small and midsize companies that interact with regulated agencies will want to use them; Puhlmann points out that the $19B Electronic Health Records initiative, for instance, is “simply not going to happen without cloud” technologies.

FIM: Puhlmann raises hopes of a universal federated identity model, since as data gets more and more distributed, “a good federated identity standard would provide the means to track and control who has access to your data across private enterprise and the public cloud.” He believes it’s a problem that remains unsatisfactorily addressed, cloud or no cloud, but that this drive toward IaaS might provoke a more catholic solution.

And the long and short of addressing a lot of these concerns lies in the hands of the agencies that regulate so much of the data about us personally.

Puhlmann thinks that dollars and sense are going to come to a head much quicker than many anticipate, since the poor economic climate is driving an awful lot of fence-sitters off the palings and into the clouds, and then “economic pressure will become so immense that regulators will have a big lobby standing behind them to force them to act” to catch up to cloud technologies and enact regulations that allow controlled data to exist in public cloud infrastructures.

And who am I to doubt him? A reporter, so of course I doubt him, and this is well-considered analysis, not factual reporting, but from a common-sense perspective everything he says holds water, and it’s going to be terrifically interesting to see what happens when the cloud rubber meets that federally regulated road.

April 29, 2009  8:47 PM

EUCALYPTUS sprouts business shoots

Carl Brooks

In what is surely a contender for the most complicated backronym of the ages, the Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems, first developed at Middleware and Applications Yielding Heterogeneous Environments for Metacomputing at UC Santa Barbara (MAYHEM, in case you weren’t seeing the silly nomenclature trend), has jumped the public lagoon for the open waters of commerce:

Eucalyptus Systems, formed out of the team that developed EUCALYPTUS, has gone commercial to sell help with their open-source software to cloud-minded types.

EUCALYPTUS (fingers…getting…so… tired) is a set of open-source software tools that allow users to interact with and deploy AWS-type clouds. You can use the software to administer your EC2 images or create your own private data cloud along the same model. Pretty neat, and free, but what happens when AWS decides to re-tool their APIs? Questions soon to be answered.
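The interesting bit is that EUCALYPTUS speaks the same API as EC2, so existing EC2 tooling can be pointed at a private cloud instead of Amazon. A minimal sketch of that idea using the boto library; the cloud-controller host name is hypothetical, and the port and path are the usual Eucalyptus defaults rather than anything from this announcement:

```python
# Point a standard EC2 API client at a Eucalyptus front end instead of Amazon.
# Host, credentials, port, and path below are illustrative assumptions.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud-controller.example.com")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
    aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
    is_secure=False,               # many Eucalyptus front ends speak plain HTTP
    region=region,
    port=8773,                     # typical Eucalyptus API port
    path="/services/Eucalyptus",   # typical API path on the cloud controller
)

# The same calls you would make against Amazon EC2 now hit the private cloud.
for image in conn.get_all_images():
    print(image.id, image.location)
```

Which is exactly why the question about Amazon re-tooling its APIs matters: the value of the compatibility story depends on the target staying still.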

Read the press release here.

Watch this page for updates.


April 4, 2009  1:27 AM

Amazon’s New Elastic Map Reduce

John Willis

Amazon has introduced a new web service called Elastic Map Reduce that provides Hadoop as a service. Hadoop is a Java-based framework that implements Map Reduce. Map Reduce is a method of programming that gives a program the capability of breaking a job up into hundreds or even thousands of separate parallel processes. The idea is that you can take a simple process (like counting the words in a book) and break it up into multiple running parts (i.e., the Map), then collect them all back into summary counts (i.e., the Reduce). This allows a programmer to process extremely large data sets in a timely manner. Map Reduce was originally described by Google; Hadoop, its best-known open-source implementation, is used by companies like Yahoo, AOL, IBM, Facebook, and Last.fm, to name a few.
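To make that word-count idea concrete, here is a hedged sketch of what the mapper and reducer can look like when written for Hadoop Streaming, the interface that lets you write Map Reduce jobs in scripting languages such as Python. The file names are my own, not part of any Amazon sample:

```python
# mapper.py -- the "Map" step: emit (word, 1) for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word.lower(), 1))
```

```python
# reducer.py -- the "Reduce" step: sum the counts for each word
# (Hadoop Streaming hands the reducer the mapper's output sorted by key)
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))

# Sanity-check the pair locally before handing it to a cluster:
#   cat book.txt | python mapper.py | sort | python reducer.py
```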

The new Amazon Elastic Map Reduce service is another pay-as-you-go service that starts at $0.015 per hour and can go as high as $0.12 per hour. This is an additional charge on top of your standard EC2 and S3 services. For example, if you use Hadoop to read data from S3 and start up 4 EC2 instances, you will be charged the original costs for the EC2 and S3 usage plus an additional charge for the Elastic Map Reduce service. In effect, Amazon is charging you for the setup and administration of Hadoop as a service.

The Elastic Map Reduce service is extremely simple to set up and run. Basically, you upload your application and any data you wish to process to an S3 bucket. You then create a job that includes the S3 locations of your input and output data sets and your Map Reduce program. The current implementation supports writing Hadoop Map Reduce programs in Java, Ruby, Perl, Python, PHP, R, and C++. You also configure the number of EC2 instances you want to run for the Map Reduce job, and you can add advanced arguments and use more complex processing methods if you choose. The AWS Management Console has been updated with a new tab for the Elastic Map Reduce service.

This new service hides all of the system administration complexity of setting up a Hadoop environment, which can be considerable; a Hadoop setup across multiple systems is not a simple task. Hadoop runs as a cluster of machines with a specific file system called HDFS (the Hadoop Distributed File System), a number of worker servers called Datanodes (which store and process the data), and a master server called the Namenode. Here is a diagram of an HDFS environment:
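The “upload to S3, point the job at your input, output, mapper and reducer, pick an instance count” flow described above maps fairly directly onto code. A sketch using boto’s Elastic MapReduce module, which arrived after this post was written; the bucket and key names are hypothetical:

```python
# A sketch of the Elastic Map Reduce job-flow setup described above.
# Bucket names, key names, and credentials are illustrative assumptions.
from boto.emr.connection import EmrConnection
from boto.emr.step import StreamingStep

conn = EmrConnection("YOUR-AWS-ACCESS-KEY", "YOUR-AWS-SECRET-KEY")

# Input data, mapper, and reducer all live in S3, just as the console flow expects.
step = StreamingStep(
    name="Word count",
    mapper="s3n://my-emr-bucket/wordcount/mapper.py",
    reducer="s3n://my-emr-bucket/wordcount/reducer.py",
    input="s3n://my-emr-bucket/wordcount/input/",
    output="s3n://my-emr-bucket/wordcount/output/",
)

jobflow_id = conn.run_jobflow(
    name="Elastic Map Reduce demo",
    log_uri="s3n://my-emr-bucket/wordcount/logs/",
    steps=[step],
    num_instances=4,   # the EC2 instances you pay for on top of the EMR fee
)
print(jobflow_id)
```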

This morning I had an opportunity to test out the new Elastic Map Reduce service using the Amazon Management Console. I set up one of their sample jobs to do a word-count Map Reduce. There is an excellent video from Amazon on how to get started here. Here are some screenshots of my testing this morning:

I also had an opportunity to speak to Don Brown, a local Atlanta entrepreneur and founder of twitpay.me, about the significance of the new Amazon Elastic Map Reduce service. Don pointed out two significant aspects of the announcement. First, by creating this new service, Amazon has put a higher level of significance on the use of Map Reduce and Hadoop. Organizations looking to explore new techniques for large data set processing and/or new ways to do data warehousing might start hearing about Map Reduce and Hadoop, and with Amazon’s new service those organizations will start getting Google search results that show Amazon as a leading player with Hadoop and Map Reduce. This will add instant credibility to the technology. In Don’s words, “Amazon is sort of like the new IBM when it comes to cloud computing . . . you can’t go wrong with Amazon!” All of this should speed up the adoption of Map Reduce as sort of a new data warehouse technology, which Don and I both agree is a good thing. Second, Don suggests that given the administration it takes an organization to set up Hadoop, the service is a bargain at $0.015 to $0.12 per hour. He has done a number of Hadoop consulting engagements and says the cost of a consultant to do a Hadoop setup is not cheap.

We both agree, however, that despite all the hoopla regarding this new announcement, it is still hard to implement a Hadoop solution and there are not that many experts. Therefore, as exciting as the new Amazon announcement is, it still only gets you halfway there. Learning how to develop code using Map Reduce and Hadoop is a completely different way of thinking from traditional programming paradigms, and most traditional programming shops will have to re-tool to take advantage of this new paradigm. On the upside, freshman CS students at the University of California, Berkeley are now required to learn Hadoop. All in all, this new announcement, in my opinion, puts Amazon in a class of their own when it comes to “Cloud Computing.”


April 2, 2009  4:48 PM

In 140 characters tell me why map reduce is important?

John Willis

This morning Lance Weatherby, a local Atlanta venture catalyst, asked a simple question on Twitter: “In 140 characters tell me why map reduce is important please.” When I saw the question it made me think about the answer. All of us who cover the cloud-o-sphere take terms like map reduce for granted. However, unless you are working directly with cloud computing or have a BS from Stanford, you probably won’t have a clue why everyone is so excited about these terms. My initial one-word Twitter answer was parallelism. Then I added a second word: multi-core. However, I guess my answers were lazy ones compared to others. Here is a list of great answers posted this morning to Lance’s and my re-tweet question…


March 27, 2009  2:08 PM

Microsoft Was Out of Line

John Willis

Reuven Cohen, the founder of Enomaly, has been one of the brightest voices in the cloud computing revolution. Reuven, a self-proclaimed Instigator, has been working tirelessly to get the word out about cloud computing. I first met Reuven on a plane ride from Austin to Chicago last year, and during that flight he schooled me on everything “cloud”. I often refer to that trip as my cloud baptism. Since that trip I have stayed in touch and followed his activities. Reuven was responsible for the formation of the extremely popular Cloud Camp events. He also started the Cloud Computing Interoperability Forum (CCIF) to help pave the way for standards within cloud computing. Most recently he is the Instigator of something called the Cloud Computing Manifesto, which will be announced this coming Monday (3/30/09).

A few of us who cover the cloud-o-sphere have been under embargo about this document and its upcoming announcement. I actually had the first opportunity to read the document last week and had not planned on discussing it until Monday. Actually, I still do not plan on discussing the details of the document until Monday, as I promised; what I would like to talk about, however, is the irresponsibility of Microsoft’s actions yesterday regarding this manifesto. To be clear, Reuven Cohen is first and foremost an entrepreneur, and like all of us he is always looking for angles to create opportunities. Regardless, he has done some great work for the cloud computing community, from which I and software companies like Microsoft have benefited. In an effort to coordinate this “Cloud Computing Manifesto”, he needed to share it with hundreds of organizations and numerous individuals, and all he asked of the recipients was two things in return. One, “Are you on board with this?”, and two, “Please keep it under wraps until the announcement date.” In my opinion, Microsoft was totally out of line to pre-announce the manifesto. Countless analysts, developers, bloggers, and reporters honor Microsoft embargoes all of the time, and I feel it was completely hypocritical of Microsoft to jump the gun for selfish reasons and pre-announce this document yesterday. It is obvious they have issues with the document, and that is their right; however, they should have voiced their opinions about the document this coming Monday, as will I.


March 23, 2009  5:51 PM

The Tale of Three Cloud SLAs

John Willis

Wikipedia says…

The SLA records a common understanding about services, priorities, responsibilities, guarantees and warranties. Each area of service scope should have the ‘level of service’ defined. The SLA may specify the levels of availability, serviceability, performance, operation, or other attributes of the service such as billing. The ‘level of service’ can also be specified as ‘target’ and ‘minimum’, which allows customers to be informed what to expect (the minimum), whilst providing a measurable (average) target value that shows the level of organization performance. In some contracts penalties may be agreed in the case of non-compliance with the SLA (but see ‘internal’ customers below). It is important to note that the ‘agreement’ relates to the services the customer receives, and not how the service provider delivers that service.

There has been a lot of hype around the discussion of Service Level Agreements (SLAs) in the cloud as of late. SLA hype has been around as long as I can remember and definitely predates the cloud. In the enterprise you will typically hear numbers like 99.999 or terms like “Five Nines”. Five nines is sort of the gold standard in the enterprise, equating to about 5 minutes of outage per year. However, whenever I think of the five-nines metric it reminds me of what one of my old Six Sigma mentors used to say about five nines: “Five Nines equates to about one commercial plane crash per day for a year.”
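For a rough sense of what these percentages mean in practice, here is a quick back-of-the-envelope calculation of the downtime each common availability target allows per year; the 99.95 and 99.999 figures come up again in the SLAs below:

```python
# Allowed downtime per year for a few common availability targets.
HOURS_PER_YEAR = 365 * 24

for availability in (99.9, 99.95, 99.99, 99.999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100.0)
    print("%7.3f%% -> %6.2f hours (%6.1f minutes) of downtime per year"
          % (availability, downtime_hours, downtime_hours * 60))
```

Running it shows five nines working out to roughly five minutes a year, while 99.95% allows a little over four hours.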

Negotiating an SLA has always been a difficult task for any client. One of the main contention points is how outage credits are applied: does the customer get a reimbursement for the lost services, or is the SLA applied as a future credit? In the classic hosting example, a future credit might mean hours added after your service contract terminates, and credit for future services is always a suspect model for an SLA. With some of the newer pay-as-you-go models in the cloud it is easier to apply these types of credits (e.g., to next month’s bill), but in any case the test of a great SLA is whether it gives the customer a direct reimbursement for lost services. Another area of difficulty is defining what counts as an outage. A few times I have been on the provider side of defining an SLA, and it is always in the best interest of the provider to supply extremely clear SLA definitions. Detailed reports are a good practice for both the provider and the customer; without clear definitions and documented reports, in most cases, an SLA will be useless. Here are three main areas I typically focus on when discussing an SLA:

  1. Defining the outage.
  2. How does a customer prove an outage to get credit?
  3. How does the credit get applied?

I thought I might take a look at some of the top Cloud providers who provide server instances and see what their SLAs are in relation to the aforementioned three areas.

Amazon Web Services EC2

The Amazon Web Services EC2 SLA can be found at: http://aws.amazon.com/ec2-sla/. In it, Amazon claims a 99.95% SLA. Let’s break down their SLA based on the three areas described above.

Defining the outage

The AWS definition of an outage is confusing at best. It essentially means that you cannot launch a replacement instance within a five-minute period while at least two availability zones within the same region are down. I take this to mean that if two out of three data centers are available and you still can’t launch and/or run any application on your EC2 server, it will not be counted as an outage. To further complicate the matter, AWS calculates its 99.95% over the previous 365 days; if the customer doesn’t have 365 prior days of service with AWS, the prior days are counted as 100% available. This means that if you are a new customer (say, two months in) and a catastrophic event hits two of the three US-based data centers so that you can’t start an instance for three days, you would get a 10% credit for only one day’s prorated costs for EC2 services, because the first two days would not push the trailing 12-month availability below 99.95%. Also complicating the AWS EC2 SLA, the up-front fees for the new reserved instances are not eligible for outage credits. Whoops, they also have an exclusion covering the scenario described above: outages “caused by factors outside of our reasonable control, including any force majeure event.”

How does a customer prove an outage to get credit?

In order to receive a credit for a defined AWS EC2 outage, a customer has to capture, document, and send a request to Amazon to be processed. In other words, the onus is on the customer to prove the outage. AWS does not provide any interface or report documentation to help the customer define their outages. Furthermore, Amazon requires the customer to document the region and all instance IDs and to provide service logs. The customer is also required to cleanse confidential information from the logs, and all of this must be done within 30 days of the outage.

How does the credit get applied?

First off, the AWS credit is applied against a future bill and is not a reimbursement of lost services. As previously stated, it is the customer’s responsibility to provide all of the proof, and to do so within a 30-day period. If the customer supplies all of the documentation and Amazon agrees the outage qualifies as falling below 99.95%, they will then apply a 10 percent discount to the next month’s bill.

SLA Grade “C”

AWS puts a heavy burden on the client to prove the outage, and the terms of the SLA are difficult to understand at best.

Rackspace/Mosso Cloud Sites

Cloud Sites was formerly called Mosso. Cloud Sites is a service that provides a platform-based cloud where users share scalable back-end load balancing, web services, and database clusters. The Cloud Sites SLA can be found here: http://www.mosso.com/sla.jsp. Let’s break down their SLA based on the three areas described above.

Defining the outage

Based on the agreement, the definition of an outage from Cloud Sites is extremely simple and is described in fewer than 150 words (the AWS EC2 SLA is over 1,000 words). Simply put, if you open a support incident with Cloud Sites, they will credit you with one prorated day of service for every hour of downtime. Supposedly, all you have to do is tell the rep that you have an outage; they should then open the incident and calculate the outage.

How does a customer prove an outage to get credit?

Here is the rub: they don’t tell you about recording the outage when you call. You have to tell them to record the incident as an outage, and then you have to continuously monitor the situation and call support back to confirm the ending time. Cloud Sites does not do this automatically for you. The reason I know this is that one of my blog sites is hosted on Mosso/Cloud Sites. I have never been given an automatic credit even though I have had at least five or six outages over the last year. I have also called in at least five or six times and an incident report was never discussed. In fact, on most calls they say that a specific cluster is down and that it should be up soon, with no mention of a start or stop time for recording the outage; you have to point this out to them. One of the things that has always annoyed me about Mosso/Cloud Sites is that they never notify you when the outage is fixed, even if you call in and ask about it. You have to find that out for yourself, which makes it extremely difficult to document an outage. Another problem with Cloud Sites is that you don’t have access to the servers the way you do with AWS EC2, so it is difficult to gather the appropriate logs to document the outage.

How does the credit get applied?

Credits for Cloud Sites outages are applied against future bills and are not a reimbursement of lost services. The Mosso Cloud Sites SLA is a great example of where less is actually less. The brevity of their SLA seems attractive at first; however, they do not have a defined process for requesting a credit. At least AWS has a documented email address where you can send your detailed information. There is nothing in Cloud Sites’ short 58-word SLA that tells you how to go about getting a refund. The client has to assume they can call Cloud Sites support and request a refund, assuming they actually documented the start and end times of the outage. All in all, it seems like a very confusing process for something that is supposed to be, as described, very simple. If you can get through all of the above, then Cloud Sites will credit you one prorated day of service for every 60 minutes of documented downtime.

SLA Grade “B-”

Cloud Sites offers a simple plan; however, they need to be clearer in their SLA. In the SLA, they state that if you open an incident report they will start the clock, yet they don’t even have a ticketing system for customers to input incidents; you have to call or start a live chat. Also, they do not notify you when an outage is cleared, which makes it difficult for a customer to keep track of their outages.

3Tera

3Tera recently announced a new 99.999% SLA for their Virtual Private Datacenter (VPDC) customers. The SLA announcement can be found at the following location: http://www.3tera.com/News/Press-Releases/Recent/3Tera-Introduces-the-First-Five-Nines-Cloud-Computing.php. Let’s break down their SLA based on the three areas described above.

Defining the outage

According to the 3Tera announcement, the customer does not have to define the outage; 3Tera automatically detects and calculates outages. The AppLogic cloud computing platform constantly monitors and reports the availability of the system and instantly alerts 3Tera’s operations team of critical issues. Some might think the five-nines number is the significant part of 3Tera’s SLA compared with AWS’s 99.95%; however, it’s the automatic recording of outages that is the unprecedented feature. While other cloud vendors require the customer to prove the outage times, 3Tera automates this process.

How does a customer prove an outage to get credit?

Short and sweet: automatically, with no human intervention.

How does the credit get applied?

3Tera’s credit gets applied to the current month’s bill. VPDC customers automatically receive SLA service credits for any calendar month where availability falls below the targeted 99.999 percent. If availability is anywhere between 99.999 percent and 99.9 percent, a 10 percent credit applies to the whole VPDC service for the entire month. If availability is lower than 99.9 percent, a 25 percent credit applies.
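The credit schedule is simple enough to sketch directly from the announcement; the function name below is mine, not 3Tera’s:

```python
def vpdc_sla_credit(monthly_availability_pct):
    """Return the 3Tera VPDC credit (as a fraction of the month's bill)
    for a given monthly availability, per the announced SLA tiers."""
    if monthly_availability_pct >= 99.999:
        return 0.0    # target met, no credit
    elif monthly_availability_pct >= 99.9:
        return 0.10   # 10% credit on the whole VPDC service for the month
    else:
        return 0.25   # 25% credit

# Example: a month at 99.95% availability earns a 10% credit.
print(vpdc_sla_credit(99.95))
```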

SLA Grade “A-”

I would otherwise have given 3Tera a solid “A”; however, the service has just been announced and is not available yet. When they actually post the SLA page and the customer contract, I will adjust this rating accordingly.


March 18, 2009  5:58 PM

Privacy org bashes Google cloud

Jo Maitland

EPIC, the Electronic Privacy Information Center, has formally asked the Federal Trade Commission to open an investigation into Google’s cloud computing services, including Gmail, Google Docs, and Picasa, to determine “the adequacy of the privacy and security safeguards.” The petition follows the recent report of a breach of Google Docs.

Here’s a link to the EPIC petition against Google.


March 16, 2009  5:17 PM

Windows Azure outage holds up developers

Hannah Drake

Developers looking to use Windows Azure to build apps this weekend faced the blue screen of death as a 22-hour outage locked them out of the service.

The outage is a reminder to users tapping into cloud services that the availability of the system is out of their control. Microsoft has so far declined to comment on what happened.


March 16, 2009  3:21 PM

The Rackspace/Mosso PCI Debate

John Willis

A few weeks ago Rackspace made an announcement about hosting the first PCI compliant cloud solution. PCI here refers to the Payment Card Industry Data Security Standard, a worldwide security standard for merchants who store, process or transmit credit card holder data. Rackspace’s Cloud Sites (formerly called Mosso) was used to enable an online merchant, The Spreadsheet Store, to move to the cloud without having to compromise the security of their online transactions (i.e., PCI compliance). What should have been a great success story for the Rackspace/Mosso team turned into a bit of a PR debacle.

Some of the cloud security experts and thought leaders took exception to the Rackspace/Mosso announcement titled “Cloud Hosting is Secure for Take-off: Mosso Enables The Spreadsheet Store, an Online Merchant, to become PCI Compliant”, and they called out Rackspace/Mosso on their bold claim of being the first cloud provider to offer PCI compliance. Craig Balding, an IT security practitioner and cloud expert, was the first blogger to point this out, in his blog article “What Does PCI Compliance in the Cloud Really Mean?”:

Mosso/Rackspace recently announced they have “PCI enabled” a Cloud Sites customer that needed to accept online credit card payments in return for goods (i.e. a merchant).

However, the website hosted on Mosso’s Cloud doesn’t actually receive, store, process, or transmit any data that falls under the requirements of PCI.

Or to put it another way, it’s ‘compliance’ through not actually needing to be…

Craig goes on to say that Rackspace’s “PCI How To” document is just an “implementation of an age-old Internet architecture that involves redirecting customers wishing to pay for the contents of their online basket to an approved and compliant online payment gateway.”
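For readers who haven’t run an online store, the “age-old architecture” Craig is describing looks roughly like this: the cloud-hosted shop never handles a card number, it simply hands the shopper off to an external, PCI-compliant payment gateway. A minimal, hypothetical sketch; the gateway URL and parameters are illustrative, not Authorize.Net’s actual interface:

```python
# Hypothetical sketch of the redirect pattern: the cloud-hosted shop never
# sees a card number, it only sends an order reference and total to an
# external, PCI-compliant payment gateway.
from flask import Flask, redirect
from urllib.parse import urlencode

app = Flask(__name__)
GATEWAY_CHECKOUT_URL = "https://payments.example-gateway.com/checkout"  # illustrative

@app.route("/checkout/<order_id>")
def checkout(order_id):
    # In a real shop the amount would be looked up from the order; hard-coded here.
    order = {"order_id": order_id, "amount": "49.95", "currency": "USD"}
    # No card data here: the shopper enters it on the gateway's own pages,
    # so the cardholder-data environment stays with the gateway, not the cloud host.
    return redirect(GATEWAY_CHECKOUT_URL + "?" + urlencode(order))
```

Everything that actually falls under PCI (card entry, storage, processing) happens on the gateway’s side of that redirect, which is exactly why the critics argue the cloud host was never in scope to begin with.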

Christopher Hoff, another cloud and security expert, also raises an objection to the aforementioned Rackspace/Mosso PCI hype, stating the following in his blog post “How To Be PCI Compliant in the Cloud…”:

So after all of those lofty words relating to “…preparing the Cloud for…online transactions,” what you can decipher is that Mosso doesn’t seem to provide services to The Spreadsheet Store which are actually in scope for PCI in the first place!*

The Spreadsheet store redirects that functionality to a third party card processor!

So what this really means is if you utilize a Cloud based offering and don’t traffic in data that is within PCI scope and instead re-direct/use someone else’s service to process and store credit card data, then it’s much easier to become PCI compliant. Um, duh.

Ben Cherian, writing on his own blog, also refers to the Rackspace/Mosso antics as a trick, stating the following:

When I saw this, I wondered how it was possible, but as I read closer it became clear that it was just a trick! It seems that their “PCI-compliant” solution requires Mosso not to store any information that requires PCI compliance. Instead they offload the burden of compliance to a third-party payment gateway (Authorize.Net).

However, keeping it real, Greg Hrncir, the Director of Operations at Mosso, shot back with the following comment on Craig’s blog:

The truth is that we are the first Cloud, that we know of, that enabled its Cloud customers to gain PCI compliance using multiple technologies. The future of Cloud technologies is full of these types of hybrid solutions that combine the best of both worlds. The goal for a customer and online merchant, is to get PCI compliance, not be purist in terms of technology. On line merchants want to leverage the Cloud for scaling, and this is a good way to do it by combining both worlds.

In summary, I think they were all right. Craig, Chris, and Ben were perfectly within bounds to call out the hype in the Rackspace/Mosso title, and in doing so they all did a brilliant job of educating us on what PCI really means in or outside of a cloud. However, Greg Hrncir also points out that what Mosso did was a genuine first, and as a hybrid model it lays the building blocks for otherwise roadblocked initiatives. In my opinion, what Rackspace has done is significant from a “cloud” industry standpoint; however, being “cloud” leaders, they should have used a little more discretion in their announcement. With all the hype already associated with cloud computing, it is important for the leaders in this space to keep the discussion grounded. This reminds me of an old friend of mine who, every time he would get into a fight, would stick his chin out and say “hit me”. In the Mosso/PCI debate, it looks like Mosso got hit.


March 6, 2009  12:54 PM

Cloud computing and teenage sex…

Jo Maitland

What do cloud computing and teenage sex have in common?

Everyone talks about it, few actually do it, and even fewer get it right, according to the following story.

Check it out:

http://web2.sys-con.com/node/862933

