It’s a good question. For those who hate the cloud and all it stands for, these outages provide fodder for their anti-cloud rants and fuel their fear. The trouble is that so many of us have become dependent on cloud services that we can’t really afford to walk away at this point, and we probably shouldn’t let a few bad weeks steer us either way.
I see it a lot like baseball. In a 162-game season, you can’t judge a team by 10-game stretches. You have to look at the overall picture of the team — not when it’s going great, and not when it’s going horribly. By the same token, we can’t judge cloud computing based on a few well-publicized outages.
In some sense, this is all about growing pains. As cloud computing grows more popular, outages such as these from popular services are going to have a bigger impact when they do occur and generate more publicity as a result, especially when the companies involved are the likes of Amazon, Google and Microsoft.
But in each of these cases, as bad as they were, and as frustrating as it was for every user involved, the services came back and for the most part — with the exception of Amazon — no data was lost.
The good news is that all three companies get that they have to be up front about these outages and explain clearly what happened. And in all three cases, the big companies posted sincere apologies with clear explanations about what went wrong.
But is sincerity enough for a busy IT pro just trying to keep his users up and running? Probably not, but it’s nothing that each IT pro hasn’t run into in his own data center from time to time. As healthcare CIO John Halamka pointed out in a blog post called “Should We Abandon the Cloud,” his organization runs a “highly redundant, geographically dispersed Domain Name System (DNS) architecture. In theory it should not be able to fail. In practice it did.”
And Halamka writes that it’s foolish to think it can’t happen to anyone, even to these three large companies whose business is the cloud. “Believing that Microsoft, Google, Amazon or anyone else can engineer perfection at low cost is fantasy,” Halamka wrote.
I’m sure it was no fun for the users who logged onto their Gmail accounts last week and found everything gone. I know I would have panicked if it were me, so I’m not minimizing it by any means. But Google did what it needed to do. It found the nature of the problem, it went to its backups and its backups of backups and it recovered the data. As Seth Weintraub wrote on Fortune, tape might be somewhat archaic, but it proved the extent to which Google has gone to protect its data.
The outage appears to have been caused by a bug. As Ben Treynor of Google pointed out, they use tape precisely because it is immune to software bugs. As the old commercial used to say, “The garlic worked.”
But not everyone was convinced that this worked out well in the end. Michael Hickins wrote in a Wall Street Journal article that this was a black eye for cloud computing in general.
“This is a black eye for companies like Google, which is actively trying to convince businesses and governments to switch their on-premise email systems to online services, which it promotes as less expensive and more reliable.”
I agree to the extent that it plays into the hands of the naysayers and the anti-cloud crowd, but as I’ve asked here before, how many times has your Exchange server gone down in your company? Just because you have an email server behind the firewall doesn’t mean you are immune to problems like the one Google experienced last week during an upgrade.
It was by no means Google’s finest hour, but neither was it an unmitigated disaster because they did what they had to do. In the end, Google recovered the data, and that’s the lesson people should be taking from this incident.
It’s not that Google lost data for a short period of time; it’s that software glitches happen to everyone, even the mighty Google. We shouldn’t judge them by the fact that they had a problem (no technology, no matter where it lives, is infallible), but by how well they dealt with that problem.
Looking at it from that perspective, we learned that cloud computing works as it should in a crisis situation and Google actually proved the power of the notion.
Another recent report by Forrester Research found that while enterprise IT spending was up for the most part, job numbers were down. That means that companies were apparently investing in IT infrastructure, but not in people to maintain that infrastructure — a situation that could be sustainable short-term through a down cycle like the one we’ve been in, but not necessarily over the long term.
The fact that IT jobs are reportedly lagging can’t make IT pros who question Cloud strategies very happy. It could be an underlying reason why so many don’t trust the move to the cloud — because they see it as an attack on what they do for the Enterprise. But it doesn’t have to be that way for several reasons.
First of all, look no further than the federal government. As I wrote last week, US CIO Vivek Kundra wants to cut inefficiencies, close more than 800 data centers and shift $20 billion of the $80 billion federal IT budget to cloud services. That means that short term, some federal sector IT jobs will disappear with this shift and the data center closings.
But let’s look at the big picture. Even as the private sector and the federal government shift some resources to the cloud, at least some of those jobs should move to the cloud companies, which logically will need more personnel to deal with the increasing volume of business. What’s more, the shift requires IT to monitor and deal with different types of problems — like negotiating with the cloud companies and holding them to their service level agreements.
It’s also likely that we will see a corresponding internal enterprise shift to private cloud services, which means more efficient use of the hardware and software licenses, but which still requires humans to build, monitor and maintain. In addition, it requires people to deal with requests outside of the standard private cloud service offerings.
Even if you figure that most users will get by with standard private cloud service offerings, there will always be a percentage who need customized services to meet the unique demands that some projects have. That means staffs of programmers, IT pros and consultants will still have plenty of work.
What’s more, enterprise class systems still require a great deal of leg work to select, test, set up and maintain, and there will always be a need for IT pros to fill that role.
While it’s easy to look at the short-term IT jobs picture and get discouraged, there seems to be a longer lag between recovery and jobs growth in this economic cycle. But as the economy gets stronger, even with shifting IT budgets, the jobs should come back too — even if they end up being with the service providers instead of the customers.
Photo by The Planet on Flickr. Used under Creative Commons License.
For every customer using a service like Mozy, you need hard drive space and redundancy and all of that costs a certain amount per user. At some point, the numbers just don’t make sense any more. I guess that EMC has reached the tipping point.
In the future, the service is going to get a lot more costly. According to a CNET article, it’s going from a flat rate of $82 for two years of unlimited service to $5.99/month for up to 50GB of data and $2.00 for each additional 20GB block after that. It’s still a very reasonable deal by most standards, but quite a bit more than the original one.
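To put the change in perspective, here’s a rough sketch of what the new pricing works out to over the same two-year window (the figures come from the CNET report; the assumption that partial 20GB blocks are rounded up is mine):

```python
import math

def mozy_monthly_cost(gb: float) -> float:
    """Estimated monthly cost under the reported new Mozy pricing:
    $5.99 for up to 50 GB, plus $2.00 per additional 20 GB block.
    Rounding partial blocks up is an assumption, not confirmed pricing."""
    base = 5.99
    if gb <= 50:
        return base
    extra_blocks = math.ceil((gb - 50) / 20)
    return base + 2.00 * extra_blocks

# Two years at 50 GB under the new plan, vs. the old $82 flat rate
new_two_year = mozy_monthly_cost(50) * 24   # roughly $143.76
old_two_year = 82.00
```

Even at the 50GB tier, the two-year cost is not far from double the old flat rate, and heavier users pay considerably more.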
Meanwhile, Jason Perlow writes on ZDNet that Flickr, the free photo service, is too big to fail — or at least he hopes so, given that he has thousands of photos there. He even tells the sad tale of a user who lost 4,000 photos when Flickr inadvertently deleted his account. Thankfully, Flickr eventually found them; they had to be on a backup somewhere, right?
Why should you care, I hear IT pros scoff. You’re certainly not using any two-bit consumer service for your company data, right? Well, you should care because these two tales are like the proverbial canary in a coal mine. If it can happen to consumers, dear readers, it can happen to you too.
As I reported in a recent blog post, “92 percent of the companies in [a] Management Insight Technologies survey used at least one cloud service and 53 percent had 6 or more services.” That means you aren’t quite as secure as you might believe. If your company is using public cloud services — and if this survey is any indication, chances are that you are — these two stories should be a wake-up call.
What it means, as I’ve written before (but it doesn’t hurt to repeat), is that you need to understand your Terms of Service, and you also need to understand what happens if those terms change. If the price of the service gets uncomfortably high, how easily can you get your data off one service and onto another, or back into the cozy confines of your own in-house servers?
Further, make sure you understand the service’s backup, redundancy and disaster recovery plans. Even reputable companies fail. Who can forget the disaster Microsoft faced back in October 2009, when it hosed all of the Sidekick data it maintained? Microsoft eventually recovered the data, but it was an ugly reminder of what can happen to cloud-based data.
Don’t get me wrong, I’m not trying to scare you away from cloud services. They can be very useful indeed for many functions, and there are lots of advantages to going to the cloud, but just understand what you’re getting into, as you would with any service, before you sign on the dotted line.