The Troposphere


May 18, 2011  3:23 PM

Cloud outage roundup!

Carl Brooks

It’s been a rough patch for cloud computing in the “perceptions of reliability” department. Gremlins working overtime caused EBS to fail at Amazon, taking down a bunch of social media sites, among others. Naturally, that got a lot of attention, much as throwing an alarm clock down a wind tunnel will make a disproportionate amount of noise.

As the dust was settling and the IT media echo chamber was polishing off the federally mandated outrage/contrarian outrage quota for all kerfuffles involving Anything 2.0, more outages struck, including a Blogger outage that no one in IT really cared about, although this reporter was outraged that it temporarily spiked a favorite blog.

While nobody was caring about Blogger, Microsoft’s hosted (cloud) Exchange and collaboration platform, Business Productivity Online Services (BPOS, now a part of Office 365), went down, which people in IT most assuredly did care about. Especially if, as many of the forum posters said, they had recently been sold (or had sold their organization) on “Microsoft cloud” as a preferable option to in-house Exchange.

“I’ve been with Microsoft online for two weeks now, two outages in that time and the boss looks at me like I’m a dolt. I was THIS close to signing with Intermedia,” said one poster. That’s the money quote for me; Intermedia is a very large hosted Exchange provider, and this poster (probably a guy) was torn between hosted Exchange and BPOS. Now he feels like he might have picked wrong: notice he didn’t discuss the possibility of installing on-prem Exchange, just the two service options.

Microsoft posted a fairly good postmortem on the outage in record time, apparently taking heed of the vicious pillorying AWS got for its lack of communication (AWS’ postmortem was also very good, just many days after the fact):

“Exchange service experienced an issue with one of the hub components due to malformed email traffic on the service. Exchange has the built-in capability to handle such traffic, but encountered an obscure case where that capability did not work correctly.”

Anyone who’s had to administer Exchange feels that pain, let me tell you. It also tells us BPOS-S is using Exchange 2000 (That is a JOKE, people).

What ties all these outages together is not their dire effect on the victims. That’s inconsequential in the long term, and won’t stop people from getting into cloud services (there are good reasons to call BPOS cloud instead of hosted application services but that’s another blog entirely). It’s not the revelation that even experts make mistakes in their own domain, or that Amazon and Microsoft and Google are largely still feeling their way around on exactly what running a cloud means.

It’s the communication. If anything could more clearly delineate “cloud service” from “hosted service,” it’s the lack of transparency, the lack of customer touch, and the unshakeable perception among users across the board that, when outages occur, they are on their own.

Ever been in a subway car when the power dies? I grew up in Boston, so that must have happened to me hundreds of times. People’s fear and unease grow in direct proportion to the time it takes the conductor to yell out something to show they’ve got the situation in hand. Everything is always fine, the outage is temporary, no real harm done, but people only start to freak out when they get no assurance from the operator.

Working in IT and having a service provider fall over is the same thing, only you’re going to get fired, not just have a loud sweaty person flop all over you in the dark (OK, that may happen in a lot of IT shops). Your boss doesn’t care that you aren’t running Microsoft’s data center; you’re still responsible. Hosters have learned from long experience that they need to be engaged, or at least appear engaged, when things go wrong, so their users have something to tell their bosses. I used to call up vendors just to be able to tell my boss I’d been able to yell at “Justin our engineer” or “Amber in support” and relay the message.

Cloud hasn’t figured out how to address that yet; either we’re all going to get used to faceless, nerve-wracking outages, or providers are going to need to find a way to bridge the gap between easy, anonymous and economical on one side and enterprise-ready on the other.

May 6, 2011  7:42 PM

Did cloud computing help catch Osama bin Laden?

Carl Brooks

Now that we’ve gotten the link bait headline out of the way, let me say first that cloud computing is in no way to be considered anywhere near as important to the death of Osama bin Laden as the actual people with guns and helicopters. Credit where credit is due.

However, foundational shifts in technology come across all fronts, and not every story is about business success or advances in personal convenience; many of them are far more consequential (and sometimes gruesome) than we normally consider. Now, how can one make the case that cloud computing (in all its manifold “as a Service” glories) was instrumental in the final push to find and put an end to America’s most visible modern enemy?

First, let’s be charitable and assume we were actually looking for him for the last ten years as opposed to the last two, and that the search wasn’t impossibly tangled up in international politics. Now, let’s assume he was, in fact, well hidden, “off the grid” informationally speaking, and surrounded by trusted confidants, and that we only had scraps of information and analysis to go on.

Of course, we always had a rough idea where he was: Afghan intelligence knew he was near Islamabad in 2007, Christiane Amanpour said sources put him in a “comfortable villa” in 2008, and it was only logical that he’d be located in a place like Abbottabad. Rich old men who have done terrible things do not live in caves or with sheepherders in the boonies; they live comfortably near metropolitan areas, like Donald Trump does.

But all that aside, tying together the intelligence and the operations could have come from new ways that the Armed Forces are learning to use technology, including cloud computing. The AP wrote about a brand-new, high-tech “military targeting centre” that the Joint Special Operations Command (JSOC) had opened in Virginia, specifically to assist in this kind of spook operation.

“The centre is similar to several other so-called military intelligence ‘fusion’ centres already operating in Iraq and Afghanistan. Those installations were designed to put special operations officials in the same room with intelligence professionals and analysts, allowing U.S. forces to shave the time between finding and tracking a target, and deciding how to respond.

At the heart of the new centre’s analysis is a cloud computing network tied into all elements of U.S. national security, from the eavesdropping capabilities of the National Security Agency to Homeland Security’s border-monitoring databases. The computer is designed to sift through masses of information to track militant suspects across the globe, said two U.S. officials familiar with the system.”

Well, there you have it. A “cloud computing network” took down the original Big Bad. Wrap up the season and let’s get on to a new story arc. But wait, you cry. WTH is a “cloud computing network”? That sounds like bad marketing-speak; it’s meaningless babble. Do we know anything more about what exactly was “cloud” about this new intelligence-sifting and operational assistance center?

A spokesman for the United States Special Operations Command (USSOCOM), which is where JSOC gets its authority and marching orders, said there was nothing they could release at this time about the technology being used here.

However, a few months ago, I had a fascinating interview with Johan Goossens, director of IT for NATO Allied Command Transformation (ACT), headquartered in VA (probably not too far from JSOC’s high-tech spook base), about how NATO, driven in large part by the U.S. military, was putting into play the lessons of cloud computing. He said, among other things, that the new efforts he was leading were two-fold: a new way of looking at infrastructure as a fluid, highly standardized and interoperable resource, built out in modular form and automated to run virtual machines and application stacks on command (cloud computing, in a word), and ways to marry vast networks of information and assets (human and otherwise) into a cohesive, useful structure.

Goossens’ project involved consolidating existing NATO data centers into three facilities; each one is federated using IBM technology and services. He started with software development as the obvious test case and said the new infrastructure will be operational sometime this year, which is “light speed by NATO standards.”

Some of this is simple stuff, like making it possible for, oh, say, the CIA to transfer a file to an Army intelligence officer without three weeks of paperwork and saluting everyone in sight (that is not an exaggeration of how government IT functions, and in spades for the military), or having a directory of appropriate contacts and command structure to look at, as opposed to having to do original research to find out who someone’s commanding officer is. Some of it is doubtless more complex, like analyzing masses of data and delivering meaningful results.

What evidence is there that the U.S. military was already down this road? Well, Lady Gaga fan PFC Bradley Manning was able to sit at a desk in Iraq and copy out files from halfway around the world and any number of sources, so we know the communication was there. We know the U.S. deploys militarized container data centers that run virtualization and sync up with remote infrastructure via satellite. We know this new “targeting centre” in Virginia was up and running well before they let a reporter in on it, and it, almost by definition, had to involve the same technology that Goossens is involved in. There are only so many vendors capable of selling this kind of IT to the military. IBM is at the top of that list.

The Navy SEALs that carried out the raid were staged from one of these modular, high-tech remote bases; the raid itself was reportedly streamed in audio, and partly in video, in real time. Photos and information also went from Abbottabad to Washington in real time. That data didn’t bunny hop over the Amazon CloudFront CDN to get there, but the principle is the same.

So it’s possible to pin part of the killing of Osama bin Laden on the strength of new ways the world is using technology, including cloud. I sincerely doubt Navy SEALs were firing up Salesforce.com to check their bin Laden leads or using EC2 to crunch a simulation, but I’d bet my back teeth (dear CIA, please do not actually remove my back teeth) that they were doing things in a way that would make perfect sense to anyone familiar with cloud and modern IT operations.

We’ll probably never know exact details about the infrastructure that runs the JSOC spook show, since they don’t have anything to say on the subject and I’m not about to go looking on my own (wouldn’t turn down a tour, though). But it’s a sobering reminder that technology advances across the board, not just in the mild and sunny climes of science and business, but also in the dead of night, under fire, on gunships you can’t hear, and the result is death.

“Think not that I am come to send peace on earth: I came not to send peace, but a sword.” Mat. 10:34


May 4, 2011  8:26 PM

Why HP wants the whole cloud stack

Jo Maitland

HP’s clumsy cloud leak this week sheds a little bit more light on the printer giant’s cloud computing plans, but the details signal a much bigger trend. The major IT players feel they must own the whole cloud stack. Why?

According to The Reg story, HP’s Scott McClellan posted the following information on his public LinkedIn profile about HP’s planned offerings:

– HP “object storage” service: built from scratch, a distributed system, designed to solve for cost, scale, and reliability without compromise.
– HP “compute”, “networking”, and “block storage”: an innovative and highly differentiated approach to “cloud computing” – a declarative/model-based approach where users provide a specification and the system automates deployment and management.
– Common/shared services: user management, key management, identity management & federation, authentication (inclu. multi-factor), authorization, and auditing (AAA), billing/metering, alerting/logging, analysis.
– Website and User/Developer Experience. Future HP “cloud” website including the public content and authenticated user content. APIs and language bindings for Java, Ruby, and other open source languages. Fully functional GUI and CLI (both Linux/Unix and Windows).
– Quality assurance, code/design inspection processes, security and penetration testing.

The “object storage” service would be akin to Amazon S3, and the “block storage” service smells like Amazon’s EBS; the automatic deployment piece sounds like Amazon CloudFormation, which provides templates of AWS resources to make it easier to deploy an application. And metering, billing, alerting, authorization etc. are all part of a standard cloud compute service. How you make a commodity service “highly differentiated” is a mystery to me, and if you do, who’s going to want that? But that’s another story.
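For context, here’s roughly what that template-driven, declarative model looks like in practice. This is a minimal sketch against AWS itself using the boto Python library; the template contents, AMI ID and stack name are made up for illustration, and none of it reflects anything HP has actually announced:

    # A sketch of template-driven deployment a la CloudFormation, using
    # the boto library (credentials come from the environment). The
    # template, AMI ID and stack name are illustrative placeholders.
    import boto

    TEMPLATE = """{
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "WebServer": {
          "Type": "AWS::EC2::Instance",
          "Properties": {"InstanceType": "m1.small", "ImageId": "ami-12345678"}
        }
      }
    }"""

    # You declare *what* you want; the service figures out how to
    # provision and wire it up, which is presumably what McClellan's
    # "declarative/model-based approach" is getting at.
    cfn = boto.connect_cloudformation()
    cfn.create_stack("demo-stack", template_body=TEMPLATE)

The point is that the user hands over a specification, not a runbook; if HP’s “declarative/model-based approach” means anything, it means something along these lines.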

The Platform as a Service part for developers is interesting, although not a surprise since HP already said it would support a variety of languages including open source ones. And the security elements tick the box for enterprise IT customers rightly worried about the whole concept of sharing resources.

These details are enough to confirm that HP is genuinely building out an Amazon Web Services-like cloud for the enterprise. So why does it need to own every part of the stack? HP has traditionally been the “arms dealer” to everyone, selling the software, the hardware and the integration services to pull it all together, so why not do the same with cloud? Sell the technology to anyone and everyone that wants to build a cloud? There would be no conflict of interest with service providers to whom it is also selling gear, and no commodity price wars for Infrastructure as a Service. (Believe me, they are coming!)

Apparently HP believes it has no choice, and other IT vendors seem to believe the same thing. The reason is integration. Cloud services, thanks to AWS’s example, are easy to consume because all the parts are so tightly integrated. But to offer that, the provider has to have control of the whole stack — the hardware, the networking and the full software stack — to ensure a smooth experience for the user.

If you don’t, as others have proven, your cloud might never materialize. VMware began its cloud strategy by partnering, first with Salesforce.com to create VMforce, then with Google, creating Google App Engine (GAE) for Business, which runs VMware’s SpringSource apps. Then Salesforce.com acquired Heroku and started doing its own thing, no doubt leaving VMware with a deep sense of lost control. Both the arrangement with Salesforce and GAE for Business have gone nowhere in over a year, and VMware has since launched its own PaaS called Cloud Foundry.

Similarly, IBM built its own test and dev service, and now a public cloud compute service, from scratch. It’s also working on a PaaS offering, although there’s still no word on that. Microsoft sells Azure as a service and is also supposedly packaging it up to resell through partners (Dell, HP, and Fujitsu) for companies that want to build private clouds. The latter is over a year late, while Azure has been up and running for a year.

In other words, whenever these companies bring a partner into the mix and try to jointly sell cloud, it goes nowhere fast and they revert to doing it on their own.

The benefit of being late to the game, as HP certainly is, is that it gets to learn from everyone else’s mistakes. Cloud-based block storage needs better redundancy, for example! Or: don’t waste your time partnering with Google or Salesforce.

There’s also a theory that to be able to sell a cloud offering, you have to have run a cloud yourself, which makes some sense. So if the past is any help, and there isn’t much of it in cloud yet, HP is on the right path in building the whole cloud stack.


May 3, 2011  11:30 PM

HP exec leaks cloud plans; VMware in, Microsoft out?

Jo Maitland

HP will most likely base its cloud Platform as a Service offering on VMware’s Cloud Foundry, according to plans leaked by an HP exec. Surprisingly, there was no mention of support for Microsoft Azure, possibly because Microsoft is still working out the spec for reselling that software.

The Register reported that Scott McClellan, the chief technologist and interim vice president of engineering for HP’s new cloud services business, spilled the plans on his public LinkedIn profile. [Doh!]

According to The Reg story, HP will reveal the details of its cloud strategy at VMware’s conference, VMworld, in August.


April 22, 2011  7:00 PM

Using the IBM Smart Business Cloud — Enterprise

Carl Brooks

So I was very excited when IBM officially launched its general purpose public cloud service. It was a validation of the cloud model for the enterprise; it was a chance to see what one of the premier technology vendors on the planet would deliver when it put shoulder to wheel on this exciting new way to deliver IT.

It’s got a growing user base, too: check out the “Profile” screenshot at the bottom; not only do you get to see your IBM Cloud account, you get to see all your IBM friends, too. As of this writing, IBM’s cloud has 729 users running 2436 instances, 578 block stores and 666 images.

Turns out it’s pretty much feeling its way along, just as Amazon Web Services (AWS) was 3-4 years ago. It’s…um…not polished, but it works. It’s a true public cloud experience, even if the pricing is logarithmic in scale rather than incremental (goes from “quite reasonable” to “Oh My God” fairly quickly). You click and provision storage, instances, and so on. But it feels a little raw if you’re used to RightScale, the AWS Management Console and so on. It’s very bare bones at the moment.

It’s also abundantly clear that the IBM Smart Business Cloud — Enterprise (SBC-Enterprise) is exactly the same as the IBM Smart Business Development and Test Cloud. The transition to “enterprise class public cloud” is simply hanging a new shingle on the door. See the screenshots below: they haven’t really finished transitioning the brand on the portal pages, and it’s all over the documentation too. The test and dev cloud and the SBC-Enterprise cloud are one and the same.

But that’s fine by me; if IBM wants to take its dev cloud infrastructure and call it Enterprise, it can do that. I’m not aware of any conceptual reasons for not doing production in a cloud right next to where you do test and dev, besides the expectations for uptime and support.

What’s a little crazy is how much uber-complicated stuff is baked right in, like Rational Asset Manager and IDAM, even though actual use is a little creaky. This highlights the main difference between the approach IBM is taking and the approach AWS took. The very first thing you do on creating your IBM profile and logging in at http://ibm.com/cloud/enterprise is manage users. The last thing you do (or can do; I couldn’t even figure out where to do it) is check your bill and see how much you’ve consumed and how much you’ve paid. That’s almost the reverse of the AWS experience. Again, nothing really wrong with that.

That’ll make the lone wolf supernerd webdork garage geniuses a little discomfited, but it’ll make the project managers very happy. Which is probably the point; IBM definitely isn’t courting startups.

Other notable positives: status updates and notifications about the service are baked into the dashboard and appear at the top of the screen. When I was using it, there was a helpful suggestion to pick the Boulder, CO data center because of heavy use at IBM’s flagship cloud DC in Raleigh. The provisioner even automagically put Boulder at the top of the list for me. Does anyone else remember when AWS didn’t even have a status dashboard? I do.

The web portal is reliable, fast and built on solid, modern open standards; no “Choose your browser with care, young padawan” here. Security is superficially reliable: you are forced to create a 2048-bit RSA keypair for your first Linux instance and strongly encouraged to do so for each additional one; the Windows instances enforce complex passwords and you don’t get to be Administrator at provisioning. Only ports 22 and 3389 are open, respectively. After you log in and start monkeying around, of course, you are on your own.

For the brief handful of hours I used it, the connection and response time were rock solid and quite acceptable. Disk I/O was fine, both on the instances and to the attached block stores. Only a little weirdness when patching or installing software. Licensing is all brushed under the rug, as it should be. Getting a test website up and running was trivial. Instances start with 60GB local storage and block stores start at 250GB. The REST APIs? Probably OK. I, uh, sort of ignored the API part of it. But they’re there.

However, it’s easy to see this is early days for an end-user service. I got the always-helpful “This is an error message: the message is that there is an error” at random. Provisioning a stock SUSE Enterprise instance reliably took 6-7 minutes each time. The Windows instance took north of 25 minutes every time (on my first try I used the delta to fumble around with SUSE and vi and get VNC going, because I have apparently devolved into a helpless monkey without a GUI).

SUSE was no-password SSH, username “idcuser” (short for “IBM Dev Cloud User,” I’m guessing). When I created a privileged user and went through VNC, I couldn’t sudo until I went back to SSH and set a password. idcuser can sudo in SSH without a password but not over VNC, apparently. Which is fine; I’m glad it doesn’t have a default root password. That’s a Good Thing (TM), but I had to figure it out. AWS just gives you a stock SSH session and everyone knows to change the root password first. IBM’s instructions exclude mention of the root password (“sudo /bin/bash/” in SSH instead).
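If you’d rather script that first-login dance than puzzle it out by hand, here’s a minimal sketch using Python’s paramiko SSH library. The hostname, key path and password are placeholders, and the sequence is just my reconstruction of what worked for me, not anything from IBM’s documentation:

    # Sketch of the idcuser first-login routine, using the paramiko SSH
    # library. Hostname, key path and password are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # IBM provisions the Linux instances with key-only SSH for "idcuser".
    client.connect("instance.example.ibm.com", username="idcuser",
                   key_filename="/path/to/ibm_cloud_rsa")

    # idcuser can sudo without a password over SSH, so use that session
    # to give the account a password; without one, sudo over VNC fails.
    stdin, stdout, stderr = client.exec_command("sudo passwd idcuser",
                                                get_pty=True)
    stdin.write("NewS3cretPass!\nNewS3cretPass!\n")
    stdin.flush()
    print(stdout.read().decode())
    client.close()

After that, sudo worked from the VNC session too, at least in my experience.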

I couldn’t save my modified instances as customized images to boot later. The SUSE image showed up in my repository after several tries but wouldn’t relaunch; I eventually managed to save a working image after a few days. The Windows image just refused to save; I gave up after a half dozen tries. There’s no way to upload my own images, so I’m stuck with a collection of RHEL, SUSE and a lonely Windows Server 2008 for a portfolio; I’d have to waste a lot of time to replicate my own VMs from that, and if I wanted to run any other OS, forget it.

Signing up was pleasantly easy, but I didn’t get access until a (very helpful) IBM rep contacted me and sent me a welcome kit via email. I was also mistakenly signed up for Premium Support Services without asking to be, which is one of those automatic eye-watering cost multipliers I mentioned before. Fortunately, IBM has been quite responsive and I’m only getting charged for what I use. Anyone remember when AWS didn’t even have a support response option? I do. Again, IBM is working backward (in this case, kind of a good thing).

All of this, most especially the lack of freedom in managing a VM library (couldn’t save, couldn’t upload) and the distinctly unfinished branding effort going on, puts IBM’s cloud in the “nice idea so far” category. It’s going to take some time and some effort for this to be anywhere close to the capabilities or integrated services AWS has. Only a true enthusiast would want to run anything in production on this. It’s still definitively not an enterprise-class cloud environment for running workloads.

It probably will be in time; unlike AWS, IBM doesn’t have to invent anything. I suspect the real issues at IBM around this are in tying in the appropriate moving parts and adding features with an eye to the end-user experience. IBM is historically not good at that, which is probably why AWS got there first. Amazon is good at that part. Scale matters too: IBM is one of the world’s largest data center operators in its own right; Amazon rents space in other people’s. If IBM ever runs into growing pains, as AWS has, it is going to be on a completely different plane. I’d expect them to screw up the billing before they screwed up operations, for example.

Anyway, check out the screenshots. When I get my invoice at the end of the month, I’ll tell you all how much this cost me. If it was over $10 I’ll be shocked.

User management page

Storage page

My Profile! and 729 other IBM friends!

Helpful error messages

The IBM Cloud Control Panel

UPDATE: Due to a promotion campaign that I was apparently grandfathered into by a few days or a week, and which ends June 1, my test-drive of the IBM Cloud was free. A quick login and peek around shows the current number of users at 864, more than 100 up from a month ago, and 2962 running instances, demonstrating an uptick in active customers but also the fungibility of demand.

They’ve updated the design to scrub away more of the IBM Smart Business Development and Test Cloud branding and I was finally able to launch my saved SUSE image. Progress!


March 22, 2011  6:00 AM

rPath X6 relieves software config management headaches

Jo Maitland

Software configuration management is one of those topics that gives me an instant ice cream headache, and I’m not even doing it!

Pity the poor admin whose job it is to keep track of all the versions of operating systems, applications and firmware, and the interdependencies between them all, so as not to hit a conflict when installing or uninstalling something that brings everything grinding to a halt. In a cloud environment, where everything is virtualized, it gets even harder to manage, as resources are constantly being moved around.
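To make the pain concrete, here’s a toy sketch in Python of the kind of check that admin is effectively doing by hand; it has nothing to do with rPath’s actual engine, and the package names and versions are invented:

    # Toy illustration of the interdependency problem: packages pin
    # versions of other packages, and one mismatch stalls the rollout.
    # Names and versions are invented for the example.
    installed = {"openssl": "0.9.8", "apache": "2.2", "billing-app": "1.0"}

    requires = {
        "billing-app": {"apache": "2.2", "openssl": "1.0"},  # wants newer openssl
    }

    for pkg, deps in requires.items():
        for dep, wanted in deps.items():
            have = installed.get(dep)
            if have != wanted:
                print(f"{pkg} needs {dep} {wanted}, found {have}")
    # -> billing-app needs openssl 1.0, found 0.9.8

Multiply that by every OS image, every firmware revision and every VM that migrated overnight, and the ice cream headache follows.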

With the latest version of rPath’s software, dubbed X6, development, QA and release engineers can construct, deploy, configure and repair software stacks via an easy-to-use visual UI, helping to keep control of software configuration.

rPath X6 allows users to visually create a version-controlled blueprint for generating system images, managing changes up and down the stack. And it works across physical, virtual and cloud-based resources.

Here are some shots of the UI. Click on them to get a clearer view.


March 21, 2011  9:25 PM

Microsoft points to Target for private cloud but gets it embarrassingly wrong

Carl Brooks

Microsoft Management Summit 2011 has a theme this year. Naturally, it is cloud computing. The premise is that advanced systems management tools for Hyper-V and Windows Server will lead us all down the primrose path into private clouds, and your data centers, once sprinkled with enough powdered unicorn horn, will turn into Amazon Web Services on steroids or something.

Brad Anderson, VP of management and security at Microsoft, will have a lovely song and dance routine at the keynote around this. It’ll be a work of art, but the message train went straight off the rails when they brought up big-box department store Target (that’s Tar-JHAY to you) as the big win for System Center, which is Microsoft’s answer to VMware vCloud Director and every other virtualization/automation/cloud platform out there. But they couldn’t have picked a worse example.

Target has implemented System Center and Hyper-V in its retail stores and collapsed eight real servers down to two per location, a whopping 75% reduction in footprint. That’s the story. Bravo. Saves on power, maintenance, refresh cycle, blah blah. But this is not a cloud story, not even a little bit. This is the anti-cloud story. Microsoft couldn’t have picked a better example of the actual difference between leveraging virtualization and doing cloud computing. All Target did was do everything it was already doing, but better. They didn’t revamp their infrastructure, they just optimized what they already had.

What is cloud about that? I’m a skeptic, I’ll bash anyone on either side of the aisle in the private cloud/public cloud debates, but this is egregiously misleading.

Here is how the Target story would have been remotely close to cloud computing, in descending order of relevance:

1) Done away with all the on-prem servers in favor of a WAN optimizer or three and served up infrastructure and/or services remotely to every store from a central Target-run data center.

2) Found a way to utilize all that spare capacity (did they really just chuck three out of every four servers around the nation? Mental note: check Target dumpsters for salvage) to serve back office and online services needs.

3) Kept an application front end server at each location that served and transmitted application data to and from cloud services (Microsoft Azure? Hello?)

4) Bought an Oracle Exalogic for each of Target’s 1,755 locations to make sure they were getting “enterprise grade cloud” from Uncle Larry. OK, that one was a joke.

But that would be HARD, you see. Any of those options, which would qualify as legit exercises in cloud computing, would require a hell of a lot more work than simply trimming infrastructure. They redecorate the bathrooms every once in a while too, and they aren’t touting “next-gen on-demand potty usability”, are they?

Really revamping how Target does IT operations to actually gain some benefit from the promise of cloud computing would require years of planning and re-architecting, and would turn the entire IT organization on its head.

What this use case demonstrates isn’t even a step on the path to cloud computing. It’s a step sideways into the bushes or something. This is what you do WHILE you try and come up with a plan to utilize cloud techniques and services. When and if Target moves to take steps to actually do cloud computing at its stores, everything Microsoft touted today is not even going to be irrelevant; it’s going to be invisible. Systems management won’t even be part of the conversation. Cloud is about services, not systems.

To top it off, the grand irony in this is that Target was a big early customer win for AWS too; it famously defended its retail site from getting overrun on “Cyber Monday” and proved out the concept of elastic scale very nicely. There’s a clear juxtaposition here: public cloud is far enough along that Target uses it for actually doing business. Microsoft’s version of private cloud…isn’t.

UPDATE: This blog has been edited to reflect that Anderson’s keynote had not happened as of writing. When it does go off tomorrow, we will be very pleased to accept criticism of our out-in-front opinion on it.

UPDATE: This blog post has come under some criticism for misrepresenting how Microsoft is presenting its Target case study. It has been pointed out that the language in Microsoft’s case study doesn’t use the word cloud, and that during the video segment in Anderson’s keynote, cloud was not specifically part of the Target presentation; i.e., no one actually said “Target is doing cloud computing here,” and it is therefore beyond the pale to make fun of them for such an out-of-tune sales pitch.

This is unmitigated BS. Despite a technically accurate description of what Target achieved as a beta tester of Hyper-V and SCOM, the story was thrown up in everybody’s face, unabashed, as a big part of Microsoft’s campaign to position itself as part of the cloud computing market.

Judge for yourself:

From the official press release on Target and virtualization:

“Particularly as organizations are contemplating cloud computing, they find comfort in knowing the Microsoft platform can virtualize and manage all kinds of applications — Microsoft’s, a third party’s or home-grown — on a massive scale.”– said Brad Anderson, corporate vice president, Management and Security Division at Microsoft.

Target is front and center on the Microsoft Cloud Case Studies page (screenshot if the page has changed).

Target is top of the list for a Microsoft press kit on cloud computing (screenshot if the page has changed).

The description of Brad Anderson’s Day One MMS 2011 keynote address, which featured the Target case study:

“This keynote will cover Microsoft’s cloud computing offerings. Brad will share how the cloud empowers IT Professionals by showing how to build and manage a private cloud…”

Brad Anderson’s MMS 2011 keynote transcript:

“…today we’re going to focus on the cloud and cloud computing, and really we’re going to focus about how we’re going to empower all of you to take advantage of the cloud. “

“I think what you’re going to see throughout this morning is what we’ve done is make the cloud very, very approachable for you. ”

“…we’re going to share with you over the next hour is the learnings that we’ve had …to deliver what I believe is the most simple, the most complete, and the most comprehensive cloud solution for all of you to use. ”

“So, today, you know, we have Hyper-V cloud out in the world. It’s doing phenomenally well for us. It’s being adopted at rates that we’re just ecstatic with, but it’s a combination of Hyper-V and System Center. “

“So, let’s take a look at what Target’s doing.”

(Video segment.)

Target isn’t doing cloud computing here. If you’re trying to sell hamburgers, don’t show up holding a hot dog. We all have a rough idea of what cloud computing is by now; deliberately screwing with that idea for the sake of a marketing campaign is just dumb.

One more from Anderson’s keynote:

“I often hear quotes like what you see up here: “What is the cloud? You know, is it just a set of fancy catchphrases that’s kind of lacking technical backing?”

Hear, hear.


March 15, 2011  12:37 AM

Is HP hiding cloud software in its labs?

Jo Maitland

SAN FRANCISCO — Hewlett-Packard CEO Leo Apotheker outlined his vision for the company today, and the message was loud and clear: it’s all about the cloud. But as always at these big events, the CEO was long on vision, short on detail.

Central to his plans is a Platform as a Service (PaaS) offering that will compete with Microsoft Azure, VMware’s SpringSource cloud initiatives and Salesforce.com with Force.com and Heroku, among many other PaaS providers.

“If you want to be in the cloud business you have to cover all of the areas, Infrastructure as a Service, Platform as a Service, and I can’t emphasize enough Platform as a Service,” Apotheker told press and analysts gathered here today. However, he glossed over all the software work that has to be done to build a Platform as a Service.

“The software for our cloud platform will be based on a certain number of technologies, but let’s not get into that right now,” Apotheker said. He added that HP will be launching pieces of this platform in 2011 and 2012 and said that it will come from HP Labs, as well as acquisitions and partners.

He stressed that HP wants a lot more developers working with it to build a higher-value software business. But he quickly added that the company’s existing partners, namely Microsoft, will continue to be important.

Last July, HP announced a partnership with Microsoft in which it would bundle Microsoft’s Azure cloud software as an appliance. However, it doesn’t sound like Azure will be the basis of HP’s Platform as a Service offering in the future.

There were lots of questions on how HP plans to catch Amazon Web Services, now a juggernaut in cloud infrastructure services. Some analysts predict that if AWS continues at its current growth rate it will be a $10 billion business by 2016.

“How we will catch up is pretty damn simple,” Apotheker quipped. “We have 25,000 enterprise sales people … and why will our customers buy from us, they want SLAs, they want security, they want a capability that is scalable worldwide, there aren’t many companies that can provide that,” he said.

Yahoo and others have backed away from building out more datacenters due to the capital intensity of the business. Apotheker claimed HP has already made the investment required to offer global cloud services. “From a big investment point of view, quite a lot of it is already there,” he said.

He announced that the company will launch an Open Cloud Marketplace, or app store, that will run on the HP cloud and support software from many different companies.

“In this open environment there will be HP software, but there will also be a lot of non-HP software … at the end of the day we can’t create all of this innovation by ourselves,” he said.

Analytics and security capabilities will be central to HP’s plans overall.

“Security is a huge opportunity here, it is probably the most important application anyone can create in this connected world anyway, so you will see us doing much more in the security side,” he said.

Apotheker took a shot at Symantec, claiming the “point solution providers are having a challenge, not because of us, but because of what they provide.” He said it is easy for smart people to work around “this or that” point product. “Whatever barrier you put at a given point, people will just find another way in.”

He said it is more important to find a solution to secure the entire stack, although he offered few details on how HP might tackle this.

Apotheker resigned as CEO of SAP amid falling sales and a price increase that angered customers. He was in retirement when HP called him to take over the reins from Mark Hurd.


March 8, 2011  9:11 PM

Poor man’s VCE? Rackspace Cloud Builders wants to sell you OpenStack

Carl Brooks

Has hosting and cloud leader Rackspace found a way into the private cloud fray?

A new service offering from Rackspace Cloud Builders will come to you, wherever you may be, and install a bunch of hardware and software that qualifies as cloud; a definite new twist on Rackspace’s usual hosting, co-lo and cloud sales model.

To boot, Rackspace will offer pre-certified arrangements of hardware to go with OpenStack, although it is shy about using that term. At Cloud Connect today, we are told, Dell, Opscode and Rackspace demonstrated ground-up cloud building, which presumably involves some rackable Dell gear and some bootable media.

I must admit to some curiosity: Did every demo start from scratch? Is it cheating if you don’t zero the drives for every “cloud build”? I’d be booting servers off images on my laptop, so presumably it’s only cheating if they changed the BIOS settings.

Rackspace’s new Cloud Builders division consists basically of Anso Labs, acquired earlier this year by Rackspace specifically for its on-prem capabilities. Rackspace claims it can pick and choose from more than 50 companies in the OpenStack project with which to build a private cloud that will be “cost competitive with Amazon,” if customers will only let Rackspace build them a cloud environment.

First of all, that’s ridiculous. Nobody operating a private cloud can compete on cost with Amazon Web Services (AWS). At most, Rackspace might get operating costs down to the point where one could pitch an internal private cloud deployment vs. AWS when security, compliance and governance are taken into account. And companies are not even going to bother to try that until their current infrastructure is fully depreciated.

What this really is, is a way for everybody involved in OpenStack to put up a viable alternative to cloud-in-a-box offerings from premium IT vendors. It’s more a poor man’s VCE, and it’s silly to pretend otherwise.

The writing’s on the wall for IT operations that are not moving off premises for new infrastructure deployments. It’s going to be dense, converged and virtualization-ready; IBM, HP, VCE and now maybe Oracle have a lock on that business. New buys for big hardware will skip right past “converged” for “cloud,” since it’s functionally the same thing with a new set of management tools at the hardware level.

Those big IT vendors sell support, support, installation, and more support. And it’s not cheap — in fact, it’s downright larcenous. In some cases, an Oracle Exalogic with Sun tools, Linux and an Oracle database will run $4 million, plus a support contract that might be 30% of your licensing fees (according to a floor rep last year). Plus, you can’t ever, ever switch after you buy. A fully loaded Vblock is plenty comfortable in that price scale, too.

I don’t know about you, but I can fill a rack or two with blades, storage and open source software for a bit less than that kind of cash. I’m betting Rackspace sees the same opportunity (they’re the commodity operations experts, after all).

So let’s not beat around the bush; the service announcement and hardware demos are really about taking a crack at staying alive in big enterprise IT shops, but that’s a harder sell than telling customers they might be able to do what AWS does if they just know the magic words.


March 3, 2011  11:05 PM

Verizon and SAP: still two old guys at a rave?

Jo Maitland

Verizon is taking another run at selling SAP CRM as a service, this time with SAP’s Rapid Deployment Solution (RDS) on real cloud infrastructure. A year ago the two companies were selling the full SAP CRM package on traditional hosting. Imagine two old guys at a rave and you’ve got the picture. No one came to that party.

The new offering, available in the U.S. only, is designed to be up and running in eight weeks, a much faster turnaround than traditional SAP installations, which can take months. It’s priced on a per-user, per-month subscription, which Verizon said should work out to about $100 per seat. There’s also an implementation fee of $80,000 to $220,000, depending on which modules the customer wants.
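For a rough sense of what that pricing means in practice, here’s a back-of-the-envelope sketch in Python; the 500-seat count and the mid-range fee are my own assumptions, not Verizon’s figures:

    # Back-of-envelope first-year cost for the Verizon/SAP RDS offering.
    # Only the $100/seat/month rate and the $80k-$220k fee range come
    # from Verizon; the seat count and mid-range fee are assumptions.
    seat_price_per_month = 100      # USD, Verizon's quoted estimate
    implementation_fee = 150000     # assumed midpoint of $80k-$220k
    seats = 500                     # hypothetical mid-size deployment

    first_year = implementation_fee + seats * seat_price_per_month * 12
    print("First-year cost for %d seats: $%s"
          % (seats, format(first_year, ",")))
    # -> First-year cost for 500 seats: $750,000

In other words, the subscription looks cheap next to the setup fee, at least until the seat count climbs.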

SAP has other products in its RDS bag, including supply chain, product development and HR software, which Verizon expects to sell as cloud-based services at some point.

It’ll be interesting to watch and see if Verizon gets any traction selling this SaaS offering. It has enormous network reach and is developing cloud infrastructure anyway, so there’s very little financial risk in it selling Software as a Service.

For SAP, building out cloud infrastructure to sell its applications as a service would be super expensive; better to partner with the major network operators and let them sell it for you.

Salesforce.com is the main competition here, with a several-year head start. Can the old guys catch up?

