As the dust was settling and the IT media echo chamber was polishing off its federally mandated outrage/contrarian-outrage quota for all kerfuffles involving Anything 2.0, more outages struck, including a Blogger outage that no one in IT really cared about, although this reporter was outraged that it temporarily took down a favorite blog.
While nobody much cared about Blogger, Microsoft’s hosted (cloud) Exchange and collaboration platform, Business Productivity Online Services (BPOS, now part of Office 365), went down, which people in IT most assuredly did care about. Especially, as many of the forum posters said, if they had recently been sold, or had sold their organization, on “Microsoft cloud” as a preferable option to in-house Exchange.
“I’ve been with Microsoft online for two weeks now, two outages in that time and the boss looks at me like I’m a dolt. I was THIS close to signing with Intermedia,” said one poster. That’s the money quote for me; Intermedia is a very large hosted Exchange provider, and this poster (probably a guy) was torn between hosted Exchange and BPOS. Now he feels like he might have picked wrong: notice he didn’t discuss the possibility of installing on-prem Exchange at all, just weighed two service options.
Microsoft posted a fairly good postmortem on the outage in record time, apparently taking heed of the vicious pillorying AWS got for its lack of communication (AWS’ postmortem was also very good, just published many days after the fact):
“Exchange service experienced an issue with one of the hub components due to malformed email traffic on the service. Exchange has the built-in capability to handle such traffic, but encountered an obscure case where that capability did not work correctly.”
Anyone who’s had to administer Exchange feels that pain, let me tell you. It also tells us BPOS-S is using Exchange 2000 (That is a JOKE, people).
What ties all these outages together is not their dire effect on the victims. That’s inconsequential in the long term, and it won’t stop people from getting into cloud services (there are good reasons to call BPOS cloud rather than hosted application services, but that’s another blog entirely). It’s not the revelation that even experts make mistakes in their own domain, or that Amazon and Microsoft and Google are still largely feeling their way around exactly what running a cloud means.
It’s the communication. If anything could more clearly delineate “cloud service” from “hosted service,” it’s the lack of transparency, the lack of customer touch, and the unshakeable perception among users across the board that, when outages occur, they are on their own.
Ever been in a subway car when the power dies? I grew up in Boston, so that must have happened to me hundreds of times. People’s fear and unease grow in direct proportion to the time it takes the conductor to yell out something to show they’ve got the situation in hand. Everything is always fine, the outage is temporary, no real harm done, but people only start to freak out when they get no assurance from the operator.
Working in IT and having a service provider fall over is the same thing, only you’re going to get fired, not just have a loud sweaty person flop all over you in the dark (OK, that may happen in a lot of IT shops). Your boss doesn’t care that you aren’t running Microsoft’s data center; you’re still responsible. Hosters have learned from long experience that they need to be engaged, or at least appear engaged, when things go wrong, so their users have something to tell their bosses. I used to call up vendors just to be able to tell my boss I’d been able to yell at “Justin our engineer” or “Amber in support” and relay the message.
Cloud hasn’t figured out how to address that yet; either we’re all going to get used to faceless, nerve-wracking outages, or providers are going to need to find a way to bridge the gap between easy, anonymous, and economical on one side and enterprise-ready on the other.
However, foundational shifts in technology come across all fronts, and not every story is about business success or advances in personal convenience; many are far more consequential (and sometimes gruesome) than we normally consider. Now, how can one make the case that cloud computing (in all its manifold “as a Service” glories) was instrumental in the final push to find and put an end to America’s most visible modern enemy?
First, let’s be charitable and assume we were actually looking for him for the last ten years, as opposed to the last two, and that the search wasn’t impossibly tangled up in international politics. Now, let’s assume he was, in fact, well hidden, “off the grid” informationally speaking, and surrounded by trusted confidants, and that we had only scraps of information and analysis to go on.
Of course, we always had a rough idea where he was: Afghan intelligence knew he was near Islamabad in 2007, Christiane Amanpour said sources put him in a “comfortable villa” in 2008, and it was only logical that he’d be located in a place like Abbottabad. Rich old men who have done terrible things do not live in caves or with sheepherders in the boonies; they live comfortably near metropolitan areas, like Donald Trump does.
But all that aside, tying together the intelligence and the operations could have come from new ways that the Armed Forces are learning to use technology, including cloud computing. The AP wrote about a brand-new, high-tech “military targeting centre” that the Joint Special Operations Command (JSOC) had opened in Virginia, specifically to assist in this kind of spook operation.
“The centre is similar to several other so-called military intelligence ‘fusion’ centres already operating in Iraq and Afghanistan. Those installations were designed to put special operations officials in the same room with intelligence professionals and analysts, allowing U.S. forces to shave the time between finding and tracking a target, and deciding how to respond.
At the heart of the new centre’s analysis is a cloud computing network tied into all elements of U.S. national security, from the eavesdropping capabilities of the National Security Agency to Homeland Security’s border-monitoring databases. The computer is designed to sift through masses of information to track militant suspects across the globe, said two U.S. officials familiar with the system.”
Well, there you have it. A “cloud computing network” took down the original Big Bad. Wrap up the season and let’s get on to a new story arc. But wait, you cry. WTH is a “cloud computing network”? That sounds like bad marketing-speak; it’s meaningless babble. Do we know anything more about what exactly was “cloud” about this new intelligence-sifting and operational assistance center?
A spokesman for the United States Special Operations Command (USSOCOM), which is where JSOC gets its authority and marching orders, said there was nothing they could release at this time about the technology being used here.
However, a few months ago I had a fascinating interview with Johan Goossens, director of IT for NATO Allied Command Transformation (ACT), headquartered in Virginia (probably not too far from JSOC’s high-tech spook base), about how NATO, driven in large part by the U.S. military, was putting into play the lessons of cloud computing. He said, among other things, that the heart of the new effort he was leading was two-fold: a new way of looking at infrastructure as a fluid, highly standardized, and interoperable resource, built out in modular form and automated to run virtual machines and application stacks on command (cloud computing, in a word), and ways to marry vast networks of information and assets (human and otherwise) into a cohesive, useful structure.
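To put that “on command” phrase in civilian terms: provisioning stops being a ticket and becomes an API call. Here is a minimal sketch using Apache libcloud against a generic provider; this is purely an illustration of the pattern, not anything NATO or JSOC has disclosed, and the credentials, machine name, and image ID are made up:

```python
from libcloud.compute.base import NodeImage
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Any libcloud-supported provider works; EC2 is just a stand-in here.
Driver = get_driver(Provider.EC2)
conn = Driver('ACCESS_KEY_ID', 'SECRET_KEY')

# Pick a machine size and a stock image (hypothetical AMI ID).
size = next(s for s in conn.list_sizes() if s.id == 't1.micro')
image = NodeImage(id='ami-12345678', name=None, driver=conn)

# "On command": one call and a standardized VM materializes,
# with no three weeks of paperwork and no saluting.
node = conn.create_node(name='analysis-worker-01', image=image, size=size)
print(node.name, node.state)
```

Swap the driver and the same code talks to a different provider; that, in miniature, is the standardized, interoperable resource Goossens is describing.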
Goossens’ project involved consolidating existing NATO data centers into three facilities; each one is federated using IBM technology and services. He started with software development as the obvious test case and said the new infrastructure will be operational sometime this year, which is “light speed by NATO standards.”
Some of this is simple stuff, like making it possible for, oh, say, the CIA to transfer a file to an Army intelligence officer without three weeks of paperwork and saluting everyone in sight (that is not an exaggeration of how government IT functions, and it goes double for the military), or having a directory of appropriate contacts and command structure to look at, as opposed to having to do original research to find out who someone’s commanding officer is. Some of it is doubtless more complex, like analyzing masses of data and delivering meaningful results.
What evidence is there that the U.S. military was already down this road? Well, Lady Gaga fan PFC Bradley Manning was able to sit at a desk in Iraq and copy out files from halfway around the world and any number of sources, so we know the communication was there. We know the U.S. deploys militarized container data centers that run virtualization and sync up with remote infrastructure via satellite. We know this new “targeting centre” in Virginia was up and running well before they let a reporter in on it, and it, almost by definition, had to involve the same technology that Goossens is working with. There are only so many vendors capable of selling this kind of IT to the military, and IBM is at the top of that list.
The Navy SEALs who carried out the raid were staged from one of these modular, high-tech remote bases; the raid itself was reportedly streamed in audio, and partly in video, in real time. Photos and information also went from Abbottabad to Washington in real time. That data didn’t bunny-hop over the Amazon CloudFront CDN to get there, but the principle is the same.
So it’s possible to credit part of the killing of Osama bin Laden to the strength of new ways the world is using technology, including cloud. I sincerely doubt Navy SEALs were firing up Salesforce.com to check their bin Laden leads or using EC2 to crunch a simulation, but I’d bet my back teeth (dear CIA, please do not actually remove my back teeth) that they were doing things in a way that would make perfect sense to anyone familiar with cloud and modern IT operations.
We’ll probably never know exact details about the infrastructure that runs the JSOC spook show, since they don’t have anything to say on the subject and I’m not about to go looking on my own (wouldn’t turn down a tour, though). But it’s a sobering reminder that technology advances across the board, not just in the mild and sunny climes of science and business, but also in the dead of night, under fire, on gunships you can’t hear, and the result is death.
“Think not that I am come to send peace on earth: I came not to send peace, but a sword.” Matt. 10:34
According to The Reg story, HP’s Scott McClellan, chief technologist and interim vice president of engineering for the company’s new cloud services business, spilled the plans by posting the following information about HP’s planned offerings on his public LinkedIn profile [Doh!]:
- HP “object storage” service: built from scratch, a distributed system, designed to solve for cost, scale, and reliability without compromise.
- HP “compute”, “networking”, and “block storage”: an innovative and highly differentiated approach to “cloud computing”, declarative and model-based, where users provide a specification and the system automates deployment and management.
- Common/shared services: user management, key management, identity management & federation, authentication (incl. multi-factor), authorization, and auditing (AAA), billing/metering, alerting/logging, analysis.
- Website and User/Developer Experience. Future HP “cloud” website including the public content and authenticated user content. APIs and language bindings for Java, Ruby, and other open source languages. Fully functional GUI and CLI (both Linux/Unix and Windows).
- Quality assurance, code/design inspection processes, security and penetration testing.
The “object storage” service would be akin to Amazon S3, and the “block storage” service smells like Amazon’s EBS; the automatic deployment piece sounds like Amazon CloudFormation, which provides templates of AWS resources to make it easier to deploy an application. And metering, billing, alerting, authorization, etc. are all part of a standard cloud compute service. How you make a commodity service “highly differentiated” is a mystery to me, and if you do, who’s going to want it? But that’s another story.
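For the unfamiliar, the declarative idea works like this: you hand the service a model of what you want, and it figures out how to build and manage it. Here is a stripped-down sketch of the CloudFormation version of that flow, in Python with boto3; the stack name and AMI ID are invented for illustration, and the call assumes AWS credentials are already configured:

```python
import json
import boto3

# A declarative model: you state WHAT you want, not HOW to build it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # hypothetical AMI ID
                "InstanceType": "t1.micro",
            },
        },
    },
}

# CloudFormation takes the model from here: resource creation order,
# wiring, and lifecycle management are the service's problem, not yours.
cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
```

If HP’s “declarative/model-based approach” means users hand over a specification like this and the system does the rest, the differentiation would have to live in the modeling and the management, not in the commodity compute underneath.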
The Platform as a Service part for developers is interesting, although not a surprise since HP already said it would support a variety of languages including open source ones. And the security elements tick the box for enterprise IT customers rightly worried about the whole concept of sharing resources.
These details are enough to confirm that HP is genuinely building out an Amazon Web Services-like cloud for the enterprise. So why does it need to own every part of the stack? HP has traditionally been the “arms dealer” to everyone, selling the software, the hardware, and the integration services to pull it all together, so why not do the same with cloud? Sell the technology to anyone and everyone that wants to build one? There would be no conflict of interest with the service providers to whom it is also selling gear, and no commodity price wars for Infrastructure as a Service. (Believe me, they are coming!)
Apparently HP believes it has no choice, and other IT vendors seem to believe the same thing. The reason is integration. Cloud services, thanks to AWS’s example, are so easy to consume because all the parts are so tightly integrated. But to offer that, the provider has to control the whole stack (the hardware, the networking, and the full software layer) to ensure a smooth experience for the user.
If you don’t, as others have proven, your cloud might never materialize. VMware began its cloud strategy by partnering: first with Salesforce.com to create VMforce, then with Google on Google App Engine (GAE) for Business, which runs VMware’s SpringSource apps. Then Salesforce.com acquired Heroku and started doing its own thing, no doubt leaving VMware with a deep sense of lost control. Both the Salesforce arrangement and GAE for Business have gone nowhere in over a year, and VMware has since launched its own PaaS, called Cloud Foundry.
Similarly, IBM built its own test-and-dev service, and now a public cloud compute service, from scratch. It’s also working on a PaaS offering, although there’s still no official word on it. Microsoft sells Azure as a service and is also supposedly packaging it up to resell through partners (Dell, HP, and Fujitsu) for companies that want to build private clouds. The latter is over a year late, while Azure has been up and running for a year.
In other words, whenever these companies bring a partner into the mix and try to jointly sell cloud, it goes nowhere fast and they revert to doing it on their own.
The benefit of being late to the game, as HP certainly is, is that it gets to learn from everyone else’s mistakes. Cloud-based block storage needs better redundancy, for example! Or: don’t waste your time partnering with Google or Salesforce.
There’s also a theory that to be able to sell a cloud offering, you have to have run a cloud yourself, which makes some sense. So if the past is any help, and there isn’t much of it in cloud yet, HP is on the right path in building the whole cloud stack.
The Reg story also notes that HP will reveal the full details of its cloud strategy at VMware’s conference, VMworld, in August.