The Troposphere


July 6, 2011  5:51 PM

vFabric Hyperic plugin surfaces for Cloud Foundry

Beth Pariseau

A new plugin has been developed that connects part of VMware’s vFabric middleware to its Cloud Foundry Platform as a Service (PaaS) offering, according to a post last week on the SpringSource Hyperic blog.

vFabric Hyperic, which monitors performance in custom Web applications, can now be integrated through the plugin into Cloud Foundry’s VMC command-line interface to monitor applications running on the PaaS platform. Features include auto-discovery, event tracking and metrics collection on Cloud Foundry system and account usage, as well as Cloud Foundry provisioned services. The new integration will also allow for starting, stopping and restarting Cloud Foundry applications, updating reserved memory, and scaling up or down by one application instance to meet performance demands.
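For developers who want a rough feel for those lifecycle operations, here is a minimal sketch in Python that simply shells out to the vmc client the plugin works alongside. The application name is a placeholder, and the exact vmc arguments (particularly the instance-scaling syntax) should be treated as assumptions rather than the plugin’s own mechanics.

    import subprocess

    def vmc(*args):
        # Shell out to the Cloud Foundry vmc command-line client.
        return subprocess.check_output(["vmc"] + list(args))

    app = "my-web-app"            # hypothetical application name

    vmc("restart", app)           # restart the application
    vmc("mem", app, "512M")       # update the app's reserved memory
    vmc("instances", app, "+1")   # scale up by one instance (syntax assumed)
    vmc("instances", app, "-1")   # scale back down by one instance (syntax assumed)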

Meanwhile, Hyperic is just one part of vFabric; other components include the Apache Tomcat-based tc Server; RabbitMQ messaging; GemFire distributed data management; and the vFabric Enterprise Ready Server for Web server load balancing. There have been hints from VMware that RabbitMQ will also make its way onto the Cloud Foundry platform — the Hyperic blog post refers to RabbitMQ, “once available,” as a provisioned service the Hyperic plugin will be able to manage.

But there have been hints about RabbitMQ since the launch of Cloud Foundry, and actual integration has yet to see the light of day. GemFire is another application that could lend itself to cloud-based deployment and development, and broadly, VMware says it would be the ‘natural evolution’ for such offerings to become services offered on the Cloud Foundry platform. But the devil’s in the details, and a detailed strategy for integration between the overall vFabric and Cloud Foundry platforms has yet to be publicly voiced by VMware.

Instead, with the latest release of vFabric, version 5, VMware deepened integration between vFabric and the vSphere hypervisor, rather than with Cloud Foundry — users can now change the ‘identity’ of VMs running different components of vFabric within a given block of vSphere licenses according to demand, and vSphere’s dynamic memory feature has been added to tc Server.

In the spirit of true open source, which Cloud Foundry aims to be, it would be helpful if VMware published a roadmap for integration plans, which would give confidence to developers interested in using the platform. Instead, as it stands today, Cloud Foundry has an experimental air — in Paul Maritz’s words at the Structure conference last month, it’s a “calculated risk” at this point — and VMware could at least theoretically pull the plug on it at any time.

June 30, 2011  11:01 PM

OpSource exit shows the power of the platform

Carl Brooks

OpSource has been bought by ICT and IT services giant Dimension Data. This tells us several important things about the cloud computing market when we look at some of the details. It’s mostly positive unless you’re a private cloud cultist or one of the vendor giants enabling private cloud cargo cults in various areas of IT.

OpSource likely made out here, too; Informa analyst Camille Mendler said NTT, which now owns Dimension Data and was a 5% equity investor in OpSource, is well known for piling up money to get what it wants. “NTT was an early investor in OpSource years ago. They always pay top dollar (see DiData price, which turned off other suitors),” Mendler said in a message.

Mendler also pointed out the real significance of the buy: the largest providers are moving to consolidate their delivery arms and their channel around cloud products, because that’s where the action is right now. Amazon Web Services is an outlier; private cloud in the enterprise is in its infancy; but service providers in every area are in a wholesale migration toward building and delivering cloud computing environments. OpSource already runs in some NTT data center floor space; DiData has a massive SP/MSP customer base, and that market is OpSource’s true strength as well.

DiData is already actively engaged with customers that are doing cloudy stuff, said Mendler, and they basically threw up their hands and bought out the best provider-focused cloud platform and service provider they could find. “There’s a white label angle, not just enterprise,” she said.

And it’s not NTT’s only cloud deal for providers, either: It bought a controlling interest in an Australian MSP with a cloud platform in May. Gathering in OpSource means NTT has a serious stake in most of the world when it comes to the next wave of public and hosted cloud providers.

What else?

Well, DiData is a huge firm in IT services. They have all the expertise and software they’d ever need, but instead of developing a platform or an IaaS to sell to customers, they bought one outright and are starting up a cloud services business unit to sell it pretty much as is. That means, as has been pointed out so many times before, building a cloud is hard work, and quite distinct from well understood data center architectures around virtualization and automation as we used to know them.

It also means there was a pressing need for a functioning cloud business today, or more likely yesterday. “Essentially, what Dimension has said is ‘nothing changes with OpSource,'” said OpSource CTO John Rowell.

Rowell’s a bit giddy; he said that with access to DiData’s partnerships and customers, OpSource gets a fast track to global infrastructure growth in a way it couldn’t before. “We believe we can go head to head with Amazon and we’ll be better than them,” he said. He might not be far off, at least in the MSP sector; OpSource does have a few pieces of the puzzle AWS doesn’t, like a working support system, mature networking (mature networking features in the cloud = hosting circa 1999) and a very slick interface that is pig easy to use or extend.

Overall though, it tells us the real action is behind the scenes for enterprise IT: cloud computing is on fire in the service world; it’s still mostly smoke in the enterprise world.


June 28, 2011  8:34 PM

What the heck is Office 365?

Carl Brooks

Office 365 is live. Read the fluff here, or watch videos all day of Microsoft SMB customers. But what exactly is Office 365? Let us hew to established tradition for “WTF is this thing” stories and start with what it is NOT:

Office 365 is not Microsoft Office software. Not Word, Excel, PowerPoint or Outlook. If you do not have those things, signing up for Office 365 will not get them for you (you can buy them at the same time you sign up, however).

It is not compatible with Microsoft Office 2003. You need to be on Office 2007 or better, because Office 365 needs Office “Open” XML (OOXML) to do most of the neato-burrito online stuff. The Microsoft how-to’s (LGT to the guide for enterprise) say it will pretty much work with MSO07 or MSO10, but you will need MSO10 Professional Plus to use all the Office 365 features.

It is not an Exchange Server, a Communications (now Lync) Server or SharePoint server. It is also not like anything you would consider a hosted Exchange server, nor is it an online email/app suite like Gmail or Zoho. It is not a browser-based online service.

It is not cross platform. This is for Windows and Internet Explorer. It lives on ActiveX and Silverlight.

It has nothing whatsoever to do with mobile devices or mobile apps, except for delivering Exchange mail and using SharePoint Mobile (one of those things is very useful; the other one is SharePoint Mobile).

It is not Google Docs.

It is definitely not iCloud.

What it is:

Office 365 is a replacement for your Exchange and SharePoint servers that comes as a monthly subscription service from Microsoft, and it is an add-on software pack to your Office installation. It also runs Communications Server (now Lync) as a service, but I’m not sure anyone’s ever actually used Lync. It’s not like hosted versions of these products, nor is it like running them yourself: Microsoft does 100% of the admin and you get zero access except to an identity and management layer for adding and managing users and mailboxes to some extent. This is the cloud computing part of Office 365; it was formerly known as BPOS.

Inboxes are 25 GB and the message size limit is 25 MB; the signal benefit here is that you will only intensely annoy the recipients of your 25 MB emails and no longer your IT admin as well. Admins everywhere are chuckling in anticipated schadenfreude at the thought of Microsoft operators trying to unstick Exchange queues full of 25 MB attachments going to 25 GB mailboxes instead of them.

Office 365 lets you send email from yourdomain.com and not you.microsoft.com; it supposedly will do single sign-on if you let it sync with your AD. It requires Active Directory Federation Services 2.0, so Windows Server 2003 support is out the window for that feature.

It DOES NOT integrate any further than syncing users, addresses and the Global Address List. You CANNOT UNSYNC your AD, and it is in no way, shape or form a tool for managing ADs. You’ll still do all user management from your domain server and you’ll manage Office 365 users on Office 365, unless you migrate completely to Office 365 and stop using local directory services (because all you use your ADs for is email, RIGHT?). Microsoft says you can do a standard cut-over or partial migration if you want to stick your entire email infrastructure in the Microsoft cloud.

The add-on part is a download called the “Office desktop setup.” Run it on each machine that will use Office 365 after installing Office 2007 or 2010. Once you’ve done that and set up your users, they can use Office WebApps to edit and share .doc files from their PC in a browser. Apparently it’s not too hot on mobile devices, though.

That’s what Office 365 is, in sum. Is it a Gmail/Google Apps killer? Not at any entry point that is not equal to “free,” it’s not. It’s also clearly not set up to be used the same way. Where’s SkyDrive, by the way? Where’s the cross-browser support?

Is it pretty cool, and does it do neat stuff, like real-time collabo on documents and websites with multiple editors (no more email chains of “pls rvw chnges and snd back asap thx attached,” hooray!)? Sure. And like it or lump it, the world pretty much runs on Office.

But will it upheave the Office desktop landscape? Not even a little bit.


June 22, 2011  8:48 PM

The FBI is coming to get your cloud (or not)

Carl Brooks

Hooray! Another “public cloud is ridiculously dangerous and will eat your babies” news item. Or not. Maybe both. Don’t worry but lock up your babies, is more or less what I’m saying. Let’s break it out.

In conjunction with the LulzSec raids on the Web farms of the body politic, the FBI seized servers in Virginia. This, of course, knocked out a number of perfectly innocent websites and services that were on the same servers as whatever the FBI was after.

This has happened before. In 2009, minor hoster Core IP was raided by the FBI and dozens of the company’s customers were suddenly high and dry. The FBI took servers willy-nilly, not caring who or what was hosted on them.

The end of the story is this: Core IP was implicated, among other things, in widespread telecom fraud and was probably dirty. All the innocent customers who hosted in good faith? SOL, according to the FBI. Whether they knew about it or not, if they partook of a service that was used in conjunction with criminal activities, they were on the hook.

That’s a bit bizarre to anyone even slightly familiar with technology who understands how multitenancy works and does not believe in guilt by association, but it has real world parallels — a distribution center being used for smuggling gets shut down even if legitimate goods are going through and everyone suffers.

But it does seem particularly awful in the virtual world, since incriminating data can so easily be identified and passed to authorities without burning the entire operation to the ground. It’s just so stupidly, pointlessly destructive that it makes the nerd in us grind our teeth in frustration.

Anyway, it’s either spite, pre-trial collective punishment or willful ignorance by the FBI, but two’s enough for a trend. The implications for cloud computing are clear, since clouds are almost by definition multitenant environments; host in public and you are at terrible risk from naughty neighbors. The whole thing can blow up in your face overnight, and then the FBI has all your junk. Therefore, private cloud is the way to go, right?

Wrong. “Do not host with small-time operators” is the lesson here. Until the day the FBI marches out of Equinix trundling a rack or two of Amazon’s gear, I will not believe that this risk will ever touch service providers over a certain scale. The limit on that scale is an in-house legal department (or a law firm on retainer), if I’m not very much mistaken.

Much for the same reason they don’t send SWAT teams to rich people’s houses, the FBI does not flash into Google or Microsoft or Amazon data centers and start kicking things over. It engages in protracted, legally sanctioned and highly specific co-operation with those providers, because the FBI does not want to be dragged into court and possibly curtailed in its ability to abuse those without the legal resources.

When the feds need data or user information or evidence from MS/Google/etc., you can be sure that it is an employee of one of those providers handing it over to them. Hey, it’s plainly stated in most web services and cloud providers’ SLAs — “We fully co-operate with legal investigations” or something.

So don’t worry about this happening to you if you use Amazon Web Services or Rackspace Cloud. Worry about encrypting your data. And this certainly doesn’t do a thing to change the current calculus on enterprise data security and the cloud (which is “MONGO SAY CLOUD BAAAAD!!!”).

Or instead, worry about what your lemur-brained development crew is doing on Amazon’s cloud. There we find a rich source of security delights, from crappy apps running in public to this little gem: The pool of publicly available, user-created Amazon Machine Images (AMIs) is riddled with highly insecure, vulnerable virtual machine images, according to new research from the Darmstadt Research Center for Advanced Security (CASED) in Germany.

Out of 1,100 user-created AMIs they tested, 30% were vulnerable to compromise right from launch. Don’t you feel better now?
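For the curious, here is a minimal sketch, assuming the Python boto library and AWS credentials in the environment, of how one might begin the kind of survey CASED describes: enumerate the publicly shared AMIs in a region and pull a small sample for hands-on auditing. The “is-public” filter follows the EC2 API; none of this is the researchers’ actual tooling.

    import boto.ec2

    # Connect to a single EC2 region; credentials are read from the environment.
    conn = boto.ec2.connect_to_region("us-east-1")

    # Ask EC2 for machine images whose owners have marked them public.
    public_amis = conn.get_all_images(filters={"is-public": "true"})
    print("%d public AMIs found in us-east-1" % len(public_amis))

    # Take a small sample to audit by hand: launch each one and look for
    # leftover SSH keys, credentials, open services and the like.
    for ami in public_amis[:20]:
        print("%s  %s" % (ami.id, ami.name))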


June 15, 2011  10:22 PM

LexisNexis throws hat in big data ring

Carl Brooks

Premier data analysis and records search firm LexisNexis has dropped a large rock into the next-generation database pond: it’s releasing the core of its own data management and search technology as open source software (dual-licensed: a free community edition and a paid-for pro edition).

HPCC Systems comprises three major components, according to Armando Escalante, CTO at LexisNexis. The data processing engine (“Thor”) that organizes and stores the data is a massively parallel batch processing server written in C++ that runs on Linux and commodity x86 servers.

“That gives it a big advantage at run time,” said Escalante, over Hadoop, the open source offshoot of Google’s MapReduce platform written in Java. Escalante claims Thor is four times faster than Hadoop when running certain queries. It functions much as Hadoop does; it’s a distributed file system that requires several nodes and runs queries in as many parallel jobs as possible.

“Roxie” is the data delivery engine; Escalante says it, like Thor, is a clustered architecture running on Linux, for delivering transactions. Point your front end at it and connect with SOAP or JSON to interact. The third element is the interface language used to control these engines, called ECL.

Escalante says that there is no practical limit to how many nodes or how much data these tools can scale to, which makes sense; this kind of architecture is familiar territory for grid and HPC users doing massive data processing jobs. This is not for processing math problems and crunching datasets, however; it is for long-term storage of, and access to, a dynamic pool of unstructured data in very large amounts, amounts that would have seemed utterly ludicrous when LexisNexis began building out this platform a decade ago.
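To make the “point your front end at it and connect with SOAP or JSON” part concrete, here is a minimal sketch using Python’s requests library. The host, port, query name, payload and response shape are hypothetical placeholders, not HPCC’s documented interface.

    import requests

    # Hypothetical Roxie query endpoint and parameters.
    ROXIE_URL = "http://roxie.example.com:8002/query/people_search/json"
    payload = {"people_search": {"last_name": "Smith", "state": "FL"}}

    resp = requests.post(ROXIE_URL, json=payload, timeout=30)
    resp.raise_for_status()

    # Iterate over whatever rows the query returns (response shape assumed here).
    for row in resp.json().get("Results", []):
        print(row)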

“Ten years ago we were doing big data. 18 terabytes online serving to our customers,” said Escalante. “Ten years ago that was big data. Not so much now.” Today firms like Google, LexisNexis and Microsoft casually talk about storing petabytes of data and the need to sort through it all. Large enterprises are also sitting on exponentially expanding stores of business data even if they aren’t in the Information game or the online ad business.

Escalante says LexisNexis’ motivation for this was twofold: one was the desire to get free innovation from the scientific and database communities that need operations at this scale, and the other was to tap into the trend toward data management at very large scale and with unstructured data stores. “Three years ago we started going to the Hadoop conferences and said finally someone’s talking about this, and we’ve seen the growth [and] we believe our software is superior,” he said.

That’s not an absurd claim; while not the household word that Google is, LexisNexis is the leading research and data location service in the world, and it stores and searches truly vast empires of publications and data, including sources deliberately kept out of Google’s reach and data stores that Google doesn’t bother to make available. LexisNexis is a serious research tool with serious performance, and it charges a pretty penny for the privilege too. Google makes its search results available for free because it sells ads around them, although both firms derive their value from correctly linking disparate kinds of data together.

Google is also famously secretive; whatever it’s using for MapReduce is years ahead of Hadoop. LexisNexis, also justly famous for tight lips, claims that the open source HPCC will be developed just as its internal platform develops. The pro edition gets you support plus the other management tools LexisNexis has developed around HPCC. Escalante said the original driver of HPCC was to get out from under the thumb of Oracle and other “shrink-wrap” software vendors, since the amount of money LexisNexis would have to pay to run its business on Oracle would probably buy Larry Ellison another couple of yachts. He said traditional relational databases could certainly get the job done for big data, but the back-pocket pain was extreme.

“You can buy 100, 200 big Oracle systems and maybe do it but it’ll cost you a fortune,” he said. Now LexisNexis thinks enterprises will look seriously at HPCC as an alternative to more Oracle in their data center, although Escalante admits it’s going to be a tough sell, since enterprises are always interested in stuff that works and never interested in being someone’s science project.

Maybe having a legitimate information management firm backing HPCC Systems will make it easier to get in the door; maybe not. MySQL, another free database, got some entry to the enterprise when backed by a commercial firm, but MySQL mostly took off on the web, where there was a hole to be filled. IBM and Oracle didn’t exactly go down in flames because another free database showed up; they’re probably not quaking in their boots now, either.

This also means LexisNexis has decided that their infrastructure technology has minimal value to them as a trade secret (and they aren’t shy when it comes to revenue grabs, believe me) and more value to them as a service business, which is in its own way an interesting reflection on the cloud computing trends of today. It will also, of course, run on Amazon Web Services, and Escalante said there are plans to run HPCC as an online, pay-as-you-go data processing service at some point.

Will it be a “Hadoop killer”? Probably not; open source doesn’t work that way. Will it turn you into Google overnight? Probably not, but it’s nice to see another legit contender join the fray, and the possibilities are only positive for anyone dealing with large amounts of data and a bent toward experimentation.

A DNA sequencer can generate a terabyte of raw data a day. Currently there’s no good way to deal with that data except to crunch it, look at the results and put it away. What if you could keep that data alive and do as many searches in as many ways as you like on it, on the same server hardware you’ve already got?


May 18, 2011  3:23 PM

Cloud outage roundup!

Carl Brooks

It’s been a rough patch for cloud computing in the “perceptions of reliability” department. Gremlins working overtime caused EBS to fail at Amazon, taking down a bunch of social media sites, among others. Naturally, that got a lot of attention, much as throwing an alarm clock down a wind tunnel will make a disproportionate amount of noise.

As the dust was settling and the IT media echo chamber was polishing off the federally mandated outrage/contrarian outrage quota for all kerfuffles involving Anything 2.0, more outages struck, including a Blogger outage that no one in IT really cared about, although this reporter was outraged that it temporarily spiked a favorite blog.

While nobody was caring about Blogger, Microsoft’s hosted (cloud) Exchange and collaboration platform, Business Productivity Online Services (BPOS, now a part of Office 365), went down, which people in IT most assuredly did care about. Especially, as many of the forum posters said, if they had recently either been sold on, or sold their organization on, “Microsoft cloud” as a preferable option to in-house Exchange.

“I’ve been with Microsoft online for two weeks now, two outages in that time and the boss looks at me like I’m a dolt. I was THIS close to signing with Intermedia,” said one poster. That’s the money quote for me; Intermedia is a very large hosted Exchange provider, and this poster (probably a guy) was torn between hosted Exchange and BPOS. Now he feels like he might have picked wrong: notice he didn’t discuss the possibility of installing on-prem Exchange, just two service options.

Microsoft posted a fairly good postmortem on the outage in record time, apparently taking heed of the vicious pillorying AWS got for its lack of communication (AWS’ postmortem was also very good, just many days after the fact):

“Exchange service experienced an issue with one of the hub components due to malformed email traffic on the service. Exchange has the built-in capability to handle such traffic, but encountered an obscure case where that capability did not work correctly.”

Anyone who’s had to administer Exchange feels that pain, let me tell you. It also tells us BPOS-S is using Exchange 2000 (That is a JOKE, people).

What ties all these outages together is not their dire effect on the victims. That’s inconsequential in the long term, and won’t stop people from getting into cloud services (there are good reasons to call BPOS cloud instead of hosted application services but that’s another blog entirely). It’s not the revelation that even experts make mistakes in their own domain, or that Amazon and Microsoft and Google are largely still feeling their way around on exactly what running a cloud means.

It’s the communication. If anything could more clearly delineate “cloud service” from “hosted service,” it’s the lack of transparency, the lack of customer touch, and the unshakeable perception among users across the board that when outages occur, they are on their own.

Ever been in a subway car when the power dies? I grew up in Boston, so that must have happened hundreds of times to me. People’s fear and unease grow in direct proportion to the time it takes the conductor to yell out something to show they’ve got the situation in hand. Everything is always fine, the outage is temporary, no real harm done, but people only start to freak out when they get no assurance from the operator.

Working in IT and having a service provider fall over is the same thing, only you’re going to get fired, not just have a loud sweaty person flop all over you in the dark (OK, that may happen in a lot of IT shops). Your boss doesn’t care that you aren’t running Microsoft’s data center; you’re still responsible. Hosters have learned from long experience that they need to be engaged, or at least provide the appearance of being engaged, when things go wrong, so their users can have something to tell their bosses. I used to call up vendors just to be able to tell my boss I’d been able to yell at “Justin our engineer” or “Amber in support” and relay the message.

Cloud hasn’t figured out how to address that yet; either we’re all going to get used to faceless, nerve-wracking outages or providers are going to need to find a way to close the gap between easy, anonymous and economical on one side and enterprise-ready on the other.


May 6, 2011  7:42 PM

Did cloud computing help catch Osama bin Laden?

Carl Brooks

Now that we’ve gotten the link bait headline out of the way, let me say first that cloud computing is in no way to be considered anywhere near as important to the death of Osama bin Laden as the actual people with guns and helicopters. Credit where credit is due.

However, foundational shifts in technology come across all fronts, and not every story is about business success or advances in personal convenience; many of them are far more consequential (and sometimes gruesome) than we normally consider. Now, how can one make the case that cloud computing (in all its manifold “as a Service” glories) was instrumental in the final push to find and put an end to America’s most visible modern enemy?

First, let’s be charitable and assume we were actually looking for him for the last ten years as opposed to the last two, and that the search wasn’t impossibly tangled up in international politics. Now, let’s assume he was, in fact, well hidden, “off the grid” informationally speaking, and surrounded by trusted confidantes, and we only had scraps of information and analysis to go on.

Of course, we always had a rough idea where he was: Afghan intelligence knew he was near Islamabad in 2007, Christiane Amanpour said sources put him in a “comfortable villa” in 2008, and it was only logical that he’d be located in a place like Abbottabad. Rich old men who have done terrible things do not live in caves or with sheepherders in the boonies; they live comfortably near metropolitan areas, like Donald Trump does.

But all that aside, tying together the intelligence and the operations could have come from new ways that the Armed Forces are learning to use technology, including cloud computing. The AP wrote about a brand-new, high-tech “military targeting centre” that the Joint Special Operations Command (JSOC) had opened in Virginia, specifically to assist in this kind of spook operation.

“The centre is similar to several other so-called military intelligence ‘fusion’ centres already operating in Iraq and Afghanistan. Those installations were designed to put special operations officials in the same room with intelligence professionals and analysts, allowing U.S. forces to shave the time between finding and tracking a target, and deciding how to respond.

At the heart of the new centre’s analysis is a cloud computing network tied into all elements of U.S. national security, from the eavesdropping capabilities of the National Security Agency to Homeland Security’s border-monitoring databases. The computer is designed to sift through masses of information to track militant suspects across the globe, said two U.S. officials familiar with the system.”

Well, there you have it. A “cloud computing network” took down the original Big Bad. Wrap up the season and let’s get on to a new story arc. But wait, you cry. WTH is a “cloud-computing network”? That sounds like bad marketing-speak; it’s meaningless babble. Do we know anything more about what exactly was “cloud” about this new intelligence-sifting and operational assistance center?

A spokesman for the United States Special Operations Command (USSOCOM), which is where JSOC gets its authority and marching orders, said there was nothing they could release at this time about the technology being used here.

However, a few months ago, I had a fascinating interview with Johan Goossens, director of IT for NATO Allied Command Transformation (ACT), headquartered in Virginia (probably not too far from JSOC’s high-tech spook base), about how NATO, driven in large part by the U.S. military, was putting into play the lessons of cloud computing. He said, among other things, that the heart of the new efforts he was leading was twofold: a new way of looking at infrastructure as a fluid, highly standardized and interoperable resource built out in modular form and automated to run virtual machines and application stacks on command — cloud computing, in a word — and ways to marry vast networks of information and assets (human and otherwise) into a cohesive, useful structure.

Goossens’ project involved consolidating existing NATO data centers into three facilities; each one is federated using IBM technology and services. He started with software development as the obvious test case and said the new infrastructure will be operational sometime this year, which is “light speed by NATO standards.”

Some of this is simple stuff, like making it possible for, oh, say, the CIA to transfer a file to an Army intelligence officer without three weeks of paperwork and saluting everyone in sight (that is not an exaggeration of how government IT functions, and it goes in spades for the military), or having a directory of appropriate contacts and command structure to look at as opposed to having to do original research to find out who someone’s commanding officer is. Some of it is doubtless more complex, like analyzing masses of data and delivering meaningful results.

What evidence is there that the U.S. military was already down this road? Well, Lady Gaga fan PFC Bradley Manning was able to sit at a desk halfway around the world and copy out files from any number of sources, so we know the communication was there. We know the U.S. deploys militarized container data centers that run virtualization and sync up with remote infrastructure via satellite. We know this new “targeting centre” in Virginia was up and running well before they let a reporter in on it, and it, almost by definition, had to involve the same technology that Goossens is involved in. There are only so many vendors capable of selling this kind of IT to the military. IBM is at the top of that list.

The Navy SEALs who carried out the raid were staged from one of these modular, high-tech remote bases; the raid itself was reportedly streamed in audio, and partly in video, in real time. Photos and information also went from Abbottabad to Washington in real time. That data didn’t bunny hop over the Amazon CloudFront CDN to get there, but the principle is the same.

So it’s possible to pin part of the killing of Osama bin Laden on the strength of new ways the world is using technology, including cloud. I sincerely doubt Navy SEALs were firing up Salesforce.com to check their bin Laden leads or using EC2 to crunch a simulation, but I’d bet my back teeth (dear CIA, please do not actually remove my back teeth) that they were doing things in a way that would make perfect sense to anyone familiar with cloud and modern IT operations.

We’ll probably never know exact details about the infrastructure that runs the JSOC spook show, since they don’t have anything to say on the subject and I’m not about to go looking on my own (wouldn’t turn down a tour, though). But it’s a sobering reminder that technology advances across the board, not just in the mild and sunny climes of science and business, but also in the dead of night, under fire, on gunships you can’t hear, and the result is death.

“Think not that I am come to send peace on earth: I came not to send peace, but a sword.” Mat. 10:34


May 4, 2011  8:26 PM

Why HP wants the whole cloud stack

Jo Maitland

HP’s clumsy cloud leak this week sheds a little bit more light on the printer giant’s cloud computing plans, but the details signal a much bigger trend. The major IT players feel they must own the whole cloud stack. Why?

According to The Reg story, HP’s Scott McClellan posted the following information on his public LinkedIn profile about HP’s planned offerings:

- HP “object storage” service: built from scratch, a distributed system, designed to solve for cost, scale, and reliability without compromise.
- HP “compute”, “networking”, and “block storage”: an innovative and highly differentiated approach to “cloud computing” – a declarative/model-based approach where users provide a specification and the system automates deployment and management.
- Common/shared services: user management, key management, identity management & federation, authentication (incl. multi-factor), authorization, and auditing (AAA), billing/metering, alerting/logging, analysis.
- Website and user/developer experience: the future HP “cloud” website, including the public content and authenticated user content; APIs and language bindings for Java, Ruby, and other open source languages; a fully functional GUI and CLI (both Linux/Unix and Windows).
- Quality assurance, code/design inspection processes, security and penetration testing.

The “object storage” service would be akin to Amazon S3, and the “block storage” service smells like Amazon’s EBS; the automatic deployment piece sounds like Amazon CloudFormation, which provides templates of AWS resources to make it easier to deploy an application. And metering, billing, alerting, authorization etc. are all part of a standard cloud compute service. How you make a commodity service “highly differentiated” is a mystery to me, and if you do, who’s going to want that? But that’s another story.

The Platform as a Service part for developers is interesting, although not a surprise since HP already said it would support a variety of languages including open source ones. And the security elements tick the box for enterprise IT customers rightly worried about the whole concept of sharing resources.

These details are enough to confirm that HP is genuinely building out an Amazon Web Services-like cloud for the enterprise. So why does it need to own every part of the stack? HP has traditionally been the “arms dealer” to everyone, selling the software, the hardware and the integration services to pull it all together, so why not do the same with cloud? Sell the technology to anyone and everyone that wants to build a cloud? There would be no conflict of interest with service providers to whom it is also selling gear, and no commodity price wars for Infrastructure as a Service. (Believe me, they are coming!)

Apparently HP believes it has no choice and other IT vendors seem to believe the same thing. The reason is integration. Cloud services, thanks to AWS’s example, are so easy to consume because all the parts are so tightly integrated together. But to offer that, the provider has to have control of the whole stack — the hardware, the networking and the full software stack — to ensure a smooth experience for the user.

If you don’t, as others have proven, your cloud might never materialize. VMware began its cloud strategy by partnering, first with Salesforce.com to create VMforce, then with Google, creating Google App Engine (GAE) for Business, which runs VMware’s SpringSource apps. Then Salesforce.com acquired Heroku and started doing its own thing, no doubt leaving VMware with a deep loss of control. Both the arrangement with Salesforce and the one with Google have gone nowhere in over a year, and VMware has since launched its own PaaS, called Cloud Foundry.

Similarly, IBM built its own test and dev service and now public cloud compute service from scratch. It’s also working on a PaaS offering although there’s still no word on this. Microsoft sells Azure as a service and is also supposedly packaging it up to resell through partners (Dell, HP, and Fujitsu) for companies that want to build private clouds. The latter is over a year late, while Azure has been up and running for a year.

In other words, whenever these companies bring a partner into the mix and try to jointly sell cloud, it goes nowhere fast and they revert to doing it on their own.

The benefit of being late to the game, as HP certainly is, means it gets to learn from everyone else’s mistakes. Cloud-based block storage needs better redundancy, for example! Or, don’t waste your time partnering with Google or Salesforce.

There’s also a theory that to be able to sell a cloud offering, you have to have run a cloud yourself, which makes some sense. So if the past is any help, and there isn’t much of it in cloud yet, HP is on the right path building the whole cloud stack.


May 3, 2011  11:30 PM

HP exec leaks cloud plans; VMware in, Microsoft out?

Jo Maitland

HP will most likely base its cloud Platform as a Service offering on VMware’s Cloud Foundry, according to plans leaked by an HP exec. Surprisingly, there was no mention of support for Microsoft Azure, possibly because Microsoft is still working out the spec for reselling this software.

The Register reported that Scott McClellan, the chief technologist and interim vice president of engineering for HP’s new cloud services business spilled the plans on his public LinkedIn profile. [Doh!]

According to The Reg story, HP will reveal the details of its cloud strategy at VMware’s conference, VMworld, in August.


April 22, 2011  7:00 PM

Using the IBM Smart Business Cloud — Enterprise

Carl Brooks

So I was very excited when IBM officially launched its general purpose public cloud service. It was a validation of the cloud model for the enterprise; it was a chance to see what one of the premier technology vendors on the planet would deliver when it put shoulder to wheel on this exciting new way to deliver IT.

It’s got a growing user base, too: check out the “Profile” screenshot at the bottom; not only do you get to see your IBM Cloud account, you get to see all your IBM friends, too. As of this writing, IBM’s cloud has 729 users running 2,436 instances, 578 block stores and 666 images.

Turns out it’s pretty much feeling its way along, just as Amazon Web Services (AWS) was 3-4 years ago. It’s…um…not polished, but it works. It’s a true public cloud experience, even if the pricing is logarithmic in scale rather than incremental (it goes from “quite reasonable” to “Oh My God” fairly quickly). You click and provision storage, instances, and so on. But it feels a little raw if you’re used to RightScale, the AWS Management Console and so on. It’s very bare bones at the moment.

It’s also abundantly clear that the IBM Smart Business Cloud — Enterprise (SBC-Enterprise) is exactly the same as the IBM Smart Business Development and Test Cloud. The transition to “enterprise class public cloud” is simply hanging a new shingle on the door. See the screenshots below; they haven’t really finished transitioning the brand on the portal pages, and it’s all over the documentation too. The test and dev cloud and the SBC-Enterprise cloud are one and the same.

But that’s fine by me: if IBM wants to take its dev cloud infrastructure and call it Enterprise, it can do that. I’m not aware of any conceptual reasons for not doing production in a cloud right next to where you do test and dev, besides the expectations for uptime and support.

What’s a little crazy is how much uber-complicated stuff is baked right in, like Rational Asset Manager and IDAM, even though actual use is a little creaky. This highlights the main difference between the approach IBM is taking and the approach AWS took. The very first thing you do on creating your IBM profile and logging in at http://ibm.com/cloud/enterprise is manage users. The last thing you do (or can do; I couldn’t even figure out where to do it) is check your bill and see how much you’ve consumed and how much you’ve paid. That’s almost the reverse of the AWS experience; again, nothing really wrong with that.

That’ll make the lone wolf supernerd webdork garage geniuses a little discomfited, but it’ll make the project managers very happy. Which is probably the point; IBM definitely isn’t courting startups.

Other notable positives: status updates and notifications about the service are baked into the dashboard and appear at the top of the screen. When I was using it, there was a helpful suggestion to pick the Boulder, CO, data center because of heavy use at IBM’s flagship cloud DC in Raleigh. The provisioner even automagically put Boulder at the top of the list for me. Does anyone else remember when AWS didn’t even have a status dashboard? I do.

The web portal is reliable, fast and built on solid, modern open standards; no “Choose your browser with care, young padawan” here. Security is superficially reliable: You are forced to create a 2048-bit RSA keypair for your first Linux instance and strongly encouraged to do so for each additional one; the Windows instances enforce complex passwords and you don’t get to be Administrator at provisioning. Only ports 22 and 3389 are open, respectively. After you log in and start monkeying around, of course, you are on your own.

For the brief handful of hours I used it, the connection and response time were rock solid and quite acceptable. Disk I/O was fine both on the instances and to the attached block stores. There was only a little weirdness when patching or installing software. Licensing is all brushed under the rug, as it should be. Getting a test website up and running was trivial. Instances start with 60 GB of local storage and block stores start at 250 GB. The REST APIs? Probably OK. I, uh, sort of ignored the API part of it. But they’re there.

However, it’s easy to see this is early days for an end-user service. I got the always-helpful “This is an error message: the message is that there is an error” at random. Provisioning a stock SUSE Enterprise instance reliably took 6-7 minutes each time. The Windows instance took north of 25 minutes every time (on my first try I used the delta to fumble around with SUSE and vi and get VNC going because I have apparently devolved into a helpless monkey without a GUI).

SUSE was no-password SSH, username “idcuser” (short for ‘IBM Dev Cloud User,’ I’m guessing). When I created a privileged user and went through VNC, I couldn’t sudo until I went back to SSH and set a password. idcuser can sudo in SSH without a password but not over VNC, apparently. Which is fine; I’m glad it doesn’t have a default root password, and that’s a Good Thing (TM), but I had to figure that out. AWS just gives you a stock SSH session and everyone knows to change the root password first. IBM’s instructions exclude mention of the root password (“sudo /bin/bash” in SSH instead).
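For what it’s worth, the login dance described above is easy to script. Here is a minimal sketch with the Python paramiko library, connecting as idcuser with the keypair generated at provisioning and running a command through passwordless sudo; the host address and key path are placeholders.

    import paramiko

    HOST = "198.51.100.10"            # placeholder instance address
    KEY = "/path/to/idcuser-key"      # the RSA private key created at provisioning

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username="idcuser", key_filename=KEY)

    # idcuser can sudo without a password over SSH, so root-level commands work here.
    stdin, stdout, stderr = client.exec_command("sudo /bin/bash -c 'whoami'")
    print(stdout.read())
    client.close()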

I couldn’t save my modified instances as customized images to boot later: the SUSE image showed up in my repository after several tries but wouldn’t relaunch. I eventually managed to save a working image after a few days. The Windows image just refused to save; I gave up after a half dozen tries. There’s no way to upload my own images; I’m stuck with a collection of RHEL, SUSE and a lonely Windows Server 2008 for a portfolio. I’d have to waste a lot of time to replicate my own VMs from that, and if I wanted to run any other OS, forget it.

Signing up was pleasantly easy, but I didn’t get access until a (very helpful) IBM rep contacted me and sent me a welcome kit via email. I was also mistakenly signed up for Premium Support Services without asking to be, which is one of those automatic eye-watering cost multipliers I mentioned before. Fortunately, IBM has been quite responsive and I’m only getting charged for what I use. Anyone remember when AWS didn’t even have a support response option? I do. Again, IBM is working backward (in this case, kind of a good thing).

All of this, most especially the lack of freedom in managing a VM library (couldn’t save, couldn’t upload) and the distinctly unfinished branding effort going on, puts IBM’s cloud in the “nice idea so far” category. It’s going to take some time and some effort for this to be anywhere close to the capabilities or integrated services AWS has. Only a true enthusiast would want to run anything in production on this. It’s still definitively not an enterprise-class cloud environment for running workloads.

It probably will be in time; unlike AWS, IBM doesn’t have to invent anything. I suspect the real issues at IBM around this are in tying in the appropriate moving parts and adding features with an eye to the end-user experience. IBM is historically not good at that, which is probably why AWS got there first. Amazon is good at that part. Scale matters too: IBM is one of the world’s largest data center operators in its own right; Amazon rents space in other people’s. If IBM ever runs into growing pains, as AWS has, they are going to be on a completely different plane. I’d expect IBM to screw up the billing before it screwed up operations, for example.

Anyway, check out the screenshots. When I get my invoice at the end of the month, I’ll tell you all how much this cost me. If it was over $10 I’ll be shocked.

User management page

Storage page

My Profile! and 729 other IBM friends!

A Helpful Error Message

The IBM Cloud Control Panel

UPDATE: Due to a promotion campaign that I was apparently grandfathered into by a few days or a week, which ends June 1, my test-drive for the IBM Cloud was free. A quick login and peek around shows the current number of users at 864, more than 100 over a month ago, but 2962 running instances, demonstrating an uptick in active customers but also the fungibility of demand.

They’ve updated the design to scrub away more of the IBM Smart Business Development and Test Cloud branding and I was finally able to launch my saved SUSE image. Progress!

