The Troposphere


March 23, 2010  10:30 PM

Cloud spending plans revealed at Cloud Slam ’10

Jo Maitland

If you’re interested in hearing about enterprise IT plans for cloud computing this year, check out the Cloud Slam ’10 virtual trade show happening this week.

I will be giving the TechTarget keynote presentation on March 25th at 3:30 p.m. EST on our cloud purchasing intentions survey data. Over 500 members of our audience completed the survey, answering more than 50 questions on their plans for public and private cloud adoption. There are some interesting trends we’d be happy to share with you.

Cloud Slam ’10 is offering more than 100 expert session presentations on the rapidly shifting world of cloud-based IT and business strategies. Topics include cloud computing system integration, private vs. public cloud computing, setting up a channel for hybrid clouds and secure cloud environment interoperability.

Register today!

February 26, 2010  11:20 PM

CA’s $100 million cloud wager

Carl Brooks

On Wednesday, $4 billion software provider CA bought 20-man 3Tera; analysts reported that CA had paid around 30 times the revenue valuation of the cloud software platform maker. Independent sources now confirm the price was a cool $100 million, terms (cash or stock) as yet undisclosed.

Gut-check this with simple math — 3Tera reported around 80 paying customers, largely small and mid-sized managed service providers (MSPs). They’d have to be paying around $40,000 a year on average, by no measure a startling price for an enterprise software installation, to bring in $3.2 million a year, which, multiplied by 30, brings us right around the reported $100 million.
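
If you want to run that gut-check yourself, here it is as a few lines of Python. The 80 customers and the 30x multiple are the figures reported above; the $40,000 average annual spend is the assumption, not a disclosed number.

```python
# Back-of-the-envelope check of the 3Tera valuation math above.
customers = 80             # reported paying customers
avg_annual_spend = 40_000  # assumed average spend per customer, in dollars
revenue_multiple = 30      # multiple analysts reportedly applied

annual_revenue = customers * avg_annual_spend          # $3.2 million
implied_valuation = annual_revenue * revenue_multiple  # ~$96 million

print(f"Annual revenue: ${annual_revenue:,}")
print(f"Implied valuation: ${implied_valuation:,}")    # lands right around $100M
```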

That kind of valuation may make stock analysts cringe, since a) any firm that looks like it’s wasting its capital cannot be considered to have a sound growth strategy and b) Pets.com. But it’s a great get from the technology side.

Was $100 million too much to pay for 3Tera?

The short answer is no — it was a unique technology with a proven (albeit modest) track record, and it fit a piece of the puzzle CA wanted for cloud computing — point-and-click, one-size-fits-all infrastructure. It’s not like there were dozens of 3Teras floating around to spark a bidding war, and there’s not yet a bubble to artificially inflate the worth of cloud computing technology. CA simply decided it needed this and put cash on the barrel until 3Tera said yes.

However, the figure surely changes the story from “CA snaps up golden opportunity” to “CA just sunk a pile into a future scenario.” CA has $4 billion in revenue and approximately $1 billion in net tangible assets. It just invested a significant portion of that into a software company with 80 customers and a nice-looking Web portal product (basically), and it is betting that the enterprise appetite for private cloud will exceed predictions.

Conservative estimates for cloud spending over the next few years hover around $40-$50 billion, or 10-15% of the overall IT market.

By far the largest part of that cash will go right down the pipe to Software as a Service, leaving a very poor table indeed for infrastructure plays, especially when HP, IBM, and EMC/Cisco/VMware are sitting down to eat with you.

It’s quite possible that the enterprise appetite for what is now considered private cloud will become a big tent for enterprise IT overall, and make those kinds of figures look undercooked, but any way you slice it, CA has a lot riding on this buy. It’s certainly brightened the days of CEOs of small companies everywhere, I’ll say that much.

“3Tera’s impressive exit is validation of the tremendous opportunity facing all cloud startups,” said ubiquitous cloudketeer Reuven Cohen, who also makes cloud infrastructure platform software.

A new CA

On a brighter note, this is a definitive sign that CA has come around from the old days.

CA’s new acquisitions have been marked by caution, generosity(!) and foresight, and a good attitude towards the technology and the talent that’s coming in. Out of Oblicore, NetQoS, Cassatt and now 3Tera, I’m fairly sure the majority of those firms’ employees still work at CA if they desire to do so. CA spokesman Bob Gordon said that all twenty of 3Tera’s employees would stay on with CA and that CEO Barry X Lynn will stay on for a transitional period.

Let’s compare that to, say, 1999, when CA would have systematically lured away or undersold all of 3Tera’s customers, bought up their building lease and cut off the heat, shot the CFO’s dog and then bought the company for $6 and a green apple before firing everyone by Post-It note and carving IP out of code like an irritable Aztec priest. On a Monday.

We’ve come a long way since then, for sure. Congratulations to both companies — to one for the windfall, the other for the bold commitment.


February 16, 2010  9:20 PM

Two free cloud beta services to check out

Jo Maitland

Cloud computing services are popping up like daisies these days. The good part is many of them are launching with a free beta service, which means you can try before you buy and, more importantly, get some valuable experience with cloud-based IT services.

The first one to check out is an expandable NAS appliance from Natick, MA-based Nasuni, which connects to the cloud for backup, restore and disaster recovery purposes. Active data is cached in the appliance on-site, maintaining the availability of data that’s required day to day, while older data is sent over the wire to the cloud service provider of your choice. Right now that could be Amazon (S3), Iron Mountain or Nirvanix, while Nasuni works on building more cloud partners.
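
To picture what a gateway like this does, here is a toy sketch in Python (emphatically not Nasuni’s code) of the general pattern: writes go through to a cloud object store, a small local cache keeps the hot files, and cold files are pulled back from the cloud on demand.

```python
# Toy sketch of a cloud storage gateway: hot data stays in a local LRU cache,
# everything is written through to a cloud object store, and cold data is
# fetched back from the cloud on a cache miss. Illustrative only.
from collections import OrderedDict

class CloudGateway:
    def __init__(self, cloud_store, cache_size=100):
        self.cloud = cloud_store    # e.g. a dict standing in for S3 or Nirvanix
        self.cache = OrderedDict()  # most recently used files live here
        self.cache_size = cache_size

    def write(self, name, data):
        self.cloud[name] = data     # write-through to the cloud copy
        self._cache_put(name, data)

    def read(self, name):
        if name in self.cache:      # hot: served from the local appliance
            self.cache.move_to_end(name)
            return self.cache[name]
        data = self.cloud[name]     # cold: pulled back over the wire
        self._cache_put(name, data)
        return data

    def _cache_put(self, name, data):
        self.cache[name] = data
        self.cache.move_to_end(name)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the least recently used file

# Usage: gw = CloudGateway(cloud_store={}); gw.write("report.doc", b"...")
```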

For anyone with a lot of file-based data who is tired of provisioning, managing and paying for yet another NAS filer, this is an interesting service to check out. Our sister site SearchStorage.com covered the company’s launch. For more details read this story (Nasuni Filer Offers Cloud Storage Gateway for NAS).

On a different note, people using EC2 instances might want to check out how to get more utilization out of them with a free service called Silverline.

IT shops often end up sizing EC2 servers just like in a traditional data center: to meet peak application demand, the servers are overprovisioned. This means spare cycles are costing money.

Silverline creates a virtual background container on any EC2 instance. When an application is run in this background container it can only use the spare cycles. This guarantees that what was already running on the instance will run unaffected, while the spare cycles are used by the application(s) placed in the virtual background container. The company claims EC2 customers can get more from the servers they are already paying for.
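
Silverline’s container technology is proprietary, but the underlying idea of yielding to the foreground workload can be roughly approximated with ordinary Linux scheduling. The sketch below (the batch_job.py script is hypothetical) just launches a job at the lowest CPU priority so it only soaks up cycles nothing else wants; it illustrates the concept, not how Silverline actually works.

```python
# Crude illustration of the "spare cycles" idea: launch a batch job at the
# lowest conventional CPU priority so the foreground workload on the instance
# is barely affected.
import os
import subprocess

def run_on_spare_cycles(cmd):
    """Run cmd so it only competes for CPU the foreground apps aren't using."""
    def drop_priority():
        os.nice(19)  # raise niceness to 19, yielding to everything else
    return subprocess.Popen(cmd, preexec_fn=drop_priority)

# Example: crunch a dataset on an EC2 instance sized for peak web traffic
proc = run_on_spare_cycles(["python3", "batch_job.py", "--input", "data.csv"])
```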

And one advantage over EC2 Spot Pricing is that Silverline’s virtual background container is persistent, whereas spot instances can be terminated when the spot price changes.


February 5, 2010  1:46 AM

Microsoft and NSF giving away Azure

Carl Brooks

The National Science Foundation and Microsoft have announced they will be giving away Azure resources to researchers in an attempt to “shift the dialogue to what the appropriate public/private interaction” is for research computing, according to Dan Reed, Corporate Vice President for Extreme Computing (yes, really) at Microsoft.

For three years, Microsoft will give away an unspecified amount of storage and support, as well as CPU time, for research applications to be run on Azure. NSF assistant director for Computer & Information Science & Engineering Jeannette Wing suggested that cloud computing platforms, and Azure in particular, should be considered a better choice for research facilities than building and maintaining their own infrastructure.

“It’s just not a good use of money or space,” she said.

Look at the Large Hadron Collider, said Wing, which has 1.5 petabytes of data already, or digital research projects that can generate an exabyte of data in a week, or less. She urged researchers to use Azure to figure out new ways of coping with all that information.

This is a nice, charitable gesture, not unlike Amazon’s occasional giveaways of EC2 instances and bandwidth to worthy scientific projects. But there are significant caveats that Microsoft and the NSF have papered over.

First, from all reports, Azure is a very large data center operation, possibly as large as some of the less prestigious high-performance computing facilities that researchers use around the world. Unless Microsoft is giving away the whole thing, it’s not going to make much of a dent in the demand.

Second, go down to the local university science department and tell a professor he or she can hop on a virtualized, remote Windows platform and process their experiment data. Go on, I dare you.
99% of experimental, massive-data, high performance computing is done on open source, *nix-based platforms for some very sound reasons. Microsoft won’t gain much traction suggesting that researchers can do better on Azure. It may find some eggheads desperate for resources, but that’s a different story.

So what is the real import, the overall aim of setting up Azure as a platform to host boatloads of raw data and let people play with it? Both Reed and Wing said they wanted to see researchers with new ideas on how to search and manage these large amounts of data.
Well, that makes more sense. Go sign up for a grant, but read the fine print, or you could be inventing the next Google, brought to you by Microsoft…


January 25, 2010  11:26 PM

AWS experimenting with actually supporting DNS for users

Carl Brooks

Sharp eyes to Shlomo Swidler, who posted an update to an old thread and an old complaint on AWS – getting lumped into spam blacklists. EC2 staffer “Steve@AWS” announced the availability of a private beta today to institute PTR records for selected users, to assist in getting them off real-time blacklists — a standard DNS tool conspicuously absent from AWS.

A major problem for AWS and EC2 since its inception is that users with the public EC2 IP addresses handed out by Amazon are extremely susceptible to getting stuck on spam blacklists, like Spamhaus or Trend Micro’s (Spamhaus is by far the more influential).

Read coverage about the most severe blacklist to date here.

It’s been an ongoing problem because Amazon doesn’t provide the usual level of service for users running websites or sending email from within EC2. Most hosts provide ways for an email server to politely verify that it does, in fact, originate with the domain name it says it does. PTR records do that, and they have become a de facto standard for email hosts. Without them, a spam complaint can knock entire swaths of IP addresses out of the daylight and get them tagged as spam sources.
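
If you’re curious whether a given address resolves backwards at all, the check is a few lines with the Python standard library. The address below is a documentation placeholder, not a real EC2 IP.

```python
# Quick check of whether an IP address has a PTR (reverse DNS) record.
# Mail servers do essentially this lookup when deciding how much to trust a sender.
import socket

def reverse_dns(ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None  # no PTR record -- a red flag for many spam filters

ip = "203.0.113.10"  # placeholder address (TEST-NET-3), not a real EC2 IP
print(reverse_dns(ip) or f"{ip} has no reverse DNS entry")
```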

The only way for hosts to get unflagged after their IPs are dirtied up with the spammer label is for the host provider to individually verify the address is good and notify the blacklist provider. Amazon, being very highly automated and very popular, doesn’t do that well, and it took the blackout by Spamhaus last year to force the cloud provider to open up and start reforming its practice of not responding to customers having email trouble.

Hopefully this private beta is a sign that Amazon is going further and moving towards accepting more of its responsibilities as a web host. After all, giving out the address means you need to police the streets, collect the garbage and make sure the mail can go through. Hosters have taken this on their shoulders since the telecoms washed their hands of responsibilities around spam a decade ago; it’s well past time for Amazon to join in.

UPDATE: Amazon confirms they are adding new features for DNS and conducting a private beta for selected users.


January 15, 2010  4:03 PM

Benchmarks for Cloud — more, please

Carl Brooks

Rackspace has commissioned a set of benchmarks to highlight performance differences between Rackspace and Amazon EC2.

“Benchmarks!” you explode, your exquisitely sensitive logical faculties coruscating in outrage. “Benchmarks are vendor-driven popularity contests that cherry-pick tests for results! It’s a swamp of dismal nonsense, a perpetual statistical hellhole that means nothing! Benchmarks make engineers maaaad!!” you say.

Well, fine, then you do it, because there aren’t any for cloud yet, and I don’t care if Rackspace did go have it done. They had sound reasons, there is precious little precedent, and the results are informative, useful and not overtly flawed.

Analyst and high-traffic expert Matthew Sacks carried out the benchmarks on his website The Bitsource. Overall, the tests were rudimentary and carefully controlled. He timed how long it took to compile a Linux kernel on every type of instance between the two services, and used IOzone to compare read/write performance on the storage systems.

Sacks, a systems administrator at Edmunds.com, said he developed the methodology himself.
“The idea behind it is that we can get a pretty good idea on how these instances stack up,” he said, but it’s not a comprehensive metric. “I decided on kernel compilation as a general measure of CPU,” he said, because it was well understood, easy and fast to replicate and uncomplicated in results.
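
For the curious, the shape of such a test is simple enough to sketch in a few lines of Python. This is not Sacks’ actual harness, just an illustration of the kind of timing run a kernel-compile benchmark involves (the kernel source path is a placeholder).

```python
# Sketch of a kernel-compile timing run of the sort described above:
# configure a kernel tree, build it, and record wall-clock time.
import subprocess
import time

def time_kernel_build(kernel_dir, jobs=1):
    """Return wall-clock seconds to build a Linux kernel tree with `make`."""
    subprocess.run(["make", "defconfig"], cwd=kernel_dir, check=True)
    start = time.time()
    subprocess.run(["make", f"-j{jobs}"], cwd=kernel_dir, check=True)
    return time.time() - start

# Example: run the same build on each instance type and compare
# elapsed = time_kernel_build("/tmp/linux-2.6.32", jobs=2)
# print(f"Build took {elapsed:.1f} seconds")
```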

Sacks said that Rackspace’s motivation was good old-fashioned boosterism. “They had received reports from their customers that Rackspace was way better than EC2,” he said, so they decided to test that out with a third party. The results seem to bear them out.

“There are clear wins in CPU and disk performance,” he said. Rackspace beat EC2 instances in compiling by a slender margin in every case but one, and showed 5 to 10 times the amount of CPU availability. Disk read/write speeds were also higher, sometimes twice as fast, although random access tests were much closer, suggesting throughput on EC2 lags behind Rackspace even if data request execution doesn’t.

However, Sacks took pains to say that his tests did not mean that an application running on EC2 could be shifted to Rackspace and save either time or money by default. He said that users always had to consider the application, not just the infrastructure, and he wanted his tests to be a resource for people to come and compare their specific needs. “The variables are so great it’s hard to come up with a standard for testing [cloud],” he said.

“What I would like to do is test more providers,” said Sacks. Perhaps he’ll get the opportunity; he said the testing and review only took a few weeks and it wouldn’t be hard to repeat for different platforms.

Sacks’ two-man experiment aside, I can think of at least one cloud out there that could make EC2 and Rackspace look like a snail chasing molasses when it comes to kernel compilation and disk I/O:
“NewServer’s Bare Metal Cloud makes Cloud Servers and EC2 look like two sick men in a sack race” sounds catchy to me. I wonder if Rackspace will sponsor that benchmark?

But that’s the point: I want more third-party tests like Sacks’. I want more and more and more independent review of what providers say and what they do. I can find more compelling independent information about the USB stick in my pocket than about any one of the clouds that want me to trust my business to them.

It only took Matthew Sacks a few weeks to make a clean, well-documented and useful set of benchmarks, even if that came at the behest of a vendor.

So let’s make more.


December 31, 2009  11:48 PM

Cloud computing, 2009 in brief

Carl Brooks

It’s fun being at the top of a technology wave. The past year in cloud computing has moved with the giddy, inexorable pace that marks a major technological shift in how we use computing power, and more importantly, how we think about it.

Cloud computing, barely a whisper three or four years ago, is now firmly embedded, if still nascent, in the ontology of mainstream information technology. It’s a part of any conversation about IT anywhere. Even the dyed-in-the-wool Grumpy Sysadmin(TM) will, grumpily, talk about cloud computing.

It started with Amazon, online retailer par excellence, which found a way to get IT pros what they wanted without the hassle of shipping a hundred pounds of metal and very clever sand per buy. The world had moved on to the Internet, they reasoned, so why not offer what IT pros wanted – CPU cycles and plenty of bit storage – without the part they hated?

And it worked. By 2008, the tipping point was reached, and analysts officially began cramming ‘cloud’ into their IT buying predictions, which, naturally, immediately drove IT management insane trying to figure out a) what the cloud was b) what it cost and c) whether or not they needed it. That made 2009 a lot of fun.

So what happened to turn cloud computing from ridiculed buzzword to reality?
Most of us started the year wondering what the devil it was. Fortunately, the government came up with a pretty definitive answer, which should tell anyone with an ounce of sense how robust and uncomplicated the concept is. Many others jumped on the bandwagon with glee, ‘cloud washing’ any old thing with an Internet connection.

Cloud terminology hit mainstream newspapers, and the boob tube, where we got the standard expression of polite interest. It’s now on a par with ‘hacker’, ‘firewall’ and ‘servers’ for IT terms the regular press doesn’t understand but is happy to sprinkle over any tech reporting.

Then there was cloud outage after outage after outage, but nobody cared.

And that’s the long and short of it, kids. No matter what happened, cloud made sense to users, practically and economically. They bought in and they’re still buying in. Analysts and pundits weighed in, promising riches and/or wrack and ruin, security folks went through the roof at every turn, and yet, somehow, the idea makes enough sense that people don’t care. They’ll put up with the potholes for the sake of the ride; it’s still a lot better than walking.

Just so with cloud computing.


December 16, 2009  8:48 PM

Amazon lets users bid on unused EC2 capacity

Jo Maitland

Amazon says its new EC2 Spot Instances offering lets users bid on unused EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price. The price changes based on supply and demand, so this obviously isn’t for everyone.

Amazon says applications suited to this kind of pricing model might be image and video processing, conversion and rendering, scientific research data processing and financial modeling and analysis, all typical EC2 use cases already.

From Amazon’s perspective, it might as well sell as much of its unused capacity as it can, even if it is just for cents on the dollar. Presumably there’s a way through the AWS console for users to be alerted when bids and pricing change, to warn them if they are about to lose their instance to someone willing to pay more for it. That sounds like a nightmare waiting to happen.

There is a feature called Persistent Request that supposedly prevents your instance from terminating before it has finished your job, but you might want to try this out a few times before you do anything real on it!
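
For reference, here’s roughly what a persistent spot request looks like in code, using the much later boto3 SDK rather than the 2009-era console or API; the bid price, AMI ID and instance type are placeholders.

```python
# Minimal sketch of a persistent EC2 spot request with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",            # your maximum bid, in dollars per hour
    InstanceCount=1,
    Type="persistent",           # re-submit the request if the instance is reclaimed
    LaunchSpecification={
        "ImageId": "ami-12345678",   # placeholder AMI
        "InstanceType": "m1.small",  # placeholder instance type
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```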

Hats off to Amazon, though, for continuing to innovate around pricing, and during its peak season too.


December 3, 2009  12:38 AM

Wharton prof discusses Google’s prospects for anti-trust action

Jo Maitland

Is Google a self-regulatory public servant or a rapacious and unscrupulous monopolist? That was the question Eric Clemons, professor of operations and information management at The Wharton School, posed during a talk at the Supernova conference in San Francisco today.

Clemons drew a parallel between Microsoft’s ability in the 1980s and 90s to use its dominant position in one market, operating systems, as leverage to control and dominate another, the browser market. Microsoft’s Internet Explorer browser bundled with its OS killed Netscape, a competitor in the browser market. This anti-competitive, monopolistic power was the basis of the DOJ’s case against Microsoft.

A decade later, Google has 70% of the market for internet search and is launching new products like Gmail, calendaring, office apps, mobile operating systems, laptop operating systems and a cloud computing development platform, all making headway in the market and in some cases knocking out established players. Ironically, Google beat Microsoft to a contract for outsourced email and calendaring in Japan recently by bidding 40% less than Microsoft wanted for the renewal of that contract. Touché! Indeed. But it’s bittersweet.

Clemons believes Google will face increased scrutiny from the DOJ for a potentially predatory monopoly in a non-contestable market that could harm the competitive process, and that this will likely kick off an anti-trust suit against the company.

I think the real message in the professor’s comments is that the high-tech industry moves so fast that it lends itself to monopoly and that market players need to be aware and know when they are in danger of a Google or Microsoft jumping in.

For instance, watch Microsoft’s Windows Azure cloud computing development platform. This could do for Platform as a Service what Google did for search.


December 2, 2009  9:36 PM

When the utility model falls apart

Carl Brooks

Interop was great. It highlighted for me how far and how fast cloud has come along. It was all too brief; all the cloud sessions were slammed and debate ran high.

Panelists like AT&T’s VP for Business Development and Strategy Joe Weinman did a splendid job laying out a cost model for computing that follows the utility model, building on the old saw that “nobody has a generator in their backyard anymore” and arguing that computing services are subject to the same rules as the utility market.

They aren’t, for one reason paramount above all others. Data is unique. It’s not a commodity. One datum of information does not equal another. You don’t care whether your neighbor washes his dishes with water drop 1 or water drop 2, but you’re sure going to care if he’s using your data set instead of his own to make money.

For instance, I asked Joe Weinman what happens to his rosy cost model if net neutrality falls apart and carriers can engage in prejudicial pricing for network users. Naturally, he didn’t answer that, but it’s a primary example of the fundamental problems with treating cloud as a utility.

Of course, what happens is that AT&T makes more money and users that went headlong into public clouds get royally screwed. No large enterprise is a) unaware of this or b) going to do it.

Net neutrality is hardly the last political consideration; it’s just the one I brought up because there was a telco in the room. It’s easy to conceive of legislation that would irrevocably compromise data stored in outside repositories. Indeed, as far as the rest of the world is concerned, in the US it already is.

Political wrangling that raises your electric bill a few cents an hour is one thing. Losing exclusivity to your company data by fiat is quite another. It’ll be a cold day in hell before any large (or even medium-sized) enterprise commits to using public compute as a utility with vulnerabilities like that. Sure, they’ll put workloads in Amazon and take them out again; they’ll shift meaningless data into these resources, but they’ll never, ever think of it the same way they think about the electric company.

It’s still a useful metaphor, in a rudimentary fashion; it gets the concept of ‘on-tap’ across, and it’s balm to the ears of people who didn’t really understand their IT operations anyway. But futurologists and vendors, especially vendors who want to monetize cloud on top of carrier services, need to understand that the message has come home: we know what cloud is, and we know what the real risks are. Now, show us your answers. In the meantime, we’ll keep our ‘generators’ and our data to ourselves.

