The Troposphere

May 17, 2010  6:36 PM

Inside the world of cloud computing at Citrix Synergy 2010

Steve Cimino

Donna Lyon, an attendee at Citrix Synergy, offers her take on the cloud announcements from the show.

There is always a debate over whether cloud computing is a marketing phrase or a technological reality, and the Citrix Synergy event held in San Francisco was no exception.

Mark Templeton, president and CEO of Citrix, wasted no time in announcing that the cloud technology built by Sonnenschein Nath & Rosenthal, a global law firm, won the firm the Innovation Award for 2010. The firm empowers employees by giving them access to the information they need, whenever and wherever they need it, confidentially and securely. Using any device, whether a desktop computer, mobile phone or iPad, the firm’s employees can access internal company records immediately through their private cloud. This potentially offers employees a better work/life balance, while also allowing the firm to set up new offices quickly and grow more efficiently.

“Virtualization and cloud computing is our future…if you’re not doing it now you need to be,” said Andy Jurczyk, CIO of Sonnenschein Nath & Rosenthal.

A session on the future of IT was led by Michael Harries and Adam Jaques, both from Citrix. Harries also insisted cloud computing was the way of the future, despite some concerns from audience members working in the healthcare industry. Jaques, on the other hand, noted that he still considers cloud to be mostly a marketing term.

Duncan Johnston-Watt, CEO of CloudSoft Corporation, and Bruce Tolley, VP of outbound and corporate marketing at Solarflare Communications, hosted a session about how to build an enterprise-class cloud. The pair then demoed the results of their cloud computing test center, created in July 2009, which delivers increased data speeds for internal clouds.

Frank Gens, senior vice president and chief analyst of IDC, took the stage to talk about three big IT trends that are set to change the industry:

  • Mobility, due to 1 billion mobile internet users, 220 million smartphones, 500,000 mobile phone apps and the fact that emerging markets are phone-centric IT users.
  • Cloud computing, due to the desire to consolidate, virtualize and automate.
  • The information avalanche, due to the 7 billion communicating devices in place, 700 million social networkers, and the tons of video dominating new growth. Today there is 0.8 ZB of data out there, but in ten years, there will be 35 ZB (a quick growth-rate check follows this list).
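
Taken at face value, those storage figures imply a striking growth rate. Here is a quick back-of-the-envelope check in Python; the 0.8 ZB and 35 ZB figures are from Gens’ talk as reported above, and the compounding math is simply my own arithmetic sketch:

    # Rough check of the implied growth rate behind the data-avalanche numbers.
    start_zb = 0.8   # zettabytes of data today, as cited in the keynote
    end_zb = 35.0    # zettabytes projected ten years out
    years = 10

    cagr = (end_zb / start_zb) ** (1 / years) - 1
    print(f"Implied compound annual growth: {cagr:.0%}")       # roughly 46% per year
    print(f"Total growth: {end_zb / start_zb:.1f}x over {years} years")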

Companies still focused on physical resources are going to be doomed, Gens stated. With the influx of data, organizations are going to have to move into the cloud.

Cloud security concerns remain, especially within the healthcare and government industries, but the takeaway from Citrix Synergy is that people are changing the way they think about cloud computing. The early adopter organizations, such as Sonnenschein, are pushing aside any doubts and embracing the technology. It is early days now, but soon we may not have a choice.

Donna Lyon specializes in external communications and media relations in the software and hardware industries. She has more than eight years’ experience in marketing, strategy development, public affairs and public relations, working with companies including Cisco Systems, Hewlett-Packard, Informatica and BlueArc. Donna’s technology areas of focus include software, virtualization, data centers, networking and collaboration.

Donna’s passion for marketing communications is also shown through her work as a board member on the San Francisco chapter of the American Marketing Association. Donna holds an MBA from Golden Gate University along with a Diploma in Marketing from the Chartered Institute of Marketing at Bristol University.

May 14, 2010  7:49 PM

A slap in the face to business as usual

Carl Brooks

The federal government has just launched a website running entirely on Amazon’s cloud services. Vivek Kundra, federal CIO and cloud champion, is using the site to browbeat skeptics who said that the fed shouldn’t or couldn’t use one-size-fits-all cloud IT services to run important stuff. It’s an opportunity to do something that he hasn’t been able to do so far: flex some muscle and make people sit up and pay attention.

Everything to date has either been a science project (front-end hosting at Terremark, NASA Nebula, etc.) or a bunch of fluff and boosterism, and his promised cloud computing budgets haven’t hit the boards yet, so up until now, it was business as usual. I’ll bet agency CIOs were spending most of their time figuring out how to ignore Kundra and laughing up their sleeves at him.

This changes things. The site is a whole project, soup to nuts, running out in the cloud, not just a little piece of an IT project or a single outsourced process. It’s a deliberate, pointed rejoinder that he can get something done in Washington (even if it’s just a website) by going around, rather than through, the normal people.

Technology-wise, this is nothing: the choice of Amazon is incidental at best, and the money is absolute peanuts.

Process-wise, it’s a very public slap in the face to the IT managers and contractors at the fed. It’s absolutely humiliating and horrible for them: every conversation they have for the next year is going to include “But…”, and they know it. If they can’t find a way to squash Kundra, the IT incumbents are in for some scary, fast changes in how they do business.

Federal contractors and government employees HATE that; it’s the opposite of a gravy train. The system isn’t designed to be competitive; it’s designed to soak up money. Kundra is effectively going to force them to be competitive by rubbing their noses in that fact.

What it shows on a larger level is something worth remembering: cloud computing isn’t a technological breakthrough as much as it is a process breakthrough. Cloud users may find it neat that Amazon can do what it does with Xen, for example, but fundamentally, they don’t care that much; they’re just there to get the fast, cheap, no-commitment servers and use them. And that’s what Kundra has done with this site (OK, he picked a contractor who did it, but anyway).

There are probably thousands of federal IT suppliers that could have built and run the site, and they would have taken their sweet time about it, and milked the coffers dry in the process, because that’s the normal process. They might have bought servers, rented space to run them, put a nice 50% (or more) margin on their costs, and delivered the site when they couldn’t duck the contract any more. That’s normal.

Kundra picking out a contractor who simply went around all that and bought IT at Amazon, cutting the projected costs and delivery time into ribbons?

That’s not normal, and that’s why cloud computing is so important.

May 5, 2010  12:07 AM

Citigroup values AWS sales at $650M in 2010

Jo Maitland

Citigroup estimates Amazon Web Services (AWS) will hit sales of $650 million in 2010, according to a recent article in Businessweek on the prospects for the cloud computing leader.

Amazon does not break out its AWS revenue, but its head start and leadership position in cloud computing mean that any indicators of how this business is doing are helpful data points for the rest of the industry.

So far, the companies using AWS are typically in the high-performance computing space: pharmaceutical firms, oil and gas companies, financial services firms and academic institutions. Web retailers and startups are also early adopters.

We’d like to hear feedback from any organization that’s testing AWS or using it on an ongoing basis, to help shape our coverage of this topic.

You can reach me at



March 30, 2010  5:03 PM

UPDATE: Net neutrality far from dead. National Broadband Plan axes net neutrality proposal?

Carl Brooks

The FCC raised eyebrows at a Congressional hearing last week by excising any mention of net neutrality (the idea that internet providers have to treat their customers and competitors fairly) from its proposed National Broadband Plan. The Broadband Plan also wants to continue the digital wireless spectrum grab, reallocating TV bands to wireless data providers.

Net neutrality is a strong area of interest for cloud computing providers, since they rely on telcos to get the computing out of their cloud and into the hands of customers. The federally mandated minimum of at least one blogger going bananas on any given topic was met, and net neutrality was declared dead.

Is it? FCC chairman Julius Genachowski is a strong proponent of net neutrality; the proposed National Broadband Plan lists “robust competition” as the first priority; and a rule change submitted by Genachowski last year governing the carrier status of telcos and ISPs has yet to be voted on. So I’ll reserve judgment on how dead any of this is.

There’s no indication that this administration or the FCC is anything but net positive on net neutrality. A working hypothesis: the term “net neutrality” has been punched into such a shapeless mush of political irritants that simply bringing it up in the plan would be a polarizing and wasteful exercise in How to Make It Impossible for a Republican to Vote for Your Idea.

The FCC’s plan says it aims to:

 “Develop disclosure requirements for broadband service providers to ensure consumers have the pricing and performance information they need to choose the best broadband offers in the market. Increased transparency will incent service providers to compete for customers on the basis of actual performance.”

If that happens, then the networks will, by default, become more neutral as providers strive to undercut each other (provided there are actually any choices left in your neighborhood or business park). That’s a long way from dead for net neutrality, even if the term is being avoided.


On April 6, the U.S. Court of Appeals for the D.C. Circuit ruled that the FCC cannot keep Comcast from discriminating against consumers based on how they use the internet. In response, the pundits went full steam ahead on the “net neutrality is dead” carnival boat, overlooking the true meaning of the ruling. My response, from Alex Howard’s post, is below:

I wanted to comment on the heart of the ruling, which is that cable ISPs (and FiOS, essentially any broadband provider) have been classified, loosely, under Title I of the Telecom Act since the Powell-era Cable Modem Order as information services instead of communications services.

The Supreme Court ruled that this classification was within the scope of the FCC’s powers (however strange it might seem in light of the intent of the law) in the Brand X decision, and today’s ruling upholds that classification. Technically, this is a very sound legal decision, which is probably why it was unanimous.

If the FCC had classified ISPs as common carriers under Title II, the same lawsuit would have gone 3-0 the other way. That is the only option the FCC has for exerting regulatory authority of this type over broadband providers.

Will that happen? It’s unclear. Genachowski isn’t averse to that, philosophically, one supposes, but it would be an epic sh*tstorm.

To sum up, the FCC clearly has the tools it needs to treat broadband providers like common carriers, and if anything, these decisions show that the courts are very consistent in upholding that authority. It simply has not done so.

So let’s TRY to remember, kids: nothing is set in stone here. The FCC still retains the power and the ability to effect net neutrality by any number of means, certainly including, but not limited to, Title II of the Telecommunications Act. It is a matter of regulatory policy, not settled law.

The current administration is far more likely to take a pragmatic approach, shoring up competition in small steps, as it clearly intends to do in the National Broadband Plan, before it uncorks the inevitably contentious idea of reclassifying ISPs as communications providers instead of information providers.

So quit saying “net neutrality is dead, OMG”, please. It isn’t helpful or accurate.

March 23, 2010  10:30 PM

Cloud spending plans revealed at Cloud Slam ’10

Jo Maitland

If you’re interested in hearing about enterprise IT plans for cloud computing this year, check out the Cloud Slam ’10 virtual trade show happening this week.

I will be giving the TechTarget keynote presentation on March 25 at 3:30 p.m. EST, covering our cloud purchasing intentions survey data. Over 500 members of our audience completed the survey, answering more than 50 questions on their plans for public and private cloud adoption. There are some interesting trends we’d be happy to share with you.

Cloud Slam ’10 is offering more than 100 expert session presentations on the rapidly shifting world of cloud-based IT and business strategies. Topics include cloud computing system integration, private vs. public cloud computing, setting up a channel for hybrid clouds and secure cloud environment interoperability.

Register today!

February 26, 2010  11:20 PM

CA’s $100 million cloud wager

Carl Brooks

On Wednesday, $4 billion software provider CA bought 20-man 3Tera; analysts reported that CA had paid a valuation of around 30 times revenue for the cloud software platform maker. Independent sources now confirm the price was a cool $100 million, with terms (cash or stock) as yet undisclosed.

Gut-check this with simple math: 3Tera reported around 80 paying customers, largely small and mid-sized managed service providers (MSPs). They’d have to be paying around $40,000 a year on average (by no measure a startling price for an enterprise software installation) to bring in $3.2 million a year, which, multiplied by 30, lands right around the reported $100 million.
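
For readers who want to run that gut-check themselves, here is a minimal sketch of the same arithmetic in Python; the 80 customers and 30x multiple are from the reports above, and the $40,000 average fee is the assumption being tested:

    # Back-of-the-envelope check on the reported 3Tera valuation.
    customers = 80              # reported paying customers
    avg_annual_fee = 40_000     # assumed average spend per customer, in dollars
    revenue_multiple = 30       # multiple analysts reportedly applied

    annual_revenue = customers * avg_annual_fee          # $3.2 million
    implied_price = annual_revenue * revenue_multiple    # roughly $96 million
    print(f"Estimated annual revenue: ${annual_revenue:,}")
    print(f"Implied purchase price:   ${implied_price:,}")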

That kind of valuation may make stock analysts cringe, since any firm that looks like it’s wasting its capital cannot be considered to have a sound growth strategy. But it’s a great get from the technology side.

Was $100 million too much to pay for 3Tera?

The short answer is no: it was a unique technology with a proven (albeit modest) track record, and it fit a piece of the puzzle CA wanted for cloud computing: point-and-click, one-size-fits-all infrastructure. It’s not as if there were dozens of 3Teras floating around to spark a bidding war, and there’s not yet a bubble to artificially inflate the worth of cloud computing technology. CA simply decided it needed this and put cash on the barrel until 3Tera said yes.

However, the figure surely changes the story from “CA snaps up golden opportunity” to “CA just sunk a pile into a future scenario.” CA has $4 billion in revenue and approximately $1 billion in net tangible assets. It just invested a significant portion of that into a software company with 80 customers and (basically) a nice-looking Web portal product, and it is betting that the enterprise appetite for private cloud will exceed predictions.

Conservative estimates for cloud spending over the next few years hover around $40 billion to $50 billion, or 10% to 15% of the overall IT market.

By far the largest part of that cash will go right down the pipe to Software as a Service, leaving a very poor table indeed for infrastructure plays, especially when HP, IBM, and EMC/Cisco/VMware are sitting down to eat with you.

It’s quite possible that the enterprise appetite for what is now considered private cloud will become a big tent for enterprise IT overall, and make those kinds of figures look undercooked, but any way you slice it, CA has a lot riding on this buy. It’s certainly brightened the days of CEOs of small companies everywhere, I’ll say that much.

“3Tera’s impressive exit is validation of the tremendous opportunity facing all cloud startups,” said ubiquitous cloudketeer Reuven Cohen, who also makes cloud infrastructure platform software.

A new CA

On a brighter note, this is a definitive sign that CA has come around from the old days.

CA’s new acquisitions have been marked by caution, generosity (!), foresight, and a good attitude toward the technology and the talent that’s coming in. Of Oblicore, NetQoS, Cassatt and now 3Tera, I’m fairly sure the majority of those firms’ employees still work at CA if they desire to do so. CA spokesman Bob Gordon said that all twenty of 3Tera’s employees would stay on with CA, and CEO Barry X Lynn will stay on for a transitional period.

Let’s compare that to, say, 1999, when CA would have systematically lured away or undersold all of 3Tera’s customers, bought up their building lease and cut off the heat, shot the CFO’s dog and then bought the company for $6 and a green apple before firing everyone by Post-it note and carving IP out of the code like an irritable Aztec priest. On a Monday.

We’ve come a long way since then, for sure. Congratulations to both companies — to one for the windfall, the other for the bold commitment.

February 16, 2010  9:20 PM

Two free cloud beta services to check out

Jo Maitland

Cloud computing services are popping up like daisies these days. The good part is that many of them are launching with a free beta service, which means you can try before you buy and, more importantly, get some valuable experience with cloud-based IT services.

The first one to check out is an expandable NAS appliance from Natick, MA-based Nasuni, which connects to the cloud for backup, restore and disaster recovery purposes. Active data is cached in the appliance on-site, maintaining the availability of data that’s required day to day, while older data is sent over the wire to the cloud service provider of your choice. Right now that could be Amazon (S3), Iron Mountain or Nirvanix, while Nasuni works on building more cloud partners.

For anyone with a lot of file-based data who is tired of provisioning, managing and paying for yet another NAS filer, this is an interesting service to check out. Our sister site covered the company’s launch; for more details, read the story “Nasuni Filer Offers Cloud Storage Gateway for NAS.”
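
To make the “active data stays on the local appliance, older data ages out to the cloud” idea concrete, here is a minimal conceptual sketch in Python. This is not Nasuni’s implementation or API, just an illustration of a small LRU-style cache sitting in front of a simulated cloud object store:

    from collections import OrderedDict

    class TieredStore:
        """Toy cache-plus-cloud gateway: hot files stay local, cold files are
        evicted to a backing object store (simulated here with a dict)."""

        def __init__(self, cache_slots=3):
            self.cache = OrderedDict()   # local appliance cache, most recent last
            self.cloud = {}              # stand-in for S3, Iron Mountain or Nirvanix
            self.cache_slots = cache_slots

        def write(self, name, data):
            self.cache[name] = data
            self.cache.move_to_end(name)
            while len(self.cache) > self.cache_slots:
                cold_name, cold_data = self.cache.popitem(last=False)
                self.cloud[cold_name] = cold_data   # age out over the wire

        def read(self, name):
            if name in self.cache:                  # cache hit: served locally
                self.cache.move_to_end(name)
                return self.cache[name]
            data = self.cloud[name]                 # cache miss: pull back from the cloud
            self.write(name, data)
            return data

    store = TieredStore()
    for i in range(5):
        store.write(f"file{i}", b"...")
    print(sorted(store.cache))   # the three most recently written files
    print(sorted(store.cloud))   # older files pushed to the cloud tier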

On a different note, people using EC2 instances might be interested to check out how to get more utilization out of them with a free service called Silverline.
IT shops often end up sizing EC2 servers just as they would in a traditional data center: to meet peak application demand, the servers are over-provisioned, which means spare cycles are costing money.

Silverline creates a virtual background container on any EC2 instance. When an application is run in this background container it can only use the spare cycles. This guarantees that what was already running on the instance will run unaffected, while the spare cycles are used by the application(s) placed in the virtual background container. The company claims EC2 customers can get more from the servers they are already paying for.

And one advantage over EC2 Spot Pricing is that Silverline’s virtual background container is persistent, whereas Spot Instances can be terminated based on pricing.
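
Silverline’s mechanism is proprietary, but the “background work only gets the spare cycles” idea can be approximated on a stock Linux EC2 instance with the kernel’s idle scheduling class. A minimal sketch in Python, assuming Linux and Python 3; this is an analogy for the concept, not Silverline’s API, and the batch command is a placeholder:

    import os
    import subprocess

    def run_in_background_class(cmd):
        """Launch a batch job under SCHED_IDLE so it only consumes CPU time
        that the foreground workload on the instance leaves unused."""
        def demote():
            # SCHED_IDLE is the lowest-priority scheduling class on Linux; the
            # scheduler gives these threads CPU only when nothing else wants it.
            os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))
        return subprocess.Popen(cmd, preexec_fn=demote)

    if __name__ == "__main__":
        # Placeholder batch workload; substitute your own command.
        proc = run_in_background_class(["python3", "-c", "print(sum(range(10**7)))"])
        proc.wait()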

February 5, 2010  1:46 AM

Microsoft and NSF giving away Azure

Carl Brooks

The National Science Foundation and Microsoft have announced that they will be giving away Azure resources to researchers in an attempt to “shift the dialogue to what the appropriate public/private interaction” is for research computing, according to Dan Reed, Corporate Vice President for Extreme Computing (yes, really) at Microsoft.

For three years, Microsoft will give away an unspecified amount of storage and support, as well as CPU time, for research applications to be run on Azure. NSF assistant director for Computer & Information Science & Engineering Jeanette Wing suggested that cloud computing platforms, and Azure specifically, should be considered a better choice for research facilities than building and maintaining their own.

“It’s just not a good use of money or space,” she said.

Look at the Large Hadron Collider, said Wing, which has 1.5 petabytes of data already, or digital research projects that can generate an exabyte of data in a week or less. She urged researchers to use Azure to figure out new ways of coping with all that information.

This is a nice, charitable gesture, not unlike Amazon’s occasional giveaways of EC2 instances and bandwidth to worthy scientific projects. But there are significant caveats that Microsoft and the NSF have papered over.

First, from all reports, Azure is a very large data center operation, possibly as large as some of the less prestigious high-performance computing facilities that researchers use around the world. Unless Microsoft is giving away the whole thing, it’s not going to make much of a dent in the demand.

Second, go down to the local university science department and tell a professor he or she can hop on a virtualized, remote Windows platform and process their experiment data. Go on, I dare you.
99% of experimental, massive-data, high performance computing is done on open source, *nix-based platforms for some very sound reasons. Microsoft won’t gain much traction suggesting that researchers can do better on Azure. It may find some eggheads desperate for resources, but that’s a different story.

So what is the real import, the overall aim of setting up Azure as a platform to host boatloads of raw data and let people play with it? Both Reed and Wing said they wanted to see researchers with new ideas on how to search and manage these large amounts of data.
Well, that makes more sense: go sign up for a grant, but read the fine print, or you could be inventing the next Google, brought to you by Microsoft…

January 25, 2010  11:26 PM

AWS experimenting with actually supporting DNS for users

Carl Brooks

Sharp eyes to Shlomo Swidler, who posted an update to an old thread and an old complaint about AWS: getting lumped into spam blacklists. EC2 staffer “Steve@AWS” announced the availability of a private beta today to institute PTR records for selected users, to assist in getting them off real-time blacklists. PTR support is a standard DNS tool conspicuously absent in AWS.

A major problem for AWS and EC2 since their inception is that users with the publicly assigned EC2 IP addresses handed out by Amazon are extremely susceptible to getting stuck on spam blacklists, like Spamhaus or Trend Micro’s (Spamhaus is by far the more influential).

Read coverage about the most severe blacklist to date here.

It’s been an ongoing problem because Amazon doesn’t provide the usual level of service for users running websites or sending email from within EC2. Most hosts provide ways for an email server to politely verify that it does, in fact, originate with the domain name it says it does. PTR records do that, and they have become a de facto standard for email hosts. Without them, a spam complaint can knock entire swaths of IP addresses out of the daylight and get them tagged as spam sources.
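
If you want to see what a receiving mail server sees, a reverse (PTR) lookup is a one-liner with the standard library. A minimal sketch in Python; the address below is a documentation placeholder, so substitute your instance’s public IP:

    import socket

    def reverse_dns(ip):
        """Return the PTR record (reverse DNS name) for an IP, or None if absent."""
        try:
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
            return hostname
        except socket.herror:
            return None   # no PTR record: the situation that helps get EC2 IPs blacklisted

    print(reverse_dns("192.0.2.10"))   # placeholder address from the documentation range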

The only way for hosts to get unflagged after their IPs have been dirtied up with the spammer label is for the host provider to individually verify the address and notify the blacklist provider that it is good. Amazon, being very highly automated and very popular, doesn’t do that well, and it took the blackout by Spamhaus last year to force the cloud provider to open up and start reforming its practice of not responding to customers having email trouble.

Hopefully this private beta is a sign that Amazon is going further and moving toward accepting more of its responsibilities as a web host. After all, giving out the address means you need to police the streets, collect the garbage and make sure the mail can go through. Hosters have shouldered this since the telecoms washed their hands of responsibility for spam a decade ago; it’s well past time for Amazon to join in.

UPDATE: Amazon confirms it is adding new features for DNS and conducting a private beta for selected users.

January 15, 2010  4:03 PM

Benchmarks for Cloud — more, please

Carl Brooks

Rackspace has commissioned a set of benchmarks to highlight performance differences between Rackspace and Amazon EC2.

“Benchmarks!” you explode, your exquisitely sensitive logical faculties coruscating in outrage. “Benchmarks are vendor-driven popularity contests that cherry-pick tests for results! It’s a swamp of dismal nonsense, a perpetual statistical hellhole that means nothing! Benchmarks make engineers maaaad!!” you say.

Well, fine, then you do it, because there aren’t any for cloud yet, and I don’t care if Rackspace did go have it done. They had sound reasons, there is precious little precedent, and the results are informative, useful and not overtly flawed.

Analyst and high-traffic expert Matthew Sacks carried out the benchmarks on his website, The Bitsource. Overall, the tests were rudimentary and carefully controlled. He tested the time it took to compile a Linux kernel on every type of instance offered by the two services, and used IOzone to measure read/write performance on the storage systems.

Sacks, a systems administrator, said he developed the methodology himself.
“The idea behind it is that we can get a pretty good idea of how these instances stack up,” he said, but it’s not a comprehensive metric. “I decided on kernel compilation as a general measure of CPU,” he said, because it is well understood, fast and easy to replicate, and uncomplicated in its results.
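
For anyone who wants to reproduce the flavor of the CPU test, timing a repeatable build is easy to script. A minimal sketch in Python, not Sacks’ actual harness; the kernel source path and the -j value are assumptions you would adjust for the instance being tested:

    import subprocess
    import time

    def time_build(cmd, cwd, runs=3):
        """Run a build command several times and return wall-clock durations."""
        durations = []
        for _ in range(runs):
            subprocess.run(["make", "clean"], cwd=cwd, check=True,
                           stdout=subprocess.DEVNULL)
            start = time.perf_counter()
            subprocess.run(cmd, cwd=cwd, check=True, stdout=subprocess.DEVNULL)
            durations.append(time.perf_counter() - start)
        return durations

    # Hypothetical paths and flags: point cwd at an unpacked, configured kernel
    # tree and match -j to the instance's vCPU count.
    times = time_build(["make", "-j4"], cwd="/usr/src/linux")
    print([f"{t:.1f}s" for t in times])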

Sacks said that Rackspace’s motivation was good old-fashioned boosterism. “They had received reports from their customers that Rackspace was way better than EC2,” he said, so the company decided to test that out with a third party. The results seem to bear them out.

“There are clear wins in CPU and disk performance,” he said. Rackspace beat EC2 instances in compile time by a slender margin in every case but one, and showed 5 to 10 times the CPU availability. Disk read/write performance was also higher, sometimes twice as fast, although random-access tests were much closer, suggesting that throughput on EC2 lags behind Rackspace even if request execution doesn’t.

However, Sacks took pains to say that his tests did not mean an application running on EC2 could be shifted to Rackspace and save either time or money by default. He said that users always have to consider the application, not just the infrastructure, and that he wanted his tests to be a resource for people to come and compare against their specific needs. “The variables are so great it’s hard to come up with a standard for testing [cloud],” he said.

“What I would like to do is test more providers,” said Sacks. Perhaps he’ll get the opportunity; he said the testing and review only took a few weeks, and it wouldn’t be hard to repeat for different platforms.

Sacks’ two-man experiment aside, I can think of at least one cloud out there that could make EC2 and Rackspace look like a snail chasing molasses when it comes to kernel compilation and disk I/O:
“NewServers’ Bare Metal Cloud makes Cloud Servers and EC2 look like two sick men in a sack race” sounds catchy to me. I wonder if Rackspace will sponsor that benchmark?

But that’s the point: I want more third-party tests like Sacks’. I want more and more and more independent review of what providers say and what they do. I can find more compelling independent information about the USB stick in my pocket than about any one of the clouds that want me to trust my business to them.

It only took Matthew Sacks a few weeks to make a clean, well documented and useful set of benchmarks, even if that came at the behest of a vendor.

So let’s make more.
