Citigroup estimates Amazon Web Services (AWS) will hit sales of $650 million in 2010, according to a recent article in Businessweek on the prospects for the cloud computing leader.
Amazon does not break out its AWS revenue, but its head start and leadership position in cloud computing mean that any indicators of how this business is doing are helpful data points for the rest of the industry.
So far, the companies using AWS are typically in the high-performance computing space: pharmaceutical firms, oil and gas companies, financial services firms and academic institutions. Web retailers and startups are also early adopters.
We’d like to hear feedback from any organization that’s testing AWS or using it on an ongoing basis, to help shape our coverage of this topic on SearchCloudComputing.com.
You can reach me at firstname.lastname@example.org.
The FCC raised eyebrows at a Congressional hearing last week by excising any mention of net neutrality—the idea that internet providers have to treat their customers and competitors fairly—from its proposed National Broadband Plan. The plan also continues the digital wireless spectrum grab, reallocating TV bands to wireless data providers.
Net neutrality is a strong area of interest for cloud computing providers, since they rely on telcos to get the computing out of their cloud and into the hands of customers. The federally mandated minimum of at least one blogger going bananas on any topic was met and net neutrality was declared dead.
Is it? FCC chairman Julius Genachowski is a strong proponent of net neutrality; the proposed National Broadband Plan lists “robust competition” as its first priority; and a rule change governing the carrier status of telcos and ISPs, submitted by Genachowski last year, has yet to be voted on. So I’ll reserve judgment on how dead any of this is.
There’s no indication this administration or the FCC is anything but net positive on net neutrality. A working hypothesis: the term ‘net neutrality’ has been punched into such a shapeless mush of political irritants that simply bringing it up in the plan would be a polarizing and wasteful exercise in How to Make It Impossible for a Republican to Vote for Your Idea.
The FCC’s plan says it aims to:
“Develop disclosure requirements for broadband service providers to ensure consumers have the pricing and performance information they need to choose the best broadband offers in the market. Increased transparency will incent service providers to compete for customers on the basis of actual performance.”
If that happens, then the networks will, by default, become more neutral as providers strive to undercut each other (provided there are actually any choices left in your neighborhood or business park). That’s a long way from dead for net neutrality, even if the term is being avoided.
On April 6th, the U.S. Court of Appeals for the D.C. Circuit ruled that the FCC cannot keep Comcast from discriminating against consumers based on how they use the internet. In response, the pundits went full steam ahead on the Net Neutrality Is Dead carnival boat, overlooking the true meaning of the ruling. My response, below, is from Alex Howard’s post:
I wanted to comment on the heart of the ruling, which is that cable ISPs (and FiOS, essentially any broadband provider) have been classified, loosely, under Title I of the Telecom Act as information services rather than communication services ever since the Powell-era Cable Modem Order.
The Supreme Court ruled in Brand X that the classification was within the scope of the FCC’s powers (however strange it might seem in light of the intent of the law), and today’s decision upholds that classification. Technically, this is a very sound legal decision, which is probably why it was unanimous.
If the FCC had classified ISPs as common carriers under Title II, the same lawsuit would have gone 3-0 the other way. That is the only option the FCC has for exerting regulatory authority of this type over broadband providers.
Will that happen? It’s unclear. Genachowski isn’t averse to that, philosophically, one supposes, but it would be an epic sh*tstorm.
To sum up, the FCC clearly has the tools it needs to classify broadband providers as common carriers, and if anything, these decisions show that the courts are very consistent in upholding that authority. It simply has not done so.
So let’s TRY to remember, kids: nothing is set in stone here. The FCC still retains the power and the ability to effect net neutrality by any number of means, certainly including, but not limited to, Title II of the Telecommunications Act. It is a matter of regulatory policy, not settled law.
The current administration is far more likely to take a pragmatic approach, shoring up competition in small steps, as it clearly intends to do in the National Broadband Plan, before it uncorks the inevitably contentious idea of reclassifying ISPs as communications providers instead of information providers.
So quit saying “net neutrality is dead, OMG”, please. It isn’t helpful or accurate.
If you’re interested in hearing about enterprise IT plans for cloud computing this year, check out the Cloud Slam ’10 virtual trade show happening this week.
I will be giving the TechTarget keynote presentation on March 25th at 3:30 p.m. EST, covering our cloud purchasing intentions survey data. Over 500 members of our audience completed the survey, answering more than 50 questions on their plans for public and private cloud adoption. There are some interesting trends we’d be happy to share with you.
Cloud Slam ’10 is offering more than 100 expert session presentations on the rapidly shifting world of cloud-based IT and business strategies. Topics include cloud computing system integration, private vs. public cloud computing, setting up a channel for hybrid clouds and secure cloud environment interoperability.
On Wednesday, $4 billion software provider CA bought 20-person 3Tera; analysts reported that CA had paid around 30 times revenue for the cloud software platform maker. Independent sources now confirm the price was a cool $100 million, terms (cash or stock) as yet undisclosed.
Gut-check this with simple math — 3Tera reported around 80 paying customers, largely small and mid-sized managed service providers (MSPs). They’d have to be paying around $40,000 a year on average, by no measure a startling price for an enterprise software installation, to bring in $3.2 million a year, which, multiplied by 30, brings us right around the reported $100 million.
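For the skeptical, here’s that back-of-the-envelope math as a quick sketch (the customer count and the 30x multiple come from the reports; the $40,000 average contract value is my assumption):

```python
# Back-of-the-envelope check on the reported $100 million price tag.
customers = 80                # reported 3Tera customer count
avg_annual_contract = 40000   # assumed average annual price per customer
annual_revenue = customers * avg_annual_contract   # ~$3.2 million a year
implied_valuation = annual_revenue * 30            # 30x revenue multiple

print(f"Estimated annual revenue: ${annual_revenue:,}")   # $3,200,000
print(f"Implied valuation:        ${implied_valuation:,}")  # $96,000,000 -- right around $100M
```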
That kind of valuation may make stock analysts cringe, since a) any firm that looks like it’s wasting its capital cannot be considered to have a sound growth strategy and b) Pets.com. But it’s a great get from the technology side.
Was $100 million too much to pay for 3Tera?
The short answer is no — it was a unique technology with a proven (albeit modest) track record, and it fit a piece of the puzzle CA wanted for cloud computing — point-and-click, one-size-fits-all infrastructure. It’s not like there were dozens of 3Teras floating around to spark a bidding war, and there’s not yet a bubble to artificially inflate the worth of cloud computing technology. CA simply decided it needed this and put cash on the barrel until 3Tera said yes.
However, the figure surely changes the story from “CA snaps up golden opportunity” to “CA just sunk a pile into a future scenario.” CA has $4 billion in revenue and approximately $1 billion in net tangible assets. It just invested a significant portion of that into a software company with 80 customers and a nice-looking Web portal product (basically), and it is betting that the enterprise appetite for private cloud will exceed predictions.
Conservative estimates for cloud spending over the next few years hover around $40 billion to $50 billion, or 10% to 15% of the overall IT market.
By far the largest part of that cash will go right down the pipe to Software as a Service, leaving a very poor table indeed for infrastructure plays, especially when HP, IBM, and EMC/Cisco/VMware are sitting down to eat with you.
It’s quite possible that the enterprise appetite for what is now considered private cloud will become a big tent for enterprise IT overall, and make those kinds of figures look undercooked, but any way you slice it, CA has a lot riding on this buy. It’s certainly brightened the days of CEOs of small companies everywhere, I’ll say that much.
A new CA
On a brighter note, this is a definitive sign that CA has come around from the old days.
CA’s new acquisitions have been marked by caution, generosity(!) and foresight, and a good attitude toward the technology and the talent coming in. Of Oblicore, NetQoS, Cassatt and now 3Tera, I’m fairly sure the majority of those firms’ employees still work at CA if they desire to do so. CA spokesman Bob Gordon said that all twenty of 3Tera’s employees would stay on with CA and that CEO Barry X Lynn would remain for a transitional period.
Let’s compare that to, say, 1999, when CA would have systematically lured away or undersold all of 3Tera’s customers, bought up their building lease and cut off the heat, shot the CFO’s dog and then bought the company for $6 and a green apple before firing everyone by Post-It note and carving IP out of the code like an irritable Aztec priest. On a Monday.
We’ve come a long way since then, for sure. Congratulations to both companies — to one for the windfall, the other for the bold commitment.
Cloud computing services are popping up like daisies these days. The good part is many of them are launching with a free beta service, which means you can try before you buy and, more importantly, get some valuable experience with cloud-based IT services.
The first one to check out is an expandable NAS appliance from Natick, MA-based Nasuni, which connects to the cloud for backup, restore and disaster recovery purposes. Active data is cached in the appliance on-site, maintaining the availability of data that’s required day to day, while older data is sent over the wire to the cloud service provider of your choice. Right now that could be Amazon (S3), Iron Mountain or Nirvanix, while Nasuni works on building more cloud partners.
For anyone with a lot of file-based data who’s tired of provisioning, managing and paying for yet another NAS filer, this is an interesting service to check out. Our sister site SearchStorage.com covered the company’s launch. For more details, read this story (Nasuni Filer Offers Cloud Storage Gateway for NAS).
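To illustrate the general cache-and-tier idea (not Nasuni’s implementation, just the concept), here’s a minimal sketch in Python using Amazon S3 via the classic boto library; the bucket name and cache directory are made up for the example:

```python
# Conceptual sketch of a cache-and-tier gateway: keep active files in a local
# cache directory and push copies to cloud storage (S3 here via classic boto).
# Bucket name and paths are placeholders; this is not Nasuni's code.
import os
import boto
from boto.s3.key import Key

CACHE_DIR = "/var/cache/filer"    # local "hot" cache (assumed path)
BUCKET = "example-filer-backup"   # hypothetical bucket name

conn = boto.connect_s3()          # uses credentials from the environment
bucket = conn.get_bucket(BUCKET)

def write_file(name, data):
    """Write to the local cache first, then copy to the cloud tier."""
    local_path = os.path.join(CACHE_DIR, name)
    with open(local_path, "wb") as f:
        f.write(data)
    key = Key(bucket)
    key.key = name
    key.set_contents_from_filename(local_path)   # cloud copy for backup/DR

def read_file(name):
    """Serve from the cache if present; otherwise pull back from the cloud."""
    local_path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(local_path):
        key = bucket.get_key(name)
        key.get_contents_to_filename(local_path)  # restore into the cache
    with open(local_path, "rb") as f:
        return f.read()
```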
On a different note, people using EC2 instances might be interested in checking out how to get more utilization out of them with a free service called Silverline.
IT shops often end up sizing EC2 servers just as they would in a traditional data center: to meet peak application demand, the servers are overconfigured. This means spare cycles are costing money.
Silverline creates a virtual background container on any EC2 instance. When an application is run in this background container it can only use the spare cycles. This guarantees that what was already running on the instance will run unaffected, while the spare cycles are used by the application(s) placed in the virtual background container. The company claims EC2 customers can get more from the servers they are already paying for.
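Silverline’s mechanics are its own, but the underlying idea (let a batch job soak up only the cycles the foreground workload isn’t using) can be roughly approximated on a single Linux instance with ordinary process priorities. A minimal sketch, with a hypothetical batch job standing in for the real workload:

```python
# Rough approximation of the "background container" idea on a single Linux
# instance: launch a batch job at the lowest CPU and I/O priority so it only
# consumes cycles the foreground workload isn't using. This is NOT Silverline's
# actual mechanism, just an illustration of the concept with standard OS tools.
import subprocess

batch_job = ["python", "crunch_logs.py"]   # hypothetical spare-cycle workload

# nice 19 plus idle-class I/O scheduling keeps the job out of the way of
# whatever the instance was already doing.
subprocess.Popen(["nice", "-n", "19", "ionice", "-c", "3"] + batch_job)
```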
And one advantage over EC2 Spot Pricing is that Silverline’s virtual background container is persistent, whereas spot instances can be terminated based on pricing.
The National Science Foundation and Microsoft have announced they will be giving away Azure resources to researchers in an attempt to “shift the dialogue to what the appropriate public/private interaction” is for research computing, according to Dan Reed, Corporate Vice President for Extreme Computing (yes, really) at Microsoft.
For three years, Microsoft will give away an unspecified amount of storage and support, as well as CPU time, for research applications to be run on Azure. NSF assistant director for Computer & Information Science & Engineering Jeanette Wing suggested that cloud computing platforms, and Azure in particular, should be considered a better choice for research facilities than building and maintaining their own.
“It’s just not a good use of money or space,” she said.
Look at the Large Hadron Collider, said Wing, which has 1.5 petabytes of data already, or digital research projects that can generate an exabyte of data in a week, or less. She urged researchers to use Azure to figure out new ways of coping with all that information.
This is a nice, charitable gesture, not unlike Amazon’s occasional giveaways of EC2 instances and bandwidth to worthy scientific projects. But there are significant caveats that Microsoft and the NSF have papered over.
First, from all reports, Azure is a very large data center operation, possibly as large as some of the less prestigious high-performance computing facilities that researchers use around the world. Unless Microsoft is giving away the whole thing, it’s not going to make much of a dent in the demand.
Second, go down to the local university science department and tell a professor he or she can hop on a virtualized, remote Windows platform and process their experiment data. Go on, I dare you.
99% of experimental, massive-data, high performance computing is done on open source, *nix-based platforms for some very sound reasons. Microsoft won’t gain much traction suggesting that researchers can do better on Azure. It may find some eggheads desperate for resources, but that’s a different story.
So what is the real import, the overall aim of setting up Azure as a platform to host boatloads of raw data and let people play with it? Both Reed and Wing said they wanted to see researchers with new ideas on how to search and manage these large amounts of data.
Well, that makes more sense – go sign up for a grant, but read the fine print, or you could be inventing the next Google, brought to you by Microsoft…
Sharp eyes to Shlomo Swidler, who posted an update to an old thread and an old complaint on AWS – getting lumped into spam blacklists. EC2 staffer “Steve@AWS” announced the availability of a private beta today to institute PTR records for selected users, to assist in getting them off real-time blacklists – a standard DNS tool conspicuously absent from AWS.
A major problem for AWS and EC2 since its inception is that users with the publicly generated EC2 IP addresses handed out by Amazon are extremely susceptible to getting stuck on spam blacklists, like Spamhaus or Trend Micro’s (Spamhaus is by far the more influential).
Read coverage about the most severe blacklist to date here.
It’s been an ongoing problem because Amazon doesn’t provide the usual level of service for users running websites or sending email from within EC2. Most hosts provide ways for an email server to politely verify that it does, in fact, originate with the domain name it says it does. PTR records do that, and they have become a de facto standard for email hosts. Without them, a spam complaint can knock entire swaths of IP addresses out of the daylight and get them tagged as spam sources.
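If you want to see what the receiving side actually checks, the forward-confirmed reverse DNS test is easy to script. A minimal sketch in Python, with a placeholder IP address:

```python
# Forward-confirmed reverse DNS: look up the PTR record for an IP, then make
# sure the name it returns resolves back to that same address. Receiving mail
# servers run essentially this check before trusting your hostname.
import socket

def ptr_matches(ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # PTR (reverse) lookup
    except socket.herror:
        return False                                     # no reverse record at all
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # A records for that name
    except socket.gaierror:
        return False                                     # name doesn't resolve back
    return ip in forward_ips

# Placeholder address from the documentation range, for illustration only:
print(ptr_matches("203.0.113.10"))
```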
The only way for hosts to get unflagged after their IPs are dirtied up with the spammer label is for the host provider to individually verify the address is good and notify the blacklist provider. Amazon, being very highly automated and very popular, doesn’t do that well, and it took the blackout by Spamhaus last year to force the cloud provider to open up and start to reform its practice of not responding to customers having email trouble.
Hopefully this private beta is a sign that Amazon is going further and moving toward accepting more of its responsibilities as a web host – after all, giving out the addresses means you need to police the streets, collect the garbage and make sure the mail can go through. Hosters have taken this on their shoulders since the telecoms washed their hands of responsibilities around spam a decade ago – it’s well past time for Amazon to join in.
UPDATE: Amazon confirms they are adding new features for DNS and conducting a private beta for selected users.
“Benchmarks!” you explode, your exquisitely sensitive logical faculties coruscating in outrage. “Benchmarks are vendor-driven popularity contests that cherrypick tests for results! It’s a swamp of dismal nonsense, a perpetual statistical hellhole that means nothing! Benchmarks make engineers maaaad!!” you say.
Well, fine, then you do it, because there aren’t any for cloud yet, and I don’t care if Rackspace did go have it done. They had sound reasons, there is precious little precedent, and the results are informative, useful and not overtly flawed.
Analyst and high-traffic expert Matthew Sacks carried out the benchmarks on his website The Bitsource. Overall, the tests were rudimentary and carefully controlled. He timed how long it took to compile a Linux kernel on every type of instance between the two services, and used IOzone to measure read/write performance on the storage systems.
Sacks, a systems administrator at Edmunds.com, said he developed the methodology himself.
“The idea behind it is that we can get a pretty good idea of how these instances stack up,” he said, but it’s not a comprehensive metric. “I decided on kernel compilation as a general measure of CPU,” he said, because it was well understood, easy and fast to replicate, and uncomplicated in its results.
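For anyone who wants to reproduce the spirit of that CPU test, timing a kernel build is straightforward to script. A minimal sketch, assuming a kernel source tree is already unpacked and configured at the path shown:

```python
# Time a Linux kernel compile as a crude CPU benchmark, in the spirit of the
# Bitsource tests. Assumes a configured kernel tree sits in KERNEL_DIR; run a
# few passes and average them to smooth out noise.
import subprocess
import time

KERNEL_DIR = "/usr/src/linux"   # assumed path to a configured kernel tree
RUNS = 3

for run in range(RUNS):
    subprocess.check_call(["make", "-C", KERNEL_DIR, "clean"],
                          stdout=subprocess.DEVNULL)
    start = time.time()
    subprocess.check_call(["make", "-C", KERNEL_DIR, "-j4"],
                          stdout=subprocess.DEVNULL)
    print(f"run {run + 1}: {time.time() - start:.1f} seconds")
```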
Sacks said that Rackspace’s motivation was good old-fashioned boosterism. “They had received reports from their customers that Rackspace was way better than EC2,” he said, so they decided to test that out with a third party. The results seem to bear them out.
“There are clear wins in CPU and disk performance,” he said. Rackspace beat EC2 instances in compile time by a slender margin in every case but one, and showed 5 to 10 times the CPU availability. Disk read/write performance was also higher, sometimes twice as fast, although random access tests were much closer, suggesting throughput on EC2 lags behind Rackspace even if data request execution doesn’t.
However, Sacks took pains to say that his tests did not mean that an application running on EC2 could be shifted to Rackspace and save either time or money by default. He said that users always had to consider the application, not just the infrastructure, and he wanted his tests to be a resource for people to come and compare their specific needs. “The variables are so great it’s hard to come up with a standard for testing [cloud],” he said.
“What I would like to do is test more providers,” said Sacks. Perhaps he’ll get the opportunity – he said the testing and review only took a few weeks and wouldn’t be hard to repeat for different platforms.
Sacks’ two-man experiment aside, I can think of at least one cloud out there that could make EC2 and Rackspace look like a snail chasing molasses when it comes to kernel compilation and disk I/O:
“NewServer’s Bare Metal Cloud makes Cloud Servers and EC2 look like two sick men in a sack race” sounds catchy to me. I wonder if Rackspace will sponsor that benchmark?
But that’s the point – I want more third-party tests like Sacks’. I want more and more and more independent review of what providers say and what they do. I can find more compelling independent information about the USB stick in my pocket than about any one of the clouds that want me to trust my business to them.
It only took Matthew Sacks a few weeks to make a clean, well documented and useful set of benchmarks, even if that came at the behest of a vendor.
So let’s make more.
It’s fun being at the top of a technology wave. The past year in cloud computing has moved with the giddy, inexorable pace that marks a major technological shift in how we use computing power, and more importantly, how we think about it.
Cloud computing, barely a whisper three or four years ago, is now firmly embedded, if still nascent, in the ontology of mainstream information technology. It’s a part of any conversation about IT anywhere. Even the dyed-in-the-wool Grumpy Sysadmin(TM) will, grumpily, talk about cloud computing.
It started with Amazon, online retailer par excellence, which found a way to get IT pros what they wanted without the hassle of shipping a hundred pounds of metal and very clever sand per buy. The world had moved on to the Internet, they reasoned, so why not deliver what customers wanted – CPU cycles and plenty of bit storage – without the part they hated?
And it worked. By 2008, the tipping point was reached, and analysts officially began cramming ‘cloud’ into their IT buying predictions, which, naturally, immediately drove IT management insane trying to figure out a) what the cloud was b) what it cost and c) whether or not they needed it. That made 2009 a lot of fun.
So what happened to turn cloud computing from ridiculed buzzword to reality?
Most of us started the year wondering what the devil it was. Fortunately, the government came up with a pretty definitive answer, which should tell anyone with an ounce of sense how robust and uncomplicated the concept is. Many others jumped on the bandwagon with glee, ‘cloud washing’ any old thing with an Internet connection.
Cloud terminology hit mainstream newspapers, and the boob tube, where we got the standard expression of polite interest. It’s now on a par with ‘hacker’, ‘firewall’ and ‘servers’ for IT terms the regular press doesn’t understand but is happy to sprinkle over any tech reporting.
And that’s the long and short of it, kids. No matter what happened, cloud made sense to users, practically and economically. They bought in and they’re still buying in. Analysts and pundits weighed in, promising riches and/or wrack and ruin, security folks went through the roof at every turn, and yet, somehow, the idea makes enough sense that people don’t care. They’ll put up with the potholes for the sake of the ride; it’s still a lot better than walking.
Just so with cloud computing.
Amazon says its new EC2 Spot Instances offering lets users bid on unused EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price. The price changes based on supply and demand, so it is obviously not for everyone.
Amazon says applications suited to this kind of pricing model might be image and video processing, conversion and rendering, scientific research data processing and financial modeling and analysis, all typical EC2 use cases already.
From Amazon’s perspective, it might as well sell as much of its unused capacity as it can, even if it is just for cents on the dollar. Presumably there’s a way through the AWS console for users to be alerted when bids and pricing change, to warn them if they are about to lose their instance to someone willing to pay more for it. That sounds like a nightmare waiting to happen.
There is a feature called Persistent Request that supposedly prevents your instance from terminating before it has finished your job, but you might want to try this out a few times before you do anything real on it!
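For the curious, here’s a minimal sketch of what a persistent spot request looks like through the API, using the boto library; the bid price, AMI ID and instance type are placeholders, not recommendations:

```python
# Minimal sketch of a persistent spot request with the boto library, assuming
# AWS credentials are already configured. The AMI ID, bid price and instance
# type below are placeholders; a "persistent" request is re-submitted
# automatically if the instance is terminated because the spot price rises
# above your bid.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

requests = conn.request_spot_instances(
    price="0.05",             # your maximum bid, in dollars per hour
    image_id="ami-12345678",  # placeholder AMI
    count=1,
    type="persistent",        # vs. the default one-time request
    instance_type="m1.small",
)
print(requests)               # the pending spot instance request(s)
```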
Hats off to Amazon, though, for continuing to innovate around pricing, and during its peak season too.