Is Google a self-regulating public servant or a rapacious, unscrupulous monopolist? That was the question Eric Clemons, professor of operations and information management at The Wharton School, posed during a talk at the Supernova conference in San Francisco today.
Clemons drew a parallel to Microsoft’s ability in the 1980s and ’90s to use its dominant position in one market, operating systems, as leverage to control and dominate another, the browser market. Internet Explorer, bundled with Windows, killed Netscape, a competitor in the browser market. This anti-competitive, monopolistic behavior was the basis of the DOJ’s case against Microsoft.
A decade later, Google holds 70% of the internet search market and is launching new products: Gmail, calendaring, office apps, mobile operating systems, laptop operating systems and a cloud computing development platform, all making headway in the market and in some cases knocking out established players. Ironically, Google recently beat Microsoft to a contract for outsourced email and calendaring in Japan by bidding 40% less than Microsoft wanted for the renewal. Touché! Indeed. But it’s bittersweet.
Clemons believes Google will face increased scrutiny from the DOJ as a potentially predatory monopolist in a non-contestable market, one that could harm the competitive process, and that this will likely kick off an antitrust suit against the company.
I think the real message in the professor’s comments is that the high-tech industry moves so fast that it lends itself to monopoly, and that market players need to stay aware of when they are in danger of a Google or a Microsoft jumping in.
For instance, watch Microsoft’s Windows Azure cloud computing development platform. This could do for Platform as a Service what Google did for search.
Interop was great. It highlighted for me how far and how fast cloud has come along. It was all too brief; all the cloud sessions were slammed and debate ran high.
Panelists like AT&T’s VP for Business Development and Strategy Joe Weinman, for instance, did a splendid job laying out a cost model for computing that follows the utility model, building on the old saw that “nobody has a generator in their backyard anymore”, and arguing that computing services are subject to the same rules as the utility market.
They aren’t, for one reason paramount above all others: data is unique. It’s not a commodity. One datum of information does not equal another. You don’t care whether your neighbor washes his dishes with water drop 1 or water drop 2, but you’re sure going to care if he’s using your data set instead of his to make money.
For instance, I asked Joe Weinman what happens to his rosy cost model if net neutrality falls apart and carriers can engage in prejudicial pricing for network users. Naturally, he didn’t answer that, but it’s a prime example of the fundamental problems with treating cloud as a utility.
Of course, what happens is that AT&T makes more money and users who went headlong into public clouds get royally screwed. No large enterprise is a) unaware of this or b) going to do it.
Net neutrality is hardly the only political consideration; it’s just the one I brought up because there was a telco in the room. It’s easy to conceive of legislation that would irrevocably compromise data stored in outside repositories. Indeed, as far as the rest of the world is concerned, data stored in the US already is compromised.
Political wrangling that raises your electric bill a few cents an hour is one thing. Losing exclusivity over your company data by fiat is quite another. It’ll be a cold day in hell before any large (or even medium-sized) enterprise commits to using public compute as a utility with vulnerabilities like that. Sure, they’ll put workloads in Amazon and take them out again; they’ll shift meaningless data into these resources; but they’ll never, ever think of it the way they think about the electric company.
It’s still a useful metaphor, in a rudimentary fashion; it gets the concept of ‘on tap’ across, and it’s balm to the ears of people who never really understood their IT operations anyway. But futurologists and vendors, especially vendors who want to monetize cloud on top of carrier services, need to understand that the message has come home: we know what cloud is, and we know what the real risks are. Now, show us your answers. In the meantime, we’ll keep our ‘generators’, and our data, to ourselves.
Appirio, a cloud integrator that helps companies use cloud services, has published a map of the marketplace.
It attempts to distinguish between services that are true cloud offerings versus those that are hosted (single tenant / multi-tenant) versus private cloud offerings.
We haven’t had a chance to look at it in detail yet, but we did notice that Salesforce.com is listed under platforms but not applications. Last we checked, their CRM-in-the-cloud app was far more popular than Force.com, their development platform.
Still, it’s a great straw man that everyone should check out and give feedback on. Send suggestions to firstname.lastname@example.org.
The site consists of Paessler’s PRTG Network Monitor software running in various public cloud environments around the world. Each PRTG ‘sensor’ reports back information on performance, in real time, and displays each stream on the site for all the world to see. Founder Dirk Paessler said the idea struck him after he began testing his Windows-only software kit on newly available EC2 Windows images.
“We began to create a network of PRTG installations” he said, to see how each cloud would stack up in terms of performance. “My initial interest was to find a way to compare these clouds to each other,” he said. Public clouds were technologically diverse, and he wanted a way to visualize that in a rudimentary fashion for users.
The results, especially over time, are fascinating. Best performers for the money?
“Newservers has the highest performance,” Paessler said. “They’re an interesting case because they give you bare metal,” he said, despite selling capacity and time the same way Amazon does. That may give virtualization boosters pause for thought.
Does that mean everyone else is a poor relation? No, said Paessler. Amazon had excellent overall performance, and the major indicator for performance may be in how you consume, rather than who you consume.
“If you compare, on Amazon, a small CPU [instance] with a c1.medium, the c1.medium is giving you a lot more bang for the buck,” he said, something value-conscious cloud consumers may not realize. Different applications on different platforms may simply be better suited to one flavor of cloud over another, too. Paessler noted that his software was written with Intel processors in mind; using a provider whose cloud was based on AMD CPUs showed a steep, and misleading, performance disadvantage.
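That “bang for the buck” comparison is easy to make concrete. Here is a minimal sketch comparing compute per dollar across the two instance types Paessler mentions; the ECU ratings and hourly prices below are illustrative figures from roughly the era of this article, not authoritative or current numbers:

```python
# Rough price/performance comparison for two EC2 instance types.
# ECU (EC2 Compute Unit) ratings and hourly prices are illustrative
# assumptions, not current AWS figures.
instances = {
    "m1.small":  {"ecu": 1.0, "price_per_hour": 0.10},
    "c1.medium": {"ecu": 5.0, "price_per_hour": 0.20},
}

def ecu_per_dollar(name):
    """Compute units delivered per dollar-hour for a given instance type."""
    spec = instances[name]
    return spec["ecu"] / spec["price_per_hour"]

for name in instances:
    print(f"{name}: {ecu_per_dollar(name):.1f} ECU per dollar-hour")
```

On these assumed numbers the c1.medium delivers more than twice the compute per dollar, which is exactly the kind of gap Paessler says value-conscious buyers overlook.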
Yet another twist was that cloud performance varied to a great degree as tests moved from one cloud to another, Paessler said. The monitoring software tests short-term CPU capacity, data transfer, network response and similar metrics, and how a test turned out really depended on where you were watching from.
“We found out the connection from EC2-EU to EC2-US was very very fast, very reliable, but from EC2-EU to NewServers” it dropped off sharply, he said. Similar variations in response and performance were seen moving data to and from other clouds as well. Analysis is proving more complex than he had imagined. “We are talking about a [world-wide] web here,” he said.
Paessler doesn’t make too much of his new toy, however. He said that the site shows only the most basic and rough kind of information, and doesn’t take into account any number of factors. That will come with time as cloud matures, and as he finds ways to improve his little experiment.
“As with all benchmark testing, this is only a clue,” he said. “If you do consider cloud hosting, try two, three, four [providers] and try them with your application.” Don’t assume Paessler’s results hold true across the board.
In the meantime, the site will prove a fascinating time sink for statisticians, analysts and cloud watchers. Paessler said he put the site up partly to soft-soap the cloud community, partly as a public service, and partly just because he could.
With a few dozen reporting nodes, co-locating in so many locations would have cost a pretty penny, but in the cloud, the site doesn’t cost Paessler more than a few hundred dollars to run, and the results for observers are proving well worth it.
We all appear to have swallowed the cheerful news about cloud computing hook, line and sinker, thanks in part to a confluence of economic woes and a sudden maturation of the technology; by happy chance, cloud computing seemed like the answer to a lack of cash. Economies of scale and self-service meant we could gorge ourselves on CPU cycles and bandwidth and quit whenever the price got too high. We didn’t have to over-provision or worry about fixed costs; it was brilliant.
So that story’s told. But what about the other side? If legitimate businesses can sip or chug from cloud’s sea of resources as they wish, so can the crooked ones. Spammers, scammers, extortionists, terrorists, bot herders, warez traders and internet privateers who fight unseen wars over international IT channels for profit and patriotism: all of them use servers and software just like regular upstanding folks.
Now, with self-service and automated compute clouds, they can have all they want for pennies a shot. Even better, there’s no work involved in snaffling up a PC or a web server for malicious intent: why bother pwning boxes when you can rent? All you need is a credit card that doesn’t have your home address on it (i.e., someone else’s) and away you go. Of course, this is already happening, as recent events at Rackspace and Amazon attest, not to mention the awful LxLabs tragedy. The industry is aware of the potential problems, but how ready is it?
Rackspace’s Tom Sands wrote a feel-good blog about it back in April; theoretical expert IT zombie holocaust survivor Hoff has also detailed a few potential Zerg rush techniques.
What scares me, however, is that the bad guys are better than the good guys at technology. Much, much better; they have to be. The black market is a very pure example of a well-regulated free market economy, purer by far than any in the ‘white market’. A well-regulated free market economy, as we all know, is the most potent driver of innovation there is. The regulation comes from above in the form of punitive and reactionary measures taken by industry and government. The innovation comes from being forced to re-invent ways to achieve malicious goals.
That means that attackers are seeing and using the true potential of cloud computing long, long before the rest of the world will. Bad guys have already taken advantage of public cloud resources in fairly rudimentary ways, like hop-scotching around the world to fire up spam servers as they get detected, and engaging in cheap DDoS attacks.
Now, with cloud cartography a reality, the possibilities are staggering. As attackers grasp the fundamental change that cloud computing brings to IT, the ability to think in hundreds and thousands of nodes whenever and wherever rather than a few piled up in a heap, we will see astonishing feats.
There are millions of credit card numbers floating around out there- how long before someone bothers to nab a few tens of thousands, open up EC2 accounts and start up every single available instance on Amazon all at once? I mean every last scrap of CPU they have, at once.
How about a 10,000 instance, 10,000 hour rolling blackout of Google that moves from Azure to GoGrid to AWS to Rackspace or from Mexico to Brazil to Canada to Japan?
Never mind the idea that someone could compromise actual cloud infrastructures; it’s not like operators use simple, well-known authentication and web-based management consoles to administer these astonishingly potent resources, right? Right?
Now, fast forward a few years- Brazil, Russia, China, Korea and India all have services comparable to Amazon. Now what, kids?
UPDATE: Why, look here! Step-by-step instructions on cracking PGP passphrases with Amazon EC2! Skip to the end: job time reduced from 5 years to, oh, several days. Wait! Amazon is not keen on unexpected hundreds of nodes firing up all at once.
Oh, wait, somebody found a way to fix that. Carry on.
Amazon would like to remind you to thank them for the heightened expectations.
So a Web app running on a telecom service goes belly up, and cloud is moribund yet again. That, at least, seems to be the latest line from the slightly overheated cloud marketing machine this week.
It may be that the end user cannot tell Amazon Web Services apart from Gmail (which isn’t his job, really), or that the Sidekick/Danger/Microsoft data loss was one of the most spectacular IT bungles ever made, but this is certainly not going to register in the real cloud computing markets.
No one stores their email contacts on AWS. Salesforce.com is never going to let this happen (call me if they do, just sayin’), and Azure, well, isn’t exactly a thing yet and had zero contact with the destroyed data. I would venture that not a single consumer of any of these services even blinked at news of the Sidekick apocalypse.
Seriously, who unplugs a light fixture, let alone a SAN running a live database, in a data center without checking that they made a backup? And when did rolling live backups go out of style in the enterprise world? Hell, I’ve put in rolling live backups for companies with 15 employees.
Anyway, Peter DeSantis, VP of EC2, talked to me at length about last week’s cloud-killer du jour, the DDoS on bitbucket.org. Here are a few of his other thoughts on the DDoS, the hype, and the possibly incontrovertible fact that without Amazon to raise the bar, we wouldn’t be talking about it at all.
For instance, DeSantis said it would be trivial to wash out standard DDoS attacks by using clustered server instances in different availability zones.
“One of the best defenses against any sort of unanticipated spike is simply having available bandwidth. We have a tremendous amount of inbound transit to each of our regions. We have multiple regions which are geographically distributed and connected to the internet in different ways. As a result of that it doesn’t really take too many instances (in terms of hits) to have a tremendous amount of availability – 2, 3, 4 instances can really start getting you up to where you can handle 2, 3, 4, 5 Gigabytes per second. Twenty instances is a phenomenal amount of bandwidth transit for a customer,” he said.
The largest DDoS attacks now exceed 40 Gbps. DeSantis wouldn’t say what AWS’s bandwidth ceiling is, but indicated that a shrewd guesser could look at current bandwidth and hosting costs and what AWS makes available, and come up with a good estimate.
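DeSantis’s argument reduces to back-of-the-envelope arithmetic: if each load-balanced instance can absorb some amount of inbound transit, how many instances does it take to ride out an attack of a given size? A minimal sketch (the per-instance capacity below is an invented illustrative figure, not an AWS number):

```python
import math

def instances_to_absorb(attack_gbps, per_instance_gbps):
    """Number of load-balanced instances needed so that aggregate
    inbound capacity matches a DDoS attack of a given size."""
    return math.ceil(attack_gbps / per_instance_gbps)

# Assume roughly 1 Gbps of usable inbound transit per instance
# (illustrative assumption) against a 40 Gbps attack, the size of
# the largest attacks mentioned above.
print(instances_to_absorb(40, 1.0))
```

The point isn’t the exact figures; it’s that absorbing even the largest publicized attacks takes a fleet measured in dozens of instances, which is trivial to provision on demand.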
“I don’t want to challenge anyone out there, but we are a very, very large environment, and I think there’s a lot of data out there that will help you make that case,” he said.
DeSantis said that stories like the DDoS on Bitbucket.org (and the non-cloud Sidekick story) get so much attention because people have come to expect always-on, easily consumable services.
“People’s expectations have been raised in terms of what they can do with something like EC2. I think people rightfully look at the potential of an environment like this and see the tools, the multi-availability zone, the large inbound transit, the ability to scale out and up, and fundamentally assume things should be better,” he said.
In the meantime, DeSantis urges the skeptical to look at the big picture. Things have changed so fast, he said, that people have lost sight of what it used to take to get what Amazon offers:
“A customer can come into EC2 today and if they have a Web site that’s designed in a way that’s horizontally scalable, they can run that thing on a single instance; they can use [CloudWatch] to monitor the various resource constraints and the performance of their site overall; they can use that data with our autoscaling service to automatically scale the number of hosts up or down based on demand so they don’t have to run those things 24/7; they can use our Elastic Load Balancer service to scale the traffic coming into their service and only deliver valid requests.”
“All of which can be done self-service, without talking to anybody, without provisioning large amounts of capacity, without committing to large bandwidth contracts, without reserving large amounts of space in a co-lo facility and to me, that’s a tremendously compelling story over what could be done a couple years ago.”
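The loop DeSantis describes, monitoring resource constraints and scaling the number of hosts up or down on demand, can be sketched as a toy threshold policy. This is an illustration of the concept only, not Amazon’s actual autoscaling algorithm; all the thresholds and bounds here are invented:

```python
def autoscale(current_hosts, cpu_utilization, min_hosts=1, max_hosts=20,
              scale_up_at=0.70, scale_down_at=0.30):
    """Toy threshold-based autoscaling decision: add a host when average
    CPU utilization is high, remove one when it is low, within bounds.
    All thresholds are illustrative, not AWS defaults."""
    if cpu_utilization > scale_up_at and current_hosts < max_hosts:
        return current_hosts + 1
    if cpu_utilization < scale_down_at and current_hosts > min_hosts:
        return current_hosts - 1
    return current_hosts

print(autoscale(4, 0.85))  # busy fleet: grows to 5
print(autoscale(4, 0.10))  # idle fleet: shrinks to 3
```

Run periodically against monitoring data, a policy like this is what lets a site avoid running peak capacity 24/7, which is the cost story DeSantis is making.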
Private cloud is a touchy subject these days. Proponents say it’s inevitable, detractors say it’s all marketing. Enterprises, who are supposed to be clamoring for it, are cautious: hearing endless pitches from endless different angles will do that.
“We only have ourselves to blame,” said ParaScale CEO Sajai Krishnan. He said, month over month, more than half the people he pitches to say they’ve come to learn about cloud and cut through the hype. He said that enterprises are hearing about cloud, but what they’re hearing is, ‘do everything a different way’, and that’s not attractive.
Cloud is pitched as easy, cheap, low-investment, curing cancer and feeding the poor, etc., but the reality is that for a large organization that’s not going to use Amazon or Rackspace, private cloud means changing the way your business runs. Maybe for the better, but that’s real work.
Enterprises are told “’Here is cloud infrastructure and now you can have it in-house’ – that’s a big change from something that used to be fairly stovepiped,” said Krishnan. Private cloud enthusiasts promise efficiency, but what the enterprise hears is ‘they want to sell me more stuff to cram in there’.
Krishnan thinks this complexity slows down private cloud adoption. Unless it’s your business, a la Amazon or Rackspace, building or redirecting a data center into a self-service, automated, fully virtualized compute-cycle utility is wearying and expensive work. A company that does it will spend years doing it and years realizing the return. It’s not the technology (you can get a cloud for free on Ubuntu now); it’s the planning and the procedural changes.
It’s all well and good for a Web 2.0 enthusiast to start up a business on his or her laptop with Amazon; convincing 10,000 developers to adopt a new business process is quite another matter. Krishnan says the other pressure is that enterprises are conservative, and what they have is working. It’s not impossible, it’s just a lot slower than many have speculated, he said.
Intuitively, this makes sense. I can’t poke any holes in Krishnan’s reasoning. The timeline for private cloud is going to look a lot more like the infrastructure lifecycle than a booming new marketplace. So watch those private cloud startup ideas, kids. There’s less room in here than you think.
I just finished watching Larry Ellison’s conversation with Ed Zander at the Churchill Club, a Silicon Valley business and technology forum. While these types of dialogue are not rare in the industry, I found this one to be particularly insightful. I think we will look back at this conversation as a watershed moment regarding the role of hardware, software, integration, and the cloud.
In case you missed it, the video is on YouTube; I recommend watching it: http://www.youtube.com/watch?v=rmrxN3GWHpM. If you don’t have time, let me summarize the key points I heard and give you my take.
In essence, with the acquisition of Sun Microsystems, Oracle is hoping to put the right pieces in place to position itself as an equal to IBM in the IT market. Clearly, Oracle likes the software stack that Sun has built, including ownership of Java and a lot of interesting distributed computing technology. And if we are talking about the cloud, Sun has a lot of good technology it picked up through various acquisitions. While many prognosticators assumed that Oracle would sell off Sun’s hardware assets, it is becoming clear that Oracle wants to make good use of Sun’s hardware. On a certain level, I think this is crazy, since the hardware business has low margins and a complex business model. However, if you listen to Ellison’s talk, it is clear why he wants to keep the hardware. He envisions a world where customers want to buy in a more straightforward way: no complex integrations, no piece parts from hundreds of different vendors.
Clearly, customers do want fewer vendors to deal with, but it is not clear that they want 100 percent one-stop shopping. It’s sort of like going back to the good old days of computing in the 1970s: one mainframe, integrated applications, and simplified management.
What Oracle envisions is shipping its customers a system that comes bundled with everything: packaged applications, its database, middleware, all the bells and whistles. It would be tuned and configured as a black box. The customer benefit would be that there would be no need for any integration of component parts; it would act like a complete system. There is a clear benefit to Oracle in being able to grab total share of wallet from the customer. For the customer, there is benefit in not worrying about so many moving parts.
The only thing that could possibly spoil the vision is cloud computing. Customers looking to a future of cloud computing will increasingly rely on software as a service, platform as a service, and infrastructure as a service to meet many of their computing needs. Increasingly, companies are looking to a new generation of applications that leaves software upgrades to the SaaS provider.
Larry Ellison decries the cloud because the pitch assumes that there is no middleware, hardware, chips, etc. But, of course, this is not true. There is plenty of hardware, and plenty of chips, but they are configured and used differently than in Ellison’s vision. There is plenty of middleware, and plenty of business services available to customers, but they are offered under the new economic model that the cloud represents.
I think that Ellison is uncomfortable with the cloud because it could have an impact on Oracle’s vision of deeper control over the customer. In time, the cloud could also dramatically impact maintenance revenue.
Of course, the cloud won’t take over the world of computing in the short run. It will evolve over time until more and more computing is based on the cloud. The cloud will have a disruptive impact on the way everything from hardware, software and services is delivered. And if I had to bet on outcomes, Ellison will be plotting a comprehensive cloud strategy just in case.
Cloud providers in the US should thank their lucky stars there’s a new guy at the FCC who is moving ahead with policies that will guarantee net neutrality.
What is net neutrality? Let me explain. No, there is too much. Let me sum up: the telco carriers that built and maintain the physical infrastructure of the internet want to charge the biggest consumers more for service, and throttle usage by their prix fixe customers (homes and small businesses) if those users actually try to use the bandwidth they signed up for.
In opposition to this is, naturally, everyone else. Prejudicial network pricing is precisely contrary to the expectations of a market-driven economy: you’re supposed to pay less and less as you buy more and more. It’s predatory and, to say the least, would be a drag on the entire online economy. Imagine if HP charged its best customers more per server. Now imagine HP was the only server vendor serving your zip code. Sorry for the horrifying thought, all you hardware buyers.
To put the fight in perspective: combine Microsoft, Yahoo, Google and Amazon into one suite and you’d have a company almost as big as Verizon. And Verizon is only one of four major telecommunications companies in the US.
To date, the telcos haven’t been able to browbeat the FCC into letting them leverage their monopoly into predatory billing, partly because opposition is so stark and partly because there is a vestigial sense that utilities that provide a public benefit ought not to be allowed to victimize the public at large.
The implications of net neutrality for the public cloud are plain: because the business is basically margins-driven, any squeeze from carriers would hamstring providers. Amazon’s cloud success is driven precisely by the fact that using it is easy and costs about the same as running your own server, minus the up-front investment.
If it became more expensive to run a cloud server than a real server, which prejudicial network pricing would assuredly make it, cloud adoption would stumble badly. Little users would stick with hosting; enterprises might still move into private cloud, but there would be no compelling reason for them to put appropriate applications and data in the public cloud.
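A toy break-even calculation shows why the margin is so fragile. All prices below are invented for illustration; the point is only that a per-GB transit surcharge falls straight through to the bottom line:

```python
def monthly_cost_cloud(hours, instance_per_hour, gb_out,
                       base_per_gb, surcharge_per_gb=0.0):
    """Toy monthly cost of a cloud server: instance-hours plus outbound
    transfer, with an optional per-GB surcharge standing in for
    prejudicial network pricing. All rates are illustrative."""
    return hours * instance_per_hour + gb_out * (base_per_gb + surcharge_per_gb)

hours, gb = 720, 500  # one month of uptime, 500 GB out (illustrative)
neutral  = monthly_cost_cloud(hours, 0.10, gb, 0.15)
squeezed = monthly_cost_cloud(hours, 0.10, gb, 0.15, surcharge_per_gb=0.20)
print(f"neutral net: ${neutral:.2f}, with surcharge: ${squeezed:.2f}")
```

On these made-up rates a modest 20-cents-per-GB surcharge adds two-thirds to the monthly bill without the provider or the customer doing anything differently, which is exactly the squeeze described above.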
The true benefits of cloud computing– cheap, elastic and massively parallel computing power at the finger tips of the bright young things in industry and academia– would never be realized, since Comcast or Verizon would be lying in wait to pounce on data crunching projects and surcharge them.
On the other side, the SaaS explosion would fizzle if, say, Salesforce.com suddenly had to pony up for its millions of users. Not a single free service out there would stay open a day past the day it had to charge to make up for overage fees, nor would the umpteen start-ups predicated on cloud, both using it and selling it, get off the ground if they had to plan on sharecropping for their telco landlords once the business got popular.
Without net neutrality, in short, cloud would go where its ancestors, utility and grid computing, went: to the backwaters of research or the vast wastes of enterprise, just part of the gaggle of professional services sold to large corporations. Utility and grid ended up there because they lacked all the things that cloud delivers: speed, ease, availability and economy. Cloud computing is supposed to obscure the infrastructure layer; it needs a level playing field to do that.
So Amazon, Rackspace, Google and all the others should wipe their brows in relief that they’ve got at least three or four years to let the whole idea take hold and become a mainstay of the economy rather than a sideshow.
That doesn’t leave much time for dilly-dallying.
As cloud computing has grown in recognition, and the marketplace has started to attract serious cash, some people are beginning to put serious effort into tracking and measuring actual cloud usage. Here’s a small collection of links that show, with some accuracy, the state of cloud computing today.
Guy Rosen has the rough cut of usage for public clouds, which finds that among IaaS providers, Amazon EC2 leads the pack, followed by Rackspace, Joyent and GoGrid.
But there are caveats to Rosen’s data: he is only counting websites running in the cloud. The raw data comes from Quantcast, which Rosen has analyzed by IP location to generate the comparisons.
It’s worth questioning how useful Rosen’s analysis is. Classically, Web servers are a primary use case for cloud computing, but increasingly, data processing stacks, test and dev and similar applications are pitched as potential uses for the public cloud. With Amazon continually making hay over its use by the enterprise, this analysis may be accurate, but it is certainly limited.
Another stab at quantifying the cloud comes from those beloved propeller-headed comp-sci types, who dub their technique “Cloud Cartography.” In the course of analyzing multi-tenancy security vulnerabilities, researchers at the University of California, San Diego and MIT came up with a bone-simple way to coarsely measure actual servers on Amazon’s EC2 cloud. (Hint: it involved a credit card, nmap, wget and Amazon’s DNS servers.) According to their cursory research, the number of responding server instances on EC2 currently stands at 14,054.
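One small piece of the researchers’ trick is easy to illustrate offline: EC2 public DNS names have long encoded the instance’s public IP address directly in the hostname, so a list of responding hostnames can be turned into a map of live addresses without any DNS lookups at all. A minimal sketch of that parsing step (this is my illustration of the hostname convention, not the researchers’ actual tooling):

```python
import re

def ip_from_ec2_hostname(hostname):
    """Extract the public IP encoded in an EC2 public DNS name,
    e.g. 'ec2-203-0-113-5.compute-1.amazonaws.com' -> '203.0.113.5'."""
    m = re.match(r"ec2-(\d+)-(\d+)-(\d+)-(\d+)\.", hostname)
    if not m:
        raise ValueError(f"not an EC2 public hostname: {hostname}")
    return ".".join(m.groups())

print(ip_from_ec2_hostname("ec2-203-0-113-5.compute-1.amazonaws.com"))
# -> 203.0.113.5
```

Combine that with a probe like nmap or wget to find which addresses answer, and you have the crude census the researchers describe.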
Cloud Cartography promises to be a very entertaining arms race between cloud providers and the curious, and will doubtless be emulated by others for different sites. I’ll try to keep this space updated as new metrics come around. In the meantime, vendor-neutral suggestions about ways to gauge the state of cloud computing are welcome. Let’s make this a haven for learning what’s really going on.