The Troposphere

September 17, 2010  9:09 PM

Vroom: Cloud on wheels

Carl Brooks

The flexibility of cloud computing services appears to be extending to the physical infrastructure itself.

Researchers point to new examples of the rapidly maturing “shipping container data center” as proof. After all, if you can sign up and get a server at Amazon Web Services anytime you like, why shouldn’t you be able to order up a physical data center almost the same way?

Rob Gillen, a researcher at Oak Ridge National Laboratory (ORNL) in Tennessee, is part of a team researching and building a private cloud out of commodity x86 servers to support the operations of Jaguar, ORNL's supercomputer. Jaguar is the fastest and most powerful system in the world right now, but it's not exactly a good fit for day-to-day computing needs.

“If you look at our Jaguar server here with 225,000 cores, frankly there’s only a few people in the world smart enough to write code that will work really well on that,” said Gillen, who researches cloud computing technologies at ORNL. He works on the overall private cloud effort and is heavily involved in exploring Microsoft Azure, Microsoft's Platform as a Service.

He said ORNL is working to develop a self-service, fully virtualized environment to handle less important or less intensive tasks, like post-processing of results from workloads run on Jaguar, and long-term storage and delivery of that data.

Gillen said the advantage of using standard, widely available hardware and virtualization technologies to make a pool of resources available, a la Amazon Web Services, was simple: there is a clear divide in raw computing power, but the pool of available programmers, not to mention existing software tools, is much wider for commodity-type services.

“If you have the opportunity to use fixed, InfiniBand gear, generally your scientific problems are going to express themselves better over that,” he said. “The commodity nature [of private clouds] is tough for scientists to grapple with, but the range of solutions gets better.”

Hadoop, the massively parallel, next-generation data processing framework, might do a much better job of processing data from Jaguar, for example, and a researcher wouldn't need to tie up critically valuable supercomputer time noodling around with different ways to explore all that data.
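
For a flavor of what that kind of commodity post-processing looks like, here is a minimal Hadoop Streaming-style sketch in Python. It assumes a hypothetical tab-separated dump of simulation output (timestep, cell ID, temperature) and boils it down to one maximum per timestep; the file layout and field names are invented for illustration, not taken from ORNL.

    #!/usr/bin/env python
    # Toy Hadoop Streaming job: reduce tab-separated simulation output
    # ("timestep<TAB>cell_id<TAB>temperature") to one max temperature per timestep.
    # Invoke as "post.py map" for the mapper and "post.py reduce" for the reducer.
    import sys

    def mapper():
        for line in sys.stdin:
            timestep, _cell_id, temperature = line.rstrip("\n").split("\t")
            # Emit timestep as the key so all of its values meet at one reducer
            print("%s\t%s" % (timestep, temperature))

    def reducer():
        current_key, current_max = None, None
        for line in sys.stdin:  # Hadoop hands the reducer key-sorted input
            key, value = line.rstrip("\n").split("\t")
            value = float(value)
            if key != current_key:
                if current_key is not None:
                    print("%s\t%.3f" % (current_key, current_max))
                current_key, current_max = key, value
            else:
                current_max = max(current_max, value)
        if current_key is not None:
            print("%s\t%.3f" % (current_key, current_max))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

The same script runs unchanged on a laptop with a plain sort standing in for the shuffle, which is exactly the kind of flexibility Gillen is pointing at.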

“The raw generation is done on the supercomputers but much of the post processing is really done on commodity, cloud environment,” said Gillen. But he's chronically short of space and wants more servers for the ORNL cloud.

That’s where cloud on wheels comes in; Gillen has been looking at a demo container data center from SGI, called the ICE Cube, which is a standard shipping container with a lot of servers in it.

Gillen’s photos and a video of the interior of the unit are a treat for the gearheads:

Rear view
Interior racks
Video taken inside

It gets put down anywhere there’s space and half a megawatt or so of power. Just add water and presto, instant data center. It might not be pretty, but it’s a less expensive way to get data center space.

“We’re space constrained and that’s one possibility,” said Gillen.

Gillen said that the containerized data center market was pretty well established by now, but offerings from HP and IBM were usually designed to adapt to a traditional data center management process. They have standard power hookups and standard rack equipment, and put a high degree of emphasis on customer access. “Some vendors like HP or IBM really want it to fit into the traditional data center so they optimize them for that.”

SGI’s demo box is a little different. It’s built to do nothing but pack as many commodity x86 servers inside as possible, with unique cooling and rack designs that include DC bus bars connecting directly to server boards (no individual power supplies) and refrigeration ducts that run the length of each rack (no CPU coolers).

Gillen said that means it’s ideally suited for getting a medium-sized private cloud (anywhere from 15,000 to 45,000 cores) in a hurry. He also noted that containerized data centers are available in a wide variety of specialized configurations already.
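
As a back-of-the-envelope check, the figures mentioned in this post (roughly half a megawatt feeding 15,000 to 45,000 cores) imply a power budget of a few tens of watts per core. This is a rough sketch from the quoted numbers, not SGI's spec sheet:

    # Back-of-the-envelope power budget from the figures quoted in this post,
    # not from SGI's spec sheet.
    POWER_WATTS = 500000           # "half a megawatt or so"
    CORE_COUNTS = (15000, 45000)   # "anywhere from 15,000 to 45,000 cores"

    for cores in CORE_COUNTS:
        print("%6d cores -> about %.0f W per core" % (cores, POWER_WATTS / float(cores)))
    # Roughly 11-33 W per core at full packing, which is why dropping per-server
    # power supplies and CPU coolers matters at this density.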

“We are looking at it specifically in the context of our cloud computing projects but over the last two days a lot of people from other areas have been walking through it,” he said.

September 14, 2010  8:07 PM

Verizon/VMware hybrid cloud missing key feature

Jo Maitland

Is anyone else amused by Verizon’s puffed up claims to dominance in the cloud computing market? In the wake of the vCloud Director unveiling at VMworld 2010, industry analysts made a huge fuss of VMware’s announcement that Verizon has joined its vCloud service provider program. I, on the other hand, am not impressed.

No doubt landing one of the top telecom providers in the world is a coup from a PR perspective, but so far the partnership is a big paper tiger if you’re an IT shop looking to do anything real with this news.

The press release claims that, with “the click of a mouse,” customers can expand their internal VMware environments to Verizon’s Compute as a Service (CaaS) offering built on VMware vCloud Data Center, for instant, additional capacity. The overall effect is referred to as a hybrid cloud.

The immediacy and ease touted here are far from the reality; ironically, I learned this during a session at VMworld entitled “Cloud 101: What’s real, what’s relevant for enterprise IT and what role does VMware play.”

The speaker said that moving a workload from internal VMware resources to a vCloud service provider such as Verizon is currently a manual process. It requires users to shut down the workload to be migrated, select the cloud it will be deployed to, then switch to that service provider's Web interface and import the workload. I am leaving out a bunch of other steps too tedious to mention, but it's hardly the click of a mouse!
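
To give a sense of how manual "manual" is, here is a rough sketch of that export step using VMware's ovftool command-line utility; the vCenter address, credentials and inventory path are placeholders, and the final import still happens by hand in the provider's portal, just as the speaker described.

    # A rough sketch of the manual migration described above, not VMware's
    # documented procedure. The vCenter host, credentials and VM path are
    # placeholders for illustration.
    import subprocess

    SOURCE_VM  = "vi://admin:secret@vcenter.example.com/MyDatacenter/vm/app-server-01"
    OVF_TARGET = "/tmp/app-server-01.ovf"

    # Step 1: shut the workload down first (done in vCenter or via its API).
    print("Power off app-server-01 in vCenter before exporting...")

    # Step 2: export the powered-off VM to an OVF package with ovftool.
    subprocess.check_call(["ovftool", SOURCE_VM, OVF_TARGET])

    # Step 3: log in to the service provider's web interface and import the OVF
    # there; the point-and-click part that is hardly "the click of a mouse."
    print("Now upload %s through the provider's import screen." % OVF_TARGET)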

In a follow-up conversation after the session, VMware said the missing feature that will allow automated workload migration, called the vCloud client plug-in, was still to come. No timeframe was given.

And that isn’t the only smoke and mirrors from Verizon; the telco claims its CaaS is the first cloud service to offer PCI compliance. This statement isn’t quite true either, because the current PCI standard, v1.2, does not address virtual infrastructures. So a real cloud infrastructure (a multi-tenant, virtualized resource) cannot be PCI compliant. The PCI Council is expected to announce v2.0 of the standard at the end of October, which will explain how to obtain PCI compliance in a virtual environment.

A word of advice to IT shops investigating hybrid cloud options: Be sure to play around with the service before you buy. In many cases, these offerings are still only half-baked.

September 7, 2010  6:27 PM

The persistent itch: What does Amazon’s security really do?

Carl Brooks

A story we wrote last week about Amazon's newest disclosures on its security procedures was sparked in part by an earful from one of the sources in it. Seeking reactions to the newly updated “Overview of Security Processes,” I expected a guarded statement that the paper was a good general overview of how Amazon Web Services approaches security, but that pertinent technical details would probably only be shared with customers who requested them, and that Amazon didn't want to give too much away.

Instead, what I heard was that Amazon not only does not disclose relevant technical information, it apparently also does not understand what customers are asking for. Potential clients were refused operational security details and told wildly different answers on whether or not AWS staff could access data stored in users' S3 accounts: “No, never,” and “Yes, under some circumstances.” That's, um, kind of a big deal. Amazon also refuses, as a matter of course, to indemnify customers against potential failures and data loss.

Typically, a big enterprise IT organization has a set of procedures and policies it has to follow when provisioning infrastructure; charts are made, checkboxes checked, and someone, somewhere, will eventually claim that information and park it somewhere. This includes minor details like “who can access our data and how” and “how does a service provider protect our assets, and will they compensate us if they fail.” A big customer and a provider will sit down, discuss how the hoster can meet the needs of the organization, assign a value to the business revenue being generated for the enterprise, and agree on what the provider will pay out for any outages.

Everybody is aware of this

Even their biggest fans are somewhat down on AWS for this. Cloud consultant Shlomo Swidler said in an email that Amazon's efforts to brush up its security picture, like the launch of the AWS Vulnerability Reporting and Penetration Testing program, were the right idea, but that Amazon had neutered them by not letting customers use them in a meaningful way. “Without a way to test how things will really behave under simulated attack conditions — including the AWS defensive responses — I don’t understand what will happen under real attack conditions,” he said. The Vulnerability Reporting and Penetration Testing program can reportedly only be used with pre-approval from AWS staff, meaning it can never simulate an in-the-wild attack.

Others are more charitable, and point to Amazon’s track record. IT security auditor Andrew Plato was asked about the new white paper and responded via email.

“From what’s in there, they seem to be doing the right things. They’ve got a good risk management framework, good firewalls, monitoring; they’re following ISO and COBIT. They’ve got change management; they seem to be doing all the good practices that we advise clients to do,” said Plato, president of Anitian Enterprise Security. But he noted that all we had to go on was Amazon’s good word. “The long and short of it is the content says they’re doing the right things — now, they could be lying,” he said, tongue only partly in cheek.

Plato isn't worried about Amazon's security, and I'm positive they aren't lying about anything in their white paper. Nobody should be worried; they have an amazing track record. But at this rate, we'll never know exactly what they're so proud of.

The problem is enterprises are picky

Here's the problem: IT does not work like baby shoes and garden rakes. It's not enough to just deliver the goods. You have to show your work or, at a certain level, the IT practitioner cannot trust what you are giving him. All hosting providers know this, and they are proud to show off what they've done. After all, they've spent a lot of money on best-in-class gear so they can make money off it.

Hell, Rackspace will drag a hobo off the street to show him around the data center; they'll talk your ear off. You'll know what color socks the hard drive guy is wearing on Tuesdays, if that's important to you.

Now, it’s OK that Amazon doesn’t work quite that way. We all understand that the amazing feat they have managed to pull off is to offer real-time self-service IT and charge for it by the hour, and that users are responsible for their own foolishness, and Amazon backs only access and uptime. Most of Amazon’s customers are more than happy with that; they can’t afford to care about what kind of firewall and load balancers run the AWS cloud.

But if Amazon is going to compete for the enterprise customer, and they are explicit that they are trying for those customers, they are going to have to get over it and spill the beans. Not to me, although that would be nice, and not to their competition (though that's hardly relevant now, since their nearest cloud competitor, Rackspace, is apparently $400 million shy of eating their lunch), but definitely to enterprise customers. It's a fact of life. Enterprises won't come unless you play their ball game.


There are all sorts of ways AWS can address this without giving away the goose. CloudAudit is one idea: self-service security audits over an API; it fits right into the AWS worldview. Talking to analysts and professionals under NDA is another. AWS must at the very least match what other service providers offer if it is sincere about competing for enterprise users.
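
For the curious, the CloudAudit idea boils down to providers publishing audit artifacts at predictable URLs that a customer's own tooling can poll. A minimal sketch of what consuming that might look like follows; the provider host and namespace paths are illustrative, loosely patterned on the CloudAudit draft rather than any real AWS endpoint.

    # Sketch of the CloudAudit consumption model: fetch audit evidence from
    # well-known URLs on the provider. Host and paths below are hypothetical.
    import requests

    PROVIDER = "https://cloud.example.com"
    CONTROL_MANIFESTS = [
        ".well-known/cloudaudit/org.cloudaudit.control.iso-27002/manifest.xml",
        ".well-known/cloudaudit/org.cloudaudit.control.pci-dss/manifest.xml",
    ]

    for path in CONTROL_MANIFESTS:
        resp = requests.get("%s/%s" % (PROVIDER, path), timeout=10)
        # A 200 here means the provider publishes evidence for that control set
        print(path, "->", resp.status_code)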

September 7, 2010  6:17 PM

Did Googler jump the gun with cloud calculator?

Carl Brooks

Googler Vijay Gill posted a quick and dirty cloud calculator a few weeks ago that has caused some head scratching. The calculator seems to show AWS costing an eye-popping 168% of the price of co-locating your own servers: $118,248/year for AWS XL instances versus $70,079.88 for operating a co-lo with equivalent horsepower.

Can that really be the case? Price-wise, AWS isn't cheap web hosting; it's mid-tier VPS hosting if you're talking about using it consistently year over year, and that is definitely cheaper than co-lo. Gill figures $743,000 to buy and install your servers, so he's got the investment figures in there.

Editor Matt Stansberry asked an expert on data center practices and markets that question and was told:

“There is a point at where this is a very good exercise, but the way it was undertaken was grossly inaccurate,”

That's Tier1 analyst Antonio Piraino, who points out that not only did Gill fail to spell out necessary assumptions, he took Amazon's retail price as the base cost, and Amazon will cut that in half if a user makes a one-year or multi-year commitment.
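
Plugging in just the two annual figures quoted above, with Piraino's reserved-pricing caveat bolted on as a simple halving (an approximation, not Amazon's actual rate card), the picture shifts noticeably:

    # Sanity check on the numbers quoted in this post; a sketch, not Gill's calculator.
    aws_on_demand_per_year = 118248.00  # AWS XL instances at retail, per Gill
    colo_per_year          = 70079.88   # equivalent co-lo horsepower, per Gill

    ratio = aws_on_demand_per_year / colo_per_year
    print("On-demand AWS is about %.1f%% of the co-lo figure." % (ratio * 100))

    # Piraino's point: a one- or multi-year commitment roughly halves the AWS bill.
    aws_reserved_per_year = aws_on_demand_per_year * 0.5
    print("With a commitment it drops to roughly $%.0f/year, below the co-lo figure."
          % aws_reserved_per_year)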

But is it fair to make the comparison in the first place?

Some people will choose Amazon for large-scale, long-term commitments, but they will be a vanishingly small minority. There are far better options for almost anyone in hosting right now. The hosting market has been mature for the better part of a decade, and cloud has many years to go on that front.

AWS isn't hosting or co-lo, obviously; it's cloud. First, lots of people pick off the bits they want, like using S3 for storage. That is surely less expensive than co-locating your own personal SAN for data archiving or second-tier storage (first-tier if you're a web app). That's the absolutely astounding innovation that AWS has shown the world: it sells any part of the compute environment by the hour, independent of all the other parts.

Second, the whole point of AWS is that you can get the entire equivalent of that $743,000 of co-lo hardware, running full bore, no cable crimpers or screwdrivers needed, in a few hours (if you're tardy) without having to buy a thing. Building out a co-lo takes months and months.

So the comparison is a little off base, and what's the point? To prove that Amazon can be expensive? Not a shock. Renting an apartment can seem like a waste of money if you own a home; not so much if you need a place to live.

August 25, 2010  7:51 PM

CA spends close to $1 billion on cloud acquisitions

Jo Maitland

CA's spending spree in the cloud market is far from over, according to Adam Elster, SVP and general manager of CA's services business.

The software giant has gobbled up five companies in the last 12 months: Cassatt (resource optimization), Oblicore (IT service catalog), 3Tera (application deployment in the cloud), Nimsoft (monitoring and reporting for Google Apps, Rackspace and AWS) and, most recently, 4Base Technologies (a cloud consulting and integration firm). Some back-of-the-envelope math says that's close to a billion dollars worth of acquisitions so far.

Elster says the company is looking to make an acquisition every 60 to 90 days to build out its portfolio of cloud offerings. It's not done with services either. “We're looking at a couple of others from a services perspective,” Elster said. CA's focus, as always, is on management. It's also looking at security in the cloud.

For now, the 4Base deal is keeping CA busy. A Sunnyvale, Calif.-based virtualization consulting firm, 4Base has about 300 projects on the go with companies including Visa, eBay and T-Mobile. It charges around $250,000 per phase of a project, and most projects are at least two phases. CA found itself in many of the same deals as 4Base, but 4Base was winning the IT strategy and consulting part of the deal, hence the acquisition.

It seems like an expensive proposition to hire the 4Base guys, but Elster says for many large companies it's a time-to-market issue versus retraining “senior” in-house IT staff. “Your challenge is those people do not have the large virtualization and cloud project experience … for $250,000 4Base does the assessment and builds the roadmap; it's a hot space as it gets the organization to market quicker and reduces risk,” he said.

Most of the projects 4Base is working on involve helping companies build out their virtualization environments beyond a single application or test and dev environment. Rolling out virtualization to a larger scale means getting an ITIL framework in place, updating incident and capacity management reporting tools and creating more standardized IT processes, according to 4Base.

If you're looking for other boutique companies in the virtualization and cloud consulting market, there are a lot out there. Service Mesh, New Age Technologies, VirtualServerConsulting, Green Pages Technology Solutions and IT@Once spring to mind.

July 30, 2010  6:30 PM

Eli Lilly – Amazon Web Services story still stands

Jo Maitland

This week I wrote a story about Eli Lilly’s struggle with Amazon Web Services over legal indemnification issues.

Sources told us that Eli Lilly was walking away from contract negotiations with AWS over expanding its use of AWS beyond its current footprint. AWS has chosen to obscure this fact by claiming the story says Eli Lilly is leaving Amazon completely, which is not what we reported.

Since publishing the story, Amazon CTO Dr. Werner Vogels has called me a liar, attempted to discredit the story and claimed my sources are wrong, all via Twitter. I am curious whether he thinks any enterprise IT professionals are following his tweets. My hunch is not many, but that's another story.

InformationWeek followed up with Eli Lilly to check out the story and was given this statement:

“Lilly is currently a client of Amazon Web Services. We employ a wide variety of Amazon Web Services solutions, including the utilization of their cloud environment for hosting and analytics of information important to Lilly.”

This statement does not refute the issue at the center of my story, which is that Eli Lilly has been struggling to agree on terms with AWS over legal liability, a dispute that has prevented it from deploying more important workloads on AWS.

Yes, AWS still gets some business from Eli Lilly, but larger HPC workloads and other corporate data are off the table, right now.

The story raises lots of questions about the murky area of how much liability cloud computing service providers should assume when things go wrong with their service. So far, AWS seems unwilling to negotiate with its customers, and it’s certainly unwilling to discuss this topic in a public way.

That's AWS's prerogative, but the issue will not subside, especially as more big companies debate the wisdom of trusting their business information to cloud providers like AWS, Rackspace, et al.

July 23, 2010  5:24 PM

Did Google oversell itself to the City of LA?

Carl Brooks

Has the endless optimism and sunny disposition of the Google crew finally led them to bite off more than they could chew?

Reported trouble meeting security standards has stalled a high-profile deal between Google and the City of LA to implement email and office software in the cloud, replacing on-premises Novell GroupWise software. While 10,000 users have moved onto Gmail already, according to city CTO Randi Levin, and 6,000 more will move by mid-August, 13,000 police personnel will not be ready to switch from in-house to out in the cloud until fall.

Google and CSC have reimbursed the city a reported $145,000 to help cover the costs of the delay. There was already a sense that Google was giving Los Angeles a sweetheart deal to prove that Google Apps was ready for big deployments; when we first reported this last year, it was noted that Google could give the city more than a million dollars in kickbacks if other public California agencies joined the deal, and that Google was flying in teams of specialists to pitch and plan the move, something most customers don't get.

Also in our original coverage, critics raised precisely these concerns: that the technology was an unknown, that there would be unexpected headaches and that, overall, choosing a technology system because Google wanted to prove something might not be the smartest way to set policy.

“Google justified its pitch by saying that the use of Google Apps will save a ton of money based on productivity gains, when everyone knows that when you put in something new, you never know if it will integrate [well] or not with existing technology,” said Kevin McDonald, who runs an outsourced IT systems management firm. That’s not prescient; that’s common sense. MarketWatch also reports that users are dissatisfied with speed and delivery of email and that’s a primary concern for the LAPD.

There was no word today on the fate of the “Government Cloud” that Google said it was building to support public sector users who have a regulatory need to have their data segregated and accounted for. Google originally said that the Government Cloud would be able to meet any and all of the City of LA's concerns over privacy and security. Why that hasn't happened ten months after the promises were made remains to be seen.

Google was happy to gloss over potential roadblocks when the deal was announced, like the fact that the LAPD relies on its messaging system (email, mobile devices and so on) for police duties. Maybe Google is right in claiming, as it often has, that it can do security better, but I'm going to go out on a limb and guess that when the LAPD's email goes out, the Chief of Police probably does not want to call Google Support and get placed on hold. He probably wants to be able to literally stand next to the server and scream at someone in IT until it's back.

Maybe that’s an out of date attitude, but it’s one that is hard to shake, especially in the public sector. These people have been doing their jobs (well, showing up at the office, at least) for a very long time without Google; they are not prone to enjoy experimentation or innovation, and Google needs to recognize that and get its ducks in a row if it wants to become a serious contender for the public sector. The “perpetual beta” attitude that the company seems to revel in simply isn’t going to fly.

July 6, 2010  2:34 PM

Cloud confusion? Does not compute

Carl Brooks

Madhubanti Rudra writes about last week's Cisco Live event that confusion may still linger over what, exactly, cloud computing is.

The survey revealed that a clear understanding about the actual definition of cloud technology is yet to arrive, but that did not deter 71 percent of organizations from implementing some form of cloud computing.

The survey was conducted by Network Instruments from the show floor; 184 respondents with, presumably, no other agenda than to get to the drinks table and gawk at the technology they probably wouldn’t buy.

Network Instruments pitched the survey results as evidence of confusion. But if we look closer, were people all that confused? I don't think so. Just the opposite, actually, and it's not clear why Network Instruments would spin the results to suggest people weren't hip.

Meaning of the Cloud Debatable: The term “cloud computing” meant different things to respondents. To the majority, it meant any IT services accessed via public Internet (46 percent). For other respondents, the term referred to computer resources and storage that can be accessed on-demand (34 percent). A smaller number of respondents stated cloud computing pertained to the outsourcing of hosting and management of computing resources to third-party providers (30 percent).

Let’s see; about half think cloud computing means IT services available on the Internet — that’s fair if you include Software as a Service, which most people do. About one-third narrow it down to compute and storage resources available on-demand — that’s a loose working definition of Infrastructure as a Service (and Platform as a Service, to some extent) and also perfectly valid.

Another third think it’s about hosting and managed services, and they could definitely be described as “wrong,” or at least “not yet right,” since managed service providers and hosting firms are scrambling to make their offerings cloud-like with programmatic access and on-demand billing. But that bottom third is at least in the ballpark, since cloud is a direct evolution from hosting and managed hosting.

So what these results really say is that the great majority of respondents are perfectly clear on what cloud computing is and where it is, and even the minority that aren't are well aware of its general proximal market space (hosting/outsourcers) and what need it fills.

I don’t see any evidence that the meaning of cloud is up for debate at all.

June 17, 2010  9:22 PM

Amazon’s early efforts at cloud computing? Partly accidental

Carl Brooks

Former ‘Master of Disaster’ at Amazon Jesse Robbins has a couple of fun tidbits to share about the birth of Amazon EC2. He said the reason it succeeded as an idea in Amazon’s giant retail machine was partly due to his inter-territorial corporate grumpiness and partly due to homesickness–not exactly the masterstroke of carefully planned skunkworks genius it’s been made out to be by some.

Robbins said Chris Pinkham, who created EC2 along with Chris Brown (and was later joined by Willem van Biljon, recruited in South Africa), was itching to go back to South Africa right around the time Amazon started noodling around with the idea of selling virtual servers. At the time, Robbins was in charge of all of Amazon's outward-facing web properties and keeping them running.

“Chris really, really wanted to be back in South Africa,” said Robbins, and rather than lose the formidable talent behind Amazon’s then VP of engineering, Amazon brass cleared the project and off they went with a freedom to innovate that many might be jealous of.

“It might never have happened if they weren’t so far away from the mothership,” Amazon’s Seattle headquarters, said Robbins.

Now half a world away, Christopher Brown, who joined Pinkham as a founding member, architect, and lead developer for EC2, set about finding resources to test his ideas on automation in a fully virtualized server environment. Robbins, who knew about the project, gave Brown the interdepartmental cold shoulder.

“I was horrified at the thought of the dirty, public Internet touching MY beautiful operations,” he said with all the relish of a born operator. Robbins had his hands on the reins of the world's most successful online retail operation, from soup to nuts, and wasn't about to let it be mucked up with long-distance experimentation.

To this day he gets a kick out of the apparently unquenchable (and totally untrue) rumor that EC2 came about because Amazon had spare capacity in its data centers, because his attitude at the time, like that of every IT operations manager in a big organization, was that there is no such thing as spare capacity. It's ALL good for something, and NOBODY gets any of it if you can humanly prevent it. It's ‘mine, mine, mine,’ as the duck said.

Brown, therefore, grumbled up his own data center (not that that was a stretch for him; Pinkham ran South Africa's first ISP), set to work, and out popped the world's first commercially successful cloud, running independently of Amazon's regular IT. The rest is history (the cartoon in the link is “Ali Baba Bunny” (1957)).

UPDATE: A factual error and the omission of Christopher Brown as Chris Pinkham’s original counterpart in the move from the US to South Africa has been corrected. I regret the error and unintended omission.

June 4, 2010  12:57 AM

VMware wants the whole private cloud software stack, and it may get it

Carl Brooks

Details of VMware’s Project Redwood have been unearthed, and it’s a telling look at where VMware sees itself in the new era of cloud computing: in charge of everything.

While Redwood is still vapor as far as the public is concerned (and the basic VMware cloud technology, vCloud, is still in pre-release at version 0.9), it's clear that VMware thinks it can capitalize on its position as the default virtualization platform for the enterprise and swoop in to become the private cloud platform of choice as enterprises increasingly retool their data centers to look, and work, more like services such as Rackspace and Amazon Web Services.

Some people are grumpy about the term private cloud, saying it’s just a data center modernized and automated to the hilt – let’s get that out of the way by noting that “private cloud” is a lot easier to say than “highly automated and fully managed self-provisioning server infrastructure data center system with integrated billing”. It’s also less annoying than “Infrastructure 3.0”, a term that can make normally calm operators scream like enraged pterodactyls. Private cloud it is.

Project Redwood, now known as VMware vCloud Service Director, will layer over a VMware vSphere installation and give users governed self-service access via a web portal and an API, effectively obscuring both the data center hardware and the virtualization software VMware customers are used to operating. The goal is to automate resource management so that admins don't have to, and to make distributing computing resources as easy and flexible as possible while maintaining full control.

According to the presentation, vCloud Service Director will support three modes of resource management: “Allocation pools,” where users are given a ‘container’ of resources and allowed to create and use VMs any way they like, up to the limits of the CPU and storage they paid for; “Reservation pools,” which give users a set of resources they can increase or decrease by themselves; and “Pay-per-VM” for single-instance purchasing.

–From the article
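
Rendered as plain data structures, those three modes might look something like this; a sketch only, with class and field names of my own invention rather than anything from VMware's API:

    # Sketch of the three resource-management modes from the leaked deck.
    # Class and field names are invented for illustration, not VMware's API.
    from dataclasses import dataclass

    @dataclass
    class AllocationPool:
        # Fixed 'container' of capacity; users carve VMs out of it as they like.
        cpu_ghz: float
        storage_gb: int

    @dataclass
    class ReservationPool:
        # Same idea, but the user can grow or shrink the pool on their own.
        cpu_ghz: float
        storage_gb: int
        user_resizable: bool = True

    @dataclass
    class PayPerVM:
        # No pool at all: each VM is purchased and billed individually.
        vm_name: str
        hourly_rate: float

    # One tenant of each flavor.
    tenants = [
        AllocationPool(cpu_ghz=100.0, storage_gb=10000),
        ReservationPool(cpu_ghz=50.0, storage_gb=5000),
        PayPerVM(vm_name="batch-job-01", hourly_rate=0.40),
    ]
    for tenant in tenants:
        print(tenant)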

That's the IT side taken care of. The other really significant concept is vApps: users can build, save and move application stacks as a unit, and will be able to flow out of their private cloud into VMware-approved public cloud services, vCloud Express hosters like BlueLock and Terremark. So admins get control and visibility, and users get true scalability and self-service. That means there's something for everyone in the enterprise.

Other tidbits from the document, starting with VMware's concept of cloud:

  • Cloud Computing according to VMware
    Lightweight entry/exit service acquisition model
    Consumption based pricing
    Accessible using standard internet protocols
    Improved economics due to shared infrastructure
    Massively more efficient to manage
  • And how Redwood is the answer:
  • Project Redwood Strategy
    High-Level: Enable broad deployment of compute clouds by:
    • Delivering a software solution enabling self-service access to compute infrastructure
    • Establishing the most compelling platform for internal and external clouds
    • Allow enterprises to create fully-functional internal cloud infrastructure
    • Create a broad ecosystem of cloud providers to give enterprises choice
    • Provide identical interfaces between internal and external clouds to allow toolsets to operate identically with either
    • Enable developers on the cloud platform to create new applications within a cloud framework
Of course, there are products that can already do this and are already well on the way to maturity; Abiquo springs to mind. You can do everything Redwood is shooting for today, if you're so inclined. A titillating report says an audience that reportedly contained VMware engineers cheered during an Abiquo demo. The problem is you have to bring your own hypervisor; few want their YAVS (Yet Another Vendor Syndrome) infection complicated.

Oracle, on the other hand, has reinvented itself as a “complete stack” of private cloud products, from the Sun iron on up, and IBM is happy to sell you iron that behaves like cloud, and so on.

But VMware is betting brand loyalty, severe antipathy towards non-commodity hardware and inertia will catapult it past the upstarts and comfortably ahead of Microsoft, its real competition here, which is shooting for the same goal with Hyper-V and the Dynamic Data Center but is at least a year behind VMware.

Enterprises running clouds are inevitable, goes the thinking; virtualization is ideally suited to both cloud computing and the commoditized hardware market. Provide the entire software stack needed to turn those servers and switches into compute clouds, and you'll make out like a bandit, especially when the only serious competition trying to offer the same thing right now is Canonical on one extreme and Oracle on the other.

If you are running an enterprise data center, want drop-in, one-stop cloud computing, and your options are “free, from hippies” or “bend over,” VMware, which already makes your preferred hypervisor, will be a favored alternative. All it has to do is execute.
