The Troposphere


September 7, 2010  6:17 PM

Did Googler jump the gun with cloud calculator?

Carl Brooks

Googler Vijay Gill posted a quick-and-dirty cloud calculator a few weeks ago that has caused some head scratching. The calculator seems to show AWS costing an eye-popping 168% of the price of co-locating your own servers: $118,248 per year for AWS XL instances versus $70,079.88 for operating a co-lo with equivalent horsepower.

Can that really be the case? Price-wise, AWS isn’t cheap web hosting; it’s mid-tier VPS hosting, if you’re talking about using it consistently year over year, and those services are definitely cheaper than co-lo. Gill budgets $743,000 to buy and install the servers, so the up-front investment is in his figures.

Editor Matt Stansberry put that question to an expert on data center practices and markets and was told:

“There is a point at where this is a very good exercise, but the way it was undertaken was grossly inaccurate.”

That’s Tier1 analyst Antonio Piraino, who points out that not only did Gill fail to spell out necessary assumptions, he also took Amazon’s retail price as the base cost, and Amazon will cut that roughly in half if a user makes a one-year or multi-year commitment.
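For the curious, here is that back-of-the-envelope arithmetic as a minimal sketch, using only the figures quoted above; the halved price is Piraino’s rough point about reserved commitments, not an exact Amazon rate card:

```python
# Back-of-the-envelope math using only the figures quoted above.
aws_retail = 118_248.00  # $/year for AWS XL instances at retail (Gill's figure)
colo = 70_079.88         # $/year for equivalent co-lo horsepower (Gill's figure)

ratio = aws_retail / colo
print(f"AWS at retail runs {ratio:.0%} of the co-lo cost")  # ~169%

# Piraino's objection: a one-year or multi-year commitment cuts the
# retail price roughly in half (a rough figure, not an exact rate card).
aws_committed = aws_retail * 0.5
print(f"AWS with a commitment: ${aws_committed:,.0f}/year "
      f"vs. ${colo:,.0f}/year for co-lo")  # ~$59k vs. ~$70k
```

Run the numbers that way and most of the scary premium evaporates, which is exactly Piraino’s point.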

But is it fair to make the comparison in the first place?

Some people will choose Amazon for large-scale, long-term commitments, but they will be a vanishingly small minority. There are far better options for almost anyone in hosting right now. The hosting market has been mature for the better part of a decade; cloud has many years to go on that front.

AWS isn’t hosting or co-lo, obviously; it’s cloud. First, lots of people pick off just the bits they want, like using S3 for storage. That is surely less expensive than co-locating your own personal SAN for data archiving or second-tier storage (first-tier if you’re a web app). That’s the absolutely astounding innovation AWS has shown the world: it sells any part of the compute environment by the hour, independent of all the other parts.

Second, the whole point of AWS is that you can get the entire equivalent of that $743,000 of co-lo hardware, running full bore, no cable crimpers or screwdrivers needed, in a few hours (if you’re tardy), without having to buy a thing. Building out a co-lo takes months and months.

So the comparison is a little off base. And what’s the point? To prove that Amazon can be expensive? Not a shock. Renting an apartment can seem like a waste of money if you own a home; not so much if you need a place to live.

August 25, 2010  7:51 PM

CA spends close to $1 billion on cloud acquisitions

Jo Maitland

CA’s spending spree in the cloud market is far from over, according to Adam Elster, SVP and general manager of CA’s services business.

The software giant has gobbled up five companies in the last 12 months: Cassatt (resource optimization), Oblicore (IT service catalogs), 3Tera (application deployment in the cloud), Nimsoft (monitoring and reporting for Google Apps, Rackspace, AWS and Salesforce.com) and, most recently, 4Base Technologies (a cloud consulting and integration firm). Some back-of-the-envelope math says that’s close to a billion dollars’ worth of acquisitions so far.

Elster says the company is looking to make an acquisition every 60 to 90 days to build out its portfolio of cloud offerings. It’s not done with services, either. “We’re looking at a couple of others from a services perspective,” Elster said. CA’s focus, as always, is on management. It’s also looking at security in the cloud.

For now, the 4Base deal is keeping CA busy. A Sunnyvale, Calif.-based virtualization consulting firm, 4Base has about 300 projects on the go with companies including Visa, eBay and T-Mobile. It charges around $250,000 per phase of a project, and most projects run at least two phases. CA found itself competing in many of the same deals as 4Base, but 4Base kept winning the IT strategy and consulting part of the deal; hence the acquisition.

It seems like an expensive proposition to hire the 4Base guys, but Elster says that for many large companies it’s a time-to-market issue versus retraining “senior” in-house IT staff. “Your challenge is those people do not have the large virtualization and cloud project experience … for $250,000 4Base does the assessment and builds the roadmap, it’s a hot space as it gets the organization to market quicker and reduces risk,” he said.

Most of the projects 4Base is working on involve helping companies build out their virtualization environments beyond a single application or a test-and-dev environment. Rolling out virtualization at larger scale means getting an ITIL framework in place, updating incident and capacity management reporting tools and creating more standardized IT processes, according to 4Base.

If you’re looking for other boutique companies in the virtualization and cloud consulting market, there are plenty out there. Service Mesh, CloudManage.com, New Age Technologies, AllVirtualGroup.net, VirtualServerConsulting, Green Pages Technology Solutions and IT@Once spring to mind.


July 30, 2010  6:30 PM

Eli Lilly – Amazon Web Services story still stands

Jo Maitland

This week I wrote a story about Eli Lilly’s struggle with Amazon Web Services over legal indemnification issues.

Sources told us that Eli Lilly was walking away from contract negotiations with AWS over expanding its use of the service beyond its current footprint. AWS has chosen to sidestep this by claiming the story says Eli Lilly is leaving Amazon completely, which is not what was reported.

Since publishing the story, Amazon’s CTO Dr. Werner Vogels has called me a liar, attempted to discredit SearchCloudComputing.com and claimed my sources are wrong, all via Twitter. I am curious whether he thinks any enterprise IT professionals are following his tweets. My hunch is not many, but that’s another story.

InformationWeek followed up with Eli Lilly to check out the story and was given this statement:

“Lilly is currently a client of Amazon Web Services. We employ a wide variety of Amazon Web Services solutions, including the utilization of their cloud environment for hosting and analytics of information important to Lilly.”

This statement does not refute the issue at the center of my story, which is that Eli Lilly has been struggling to agree on terms with AWS over legal liability, and that this has prevented it from deploying more important workloads on AWS.

Yes, AWS still gets some business from Eli Lilly, but larger HPC workloads and other corporate data are off the table right now.

The story raises lots of questions about the murky area of how much liability cloud computing service providers should assume when things go wrong with their service. So far, AWS seems unwilling to negotiate with its customers, and it’s certainly unwilling to discuss this topic in a public way.

That’s AWS’s prerogative, but the issue will not subside, especially as more big companies debate the wisdom of entrusting their business information to cloud providers like AWS, Rackspace, et al.


July 23, 2010  5:24 PM

Did Google oversell itself to the City of LA?

Carl Brooks

Has the endless optimism and sunny disposition of the Google crew finally led them to bite off more than they can chew?

Reported trouble meeting security standards has stalled a high-profile deal between Google and the City of LA to move email and office software to the cloud, replacing on-premises Novell GroupWise. While 10,000 users have already moved onto Gmail, according to city CTO Randi Levin, and 6,000 more will move by mid-August, 13,000 police personnel will not be ready to switch from in-house systems to the cloud until fall.

Google and CSC have reimbursed the city a reported $145,000 to help cover the costs of the delay. There was already a sense that Google was giving Los Angeles a sweetheart deal to prove that Google Apps was ready for big deployments; when we first reported this last year, it was noted that Google could give the city more than a million dollars in kickbacks if other public California agencies joined the deal, and that Google was flying in teams of specialists to pitch and plan the move, something most customers don’t get.

Also in our original coverage, critics raised precisely these concerns: that the technology was an unknown, that there would be unexpected headaches and that, overall, choosing a technology system because Google wanted to prove something might not be the smartest way to set policy.

“Google justified its pitch by saying that the use of Google Apps will save a ton of money based on productivity gains, when everyone knows that when you put in something new, you never know if it will integrate [well] or not with existing technology,” said Kevin McDonald, who runs an outsourced IT systems management firm. That’s not prescient; that’s common sense. MarketWatch also reports that users are dissatisfied with the speed and delivery of email, a primary concern for the LAPD.

There was no word today on the fate of the “Government Cloud” that Google said it was building to support public sector users with a regulatory need to have their data segregated and accounted for. Google originally said the Government Cloud would be able to meet any and all of the City of LA’s concerns over privacy and security. Why that hasn’t happened ten months after the promises were made remains to be seen.

Google was happy to gloss over potential roadblocks when the deal was announced, like the fact that the LAPD relies on its messaging system (email, mobile devices and so on) for police duties. Maybe Google is right in claiming, as it often has, that it can do security better, but I’m going to go out on a limb and guess that when the LAPD’s email goes out, the Chief of Police does not want to call Google Support and get placed on hold. He probably wants to be able to literally stand next to the server and scream at someone in IT until it’s back.

Maybe that’s an out-of-date attitude, but it’s one that is hard to shake, especially in the public sector. These people have been doing their jobs (well, showing up at the office, at least) for a very long time without Google; they are not prone to enjoy experimentation or innovation, and Google needs to recognize that and get its ducks in a row if it wants to become a serious contender for the public sector. The “perpetual beta” attitude the company seems to revel in simply isn’t going to fly.


July 6, 2010  2:34 PM

Cloud confusion? Does not compute

Carl Brooks

Madhubanti Rudra, writing for TMC.net about last week’s Cisco Live event, reports that confusion may still linger over what, exactly, cloud computing is:

The survey revealed that a clear understanding about the actual definition of cloud technology is yet to arrive, but that did not deter 71 percent of organizations from implementing some form of cloud computing.

The survey was conducted by Network Instruments on the show floor; 184 respondents with, presumably, no agenda other than getting to the drinks table and gawking at technology they probably wouldn’t buy.

Network Instruments pitched the results of the survey as evidence of confusion. But if we look closer, were people all that confused? I don’t think so. Just the opposite, actually, and it’s unclear why Network Instruments would spin the results to suggest people weren’t hip.

Meaning of the Cloud Debatable: The term “cloud computing” meant different things to respondents. To the majority, it meant any IT services accessed via public Internet (46 percent). For other respondents, the term referred to computer resources and storage that can be accessed on-demand (34 percent). A smaller number of respondents stated cloud computing pertained to the outsourcing of hosting and management of computing resources to third-party providers (30 percent).

Let’s see; about half think cloud computing means IT services available on the Internet — that’s fair if you include Software as a Service, which most people do. About one-third narrow it down to compute and storage resources available on-demand — that’s a loose working definition of Infrastructure as a Service (and Platform as a Service, to some extent) and also perfectly valid.

Another third think it’s about hosting and managed services, and they could definitely be described as “wrong,” or at least “not yet right,” since managed service providers and hosting firms are scrambling to make their offerings cloud-like with programmatic access and on-demand billing. But that bottom third is at least in the ballpark, since cloud is a direct evolution from hosting and managed hosting.

So what these results really say is that the great majority of respondents are perfectly clear on what cloud computing is, and where it is, and even the minority who aren’t are well aware of its general proximal market space (hosting/outsourcers) and what need it fills.

I don’t see any evidence that the meaning of cloud is up for debate at all.


June 17, 2010  9:22 PM

Amazon’s early efforts at cloud computing? Partly accidental

Carl Brooks

Former ‘Master of Disaster’ at Amazon Jesse Robbins has a couple of fun tidbits to share about the birth of Amazon EC2. He said the reason it succeeded as an idea inside Amazon’s giant retail machine was partly his own inter-territorial corporate grumpiness and partly homesickness; not exactly the masterstroke of carefully planned skunkworks genius it’s been made out to be by some.

Robbins said Chris Pinkham, creator of EC2 along with Chris Brown (later joined by Willem van Biljon, recruited in South Africa), was itching to get back to South Africa right around the time Amazon started noodling around with the idea of selling virtual servers. At the time, Robbins was in charge of all of Amazon’s outward-facing web properties and keeping them running.

“Chris really, really wanted to be back in South Africa,” said Robbins, and rather than lose the formidable talent behind Amazon’s then-VP of engineering, Amazon brass cleared the project, and off they went with a freedom to innovate that many might envy.

“It might never have happened if they weren’t so far away from the mothership,” Amazon’s Seattle headquarters, said Robbins.

Now half a world away, Christopher Brown, who joined Pinkham as a founding member, architect and lead developer for EC2, set about finding resources to test his ideas about automation in a fully virtualized server environment. Robbins, who knew about the project, gave Brown the interdepartmental cold shoulder.

“I was horrified at the thought of the dirty, public Internet touching MY beautiful operations,” he said with all the relish of a born operator. Robbins had his hands on the reins of the world’s most successful online retail operation from soup to nuts and wasn’t about to let it be mucked up with long-distance experimentation.

To this day he gets a kick out of the apparently unquenchable (and totally untrue) rumor that EC2 came about because Amazon had spare capacity in its data centers, because his attitude at the time, like that of every IT operations manager in a big organization, was that there is no such thing as spare capacity. It’s ALL good for something, and NOBODY gets any of it if you can humanly prevent it. It’s ‘mine, mine, mine,’ as the duck said.

Brown, therefore, grumbled up his own data center (not that that was a stretch for him; Pinkham ran South Africa’s first ISP), set to work, and out popped the world’s first commercially successful cloud, running independently of Amazon’s regular IT. The rest is history. (The cartoon in the link is “Ali Baba Bunny” (1957).)

UPDATE: A factual error and the omission of Christopher Brown as Chris Pinkham’s original counterpart in the move from the US to South Africa have been corrected. I regret the error and the unintended omission.


June 4, 2010  12:57 AM

VMware wants the whole private cloud software stack, and it may get it

Carl Brooks

Details of VMware’s Project Redwood have been unearthed, and it’s a telling look at where VMware sees itself in the new era of cloud computing: in charge of everything.

While Redwood is still vapor as far as the public is concerned (and the basic VMware cloud technology, vCloud, is still in pre-release at version 0.9), it’s clear that VMware thinks it can capitalize on its position as the default virtualization platform for the enterprise and swoop in to become the private cloud platform of choice as enterprises increasingly retool their data centers to look, and work, more like services such as Rackspace and Amazon Web Services.

Some people are grumpy about the term private cloud, saying it’s just a data center modernized and automated to the hilt. Let’s get that out of the way by noting that “private cloud” is a lot easier to say than “highly automated and fully managed self-provisioning server infrastructure data center system with integrated billing.” It’s also less annoying than “Infrastructure 3.0,” a term that can make normally calm operators scream like enraged pterodactyls. Private cloud it is.

Project Redwood, now known as VMware vCloud Service Director, will layer over a VMware vSphere installation and allow users governed self-service via a web portal and an API, effectively obscuring both the data center hardware and the virtualization software VMware customers are used to operating. The goal is to automate resource management so that admins don’t have to, and to make distributing computing resources as easy and flexible as possible while maintaining full control.

According to the presentation, vCloud Service Director will support three modes of resource management: “Allocation pools,” where users are given a ‘container’ of resources and can create and use VMs any way they like up to the limits of the CPU and storage they paid for; “Reservation pools,” which give users a set of resources they can increase or decrease themselves; and “Pay-per-VM,” for single-instance purchasing.

–From the article
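To make the distinction between the three modes concrete, here is a hypothetical sketch; the class and method names are illustrative assumptions of mine, not VMware’s actual API or data model:

```python
# Hypothetical model of the three resource-management modes described above.
# Illustrative only; names and structure are NOT VMware's actual API.
from dataclasses import dataclass, field


@dataclass
class AllocationPool:
    """A fixed 'container' of resources; users carve VMs out of it freely."""
    cpu_ghz: float
    storage_gb: float
    used_cpu: float = 0.0
    used_storage: float = 0.0

    def create_vm(self, cpu: float, storage: float) -> bool:
        # Any mix of VMs is allowed, up to the limits the user paid for.
        if (self.used_cpu + cpu > self.cpu_ghz
                or self.used_storage + storage > self.storage_gb):
            return False  # container exhausted; only an admin can grow it
        self.used_cpu += cpu
        self.used_storage += storage
        return True


@dataclass
class ReservationPool(AllocationPool):
    """Like an allocation pool, but the user can resize it without an admin."""

    def resize(self, cpu_ghz: float, storage_gb: float) -> None:
        self.cpu_ghz, self.storage_gb = cpu_ghz, storage_gb


@dataclass
class PayPerVM:
    """No pool at all: each VM is purchased and billed as a single instance."""
    vms: list = field(default_factory=list)

    def create_vm(self, cpu: float, storage: float) -> bool:
        self.vms.append((cpu, storage))  # billed per VM; no up-front limit
        return True
```

The design difference is who controls capacity: an admin sizes an allocation pool, the user resizes a reservation pool, and pay-per-VM skips pooling entirely.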

That’s the IT side taken care of. The other really significant concept is vApps: users can build, save and move application stacks en suite, and will be able to flow out of their private cloud into VMware-approved public cloud services, the vCloud Express hosters like BlueLock and Terremark. So admins get control and visibility, and users get true scalability and self-service. That means there’s something for everyone in the enterprise.

Other tidbits from the document. VMware’s concept of cloud computing:

• Lightweight entry/exit service acquisition model
• Consumption-based pricing
• Accessible using standard internet protocols
• Elastic
• Improved economics due to shared infrastructure
• Massively more efficient to manage

And how Redwood is the answer: per the “Project Redwood Strategy” slide, the high-level goal is to enable broad deployment of compute clouds by:

• Delivering a software solution enabling self-service access to compute infrastructure
• Establishing the most compelling platform for internal and external clouds

The approach:

• Allow enterprises to create fully functional internal cloud infrastructure
• Create a broad ecosystem of cloud providers to give enterprises choice
• Provide identical interfaces between internal and external clouds, so toolsets operate identically with either
• Enable developers on the cloud platform to create new applications within a cloud framework
Of course, there are products that can already do this and are well on the way to maturity; Abiquo springs to mind. You can do everything Redwood is shooting for today, if you’re so inclined. A titillating report says an audience that reportedly contained VMware engineers cheered during an Abiquo demo. The problem is you have to bring your own hypervisor; few want their YAVS (Yet Another Vendor Syndrome) infection complicated.

Oracle, on the other hand, has reinvented itself as a “complete stack” of private cloud products, from the Sun iron on up, and IBM is happy to sell you iron that behaves like a cloud, and so on.

But VMware is betting that brand loyalty, severe antipathy toward non-commodity hardware and plain inertia will catapult it past the upstarts and comfortably ahead of Microsoft, its real competition here, which is shooting for the same goal with Hyper-V and the Dynamic Data Center but is at least a year behind.

Enterprises running clouds are inevitable, goes the thinking; virtualization is ideally suited to both cloud computing and the commoditized hardware market. Provide the entire software stack needed to turn those servers and switches into compute clouds, and you’ll make out like a bandit, especially when the only serious competition trying to offer the same thing right now is Canonical on one extreme and Oracle on the other.

If you are running an enterprise data center, want drop-in, one-stop cloud computing, and your options are “free, from hippies” or “bend over,” then VMware, which already makes your preferred hypervisor, will be a favored alternative. All it has to do is execute.


May 17, 2010  6:36 PM

Inside the world of cloud computing at Citrix Synergy 2010

Steve Cimino

Donna Lyon, an attendee at Citrix Synergy, offers her take on the cloud announcements from the show.

There is always a debate over whether cloud computing is a marketing term or a technological reality; the Citrix Synergy event held in San Francisco was no exception.

Mark Templeton, president and CEO of Citrix, wasted no time in announcing that the cloud technology built by Sonnenschein Nath & Rosenthal, a global law firm, had won the firm the Innovation Award for 2010. The firm empowers employees by giving them access to the information they need whenever and wherever they need it, confidentially and securely. Using any device, whether a desktop computer, mobile phone or iPad, the firm’s employees can access internal company records immediately through their private cloud. This potentially offers employees a better work/life balance, along with allowing the firm to set up new offices quickly and grow more efficiently.

“Virtualization and cloud computing is our future…if you’re not doing it now you need to be,” said Andy Jurczyk, CIO of Sonnenschein Nath & Rosenthal.

A session on the future of IT was led by Michael Harries and Adam Jaques, both from Citrix. Harries also insisted cloud computing was the way of the future, despite some concerns from audience members working in the healthcare industry. Jaques, on the other hand, noted that he still considers cloud to be mostly a marketing term.

Duncan Johnston-Watt, CEO of CloudSoft Corporation, and Bruce Tolley, VP of outbound and corporate marketing at Solarflare Communications, hosted a session on how to build an enterprise-class cloud. The pair then demoed results from their cloud computing test center, created in July 2009, which delivers increased data speeds for internal clouds.

Frank Gens, senior vice president and chief analyst at IDC, took the stage to talk about three big IT trends that are set to change the industry:

• Mobility, due to 1 billion mobile internet users, 220 million smartphones, 500,000 mobile phone apps and the fact that emerging markets are phone-centric IT users.
• Cloud computing, due to the desire to consolidate, virtualize and automate.
• The information avalanche, due to the 7 billion communicating devices in place, 700 million social networkers and tons of video dominating new growth. Today there is 0.8 ZB of data out there; in ten years, there will be 35 ZB (a growth rate worth sanity-checking; see the sketch below).
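Taking IDC’s two endpoints at face value, that projection implies compound growth of roughly 46% a year. Here is the quick arithmetic; the figures are IDC’s, the math is mine:

```python
# Sanity-checking IDC's data-growth figures quoted above.
today_zb, future_zb, years = 0.8, 35.0, 10
growth = future_zb / today_zb            # ~44x more data in a decade
cagr = growth ** (1 / years) - 1         # implied compound annual growth rate
print(f"{growth:.0f}x overall, roughly {cagr:.0%} per year")
```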

Companies still focused on physical resources are going to be doomed, Gens stated. With the influx of data, organizations are going to have to move into the cloud.

Cloud security concerns remain, especially within the healthcare and government industries, but the takeaway from Citrix Synergy is that people are changing the way they think about cloud computing. The early adopter organizations, such as Sonnenschein, are pushing aside any doubts and embracing the technology. It is early days now, but soon we may not have a choice.

Donna Lyon specializes in external communications and media relations in the software and hardware industries. She has more than eight years’ experience in marketing, strategy development, public affairs and public relations, working with companies including Cisco Systems, Hewlett-Packard, Informatica and BlueArc. Donna’s technology areas of focus include software, virtualization, data centers, networking and collaboration.

Donna’s passion for marketing communications also shows through her work as a board member of the San Francisco chapter of the American Marketing Association. Donna holds an MBA from Golden Gate University along with a Diploma in Marketing from the Chartered Institute of Marketing at Bristol University.


May 14, 2010  7:49 PM

Recovery.gov: A slap in the face to business as usual

Carl Brooks

The federal government has just launched Recovery.gov running entirely on Amazon’s cloud services. Vivek Kundra, federal CIO and cloud champion, is using the site to browbeat skeptics who said the fed shouldn’t, or couldn’t, use one-size-fits-all cloud IT services to run important stuff. It’s an opportunity to do something he hasn’t been able to do so far: flex some muscle and make people sit up and pay attention.

Everything to date has either been a science project (apps.gov, hosting data.gov’s front end at Terremark, NASA Nebula, etc.) or a bunch of fluff and boosterism, and his promised cloud computing budgets haven’t hit the boards yet, so up until now it was business as usual. I’ll bet agency CIOs were spending most of their time figuring out how to ignore Kundra and laughing up their sleeves at him.

This changes things. Recovery.gov is a whole project, soup to nuts, running out in the cloud, not just a little piece of an IT project or a single outsourced process. It’s a deliberate, pointed reminder that he can get something done in Washington (even if it’s just a website) by going around, rather than through, the usual people.

Technology-wise, this is nothing; the choice of Amazon is incidental at best, the money absolute peanuts.

Process-wise, it’s a very public slap in the face to the IT managers and contractors at the fed. It’s absolutely humiliating and horrible for them; every conversation they have for the next year is going to include “But Recovery.gov…” and they know it. If they can’t find a way to squash Kundra, the IT incumbents are in for some scary, fast changes in how they do business.

Federal contractors and government employees HATE that; it’s the opposite of a gravy train. The system isn’t designed to be competitive; it’s designed to soak up money. Kundra is effectively going to force them to be competitive by rubbing their noses in that fact.

What it shows on a larger level is something worth remembering: cloud computing isn’t a technological breakthrough as much as it is a process breakthrough. Cloud users may find it neat that Amazon can do what it does with Xen, for example, but fundamentally, they don’t care that much; they’re just there to get the fast, cheap, no-commitment servers and use them. And that’s what Kundra has done with Recovery.gov (OK, he picked a contractor who did it, but anyway).

There are probably thousands of federal IT suppliers that could have built and run Recovery.gov, and they would have taken their sweet time about it and milked the coffers dry along the way, because that’s the normal process. They might have bought servers, rented space to run them, put a nice 50% (or more) margin on their costs and delivered the site when they couldn’t duck the contract any more. That’s normal.

Kundra picking a contractor who simply went around all that and bought IT from Amazon, cutting the projected costs and delivery time to ribbons?

That’s not normal, and that’s why cloud computing is so important.


May 5, 2010  12:07 AM

Citigroup values AWS sales at $650M in 2010

Jo Maitland

Citigroup estimates Amazon Web Services (AWS) will hit sales of $650 million in 2010, according to a recent article in Businessweek on the prospects of the cloud computing leader.

Amazon does not break out its AWS revenue, but its head start and leadership position in cloud computing mean that any indicator of how this business is doing is a helpful data point for the rest of the industry.

So far, companies using AWS are typically in the high-performance computing space: pharmaceutical firms, oil and gas, financial services and academic institutions. Web retailers and startups are also early adopters.

We’d like to hear feedback from any organization that’s testing AWS or using it on an ongoing basis, to help shape our coverage of this topic on SearchCloudComputing.com.

You can reach me at jmaitland@techtarget.com.

Cheers,

Jo

