The Troposphere


June 17, 2010  9:22 PM

Amazon’s early efforts at cloud computing? Partly accidental

Carl Brooks

Jesse Robbins, Amazon's former 'Master of Disaster,' has a couple of fun tidbits to share about the birth of Amazon EC2. He said the idea succeeded inside Amazon's giant retail machine partly because of his own inter-territorial corporate grumpiness and partly because of someone else's homesickness, not exactly the masterstroke of carefully planned skunkworks genius it's been made out to be by some.

Robbins said Chris Pinkham, creator of EC2 along with Chris Brown (later joined by Willem van Biljon, recruited in South Africa), was itching to go back to South Africa right around the time Amazon started noodling around with the idea of selling virtual servers. At the time, Robbins was in charge of all of Amazon's outward-facing web properties and keeping them running.

"Chris really, really wanted to be back in South Africa," said Robbins, and rather than lose the formidable talent behind Amazon's then-VP of engineering, Amazon brass cleared the project, and off they went with a freedom to innovate that many might envy.

"It might never have happened if they weren't so far away from the mothership," Amazon's Seattle headquarters, said Robbins.

Now half a world away, Christopher Brown, who joined Pinkham as a founding member, architect, and lead developer for EC2, set about finding resources to test his ideas on automation in a fully virtualized server environment. Robbins, who knew about the project, gave Brown the interdepartmental cold shoulder.

"I was horrified at the thought of the dirty, public Internet touching MY beautiful operations," he said with all the relish of a born operator. Robbins had his hands on the reins of the world's most successful online retail operation from soup to nuts and wasn't about to let it be mucked up with long-distance experimentation.

To this day he gets a kick out of the apparently unquenchable (and totally untrue) rumor that EC2 came about because Amazon had spare capacity in its data centers, because his attitude at the time, like that of every IT operations manager in a big organization, was that there is no such thing as spare capacity. It's ALL good for something, and NOBODY gets any of it if you can humanly prevent it. It's 'mine, mine, mine,' as the duck said.

Brown, therefore, grumbled up his own data center (not that that was a stretch; Pinkham ran South Africa's first ISP), set to work, and out popped the world's first commercially successful cloud, running independently of Amazon's regular IT. The rest is history (the cartoon in the link is "Ali Baba Bunny" (1957)).

UPDATE: A factual error and the omission of Christopher Brown as Chris Pinkham's original counterpart in the move from the US to South Africa have been corrected. I regret the error and unintended omission.

June 4, 2010  12:57 AM

VMware wants the whole private cloud software stack, and it may get it

Carl Brooks

Details of VMware’s Project Redwood have been unearthed, and it’s a telling look at where VMware sees itself in the new era of cloud computing: in charge of everything.

While Redwood is still vapor as far as the public is concerned (and the basic VMware cloud technology, vCloud, is still in pre-release at version 0.9), it's clear that VMware thinks it can capitalize on its position as the default virtualization platform for the enterprise and swoop in to become the private cloud platform of choice as enterprises increasingly retool their data centers to look, and work, more like services such as Rackspace and Amazon Web Services.

Some people are grumpy about the term private cloud, saying it's just a data center modernized and automated to the hilt. Let's get that out of the way by noting that "private cloud" is a lot easier to say than "highly automated and fully managed self-provisioning server infrastructure data center system with integrated billing." It's also less annoying than "Infrastructure 3.0," a term that can make normally calm operators scream like enraged pterodactyls. Private cloud it is.

Project Redwood, now known as the VMware Service Director, will layer on top of a VMware vSphere installation and give users governed self-service via a web portal and an API, effectively obscuring both the data center hardware and the virtualization software VMware customers are used to operating. The goal is to automate resource management so that admins don't have to, and to make distributing computing resources as easy and flexible as possible while maintaining full control.

According to the presentation, vCloud Service Director will support three modes of resource management: "Allocation pools," where users are given a 'container' of resources and allowed to create and use VMs any way they like up to the limits of the CPU and storage they paid for; "Reservation pools," which give users a set of resources they can increase or decrease by themselves; and "Pay-per-VM," for single-instance purchasing.

–From the article
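
To make the three models concrete, here is a rough, purely illustrative sketch of how they differ; the classes and fields below are invented for this post and should not be read as VMware's actual API or terminology.

```python
# Toy model of the three resource-management styles described above.
# Names and fields are invented for illustration; this is not VMware's API.
from dataclasses import dataclass, field


@dataclass
class VM:
    cpu_ghz: float
    storage_gb: int


@dataclass
class AllocationPool:
    """Fixed container of resources; users create VMs however they like up to the caps."""
    cpu_cap_ghz: float
    storage_cap_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, vm: VM) -> bool:
        used_cpu = sum(v.cpu_ghz for v in self.vms)
        used_storage = sum(v.storage_gb for v in self.vms)
        if used_cpu + vm.cpu_ghz > self.cpu_cap_ghz:
            return False  # over the CPU the customer paid for
        if used_storage + vm.storage_gb > self.storage_cap_gb:
            return False  # over the storage the customer paid for
        self.vms.append(vm)
        return True


@dataclass
class ReservationPool(AllocationPool):
    """Like an allocation pool, but the user can grow or shrink the caps themselves."""

    def resize(self, cpu_cap_ghz: float, storage_cap_gb: int) -> None:
        self.cpu_cap_ghz = cpu_cap_ghz
        self.storage_cap_gb = storage_cap_gb


def pay_per_vm(vm: VM, rate_per_ghz_hour: float, rate_per_gb_hour: float) -> float:
    """Single-instance purchasing: each VM is billed on its own, no pool involved."""
    return vm.cpu_ghz * rate_per_ghz_hour + vm.storage_gb * rate_per_gb_hour
```

The practical difference: in an allocation pool the provider fixes the caps, in a reservation pool the user can resize them, and pay-per-VM skips pooling entirely.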

That's the IT side taken care of. The other really significant concept is vApps: users can build, save and move application stacks en suite, and those stacks will be able to flow out of their private cloud into VMware-approved public cloud services, vCloud Express hosters like BlueLock and Terremark. So admins get control and visibility, and users get true scalability and self-service. That means there's something for everyone in the enterprise.

Other tidbits from the document, starting with VMware's concept of cloud:

  • Cloud Computing according to VMware:
    • Lightweight entry/exit service acquisition model
    • Consumption-based pricing
    • Accessible using standard internet protocols
    • Elastic
    • Improved economics due to shared infrastructure
    • Massively more efficient to manage
  • And how Redwood is the answer:

  • Project Redwood Strategy
    High-level: Enable broad deployment of compute clouds by:
    • Delivering a software solution enabling self-service access to compute infrastructure
    • Establishing the most compelling platform for internal and external clouds
    Approach:
    • Allow enterprises to create fully-functional internal cloud infrastructure
    • Create a broad ecosystem of cloud providers to give enterprises choice
    • Provide identical interfaces between internal and external clouds to allow toolsets to operate identically with either
    • Enable developers on the cloud platform to create new applications within a cloud framework
Of course, there are products that can already do this and are already well on the way to maturity; Abiquo springs to mind. You can do everything Redwood is shooting for today, if you're so inclined. A titillating report says an audience that reportedly contained VMware engineers cheered during an Abiquo demo. The problem is you have to bring your own hypervisor; few want their YAVS (Yet Another Vendor Syndrome) infection complicated.

    Oracle, on the other hand, has reinvented itself as a “complete stack” of private cloud products, from the Sun iron on up, and IBM is happy to sell you iron that behaves like cloud, and so on.

But VMware is betting that brand loyalty, severe antipathy toward non-commodity hardware and inertia will catapult it past the upstarts and comfortably ahead of Microsoft, its real competition here, which is shooting for the same goal with Hyper-V and the Dynamic Data Center but is at least a year behind VMware.

Enterprises running clouds are inevitable, goes the thinking; virtualization is ideally suited to both cloud computing and the commoditized hardware market. Provide the entire software stack needed to turn those servers and switches into compute clouds, and you'll make out like a bandit, especially when the only serious competition trying to offer the same thing right now is Canonical on one extreme and Oracle on the other.

If you are running an enterprise data center, want drop-in, one-stop cloud computing, and your options are "free, from hippies" or "bend over," then VMware, which already makes your preferred hypervisor, will be a favored alternative. All it has to do is execute.


    May 17, 2010  6:36 PM

    Inside the world of cloud computing at Citrix Synergy 2010

Steve Cimino

    Donna Lyon, an attendee at Citrix Synergy, offers her take on the cloud announcements from the show.

There is always a debate over whether cloud computing is a marketing phrase or a technological reality, and the Citrix Synergy event held in San Francisco was no exception.

Mark Templeton, president and CEO of Citrix, wasted no time in announcing that the cloud technology built by Sonnenschein Nath & Rosenthal, a global law firm, won the firm the Innovation Award for 2010. The firm empowers employees by giving them access to the information they need whenever and wherever, confidentially and securely. Using any device, whether desktop computer, mobile phone or iPad, the firm's employees can access internal company records immediately through their private cloud. This potentially offers employees a better work/life balance, along with allowing the firm to set up new offices quickly and grow more efficiently.

    “Virtualization and cloud computing is our future…if you’re not doing it now you need to be,” said Andy Jurczyk, CIO of Sonnenschein Nath & Rosenthal.

A session on the future of IT was led by Michael Harries and Adam Jaques, both from Citrix. Harries also insisted cloud computing was the way of the future, despite some concerns from audience members working in the healthcare industry. Jaques, on the other hand, noted that he still considers cloud to be mostly a marketing term.

Duncan Johnston-Watt, CEO of CloudSoft Corporation, and Bruce Tolley, VP of outbound and corporate marketing at Solarflare Communications, hosted a session about how to build an enterprise-class cloud. The pair then demoed the results of their cloud computing test center, created in July 2009, which delivers increased data speeds for internal clouds.

    Frank Gens, senior vice president and chief analyst of IDC, took the stage to talk about three big IT trends that are set to change the industry:

    • Mobility, due to 1 billion mobile internet users, 220 million smart phones, 500,000 mobile phone apps and the fact that emerging markets are phone-centric IT users.
    • Cloud computing, due to the desire to consolidate, virtualize and automate.
    • The information avalanche, due to the 7 billion communicating devices in place, 700 million social networkers, and tons of video dominating new growth. Today there is 0.8 ZB of data out there, but in ten years, there will be 35 ZB.

    Companies still focused on physical resources are going to be doomed, Gens stated. With the influx of data, organizations are going to have to move into the cloud.

    Cloud security concerns remain, especially within the healthcare and government industries, but the takeaway from Citrix Synergy is that people are changing the way they think about cloud computing. The early adopter organizations, such as Sonnenschein, are pushing aside any doubts and embracing the technology. It is early days now, but soon we may not have a choice.

Donna Lyon specializes in external communications and media relations in the software and hardware industries. She has more than eight years' experience in marketing, strategy development, public affairs and public relations, working with companies including Cisco Systems, Hewlett-Packard, Informatica and BlueArc. Donna's technology areas of focus include software, virtualization, data centers, networking and collaboration.

    Donna’s passion for marketing communications is also shown through her work as a board member on the San Francisco chapter of the American Marketing Association. Donna holds an MBA from Golden Gate University along with a Diploma in Marketing from the Chartered Institute of Marketing at Bristol University.


    May 14, 2010  7:49 PM

    Recovery.gov: A slap in the face to business as usual

Carl Brooks

The federal government has just launched Recovery.gov running entirely on Amazon's cloud services. Vivek Kundra, federal CIO and cloud champion, is using the site to browbeat skeptics who said that the fed shouldn't or couldn't use one-size-fits-all cloud IT services to run important stuff. It's an opportunity to do something that he hasn't been able to do so far: flex some muscle and make people sit up and pay attention.

Everything to date has either been a science project (apps.gov, hosting data.gov's front end at Terremark, NASA Nebula, etc.) or a bunch of fluff and boosterism, and his promised cloud computing budgets haven't hit the boards yet, so up until now, it was business as usual. I'll bet agency CIOs were spending most of their time figuring out how to ignore Kundra and laughing up their sleeves at him.

This changes things. Recovery.gov is a whole project, soup to nuts, running out in the cloud, not just a little piece of an IT project or a single process outsourced. It's a deliberate, pointed rejoinder that he can get something done in Washington (even if it's just a website) by going around, rather than through, the normal people.

Technology-wise, this is nothing: the choice of Amazon is incidental at best, the money absolute peanuts.

Process-wise, it's a very public slap in the face to the IT managers and contractors at the fed. It's absolutely humiliating and horrible for them; every conversation they have for the next year is going to include, "But recovery.gov…" and they know it. If they can't find a way to squash Kundra, the IT incumbents are in for some scary, fast changes in how they do business.

Federal contractors and government employees HATE that; it's the opposite of 'gravy train'. The system isn't designed to be competitive; it's designed to soak up money. Kundra is effectively going to force them to be competitive by rubbing their noses in that fact.

What it shows on a larger level is something worth remembering: cloud computing isn't a technological breakthrough as much as it is a process breakthrough. Cloud users may find it neat that Amazon can do what it does with Xen, for example, but fundamentally, they don't care that much; they're just there to get the fast, cheap, no-commitment servers and use them. And that's what Kundra's done with Recovery.gov (OK, he picked a contractor who did it, but anyway).

    There are probably thousands of federal IT suppliers that could have built and run Recovery.gov, and they would have taken their sweet time about it, and milked the coffers dry in the process, because that’s the normal process. They might have bought servers, rented space to run them, put a nice 50% (or more) margin on their costs, and delivered the site when they couldn’t duck the contract any more. That’s normal.

    Kundra picking out a contractor who simply went around all that and bought IT at Amazon, cutting the projected costs and delivery time into ribbons?

That's not normal, and that's why cloud computing is so important.


    May 5, 2010  12:07 AM

    Citigroup values AWS sales at $650M in 2010

Jo Maitland

    Citigroup estimates Amazon Web Services (AWS) will hit sales of $650 million in 2010, according to a recent article in Businessweek on the prospects for the cloud computing leader.

Amazon does not break out its AWS revenue, but its head start and leadership position in cloud computing mean that any indicator of how this business is doing is a helpful data point for the rest of the industry.

So far, companies using AWS are typically in the high-performance computing space: pharmaceutical firms, oil and gas companies, financial services firms and academic institutions. Web retailers and startups are also early adopters.

We'd like to hear feedback from any organization that's testing AWS or using it on an ongoing basis to help shape our coverage of this topic on SearchCloudComputing.com.

    You can reach me at jmaitland@techtarget.com.

    Cheers,

    Jo


    March 30, 2010  5:03 PM

UPDATE: Net Neutrality far from dead - National Broadband Plan axes net neutrality proposal?

Carl Brooks

The FCC raised eyebrows at a Congressional hearing last week by excising any mention of net neutrality, the idea that internet providers have to treat their customers and competitors fairly, from its proposed National Broadband Plan. The Broadband Plan also wants to continue the digital wireless spectrum grab, reallocating TV bands to wireless data providers.

    Net neutrality is a strong area of interest for cloud computing providers, since they rely on telcos to get the computing out of their cloud and into the hands of customers. The federally mandated minimum of at least one blogger going bananas on any topic was met and net neutrality was declared dead.

Is it? FCC chairman Julius Genachowski is a strong proponent of net neutrality; the proposed National Broadband Plan lists "robust competition" as the first priority; and a rule change governing the carrier status of telcos and ISPs, submitted by Genachowski last year, has yet to be voted on. So I'll reserve judgment on how dead any of this is.

There's no indication this administration or the FCC is anything but net positive on net neutrality. A working hypothesis: the term 'net neutrality' has been punched into such a shapeless mush of political irritants that simply bringing it up in the plan would be a polarizing and wasteful exercise in How to Make It Impossible for a Republican to Vote for Your Idea.

    The FCC’s plan says it aims to:

     “Develop disclosure requirements for broadband service providers to ensure consumers have the pricing and performance information they need to choose the best broadband offers in the market. Increased transparency will incent service providers to compete for customers on the basis of actual performance.”

If that happens, then the networks will, by default, become more neutral as providers strive to undercut each other (provided there are actually any choices left in your neighborhood or business park). That's a long way from dead for net neutrality, even if the term is being avoided.

    UPDATE:

On April 6th, the U.S. Court of Appeals for the D.C. Circuit ruled that the FCC cannot keep Comcast from discriminating against consumers based on how they use the internet. In response, the pundits went full steam ahead on the Net Neutrality is Dead carnival boat, overlooking the true meaning of the ruling. My response, from Alex Howard's post, is below:

I wanted to comment on the heart of the ruling, which is that cable ISPs (and FiOS, essentially any broadband provider) have been classified (loosely) under Title I of the Telecom Act, since the Powell-era Cable Modem Order, as information services instead of communications services.

The Supreme Court ruled that classification to be within the scope of the FCC's powers (however strange it might seem in light of the intent of the law) in the Brand X case, and this decision today upholds that classification. Technically, this is a very sound legal decision, which is probably why it was unanimous.

If the FCC had classified ISPs as common carriers under Title II, the same lawsuit would have gone 3-0 the other way. That is the only option the FCC has for exerting regulatory authority of this type over broadband providers.

    Will that happen? It’s unclear. Genachowski isn’t averse to that, philosophically, one supposes, but it would be an epic sh*tstorm.

To sum up, the FCC clearly has the tools it needs to order that broadband providers be treated like common carriers, and if anything, these decisions show that the courts are very consistent in upholding that authority. It simply has not done so.

So let's TRY to remember, kids: nothing is set in stone here. The FCC still retains the power and the ability to effect net neutrality by any number of means, certainly including, but not limited to, Title II of the Telecommunications Act. It is a matter of regulatory policy, not settled law.

The current administration is far more likely to take a pragmatic approach, encouraging competition in small steps, as it clearly intends to do in the National Broadband Plan, before it uncorks the inevitably contentious idea of reclassifying ISPs as communications providers instead of information providers.

    So quit saying “net neutrality is dead, OMG”, please. It isn’t helpful or accurate.


    March 23, 2010  10:30 PM

    Cloud spending plans revealed at Cloud Slam ’10

Jo Maitland

    If you’re interested in hearing about enterprise IT plans for cloud computing this year, check out the Cloud Slam ’10 virtual trade show happening this week.

I will be giving the TechTarget keynote presentation on March 25th at 3:30 p.m. EST, on our cloud purchasing intentions survey data. More than 500 members of our audience completed the survey, answering more than 50 questions on their plans for public and private cloud adoption. There are some interesting trends we'd be happy to share with you.

    Cloud Slam ’10 is offering more than 100 expert session presentations on the rapidly shifting world of cloud-based IT and business strategies. Topics include cloud computing system integration, private vs. public cloud computing, setting up a channel for hybrid clouds and secure cloud environment interoperability.

    Register today!


    February 26, 2010  11:20 PM

CA's $100 million cloud wager

Carl Brooks

    On Wednesday $4 billion software provider CA bought 20-man 3Tera; analysts reported that CA had paid around 30 times the revenue valuation of the cloud software platform maker. Independent sources now confirm the price was a cool $100 million, terms (cash or stock) as yet undisclosed.

    Gut-check this with simple math — 3Tera reported around 80 paying customers, largely small and mid-sized managed service providers (MSPs). They’d have to be paying around $40,000 a year on average, by no measure a startling price for an enterprise software installation, to bring in $3.2 million a year, which, multiplied by 30, brings us right around the reported $100 million.
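
For the record, that back-of-the-envelope arithmetic holds together; here is a quick sketch, keeping in mind the $40,000 average is the article's working assumption, not a disclosed figure:

```python
# Back-of-the-envelope check of the valuation math above.
customers = 80                 # 3Tera's reported paying customers
avg_annual_spend = 40_000      # assumed average per customer, per the article
revenue_multiple = 30          # multiple analysts reportedly applied

annual_revenue = customers * avg_annual_spend       # $3.2 million
implied_price = annual_revenue * revenue_multiple   # $96 million, roughly the reported $100M

print(f"Estimated annual revenue: ${annual_revenue:,}")
print(f"Implied purchase price:   ${implied_price:,}")
```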

    That kind of valuation may make stock analysts cringe, since a) any firm that looks like it’s wasting its capital cannot be considered to have a sound growth strategy and b) Pets.com. But it’s a great get from the technology side.

    Was $100 million too much to pay for 3Tera?

The short answer is no. It was a unique technology with a proven (albeit modest) track record, and it fit a piece of the puzzle CA wanted for cloud computing: point-and-click, one-size-fits-all infrastructure. It's not like there were dozens of 3Teras floating around to spark a bidding war, and there's not yet a bubble to artificially inflate the worth of cloud computing technology. CA simply decided it needed this and put cash on the barrel until 3Tera said yes.

However, the figure surely changes the story from "CA snaps up golden opportunity" to "CA just sunk a pile into a future scenario." CA has $4 billion in revenue and approximately $1 billion in net tangible assets. It just invested a significant portion of that into a software company with 80 customers and a nice-looking Web portal product (basically) and is betting that the enterprise appetite for private cloud will exceed predictions.

Conservative estimates for cloud spending over the next few years hover around $40 billion to $50 billion, or 10% to 15% of the overall IT market.

    By far the largest part of that cash will go right down the pipe to Software as a Service, leaving a very poor table indeed for infrastructure plays, especially when HP, IBM, and EMC/Cisco/VMware are sitting down to eat with you.

    It’s quite possible that the enterprise appetite for what is now considered private cloud will become a big tent for enterprise IT overall, and make those kinds of figures look undercooked, but any way you slice it, CA has a lot riding on this buy. It’s certainly brightened the days of CEOs of small companies everywhere, I’ll say that much.

    “3Tera’s impressive exit is validation of the tremendous opportunity facing all cloud startups,” said ubiquitous cloudketeer Reuven Cohen, who also makes cloud infrastructure platform software.

    A new CA

    On a brighter note, this is a definitive sign that CA has come around from the old days.

CA's new acquisitions have been marked by caution, generosity (!) and foresight, and a good attitude toward the technology and the talent that's coming in. Of Oblicore, NetQoS, Cassatt and now 3Tera, I'm fairly sure the majority of those firms' employees still work at CA if they desire to do so. CA spokesman Bob Gordon said that all twenty of 3Tera's employees would stay on with CA and that CEO Barry X Lynn would stay on for a transitional period.

Let's compare that to, say, 1999, when CA would have systematically lured away or undersold all of 3Tera's customers, bought up their building lease and cut off the heat, shot the CFO's dog and then bought the company for $6 and a green apple before firing everyone by Post-It note and carving IP out of the code like an irritable Aztec priest. On a Monday.

    We’ve come a long way since then, for sure. Congratulations to both companies — to one for the windfall, the other for the bold commitment.


    February 16, 2010  9:20 PM

    Two free cloud beta services to check out

Jo Maitland

Cloud computing services are popping up like daisies these days. The good part is many of them are launching with a free beta service, which means you can try before you buy and, more importantly, get some valuable experience with cloud-based IT services.

    The first one to check out is an expandable NAS appliance from Natick, MA-based Nasuni, which connects to the cloud for backup, restore and disaster recovery purposes. Active data is cached in the appliance on-site, maintaining the availability of data that’s required day to day, while older data is sent over the wire to the cloud service provider of your choice. Right now that could be Amazon (S3), Iron Mountain or Nirvanix, while Nasuni works on building more cloud partners.
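
For readers curious about the mechanics, here is a deliberately simplified sketch of the general cache-and-tier pattern such gateways use; it's a generic illustration with invented names and thresholds, not Nasuni's actual implementation.

```python
import time

# Generic cache-and-tier pattern for a cloud NAS gateway: recently used files
# stay on the local appliance, files idle past a cutoff get pushed to a cloud
# object store (S3, Iron Mountain, Nirvanix, etc.). Names and thresholds are
# invented for illustration.

IDLE_CUTOFF_SECONDS = 30 * 24 * 3600  # push files untouched for ~30 days


def tier_files(local_cache: dict, upload_to_cloud) -> None:
    """local_cache maps path -> {'last_access': epoch_seconds, 'data': bytes}."""
    now = time.time()
    for path in list(local_cache):
        entry = local_cache[path]
        if now - entry["last_access"] > IDLE_CUTOFF_SECONDS:
            upload_to_cloud(path, entry["data"])   # cold: send over the wire
            del local_cache[path]                  # free space on the appliance
        # hot files stay cached locally, so day-to-day access is unaffected
```

A real gateway also handles restore on cache miss, snapshots and encryption; the point here is just the hot/cold split between the on-site appliance and the cloud provider.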

For anyone with a lot of file-based data who is tired of provisioning, managing and paying for yet another NAS filer, this is an interesting service to check out. Our sister site SearchStorage.com covered the company's launch. For more details, read this story (Nasuni Filer Offers Cloud Storage Gateway for NAS).

On a different note, people using EC2 instances might be interested to check out how to get more utilization out of them with a free service called Silverline.

IT shops often end up sizing EC2 servers just like in a traditional data center. To meet peak application demand, the servers are over-configured. This means spare cycles are costing money.

    Silverline creates a virtual background container on any EC2 instance. When an application is run in this background container it can only use the spare cycles. This guarantees that what was already running on the instance will run unaffected, while the spare cycles are used by the application(s) placed in the virtual background container. The company claims EC2 customers can get more from the servers they are already paying for.

And one advantage over EC2 Spot Pricing is that Silverline's virtual background container is persistent, whereas spot instances can be terminated based on pricing.
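
Silverline hasn't published its internals here, so the sketch below is only an analogy for the general "use spare cycles without disturbing the main workload" idea, shown with ordinary Linux priority controls rather than anything Silverline ships.

```python
import subprocess

# Analogy only: running a batch job at the lowest CPU and I/O priority means it
# gets scheduled mostly when the primary application on the instance is idle.
# This is not Silverline's technology; it just illustrates the scheduling idea.

def run_in_background(cmd: list[str]) -> subprocess.Popen:
    """Launch cmd so it mainly consumes CPU/IO the foreground workload isn't using."""
    return subprocess.Popen(
        ["nice", "-n", "19",              # lowest CPU priority
         "ionice", "-c", "3"] + cmd       # idle I/O scheduling class
    )

# Example: crunch logs on an EC2 web server using only spare cycles.
# proc = run_in_background(["python3", "analyze_logs.py"])
```

Silverline's actual container presumably does considerably more (isolation, guarantees, persistence across the instance lifecycle), but the underlying pitch is the same: background work runs only when the paid-for instance would otherwise sit idle.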


    February 5, 2010  1:46 AM

    Microsoft and NSF giving away Azure

Carl Brooks

The National Science Foundation and Microsoft have announced they will be giving away Azure resources to researchers in an attempt to "shift the dialogue to what the appropriate public/private interaction" is for research computing, according to Dan Reed, Corporate Vice President for Extreme Computing (yes, really) at Microsoft.

For three years, Microsoft is giving away an unspecified amount of storage and support, as well as CPU time, for research applications to be run on Azure. NSF assistant director for Computer & Information Science & Engineering Jeannette Wing suggested that cloud computing platforms, and Azure specifically, should be considered a better choice for researchers than building and maintaining their own facilities.

    “It’s just not a good use of money or space,” she said.

Look at the Large Hadron Collider, said Wing, which already has 1.5 petabytes of data, or digital research projects that can generate an exabyte of data in a week or less. She urged researchers to use Azure to figure out new ways of coping with all that information.

This is a nice, charitable gesture, not unlike Amazon's occasional giveaways of EC2 instances and bandwidth to worthy scientific projects. But there are significant caveats that Microsoft and the NSF have papered over.

First, from all reports, Azure is a very large data center operation, possibly as large as some of the less prestigious high-performance computing facilities that researchers use around the world. Unless Microsoft is giving away the whole thing, it's not going to make much of a dent in the demand.

Second, go down to the local university science department and tell a professor he or she can hop on a virtualized, remote Windows platform and process their experiment data. Go on, I dare you.

99% of experimental, massive-data, high-performance computing is done on open source, *nix-based platforms for some very sound reasons. Microsoft won't gain much traction suggesting that researchers can do better on Azure. It may find some eggheads desperate for resources, but that's a different story.

So what is the real import, the overall aim, of setting up Azure as a platform to host boatloads of raw data and let people play with it? Both Reed and Wing said they wanted to see researchers come up with new ideas on how to search and manage these large amounts of data.

Well, that makes more sense. Go sign up for a grant, but read the fine print, or you could be inventing the next Google, brought to you by Microsoft…

