The OpenStack project’s release of Diablo a few weeks ago invited comparisons to adolescence, but after attending the OpenStack Conference in Boston last week, that analogy strikes me as premature. OpenStack is more like a precocious first-born toddler from whom the family expects great things, but who still has a long way to go.
No doubt about it, OpenStackers have incredibly high hopes for their open-source cloud software stack. Take Chris Kemp, founder and CEO of Nebula, which is building an OpenStack-based private cloud deployment package. “OpenStack is more than just a platform, it’s turning into an economy,” Kemp said, “… that will power the next generation of computing.” If it succeeds, he added, “I really think we have an opportunity to change the world.”
At just one year old, OpenStack’s achievements are impressive. At the show, Alejandro Comisario, infrastructure senior engineer at MercadoLibre, an e-commerce provider focused on Latin America, described how his firm runs 6,000 VMs in a production cloud on top of OpenStack. Meanwhile, researchers from the University of Melbourne told me they are developing a national OpenStack cloud for use by Australian research universities. Clearly, OpenStack has gained a lot of traction in a very short time.
But OpenStack is far from a done deal. The newly formed OpenStack Foundation, which took the reins from Rackspace, is still grappling with fundamental questions about OpenStack’s identity and modus operandi. In a panel session entitled ‘Winning OpenStack’s Second Year,’ panelists from companies including Citrix, HP, Rackspace, Nebula and Cisco discussed issues like whether the project should publish a roadmap; whether to stick with Infrastructure as a Service or extend into Platform as a Service; how to ensure code quality; and how to engage end users.
These are all foundational questions which commercial cloud platform providers have, by and large, already answered for themselves.
“The troublesome twos are a difficult time for parents,” said Tim Hill, group leader of the IT-OIS group in CERN’s IT department, which has experimented with the platform. “Hopefully the OpenStack Foundation will have an easier time.”
IBM has acquired Platform Computing, a score for the commodity private cloud champions over those pushing expensive, proprietary cloud-in-a-box systems.
Historically a strong player in the high-performance computing (HPC) market, Platform switched its focus from grid management software to private cloud management in 2009. Its software enables IT shops to create Infrastructure as a Service in-house from multiple hypervisors, provisioning tools and commodity hardware.
With the acquisition of Platform, Big Blue is hedging its bets on which way users will go to build private clouds. One approach is to lash together x86 servers with virtualization, automation and management software; the alternative is to buy an expensive cloud-in-a-box system, like IBM’s Workload Deployer hardware appliance, where the software and hardware are pre-integrated. Oracle, EMC, Cisco, VMware, NetApp and HP all have cloud-in-a-box systems.
Platform’s approach has won it more than 2,000 customers, including 23 of the 30 largest global enterprises. CERN, Citigroup, Infineon, Pratt & Whitney, Red Bull Racing, the Sanger Institute, Statoil and the University of Tokyo all use the software to manage commodity clusters.
Other vendors offering cloud platform management software include Embotics, Eucalyptus, Abiquo, Gale Technologies and VMware.
Platform Computing has approximately 500 employees worldwide, who will join IBM’s Systems and Technology Group. Platform was privately held and thought to be profitable thanks to its range of products and its market leadership in HPC, not just Platform ISF, its cloud management application.
This week IBM plans to give the world a sneak peek at its latest public and private IBM SmartCloud services and software. It’s porting SAP, ERP and database applications to it, and claims new technologies will let customers deploy and manage a private cloud 35x faster than existing offerings, with support for image management and rapid provisioning. The company did not share numbers behind the claim, so “35x faster” doesn’t mean much yet.
It also said it will be the first enterprise cloud open to more than 130,000 software vendors, business partners and value-added resellers, which will build and sell key applications in supply chain, healthcare and smarter commerce.
All told, IBM claims more than 200 million users will be using its SmartCloud technology daily.
Stay tuned for more details.
The Cloud Standards Customer Council includes 200 members from a variety of companies, including Aetna, State Street Bank, Daimler, Pacific Life, John Deere and Lockheed Martin. There are also vendors in the group. Melvin Greer, senior fellow and chief strategist for cloud computing at Lockheed Martin and chair of the council, said the vendor members are prohibited from pushing their companies’ agendas.
“We are exclusively focused on the customer, we’re not interested in folks with an ax to grind or a position to pontificate,” he said.
The council does not itself create standards but works with standards bodies like NIST, the CSA, the Object Management Group and the Open Cloud Consortium to bring the customer perspective to the discussion on cloud. “We are driving user requirements into the standards process,” Greer said.
The council’s document includes a roadmap with nine steps to get your cloud strategy off the ground, along with metrics to help measure the success of that strategy. The document does not include legal advice, cost comparisons between cloud providers, SLA advice or case studies at this point. Greer said it was a work in progress and that the group would appreciate feedback from cloud users.
HP has hired John Purrier, one of the original project leads for OpenStack from Rackspace, to run its HP Cloud Services unit. [Ed note: Is he nuts!?]
Purrier is due to give a keynote at the OpenStack conference this week, where he will announce what HP will be contributing to OpenStack.
The beleaguered company has been working on HP Cloud Services using OpenStack since July but has yet to commit any code to the project. Either HP is forking the project (a common problem with open-source initiatives), or OpenStack gets a serious partner…
Purrier will oversee the cloud infrastructure engineering, technical operations, and customer satisfaction teams for the HP Cloud Services organization.
Former federal CIO Vivek Kundra has been slammed by IT pros working for the government for his “Cloud First” policy, according to a survey by MeriTalk, an online IT community for U.S. government workers.
The survey of 174 federal IT pros was conducted in August 2011 at the MeriTalk Innovation Nation forum, six months after Kundra’s resignation.
“Vivek’s tenure … was like a bottle of champagne — seems like a great idea, exciting start, but the plan’s unclear, and the next morning you wake up with the same problems and a sore head,” said Steve O’Keeffe, founder of MeriTalk. The firm has presented its findings to Steven VanRoekel, Kundra’s replacement.
The feds supported Kundra’s initiatives but said timing, funding and conflicting mandates made it impossible to carry them out, according to O’Keeffe. Kundra placed a heavy emphasis on modernizing IT infrastructure, spending on which he said soaked up $19 billion per year of the approximately $70 billion federal IT budget.
While the majority of federal IT professionals (71%) believe Vivek Kundra made a significant impact while in office and credit his vision as his greatest strength, the study revealed that top challenges under Kundra included lack of funding to fulfill mandates (59%), conflicting mandates (44%) and unrealistic goals/mandates (41%). When asked to vote on the three most important priorities for the new federal CIO, respondents said:
Reduce the number of mandates and conflicting mandates (60%)
Reassess goals/timelines to make success attainable (53%)
Listen to feedback/counsel from IT operations (46%)
According to the study, 92% of feds believe cloud is a good idea for federal IT, but just 29% are following the administration’s mandated “Cloud First” policy, and 42% say they are taking a “wait-and-see” approach to cloud. Respondents cite numerous challenges, including security issues (64%), cultural issues (36%) and budget constraints (36%), as barriers to cloud computing.
Almost all feds (95%) also vote for data center consolidation, although the majority (70%) say federal agencies will not be able to eliminate the mandated 800 data centers by 2015. Respondents do anticipate realizing savings from their data center consolidation efforts, with most (74%) estimating the federal government can save at least $75 million overall. Respondents acknowledge, however, that investment is needed — 85% say Feds will not realize data center savings without new investment.
When it comes to cyber security, respondents unanimously agreed threats have increased in the last year (100% say yes). Feds say the most important priorities for cyber security going forward are: securing federal networks (68%), critical infrastructure protection (56%) and privacy protection (36%). However, feds say funding to meet these priorities is, on average, 41% short. Further, feds are unclear who owns cyber security, highlighting a leadership vacuum.
One of the biggest challenges in computer-based design is the amount of processing power it takes to simulate how designs will perform in the real world.
Autodesk Inc., maker of the popular design software AutoCAD, will launch a suite of applications for visualization, optimization and collaboration on Amazon’s cloud in the next two weeks, reducing the computational overhead for users.
The company hopes to be a bridge to the cloud for its existing customers and also to attract smaller design firms that cannot afford big compute farms for 3D visualization.
“Our cloud services will open up these capabilities to more companies,” said Dr. Andrew Anagnost, VP of Web services at Autodesk. “We can do all that processing for them.”
An optimization service will run simulations and show the best result, and a collaboration service will crunch data for specific users in a workflow model. In typical cloud fashion, Autodesk will offer a free subscription for a limited amount of capacity, with more capacity available for a fee. The company didn’t release exact pricing.
Autodesk has plenty of experience running Software as a Service. Its Buzzsaw online data management tool for the construction industry is over a decade old and taught the company a few lessons. It was spun out and then back in and is currently run from Autodesk’s internal servers.
“There are benefits to that, but absolute problems too around scaling up dynamically … You can’t do it with internal infrastructure,” Anagnost said. Autodesk expects to push elements of Buzzsaw out to the cloud in an experimental way. “As long as the customer will see no difference, it will go to the cloud,” he said. “The lines will blur around what’s on the desktop and what’s in the cloud.”
Anagnost expects all of Autodesk’s software to have online versions within three years.
The biggest limitations on the new services will be bandwidth and security. Smaller firms may not have enough pipe to upload data to the cloud, and security in the cloud, or the lack thereof, continues to be a worry for many companies.
CloudSwitch’s software lets users move applications, or workloads, between company data centers and the cloud without changing the application or the infrastructure layer. This notion of hybrid cloud, or connecting on-premises IT with public cloud services, turns out to be the preferred approach for most companies considering cloud computing.
CloudSwitch has proven its software is an enabler of this model and has a dozen or so large enterprises, including Novartis and Biogen, using its product to move workloads to the cloud and back in-house, if necessary.
It’s now clear that VMware did the math on its customers before rolling out the new vRAM+socket licensing scheme. But did it unintentionally screw itself on a booming trend among those same customers?
Survey data from TechTarget’s Data Center Decisions questionnaire, which may be the industry’s largest impartial and sponsorship-free annual survey, with more than 1,000 IT professionals responding, both confirms VMware’s rationale behind the move and points to a new trend that may indicate a future stumble on its part. The survey data will be published later this week.
VMware CTO Steve Herrod told us in an interview last week that VMware knew the changes would be deleterious to some customers but that VMware’s internal accounting showed that it wouldn’t be more than 10-20% of customers, and it was worth it to simplify licensing for the other 80%. We have some very strong circumstantial evidence that Herrod was right on the money, but historical data shows there’s a twist.
The survey says that 11% of new server buys are going to ship with 128GB of RAM and another 11% with more than 128GB. Depending on socket counts, that means at least 11% of server buyers, and potentially more, are going to feel the “OMFG VMware licensing” moment kick in. So Herrod was right on.
BUT, here’s the kicker: the number of people buying high-RAM-density servers in both categories doubled (!) over last year, from 5.75% and 5.29% in 2010. 99% growth is a trend that’s hard to ignore.
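As a quick sanity check, that growth figure can be reproduced from the percentages quoted above; the sketch below is a back-of-the-envelope calculation, not the raw survey data, and it assumes the 99% refers to the two high-RAM categories combined.

```python
# Back-of-the-envelope check of the "doubled" / 99% growth claim, using only
# the survey percentages quoted above. Assumption: the 99% figure refers to
# the 128GB and >128GB categories taken together.
share_2010 = 5.75 + 5.29   # 128GB + >128GB share of new server buys, 2010
share_2011 = 11.0 + 11.0   # same categories, 2011
growth_pct = (share_2011 / share_2010 - 1) * 100
print(f"{growth_pct:.0f}% growth")   # -> 99% growth
```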
Coupled with data about efforts and intentions to build private cloud environments, it’s pretty clear that a lot of IT shops are fully intending to build out cloud-style environments that can show a consolidated economy of scale. But the new licensing means that if you don’t have enough CPU sockets to go with your ocean of RAM, you’re going to get bit hard.
I am taking it as axiomatic here that pretty much all new server buys with a ridiculous amount of RAM per box are intended for virtualization; that VMware’s 85% market share ensures that those buyers are VMware users and not the outliers on Xen or whatever; and that the trend towards trying to consolidate into bigger and bigger boxes will continue to skyrocket.
VMware has posted a fairly silly ‘clarification’ about the new licensing in which it attempts to convince the public that users are confused over whether the licensing is based on the amount of physical RAM or the amount of vRAM. Nobody is confused about that; that’s why there’s a “v” on there.
It’s the tying of licenses to CPU sockets that’s causing the heartburn and the outrage from a vocal minority, since so many buyers (twice as many as last year; will it double again this year?) are choosing servers with socket/RAM configs that fall outside the licensing norms. Forward-thinking users, appreciative of commodity server power and of the examples set by massively consolidated, massively virtualized cloud computing environments, are being told, in effect, that they will be penalized for trying to achieve as much efficiency as technology allows.
And in case anyone is wondering, VMware cloud providers under the VMware Service Provider Program (VSPP) have been bound by vRAM licensing for some time now. But they’re not limited by socket, like enterprise users are.
Chew on that one for a while as you think about building out a private cloud on vSphere, enterprise IT guys and gals.
In the meantime, here is an incomplete roundup of blogs, back-of-envelope calculations, license calculators and reactions to the vRAM+socket scheme.
VMware’s controversial licensing and pricing changes in vSphere 5, leaked today, are positively uncloud-like when it comes to cost, casting a shadow over the new features and functions in the product.
Offering pooled vRAM as a licensing component instead of charging for physical RAM per host will take away some of the complexity of licensing a virtual environment, but it will increase the cost, according to some analysts and expert bloggers. According to one blog post, the Enterprise edition of vSphere 5 adds a license for every 32GB of vRAM.
Practically speaking, this may not mean much for a lot of VMware users, and it will actually benefit many; anyone running multi-core CPUs in servers with less than 64GB of RAM at a standard complement of 10 VMs per physical host might actually see their license pool shrink, something akin to the sun moving backward, according to many VMware users. This covers many kinds of data center operations, from normal workaday servers to blade clusters of many shapes and sizes.
However, this license scheme carries a sharp prejudice against the increasingly common practice of deploying commodity servers with massive amounts of RAM for high-memory multitenancy and in-memory applications.
For example, provisioning an Exchange server with 64GB of RAM is fairly standard; a hosted Exchange provider might run dozens of Exchange VMs across a few machines and a giant pool of RAM, and that operator is royally screwed. Likewise anyone running a content management or distribution application, or anything with large caching/forwarding requirements.
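To make the penalty concrete, here is a minimal sketch of the vRAM+socket math, assuming the 32GB-per-Enterprise-license entitlement cited above, one license required per CPU socket, and vRAM pooled across licenses; the host and VM sizes below are hypothetical.

```python
import math

# Sketch of the vRAM+socket licensing math under the stated assumptions:
# one license per CPU socket, each Enterprise license adds 32GB to the
# pooled vRAM entitlement, and extra licenses are needed whenever the
# vRAM allocated to powered-on VMs exceeds that pool.
VRAM_PER_LICENSE_GB = 32  # assumed Enterprise edition entitlement

def licenses_needed(sockets: int, allocated_vram_gb: int) -> int:
    """At least one license per socket, plus enough to cover the vRAM pool."""
    for_vram = math.ceil(allocated_vram_gb / VRAM_PER_LICENSE_GB)
    return max(sockets, for_vram)

# A modest host: 2 sockets, 10 VMs at 6GB vRAM each -> the 2 socket licenses suffice.
print(licenses_needed(sockets=2, allocated_vram_gb=60))   # 2

# The hosted-Exchange case above: 2 sockets, 8 VMs at 64GB vRAM each.
print(licenses_needed(sockets=2, allocated_vram_gb=512))  # 16
```

Under those assumptions, the high-RAM host needs eight times the licenses its socket count alone would require, which is exactly the ocean-of-RAM-without-the-sockets problem described above.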
That’s a dominant model in the cloud world, less so in the enterprise, but enterprises are rapidly adopting cloud computing tricks and techniques. Did VMware make the wrong calculation in favoring its current majority customer base over the customer base it’s probably going to have (or not, if the licensing remains biased this way) in a few years?
VMware CEO Paul Maritz said that to get to cloud, users need this kind of licensing in order to scale, but that doesn’t jibe with the success Amazon Web Services has had. AWS pays no internal hypervisor licensing (it’s built on open source), and it’s the most proven, scalable cloud on the planet.
Microsoft bundles Hyper-V free with Windows Server. Virtualizing mission critical applications on “free stuff from Microsoft” was never a super attractive option for IT pros, but if the option is an order of magnitude jump in your VMware licenses, that could change.
The question going forward for cloud-style users will be whether the features and functions in VMware’s software are enough to justify the extra cost.
Most of today’s news was around vSphere 5, but the company also announced vCloud Director 1.5, which adds support for linked clones. This cuts the time to provision a VM to as little as five seconds, VMware claimed, and reduces the storage costs associated with those VMs because the clones are thin provisioned, meaning storage is allocated only when it is actually used.
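For readers unfamiliar with the mechanism, the sketch below is a toy illustration of the copy-on-write idea behind linked clones and thin provisioning; it is conceptual only, not VMware’s implementation, and the class names and block sizes are made up.

```python
# Toy illustration (not VMware's implementation) of why linked clones are fast
# and cheap: a clone shares its parent's disk blocks and stores only the blocks
# it has written, so creating one copies no data at all.
class BaseImage:
    def __init__(self, blocks):
        self.blocks = blocks  # fully populated parent VM disk

class LinkedClone:
    def __init__(self, base):
        self.base = base
        self.delta = {}  # thin provisioned: grows only as the clone writes

    def read(self, block_no):
        # Unmodified blocks come straight from the shared parent image.
        return self.delta.get(block_no, self.base.blocks.get(block_no, b""))

    def write(self, block_no, data):
        self.delta[block_no] = data  # copy-on-write: only changed blocks use space

base = BaseImage({i: b"x" * 4096 for i in range(1024)})  # ~4MB parent "disk"
clone = LinkedClone(base)      # provisioning is instant: no blocks are copied
clone.write(7, b"y" * 4096)    # the clone now consumes a single block
print(len(clone.delta), "block(s) of clone-specific storage")  # -> 1
```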