“The explosion of big data and the new paradigm of cloud computing are converging, forcing IT to re-think storage investments that are cost-effective, manageable and scale for the future,” said Brian Stevens, CTO and vice president, Worldwide Engineering at Red Hat. “Our customers are looking for software-based storage solutions that manage their file-based data on-premise, in the cloud and bridging between the two. With unstructured data growth (such as log files, virtual machines, email, audio, video and documents), the ’90s paradigm of forcing everything into expensive, single-system DBMS residing on an internal corporate SAN has become unwieldy and impractical.”
Gluster’s founder, Anand Babu Periasamy, spun the company off in 2005 from the supercomputer company California Supercomputer Corp. with the goal of building a better file system. Speaking to its open source roots, the name Gluster is a combination of GNU and cluster. The file system was designed around the idea that a centralized metadata server hurts performance and limits scalability; instead, GlusterFS uses the underlying file system on the storage arrays and does not store data in proprietary formats.
Gluster provides a software-based, scale-out file system that layers above Red Hat’s other file systems. It is distributed across multiple systems and aggregates the total storage into a single namespace. A Gluster cluster exposes this namespace as an NFS or CIFS mount point that contains every file in the cluster. The underlying storage becomes fully virtualized and can be distributed into private and public clouds. GlusterFS can also be deployed on Amazon’s EC2 or inside KVM virtual machines.
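To make the architecture concrete, here is a minimal sketch of how such a deployment is driven from the GlusterFS command line, assuming two servers (`server1`, `server2`) that each export a local brick directory; the hostnames, volume name and paths are illustrative, not from the article:

```shell
# On one of the storage servers: add the second server to the
# trusted pool, then build a distributed volume from one brick
# (a local directory) per server and start it.
gluster peer probe server2
gluster volume create myvol server1:/export/brick1 server2:/export/brick1
gluster volume start myvol

# On a client: mount the aggregated namespace with the native client...
mount -t glusterfs server1:/myvol /mnt/gluster

# ...or over NFS, since the same volume is also exposed as an NFS export.
mount -t nfs -o vers=3 server1:/myvol /mnt/gluster-nfs
```

Files written under `/mnt/gluster` are spread across both servers’ bricks, but clients see a single directory tree; the per-server layout is invisible, which is what the article means by the storage being “fully virtualized.”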
“We believe this is a perfect combination of technologies, strategies and cultures and is a great development for our customers, employees, investors and community,” said Periasamy. “Gluster started off with a goal to be the Red Hat of storage. Now, we are the storage of Red Hat.”
With the acquisition, Red Hat also gains the more than 2,000 contributors to Gluster.org. The Gluster team recently announced that it is planning to provide highly scalable storage for unstructured data while preserving the interoperability benefits of NAS.
“The scale out storage technology and expertise Red Hat is gaining from the acquisition of Gluster will serve as a powerful foundation for future public, private and hybrid storage clouds,” said Henry Baltazar, senior analyst of The 451 Group.
A major new feature in this release is its high availability (HA) capability. Eucalyptus 3 is designed to eliminate any single point of failure: if a system crashes for any reason, it immediately fails over to a “hot spare” service running concurrently on a different physical machine. The change is then propagated internally so that the outside world sees no sign of the underlying failure.
“Implementing HA was the obvious next major evolution for Eucalyptus,” said Eucalyptus CEO Mårten Mickos in a blog post. “Originally, our project to develop HA was intended for a few customers who were asking for it early on. But as we dove deeper into the topic, it turned out that a majority of our users needed this feature.”
Eucalyptus 3 also features enhancements to its resource access controls (RAC), which allow admins to tune user group management, perform in-depth cost tracking, and benefit from detailed visibility of cloud usage throughout an enterprise. RAC features include support for the Amazon Web Services (AWS) Identity and Access Management (IAM) API and new service-level management mechanisms. Eucalyptus 3 can also automatically map identities from enterprise LDAP and Active Directory (AD) servers to Eucalyptus accounts, groups, and users. And it includes expanded account and resource reporting interfaces for integration with existing data center chargeback and billing systems.
Eucalyptus 3 also includes cloud storage resource and platform enhancements, such as boot-from-EBS support, NetApp and JBOD SAN drivers, and support for VMware 4.1, RHEL 6.0 and KVM.
The HA capabilities of Eucalyptus may be able to keep this open source option in a competitive position against other IaaS offerings. But how much demand is out there? Does an HA mechanism make you consider implementing private cloud more than before? If you’ve implemented a private cloud, would you upgrade to Eucalyptus 3 because of this feature?
I talked to Nebula’s co-founder and VP of Engineering, Devin Carlen, who explained that the cloud controller appliance will reduce capital expenditures for companies that may not have the expertise or money to launch their own private cloud using OpenStack. This opens up the potential of cloud computing to small companies that otherwise would need expensive hardware or software and in-house expertise to go to the cloud.
Because all of this was announced at an open source conference, I had to ask the top-of-mind question: Would the cloud controller be opened to the community? The answer: No.
“What’s in the box is not as interesting as what you can do with the box, and how fast,” said Carlen. Emphasizing that speed of deployment is the cloud controller’s biggest benefit, he explained that Nebula customers will be able to order the controller and quickly deploy their own private clouds. Of course, the open source components inside the box will remain open source, and any improvements Nebula makes to them will be shared back with the community. But the box itself will remain closed.
More secret sauce has been added to the software side of the appliance, which is built on the same APIs and runtime as OpenStack, including security, management, and platform enhancements. Some of these will be kept proprietary as well.
The Nebula debut was the biggest cloud news at OSCON, but OpenStack made other news as well. There was Dell’s OpenStack offering, Gluster’s connector for OpenStack, and HP’s announcement that it was joining OpenStack.
As one enthusiastic Racker tweeted: “What do you call it when a project takes over a conference? I call it a mandate. #OSCON #OpenStack”
Check out more Linux news at SearchEnterpriseLinux.com, and follow us on Twitter @LinuxTT.
The Crowbar software framework manages the OpenStack deployment from the initial server boot to the configuration of the primary OpenStack components, allowing users to complete bare-metal deployment of multi-node OpenStack clouds in a matter of hours. Once the initial deployment is complete, Crowbar can be used to maintain, expand, and architect the complete solution, including BIOS configuration, network discovery, status monitoring, performance data gathering, and alerting. Crowbar has been released to the community as open source code and Dell is working with the community to submit Crowbar as a core project in the OpenStack initiative.
“In order to efficiently serve over 300,000 customers, DreamHost has built intelligent service automation into all our Web hosting solutions,” said Simon Anderson, CEO of DreamHost. His company has used Dell’s OpenStack Cloud solution in their expansion of cloud solutions based on Ceph, an open source distributed storage system. They are also contributing to the OpenStack project.
Check out more Linux news and tips on SearchEnterpriseLinux.com.
This question was pondered by a panel of experts at Structure 2011 on Wednesday. Lew Moorman, president of Rackspace, led the discussion around the OpenStack, Open Compute and Cloud Foundry projects: why they exist and how their sponsor companies can monetize them.
As for the “why open source?” question, the panel agreed that the model has proven successful, as demonstrated by the success of Linux over the last 20 years.
VMware CTO Derek Collison posited that for cloud computing to advance technologically, use of open source is a must.
“The open source movement is required for the cloud,” he said, explaining that open source delivers the transparency and freedom from vendor lock-in that customers want. When people find out a technology is open source, they relax about committing to a vendor. And really, cloud computing is too big a problem, and too big an opportunity, for any one company or small set of companies to handle. Cloud requires the industry to collaborate.
The shift from making money on intellectual property to a model where some core components are freely available isn’t a huge hurdle for companies; they just need to focus on adding value in order to make money.
“Customers will pay for the innovation from the product or service – there are many different ways to add value,” said Forrest Norrod, VP and GM of server platforms at Dell. “This will disrupt some of the models. [But] the industry will find ways to monetize and add value.”
Collison said that while everyone wants to be in a public cloud, “very few people are willing to walk to the deep end of the pool.” The difficulty of setting up and maintaining distributed systems is something VMware is looking to capitalize on with its cloud services packages, freeing IT teams from getting caught up in all of that work.
Adapting to fill these needs is necessary, or the open model could threaten the long-term economics of companies such as VMware and Dell, which derive most of their revenue from software sales tied to intellectual property, Norrod said.
“If we don’t offer value above and beyond what’s completely commoditized in the standard, then we’re in trouble,” he said. “It can be product value based on IP, it could be service value or just support services. We have to adapt and continue to seek out value-add or we will become irrelevant.”
And building an open source project isn’t cheap. Moorman, whose company sponsors OpenStack, said “it’s a lot of work to run a community – it’s a big investment. We’re spending a lot of money to get input and contributors.”
The experts agreed that their involvement in and sponsorship of the open source cloud projects has been costly but worth it, both to their companies and to the industry as a whole.
Leah Rosin is Site Editor at SearchEnterpriseLinux.com. Follow us on Twitter!
However, Canonical’s decision to move away from Eucalyptus in favor of OpenStack could be risky. OpenStack is less than a year old and still very much in its infancy. Given all the publicity OpenStack has received, it might be fair to wonder whether Canonical was more concerned about being left behind than it was about the technology’s current efficacy.
“A lot of folks figured it was a no brainer just because of the buzz. To be honest, that was not the only reason why we switched. If you switch to something just because it gets buzz, you’d be changing all the time,” said Robbie Williamson, the engineering manager for the Ubuntu server team at Canonical.
Instead, Williamson said, Canonical sees clear technical advantages to OpenStack, specifically when focusing on ARM-based servers.
“We want to be sure Ubuntu can be in the forefront among server operating systems for ARM. We feel like we have an advantage there versus any of the established markets,” he said. “ARM’s Java support isn’t that solid yet, and Eucalyptus is written in Java. So that would have presented a problem for us. We’re also very focused on cloud deployment. For ARM, the virtualization technologies aren’t as mature there. With OpenStack, and the open development model, anyone can participate and contribute as they want, and really drive that functionality in their own self-interest. That is something we will contribute to and drive for our own self-interest. With Eucalyptus, there are some hurdles if we wanted to do that.”
Canonical has done this before, in the case of its support for KVM over Xen. In that case, the company took a risk in deciding to support what it believed was a superior virtualization hypervisor. That decision turned out OK, and so far there’s no reason to believe that its choice of OpenStack will hurt business.
Even so, Williamson admitted that he, and other Canonical executives, are nervous about whether OpenStack will be enterprise-ready in time for an expected September beta release of Ubuntu 11.10. Like any good gambler, Canonical is hedging its bet, planning to keep support for Eucalyptus through April 2015, and not ruling out a delay in its plan to make OpenStack the default.
“You never know. Come August, maybe we do need to switch it around. Both products are anticipated around the August, September timeline – there is some wiggle room there. … Talking to some of the other engineers, even Mark [Shuttleworth] himself, they were just as nervous, even more so, about this [decision],” Williamson said, comparing the OpenStack decision to the company’s choice of KVM.
Have you tried OpenStack and Eucalyptus? What are your impressions of the two technologies? Do you think this risk will pay off for Ubuntu?
1: Red Hat is pitching itself hard as the “open” cloud player. Its new CloudForms Infrastructure as a Service (IaaS) offering promises to let users (buzzword alert) “leverage” existing technologies: virtual servers from Red Hat or VMware; public clouds from Amazon, IBM, and others; and on-premises or hosted physical servers.
Then there’s Red Hat OpenShift Platform-as-a-Service (PaaS), which, Red Hat said, will support the Java, Python, PHP and Ruby languages and the Spring, Seam, Weld, CDI, Rails, Zend, Django, Java EE and other frameworks.
Isaac Roth, Red Hat’s PaaS Master, said developers just want to develop. Figuring out infrastructure, platform basics, servers, and fundamentals is not how developers should be spending their time.
“God it’s awful,” Roth told reporters on Wednesday. “I just want to write Angry Birds.” His claim is that OpenShift Express will ease their pain.
OpenShift Express, a free set of client development tools, is available now. Two higher-end versions, OpenShift Flex and OpenShift Power, add more capabilities.
2: Last year, Summit attendees were busy weighing Red Hat’s Xen-for-KVM virtualization switch and what issues they might experience in a Xen-to-KVM migration of their own. Flash forward to this year: Red Hat appears to embrace the idea of multiple hypervisors. It must be that whole “openness” thing. VMware doesn’t share that philosophy, according to Red Hat executive VP Paul Cormier, who charged that VMware “is trying to take the entire world back to the 1980s by locking you into the hardware level with ESX.”
3: Perhaps Red Hat is getting all kumbaya about virtualization because it has no choice. Judging from another Summit session, there’s a heckuva lot of RHEL shops running (gasp!) VMware. Even RHEL shops that would love to go with Red Hat Enterprise Virtualization (RHEV) aren’t gonna go there until they no longer have to run RHEV management on a Windows (yes, Windows!) server. That hated Windows requirement will finally go away with the upcoming RHEV 3 release.
4: Judging from the packed session on running high-availability Oracle databases on RHEL, Oracle’s efforts to supplant RHEL with Oracle Unbreakable Linux are falling woefully short.
5: Opinions on Red Hat support remain mixed. Some RHEL customers privately say companies deploy RHEL because they have to prove they’re running a supported OS. But the problem is, when they actually call for support, the results are wildly inconsistent. Two Summit attendees — who work for different government agencies — said they are very happy with RHEL support, although they both also noted that they never, ever use it. Many techie-heavy Linux shops may be in the same boat. (If a support call is never dialed, does support really happen?)
Here’s more cloud news from Red Hat Summit/JBoss World.
Let us know what you think about the story; email Barbara Darrow, Senior News Director at email@example.com.
Ubuntu 11.04 server changes
Ubuntu 11.04 enterprise desktop changes
Ubuntu 11.04 offers users a choice of the new Unity interface or the option to retain the “classic” Ubuntu interface. Unity will be the interface of Ubuntu for all users in the next long-term support (LTS) release in April 2012, but beta users gave Unity mixed reviews.
If you’re planning to implement the new Ubuntu in your Linux data center, let us know. We’d love to hear about your experience.
I would really like to know what part of the patent sale (and which specific patents) were deemed to be a threat to the future development of the Linux operating system. We know that Red Hat’s voice was heard via the Open Source Initiative, as we reported last week. And it sure seems like IBM would have been likely to throw some cash at the issue. There was some fear that CPTN could be another SCO debacle.
The changes that have been made to the original CPTN patent arrangement include:
But perhaps the most interesting, and what could be considered the biggest victory for the open source community, are the following three changes to the deal:
Now that this is all settled, according to Novell documents filed Wednesday with the Securities and Exchange Commission, the new closing date for the sale is April 27, 2011. How settled does that leave you feeling?
With the elimination of the netbook edition, Ubuntu is further simplifying the name, removing the phrase “desktop edition” from the PC version and instead just calling that product “Ubuntu 11.04.” The server edition will be simply called “Ubuntu Server 11.04.”
Canonical recently announced that Ubuntu 11.10 is called Oneiric Ocelot. Shuttleworth also hinted that Canonical will be looking to limit the cloud platforms that the next long-term support release (Ubuntu 12.04 LTS) will support.