LAS VEGAS — Advertisements are popping up along the Las Vegas Strip this week that challenge Amazon Web Services’ position in the cloud market — and the perpetrator is competitor IBM. As an estimated 9,000 IT pros have come to Las Vegas for AWS re:Invent, Amazon’s second cloud conference, IBM has taken the opportunity to promote its cloud services and partnership with SoftLayer, saying that the company powers 270,000 more websites than Amazon. The ads additionally state that “The IBM cloud offerings also support 30% more of the most popular websites than anyone else in the world.”
Conference attendees have been buzzing about the ads, which have adorned hotel shuttle buses, been digitally projected across the Fashion Show Mall next to the Treasure Island hotel and taken up small billboards in hotel hallways, including at the Venetian, where AWS re:Invent takes place. Andy Jassy, senior vice president of AWS, addressed the ads during today’s keynote address.
“It’s creative, I’ll say that,” Jassy said. “It’s a way to jump up and down … to try to distract customers.”
No one would argue that IBM has a bigger cloud business than AWS, he added.
This ad campaign highlights how the cloud market is heating up rivalries among vendors. As the industry and the products mature, vendors are looking to rise to the top, fighting against competitors for enterprise customers and market share.
Similarly, in August, Microsoft took shots at Google on its blog, claiming that the company’s services carry many “hidden costs.”
HyTrust will add encryption to its cloud security software following its acquisition of HighCloud Security this week.
HyTrust Inc. already enforces access controls at the management layer of virtual environments so that only authorized users have access to VMs. HighCloud offers encryption of cloud workloads with key management that gets handled at the customer site – an important factor for security-conscious companies considering Infrastructure as a Service.
The two can already be used together, but HighCloud’s software will be integrated into HyTrust to make encryption and key management invisible to the end user, according to HyTrust CEO Eric Chiu in a blog post about the acquisition.
Amazon Web Services has lowered the price of its second-generation standard instances by 10% across the board, continuing the downward trend of IaaS pricing.
The EC2 M3 instances debuted last November, triggering a price reduction for the previous generation of instances at launch. Now the second generation is seeing its own prices fall.
The AWS Blog cited two examples of the 10% price cut this morning: the m3.xlarge on-demand instance was $0.50 per hour and is now $0.45 per hour. The m3.2xlarge on-demand instance was $1.00 per hour and is now $0.90 per hour. Reserved EC2 M3 instances are now 15% cheaper, too.
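The cited figures are a straight 10% reduction, which is easy to sanity-check. A minimal sketch (prices in USD per hour, taken from the examples above):

```python
# Sanity check of the 10% on-demand price cut on the two M3
# instance types cited by the AWS Blog (USD per hour).
old_prices = {"m3.xlarge": 0.50, "m3.2xlarge": 1.00}

# Apply the 10% cut and round to the cent.
new_prices = {name: round(price * 0.90, 2) for name, price in old_prices.items()}

print(new_prices["m3.xlarge"])   # 0.45
print(new_prices["m3.2xlarge"])  # 0.9
```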
Even as Infrastructure as a Service (IaaS) providers continue to cut cloud pricing in an effort to lure new customers, it’s not clear just how many IT shops will take the bait and move away from their current cloud computing deployment plans because of a 10% price reduction. Cloud pricing is just one of the many factors that go into the service provider selection process.
There was hope among Rackspace users this week that the company’s acquisition of LiteStack would improve cloud provisioning times, but Rackspace officials said the technology isn’t going to be offered as a product for some time.
LiteStack Inc., the open-source hypervisor company acquired by Rackspace Inc. this week, developed technology based on Google’s Native Client that encapsulates applications rather than virtualizing individual servers. This technology, called ZeroVM, can be provisioned in less than five milliseconds, according to LiteStack’s wiki page.
IT pros who use Rackspace’s Cloud Servers Infrastructure as a Service were immediately interested in how this provisioning time could be applied to improve existing offerings, but company officials said that isn’t where the technology is headed.
Eventually, there will be Rackspace product offerings based on ZeroVM, but not for at least a year if not multiple years, according to Bret Piatt, senior director of corporate development and strategy for Rackspace.
Instead, according to Rackspace spokespeople, what Rackspace has really acquired here is the beginning of an open source community that could change the way computing is done when addressing large sets of data, such as with Hadoop. ZeroVM’s lightweight containers can be provisioned as fast as opening a browser tab, according to Van Lindberg, VP of intellectual property for Rackspace.
Lindberg said the app can be brought to the data rather than having to bring the data to the app for processing, which could also make big data analytics go much faster.
Analysts say Rackspace could also be after additional open-source programming prowess to add to its development team.
“They’re not buying the company so much as they’re picking up the software talent that created ZeroVM,” said Carl Brooks, analyst with Boston-based 451 Research.
Financial terms of the acquisition were not disclosed.
Amazon Web Services will allow users to change the size of reserved instances – a capability that is high on cloud customers’ wish lists.
Amazon Web Services (AWS) customers were left wanting more when Amazon first relaxed restrictions around reserved instance (RI) networking and geographic location last month. RIs can now be moved among availability zones and between EC2 Classic and virtual private cloud networks.
Several customers had the idea of ‘trading in’ instances within a certain total pool of resources in order to resize them – and that appears to be exactly what Amazon has done.
The AWS blog has the breakdown of compute units that can be traded between instance sizes. For example, one 8xlarge instance translates into 64 small instances, so 64 small instances can be combined into one 8xlarge, or one 8xlarge can be broken up into 64 small instances.
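The trade-in rule boils down to keeping the total footprint of normalized compute units constant. A minimal sketch of that math; the unit factors below are assumptions for illustration (a small counts as 1 unit and an 8xlarge as 64, per the example above), not AWS's published table:

```python
# Hypothetical normalized-unit factors for RI size modifications,
# assumed for illustration: each step up doubles the footprint.
UNITS = {
    "small": 1,
    "medium": 2,
    "large": 4,
    "xlarge": 8,
    "2xlarge": 16,
    "4xlarge": 32,
    "8xlarge": 64,
}

def can_modify(current, target):
    """A modification is allowed only if the total footprint
    (sum of normalized units) is unchanged."""
    footprint = lambda sizes: sum(UNITS[s] for s in sizes)
    return footprint(current) == footprint(target)

# 64 smalls can be combined into one 8xlarge...
print(can_modify(["small"] * 64, ["8xlarge"]))      # True
# ...but not into two 8xlarges, since the footprint would double.
print(can_modify(["small"] * 64, ["8xlarge"] * 2))  # False
```

Note that, as the article points out below, these trades only apply within a single instance family.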
“It’s a near perfect answer to the wish we expressed back in September,” said Nicolas Fonrose, founder of Teevity, a cloud computing monitoring software startup based in France. “I’m sure this is going to really increase RI usage since it removes something that was really painful for any AWS user.”
However, there’s still room for even more added flexibility with RIs, Fonrose pointed out. Today these modifications can only be performed within instance families (m1, m2, m3 and c1).
“There’s one thing that users are still locked to: instance families,” he said.
I attended the NYC OpenStack Meetup this week, which focused on understanding what place Amazon EC2 APIs should hold for OpenStack design and implementation. It was billed as the third round of the AWS API debate, with the first two rounds held on the West Coast. And this event did not disappoint.
The audience seemed more focused on enterprise applicability versus a theoretical discussion of AWS APIs. I suppose this was because these enterprise clients and IT bosses want to know if they can make OpenStack work with some of the rogue AWS implementations their companies already own.
Randy Bias, CTO and co-founder of Cloudscaling, was quick to point out that the OpenStack community already uses an OpenStack version of the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) API.
Attendee concerns centered on whether developers could depend on AWS to keep the APIs intact, so that if an OpenStack private cloud developer made a call to the API, they could be sure it would work. IT pros know that how much you rely on each public cloud’s APIs affects the portability of clouds. So, it was encouraging to hear that AWS APIs are not being deprecated and, therefore, are reliable for multi-cloud architectures for the foreseeable future.
Randy Bias, Nati Shalom, CTO and founder of GigaSpaces, and Alex Freedland, CFO of Mirantis, all offered a healthy divergence of views on how they see OpenStack evolving and what is needed to strengthen the industry.
Freedland postulated that Moore’s Law applies to cloud computing; the acceleration of innovation and the financial impact on companies will drive cloud adoption, he added. Bias and Shalom took a more technology-focused, ‘If you build it, they will come’ view.
But all three speakers agreed there are two things driving the adoption of, or at least investigation into, the use of OpenStack in a private cloud: an IT manager’s desire to stay independent of any single cloud provider and the ability to interface a private cloud with outside providers for resources they don’t want to build in-house.
Most enterprise IT managers have been watching the public cloud race as a proof of concept and a way to shake out the contenders. Amazon Web Services has clearly been the winner with majority market share — and at the rate the company is spending money, it would be difficult for any one company to catch them, including IBM. The industry’s answer to combat AWS in the public cloud has been the OpenStack alliance of IBM, HP, Rackspace and others. But VMware’s recent announcement could put it in the running, too.
In May, Dell dropped out of the public cloud race, VMware entered the private cloud race with VMware vCloud Hybrid Service and IBM purchased SoftLayer. These announcements have created what I believe are the three main choices for enterprise cloud: AWS, OpenStack or VMware’s vCloud Hybrid Service.
Most enterprise IT managers look at their infrastructure through VMware-colored glasses; everything must be built on an existing VMware infrastructure. Because of this, they don’t need to worry about hypervisor choices to move forward. But that may not always be the case.
From a strategic point of view, it does not make sense to choose a public Infrastructure as a Service (IaaS) provider until you understand what your enterprise private cloud design will be. Most enterprises have not implemented a private cloud and are wrestling with ways to implement a service-oriented architecture (SOA) to provide more agile and responsive service to their clients, customers, staff and partners, while maintaining a firm risk-management discipline.
Take action; use your enterprise IaaS as a strategic differentiator, and leverage the public cloud for commoditized services, community/B2B, DevOps or global distribution needs.
While in the past, enterprise IT has shunned open source cloud, OpenStack is emerging as an unlikely leader in the long-term race. You can look at past IT leaders, such as IBM, Oracle, Microsoft, DEC, Novell, Cisco or Sun, as precedents of this kind of turnaround.
The growing number of OpenStack adopters, especially IBM, has made this path more appealing to improve enterprise portability options. Surveys show that many organizations expect to use multiple clouds in the future, and OpenStack offers the widest portability choice today. This is something your auditors will like, and we as enterprise IT managers know how important that is.
AWS is easily the leader in public cloud, with more than 55% market share and a long list of features and functions. Many enterprise IT managers let their developers play there, test their ideas there and, when done, bring the app in-house to build for production. For those shops that find the AWS service offerings too appealing to turn their back on, consider running Eucalyptus, which provides AWS-compatible APIs, alongside your OpenStack private cloud.
VMware’s vCloud Hybrid Service is the third enterprise cloud option, but it’s the least mature of the three. VMware owns the enterprise virtualization marketplace, and with that installed base and trained skillset already in the enterprise, vCloud could be a path of least resistance for many organizations.
Cisco, EMC and VMware have teamed up to form VCE, which has products that allow enterprise IT managers to move toward a more automated private cloud. With this path, you can connect easily with CSC, AT&T and Bluelock public clouds. VMware also supports Cloud Foundry, its open source Platform as a Service, but with the recent hybrid announcement and the departure of CTO Steve Herrod — who was its leading OpenStack advocate — VMware may be splitting off to defend its own ecosystem.
The community cloud is quickly becoming the more efficient way for enterprises to implement business-to-business connections. In the past, enterprises would create a VPN connection to each and every one of their business partners, which required working with many different partner IT shops with varying abilities. When I was working at a large financial company, setting up more than 500 B2B connections meant dealing with some small IT shops that did not have a clue about security practices or VPN connections. Often when we had outages, smaller companies were unable to provide support after hours to help restore the connection. But individual VPN connections were the best way to set up an isolated network connection to interface with business partners, suppliers and other supply chain partners.
Now many IT shops find it more practical to set up a community cloud to connect to more than five business partners. A community cloud allows you to have a common meeting place to exchange required information, and you no longer need to have untrusted partners connecting to your network — even if it was only to a DMZ.
I have set up two such community clouds: one for an insurance company and the other for a pharmaceutical company. We were able to set up Lightweight Directory Access Protocol (LDAP) and Security Assertion Markup Language (SAML) access, and then we created a virtual private cloud (VPC) for each client to connect into. We then connected each of those VPCs to the community cloud. In this case, if a client connection to the community cloud goes down, it is no longer the host company’s problem.
But community cloud implementations may not be for every enterprise. Many enterprises are still working to build out their private clouds before they move to a hybrid cloud or connect to a community cloud. And when working with enterprises trying to implement a community cloud, the bulk of the effort often involves getting stakeholders to agree to the connection, rather than building out of a public cloud or connecting to a SaaS provider.
What are your thoughts on using community clouds for B2B connections? Share your comments below or tweet us @TTintheCloud.
The differences between the last OpenStack Summit in San Diego and the one held this week in Portland are sizable. San Diego saw about 1,300 attendees; here, the number has more than doubled, to an estimated 2,600 to 2,800 in all. Instead of function rooms in a hotel, the conference has expanded to fill a convention center. Instead of a gathering of close-knit propellerheads, this Summit has seen new faces with a distinctive corporate air about them.
None of the above has gone unremarked-upon by conference organizers and presenters, of course, not by a long shot. This is “The Year of the User,” according to a keynote presentation by OpenStack Foundation executive director Jonathan Bryce. Tuesday’s sessions were a coming-out party for several household-name companies that run OpenStack, including BestBuy.com, PayPal, Samsung and Comcast. An HP session Wednesday was titled, “OpenStack to Enterprise: Boldly go…”
But while the growth of the Summit, as well as the buzz around OpenStack, has been undeniable, the industry remains in an early adopter phase with this technology. The companies presenting Tuesday were impressive and well-known, but note that it was BestBuy.com architects who showed up to present, not the Best Buy enterprise itself. PayPal, too, is a Web company; Samsung, a technology purveyor; Comcast, a service provider.
In other words, OpenStack appears to be in a boat very similar to the one Amazon Web Services (AWS) currently finds itself in, albeit a small fishing vessel compared to AWS’ hundred-foot yacht. In either case, the messaging is all about the enterprise, but scratch the surface, and the product is still all about the cutting-edge Web developer.
Sure, security is an important concern for any company moving to the cloud. But enterprise worries run much deeper than that.
As large enterprises try cloud computing — either by moving specific workloads to public cloud or by adding a fully automated private cloud — factors such as federation, automation, common management policies and transparency will surface.
When BMW embarked on its cloud project in 2008, its primary goal was to standardize technology across multiple data centers and business units, plus get better quality at a lower cost: the “Golden Egg” for most enterprises.
“We are nearly at the end of traditional infrastructure,” said Mario Mueller, vice president of IT infrastructure at BMW. “We had clear targets: zero downtime; and with the solution we had, that wasn’t possible.”
But even a long-established company such as BMW, with skilled IT teams in locations throughout the world, had questions about where to start with cloud. “How do you do all the automation?” Mueller said. “How do we implement security? How do we do the identity management?”
Mueller and his team at BMW looked to the Open Data Center Alliance (ODCA) for guidance on building a private cloud to tackle those questions and, ultimately, get the agility, speed and uptime it had hoped for from the technology.
Mueller also happens to be chairman of the ODCA, which was established in 2010 and aims to create a unified voice for cloud customers. More than 300 companies are members and look to the group for examples of cloud applications that help show the way.
Private cloud: Just the beginning
It was clear from the start that private cloud wasn’t the end-game for BMW, said Mueller, nor should it be.
“The real target for most enterprises is the hybrid [cloud] model,” he said. “We have use of a new data center in Iceland where we do high-performance computing; we will get into the hybrid cloud model there.”
Benefits of cloud computing may not be immediate; it takes some time to get things right. Enterprises need to establish a successful private cloud first — and get all the benefits they can there — before moving workloads out of the company, Mueller emphasized.
But in the end, it doesn’t matter which technology you’re using. “It’s all about cost, quality, compliance and security in the infrastructure,” he added.