A federal court ruling on the government’s access to data stored offshore by U.S.-based companies could have far-reaching impacts on the cloud market.
A federal district judge in New York ruled this week that Microsoft had to turn over a customer’s emails stored in Ireland in response to a warrant issued earlier this year. Microsoft argued that it’s unlawful for prosecutors to seize customer data held outside the U.S., but Judge Loretta Preska told the company that the location of its data was immaterial.
“It is a question of control, not a question of the location of that information,” Preska said, according to Reuters.
It’s unclear how much damage this could do to the U.S. cloud computing industry, but the stakes are high: email has been one of the most popular workloads in the cloud. Over the next 12 months, 38% of enterprises plan to deploy email in the public cloud, second only to test and development, according to the TechTarget Cloud Infrastructure Research Survey Q2 2014.
The ruling comes as Microsoft tries to make inroads in Europe with its Azure cloud and chip away at Amazon’s lead in the market. It also follows last year’s revelations about the U.S. National Security Agency’s secretive data collection around the world that the nonprofit Information Technology and Innovation Foundation estimated at the time could cost the U.S. cloud computing industry $22 billion to $35 billion over the next three years. Other analysts have put the figure even higher.
Security and control of data in cloud environments are major hurdles for enterprises, with more than a third of IT pros citing those two issues as obstacles to adopting cloud computing, according to the TechTarget survey.
Providers have been building or purchasing data centers around the world, in part to localize data in countries in Europe and elsewhere with stricter storage regulations. Even so, this ruling could open the door for European and other localized cloud providers to gain traction in a market dominated by U.S.-based vendors.
The judge’s order has been temporarily suspended while Microsoft challenges the decision in the 2nd U.S. Circuit Court of Appeals, in what is believed to be the first case in which a corporation has challenged a warrant for data held in another nation. AT&T, Apple Inc., Cisco Systems Inc. and Verizon Communications Inc. all submitted briefs in support of Microsoft’s appeal.
The judge’s decision centered on a sealed investigation that involved a warrant a New York prosecutor served for a Microsoft customer’s emails stored in Dublin, Ireland.
ATLANTA — Red Hat was the talk of the OpenStack Summit this week after it made headlines concerning an alleged policy of not supporting Red Hat Enterprise Linux customers who use non-Red Hat distros of OpenStack.
Red Hat has chosen not to provide support to its commercial Linux customers if they use rival versions of OpenStack, The Wall Street Journal reported this week.
At first, this drew ire toward Red Hat from attendees at the summit. To quote one OpenStack guru at the time, “What a bunch of [expletive redacted].”
But then Paul Cormier, president of Products and Technologies for Red Hat, denied the Journal’s claim on the official Red Hat blog.
“Users are free to deploy Red Hat Enterprise Linux (RHEL) with any OpenStack offering, and there is no requirement to use our OpenStack technologies to get a Red Hat Enterprise Linux subscription,” Cormier wrote.
Just to make sure, I sought further clarification, because the question raised by the Journal wasn’t whether users are required to use Red Hat OpenStack if they want RHEL — the question was whether RHEL will be supported in environments where another OpenStack distro is in place.
Here is part of the answer I got from Tim Yeaton, senior vice president, Infrastructure Group, Red Hat:
“RHEL guests are certified to hypervisor platforms, such as KVM, not to OpenStack per se.”
Yeaton went on to say:
Since we are in the business of building mission-critical cloud infrastructure, delivering on stringent SLAs for enterprise customers based on RHEL, KVM, and OpenStack, we must take responsibility for enterprise-readiness and supportability of our RHEL guests on other vendors’ hypervisors within their OpenStack platforms, and the underlying Linux that is being used within them.
In Red Hat’s enterprise licensing agreement, which is freely available on its website, there is no mention of OpenStack at all in the main body of the agreement, but the following statement can be found in Appendix I:
Red Hat Enterprise Linux is supported solely when used as the host operating system for Red Hat Enterprise Linux OpenStack Platform or when used as the guest operating system on virtual machines created and managed with this Subscription.
This matches up with what Yeaton said about RHEL being certified to the hypervisor rather than OpenStack itself. The second clause of the sentence appears to allow for other distros of OpenStack, since its scope is limited to the virtual machine, not the cloud infrastructure.
An FAQ page on the Red Hat website states that when third-party software and/or uncertified hardware/hypervisors are the potential suspect in a support case, Red Hat reserves the right to ask customers to attempt to recreate the issue with Red Hat shipped/supported software to aid in determining the problem.
This has a faint whiff of the infamous Oracle VM policy, which many attendees at OpenStack Summit brought up when they heard about the Journal story.
To be fair, Red Hat’s language is much less clear than Canonical’s in the Ubuntu support agreement, which says, in part, that the license must not place restrictions on other software that is distributed along with it. For example, the license must not insist that all other programs distributed on the same medium be free software.
But there doesn’t seem to be any evidence in publicly available resources that Red Hat will remove or refuse support for RHEL users running non-Red Hat distros of OpenStack. It would be interesting to see the documents the WSJ reporter cited — at this point, the onus would appear to be on the Journal to back up its story.
LAS VEGAS — Advertisements are popping up along the Las Vegas strip this week that challenge Amazon Web Services’ position in the cloud market — and the perpetrator is competitor IBM. As an estimated 9,000 IT pros have come to Las Vegas for AWS re:Invent, Amazon’s second cloud conference, IBM has taken the opportunity to promote its cloud services and partnership with SoftLayer, saying that the company powers 270,000 more websites than Amazon. The ads additionally state that “The IBM cloud offerings also support 30% more of the most popular websites than anyone else in the world.”
Conference attendees have been buzzing about the ads, which have adorned shuttle buses from the hotels, have been digitally projected across the Fashion Show Mall next to Treasure Island hotel and take up small billboards in hotel hallways, including the Venetian, where AWS re:Invent takes place. Andy Jassy, senior vice president of AWS, addressed the ads during today’s keynote address.
“It’s creative, I’ll say that,” Jassy said. “It’s a way to jump up and down … to try to distract customers.”
No one would argue that IBM has a bigger cloud business than AWS, he added.
This ad campaign highlights how the cloud market is heating up rivalries among vendors. As the industry and the products mature, vendors are looking to rise to the top, fighting against competitors for enterprise customers and market share.
Similarly, in August, Microsoft took shots at Google on its blog, claiming that its rival’s services carry many “hidden costs.”
Image credit: IBM
HyTrust will add encryption to its cloud security software following its acquisition of HighCloud Security this week.
HyTrust Inc. already enforces access controls at the management layer of virtual environments so that only authorized users have access to VMs. HighCloud offers encryption of cloud workloads with key management handled at the customer site – an important factor for security-conscious companies considering Infrastructure as a Service.
The two can already be used together, but HighCloud’s software will be integrated into HyTrust to make encryption and key management invisible to the end user, according to a blog post about the acquisition by HyTrust CEO Eric Chiu.
Amazon Web Services has lowered the price of its second-generation standard instances by 10% across the board, continuing the downward trend of IaaS pricing.
The EC2 M3 instances, which debuted last November, triggered a price cut on the previous generation of instances when they launched. Now the second generation itself is seeing prices fall.
The AWS Blog cited two examples of the 10% price cut this morning: the m3.xlarge on-demand instance was $0.50 per hour and is now $0.45 per hour, and the m3.2xlarge on-demand instance was $1.00 per hour and is now $0.90 per hour. Reserved EC2 M3 instances are now 15% cheaper, too.
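The arithmetic behind the cut is simple; here is a quick sketch using the two example instances above (a 730-hour month is assumed for the savings estimate):

```python
# On-demand hourly prices before the 10% cut, from the two examples
# cited on the AWS Blog.
old_prices = {"m3.xlarge": 0.50, "m3.2xlarge": 1.00}

def apply_cut(prices, percent):
    """Return a new price table with a flat percentage reduction."""
    return {name: round(p * (1 - percent / 100), 4) for name, p in prices.items()}

new_prices = apply_cut(old_prices, 10)
print(new_prices)  # {'m3.xlarge': 0.45, 'm3.2xlarge': 0.9}

# Rough savings over a month of continuous use (~730 hours):
monthly_savings = {n: round((old_prices[n] - new_prices[n]) * 730, 2)
                   for n in old_prices}
```

For a single always-on m3.xlarge, that works out to about $36.50 a month, which illustrates the article's closing point: real money, but rarely enough by itself to change a provider decision.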
Even as Infrastructure as a Service (IaaS) providers continue to cut cloud pricing in an effort to lure new customers, it’s not clear just how many IT shops will take the bait and move away from their current cloud computing deployment plans because of a 10% price reduction. Cloud pricing is just one of the many factors that go into the service provider selection process.
There was hope among Rackspace users this week that the company’s latest acquisition of LiteStack would improve cloud provisioning times, but Rackspace officials said the technology isn’t going to be offered as a product for some time.
LiteStack Inc., the open-source hypervisor company acquired by Rackspace Inc. this week, developed technology based on Google’s Native Client that encapsulates applications rather than virtualizing individual servers. This technology, called ZeroVM, can be provisioned in less than five milliseconds, according to LiteStack’s wiki page.
IT pros who use Rackspace’s Cloud Servers Infrastructure as a Service were immediately interested in how this provisioning time could be applied to improve existing offerings, but company officials said that isn’t where the technology is headed.
Eventually, there will be Rackspace product offerings based on ZeroVM, but not for at least a year if not multiple years, according to Bret Piatt, senior director of corporate development and strategy for Rackspace.
Instead, according to Rackspace spokespeople, what Rackspace has really acquired is the beginning of an open source community that could change the way computing is done on large data sets, such as with Hadoop. ZeroVM’s lightweight containers can be provisioned as fast as opening a browser tab, according to Van Lindberg, VP of intellectual property for Rackspace.
Lindberg said the app can be brought to the data rather than having to bring the data to the app for processing, which could also make big data analytics go much faster.
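Lindberg’s point about bringing the app to the data can be illustrated with a toy sketch. Everything here (node names, data, function names) is invented for illustration and says nothing about ZeroVM’s actual implementation; the contrast is only between where the computation runs:

```python
# Two data sets living on two (pretend) storage nodes.
DATA_NODES = {
    "node-1": [3, 1, 4, 1, 5],
    "node-2": [9, 2, 6, 5, 3],
}

def move_data_to_app(nodes):
    # Traditional model: pull every record across the network,
    # then compute centrally.
    all_records = [x for records in nodes.values() for x in records]
    return sum(all_records)

def move_app_to_data(nodes, fn):
    # The model Lindberg describes: run a small computation next to
    # each data set and ship back only the tiny partial results.
    partials = [fn(records) for records in nodes.values()]
    return sum(partials)

# Same answer either way; the second moves far fewer bytes.
assert move_data_to_app(DATA_NODES) == move_app_to_data(DATA_NODES, sum)
```

The design win is that network transfer scales with the size of the results rather than the size of the data, which is why the approach matters most for big data workloads like Hadoop.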
Analysts say Rackspace could also be after additional open-source programming prowess to add to its development team.
“They’re not buying the company so much as they’re picking up the software talent that created ZeroVM,” said Carl Brooks, analyst with Boston-based 451 Research.
Financial terms of the acquisition were not disclosed.
Amazon Web Services will allow users to change the size of reserved instances – a capability that is high on cloud customers’ wish lists.
Amazon Web Services (AWS) customers were left wanting more when Amazon first relaxed restrictions around reserved instance (RI) networking and geographic location last month. RIs can now be moved among availability zones and between EC2 Classic and virtual private cloud networks.
Several customers had the idea of ‘trading in’ instances within a certain total pool of resources in order to resize them – and that appears to be exactly what Amazon has done.
The AWS blog has the breakdown of compute units that can be traded between instance sizes. For example, one 8xlarge instance translates into 64 small instances, so 64 small instances can be combined into one 8xlarge, or one 8xlarge can be broken up into 64 small instances.
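The trade-in rule can be modeled with per-size normalization units. The factors below are illustrative, chosen so that 64 smalls equal one 8xlarge as in the example above; the authoritative table is on the AWS blog:

```python
# Illustrative normalization units per instance size within one family.
# Chosen so that 64 x small == 1 x 8xlarge, matching the example in the
# post; consult the AWS blog for the official factors.
UNITS = {"small": 1, "medium": 2, "large": 4, "xlarge": 8,
         "2xlarge": 16, "4xlarge": 32, "8xlarge": 64}

def footprint(instances):
    """Total units for a list of (size, count) reservations."""
    return sum(UNITS[size] * count for size, count in instances)

def can_trade(before, after):
    """A modification is allowed only if the total footprint is unchanged."""
    return footprint(before) == footprint(after)

# 64 small instances can become one 8xlarge...
print(can_trade([("small", 64)], [("8xlarge", 1)]))                 # True
# ...but not one 8xlarge plus a spare small.
print(can_trade([("small", 64)], [("8xlarge", 1), ("small", 1)]))   # False
```

The same check explains the family restriction Fonrose mentions below: the unit table only makes sense within a single instance family, so an m1 footprint can’t be traded for a c1 one.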
“It’s a near perfect answer to the wish we expressed back in September,” said Nicolas Fonrose, founder of Teevity, a cloud computing monitoring software startup based in France. “I’m sure this is going to really increase RI usage since it removes something that was really painful for any AWS user.”
However, there’s still room for even more added flexibility with RIs, Fonrose pointed out. Today these modifications can only be performed within instance families (m1, m2, m3 and c1).
“There’s one thing that users are still locked to: instance families,” he said.
I attended the NYC OpenStack Meetup this week, which focused on understanding what place Amazon EC2 APIs should hold for OpenStack design and implementation. It was billed as the third round of the AWS API debate, with the first two rounds held on the West Coast. And this event did not disappoint.
The audience seemed more focused on enterprise applicability versus a theoretical discussion of AWS APIs. I suppose this was because these enterprise clients and IT bosses want to know if they can make OpenStack work with some of the rogue AWS implementations their companies already own.
Randy Bias, co-founder and CTO of Cloudscaling, was quick to point out that the OpenStack community already uses an OpenStack version of the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) API.
Attendee concerns centered on whether developers could depend on AWS to keep the APIs intact, so that if an OpenStack private cloud developer makes a call to the API, they can be sure it will work. IT pros know that how much you rely on each of the public cloud APIs affects the portability of clouds. So, it was encouraging to hear AWS APIs are not being deprecated and, therefore, are reliable for multi-cloud architectures for the foreseeable future.
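Why a stable EC2-compatible API matters for portability can be sketched in a few lines. `Ec2Client` and both endpoint URLs here are hypothetical placeholders, not a real SDK; the point is that only the endpoint changes, never the application code:

```python
class Ec2Client:
    """A toy stand-in for an EC2-compatible API client (not a real SDK)."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def run_instances(self, image_id, count=1):
        # A real client would sign and POST an EC2 RunInstances request;
        # here we just describe the call that would be made.
        return {"endpoint": self.endpoint, "action": "RunInstances",
                "ImageId": image_id, "MaxCount": count}

def launch(client):
    """Cloud-agnostic application code: only the injected client differs."""
    return client.run_instances("ami-12345678", count=2)

# Hypothetical endpoints: public AWS vs. an OpenStack private cloud
# exposing its EC2-compatible API.
aws = Ec2Client("https://ec2.us-east-1.amazonaws.com")
private = Ec2Client("https://openstack.example.com/services/Cloud")

# Identical application logic against both clouds:
assert launch(aws)["action"] == launch(private)["action"]
```

If AWS deprecated or changed a call, every `launch()`-style function written against it would break on both clouds at once, which is exactly the dependence attendees were worried about.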
Randy Bias; Nati Shalom, CTO and founder of GigaSpaces; and Alex Freedland, CFO of Mirantis, all offered healthily divergent views on how they see OpenStack evolving and what is needed to strengthen the industry.
Freedland postulated that Moore’s Law applies to cloud computing; the acceleration of innovation and its financial impact on companies will drive cloud adoption, he added. Bias and Shalom took a more technology-focused, ‘If you build it, they will come’ view.
But all three speakers agreed there are two things driving the adoption of, or at least investigation into, OpenStack in a private cloud: an IT manager’s desire to stay independent of any single cloud provider, and the ability to connect a private cloud to outside resources they don’t want to build in-house.
Most enterprise IT managers have been watching the public cloud race as a proof of concept and a way to shake out the contenders. Amazon Web Services has clearly been the winner with majority market share — and at the rate the company is spending money, it would be difficult for any one company to catch them, including IBM. The industry’s answer to combat AWS in the public cloud has been the OpenStack alliance of IBM, HP, Rackspace and others. But VMware’s recent announcement could put it in the running, too.
In May, Dell dropped out of the public cloud race, VMware entered the private cloud race with VMware vCloud Hybrid Service and IBM purchased SoftLayer. These announcements have created what I believe are the three main choices for enterprise cloud: AWS, OpenStack or VMware’s vCloud Hybrid Service.
Most enterprise IT managers look at their infrastructure through VMware-colored glasses; everything must be built on an existing VMware infrastructure. Because of this, they don’t need to worry about hypervisor choices to move forward. But that may not always be the case.
From a strategic point of view, it does not make sense to choose a public Infrastructure as a Service (IaaS) provider until you understand what your enterprise private cloud design will be. Most enterprises have not implemented a private cloud and are wrestling with ways to implement a service-oriented architecture (SOA) to provide more agile and responsive business to their clients, customers, staff and partners, while maintaining a firm risk-management discipline.
Take action; use your enterprise IaaS as a strategic differentiator, and leverage the public cloud for commoditized services, community/B2B, DevOps or global distribution needs.
While enterprise IT has shunned open source cloud in the past, OpenStack is emerging as an unlikely leader in the long-term race. Past IT leaders, such as IBM, Oracle, Microsoft, DEC, Novell, Cisco and Sun, offer precedents for this kind of turnaround.
The growing number of OpenStack adopters, especially IBM, has made this path more appealing to improve enterprise portability options. Surveys show that many organizations expect to use multiple clouds in the future, and OpenStack offers the widest portability choice today. This is something your auditors will like, and we as enterprise IT managers know how important that is.
AWS is easily the leader in public cloud, with more than 55% market share and a long list of features and functions. Many enterprise IT managers let their developers play there, test their ideas there and, when done, bring the app in-house to build for production. For those shops that find the AWS service offerings too appealing to turn their back on, consider Eucalyptus, which brings AWS-compatible APIs to a private cloud.
VMware’s vCloud Hybrid Service is the third enterprise cloud option, but it’s the least mature of the three. VMware owns the enterprise virtualization marketplace, and with that installed base and trained skillset already in the enterprise, vCloud could be a path of least resistance for many organizations.
Cisco, EMC and VMware have teamed up to form VCE, which has products that allow enterprise IT managers to move toward a more automated private cloud. With this path, you can connect easily with CSC, AT&T and Bluelock public clouds. VMware also backs Cloud Foundry, its open source platform-as-a-service offering, but with the recent hybrid announcement and the departure of CTO Steve Herrod – who was its leading open source advocate – VMware may be splitting off to defend its own ecosystem.
The community cloud is quickly becoming the more efficient way for enterprises to implement business-to-business connections. In the past, enterprises would create a VPN connection to each and every one of their business partners, which required working with many different partner IT shops with varying abilities. When I was working at a large financial company, setting up more than 500 B2B connections meant dealing with some small IT shops that did not have a clue about security practices or VPN connections. Often when we had outages, smaller companies were unable to provide support after hours to help restore the connection. But individual VPN connections were the best way to set up an isolated network connection to interface with business partners, suppliers and other supply chain partners.
Now many IT shops find it more practical to set up a community cloud to connect to more than five business partners. A community cloud gives everyone a common meeting place to exchange required information, and you no longer need untrusted partners connecting to your network – even if only to a DMZ.
I have set up two such community clouds: one for an insurance company and the other for a pharmaceutical company. We set up Lightweight Directory Access Protocol (LDAP) and Security Assertion Markup Language (SAML) access, then created a virtual private cloud (VPC) for each client to connect into and connected each of those VPCs to the community cloud. That way, if a client’s connection to the community cloud goes down, it is no longer the host company’s problem.
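The hub-and-spoke layout described above can be modeled in a few lines. All names here are illustrative; the point is the isolation property: each partner reaches only its own VPC and the hub, never another partner or the host network directly:

```python
# A toy model of the hub-and-spoke community cloud described above.
# Partner names, VPC IDs and auth methods are illustrative.
community_cloud = {
    "hub": "community-cloud",
    "spokes": {
        "partner-a": {"vpc": "vpc-a", "auth": ["LDAP", "SAML"]},
        "partner-b": {"vpc": "vpc-b", "auth": ["SAML"]},
    },
}

def reachable_from(partner, topology):
    """A partner can reach only its own VPC and the hub, never another spoke."""
    spoke = topology["spokes"][partner]
    return {spoke["vpc"], topology["hub"]}

# Partner A sees the hub but not partner B's VPC, so a failed or
# compromised partner link stays isolated from everyone else.
assert "vpc-b" not in reachable_from("partner-a", community_cloud)
```

This is the property that replaces per-partner VPNs: adding a sixth partner means adding one spoke, not renegotiating a mesh of point-to-point connections.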
But community cloud implementations may not be for every enterprise. Many enterprises are still working to build out their private clouds before they move to a hybrid cloud or connect to a community cloud. And when working with enterprises trying to implement a community cloud, the bulk of the effort often goes into getting stakeholders to agree to the connection, rather than into building out a public cloud or connecting through a SaaS provider.
What are your thoughts on using community clouds for B2B connections? Share your comments below or tweet us @TTintheCloud.