A lot has happened in the cloud market in the past year, especially concerning public cloud vendors. Some, including Hewlett Packard Enterprise and Verizon, threw in the towel and closed their public clouds, while others, such as Dell, made major acquisitions. Amazon Web Services, Microsoft Azure and Google Cloud Platform continue to dominate the public cloud market and show no signs of slowing down. But what will the future bring?
We asked the SearchCloudComputing Advisory Board how they expect the public cloud vendor landscape to evolve or change in 2016. Here’s a look at their predictions.
There are several things I see evolving in 2016 with public cloud vendors. First, 2016 will be the year public cloud vendors will establish their identities, especially with the top three vendors: Amazon Web Services (AWS), Microsoft Azure and Google’s Cloud Platform (GCP).
AWS will stay the course, offering a broad range of cloud services, storage and applications — I do not see much deviation. Microsoft has stepped up its game to focus on providing an interconnected platform that improves business communications and user experiences across personal computing, cloud, productivity and business processes. Bots and Skype for Business will be two areas where Microsoft Azure will drive innovation and market awareness.
Finally, GCP owns the market for data-intensive and born-in-the-cloud enterprises. GCP has a deep tradition in compute, storage and big data/analytics. GCP wants to extend its reach to own machine learning environments that will enable a whole new world of applications that can see, hear and learn. I believe GCP has the most integrated and complete vision of all the providers.
In the last half of 2016, we will see large carriers that have deployed software-defined networking and network functions virtualization environments begin offering public cloud services as a way to compete against the big three, as well as to address enterprise customer demand that carriers move beyond just providing connectivity and dial tone.
The mega public cloud platforms like AWS and Microsoft Azure are functionally mature and growing fast, but there is also growing evidence they are safe and secure — maybe safer than your enterprise.
In the early days of public cloud, skeptical potential customers asked if the cloud was “secure” — a somewhat vague question, but a legitimate concern. In the past five years, the questions have become more specific and focused on compliance. [Enterprises ask] ‘Is [the cloud] HIPAA compliant?’ and so forth. In the early years, those questions were easy to answer because the cloud vendors had so few [certifications]; these days, they are just as easy to answer because the vendors have so many. In fact, if you look at the compliance pages for Azure and AWS, they list so many certifications, attestations and assurances that it looks like they had to get their user experience experts to organize all that data to make it understandable. There are dozens of categories, including those for certain countries, such as EU Data Protection, for certain industries, such as the Payment Card Industry, and for the government, such as FedRAMP.
This is a lot of evidence that these public cloud vendors know how to manage their systems reliably. There have been some outages in all public clouds, but those get the headlines, not the underlying robustness.
The mega public cloud platforms’ infrastructure is modern and highly homogeneous, with everything in sight fully automated and audited — so it makes sense that it is easier to manage and secure. This is the trend to watch in 2016: With data centers across the world — Microsoft has 22 Azure regions, with five more coming soon, and Amazon has 12 AWS regions, with five more coming soon — plus the myriad certifications and the well-known agility and cost-efficiency benefits, going to the public cloud will become more and more difficult for enterprises to resist.
Gaurav “GP” Pal
Digital Services Platforms (DSPs) are coming soon, as cloud computing expertise matures within enterprises and container technologies enable application mobility and easier big data cluster management. DSPs are business-oriented infrastructure services that allow the creation of digital ecosystems of applications for specific industries. DSPs are built on commercial cloud infrastructure and include application management, security and data management at scale. Examples of DSPs are GE’s Predix for the Industrial Internet and Cloud.gov for federal application services.
DSPs will adopt, adapt and integrate multiple cloud services to deliver business-oriented services through automation. For example, an organization may use Microsoft’s Azure Active Directory for identity services, together with AWS Lambda serverless microservices and Google Analytics data in Google Cloud’s BigQuery. Platform architects at systems integrators and CIO shops with talented cloud engineers will start innovating and building DSPs that are increasingly interoperable and automated, using containers as well as the application programming interfaces and software development kits offered by cloud platforms.
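To make the composition idea concrete, here is a minimal, hypothetical sketch of a DSP layering one business-facing API over services from different providers. The class and method names are illustrative stand-ins, not real vendor SDKs; a production DSP would call Azure AD, Lambda and BigQuery through their actual APIs.

```python
# Hypothetical sketch: a Digital Services Platform composing services from
# multiple cloud providers behind one business-oriented interface.
# All provider classes and method names here are illustrative, not real SDKs.

class IdentityService:
    """Stand-in for a directory service such as Azure Active Directory."""
    def __init__(self):
        self._users = {}

    def register(self, user, role):
        self._users[user] = role

    def authorize(self, user, required_role):
        return self._users.get(user) == required_role


class AnalyticsService:
    """Stand-in for a managed analytics store such as BigQuery."""
    def __init__(self, rows):
        self._rows = rows

    def count_events(self, event_type):
        # Count rows matching the requested event type.
        return sum(1 for r in self._rows if r["event"] == event_type)


class DigitalServicesPlatform:
    """Glues identity and analytics into one industry-facing API."""
    def __init__(self, identity, analytics):
        self.identity = identity
        self.analytics = analytics

    def report(self, user, event_type):
        # The DSP enforces identity policy before touching the data service.
        if not self.identity.authorize(user, "analyst"):
            raise PermissionError(f"{user} may not run reports")
        return self.analytics.count_events(event_type)


identity = IdentityService()
identity.register("alice", "analyst")
analytics = AnalyticsService(
    [{"event": "login"}, {"event": "login"}, {"event": "purchase"}])
dsp = DigitalServicesPlatform(identity, analytics)
print(dsp.report("alice", "login"))  # → 2
```

The point of the sketch is the shape, not the implementations: each provider sits behind a narrow interface, so the platform can swap one vendor's service for another's without changing the business-facing call.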
By the dollars, 2016 will be another year in which emerging cloud providers capture a growing share of overall IT spend. This means larger companies such as AWS, Azure, IBM and Google Compute Engine will be building products and offerings to help bring enterprise companies into their clouds and tap into larger corporate client revenues. Meanwhile, niche providers will try to cement their positions by providing targeted solutions that solve industry-specific issues in a novel or cheaper way; examples include SAP, Salesforce.com and similar cloud providers.
Watching these cloud providers compete with each other will be instructive to understand their trajectories.
At this point, it comes as no surprise: the cloud computing market is growing fast, and it shows no signs of slowing down. As organizations seek out lower-cost, more flexible IT environments, many are turning to the cloud — whether private, public or hybrid — to make those goals a reality.
Global spending on cloud IT infrastructure is projected to grow at a CAGR of 15.1% — reaching a whopping $53.1 billion — by 2019, according to analyst firm IDC. This means cloud will account for 46% of all spending on enterprise IT infrastructure.
As cloud adoption grows, new trends sweep the market. Buzz continues to grow around container-based virtualization, new software development models and hybrid IT. To help keep pace with the cloud market, and the effects of these emerging IT models, SearchCloudComputing has formed an Advisory Board that consists of both cloud computing experts and users.
The SearchCloudComputing Advisory Board will offer additional focus on the trends that matter most to IT pros building and managing the cloud. Throughout the year, we will ask our Advisory Board members to share their insights on the market and answer your most pressing IT questions.
Here’s a brief introduction to our new Advisory Board members, and the trends they’re tracking in the world of cloud. Stay tuned for more.
Christopher Wilder, senior analyst and practice lead for cloud services and enterprise software at Moor Insights & Strategy
Christopher Wilder covers the cloud computing and infrastructure markets, as well as enterprise apps and the emerging Internet of Things. Wilder closely tracks cloud providers including AWS, Google, Microsoft, Hewlett Packard Enterprise, SoftLayer and Oracle, and also follows emerging technologies in the telecom and carrier markets, including network functions virtualization and software-defined networking.
Bill Wilder, CTO at Finomial
In addition to his role at Boston-based Finomial, a software as a service (SaaS) provider for the global hedge fund industry, Bill Wilder founded the Boston Azure Cloud User Group in 2009, a community-run group of Azure users that meets regularly to discuss the challenges and best practices associated with Microsoft’s public cloud platform. Wilder is a Microsoft Azure MVP and also focuses on cloud security and compliance, cloud architecture and platform as a service.
Alex Witherspoon, VP of platform engineering, FlightStats
Alex Witherspoon manages Portland, Ore.-based FlightStats’ IT infrastructure and software engineering teams, which handle the aggregation, processing and transport of global flight data for the company. This work accounts for roughly one-third of business at FlightStats, in terms of headcount, and 95% of its revenue. Witherspoon has also spearheaded the company’s migration to Amazon Web Services’ public cloud, and manages the company’s hybrid cloud environment.
Gaurav “GP” Pal, principal, stackArmor
At stackArmor, a cloud consulting firm and AWS partner in Potomac, Md., Gaurav “GP” Pal helps SaaS and cloud-based online businesses deliver secure and compliant services. GP has led large cloud migrations and information security programs for customers in highly regulated industries, such as healthcare, financial services, defense and public sector. He also serves as the industry chair for the University of Maryland’s Digital Innovation, Technology and Strategy Center of Excellence.
A former Red Hat executive has apparently taken the reins of Google Cloud Platform, in a move that could signal a bigger embrace of the open-source community by the cloud provider.
Brian Stevens, former Red Hat CTO, has reportedly been hired by Google as its vice president of cloud platforms. Stevens’ Twitter and LinkedIn profiles list his presumed new title, while Google says it doesn’t comment on individual hires.
Industry analysts praised Stevens’ work with Red Hat, and see the hire as a smart move by Google.
Stevens has been a tireless advocate for OpenStack and drove Red Hat’s involvement with the OpenStack Foundation and its leadership within the community, according to Dave Bartoletti, an analyst with Forrester Research, Inc., based in Cambridge, Mass.
“He understands how open source projects need to be supported and nurtured into something the enterprise can actually use,” Bartoletti said. “I expect Google sees in him someone who can help them become a leader in open source and OpenStack, and not just a contributor.”
Stevens understands the power of open source and how to set a vision — two attributes that could help Google as it looks to new leadership to direct its cloud strategy, according to David Linthicum, senior vice president at Cloud Technology Partners, a Boston-based consulting firm. Linthicum singled out Stevens’ role in getting Red Hat to embrace OpenStack and Docker.
“Were I in charge of the Google cloud strategy, Brian would be on my list of people to tap, so it’s not that much of a surprise,” Linthicum said.
Google is currently spearheading Kubernetes, an open-source container management project with the backing of some of the biggest vendors in the industry, and Google Compute Engine is compatible with a number of open-source tools. But Stevens can help Google craft an OpenStack strategy and lure developers to Google Compute Engine and Google App Engine, the company’s infrastructure as a service and platform as a service offerings, respectively.
“The battle for the developer mindshare in the cloud will be around API support, so I expect him to help build some bridges between Google’s cloud platforms and APIs and the broader OpenStack community,” Bartoletti said.
There are huge demands that come with the position of CTO, Linthicum said, and they’re not always related to technology. Google and Red Hat are both great companies to work for, but it may have been time for a change.
“I suspect after 12 years of doing that, he may be looking for new challenges,” Linthicum said.
“Good for Google, good for Brian, and certainly takes nothing away from Red Hat.”
A federal court ruling on the government’s access to data stored offshore by U.S.-based companies could have far-reaching impacts on the cloud market.
A federal district judge in New York ruled this week that Microsoft had to turn over a customer’s emails stored in Ireland in response to a warrant issued earlier this year. Microsoft argued that it’s unlawful for prosecutors to seize customer data held outside the U.S., but Judge Loretta Preska told the company that the location of its data was immaterial.
“It is a question of control, not a question of the location of that information,” Preska said, according to Reuters.
It’s unclear how this could damage the U.S. cloud computing industry, as email has been one of the most popular tools in the cloud. Over the next 12 months, 38% of enterprises plan to deploy the service in the public cloud, second only to test and development, according to the TechTarget Cloud Infrastructure Research Survey Q2 2014.
The ruling comes as Microsoft tries to make inroads in Europe with its Azure cloud and chip away at Amazon’s lead in the market. It also follows last year’s revelations about the U.S. National Security Agency’s secretive data collection around the world that the nonprofit Information Technology and Innovation Foundation estimated at the time could cost the U.S. cloud computing industry $22 billion to $35 billion over the next three years. Other analysts have put the figure even higher.
Security and control of data in cloud environments are major hurdles for enterprises, with more than a third of IT pros citing those two issues as obstacles to adopting cloud computing, according to the TechTarget survey.
Providers have been building or purchasing data centers around the world, in part to help localize data in countries in Europe and elsewhere that have stricter storage regulations, but this ruling could open the door for European-based and other localized cloud providers to gain traction in a market dominated by U.S.-based vendors.
The judge’s order has been temporarily suspended, as Microsoft intends to challenge the decision in the 2nd U.S. Circuit Court of Appeals, in what is believed to be the first case in which a corporation has challenged a warrant for data held in other nations. AT&T, Apple Inc., Cisco Systems Inc. and Verizon Communications Inc. all submitted briefs in support of Microsoft’s appeal.
The judge’s decision centered on a sealed investigation that involved a warrant a New York prosecutor served for a Microsoft customer’s emails stored in Dublin, Ireland.
ATLANTA — Red Hat was the talk of the OpenStack Summit this week after it made headlines concerning an alleged policy of not supporting Red Hat Enterprise Linux customers who use non-Red Hat distros of OpenStack.
Red Hat has chosen not to provide support to its commercial Linux customers if they use rival versions of OpenStack, The Wall Street Journal reported this week.
At first, this drew ire toward Red Hat from attendees at the summit. To quote one OpenStack guru at the time, “What a bunch of [expletive redacted].”
But then, Paul Cormier, president of Products and Technologies for Red Hat, issued a denial of the Journal’s claim on the official Red Hat blog.
“Users are free to deploy Red Hat Enterprise Linux (RHEL) with any OpenStack offering, and there is no requirement to use our OpenStack technologies to get a Red Hat Enterprise Linux subscription,” Cormier wrote.
Just to make sure, I sought further clarification, because the question raised by the Journal wasn’t whether users are required to use Red Hat OpenStack if they want RHEL — the question was whether RHEL will be supported in environments where another OpenStack distro is in place.
Here is part of the answer I got from Tim Yeaton, senior vice president, Infrastructure Group, Red Hat:
“RHEL guests are certified to hypervisor platforms, such as KVM, not to OpenStack per se.”
Yeaton went on to say:
Since we are in the business of building mission-critical cloud infrastructure, delivering on stringent SLAs for enterprise customers based on RHEL, KVM, and OpenStack, we must take responsibility for enterprise-readiness and supportability of our RHEL guests on other vendors’ hypervisors within their OpenStack platforms, and the underlying Linux that is being used within them.
In Red Hat’s enterprise licensing agreement, which is freely available on its website, there is no mention of OpenStack at all in the main body of the agreement, but the following statement can be found in Appendix I:
Red Hat Enterprise Linux is supported solely when used as the host operating system for Red Hat Enterprise Linux OpenStack Platform or when used as the guest operating system on virtual machines created and managed with this Subscription.
This matches up with what Yeaton said about RHEL being certified to the hypervisor rather than OpenStack itself. The second clause of the sentence appears to allow for other distros of OpenStack, since its scope is limited to the virtual machine, not the cloud infrastructure.
An FAQ page on the Red Hat website states that when third-party software and/or uncertified hardware/hypervisors are the potential suspect in a support case, Red Hat reserves the right to ask customers to attempt to recreate the issue with Red Hat shipped/supported software to aid in determining the problem.
This has a faint whiff of the infamous Oracle VM policy, which many attendees at OpenStack Summit brought up when they heard about the Journal story.
To be fair, Red Hat’s language is much less clear than Canonical’s in the Ubuntu support agreement, which says, in part, that a license must not place restrictions on other software distributed along with it. For example, the license must not insist that all other programs distributed on the same medium be free software.
But there doesn’t seem to be any evidence in publicly available resources that Red Hat will remove or refuse support to RHEL users running non-Red Hat distros of OpenStack. It would be interesting to see what the documents are that the WSJ reporter has cited — at this point, the onus would appear to be on the Journal to back up its story.
LAS VEGAS — Advertisements are popping up along the Las Vegas strip this week that challenge Amazon Web Services’ position in the cloud market — and the perpetrator is competitor IBM. As an estimated 9,000 IT pros have come to Las Vegas for AWS re:Invent, Amazon’s second cloud conference, IBM has taken the opportunity to promote its cloud services and partnership with SoftLayer, saying that the company powers 270,000 more websites than Amazon. The ads additionally state that “The IBM cloud offerings also support 30% more of the most popular websites than anyone else in the world.”
Conference attendees have been buzzing about the ads, which have adorned shuttle buses from the hotels, have been digitally projected across the Fashion Show Mall next to Treasure Island hotel and take up small billboards in hotel hallways, including the Venetian, where AWS re:Invent takes place. Andy Jassy, senior vice president of AWS, addressed the ads during today’s keynote address.
“It’s creative, I’ll say that,” Jassy said. “It’s a way to jump up and down … to try to distract customers.”
No one would argue that IBM has a bigger cloud business than AWS, he added.
This ad campaign highlights how the cloud market is heating up rivalries among vendors. As the industry and the products mature, vendors are looking to rise to the top, fighting against competitors for enterprise customers and market share.
Similarly, in August, Microsoft took shots at Google on its blog, citing that the company has many “hidden costs.”
HyTrust will add encryption to its cloud security software following its acquisition of HighCloud Security this week.
HyTrust Inc. already enforces access controls at the management layer of virtual environments, so that only authorized users have access to VMs. HighCloud offers encryption of cloud workloads, with key management handled at the customer site, an important factor for security-conscious companies considering Infrastructure as a Service.
The two can already be used together, but HighCloud’s software will be integrated into HyTrust to make encryption and key management invisible to the end user, HyTrust CEO Eric Chiu wrote in a blog post about the acquisition.
Amazon Web Services has lowered the price of its second-generation standard instances by 10% across the board, continuing the downward trend of IaaS pricing.
The EC2 M3 instances, which debuted last November, triggered a price reduction of the previous generation of instances when they were launched. Now that second generation is seeing prices fall.
The AWS Blog cited two examples of the 10% price cut this morning: the m3.xlarge on-demand instance was $0.50 per hour and is now $0.45 per hour, while the m3.2xlarge on-demand instance was $1.00 per hour and is now $0.90 per hour. Reserved EC2 M3 instances are now 15% cheaper, too.
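The arithmetic behind an across-the-board percentage cut is simple enough to sketch. The prices below are the two on-demand examples cited from the AWS Blog; the function itself is just illustrative bookkeeping, not an AWS API.

```python
# Sketch of the arithmetic behind a uniform 10% price cut, using the
# on-demand hourly prices cited from the AWS Blog as the starting point.

OLD_PRICES = {"m3.xlarge": 0.50, "m3.2xlarge": 1.00}  # USD per hour

def apply_cut(prices, percent):
    """Return new hourly prices after a uniform percentage reduction."""
    return {name: round(p * (1 - percent / 100), 4)
            for name, p in prices.items()}

new_prices = apply_cut(OLD_PRICES, 10)
print(new_prices)  # {'m3.xlarge': 0.45, 'm3.2xlarge': 0.9}
```

Running it reproduces the blog's figures: $0.50 drops to $0.45 and $1.00 drops to $0.90.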
Even as Infrastructure as a Service (IaaS) providers continue to cut cloud pricing in an effort to lure new customers, it’s not clear just how many IT shops will take the bait and move away from their current cloud computing deployment plans because of a 10% price reduction. Cloud pricing is just one of the many factors that go into the service provider selection process.
There was hope among Rackspace users this week that the company’s latest acquisition of LiteStack would improve cloud provisioning times, but Rackspace officials said the technology isn’t going to be offered as a product for some time.
LiteStack Inc., the open-source hypervisor company acquired by Rackspace Inc. this week, developed technology based on Google’s Native Client that encapsulates applications rather than virtualizing individual servers. This technology, called ZeroVM, can be provisioned in less than five milliseconds, according to LiteStack’s wiki page.
IT pros who use Rackspace’s Cloud Servers Infrastructure as a Service were immediately interested in how this provisioning time could be applied to improve existing offerings, but company officials said that isn’t where the technology is headed.
Eventually, there will be Rackspace product offerings based on ZeroVM, but not for at least a year if not multiple years, according to Bret Piatt, senior director of corporate development and strategy for Rackspace.
Instead, according to Rackspace spokespeople, what Rackspace has really acquired here is the beginning of an open source community that could change the way computing is done when addressing large sets of data, such as with Hadoop. ZeroVM’s lightweight containers can be provisioned as fast as opening a browser tab, according to Van Lindberg, VP of intellectual property for Rackspace.
Lindberg said the app can be brought to the data rather than having to bring the data to the app for processing, which could also make big data analytics go much faster.
Analysts say Rackspace could also be after additional open-source programming prowess to add to its development team.
“They’re not buying the company so much as they’re picking up the software talent that created ZeroVM,” said Carl Brooks, analyst with Boston-based 451 Research.
Financial terms of the acquisition were not disclosed.
Amazon Web Services will allow users to change the size of reserved instances – a capability that is high on cloud customers’ wish lists.
Amazon Web Services (AWS) customers were left wanting more when Amazon first relaxed restrictions around reserved instance (RI) networking and geographic location last month. RIs can now be moved among availability zones and between EC2 Classic and virtual private cloud networks.
Several customers had the idea of ‘trading in’ instances within a certain total pool of resources in order to resize them – and that appears to be exactly what Amazon has done.
The AWS blog has the breakdown of compute units that can be traded between instance sizes. For example, one 8xlarge instance is equivalent to 64 small instances, so 64 small instances can be combined into one 8xlarge, or one 8xlarge can be broken up into 64 small instances.
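The underlying rule is that a reserved instance modification is allowed when the total compute footprint stays constant. Here is a minimal sketch of that bookkeeping; the 1:64 small-to-8xlarge ratio comes from the example above, while the intermediate unit values for the other sizes are illustrative assumptions, not quoted from AWS.

```python
# Sketch of the "compute footprint" bookkeeping behind RI resizing.
# The 1:64 small-to-8xlarge ratio is from the AWS blog example; the
# intermediate size values below are illustrative assumptions.

FOOTPRINT_UNITS = {"small": 1, "medium": 2, "large": 4,
                   "xlarge": 8, "2xlarge": 16, "8xlarge": 64}

def can_resize(current, target):
    """A modification is allowed when total footprint is unchanged.

    `current` and `target` map instance size names to counts.
    """
    def units(allocation):
        return sum(FOOTPRINT_UNITS[size] * count
                   for size, count in allocation.items())
    return units(current) == units(target)

# 64 small instances combine into one 8xlarge...
print(can_resize({"small": 64}, {"8xlarge": 1}))   # True
# ...but 32 smalls do not cover an 8xlarge's footprint.
print(can_resize({"small": 32}, {"8xlarge": 1}))   # False
```

The same check covers splits as well as merges, since it only compares totals on each side.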
“It’s a near perfect answer to the wish we expressed back in September,” said Nicolas Fonrose, founder of Teevity, a cloud computing monitoring software startup based in France. “I’m sure this is going to really increase RI usage since it removes something that was really painful for any AWS user.”
However, there’s still room for even more added flexibility with RIs, Fonrose pointed out. Today these modifications can only be performed within instance families (m1, m2, m3 and c1).
“There’s one thing that users are still locked to: instance families,” he said.