Google’s Cloud Next 2019 conference is coming up and the event session catalog illuminates some of the top priorities for the Google Cloud Platform. Here’s a look at some notable talks slated for the show, which starts April 9 in San Francisco.
Kurian’s time to shine
Recently appointed Google Cloud CEO Thomas Kurian will deliver his first Google Cloud Next keynote. Kurian offered up some interesting tidbits at the recent Goldman Sachs Technology and Internet Conference, but expect much more about his vision for the platform at Google Cloud Next.
Too many vendor keynotes are little more than long-winded product pitches, but that’s not what Google Cloud Next’s audience wants to hear. Kurian should, and likely will, clearly articulate his vision for how customers can trust Google Cloud Platform (GCP) as a partner with a bigger role in their enterprise IT landscapes, not just a product hawker.
As for the weedier technical material, Urs Hölzle, Google’s SVP of technical infrastructure, will be on hand with his own keynote at Google Cloud Next 2019. For more technical depth, there’s also the Google I/O developer conference later this spring.
Securing its identity
Google Cloud touts AI as a strength of its cloud platform services, but security and access control represent another area where it can outpace competitors.
To that end, a session listed in the catalog for Google Cloud Next 2019 features Mailjet, an EU company that sells email services for teams, and one of the first customers to migrate onto Google Cloud Identity for Customers and Partners (CICP).
Mailjet operates under the EU’s strict General Data Protection Regulation (GDPR) rules and runs its systems on GCP. Its user base of more than 600,000 accounts includes customers such as Microsoft and KIA Motors, according to the session abstract. Mailjet CTO Pierre Puchois will describe the company’s effort to adopt CICP, which Google unveiled in October.
Another rip and replace revealed
The Telegraph, one of the U.K.’s largest news organizations, recently outlined its plans to move the vast majority of its IT resources to GCP — and away from AWS.
These stories are increasingly common as the major cloud providers’ offerings mature, and customers decide that another provider is a better strategic fit for their needs.
At Google Cloud Next, representatives from game engine developer Unity Technologies will describe the AWS-to-GCP migration the company undertook last year, completed in six months with help from Google’s professional services organization. The complex project “involved seven different workstreams in five different countries with lots of dependencies among each other,” according to the session abstract.
Cloud vendors love to spotlight these all-in cloud customers, but Google especially wants them onstage at Google Cloud Next 2019. Despite GCP’s growth in market share, it’s still a small percentage of the overall installed base. Prospects and companies with modest investments in the cloud platform want to learn about the successes of those who took a much bigger plunge.
Great together: G Suite and GCP
GCP’s set of infrastructure and app-development services competes with AWS, Azure and other public clouds. But G Suite, Google’s set of collaboration and email tools, also falls under the auspices of Google’s cloud business.
This has been the case for years, but Google apparently still feels a need to educate customers on how GCP and G Suite can work together. One session at Google Cloud Next 2019 describes how GCP services can be used to analyze G Suite data, and showcases a set of sample applications that tie together G Suite and GCP.
For Google, G Suite can play a role similar to the one Office and Office 365 play for Microsoft and Azure. Connections between the applications that rank-and-file workers live in every day and a provider’s underlying infrastructure and services set the stage for both stickiness and better cohesion among processes. Google wins with the former, and customers stand to gain from the latter.
Office 365 has proven to be stiff competition for G Suite, however, and that battle won’t be settled anytime soon.
All hands in for hybrid cloud
Google recently pushed its Cloud Services Platform (CSP) into beta. The software stack for hybrid cloud scenarios is based on Google Kubernetes Engine (GKE), the managed service for container orchestration. CSP is one of Google’s responses to the demand for hybrid cloud capabilities — it also has a partnership with Nutanix — and, if the Google Cloud Next 2019 agenda is any indication, hybrid cloud will be a big focus for Google in 2019.
There are already 40 sessions tagged as hybrid cloud scheduled for the event, and more could be added in the weeks ahead. Topics on tap so far include GKE, the Istio service mesh, container security and application modernization.
SAN FRANCISCO – IBM’s bold push of its Watson AI technology to the masses may produce product and services opportunities, but the democratization of AI won’t happen overnight.
Perhaps the strongest message at the IBM Think 2019 event here was the company’s “Watson anywhere” strategy to allow customers to run Watson AI services on any public cloud or in any hybrid cloud environment.
“You might call this the democratization of Watson and the effect should find a growing number of organizations leveraging IBM AI technologies across various business processes,” said Charles King, an analyst at Pund-IT in Hayward, Calif.
This “democratization” reflects IBM’s services heritage of meeting customers where they are technologically. To gain an advantage in the quickening AI race, IBM is giving customers the option to run Watson on any cloud they want.
Several attendees said this flexibility would likely lead their organizations to at least try Watson as a way to add AI capabilities to their applications.
“One thing that kind of popped for me was the Watson AI thing, but also that ‘there is no AI without IA [information architecture],’” said one attendee who asked not to be identified, echoing a line from IBM CEO Ginni Rometty’s keynote. The idea is that machine learning is a prerequisite to AI, analytics is a prerequisite to machine learning, and analytics requires the proper data and information architecture — thus, there is no AI without IA.
Despite IBM’s AI splash, one partner said AI is of little use to the majority of enterprise developers today because it’s still nascent.
“IBM, like all the rest, is not selling you AI – not AI out of the box,” he said. “They are selling services for you to build your own stuff. And it doesn’t always turn out like you want it to. Plus Watson is expensive.”
Aside from AI, other buzz at IBM Think 2019 involved IBM’s hybrid cloud and multi-cloud strategy, approaches that more enterprises have begun to embrace.
“Almost every enterprise on the cloud has more than one provider and they all want to keep some of their stuff private,” said Dave Link, founder and CEO of ScienceLogic, a Reston, Va.-based IT monitoring system provider. “IBM has a lot of legacy stuff out in the wild and everybody cannot move it to the cloud all at once.”
The multi-cloud and hybrid cloud services opportunity is huge. Last month alone, IBM inked a $325 million services agreement to manage Juniper Networks’ infrastructure, applications and IT services and facilitate their move to the cloud, and a similar $260 million deal to move Bank of the Philippine Islands to hybrid cloud and hasten the bank’s digital transformation.
Multi-cloud, particularly multi-cloud management, was on the mind of an IT director who works for the Saudi government. His organization has an array of IT systems, gobs of data and multiple cloud providers, and he needs to bring that all together and manage it.
Yet, he identified the IBM Garage Method as the big draw for him at the show because it provides a methodology to digitally transform an organization.
“My organization is in the early phases of a digital transformation and I believe that this method can be of value to us as we modernize our systems and software,” he said.
Thomas Kurian has kept a low profile since he replaced Diane Greene as CEO of Google’s cloud division — until now.
The longtime former Oracle executive spoke publicly for the first time as Google’s top cloud exec on Feb. 12 at the Goldman Sachs Technology and Internet conference in San Francisco. He touched on a number of topics of interest regarding Google Cloud plans for customers, prospects and the cloud market at large. Here are six key takeaways from Kurian’s appearance.
A customer love-fest
Google Cloud has extremely loyal customers and very low churn, Kurian said. But Google wants to do more to help customers and prospects, through a new program it calls Customers for Life.
“This is a very well-defined methodology within [Google Cloud] and with our partners to track the customer, help them on board, derive business value and then convert them into advocacy so they can talk about their happiness with our cloud,” he said. “Nothing speaks more importantly to a global CIO than another global CIO.”
Such efforts have many precedents in the technology industry. Kurian is no doubt familiar with Oracle’s Customer Success program, and IBM has long pushed a reference sales model. But any additional structure that can be put in place around customers’ post-sale experience is a good thing.
Google’s tech won him over
At Oracle, Kurian led all product development and reported directly to executive chairman Larry Ellison, but reportedly clashed with Ellison over the direction of Oracle’s cloud business. Initially, Oracle said Kurian had taken a leave of absence, but it soon became clear he had left the company entirely, and news of his hire at Google Cloud Platform (GCP) emerged.
An executive with Kurian’s background probably had his pick of opportunities, but he chose Google for its technical prowess.
“I talked to some of the largest companies and asked, ‘Why did you choose Google?’ Uniformly, the feedback I got was, ‘The technology,'” he said. By that, Kurian meant not just Google’s software, but also its data center designs and operational history to run reliably at massive scale. “When was the last time you remember Google search being down?” Kurian added.
Resiliency is a good talking point for Kurian to emphasize in talks with customers and prospects about Google Cloud plans. It remains to be seen how clearly he can make this a differentiator compared to the likes of AWS and Microsoft Azure, both of which make continuous improvements around availability, reliability and scale.
Souped-up sales teams
Historically, Google has focused on digital-native companies that started on modern cloud platforms with little to no legacy IT. They are sophisticated tech buyers, often with developers who lead the way.
Google has quadrupled its direct enterprise sales force for GCP in the past three years, but Kurian tacitly admitted that it must do more to compete credibly for enterprise business. To that end, Google plans to target a set of verticals, including financial services, telecommunications, retail and health care. It has hired salespeople that can speak the language of these industries and have the background to sell GCP into large, traditional companies, Kurian said.
Meanwhile, many global systems integrators work with GCP on industry-specific products and services. “They have a lot more domain expertise in the application business process layer. We have expertise in the infrastructure layer,” he said.
Eyeing the cost equation
Price wars have shaped — and scarred — the cloud industry landscape for years, although the battleground has been quiet of late. Kurian is confident in GCP’s technology chops, but it also must compete on cost against AWS and Azure. “We are very focused as we grow on not just having the best technology but also the lowest cost delivery vehicle for that technology,” Kurian said.
Power usage is a huge factor in the ultimate cost to deliver cloud services, and to determine pricing for customers. GCP will continue to make enormous efforts to improve its data center efficiency, including the use of AI to optimize load balancing and other factors, to potentially lower price for customers, Kurian said.
Google Cloud plans target hybrid and multi-cloud
Kurian touched briefly on where Google Cloud stands in the market for hybrid and multi-cloud computing. “[Multi- and hybrid cloud] is an important factor for every Fortune 500 CIO,” he said.
Google Cloud Services Platform can be run inside a customer’s data center, on GCP and on other providers’ clouds. GCP has been fairly quiet about Cloud Services Platform of late, but it’s a safe bet there’ll be more noise in the run-up to Google’s Next conference in April.
From the start, GCP has positioned Cloud Services Platform as partner-led, particularly for on-premises scenarios, and Kurian said nothing to indicate this plan has changed. That’s probably a smart play given that large-scale vendors such as Cisco and HPE, which shuttered their public clouds for lack of business, stand ready to be service providers for hybrid and multi-cloud scenarios.
GCP will go up the stack
There has been ample speculation that Kurian will pursue major acquisitions to scale up the GCP business. Nothing has been announced, but GCP could get into more types of SaaS applications along with continued advances in IaaS and PaaS.
GCP has options here in the form of marketing and contact center software, and this will expand into more applications over time, Kurian said.
The IBM Think conference will provide insight into Big Blue’s current and future cloud agenda, which now focuses on private and multi-cloud scenarios.
Like many legacy IT vendors, IBM has struggled to make gains in public cloud. “IBM’s DNA is really into selling large enterprise and government deals. Providing a little widget and a few gigs of storage are not the tasks people think of when they think of IBM,” said Hyoun Park, founder and CEO of Amalgam Insights.
Private cloud projects involve higher capital expenditures and overall expense, which appeals to companies like IBM, with its armies of consultants. It has historically derived significant income from services in general.
Here’s a look at some of the more notable cloud-related sessions at IBM Think that certainly tie into those ideas.
Setting the hybrid and multicloud vision
One of five IBM Think keynotes, titled “Hybrid, Multicloud by Design, Accelerating the Enterprise Cloud Journey,” will feature executives from IBM, VMware CEO Pat Gelsinger and Red Hat EVP Paul Cormier, as well as customers such as ExxonMobil. The session abstract describes how the ideal foundation for cloud computing is built on open standards and workloads that can move freely between platforms.
ExxonMobil is one of SAP’s largest and longest-standing customers, having used the software since the 1980s. This combination of speakers suggests the session will examine how the oil giant has adapted its massive SAP implementation to the modern cloud world.
Separately, IBM likely will cement its public commitment to open source – which got a jolt with its pending $34 billion acquisition of Red Hat – and as an enabler of multi-cloud and cloud migration, Park said. IBM and Red Hat executives may talk in general terms about their vision for the combined company, given the deal isn’t expected to close until later this year. Don’t anticipate concrete details about product roadmaps, launches or rationalizations.
WebSphere in the cloud
One big opportunity for both IBM and its customers is the migration of on-premises Java applications built on the venerable WebSphere application server to the cloud. The IBM Think session “Application Modernization to Migrate and Manage Existing Applications in a Cloud World” will home in on this topic, which is an irritation for many customers awash in legacy WebSphere apps.
WebSphere is available on IBM’s cloud, and has been since the service was still named Bluemix. It’s also an option on other cloud services, such as AWS. A couple of years ago, IBM talked about how customers should shift WebSphere applications to a microservices architecture as part of a move to the cloud. It’s likely that this year’s Think session will cover the same ground but attendees can expect an evolved message from IBM based on two more years of customer feedback and internal research.
One dashboard to rule them all
IBM will discuss its new Application Center for Cloud, a dashboard that provides visual tools for administrators to manage all their applications, whether hosted on premises or in the cloud. Application Center appears to be an expansion of IBM’s Cloud App Management service. Overall, it’s another measure by IBM to position itself as a neutral cloud infrastructure vendor — something rivals HPE and Cisco both try to do as well.
Drive all that data to the sky
A Think session titled “What’s New, What’s Coming and Client Journeys to the IBM Cloud” will discuss how IBM has improved Mass Data Migration’s data workload compatibility, share customer stories and preview features set for release this year.
While data migration may not be the sexiest topic on the IBM Think cloud agenda, it’s a reality every customer that makes the move must address.
Just in time for Java
IBM has long supported Eclipse OpenJ9, the open source Java virtual machine, which has gained currency in the cloud.
Java developers will likely want to take a big sip of a Think session on OpenJ9, which will discuss IBM’s new just-in-time (JIT) compiler as a service. JIT compilation is a longstanding feature of Java that translates bytecode into native machine instructions at runtime, which speeds up frequently executed code.
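As a concrete sketch of that runtime behavior (illustrative only; the class name and thresholds below are not from IBM's session): a method invoked in a tight loop crosses the JVM's compile threshold and is translated from bytecode to native code while the program runs.

```java
// Hypothetical illustration of JIT compilation. Run with
//   java -XX:+PrintCompilation Hot   (HotSpot)
//   java -Xjit:verbose Hot           (OpenJ9)
// to watch the JVM promote sumOfSquares from interpreted bytecode
// to compiled native code once it becomes "hot."
public class Hot {
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i; // tight loop: a classic JIT target
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Repeated calls push the method past the JIT's invocation
        // threshold, so later iterations execute as native code.
        for (int i = 0; i < 20_000; i++) {
            result = sumOfSquares(1_000);
        }
        System.out.println(result); // 333833500
    }
}
```

The JIT-as-a-service idea moves that compilation work out of the application’s JVM into a separate process, trimming each JVM’s CPU and memory overhead.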
Expect to learn more about how IBM will implement JIT at the conference. The session will be led by long-time IBMer Tim Ellison, who has contributed to Java for more than 20 years and now serves as IBM’s Java CTO.
Java developers could also get insights into how IBM plans to manage its increased influence over the stewardship of enterprise Java alongside Red Hat.
Executive appointments happen all the time in the enterprise tech industry, but some have the potential to transform an organization. The hiring of Oracle veteran Thomas Kurian as the next CEO of Google Cloud could be a game-changer for Google’s enterprise appeal.
Kurian, who starts work at Google in January, spent more than 20 years at Oracle and ultimately oversaw all product development. He reportedly left Oracle over a disagreement on the company’s willingness to make more of its products available on rival clouds.
His long tenure is a notable achievement in a corporate culture as cutthroat as Oracle’s. It also gave Kurian a wealth of knowledge about the technological needs and desires of large enterprises.
VMware founder Diane Greene arrived at Google Cloud three years ago with the intent to build up its enterprise business. Under her leadership, Google Cloud picked up high-profile enterprise customers, including Colgate and the New York Times, as well as forged partnerships with enterprise-centric vendors like SAP and Cisco. She also presided over a number of acquisitions, such as Apigee for API management and Bitium for SaaS single sign-on, which laid some groundwork for future enterprise wins and gave clues toward Google’s longer-term plans.
Kurian is well-poised to build on Greene’s accomplishments. He led development for hundreds of products and presided over the company’s move to the cloud at all three layers of the stack: SaaS, PaaS, and IaaS. Oracle may lag in market share on the last couple of fronts, but its evolution away from a predominantly on-premises software vendor is undeniable.
Kurian also found ways to form and preserve a team of seasoned lieutenants, such as database chief Andrew Mendelsohn and applications head Steve Miranda, both Oracle veterans of well over 20 years — another testament to Kurian’s ability to build a stable product development organization at the leadership level. It’s a safe bet that Kurian will recruit former Oracle troops to help build out Google Cloud’s enterprise products and sales organization.
So what will Kurian do at Google? We can make a few safe predictions.
First, Google Cloud has largely been focused on selling plumbing: IaaS and PaaS, with applications in third place. Expect this to change with Kurian in charge.
Kurian likely won’t push hard for organic application development — he probably still has nightmares from Oracle Fusion Applications, an ambitious plan to combine a superset of capabilities from JD Edwards, E-Business Suite, Siebel and PeopleSoft into a next-generation suite. Fusion was unveiled in 2005, but the first apps didn’t become generally available until 2011.
Google needs sticky major-category apps such as ERP, HCM and CRM to make major enterprise inroads, and Kurian knows this, but he won’t want to go through a Fusion-like experience again. Expect Kurian to pursue acquisitions in application software. Potential targets include Plex for ERP, Ultimate Software for HCM, and Pegasystems for CRM and marketing.
Some speculate whether Kurian, steeped in the Oracle tea, will thrive in Google’s culture, but it would be a mistake to count him out. Kurian holds business and computer science degrees from Princeton and Stanford; he can speak the language of both developers and the C-suite. As with any new role, there will be a period of adjustment, but the smart money rides on a successful run for him at Google.
With Halloween just around the corner, the usual seasonal mix of ghosts, goblins and ghouls is top-of-mind for many.
But if you asked cloud admins what their biggest scare was so far in 2018, you’d likely get a very different response.
From security breaches to data center outages that went bump in the night, here are three events this year that sent a chill down cloud admins’ spines.
Meltdown, Spectre cause major security fears
Not even a full month into 2018, cloud admins got their first big scare of the year: the Meltdown and Spectre security flaws.
The two vulnerabilities affected Intel, AMD and ARM chips, which power a wide range of computers, smartphones and servers – including servers within the data centers of major cloud computing providers, such as AWS, Azure and Google.
To quell customers’ fears, these providers moved quickly to issue statements, and implement the necessary patches and updates to protect the data hosted on their infrastructure. AWS, for example, issued a statement the first week of January that said, aside from a single-digit percentage, all of its EC2 instances were already protected, with the remaining ones expected to be safe in a matter of hours.
Ultimately, while cloud admins still had to perform some updates on their own, the providers’ swift response to implement patches and take other steps against the vulnerabilities went a long way to mitigate the risks.
Service outages leave users in the dark
No one likes when the lights go out — and cloud admins definitely don’t want to be left in the dark when it comes to their IT infrastructure. But during a stormy night in September, a data center outage took nearly three dozen Azure cloud services offline, including the Azure health status page itself.
While Microsoft restored power later that day, some of its cloud services remained non-operational up to two days later. To prevent a similar scare from happening again, Microsoft has looked into more resilient hardware designs, better automated recovery and expanded redundancy.
That said, Azure users weren’t the only ones haunted by an outage this year. In May, Google Compute Engine (GCE) network issues affected services such as GCE VMs, Google Kubernetes Engine, Cloud VPN and Cloud Private Interconnect in the us-east4 region for an hour. And in March of last year, AWS’ S3 service in the US-East-1 region went offline for several hours due to human error during debugging.
The cold chill of compliance requirements
In May, the GDPR compliance deadline loomed over cloud admins whose businesses have a presence in Europe. Many were definitely spooked by the regulation’s new compliance requirements, and feared facing hefty fines if they failed to meet them.
Admins needed to assess their cloud environments, find appropriate compliance tools and hire staff, as needed, to meet GDPR requirements – all by the May 25 deadline. The good news? Any company that felt like a straggler in meeting those requirements certainly wasn’t alone.
Microsoft has shed more light on last week’s major Azure outage, in reports that generally confirm what everyone already knew: a storm near the Azure South Central US region knocked cooling systems offline and shut down systems that took days to recover because of issues with the cloud platform’s architecture.
But the reports also illuminate the scope of systems damage, the infrastructure dependencies that crippled the systems, and plans to increase resiliency for customers.
What we know now
The storm damaged hardware. Multiple voltage surges and sags in the utility power supply caused part of the data center to transfer to generator power, and knocked the cooling system offline despite the existence of surge protectors, according to Microsoft’s overall root-cause analysis (RCA). A thermal buffer in the cooling system eventually depleted and temperatures quickly rose, which triggered the automated systems shutdown.
But that shutdown wasn’t soon enough. “A significant number of storage servers were damaged, as well as a small number of network devices and power units,” according to the company.
Microsoft will now look for more environmentally resilient storage hardware designs, and try to improve its software to help automate and accelerate recovery efforts.
Microsoft wants more zone redundancy. Earlier this year Microsoft introduced Azure Availability Zones, defined as one or more physical data centers in a region with independent power, cooling and networking. AWS and Google already broadly offer these zones, and Azure provides zone-redundant storage in some regions, but not in South Central US.
For Visual Studio Team Services (VSTS), this was the worst outage in its seven-year history, according to the team’s postmortem, written by Buck Hodges, VSTS director of engineering. Ten regions, including the affected one, host VSTS customers globally, and many of those regions don’t have availability zones. Going forward, Microsoft will enable VSTS to use availability zones and move the service into regions that support them, though it won’t move out of geographic regions where customers have specific data sovereignty requirements.
Service dependencies hurt everyone. Various Azure infrastructure and systems dependencies harmed services outside the region and slowed recovery efforts:
- The Azure South Central region is the primary site for Azure Service Manager (ASM), which customers typically use for classic resource types. ASM does not support automatic failover, so ASM requests everywhere experienced higher latencies and failures.
- Authentication traffic from Azure Active Directory automatically routed to other regions, which triggered throttling mechanisms and created latency and timeouts for customers in other regions.
- Many Azure regions depend on services in VSTS, which led to slowdowns and inaccessibility for several related services.
- Dependencies on Azure Active Directory and platform services affected Application Insights, according to the group’s postmortem.
Microsoft will review these ASM dependencies, and determine how to migrate services to Azure Resource Manager APIs.
Time to rethink replication options? The VSTS team further explained its failover options: wait for recovery, or fail over to a read-only backup copy. The latter would cause latency and data loss, and users of services such as Git, Team Foundation Version Control and Build would be unable to check in, save or deploy code.
Synchronous replication ideally prevents data loss in failovers, but in practice it’s hard to do. All services involved must be ready to commit data and respond at any point in time, and that’s not possible, the company said.
Lessons learned? Microsoft said it will reexamine asynchronous replication, and explore active geo-replication for Azure SQL and Azure Storage to asynchronously write data to primary and secondary regions and keep a copy ready for failover.
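The asynchronous-replication tradeoff described above can be sketched in a toy model (my illustration, not Microsoft's implementation): the primary acknowledges writes immediately, a background step copies them to the secondary with some lag, and a failover to the secondary silently drops whatever had not yet been copied.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of asynchronous replication and the data-loss window
// it leaves open during an unplanned failover.
public class AsyncReplicaDemo {
    final List<String> primary = new ArrayList<>();
    final List<String> secondary = new ArrayList<>();
    int replicated = 0; // how far the secondary has caught up

    void write(String record) {
        // Acknowledged as soon as the primary has it: low write latency,
        // but the secondary hasn't seen the record yet.
        primary.add(record);
    }

    void replicateOne() {
        // Background catch-up; in real systems this lags behind writes.
        if (replicated < primary.size()) {
            secondary.add(primary.get(replicated++));
        }
    }

    List<String> failover() {
        // Primary lost: only what the secondary copied survives.
        return secondary;
    }

    public static void main(String[] args) {
        AsyncReplicaDemo store = new AsyncReplicaDemo();
        store.write("commit-1");
        store.write("commit-2");
        store.write("commit-3");
        store.replicateOne(); // replication lag: only commit-1 copied
        System.out.println(store.failover()); // [commit-1]
    }
}
```

Synchronous replication closes that window by refusing to acknowledge a write until the secondary also has it, which is exactly the always-ready-to-commit requirement the company calls impractical across all of its services.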
The VSTS team also will explore ways to let customers choose a recovery method based on whether they prioritize faster recovery and productivity over potential data loss. The system would indicate whether the secondary copy is up to date, and data would be reconciled manually once the primary data center is back up and running.
Google made a splash when it pledged to push its cloud software into corporate data centers. Just don’t expect to buy it directly from Google.
The container-based Cloud Services Platform was the marquee rollout at the vendor’s annual user conference this week. The Google hybrid cloud framework aims to help corporations address the considerable challenges of being stuck between the on-premises and cloud worlds. But it wasn’t until later in the week that Google acknowledged it won’t have much involvement in handling the non-cloud side of the equation for customers.
Later this year, customers will be able to deploy a unified architecture that spans from their private data centers using Google Cloud Services Platform, to Google’s public cloud. The two most important pieces to this puzzle are managed versions of the Google-led open source projects Kubernetes and Istio. Kubernetes has become the de facto orchestrator for the wildly popular container architectures, and Google expects Istio, a service mesh developed by the same team, to solve some of Kubernetes’s shortcomings in areas such as security and audit logging. These technologies are designed for modern, cloud-based software development, but Google says they’ll work just as well wrapped around and connected to legacy systems that sit inside enterprises’ facilities.
In a press conference with reporters, Urs Hölzle, senior vice president of technical infrastructure at Google, described Istio and Kubernetes as a joint ecosystem akin to the Android mobile OS Google developed, with products packaged and sold on top. Cloud Services Platform has proprietary Google software, but he referred to it repeatedly as an ecosystem and talked about the importance of partnerships like the one Google has with Cisco.
When asked to clarify the go-to-market strategy for the Google hybrid cloud, Hölzle said the company would largely take a back seat with deployments outside its cloud.
“We’re not an on premises company itself, so [Cloud Services Platform] is really an enabler,” he said. “We get a license fee, but it’s not our goal to displace the relationships we have, so I don’t expect us to be the direct seller and supporter for most accounts.”
The service will be primarily partner-led for two reasons, Hölzle said. ISVs and other potential partners already have trusted on premises relationships with enterprises. And Google plans to keep its focus on compatibility among environments, so the service can act as an onramp to future cloud adoption, and corporations can move at their own comfortable pace.
There could be exceptions, such as if a large-enough customer asked Google to take the lead, Hölzle said. But enterprises should expect to see even more third-party vendors join Cisco as part of the Cloud Services Platform ecosystem over time.
That partner-led approach may have been conveniently left out – or, at best, underplayed – by Google this week, but Hölzle’s points aren’t unreasonable. And in fairness, this is just the alpha release. But if the Google hybrid cloud approach takes off, it will only engender more support from third parties – something Google has moved aggressively to improve and is a critical piece to any public cloud’s success.
And in some ways, it’s not a unique approach. The most prominent hybrid cloud partnership on the market merges technology from VMware and AWS, but as far as customers are concerned, that service is wholly sold and supported by VMware.
But that’s just one vendor – albeit one with a massive install base in traditional enterprises’ data centers. Google has pitched its cloud as a more open alternative to the competition, and it appears that approach will extend to hybrid, too, with plans to cast a wider net to get ISVs and large legacy vendors to buy into its approach. Time will tell if those ISVs, and more importantly, their customers, will do just that.
Walmart’s cloud deployment strategy takes a big turn to Microsoft Azure, but it won’t leave OpenStack completely in the dust.
A five-year deal to make Microsoft Azure the preferred cloud provider for Walmart, the largest corporation in the world, isn't a shocker, but it does raise questions about Walmart's future use of OpenStack. The retailer has been one of OpenStack's biggest cheerleaders and has spent millions of dollars on a massive private cloud deployment of the software inside its own data centers. The technology, originally seen as an open alternative to public cloud platforms such as AWS and backed by some of the biggest names in IT, has lost nearly all of its momentum in recent years and failed to keep pace with the hyperscale public cloud providers.
OpenStack is notoriously difficult to operate, which is part of why it never truly took off. Some companies, including Walmart, have reported success, though it helps to possess a fleet of engineers to build and maintain it. Abandonment by a company with the size and cachet of Walmart would all but serve as a death knell for OpenStack, but that doesn't appear to be the case just yet; Walmart's OpenStack deployment isn't going anywhere, according to a company spokesperson.
“In no way does this take away from the work we’ve done there,” the spokesperson said. “Clearly we’ve invested a lot there from a time and financial perspective.”
Walmart will continue to contribute to the OpenStack project and use the software for its private cloud, but the deal with Microsoft adds flexibility and agility to the company's hybrid cloud strategy, said the spokesperson, who declined to be identified. Walmart will rely on Microsoft to burst workloads to the public cloud, and will utilize a range of Microsoft cloud tools across its various brands, including Azure and Microsoft 365. Large parts of Walmart.com and Samsclub.com will move to Azure, while Walmart will use IoT tools to control energy consumption in its facilities and implement machine learning to improve supply chain logistics. All told, the companies will work together to migrate hundreds of existing applications, Microsoft said in a blog post.
Walmart’s cloud strategy: Anything but AWS?
It’s a major win for Microsoft, if not entirely shocking. AWS is the largest provider in the market, but parent company Amazon.com happens to be a huge competitor for Walmart’s retail business. Some retailers have been wary of supporting Amazon through its IT arm, and reports last year indicated Walmart pressured its retail partners to get off AWS.
And Walmart already had a relationship with Azure — online retailer Jet.com, which Walmart acquired in 2016, has hosted its infrastructure on Azure since its inception.
Azure may be Walmart’s official preferred cloud provider, but the company leaves open the possibility to use other cloud platforms when appropriate.
“We obviously are going to continue to look at ways to partner with everyone and anyone that we think will help us be more agile for our customers,” the spokesperson said.
Even as Walmart underscores its commitment to OpenStack for its private cloud, however, it remains to be seen how much of that agility will come from OpenStack going forward.
Google Cloud Platform (GCP) has become a more robust and reliable public cloud in recent years but still has nowhere near the enterprise mindshare afforded to fellow hyperscale platforms AWS and Microsoft Azure. Google’s Cloud Next conference later this month, only the second large-scale cloud conference hosted by the company, is its best chance yet to change those impressions and make GCP a larger part of enterprises’ IT strategies.
Google is in the same place AWS was earlier this decade: popular among developers, but regarded with trepidation by the enterprise IT market. AWS reversed those fears through outreach and additional technical and contractual safeguards, but broader perceptions didn’t truly change until Capital One, GE and other big-name corporations stamped their approval at AWS’ annual user conference in 2015.
Around the same time, and with virtually no penetration in the enterprise market, Google hired Diane Greene to head its cloud unit. Since then, GCP has spent billions to expand its footprint, bolster its portfolio of services and the depth of its features, and get closer to feature parity with AWS and Azure. More recently, Google also beefed up its sales and support teams and partnerships. Nonetheless, it still has a ways to go before it's mentioned in the same breath as the other two in terms of market share.
For example, Google says it now brings in more than $1 billion in cloud revenues every quarter. Comparison of companies’ cloud businesses is an imperfect metric, but the disparity is stark: AWS generated $5.4 billion in its first quarter this year. For a sense of how much of that revenue comes from the enterprise market, here’s a hint from AWS CEO Andy Jassy at a press conference last November:
“People who mistakenly believe that most of AWS is startups are not being straight with themselves. You can’t have an $18 billion business just on startups.”
Google must accelerate its momentum with corporate customers if it truly wants to compete in the public cloud market, and the Next session catalog reveals a parade of household name brands to make that pitch. Home Depot, Citi, The New York Times and Spotify will share their GCP experiences, as will some newcomers, including AWS darling Netflix. Google needs those companies to reinforce its claims that it’s ready for enterprise prime time, but it also must show its feature set is worthy of selection over the competition. For that, it must answer lingering questions about bumps in the road.
It scored a coup last year when it hired Intel alum Diane Bryant as COO of its cloud division, but she abruptly left this month amid speculation she may return to Intel as CEO. Moreover, an incident last month assigned users duplicate IP addresses, and they were asked to delete and reboot their VMs as a workaround; the month before, one of GCP’s East Coast regions suffered an hour-long outage.
Bryant’s departure hasn’t damaged customer perception so far, said Deepak Mohan, an IDC analyst. And cloud outages are inevitable for any platform; in fact, GCP outages have dropped considerably compared to two years ago.
Google can help its enterprise case with more hybrid cloud deployments, Mohan said. Google has promising partnerships with Nutanix and Cisco, but it could really use a deal with a company that has a major on-premises footprint. (The Cisco partnership is more about modern app development and Kubernetes than linkage between legacy systems and the public cloud.)
Companies also prioritize innovation when they move to the cloud, and GCP has caught the industry’s attention, particularly around machine learning and big data.
“They have strength in terms of momentum, and innovation and TCO,” Mohan said. “It does look like they’re putting the right pieces in place and there doesn’t seem to be any hard obstacles to them moving up in the market.”
Is Google ready to ride that momentum to greater success and a larger chunk of the market? The impression it makes on potential customers at the Next show will go a long way toward providing an answer.