Microsoft’s Build conference for developers and IT operations professionals is just a few weeks away, but a close look at the session catalog reveals clues to the event’s likely highlights. Here’s a look at four key themes slated for the event.
The .NET lowdown
Build attendees will peer into the future of Microsoft’s venerable .NET software development framework. One session, “.NET Platform Overview and Roadmap,” will focus on .NET Core, the open-source, cross-platform version of .NET first released in 2016.
.NET Core 3.0 was released as a public preview in December 2018, with another update in February. This year's Build could mark the general availability of the framework update, which in previous versions focused mostly on ASP.NET web apps and apps for Universal Windows Platform-compatible devices.
.NET Core 3.0 also adds support for Windows Forms (WinForms) and Windows Presentation Foundation (WPF), the company's longstanding GUI frameworks for desktop applications, which rounds out the picture for enterprise IT shops.
Getting real on GitHub
Microsoft’s $7.5 billion acquisition of GitHub last year was a seminal event for enterprise IT, as a tech bellwether scooped up a cherished champion in the DevOps community.
Microsoft has sought to burnish its image as an open source-friendly company under CEO Satya Nadella, but the GitHub deal nonetheless roiled many users. Microsoft aims to use Build as a platform to further ease any lingering concerns, judging by one session, "Microsoft's journey to becoming an open source enterprise with GitHub."
Redmond now has more than 20,000 developers who contribute to or use open-source projects on GitHub, according to the session abstract. The talk will feature the .NET Compiler Platform team, which will discuss its use of Azure DevOps and GitHub to build software.
Doubling down on databases
Like rivals AWS and Google, Microsoft has steadily increased the number and variety of databases available on Azure. One prominent Build session, led by Rohan Kumar, corporate VP of Microsoft's data group, will discuss "Azure's price performance leadership in cloud scale analytics and innovations in Azure SQL Database, Azure Database for PostgreSQL, Azure Database for MySQL and Azure Cosmos DB."
Also on the Microsoft Build 2019 agenda is Azure Databricks, Microsoft's managed Apache Spark offering for large-scale analytics. Coca-Cola, Paychex and ASB Bank will join Kumar onstage to discuss their use of Microsoft's data technologies. Other Build sessions home in separately on Cosmos DB, MySQL and PostgreSQL.
In one talk, Citus Data, a company Microsoft acquired in January, will provide an update on an extension for PostgreSQL that turns it into a distributed data store. PostgreSQL has increasingly become a favorite target for Oracle workload migrations, which could be a key reason behind Microsoft’s decision to buy Citus.
Countdown to containers
Application modernization is a key theme for Microsoft and rivals such as Google. The Microsoft Build 2019 agenda reflects that, with a talk titled "Take the right path: 5 ways to modernize your .NET apps with Windows Server Containers." This type of Microsoft container shares a kernel with its host and with other containers on that host, according to Microsoft documentation, and can be managed with Docker.
Microsoft also offers Hyper-V containers, which provide stronger isolation: each container runs within its own lightweight virtual machine and does not share the host's kernel. This model is geared more toward multitenant SaaS applications that require additional security measures against attackers.
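As an illustration, the choice between the two modes surfaces as a one-line setting when running Windows containers with Docker tooling. The sketch below is a hypothetical Docker Compose fragment; the service name and image are invented for the example:

```yaml
# Hypothetical docker-compose.yml for a Windows container host.
# The `isolation` key selects the container mode: "process" shares
# the host kernel (the Windows Server Containers model), while
# "hyperv" runs the container inside its own lightweight VM.
version: "2.4"
services:
  legacy-dotnet-app:           # hypothetical service name
    image: example/aspnet-app  # hypothetical image
    isolation: hyperv          # or "process" for the kernel-sharing mode
```

The application itself is unchanged either way; only the isolation boundary around it differs, which is why the Hyper-V mode suits multitenant scenarios where neighboring workloads are untrusted.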
The Windows Server container session at Build, then, likely targets Microsoft shops with a back-catalog of .NET applications that would benefit from a new deployment model. The session will cover container scenarios for on-premises, Azure and hybrid deployments and feature “lots of demos,” according to the abstract.
Microsoft Build 2019 attendees may seek more clarity on the company's longer-term container strategy, particularly for large-scale deployments. In December 2018, Microsoft said it would sunset Azure Container Service, its original managed container service, and focus development efforts on Azure Kubernetes Service, as Kubernetes has emerged as the dominant standard for container orchestration.
SAN FRANCISCO – In his biggest spotlight to date as Google Cloud CEO, Thomas Kurian played emcee rather than the star attraction.
Kurian, who recently took the helm from Diane Greene after a lengthy career at Oracle, mostly ceded the Tuesday keynote here at the Google Cloud Next show to a parade of Google executives, including Google CEO Sundar Pichai, who opened the keynote, as well as partners and customers.
That restraint suggested he did not carry over one tradition from Oracle: bombastic competition-trashing, made famous by his former boss, Oracle co-founder and CTO Larry Ellison.
In fact, Kurian didn’t mention any competitors at all, except in the context of partnerships and Google’s efforts to provide third-party software on its cloud.
Instead, the Google Cloud CEO quickly described a broad vision for the company’s future — vertical industries and a bigger partner ecosystem are two key planks — as well as steps it has already taken to serve customers more effectively.
Google Cloud will continue to expand its go-to-market efforts, not just with more salespeople but also technical staff, Kurian said. Enterprise contract frameworks, simplified pricing and other means to make doing business with the cloud provider easier are on tap as well, the Google Cloud CEO said, albeit without including many specifics.
That’s the kind of message — and overall delivery — that resonates with large enterprises as they consider their cloud strategies and technology options, including Google Cloud.
Arguably, Kurian took a few cues from his old Oracle playbook, with respect to an emphasis on verticals, for example. Oracle spent billions to acquire a slew of industry-specific application vendors over the past 10 to 15 years and moved them into its sprawling Global Business Unit corporate structure. The acquisition roll-up method is credited for Oracle’s steady gains in application revenue against the likes of SAP.
But again, the big difference with Google's vertical strategy under Kurian hews to the company's broader goal for enterprise cloud computing: to serve as a technology provider across the infrastructure, PaaS and SaaS layers that can help customers improve their operations where they stand today.
“Google has changed from AI and ML being the main mousetrap to having multiple value propositions for the enterprise,” said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.
As cloud computing continues to advance, it can be hard for IT professionals to stay up to date. Cloud conferences are the perfect way to connect with industry experts and peers to learn about new technologies, current market trends, best practices and what to expect in the future.
Here are a few cloud conferences and events for IT pros to attend throughout 2019.
Google Cloud Next ‘19
Google’s biggest event of the year invites IT professionals of all skill levels to attend hundreds of sessions, labs, demonstrations, panels, breakouts and bootcamps, as well as complete certification exams. Topics covered include Google Cloud Platform services and products, containers, serverless, machine learning and more.
Open Infrastructure Summit
April 29-May 1
This OpenStack event offers more than 300 sessions, presentations and workshops that focus on open infrastructure and related technologies. Attendees can learn more about public, private and hybrid cloud architectures, orchestration, networks, security, open source platforms and other related subjects.
AWS U.S. Summits
Chicago – May 30
Washington, DC – June 11 and 12
New York – July 11
These AWS Summits are free events that offer a place for IT professionals to become more familiar with AWS services and products. Attendees can hear from leaders, experts and customers in numerous sessions, demonstrations, workshops, labs and more.
CloudEXPO
Santa Clara, California
The 23rd International CloudEXPO invites IT professionals of all stripes to attend sessions and panels to learn more about cloud technologies and future advances. Popular topics include cloud infrastructure management, DevOps, containers, serverless, machine learning, AI and more.
HotCloud '19
The 11th USENIX Workshop on Hot Topics in Cloud Computing, known as HotCloud, brings together IT pros and researchers to discuss current and future trends in the cloud. There are sessions and workshops to help users expand their knowledge of cloud computing technologies.
Gartner Catalyst Conference
At the Gartner Catalyst Conference, analysts are available to take a technical deep dive in current trends that challenge users. IT pros have a chance to talk to experts and peers about their issues and how to solve them. Attendees can choose to follow different tracks that cover specific subjects such as cloud infrastructure, architecture and operations.
VMworld
VMworld covers numerous subjects and offers different learning tracks. For cloud, users can explore hybrid cloud infrastructure, multi-cloud operations, container technologies, networking and security. It also covers emerging trends, including IoT, edge computing and machine learning.
Oracle OpenWorld
At Oracle OpenWorld, IT pros can get hands-on experience in labs, check out product demos, listen to industry experts and more. Sessions cover cloud topics such as Oracle cloud adoption, implementation and deployment.
At this Microsoft conference, IT pros can connect with experts, explore new technology, get hands-on experience and more. With more than 700 deep-dive sessions and over 100 workshops, attendees can discover new products, skills and advancements across a wide range of subjects. Those interested in Azure can attend sessions that cover Kubernetes, cloud application security, cost optimization, Microsoft cloud services and other topics.
KubeCon and CloudNativeCon
These two conferences bring together the open source and cloud-native communities. Users and experts can attend sessions and co-located events to discuss cloud-native apps, containers, orchestration, open source technologies and more.
AWS re:Invent
AWS re:Invent is one of the largest cloud events of the year and hosts users of all experience levels. It offers a large number of breakout sessions, labs and bootcamps for different skill sets. IT pros can explore cloud advancements, current services and new AWS products.
Google’s Cloud Next 2019 conference is coming up and the event session catalog illuminates some of the top priorities for the Google Cloud Platform. Here’s a look at some notable talks slated for the show, which starts April 9 in San Francisco.
Kurian’s time to shine
Recently appointed Google Cloud CEO Thomas Kurian will deliver his first Google Cloud Next keynote. Kurian offered up some interesting tidbits at the recent Goldman Sachs Technology and Internet Conference, but expect much more about his vision for the platform at Google Cloud Next.
Too many vendor keynotes are little more than long-winded product pitches, but that’s not what Google Cloud Next’s audience wants to hear. Kurian should, and likely will, clearly articulate his vision for how customers can trust Google Cloud Platform (GCP) as a partner with a bigger role in their enterprise IT landscapes, not just a product hawker.
As for the weedier technical material, Urs Hölzle, Google’s SVP of technical infrastructure, will be on hand with his own keynote at Google Cloud Next 2019. For more technical depth, there’s also the Google I/O developer conference later this spring.
Securing its identity
Google Cloud touts AI as a strength of its cloud platform services, but security and access control is another area where it aims to outpace competitors.
To that end, a session listed in the catalog for Google Cloud Next 2019 features Mailjet, an EU company that sells email services for teams, and one of the first customers to migrate onto Google Cloud Identity for Customers and Partners (CICP).
Mailjet operates under the EU’s strict General Data Protection Regulation (GDPR) rules and runs its systems on GCP. Its user base of more than 600,000 accounts includes customers such as Microsoft and KIA Motors, according to the session abstract. Mailjet CTO Pierre Puchois will describe the company’s effort to adopt CICP, which Google unveiled in October.
Another rip and replace revealed
The Telegraph, one of the U.K.’s largest news organizations, recently outlined its plans to move the vast majority of its IT resources to GCP — and away from AWS.
These stories are increasingly common as the major cloud providers’ offerings mature, and customers decide that another provider is a better strategic fit for their needs.
At Google Cloud Next, representatives from game engine developer Unity Technologies will describe the AWS-to-GCP migration the company completed last year in six months with the help of Google's professional services organization. The complex project "involved seven different workstreams in five different countries with lots of dependencies among each other," according to the session abstract.
Cloud vendors love to spotlight these all-in cloud customers, but Google especially wants them onstage at Google Cloud Next 2019. Despite GCP’s growth in market share, it’s still a small percentage of the overall installed base. Prospects and companies with modest investments in the cloud platform want to learn about the successes of those who took a much bigger plunge.
Great together: G Suite and GCP
GCP's set of infrastructure and app-development services competes with AWS, Azure and other public clouds. But G Suite, Google's set of collaboration and email tools, also falls under the auspices of Google's cloud business.
This has been the case for years, but Google apparently still feels a need to educate customers on how GCP and G Suite can work together. One session at Google Cloud Next 2019 describes how GCP services can be used to analyze G Suite data, and showcases a set of sample applications that tie together G Suite and GCP.
For Google, G Suite can play a similar role as Office and Office 365 have for Microsoft and Azure. Connections between the applications that rank-and-file workers live in every day and a provider's underlying infrastructure and services set the stage for both stickiness and better cohesion among processes. Google wins with the former, and customers stand to gain from the latter.
Office 365 has proven to be stiff competition for G Suite, however, and that battle won't end anytime soon.
All hands in for hybrid cloud
Google recently pushed its Cloud Services Platform (CSP) into beta. The software stack for hybrid cloud scenarios is based on Google Kubernetes Engine (GKE), the managed service for container orchestration. CSP is one of Google’s responses to the demand for hybrid cloud capabilities — it also has a partnership with Nutanix — and, if the Google Cloud Next 2019 agenda is any indication, hybrid cloud will be a big focus for Google in 2019.
There are already 40 sessions tagged as hybrid cloud scheduled for the event, and more could be added in the weeks ahead. Topics on tap so far include GKE, the Istio service mesh, container security and application modernization.
SAN FRANCISCO – IBM’s bold push of its Watson AI technology to the masses may produce product and services opportunities, but the democratization of AI won’t happen overnight.
Perhaps the strongest message at the IBM Think 2019 event here was the company’s “Watson anywhere” strategy to allow customers to run Watson AI services on any public cloud or in any hybrid cloud environment.
“You might call this the democratization of Watson and the effect should find a growing number of organizations leveraging IBM AI technologies across various business processes,” said Charles King, an analyst at Pund-IT in Hayward, Calif.
This “democratization” comes from IBM’s services heritage to meet customers where they are technologically. To gain an advantage in the quickening AI race, IBM is giving folks the opportunity to use Watson on any cloud they want.
Several attendees said this would likely lead their organizations to at least try out Watson as a way to add AI capabilities to their applications.
"One thing that kind of popped for me was the Watson AI thing, but also that 'There is no AI without IA [information architecture],'" said one attendee who asked not to be identified, echoing a line from IBM CEO Ginni Rometty's keynote. The idea is that AI requires machine learning, machine learning requires analytics, and analytics requires the proper data and information architecture; thus, there is no AI without IA.
Despite IBM’s AI splash, one partner said AI is of little use to the majority of enterprise developers today because it’s still nascent.
“IBM, like all the rest, is not selling you AI – not AI out of the box,” he said. “They are selling services for you to build your own stuff. And it doesn’t always turn out like you want it to. Plus Watson is expensive.”
Aside from AI, other buzz at IBM Think 2019 involved IBM’s hybrid cloud and multi-cloud strategy, approaches that more enterprises have begun to embrace.
“Almost every enterprise on the cloud has more than one provider and they all want to keep some of their stuff private,” said Dave Link, founder and CEO of ScienceLogic, a Reston, Va.-based IT monitoring system provider. “IBM has a lot of legacy stuff out in the wild and everybody cannot move it to the cloud all at once.”
The multi-cloud and hybrid cloud services opportunity is huge. Last month alone, IBM inked a $325 million services agreement to manage Juniper Networks’ infrastructure, applications and IT services and facilitate their move to the cloud, and a similar $260 million deal to move the Bank of the Philippines to hybrid cloud and hasten the bank’s digital transformation.
Multi-cloud, particularly multi-cloud management, was on the mind of an IT director who works for the Saudi government. His organization has an array of IT systems, gobs of data and multiple cloud providers, and he needs to bring that all together and manage it.
Yet, he identified the IBM Garage Method as the big draw for him at the show because it provides a methodology to digitally transform an organization.
“My organization is in the early phases of a digital transformation and I believe that this method can be of value to us as we modernize our systems and software,” he said.
Thomas Kurian has kept a low profile since he replaced Diane Greene as CEO of Google’s cloud division — until now.
The longtime former Oracle executive spoke publicly for the first time as Google’s top cloud exec on Feb. 12 at the Goldman Sachs Technology and Internet conference in San Francisco. He touched on a number of topics of interest regarding Google Cloud plans for customers, prospects and the cloud market at large. Here are six key takeaways from Kurian’s appearance.
A customer love-fest
Google Cloud has extremely loyal customers and very low churn, Kurian said. But Google wants to do more to help customers and prospects, through a new program it calls Customers for Life.
“This is a very well-defined methodology within [Google Cloud] and with our partners to track the customer, help them on board, derive business value and then convert them into advocacy so they can talk about their happiness with our cloud,” he said. “Nothing speaks more importantly to a global CIO than another global CIO.”
Such efforts have many precedents in the technology industry. Kurian is no doubt familiar with Oracle’s Customer Success program, and IBM has long pushed a reference sales model. But any additional structure that can be put in place around customers’ post-sale experience is a good thing.
Google’s tech won him over
At Oracle, Kurian led all product development and reported directly to executive chairman Larry Ellison, but reportedly clashed with Ellison over the direction of Oracle's cloud business. Initially, Oracle said Kurian had taken a leave of absence, but it soon became clear he had left the company entirely, and news of his hire at Google Cloud Platform (GCP) emerged.
An executive with Kurian’s background probably had his pick of opportunities, but he chose Google for its technical prowess.
“I talked to some of the largest companies and asked, ‘Why did you choose Google?’ Uniformly, the feedback I got was, ‘The technology,'” he said. By that, Kurian meant not just Google’s software, but also its data center designs and operational history to run reliably at massive scale. “When was the last time you remember Google search being down?” Kurian added.
Resiliency is a good talking point for Kurian to emphasize in talks with customers and prospects about Google Cloud plans. It remains to be seen how clearly he can make this a differentiator compared to the likes of AWS and Microsoft Azure, both of which make continuous improvements around availability, reliability and scale.
Souped-up sales teams
Historically, Google has focused on digital-native companies that started on modern cloud platforms with few or no legacy IT systems. They are sophisticated tech buyers, often with developers who lead the way.
Google has quadrupled its direct enterprise sales force for GCP in the past three years, but Kurian tacitly admitted that it must do more to compete credibly for enterprise business. To that end, Google plans to target a set of verticals, including financial services, telecommunications, retail and health care. It has hired salespeople who can speak the language of these industries and who have the background to sell GCP into large, traditional companies, Kurian said.
Meanwhile, many global systems integrators work with GCP on industry-specific products and services. “They have a lot more domain expertise in the application business process layer. We have expertise in the infrastructure layer,” he said.
Eyeing the cost equation
Price wars have shaped — and scarred — the cloud industry landscape for years, although the battleground has been quiet of late. Kurian is confident in GCP’s technology chops, but it also must compete on cost against AWS and Azure. “We are very focused as we grow on not just having the best technology but also the lowest cost delivery vehicle for that technology,” Kurian said.
Power usage is a huge factor in the ultimate cost to deliver cloud services, and to determine pricing for customers. GCP will continue to make enormous efforts to improve its data center efficiency, including the use of AI to optimize load balancing and other factors, to potentially lower price for customers, Kurian said.
Google Cloud plans target hybrid and multi-cloud
Kurian touched briefly on where Google Cloud stands in the market for hybrid and multi-cloud computing. “[Multi- and hybrid cloud] is an important factor for every Fortune 500 CIO,” he said.
Google Cloud Services Platform can be run inside a customer's data center, on GCP as well as on other providers' clouds. GCP has been fairly quiet about Cloud Services Platform of late, but it's a safe bet there will be more noise in the run-up to Google's Next conference in April.
Since the start, GCP has positioned Cloud Services Platform as partner-led, particularly for on-premises scenarios, and Kurian said nothing to indicate this plan had changed. That's probably a smart play given the lineup of large-scale vendors that shuttered their public clouds due to lack of business, such as Cisco and HPE, but stand ready to be service providers for hybrid and multi-cloud scenarios.
GCP will go up the stack
There has been ample speculation that Kurian will pursue major acquisitions to scale up the GCP business. Nothing has been announced, but GCP could get into more types of SaaS applications along with continued advances in IaaS and PaaS.
GCP has options here in the form of marketing and contact center software, and this will expand into more applications over time, Kurian said.
The IBM Think conference will provide insight into Big Blue’s current and future cloud agenda, which now focuses on private and multi-cloud scenarios.
Like many legacy IT vendors, IBM has struggled to make gains in public cloud. “IBM’s DNA is really into selling large enterprise and government deals. Providing a little widget and a few gigs of storage are not the tasks people think of when they think of IBM,” said Hyoun Park, founder and CEO of Amalgam Insights.
Private cloud projects involve higher capital expenditures and overall expense, which appeals to companies like IBM, with its armies of consultants. It has historically derived significant income from services in general.
Here’s a look at some of the more notable cloud-related sessions at IBM Think that certainly tie into those ideas.
Setting the hybrid and multicloud vision
One of five IBM Think keynotes, titled "Hybrid, Multicloud by Design, Accelerating the Enterprise Cloud Journey," will feature executives from IBM, VMware CEO Pat Gelsinger and Red Hat EVP Paul Cormier, as well as customers such as ExxonMobil. The session abstract describes how the ideal foundation for cloud computing is built on open standards and workloads that can move freely between platforms.
ExxonMobil is one of SAP’s largest and longest-standing customers, having used the software since the 1980s. This combination of speakers suggests the session will examine how the oil giant has adapted its massive SAP implementation to the modern cloud world.
Separately, IBM likely will cement its public commitment to open source – which got a jolt with its pending $34 billion acquisition of Red Hat – and as an enabler of multi-cloud and cloud migration, Park said. IBM and Red Hat executives may talk in general terms about their vision for the combined company, given the deal isn’t expected to close until later this year. Don’t anticipate concrete details about product roadmaps, launches or rationalizations.
WebSphere in the cloud
One big opportunity for both IBM and its customers is the migration of on-premises Java applications built on the venerable WebSphere application server to the cloud. The IBM Think session "Application Modernization to Migrate and Manage Existing Applications in a Cloud World" will home in on this topic, which is a pain point for many customers awash in legacy WebSphere apps.
WebSphere is available on IBM's cloud, and has been since the service was still named Bluemix. It's also an option on other cloud services, such as AWS. A couple of years ago, IBM talked about how customers should shift WebSphere applications to a microservices architecture as part of a move to the cloud. This year's Think session will likely cover the same ground, but attendees can expect an evolved message from IBM based on two more years of customer feedback and internal research.
One dashboard to rule them all
IBM will discuss its new Application Center for Cloud, a dashboard that provides visual tools for administrators to manage all their applications, whether hosted on premises or in the cloud. Application Center appears to be an expansion of IBM’s Cloud App Management service. Overall, it’s another measure by IBM to position itself as a neutral cloud infrastructure vendor — something rivals HPE and Cisco both try to do as well.
Drive all that data to the sky
A Think session titled “What’s New, What’s Coming and Client Journeys to the IBM Cloud” will discuss how IBM has improved Mass Data Migration’s data workload compatibility, share customer stories and preview features set for release this year.
While data migration may not be the sexiest topic on the IBM Think cloud agenda, it’s a reality every customer that makes the move must address.
Just in time for Java
IBM has long supported Eclipse OpenJ9, the open source Java virtual machine, which has gained currency in the cloud.
Java developers will likely want to take a big sip of a Think session on OpenJ9, which will discuss IBM's new just-in-time (JIT) compiler as a service. JIT compilation is a longstanding JVM feature that translates Java bytecode into native machine instructions at runtime to speed up execution; the idea behind a compiler as a service is to offload that compilation work from individual JVMs.
Expect to learn more about how IBM will implement JIT at the conference. The session will be led by long-time IBMer Tim Ellison, who has contributed to Java for more than 20 years and now serves as IBM’s Java CTO.
Java developers could also get insights into how IBM plans to manage its increased influence over the stewardship of enterprise Java alongside Red Hat.
Executive appointments happen all the time in the enterprise tech industry, but some have the potential to transform an organization. Oracle veteran Thomas Kurian hired as the next CEO of Google Cloud could be a game-changer for Google’s enterprise appeal.
Kurian, who starts work at Google in January, spent more than 20 years at Oracle and ultimately oversaw all product development. He reportedly left Oracle over a disagreement on the company’s willingness to make more of its products available on rival clouds.
His long tenure is a notable achievement in a corporate culture as cutthroat as Oracle’s. It also gave Kurian a wealth of knowledge about the technological needs and desires of large enterprises.
VMware co-founder Diane Greene arrived at Google Cloud three years ago with the intent to build up its enterprise business. Under her leadership, Google Cloud picked up high-profile enterprise customers, including Colgate and the New York Times, as well as forged partnerships with enterprise-centric vendors like SAP and Cisco. She also presided over a number of acquisitions, such as Apigee for API management and Bitium for SaaS single sign-on, which laid some groundwork for future enterprise wins and gave clues toward Google's longer-term plans.
Kurian is well-poised to build on Greene’s accomplishments. He led development for hundreds of products and presided over the company’s move to the cloud at all three layers of the stack: SaaS, PaaS, and IaaS. Oracle may lag in market share on the last couple of fronts, but its evolution away from a predominantly on-premises software vendor is undeniable.
Kurian also found ways to form and preserve a team of seasoned lieutenants, such as database chief Andrew Mendelsohn and applications head Steve Miranda, both Oracle veterans of more than 20 years, another testament to Kurian's ability to build a stable product development organization at the leadership level. It's a safe bet that Kurian will recruit former Oracle troops to help build out Google Cloud's enterprise products and sales organization.
So what will Kurian do at Google? We can make a few safe predictions.
First, Google Cloud has largely been focused on selling plumbing: IaaS and PaaS, with applications in third place. Expect this to change with Kurian in charge.
Kurian likely won’t push hard for organic application development — he probably still has nightmares from Oracle Fusion Applications, an ambitious plan to combine a superset of capabilities from JD Edwards, E-Business Suite, Siebel and PeopleSoft into a next-generation suite. Fusion was unveiled in 2005, but the first apps didn’t become generally available until 2011.
Google needs sticky major-category apps such as ERP, HCM and CRM to make major enterprise inroads, and Kurian knows this, but he won’t want to go through a Fusion-like experience again. Expect Kurian to pursue acquisitions in application software. Potential targets include Plex for ERP, Ultimate Software for HCM, and Pegasystems for CRM and marketing.
Some question whether Kurian, steeped in the Oracle tea, will thrive in Google's culture, but it would be a mistake to count him out. Kurian holds business and computer science degrees from Princeton and Stanford; he can speak the language of both developers and residents of the C-suite. As with any new role, Kurian will face a period of adjustment, but the smart money rides on a successful run for him at Google.
With Halloween just around the corner, the usual seasonal mix of ghosts, goblins and ghouls is top-of-mind for many.
But if you asked cloud admins what their biggest scare was so far in 2018, you’d likely get a very different response.
From security breaches to data center outages that went bump in the night, here are three events this year that sent a chill down cloud admins’ spines.
Meltdown, Spectre cause major security fears
Not even a full month into 2018, cloud admins got their first big scare of the year: the Meltdown and Spectre security flaws.
The two vulnerabilities affected Intel, AMD and ARM chips, which power a wide range of computers, smartphones and servers – including servers within the data centers of major cloud computing providers, such as AWS, Azure and Google.
To quell customers' fears, these providers moved quickly to issue statements and implement the necessary patches and updates to protect the data hosted on their infrastructure. AWS, for example, issued a statement the first week of January that said, aside from a single-digit percentage, all of its EC2 instances were already protected, with the remaining ones expected to be safe in a matter of hours.
Ultimately, while cloud admins still had to perform some updates on their own, the providers’ swift response to implement patches and take other steps against the vulnerabilities went a long way to mitigate the risks.
Service outages leave users in the dark
No one likes when the lights go out — and cloud admins definitely don’t want to be left in the dark when it comes to their IT infrastructure. But during a stormy night in September, a data center outage took nearly three dozen Azure cloud services offline, including the Azure health status page itself.
While Microsoft restored power later that day, some of its cloud services remained non-operational for up to two days. To prevent a similar scare from happening again, Microsoft has looked into more resilient hardware designs, better automated recovery and expanded redundancy.
That said, Azure users weren't the only ones haunted by an outage. In May, Google Compute Engine (GCE) network issues affected services such as GCE VMs, Google Kubernetes Engine, Cloud VPN and Cloud Private Interconnect in the us-east4 region for an hour. And admins still remember early 2017, when Amazon S3 in the US-East-1 region went offline for several hours due to human error during debugging.
The cold chill of compliance requirements
In May, the GDPR compliance deadline loomed over cloud admins whose businesses had a presence in Europe. Many were definitely spooked by the regulation's new compliance requirements, and feared facing hefty fines if they failed to meet them.
Admins needed to assess their cloud environments, find appropriate compliance tools and hire staff, as needed, to meet GDPR requirements – all by the May 25 deadline. The good news? Any company that felt like a straggler in meeting those requirements certainly wasn't alone.
Microsoft has shed more light on last week's major Azure outage, in reports that generally confirm what everyone already knew: a storm near the Azure South Central US region knocked cooling systems offline and shut down systems that took days to recover because of issues with the cloud platform's architecture.
But the reports also illuminate the scope of systems damage, the infrastructure dependencies that crippled the systems, and plans to increase resiliency for customers.
What we know now
The storm damaged hardware. Multiple voltage surges and sags in the utility power supply caused part of the data center to transfer to generator power, and knocked the cooling system offline despite the existence of surge protectors, according to Microsoft’s overall root-cause analysis (RCA). A thermal buffer in the cooling system eventually depleted and temperatures quickly rose, which triggered the automated systems shutdown.
But that shutdown wasn’t soon enough. “A significant number of storage servers were damaged, as well as a small number of network devices and power units,” according to the company.
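The sequence the RCA describes can be sketched as a toy timeline. This is purely illustrative, with made-up numbers that do not come from Microsoft's report: once a thermal buffer depletes, temperatures can pass hardware-damage levels before the automated shutdown threshold trips.

```python
# Toy model (illustrative only; all figures are invented, not from the RCA):
# cooling fails, a thermal buffer absorbs heat for a while, then heat builds
# until an automated shutdown trips -- possibly after damage has begun.
def timeline(buffer_min, rise_per_min, start, damage_temp, shutdown_temp):
    temp, events = start, []
    for minute in range(1, 60):
        if minute > buffer_min:            # buffer depleted; heat builds
            temp += rise_per_min
        if temp >= damage_temp and "damage" not in dict(events):
            events.append(("damage", minute))
        if temp >= shutdown_temp:
            events.append(("shutdown", minute))
            break
    return events

events = timeline(buffer_min=10, rise_per_min=2.0, start=25.0,
                  damage_temp=45.0, shutdown_temp=55.0)
```

With these (hypothetical) parameters, damage begins minutes before the shutdown fires, which is consistent with Microsoft's finding that the shutdown "wasn't soon enough."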
Microsoft will now look for more environmentally resilient storage hardware designs, and try to improve its software to help automate and accelerate recovery efforts.
Microsoft wants more zone redundancy. Earlier this year Microsoft introduced Azure Availability Zones, defined as one or more physical data centers in a region with independent power, cooling and networking. AWS and Google already broadly offer these zones, and Azure provides zone-redundant storage in some regions, but not in South Central US.
For Visual Studio Team Services (VSTS), this was the worst outage in its seven-year history, according to the team's postmortem, written by Buck Hodges, VSTS director of engineering. VSTS customers are hosted in 10 regions globally, including the affected one, and many of those regions don't have availability zones. Going forward, Microsoft will enable VSTS to use availability zones and move the service to whatever regions support them, though it won't move out of geographic regions where customers have specific data sovereignty requirements.
Service dependencies hurt everyone. Various Azure infrastructure and systems dependencies harmed services outside the region and slowed recovery efforts:
- The Azure South Central region is the primary site for Azure Service Manager (ASM), which customers typically use for classic resource types. ASM does not support automatic failover, so ASM requests everywhere experienced higher latencies and failures.
- Authentication traffic from Azure Active Directory automatically routed to other regions, which triggered throttling mechanisms and created latency and timeouts for customers in other regions.
- Many Azure regions depend on services in VSTS, which led to slowdowns and inaccessibility for several related services.
- Dependencies on Azure Active Directory and platform services affected Application Insights, according to the group’s postmortem.
Microsoft will review these ASM dependencies, and determine how to migrate services to Azure Resource Manager APIs.
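The throttling that rerouted Azure Active Directory traffic tripped is a standard defensive pattern. As a generic illustration only (this is not Azure's actual mechanism), a token-bucket limiter admits a burst up to capacity, then rejects requests faster than its refill rate:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (illustrative, not Azure's design)."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request throttled

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]  # sudden burst of 8 requests
```

When a whole region's traffic suddenly lands on another region, it looks exactly like such a burst, so healthy regions start rejecting requests too, which is how latency and timeouts spread beyond the failed region.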
Time to rethink replication options? The VSTS team further explained its failover options: wait for the primary to recover, or fail over to a read-only backup copy of the data. The latter would restore access sooner but risk data loss, and users of services such as Git, Team Foundation Version Control and Build would still be unable to check in, save or deploy code.
Synchronous replication ideally prevents data loss in failovers, but in practice it's hard to do: all services involved must be ready to commit data and respond at any point in time, and that's not possible, the company said.
Lessons learned? Microsoft said it will reexamine asynchronous replication, and explore active geo-replication for Azure SQL and Azure Storage to asynchronously write data to primary and secondary regions and keep a copy ready for failover.
The VSTS team also will explore how to let customers choose a recovery method based on whether they prioritize faster recovery and productivity over potential loss of data. The system would indicate whether the secondary copy is up to date, and any divergence would be reconciled manually once the primary data center is back up and running.
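The trade-off the VSTS team describes can be sketched in a few lines. This is a simplified toy model, not Azure's or VSTS's design: with asynchronous replication, the secondary trails the primary by some number of writes, so failing over restores service at the cost of exactly those trailing writes.

```python
# Toy model (assumption: heavily simplified, not Azure's implementation)
# of the two recovery options: wait for the primary, or fail over to an
# asynchronously replicated secondary that lags behind.
class Replica:
    def __init__(self, lag):
        self.lag = lag    # number of recent writes not yet copied over
        self.data = []

def replicate(primary, replica):
    # Async replication: the replica holds everything except the last
    # `lag` writes, which were still in flight when the primary failed.
    replica.data = primary[:len(primary) - replica.lag]

primary = [f"commit-{i}" for i in range(10)]
secondary = Replica(lag=2)
replicate(primary, secondary)

# Failing over recovers service immediately, but these writes are gone
# until the primary comes back and the copies are reconciled.
lost = primary[len(secondary.data):]
```

Surfacing `lost` (or at least whether it is empty) to customers is, in effect, what the team means by letting users choose recovery speed over data completeness.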