The Troposphere


October 29, 2018  4:57 PM

Three events that spooked cloud admins in 2018

Kathleen Casey

With Halloween just around the corner, the usual seasonal mix of ghosts, goblins and ghouls is top-of-mind for many.

But if you asked cloud admins what their biggest scare was so far in 2018, you’d likely get a very different response.

From security breaches to data center outages that went bump in the night, here are three events this year that sent a chill down cloud admins’ spines.

Meltdown, Spectre cause major security fears

Not even a full month into 2018, cloud admins got their first big scare of the year: the Meltdown and Spectre security flaws.

The two vulnerabilities affected Intel, AMD and ARM chips, which power a wide range of computers, smartphones and servers – including servers within the data centers of major cloud computing providers, such as AWS, Azure and Google.

To quell customers’ fears, these providers moved quickly to issue statements, and implement the necessary patches and updates to protect the data hosted on their infrastructure. AWS, for example, issued a statement the first week of January that said, aside from a single-digit percentage, all of its EC2 instances were already protected, with the remaining ones expected to be safe in a matter of hours.

Ultimately, while cloud admins still had to perform some updates on their own, the providers’ swift response to implement patches and take other steps against the vulnerabilities went a long way to mitigate the risks.
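Even after the provider-side patches, admins could verify mitigations on their own Linux instances, since patched kernels expose status through sysfs. Below is a minimal sketch, assuming a Linux guest whose kernel provides the /sys/devices/system/cpu/vulnerabilities interface (entries and wording vary by kernel version):

```python
#!/usr/bin/env python3
"""Print CPU vulnerability mitigation status on a Linux guest.

Assumes a kernel new enough to expose /sys/devices/system/cpu/vulnerabilities;
older kernels and non-Linux guests won't have this directory.
"""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations():
    if not VULN_DIR.is_dir():
        print("Kernel does not expose vulnerability status; check kernel and microcode versions instead.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()  # e.g. "Mitigation: PTI" or "Vulnerable"
        print(f"{entry.name:20} {status}")

if __name__ == "__main__":
    report_mitigations()
```

On a fully patched instance, each entry should report a mitigation rather than "Vulnerable."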

Service outages leave users in the dark

No one likes when the lights go out — and cloud admins definitely don’t want to be left in the dark when it comes to their IT infrastructure. But during a stormy night in September, a data center outage took nearly three dozen Azure cloud services offline, including the Azure health status page itself.

While Microsoft restored power later that day, some of its cloud services remained non-operational up to two days later. To prevent a similar scare from happening again, Microsoft has looked into more resilient hardware designs, better automated recovery and expanded redundancy.

That said, Azure users weren’t the only ones haunted by an outage this year. In May, Google Compute Engine (GCE) network issues affected services such as GCE VMs, Google Kubernetes Engine, Cloud VPN and Cloud Private Interconnect in the us-east4 region for an hour. Additionally, in March, AWS’ S3 in the U.S.-East-1 region went offline for several hours due to human error during debugging.

The cold chill of compliance requirements

In May, the GDPR compliance deadline loomed over cloud admins whose businesses have a presence in Europe. Many were definitely spooked by the regulation’s new compliance requirements, and feared facing hefty fines if they failed to meet them.

Admins needed to assess their cloud environments, find appropriate compliance tools and hire staff, as needed, to meet GDPR requirements – all by the May 25 deadline. The good news? Any company that felt like a straggler in terms of meeting those requirements certainly wasn’t alone.

September 13, 2018  9:02 PM

Learn from these Microsoft Azure outage postmortem takeaways

James Montgomery

Microsoft shed more light on last week’s major Azure outage in postmortem reports that generally confirm what everyone already knew – a storm near the Azure South Central US region knocked cooling systems offline and shut down systems that took days to recover because of issues with the cloud platform’s architecture.

But the reports also illuminate the scope of systems damage, the infrastructure dependencies that crippled the systems, and plans to increase resiliency for customers.

What we know now

The storm damaged hardware. Multiple voltage surges and sags in the utility power supply caused part of the data center to transfer to generator power, and knocked the cooling system offline despite the existence of surge protectors, according to Microsoft’s overall root-cause analysis (RCA). A thermal buffer in the cooling system eventually depleted and temperatures quickly rose, which triggered the automated systems shutdown.

But that shutdown wasn’t soon enough. “A significant number of storage servers were damaged, as well as a small number of network devices and power units,” according to the company.

Microsoft will now look for more environmentally resilient storage hardware designs, and try to improve its software to help automate and accelerate recovery efforts.

Microsoft wants more zone redundancy. Earlier this year Microsoft introduced Azure Availability Zones, defined as one or more physical data centers in a region with independent power, cooling and networking. AWS and Google already broadly offer these zones, and Azure provides zone-redundant storage in some regions, but not in South Central US.
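For comparison, zone-redundant storage is something customers opt into when a resource is created. Here is a minimal sketch using the azure-mgmt-storage management client; the subscription, resource group, account name and region are illustrative placeholders, and the Standard_ZRS SKU is only accepted in regions that actually offer availability zones:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Illustrative values; substitute a real subscription, resource group and account name.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "example-rg"
ACCOUNT_NAME = "examplezrsaccount"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Request zone-redundant storage (ZRS), which spreads replicas across
# availability zones in the region rather than a single data center.
poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {
        "location": "eastus2",          # a region that offers availability zones
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},
    },
)
account = poller.result()
print(account.name, account.sku.name)
```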

For Visual Studio Team Services (VSTS), this was the worst outage in its seven-year history, according to the team’s postmortem, written by Buck Hodges, VSTS director of engineering. Ten regions, including the affected one, globally host VSTS customers, and many of those don’t have availability zones. Going forward, Microsoft will enable VSTS to use availability zones and move to whatever regions support them, though the service won’t move out of geographic regions where customers have specific data sovereignty requirements.

Service dependencies hurt everyone. Various Azure infrastructure and systems dependencies harmed services outside the region and slowed recovery efforts:

  • The Azure South Central region is the primary site for Azure Service Manager (ASM), which customers typically use for classic resource types. ASM does not support automatic failover, so ASM requests everywhere experienced higher latencies and failures.
  • Authentication traffic from Azure Active Directory automatically routed to other regions, which triggered throttling mechanisms and created latency and timeouts for customers in other regions.
  • Many Azure regions depend on services in VSTS, which led to slowdowns and inaccessibility for several related services.
  • Dependencies on Azure Active Directory and platform services affected Application Insights, according to the group’s postmortem.

Microsoft will review these ASM dependencies, and determine how to migrate services to Azure Resource Manager APIs.

Time to rethink replication options? The VSTS team further explained failover options: wait for recovery, or access data from a read-only backup copy. The latter option would cause latency and data loss, and users of services such as Git, Team Foundation Version Control and Build would be unable to check in, save or deploy code.

Synchronous replication ideally prevents data loss in failovers, but in practice it’s hard to do. All services involved must be ready to commit data and respond at any point in time, and that’s not possible, the company said.

Lessons learned? Microsoft said it will reexamine asynchronous replication, and explore active geo-replication for Azure SQL and Azure Storage to asynchronously write data to primary and secondary regions and keep a copy ready for failover.

The VSTS team also will explore how to let customers choose a recovery method based on whether they prioritize faster recovery and productivity over potential loss of data. The system would indicate if the secondary copy is up to date and manually reconcile once the primary data center is back up and running.
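As a rough illustration of that trade-off, the sketch below is purely conceptual — none of these classes correspond to an Azure API — and shows an asynchronously replicated store that can report whether its secondary copy is current, so an operator can choose between waiting for the primary and accepting possible data loss on failover:

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """A toy region-level copy of the data; not a real Azure service."""
    region: str
    records: dict = field(default_factory=dict)
    last_applied_seq: int = 0

class AsyncReplicatedStore:
    """Writes commit on the primary immediately; the secondary catches up later."""

    def __init__(self, primary_region: str, secondary_region: str):
        self.primary = Replica(primary_region)
        self.secondary = Replica(secondary_region)
        self.seq = 0
        self.primary_available = True

    def write(self, key, value):
        if not self.primary_available:
            raise RuntimeError("primary down: wait for recovery or fail over")
        self.seq += 1
        self.primary.records[key] = value
        self.primary.last_applied_seq = self.seq
        # Asynchronous: replication to the secondary happens later, so a
        # failover at this moment could lose the most recent writes.

    def replicate_once(self):
        """Ship everything the secondary is missing (simulates the async pump)."""
        self.secondary.records = dict(self.primary.records)
        self.secondary.last_applied_seq = self.primary.last_applied_seq

    def secondary_is_current(self) -> bool:
        """The freshness indicator the postmortem suggests surfacing to customers."""
        return self.secondary.last_applied_seq == self.seq

    def fail_over(self, accept_data_loss: bool) -> Replica:
        if not self.secondary_is_current() and not accept_data_loss:
            raise RuntimeError("secondary is stale; waiting for primary recovery instead")
        return self.secondary  # read-only copy, possibly missing recent writes
```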


July 27, 2018  8:34 PM

Want the new Google hybrid cloud? You’ll need a middleman

Trevor Jones

Google made a splash when it pledged to push its cloud software into corporate data centers. Just don’t expect to buy it directly from Google.

The container-based Cloud Services Platform was the marquee rollout at the vendor’s annual user conference this week. The Google hybrid cloud framework aims to help corporations address the considerable challenges from being stuck between the on premises and cloud worlds. But it wasn’t until later in the week that Google acknowledged it won’t have much involvement in handling the non-cloud side of the equation for customers.

Later this year, customers will be able to deploy a unified architecture that spans from their private data centers using Google Cloud Services Platform, to Google’s public cloud. The two most important pieces to this puzzle are managed versions of the Google-led open source projects Kubernetes and Istio. Kubernetes has become the de facto orchestrator for the wildly popular container architectures, and Google expects Istio, a service mesh developed by the same team, to solve some of Kubernetes’s shortcomings in areas such as security and audit logging. These technologies are designed for modern, cloud-based software development, but Google says they’ll work just as well wrapped around and connected to legacy systems that sit inside enterprises’ facilities.
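One practical upshot of standardizing on Kubernetes on both sides is that the same tooling and API calls work against an on-premises cluster and a GKE cluster; only the kubeconfig context changes. A minimal sketch with the official Kubernetes Python client, where the context names are illustrative:

```python
from kubernetes import client, config

# Context names are illustrative; they would come from your own kubeconfig.
for context in ("onprem-cluster", "gke-cluster"):
    config.load_kube_config(context=context)
    core = client.CoreV1Api()
    services = core.list_service_for_all_namespaces()
    # The same call works regardless of where the cluster physically runs.
    print(f"{context}: {len(services.items)} services")
```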

In a press conference with reporters, Urs Hölzle, senior vice president of technical infrastructure at Google, described Istio and Kubernetes as a joint ecosystem akin to the Android mobile OS Google developed, with products packaged and sold on top. Cloud Services Platform has proprietary Google software, but he referred to it repeatedly as an ecosystem and talked about the importance of partnerships like the one Google has with Cisco.

When asked to clarify the go-to-market strategy for the Google hybrid cloud, Hölzle said the company would largely take a back seat with deployments outside its cloud.

“We’re not an on premises company itself, so [Cloud Services Platform] is really an enabler,” he said. “We get a license fee, but it’s not our goal to displace the relationships we have, so I don’t expect us to be the direct seller and supporter for most accounts.”

The service will be primarily partner-led for two reasons, Hölzle said. ISVs and other potential partners already have trusted on premises relationships with enterprises. And Google plans to keep its focus on compatibility among environments, so the service can act as an on-ramp to future cloud adoption, and corporations can move at their own comfortable pace.

There could be exceptions, such as if a large-enough customer asked Google to take the lead, Hölzle said. But enterprises should expect to see even more third-party vendors join Cisco as part of the Cloud Services Platform ecosystem over time.

That partner-led approach may have been conveniently left out – or, at best, underplayed – by Google this week, but Hölzle’s points aren’t unreasonable. And in fairness, this is just the alpha release. But if the Google hybrid cloud approach takes off, it will only engender more support from third parties – an area Google has moved aggressively to improve, and a critical piece of any public cloud’s success.

And in some ways, it’s not a unique approach. The most prominent hybrid cloud partnership on the market merges technology from VMware and AWS, but as far as customers are concerned, that service is wholly sold and supported by VMware.

But that’s just one vendor – albeit one with a massive install base in traditional enterprises’ data centers. Google has pitched its cloud as a more open alternative to the competition, and it appears that approach will extend to hybrid, too, with plans to cast a wider net to get ISVs and large legacy vendors to buy into its approach. Time will tell if those ISVs, and more importantly, their customers, will do just that.


July 17, 2018  6:31 PM

Walmart’s cloud strategy sticks by OpenStack, despite Azure migration

Trevor Jones

Walmart’s cloud deployment strategy takes a big turn to Microsoft Azure, but it won’t leave OpenStack completely in the dust.

A five-year deal to make Microsoft Azure the preferred cloud provider for Walmart, the largest corporation in the world, isn’t a shocker, but it does raise questions about Walmart’s future use of OpenStack. The retailer has been one of OpenStack’s biggest cheerleaders and has spent millions of dollars on a massive private cloud deployment of the software inside its own data centers. The technology, originally seen as an open alternative to public cloud platforms such as AWS and backed by some of the biggest names in IT, has lost nearly all of its momentum in recent years and failed to keep pace with the hyperscale public cloud providers.

OpenStack is notoriously difficult to operate, which is part of why it never truly took off. Some companies, including Walmart, have reported success, though it helps to possess a fleet of engineers to build and maintain it. Abandonment by a company with the size and cachet of Walmart would all but serve as the death knell for OpenStack, but that doesn’t appear to be the case just yet, as the retailer’s OpenStack deployment isn’t going anywhere, according to a company spokesperson.

“In no way does this take away from the work we’ve done there,” the spokesperson said. “Clearly we’ve invested a lot there from a time and financial perspective.”

Walmart will continue to contribute to the OpenStack project and use the software for its private cloud, but the deal with Microsoft adds flexibility and agility to the company’s hybrid cloud strategy, said the spokesperson, who declined to be identified. Walmart will rely on Microsoft to burst workloads to the public cloud, and will utilize a range of Microsoft cloud tools across its various brands, including Azure and Microsoft 365. Large parts of Walmart.com and Samsclub.com will move to Azure, while Walmart will use IoT tools to control energy consumption in its facilities, and implement machine learning to improve supply chain logistics. All told, the companies will work together to migrate hundreds of existing applications, Microsoft said in a blog post.

Walmart’s cloud strategy: Anything but AWS?

It’s a major win for Microsoft, if not entirely shocking. AWS is the largest provider in the market, but parent company Amazon.com happens to be a huge competitor for Walmart’s retail business. Some retailers have been wary of supporting Amazon through its IT arm, and reports last year indicated Walmart pressured its retail partners to get off AWS.

And Walmart already had a relationship with Azure —  online retailer Jet.com, which Walmart acquired in 2016, has hosted its infrastructure on Azure since its inception.

Azure may be Walmart’s official preferred cloud provider, but the company leaves open the possibility to use other cloud platforms when appropriate.

“We obviously are going to continue to look at ways to partner with everyone and anyone that we think will help us be more agile for our customers,” the spokesperson said.

Even as Walmart underscores its commitment to OpenStack for its private cloud, however, it remains to be seen how much of that agility will come from OpenStack going forward.


July 12, 2018  8:37 PM

Google needs to win enterprise confidence at cloud conference

Trevor Jones

Google Cloud Platform (GCP) has become a more robust and reliable public cloud in recent years but still has nowhere near the enterprise mindshare afforded to fellow hyperscale platforms AWS and Microsoft Azure. Google’s Cloud Next conference later this month, only the second large-scale cloud conference hosted by the company, is its best chance yet to change those impressions and make GCP a larger part of enterprises’ IT strategy.

Google is in the same place AWS was earlier this decade: popular among developers, but viewed with trepidation by the enterprise IT market. AWS overcame those fears through outreach and additional technical and contractual safeguards, but broader perceptions didn’t truly change until Capital One, GE and other big-name corporations stamped their approval at AWS’ annual user conference in 2015.

Around the same time, and with virtually no penetration in the enterprise market, Google hired Diane Greene to head its cloud unit. Since then, GCP has spent billions to expand its footprint, bolster its portfolio of services and the depth of its features, and get closer to feature parity with AWS and Azure. More recently, Google also beefed up its sales and support teams and partnerships. Nonetheless, it still has a ways to go to be mentioned in the same breath as the other two in terms of market share.

For example, Google says it now brings in more than $1 billion in cloud revenues every quarter. Comparison of companies’ cloud businesses is an imperfect metric, but the disparity is stark: AWS generated $5.4 billion in its first quarter this year. For a sense of how much of that revenue comes from the enterprise market, here’s a hint from AWS CEO Andy Jassy at a press conference last November:

“People who mistakenly believe that most of AWS is startups are not being straight with themselves. You can’t have an $18 billion business just on startups.”

Google must accelerate its momentum with corporate customers if it truly wants to compete in the public cloud market, and the Next session catalog reveals a parade of household name brands to make that pitch. Home Depot, Citi, The New York Times and Spotify will share their GCP experiences, as will some newcomers, including AWS darling Netflix. Google needs those companies to reinforce its claims that it’s ready for enterprise prime time, but it also must show its feature set is worthy of selection over the competition. For that, it must answer lingering questions about bumps in the road.

It scored a coup last year when it hired Intel alum Diane Bryant as COO of its cloud division, but she abruptly left this month amid speculation she may return to Intel as CEO. Moreover, an incident last month gave users duplicate IP addresses, with a workaround that asked them to delete and reboot their VMs, and the month before, one of GCP’s East Coast regions suffered an hour-long outage.

Bryant’s departure hasn’t damaged customer perception so far, said Deepak Mohan, an IDC analyst. And cloud outages are inevitable for any platform; in fact, GCP outages have dropped considerably compared to two years ago.

Google can help its enterprise case with more hybrid cloud deployments, Mohan said. Google has positive deals with Nutanix and Cisco, but it could really use a deal with a company with a major on-premises footprint. (The Cisco partnership is more about modern app development and Kubernetes than linkage between legacy systems and the public cloud.)

Companies also prioritize innovation when they move to the cloud, and GCP has caught the industry’s attention, particularly around machine learning and big data.

“They have strength in terms of momentum, and innovation and TCO,” Mohan said. “It does look like they’re putting the right pieces in place and there doesn’t seem to be any hard obstacles to them moving up in the market.”

Is Google ready to ride that momentum to greater success and a larger chunk of the market? Its impressions on potential customers at the Next show will go a long way to provide an answer.


June 28, 2018  7:52 PM

Cloud conferences to pencil in for the second half of 2018

Kathleen Casey

With the popularity of hybrid and multi-cloud models, enterprises continue to seek new tools and skillsets to fold into their IT strategy. Cloud conferences, summits and events are a great way to discover the latest market trends, learn about in-demand technologies and expand your knowledge.

Here are a few cloud conferences and events, all in the second half of 2018, for IT pros to attend.

IEEE Cloud 2018

July 2-7
San Francisco

The Institute of Electrical and Electronics Engineers will hold its International Conference on Cloud Computing. The event brings researchers and industry specialists together to discuss cloud computing best practices and advancements. IT pros can attend panel discussions and sessions that explore different cloud technologies.

HotCloud ’18

July 9
Boston

The 10th USENIX Workshop on Hot Topics in Cloud Computing, known as HotCloud, hosts IT pros, researchers and experts to explore current trends. The conference covers multiple “hot” cloud topics, including serverless, networking and security, as well as design and deployment challenges. Attendees can take part in workshop programs presented by academic and industry experts.

AWS Global Summit

New York City, July 17
Chicago, August 2
Anaheim, August 23
Atlanta, September 13
Toronto, September 20

These free AWS Summits host users of different experience levels to discuss and learn about AWS features and products. These events include multiple technical break-out sessions, workshops, bootcamps, hands-on labs and team challenges. The summits will cover popular topics such as AI, machine learning and serverless computing.

Google Cloud Next ’18

July 24 – 26
San Francisco

Google’s three-day event invites IT professionals to take part in cloud computing demonstrations, sessions, breakouts, panels, bootcamps and more. Attendees can follow different session tracks, such as application development, cloud infrastructure and operations, IoT, AI and machine learning. IT pros can also take Google Cloud certification exams at the event.

VMworld 2018

August 26 -30
Las Vegas

At VMware’s biggest cloud infrastructure and technology event of the year, attendees can connect with peers, attend sessions, participate in hands-on labs and receive certifications. The Data Center and Cloud track focuses on public, private and hybrid cloud, as well as the development and management of cloud-native applications.

Hosting & Cloud Transformation Summit 2018

September 24 – 26
Las Vegas

Hosted by 451 Research, the Hosting & Cloud Transformation Summit offers various sessions presented by analysts and industry experts. Topics include multi- and hybrid cloud implementation, IoT and edge computing. Participants can network with other IT professionals, attend one-on-one analyst sessions and listen to expert panels.

Microsoft Ignite

September 24 – 28
Orlando

At Microsoft’s conference, IT pros can attend over 700 deep-dive sessions and over 100 workshops to expand their knowledge and gain more insights into their respective industries. For those who want to explore Microsoft’s public cloud, pre-day workshops cover multiple Azure tools and services, as well as topics such as migrations to Azure, container technology, cloud-native app development and data science.

Oracle OpenWorld

October 22 – 25
San Francisco

At this year’s Oracle OpenWorld, IT pros can dive into trending technologies, like containers, machine learning and AI. There are numerous session tracks that attendees can follow, such as integrated cloud platform, Oracle cloud infrastructure options and intelligent cloud applications. Alongside sessions, this cloud conference offers training sessions, case studies, hands-on labs and certifications.

AWS re:Invent

November 26 – 30
Las Vegas

The 7th annual AWS re:Invent conference — the largest AWS event of the year — welcomes both current users and those new to the cloud platform. There is a wide range of breakout sessions, hands-on labs and bootcamps that cater to every experience level. Attendees can take a deep dive into available services, be the first to hear about future products and test their AWS knowledge via certification exams.


June 21, 2018  8:35 PM

Edge devices’ compute demands complicate cloud IoT choices

Trevor Jones

Cloud vendors want companies to use their platforms for the full scope of their IoT deployments, but that might not be their best choice.

As edge computing emerges as part of IoT deployments, users must decide not only how often to send data to the cloud, but whether to send it there in the first place. These decisions are particularly pressing for industrial customers and other settings where connected devices require more compute power nearby.

For starters, some edge devices have limited or no connectivity, so a steady stream of data transmission back to a cloud platform isn’t feasible. Furthermore, massive cloud data centers are typically located far from the source of IoT data, which can impact latency for data that requires quick analysis and decisions, such as to make an autonomous car change lanes. And some devices must process lots of data quickly or closer to the source of that information for compliance reasons, which in turn necessitates more compute power at the edge.

AWS and Microsoft have begun to fill those gaps in their services with IoT services that extend from the cloud to the edge. AWS’ addition of Greengrass, its stripped-down software for edge devices, was particularly striking — for the first time in more than a decade of operations, AWS made its compute capabilities available outside its own data centers. That shift in philosophy illustrates just how much potential AWS sees in this market, and also some of the limitations.

With Greengrass and Azure IoT Edge, users now can streamline their IoT operations under one umbrella, and companies dabbling in IoT may find that attractive. Others may be drawn to the emerging collection of IoT vendors that process data as close to the source as possible.
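The pattern both services encourage is to process readings locally and send only what matters upstream. Below is a minimal sketch of an edge function in that style, assuming the AWS IoT Greengrass Core SDK (greengrasssdk); the topic name and temperature threshold are illustrative:

```python
import json
import greengrasssdk

# Local client provided by the Greengrass core; publishes over the core's MQTT broker.
iot_client = greengrasssdk.client("iot-data")

TEMP_THRESHOLD_C = 80.0             # illustrative alert threshold
ALERT_TOPIC = "plant/line1/alerts"  # illustrative topic name

def function_handler(event, context):
    """Runs on the edge device: decide locally, publish to the cloud only on anomalies."""
    reading = float(event.get("temperature_c", 0.0))
    if reading >= TEMP_THRESHOLD_C:
        iot_client.publish(
            topic=ALERT_TOPIC,
            payload=json.dumps({"temperature_c": reading, "status": "overheat"}),
        )
    # Normal readings stay on the device or in a local store, saving bandwidth
    # and avoiding the round trip to a distant cloud region.
    return {"forwarded": reading >= TEMP_THRESHOLD_C}
```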

Major cloud providers take a “cloud down” approach that uses existing big data technologies, but that emphasis doesn’t help if the business value of IoT requires decisions in a short timeframe, said Ramya Ravichandar, director of product management at FogHorn Systems. The startup provides industrial customers with machine learning at the edge, in partnership and in competition with those cloud providers.

Ravichandar cited the example of a system of assembly lines, where data is sent back to the cloud to run large-scale machine learning models that improve those systems, potentially across global regions.

“[The cloud is] where you want to leverage heavy duty training on large data stores, because building that model is always going to require bigger [compute power] than what is at the edge,” she said.

Users must decide whether there’s value in sending edge device data to the cloud, determine where to store and process the data, weigh latency requirements and risks, and from all that, determine costs and how to spread them between the edge and the cloud, said Alfonso Velosa, a Gartner analyst.
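Those factors can be framed as a simple per-stream placement decision. The sketch below is purely conceptual — no vendor API is involved, and the thresholds are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    latency_budget_ms: float   # how quickly a decision is needed
    reliably_connected: bool   # can we count on a link to the cloud?
    sovereignty_bound: bool    # must the raw data stay on site?

CLOUD_ROUND_TRIP_MS = 100.0    # illustrative; depends on region proximity

def place(stream: Stream) -> str:
    """Decide where to process a stream, based on the factors above."""
    if stream.sovereignty_bound:
        return "edge"   # compliance: raw data never leaves the site
    if not stream.reliably_connected:
        return "edge"   # intermittent links rule out a steady stream to the cloud
    if stream.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"   # the round trip alone blows the latency budget
    return "cloud"      # otherwise centralize for cheaper storage and bigger models

print(place(Stream("lane-change decision", 10, True, False)))           # -> edge
print(place(Stream("fleet-wide model training", 60_000, True, False)))  # -> cloud
```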

“We’re still figuring out how that architecture is going to roll out,” Velosa said. “Many companies are investing in it but we don’t know the final shape of it.”


June 6, 2018  8:19 PM

JEDI cloud contract looms large for customers, providers

Trevor Jones

Public sector IT and private sector IT can be very different animals, but a looming decision by the Department of Defense has the potential to send shock waves through both sides of the IT world.

The Department of Defense is preparing to accept bids for a potential 10-year, $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract for cloud services as it modernizes and unifies its IT infrastructure. The JEDI cloud deal’s winner-take-all parameters could result in one of the largest windfalls in the history of the market, and also shape perceptions in the private sector — either that AWS’ decade-plus hold on the market is even more dominant, or that a challenger has asserted itself as a viable alternative.

It wouldn’t be the first time a federal cloud contract moved the needle in the private sector. Perceptions about the security of cloud infrastructure changed several years ago as big banks and well-known corporations gave their stamp of approval, but a public sector deal in 2013 stood out for many customers, when AWS won a $600 million contract to build a private cloud for the CIA. As will be the case with the JEDI contract, there were technical differences between the infrastructure the spy agency could access compared to the rest of the AWS customer base, but many corporate decision-makers have argued that if AWS security is good enough for the CIA, it’s certainly good enough for them. At the very least it provided an extra layer of comfort for the choices they made.

The JEDI cloud deal would have less impact on AWS today, as the company brought in more than $5 billion in revenues in its latest quarter alone. Still, the $10 billion contract would dwarf the 2013 CIA deal, and similarly echo across the entire cloud market. Cloud computing is a very capital-intensive, potentially very profitable business — a decade-long cash infusion on that scale would nicely buffer against the torrid growth required for a provider to compete in the hyperscale market.

But AWS isn’t the only cloud vendor making inroads with the federal government. Microsoft signed a deal in May, reportedly worth hundreds of millions of dollars, to provide cloud-based services to the U.S. Intelligence Community. The JEDI cloud contract would be an even bigger feather in Microsoft’s cap as it tries to lure companies to its Azure public cloud.

“If the award goes to Amazon it would tend to expand its lead in the market,” said Andrew Bartels, a Forrester Research analyst. “If it goes to Microsoft it would boost Microsoft Azure, not into the lead, but it would make it more of a two-horse competition.”

The JEDI contract would be an even bigger boon to IBM or Oracle, which have histories with the public sector but struggle to keep pace in the public cloud market. IBM has publicly tossed its hat into the RFP ring for this contract, and much of the public attention on this deal sprang from a private dinner between President Donald Trump and Oracle CEO Safra Catz in which she reportedly told the president the contract heavily favored AWS.

And what about Google Cloud Platform? It’s often lumped in with AWS and Azure for its technical prowess but it hasn’t resonated as much with the enterprise market, and a deal of this size would turn heads. But Google recently pulled out of another Defense contract amid employee concerns about the use of its AI capabilities, and it hasn’t said publicly whether it will seek this JEDI cloud contract.

The government believes the contract is so critical to its defense mission that it must align with a single partner for the next ten years. The counter argument is that cloud technology, capabilities and vendors change so rapidly that such a lengthy contract would lock in and limit the government’s options, said Jason Parry, vice president of client solutions at Force 3, an IT provider that contracts with the federal government.

An updated solicitation for input from the Defense Department was supposed to be published by the end of May. The delay is likely due to the volume of responses the government received, Parry added. The DoD has since declined to give a timeline on when the latest request would become available.

“It will be very interesting to see if they take the input provided and release something that people feel is more aligned with where the industry is headed, or if they stick with a single award,” he said.

Forrester’s Bartels recommends that the government split the JEDI cloud contract among multiple vendors to preserve flexibility and keep providers on their toes. But regardless of who wins, the deal will inevitably serve as another marker in the growth of this market.

“It validates adoption of cloud more broadly,” he said. “In a sense it reinforces the notion that your company can trust the security of cloud platform services.”


February 22, 2018  5:44 PM

Cloud-centric IBM patents promise payoff

Darryl Taft

IBM plans to sow its latest crop of U.S. patents with a strong cloud emphasis, and prepare a feast for its hungry customers.

A substantial number of IBM patents for 2017 — more than 1,900 of its 9,043 total patents — were for cloud technologies, the company disclosed last month. Those numbers illustrate a clear shift in the company’s roadmap for products and services. In past years, chip technology dominated IBM’s patent portfolio, which supported the bulk of the company’s business.

But IBM has pivoted to embrace the cloud as its foundational technology, and shored up its cloud presence through research and development. The company also received a substantial number of patents for AI (1,400 patents) and for security (1,200).

These thousands of cloud-centric patents are a competitive advantage for IBM because the cloud is the vehicle to deliver its strategic imperatives: Watson artificial intelligence technology, analytics, blockchain, cybersecurity, and other areas such as microservices and serverless computing. For instance, two cloud projects that tap growing interest in serverless computing came out of IBM Research: IBM Cloud Functions, IBM’s serverless computing platform formerly known as OpenWhisk; and IBM Composer, a programming model to help developers build, manage, and scale serverless computing applications on IBM Cloud Functions.

“Many of these patents will be very useful in helping customers with cloud performance, integration and management of a multi-cloud environment,” said Judith Hurwitz, CEO of Hurwitz and Associates, Needham, MA. “Hybrid cloud management patents will be really important.”

Intelligence at the edge works with simpler predictive models and locally generated data to make real-time decisions, while cloud datacenters work with massive amounts of data to generate much deeper context, insight, and predictive models, said Paul Teich, an analyst with TIRIAS Research in Austin, Texas.

IBM also can extract value from new server, storage, and network technologies that underpin and improve cloud infrastructures, because cloud-based real-time assistance depends on affordable, reliable, and persistent network latency and bandwidth, Teich said.

“IBM is funding R&D work in all of those areas, as well as developing new algorithms to run on all that clever new hardware, which then enable new services based on their Watson software platform,” he said. “Complex neural network, neuromorphic, and eventually quantum computing accelerators for machine learning and artificial intelligence will live in the cloud for the foreseeable future.”

Turn research into real products

For customers, the increase in cloud patents indicates where IBM is putting R&D dollars. A growing chunk of IBM’s $6 billion annual spend on R&D has been on cloud research. And as AI and big data drive new demands for cloud, IBM has doubled down on cloud research areas such as infrastructure, containers, serverless computing and cloud security, said Jason McGee, IBM Fellow and VP, IBM Cloud.

For instance, US Patent 9,755,923, “Predictive cloud provisioning based on human behaviors and heuristics,” describes a system that monitors unstructured data sources such as news feeds, network traffic/statistics, weather reports and social networks to forecast cloud resource needs and pinpoint where to match increased demand.
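The patent describes an inference pipeline rather than a published API, but the idea can be sketched conceptually; every function, signal and weight below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """External, unstructured inputs of the kind the patent describes mining (values are illustrative)."""
    news_mentions: int      # e.g., an event announcement trending in news feeds
    social_buzz: float      # normalized social-network activity, 0.0 to 1.0
    storm_expected: bool    # weather reports near a region's users
    traffic_trend: float    # recent relative change in observed network traffic

def forecast_extra_capacity(signals: Signals, current_vms: int) -> int:
    """Toy heuristic: turn behavioral signals into a pre-provisioning decision."""
    score = 0.0
    score += 0.3 * min(signals.news_mentions / 100, 1.0)
    score += 0.4 * signals.social_buzz
    score += 0.3 * max(signals.traffic_trend, 0.0)
    if signals.storm_expected:
        score += 0.2  # people stuck indoors tend to drive usage spikes
    return int(current_vms * score)  # extra instances to warm up ahead of demand

extra = forecast_extra_capacity(Signals(250, 0.8, True, 0.15), current_vms=40)
print(f"Pre-provision {extra} additional instances in the affected region")
```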

But Big Blue must translate that R&D emphasis into compelling products and services. “Commercial relevance and immediate value is what matters, and there is no direct correlation that matters for CXOs’ purchasing decisions,” said Holger Mueller, an analyst with Constellation Research, San Francisco. Major IBM patents, such as those for relational database technology, came decades ago, although in quantum computing, IBM research is close to commercial viability and enterprise interest, he said.

IBM also can use its formidable patent portfolio as defensive or offensive weapons — keep the patents in-house to develop into products and services, or license them out for profit, said Frank Dzubeck, CEO of Communications Network Architects in Washington, DC. Companies with strong patent portfolios tend to strike up patent agreements with competitors or partners and allow them to license their patented technology. This also can discourage challenges from competitors who might claim they came up with an idea first.

“IBM cloud competitors like Microsoft and Amazon have something to watch out for, because they might want to pay for the use of some of these things,” Dzubeck said.


January 29, 2018  3:42 PM

Prepare for these cloud computing technologies in 2018

Kathleen Casey

It seems that every year, a new set of cloud computing technologies emerge and shake up the enterprise market. Containers and serverless were buzzworthy in 2017, but what other new trends will this year bring?

AI is expected to be a hot topic, as organizations look to gain more business value out of their data, and cloud security will continue to garner a lot of attention, especially if — or when — more breaches make headlines. Meanwhile, many enterprises in 2018 will prepare for the General Data Protection Regulation, and its effect on their compliance policies.

We asked members of the SearchCloudComputing Advisory Board to share which cloud computing technologies and trends they think are worth following closely this year. Here are their responses:

Gaurav “GP” Pal

2017 saw rapid growth and adoption in public cloud services from major providers, including Microsoft Azure and Amazon Web Services. A number of other cloud providers, including Google and IBM, also managed to score a few high-profile wins. Clearly, commercial cloud computing platforms are here to stay and have rapidly evolved way beyond infrastructure services. Now, these platforms are increasingly focused on business and data services to deliver greater agility and innovation.

2018 promises to be an exciting year and will be all about security, AI and voice-commerce. The continued high-profile data breaches have spurred government agencies to drive new regulations to force companies to take cybersecurity and privacy concerns more seriously. New York’s Department of Financial Services mandated specific cybersecurity and compliance requirements in 23 NYCRR 500, while the Department of Defense mandated the use of NIST SP 800-171 security guidelines by suppliers. More sectors and industries will be forced to seriously evaluate their cloud security posture and make investments to ensure confidentiality, integrity and availability of their digital assets.

After years of hype, AI will finally take off in 2018 due to the confluence of multiple factors, including big data, cybersecurity issues and new user interfaces using voice commands. Major cloud platforms increasingly offer easier-to-use AI-enabled services. For example, AWS GuardDuty is an AI-enabled cybersecurity service that analyzes thousands of log records to detect anomalies and patterns.
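For a sense of how that surfaces to admins, findings can be pulled programmatically. A minimal sketch with boto3’s GuardDuty client, assuming GuardDuty is already enabled and AWS credentials are configured:

```python
import boto3

guardduty = boto3.client("guardduty")

# Each enabled account/region has one or more detectors that collect findings.
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings["Findings"]:
        # Severity is a numeric score; Type names the anomaly or threat pattern detected.
        print(f'{finding["Severity"]:>4}  {finding["Type"]}  {finding["Title"]}')
```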

AI adoption by mainstream businesses will be driven through the creation of specialized decision support systems (DSS). DSS is an old technology that will get a new lease of life through cloud platforms and AI services that will support decision-making and provide major productivity enhancements for businesses. For example, one can see a CIO or CFO within an organization having access to a DSS that helps answer specific questions like, “Should I buy this software?” In part, the emergence of voice-driven interfaces through devices like Alexa will drive the rapid adoption of DSS.

Bill Wilder

Sometimes the best way to predict tomorrow’s weather is to look out the window today. I expect the cloud weather in 2018 will look a lot like it did in 2017. Expect to see the march of packaged machine learning and chatbot services progress with new features. In addition, serverless computing will continue to take mind- and market-share from traditional IaaS and PaaS models, as it’s more flexible, less costly and more fun.

To pinpoint other cloud trends in 2018, we can look to privacy and security. The General Data Protection Regulation (GDPR) takes effect on May 25, 2018 and includes sweeping new rules for data privacy protections for individuals in the EU, with harsh penalties for non-compliance. I forecast two outcomes:

  1. For many businesses, implementing new privacy measures will have a side-effect of better overall security processes, including encryption and password protection, and;
  2. Companies will double-down after seeing other companies hit with big penalties.

Part of this cycle may be delayed, as companies not based in the EU belatedly realize that GDPR applies to them because they do business there. Aside from GDPR, expect the big three public cloud platform providers — Amazon, Azure and Google — to continue to have stellar security records at unprecedented scale, enhancing their collective reputation as the least risky home for the majority of modern compute workloads.

Chris Wilder

I think 2018 is going to be the year of software-defined everything. I think we’re moving away from really expensive hardware… [and] are living in a point where we need to be able to use commodity-based hardware just to reduce the cost of our infrastructure. This is the year that software-defined anything — especially SDN — really takes off.

I think we’re still going to see application containers. There’s still going to be a foothold there, but when you get past that, it’s probably still early for Linux containers. Containers are an interesting way for organizations to pass and process information really quickly; it’s an interesting way to manage information. When you look at DevOps environments… and the development world, containers are just going to be massive.

