The Troposphere


February 22, 2018  5:44 PM

Cloud-centric IBM patents promise payoff

Darryl Taft

IBM plans to sow its latest crop of U.S. patents with a strong cloud emphasis, and prepare a feast for its hungry customers.

A substantial number of IBM patents for 2017 — more than 1,900 of its 9,043 total patents — were for cloud technologies, the company disclosed last month. Those numbers illustrate a clear shift in the company’s roadmap for products and services. In past years, chip technology dominated IBM’s patent portfolio, which supported the bulk of the company’s business.

But IBM has pivoted to embrace the cloud as its foundational technology, and shored up its cloud presence through research and development. The company also received a substantial number of patents for AI (1,400 patents) and for security (1,200).

These thousands of cloud-centric patents are a competitive advantage for IBM because the cloud is the vehicle to deliver its strategic imperatives: Watson artificial intelligence technology, analytics, blockchain, cybersecurity, and other areas such as microservices and serverless computing. For instance, two cloud projects that tap growing interest in serverless computing came out of IBM Research: IBM Cloud Functions, IBM’s serverless computing platform formerly known as OpenWhisk; and IBM Composer, a programming model to help developers build, manage, and scale serverless applications on IBM Cloud Functions.
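For readers unfamiliar with the model, an IBM Cloud Functions action follows the Apache OpenWhisk convention of a single entry point that takes and returns JSON-style dictionaries. Here is a minimal sketch in Python; the greeting logic is purely illustrative, and deployment details (CLI commands, runtimes) vary.

```python
# hello.py -- a minimal, illustrative OpenWhisk-style action.
# Apache OpenWhisk (the project behind IBM Cloud Functions) invokes a
# function named `main` with a dict of parameters and expects a dict
# back, which becomes the JSON result of the invocation.

def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}"}
```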

“Many of these patents will be very useful in helping customers with cloud performance, integration and management of a multi-cloud environment,” said Judith Hurwitz, CEO of Hurwitz and Associates, Needham, MA. “Hybrid cloud management patents will be really important.”

Intelligence at the edge works with simpler predictive models and locally generated data to make real-time decisions, while cloud datacenters work with massive amounts of data to generate much deeper context, insight, and predictive models, said Paul Teich, an analyst with TIRIAS Research in Austin, Texas.

IBM also can extract value from new server, storage, and network technologies that underpin and improve cloud infrastructures, because cloud-based real-time assistance depends on affordable, reliable, and persistent network latency and bandwidth, Teich said.

“IBM is funding R&D work in all of those areas, as well as developing new algorithms to run on all that clever new hardware, which then enable new services based on their Watson software platform,” he said. “Complex neural network, neuromorphic, and eventually quantum computing accelerators for machine learning and artificial intelligence will live in the cloud for the foreseeable future.”

Turn research into real products

For customers, the increase in cloud patents indicates where IBM is putting R&D dollars. A growing chunk of IBM’s $6 billion annual spend on R&D has been on cloud research. And as AI and big data drive new demands for cloud, IBM has doubled down on cloud research areas such as infrastructure, containers, serverless computing and cloud security, said Jason McGee, IBM Fellow and VP, IBM Cloud.

For instance, US Patent 9,755,923, “Predictive cloud provisioning based on human behaviors and heuristics,” describes a system that monitors unstructured data sources such as news feeds, network traffic/statistics, weather reports and social networks to forecast cloud resource needs and pinpoint where to match increased demand.
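The patent text itself is not reproduced here, but purely as an illustration of the general idea, forecasting demand from an external signal and provisioning capacity ahead of it, a toy heuristic might look like the sketch below. All names, numbers and thresholds are hypothetical; this is not IBM’s patented method.

```python
# Toy illustration of demand-driven provisioning (not IBM's patented method).
# Given a stream of external "signal" scores (e.g., mentions in news or
# social feeds), forecast the next period's load with a moving average and
# decide how many instances to have warm ahead of time.

from statistics import mean

def forecast_demand(signal_history, window=6):
    """Naive moving-average forecast over the last `window` observations."""
    recent = signal_history[-window:]
    return mean(recent) if recent else 0.0

def instances_needed(forecast, requests_per_instance=500):
    """Translate forecast request volume into an instance count (ceiling division)."""
    return max(1, -(-int(forecast) // requests_per_instance))

signal = [1200, 1500, 2100, 3400, 5200, 8000]     # hypothetical demand signal
print(instances_needed(forecast_demand(signal)))  # instances to provision ahead of the spike
```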

But Big Blue must translate that R&D emphasis into compelling products and services. “Commercial relevance and immediate value is what matters, and there is no direct correlation that matters for CXOs’ purchasing decisions,” said Holger Mueller, an analyst with Constellation Research, San Francisco. Major IBM patents, such as relational database technology, happened decades ago, although in quantum computing IBM research is close to commercial viability and enterprise interest, he said.

IBM also can use its formidable patent portfolio as defensive or offensive weapons — keep the patents in-house to develop into products and services, or license them out for profit, said Frank Dzubeck, CEO of Communications Network Architects in Washington, DC. Companies with strong patent portfolios tend to strike up patent agreements with competitors or partners and allow them to license their patented technology. This also can discourage challenges from competitors who might claim they came up with an idea first.

“IBM cloud competitors like Microsoft and Amazon have something to watch out for, because they might want to pay for the use of some of these things,” Dzubeck said.

January 29, 2018  3:42 PM

Prepare for these cloud computing technologies in 2018

Kathleen Casey

It seems that every year, a new set of cloud computing technologies emerges and shakes up the enterprise market. Containers and serverless were buzzworthy in 2017, but what other new trends will this year bring?

AI is expected to be a hot topic as organizations look to gain more business value from their data, and cloud security will continue to garner a lot of attention, especially if, or when, more breaches make headlines. Meanwhile, many enterprises in 2018 will prepare for the General Data Protection Regulation and its effect on their compliance policies.

We asked members of the SearchCloudComputing Advisory Board to share which cloud computing technologies and trends they think are worth following closely this year. Here are their responses:

Gaurav “GP” Pal

2017 saw rapid growth and adoption in public cloud services from major providers, including Microsoft Azure and Amazon Web Services. A number of other cloud providers, including Google and IBM, also managed to score a few high-profile wins. Clearly, commercial cloud computing platforms are here to stay and have rapidly evolved way beyond infrastructure services. Now, these platforms are increasingly focused on business and data services to deliver greater agility and innovation.

2018 promises to be an exciting year and will be all about security, AI and voice-commerce. The continued high-profile data breaches have spurred government agencies to drive new regulations to force companies to take cybersecurity and privacy concerns more seriously. New York’s Department of Financial Services mandated specific cybersecurity and compliance requirements in 23 NYCRR 500, while the Department of Defense mandated the use of NIST SP 800-171 security guidelines by suppliers. More sectors and industries will be forced to seriously evaluate their cloud security posture and make investments to ensure confidentiality, integrity and availability of their digital assets.

After years of hype, AI will finally take off in 2018 due to the confluence of multiple factors, including big data, cybersecurity issues and new user interfaces using voice commands. Major cloud platforms increasingly offer easier-to-use AI-enabled services. For example, AWS GuardDuty is an AI-enabled cybersecurity service that analyzes thousands of log records to surface anomalies and suspicious patterns.
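Turning such a service on is largely an API call. As a rough sketch, enabling GuardDuty and pulling its findings with boto3 might look like this; the region choice is an assumption, and credentials come from the standard AWS configuration chain.

```python
# Minimal sketch: enable Amazon GuardDuty and list its current findings.
import boto3

gd = boto3.client("guardduty", region_name="us-east-1")

# Reuse the account's detector if one exists, otherwise create one.
detector_ids = gd.list_detectors()["DetectorIds"]
detector_id = detector_ids[0] if detector_ids else gd.create_detector(Enable=True)["DetectorId"]

# Pull whatever findings GuardDuty has flagged so far and print a summary.
finding_ids = gd.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    for finding in gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]:
        print(finding["Type"], finding["Severity"])
```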

AI adoption by mainstream businesses will be driven through the creation of specialized decision support systems (DSS). DSS is an old technology that will get a new lease on life through cloud platforms and AI services that support decision-making and provide major productivity enhancements for businesses. For example, one can see a CIO or CFO within an organization having access to a DSS that helps answer specific questions like, “Should I buy this software?” In part, the emergence of voice-driven interfaces through devices like Alexa will drive the rapid adoption of DSS.

Bill Wilder

Sometimes the best way to predict tomorrow’s weather is to look out the window today. I expect the cloud weather in 2018 will look a lot like it did in 2017. Expect to see the march of packaged machine learning and chatbot services progress with new features. In addition, serverless computing will continue to take mind- and market-share from traditional IaaS and PaaS models, as it’s more flexible, less costly and more fun.

To pinpoint other cloud trends in 2018, we can look to privacy and security. The General Data Protection Regulation (GDPR) takes effect on May 25, 2018, and includes sweeping new rules for data privacy protections for individuals in the EU, with harsh penalties for non-compliance. I forecast two outcomes:

  1. For many businesses, implementing new privacy measures will have the side effect of better overall security processes, including encryption and password protection (see the sketch after this list); and
  2. Companies will double down after seeing other companies hit with big penalties.
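As a concrete flavor of that first outcome, two common hygiene steps, encrypting a stored record and hashing a password with a per-user salt, might look like the following Python sketch. Library and parameter choices are assumptions, not a compliance recipe.

```python
# Illustrative sketch of two common data-protection hygiene steps:
# encrypting a stored record and storing only a salted, slow password hash.
import hashlib
import os
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt personal data at rest with a symmetric key (keep the key in a KMS or vault).
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"jane.doe@example.com")
assert Fernet(key).decrypt(token) == b"jane.doe@example.com"

# Never store raw passwords: keep a salted PBKDF2 digest instead.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000)
print(len(token), digest.hex()[:16])
```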

Part of this cycle may be delayed, as companies not based in the EU belatedly realize that GDPR applies to them because they do business there. Aside from GDPR, expect the big three public cloud platform providers — AWS, Azure and Google Cloud — to continue to have stellar security records at unprecedented scale, enhancing their collective reputation as the least risky home for the majority of modern compute workloads.

Chris Wilder

I think 2018 is going to be the year of software-defined everything. I think we’re moving away from really expensive hardware… [and] are living in a point where we need to be able to use commodity-based hardware just to reduce the cost of our infrastructure. This is the year that software-defined anything — especially SDN — really takes off.

I think we’re still going to see application containers. There’s still going to be a foothold there, but when you get past that, it’s probably still early for Linux containers. Containers are an interesting way for organizations to pass and process information really quickly; it’s an interesting way to manage information. When you look at DevOps environments… and the development world, containers are just going to be massive.


January 26, 2018  2:49 PM

Cloud conferences to check out in the first half of 2018

Kathleen Casey

As cloud adoption grows, IT professionals with relevant experience are in high demand. Cloud conferences and events are the perfect place to build your skill sets, meet industry leaders and expand your business network. Whether you’re just starting out in cloud, or are a seasoned pro, there’s an opportunity to learn new technologies, gain more experience and add certifications to your resume.

Here are a few cloud conferences and events, all in the first half of 2018, for IT pros to consider.

Think 2018

March 19 – 22
Las Vegas

At IBM’s Think event, IT pros can meet experts and attend sessions on cloud topics ranging from hybrid cloud to DevOps and microservices. For those who want to gain some hands-on experience, Think Academy provides labs that cover various technologies, and offers over 300 certifications.

Microsoft Tech Summit

March 5 – 6
Washington D.C.

March 19 – 20
San Francisco

Microsoft’s free technical learning event is aimed at IT pros and developers who want to learn more about Azure and Microsoft 365. There are over 80 sessions — ranging from introductory to advanced — that cover current cloud trends and Azure tools. This year, the event will also dive into the details of Azure Government and Microsoft 365 for U.S. Government.

Cloud Foundry Summit

April 18 – 20
Boston

The Cloud Foundry Summit is for users and developers of all experience levels who want to expand their knowledge of the open source platform. Learn how to scale, manage and deploy applications at the hands-on training workshops. Join sessions that cover Cloud Foundry features and core updates. Beyond introductory information, some tracks focus on particular services, buildpacks and architectural components. Attendees can also take the Certified Developer Exam at the event.

Dell Technologies World

April 30 – May 3
Las Vegas

Dell’s technology conference offers three major tracks that cater to IT pros, administrators and architects, as well as over 400 sessions that cover topics including cloud, big data and security. Hands-on labs are available as both self-paced and guided workshops, and attendees can pursue certifications for cloud architecture, systems administration, data science and other topics.

Interop ITX 2018

April 30 – May 4
Las Vegas

The 32nd Interop event offers seven tracks, including one dedicated to cloud, with sessions on migration, management and integration. The five-day event includes Summits, which drill down into specific topics, like AI, as well as hands-on sessions, which are a new addition to the event. Two of these interactive sessions focus on Kubernetes and IoT/cognitive services.

OpenStack Summit

May 21 – 24
Vancouver

In addition to OpenStack, this summit offers sessions and education on various open source technologies, including Kubernetes and Docker. Cloud admins, architects, developers and other IT pros can hone their skills for private, public and multi-cloud environments. Hands-on training and labs from the OpenStack Academy help IT pros prepare for the Certified OpenStack Administrator Exam. Users and developers can also share their opinions on recent OpenStack releases, and brainstorm ideas for the future, at the event’s Forum.

Cloud Expo

June 5 – 7
New York City

The 22nd international Cloud Expo enables cloud professionals to network with colleagues and choose from hundreds of sessions to attend, including user panels and industry keynotes. The cloud event has eight tracks to follow that explore hot topics like hybrid and multi-cloud, machine learning, artificial intelligence and microservices.


November 10, 2017  3:07 PM

Three pieces of advice to master disaster recovery in the cloud

Kathleen Casey

With so many natural disasters in the news this year, many enterprises have thought about their approach to disaster recovery — and whether their plan is foolproof.

The cloud provides many benefits for disaster recovery (DR), and enables an enterprise to safeguard its data across multiple regions. But that doesn’t mean the work stops there. We asked members of the SearchCloudComputing Advisory Board to share their top advice for disaster recovery in the cloud. Here are their responses:

Bill Wilder

There are many ways to leverage the cloud for DR, so I will focus on one use case: a modern mission-critical application running in the cloud. In this scenario, the most basic advice for DR is to embrace the cloud. Otherwise, you are doing it the hard way.

As you migrate workloads to the cloud, the more modern and cloud-native your workloads and infrastructure are, the easier the DR. Let’s consider both your application code and underlying databases. Whether leveraging infrastructure as a service, platform as a service or serverless infrastructure to run application logic, it is typically not challenging to configure an equivalent DR environment running those same code assets for web servers, microservices, etc. But what about DR for the database? While running your own databases in virtual machines is a very common cloud usage pattern, this makes disaster recovery in the cloud nearly as daunting as on premises; your team plays a big role in tackling DR complexity and operations, including needing access to folks with very specialized skills.

It is better to let your cloud do the heavy lifting for you by using native database services. Whether a relational model, like Azure SQL Database, or a NoSQL document or graph model, like Azure Cosmos DB, or equivalents on other cloud platforms, you can configure the database through the cloud platform to do the DR for you with a few mouse clicks — or with a DevOps template. These services offer sophisticated geo-replication capabilities with dozens of remote data center locations from which to choose for your DR sites. In my experience, when people first see this, they think it is too good to be true, but the crazy thing is, it’s true. Stop resisting. Use the cloud.

Gaurav “GP” Pal

The recent spate of natural disasters has spurred a number of organizations to mature their plans and think hard about disaster recovery in the cloud and continuity of operations (COOP). Cloud platforms are especially well suited to help organizations implement cost-effective DR and COOP systems, given the multi-region spread of cloud data centers. To begin a COOP plan, classify and categorize information assets as critical, high, medium or low, according to how critical they are to business operations. Once those assets have been classified, cloud-based services can readily help meet the availability requirements for the full stack as appropriate.

For example, if desktop services are needed, Amazon WorkSpaces, among others, provides “desktops in the sky.” To avoid data loss, back up corporate data, including essential financial or customer data, to durable storage services like Amazon Simple Storage Service, Glacier or similar offerings. Cloud platforms also allow the creation of cost-effective DR environments using concepts like cold DR, pilot-light DR or hot DR. These variations allow organizations to find the right balance between cost and the level of continuity needed in the event of a disaster.
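For the backup piece, a minimal boto3 sketch might upload an archive to S3 and let a lifecycle rule tier it down to Glacier after 30 days; the bucket name, key and timing below are hypothetical.

```python
# Minimal sketch: back up a file to S3 and add a lifecycle rule that
# transitions backup objects to Glacier after 30 days.
import boto3

s3 = boto3.client("s3")
bucket = "example-corp-dr-backups"  # hypothetical bucket name

# Upload the backup object with server-side encryption at rest.
s3.upload_file("finance-backup.tar.gz", bucket, "backups/finance-backup.tar.gz",
               ExtraArgs={"ServerSideEncryption": "AES256"})

# Tier anything under backups/ down to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-backups",
        "Filter": {"Prefix": "backups/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]},
)
```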

Christopher Wilder

There are many approaches to disaster recovery in the cloud, including data backup, infrastructure/network recovery and security. The one piece of advice I give is to outsource DR functions to a security operations center (SOC) or a managed security services provider to protect data, infrastructure and communications integrity. Companies like Accenture, CyberHat’s Cyrebro, C&W Business and Deloitte provide extensive SOC services that enable both security and recovery services in the event of a disaster.

I am a fan of SOCs because their one job is to protect their customers from threats both outside and within the organization. SOCs understand the physical location of critical systems, where data resides and what safeguards need to be put in place to eliminate disaster proactively, and in many cases stop it before it occurs. Unlike most cloud service providers (CSPs), SOCs focus on mitigating risk and protecting data and systems in the event of an attack or disaster, whereas CSPs look at uptime and business continuity as their crucial measure of success.

From a cloud DR perspective, it is essential to understand what your risks are and identify the top considerations of DR within your business. These include, but are not limited to:

  • preventing downtime and minimizing time to recovery;
  • data integrity/mirroring to reduce data loss; and
  • infrastructure and data security controls to ensure you have the right systems in place to withstand a disaster.

DR plans must be at the forefront of your cloud strategy, yet most organizations do not have the skills, budget or resources to be successful on their own. That is why it is vital to choose the right SOC provider to protect what matters.


September 22, 2017  8:13 PM

In cloud migration services, what’s old is new again

Trevor Jones

Despite all the promise that the cloud will usher in the next wave of technical innovations, a very traditional distribution model has taken a central role for cloud providers.

With IBM Cloud Mass Data Migration, Big Blue is the latest large-scale vendor to lean on shipping companies to help get enterprises to its public cloud. The data-transfer appliance is similar to offerings from Amazon Web Services and Google Cloud Platform, and it shows that even with advancements in networking and the proliferation of cloud facilities around the globe, snail mail is still the quickest way to get large amounts of data from here to there.

Enterprises may encounter a significant learning curve to adapt to a public cloud provider’s platform, so typically they map out a slow, deliberate move to the cloud that takes months to complete. With that kind of lead time, direct connections can be a viable option if a company’s private data center is close enough to the fat pipes that provide sufficient bandwidth for a cloud migration strategy.

But if there are issues with network reliability or concerns about moving data over the public internet, these shippable boxes can be a better option. And as these data transfers swell into the petabytes, enterprises simply want the fastest means to get that data out of their data center.

Every hyperscale cloud provider needs to offer this kind of physical migration service to satisfy enterprises’ large-scale cloud migration demands, said Rhett Dillingham, an analyst with Moor Insights and Strategy.

“Magnetic disc storage is so much more dense and efficient than network over the Internet or even direct connections,” he said. “It’s going to be years to decades before [snail mail] is surpassed by networking.”

The specs on these devices aren’t all that different. With 120 terabytes of storage, IBM’s device offers more capacity than AWS’ but less than Google’s. It’s shipped in a rugged container, and IBM says the entire process of ordering the device, uploading the data, shipping it back and offloading it to its cloud can be done in under a week. The service is currently limited to the U.S., with a flat fee of $395 per device that includes shipping costs.
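A back-of-the-envelope calculation shows why shipping can still win at this scale; the link speeds below are assumptions for illustration.

```python
# Back-of-the-envelope: how long does 120 TB take over a network link?
# Link speeds and utilization are assumptions, not measured figures.
TB = 10**12  # bytes
data_bytes = 120 * TB

for gbps in (1, 10):
    seconds = (data_bytes * 8) / (gbps * 10**9)
    print(f"{gbps:>2} Gbps at full utilization: {seconds / 86400:.1f} days")

# Roughly 11 days at 1 Gbps and just over a day at 10 Gbps -- before
# protocol overhead, shared links or retries -- which is why an appliance
# that round-trips in under a week can still be the faster option.
```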

The concept behind these services certainly isn’t novel, as companies have shipped data on physical devices for decades for archival or disaster recovery. AWS was first on the market with Snowball in 2015, and its advancements portend a future where these boxes are more than just a faster means to back up data. Snowballs can be used to ship data between a customer and Amazon’s data centers, and the latest iteration, Snowball Edge, includes limited compute and networking capabilities so it can be used as an edge device.

It’s not surprising that IBM, which has shifted its cloud strategy to focus on platform as a service and AI with Watson, is not pitching its device as a means to clear out space in enterprises’ data centers and act as a surrogate for cold storage. Instead, Big Blue wants users to get as much data on its cloud as quickly as possible, so they can run advanced analytics.

That also fits nicely with IBM’s partnerships with VMware and SAP, with the expectation that many large enterprises will use this device to speed up those data-heavy migrations to the public cloud.

So whether it’s for moving legacy workloads or taking advantage of advanced cloud services, expect postal services to be an integral part of enterprises’ cloud migrations for the foreseeable future.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


May 8, 2017  2:39 PM

A customer success story, with a twist

James Montgomery

One packed session at last week’s Red Hat Summit in Boston was a case study on an internally built integration of Ansible with Red Hat CloudForms, to orchestrate a hybrid cloud environment with VMware and AWS. Ansible is an especially useful tool to deploy Puppet agents into the environment, noted Phil Avery, senior Unix engineer at BJ’s Wholesale Club.

“I was pretty chuffed about this,” he said.

And then on the first day of the conference he heard, for the first time, about Red Hat’s brand-new CloudForms 4.5 and its native Ansible integration. So he redid his presentation to play down the novelty of his own Ansible work.

Avery hadn’t had an inside track on Red Hat’s CloudForms development, but Red Hat worked with him for “a long time” including a visit from top management, confirmed copresenter Matt St. Onge, senior solutions architect at Red Hat. “I’d like to think that Phil is an upstream contributor,” he quipped during the session.

Welcome to business models at the intersection of open source and “-as-a-service” — where vendors have less insight into what customers are doing, and vice-versa.

“Vendors want to build things for hundreds and thousands of customers, so it is a fine balance of building something that is adopted and then shows traction,” said Holger Mueller, principal analyst and vice president at Constellation Research, Inc. in Cupertino, CA.

Business models that attempt to tap the upswell of interest in open source and -as-a-service offerings, however, don’t have as much customer visibility — SaaS is particularly outdated, Mueller said. Hence, the existence of customer advisory boards and special interest groups.

Clever customers that conjure workarounds to solve a pressing need aren’t rare, of course — but especially in the world of open source and as-a-service, birds of a feather tend to flock together.

“This could have been my demo” of Ansible integration for hybrid cloud orchestration, crowed one engineer at a global bank headquartered in the Netherlands, after the session. “Why haven’t I seen this?” said another, a technical architect at a U.S.-based financial services company.

For his part, Avery seems unfazed by how things turned out. He’s using CloudForms 4.2 but will look at version 4.5 and likely upgrade — after all, Ansible integration is native now. Still, as he continues to figure out what improvements help him do his job, communication lines need to improve: “We need to get back on that wavelength,” he said.


April 7, 2017  6:03 PM

With vCloud Air sale, VMware clears cloud computing path

Ed Scannell

With the sale of its long-languishing vCloud Air offering this week, VMware found a way to step away from a product that has had an uncertain future for quite some time.

The company sold its vCloud Air business to OVH, Europe’s largest cloud provider, for an undisclosed sum, handing off its vCloud Air operations, sales team and data centers to add to OVH’s existing cloud services business.

But VMware isn’t exactly washing its hands of the product. The company will continue to direct research and development for vCloud Air, supplying the technology to OVH – meaning VMware still wants to control the technical direction of the product. It also will assist OVH with various go-to-market strategies, and jointly support VMware users as they transfer their cloud operations to OVH’s 20 data centers spread across 17 countries.

The sale of vCloud Air should lift the last veil of mist that has shrouded VMware’s cloud computing strategy for years. VMware first talked about its vCloud initiative in 2008, and six years later relaunched the product as vCloud Air, a hybrid IaaS offering for its vSphere users. It never gained measurable traction among IT shops and was swallowed up by a number of competitors, most notably AWS and Microsoft.

The company quickly narrowed its early ambitions for vCloud Air to a few specific areas, such as disaster recovery, acknowledged Raghu Raghuram, VMware’s chief operating officer for products and cloud services, in a conference call to discuss the deal.

Further obscuring VMware’s cloud strategy was EMC’s $1.2 billion purchase of Virtustream in 2015, an offering that had every appearance of competing with vCloud Air. This froze the purchasing decisions of would-be vCloud Air buyers, who waited to see how EMC and VMware would position the two offerings.

Even a proposed joint venture between VMware and EMC, called the Virtustream Cloud Services Business, an attempt to deliver a more cohesive technical strategy, collapsed when VMware pulled out of the deal. Dell’s acquisition of EMC, and by extension VMware, didn’t do much to clarify what direction the company’s cloud computing strategy would take.

But last year VMware realized the level of competition it was up against with AWS and made peace with the cloud giant, signing a deal that makes it easier for corporate shops to run VMware both on their own servers and on servers in AWS’ public cloud. Announced last October and due in mid-2017, the upcoming product, called VMware Cloud on AWS, lets users run applications across vSphere-based private, hybrid and public clouds.

With the sale of vCloud Air, the company removes another distraction for both itself and its customers. Perhaps now the company can focus fully on its ambitious cross-cloud architecture, announced at VMworld last August, which promises to help users manage and connect applications across multiple clouds. VMware delivered those offerings late last year, but the products haven’t created much buzz since.

VMware officials, of course, don’t see the sale as the removal of an obstacle, but rather “the next step in vCloud Air’s evolution,” according to CEO Pat Gelsinger, in a prepared statement. He added the deal is a “win” for users because it presents them with greater choice — meaning they can now choose to migrate to OVH’s data centers, which both companies claim can deliver better performance.

Hmm, well that’s an interesting spin. But time will tell if this optimism has any basis in reality.

After the sale is completed, which should be sometime this quarter, OVH will run the service under the name vCloud Air Powered by OVH. Whether it is wise to keep the vCloud brand, given the product’s less-than-stellar track record, remains to be seen.

Ed Scannell is a senior executive editor with TechTarget. Contact him at escannell@techtarget.com.


March 20, 2017  2:48 PM

Awareness of shared-responsibility model is critical to cloud success

Trevor Jones

When companies move to the cloud, it’s paramount that they know where the provider’s security role ends and where the customer’s begins.

The shared-responsibility model is one of the fundamental underpinnings of a successful public cloud deployment. It requires vigilance by the cloud provider and customer—but in different ways. Amazon Web Services (AWS), which developed the philosophy as it ushered in public cloud, describes it succinctly as knowing the difference between security in the cloud versus the security of the cloud.

And that model, which can be radically different from how organizations are used to securing their own data centers, often creates a disconnect for newer cloud customers.

“Many organizations are not asking the right question,” said Ananda Rajagopal, vice president of products at Gigamon, a network-monitoring company based in Santa Clara, Calif. “The right question is not, ‘Is the cloud secure?’ It’s, ‘Is the cloud being used securely?'”

And that’s a change from how enterprises are used to operating behind the firewall, said Abhi Dugar, research director at IDC. The security of the cloud refers to all the underlying hardware and software:

  • compute, storage and networking
  • AWS global infrastructure

That leaves everything else—including the configuration of those foundational services—in the hands of the customer:

  • customer data
  • apps and identity and access management
  • operating system patches
  • network and firewall configuration
  • data and network encryption

Public cloud vendors and third parties offer services to assist in these areas, but it’s ultimately up to customers to set policies and track compliance with them.
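As one small example of the customer’s side of the ledger, the network and firewall configuration item above, locking a security group down to a known address range is the customer’s job, not the provider’s. A minimal boto3 sketch follows; the VPC ID, names and CIDR are hypothetical.

```python
# Illustrative customer-side control: a security group that only admits
# HTTPS from a known corporate address range.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="app-tier-restricted",
    Description="HTTPS from corporate network only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate egress range"}],
    }],
)
```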

The result is a balancing act, said Jason Cradit, senior director of technology at TRC Companies, an engineering and consulting firm for the oil and gas industry. TRC, which uses AWS as its primary public cloud provider, turns to companies like Sumo Logic and Trend Micro to help segregate duties and fill the gaps. And it also does its part to ensure it and its partners are operating securely.

“Even though it’s a shared responsibility, I still feel like with all my workloads I have to be aware and checking [that they] do their part, which I’m sure they are,” Cradit said. “If we’re going to put our critical infrastructure out there, we have to live up to standards on our side as much as we can.”

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 16, 2016  7:01 PM

Plan to kill Cisco public cloud highlights the investment needed to compete

Trevor Jones

The graveyard of public clouds is littered with traditional IT vendors, and it’s about to get a bit more crowded.

Cisco has confirmed a report by The Register that it will shut down its Cisco Intercloud Services public cloud early next year. The company rolled out Intercloud in 2014 with plans to spend $1 billion to create a global interconnection among data center nodes targeted at IoT and software as a service offerings.

The networking giant never hitched its strategy to being a pure infrastructure as a service provider, instead focusing on a hybrid model based on its Intercloud Fabric. The goal was to connect to other cloud providers, both public and private. Those disparate environments could then be coupled with its soon-to-be shuttered OpenStack-based public cloud, which includes a collection of compute, storage and networking.

“The end of Cisco’s Intercloud public cloud is no surprise,” said Dave Bartoletti, principal analyst at Forrester. “We’re long past the time when any vendor can construct a public cloud from some key technology bits, some infrastructure, and a whole mess of partners.”

Cisco will help customers migrate existing workloads off the platform. In a statement, the company indicated it expects no “material customer issues as a result of the transition” – a possible indication of the limited customer base using the service. Cisco pledged to continue to act as a connector for hybrid environments despite the dissolution of Intercloud Services.

Cisco is hardly the first big-name vendor to enter this space with a bang and exit with a whimper. AT&T, Dell, HPE — twice — and Verizon all planned to be major players only to later back out. Companies such as Rackspace and VMware still operate public clouds but have deemphasized those services and reconfigured their cloud strategy around partnerships with market leaders.

Of course, legacy vendors are not inherently denied success in the public cloud, though clearly the transition to an on-demand model involves some growing pains. Microsoft Azure is the closest rival to Amazon Web Services (AWS) after some early struggles. IBM hasn’t found the success it likely expected when it bought bare-metal provider SoftLayer, but it now has some buzz around Watson and some of its higher-level services. Even Oracle, which famously derided the cloud years ago, is seen by some as a dark horse after spending years rebuilding its public cloud.

To compete in the public cloud means a massive commitment to resources. AWS, which essentially created the notion of public cloud infrastructure a decade ago and still holds a sizable lead over its nearest competitors, says it adds enough server capacity every day to accommodate the entire Amazon.com data center demand from 2005. Google says it spent $27 billion over the past three years to build Google Cloud Platform — and is still seen as a distant third in the market.

Public cloud also has become much more than just commodity VMs. Providers continue to extend infrastructure and development tools. AWS alone has 92 unique services for customers.

“We don’t expect any new global public clouds to emerge anytime soon,” Bartoletti said. “The barriers to entry are way too high.”

Intercloud won’t be alone in its public flogging on the way to the scrap heap, but high-profile public cloud obits will become fewer and farther between in 2017 and beyond — simply because there’s no room left to try and fail.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


November 15, 2016  4:30 PM

Google cloud consulting service a two-way street

Trevor Jones

Google received plenty of attention when it reshuffled its various cloud services under one business-friendly umbrella, but tucked within that news was a move that also could pay big dividends down the road.

The rebranded Google Cloud pulls together various business units, including Google Cloud Platform (GCP), the renamed G Suite set of apps, machine learning tools and APIs and any Google devices that connect to the cloud. Google also launched a consulting services program called Customer Reliability Engineering, which may have an outsized impact compared to the relatively few customers that will ever get to participate in it.

Customer Reliability Engineering isn’t a typical professional services contract in which a vendor guides its customer through the various IT operations processes for a fee, nor is it aimed at partnering with a forward-leaning company to develop new features. Instead, this is focused squarely on ensuring reliability — and perhaps most notably, there’s no charge for participating.

The reliability focus is not on the platform, per se, but rather the customers’ applications that are run on the platform. It’s a response to uncertainty about how those applications will behave in these new environments, and the fact that IT operations teams are no longer in the war room making decisions when things go awry.

“It’s easy to feel at 3 in the morning that the platform you’re running on doesn’t care as much as you do because you’re one of some larger number,” said Dave Rensin, director of the Customer Reliability Engineering initiative.

Here’s the idea behind the CRE program: a team of Google engineers shares responsibility for the uptime and health of a customer’s system, including service-level objectives, monitoring and paging. They inspect all elements of an application to identify gaps and determine the best ways to move from four nines of availability to five or six.
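The jump from four nines to five is easier to feel as an error budget, the amount of downtime an availability target permits over a window. A quick sketch, assuming a 30-day window:

```python
# How much downtime does an availability target leave per 30-day window?
# The window length is an assumption; SRE practice calls the allowed
# unreliability the "error budget."
WINDOW_MINUTES = 30 * 24 * 60

for slo in (0.999, 0.9999, 0.99999):
    budget_minutes = WINDOW_MINUTES * (1 - slo)
    print(f"{slo * 100:g}% availability -> {budget_minutes:.1f} minutes of downtime per 30 days")

# 99.9%   -> ~43.2 minutes
# 99.99%  -> ~4.3 minutes
# 99.999% -> ~0.4 minutes (about 26 seconds)
```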

There are a couple ways Google hopes to reap rewards from this new program. While some customers come to Google just to solve a technical problem such as big data analytics, this program could prove tantalizing for another type of user Rensin describes as looking to “buy a little bit of Google’s operational culture and sprinkle it into some corners of their business.”

Of course, Google’s role here clearly isn’t altruistic. One successful deployment likely begets another, and that spreads to other IT shops as they learn what some of their peers are doing on GCP.

It also doesn’t do either side any favors when resources aren’t properly utilized and a new customer walks away dissatisfied. It’s in Google’s interest to make sure customers get the most out of the platform and to be a partner rather than a disinterested supplier that’s just offering up a bucket of different bits, said Dave Bartoletti, principal analyst with Forrester Research.

“It’s clear people have this idea about the large public cloud providers that they just want to sell you crap and they don’t care how you use it, that they just want you to buy as much as possible — and that’s not true,” Bartoletti said.

Rensin also was quick to note that “zero additional dollars” is not the same as “free” — CRE will cost users effort and organizational capital to change procedures and culture. Google also has instituted policies for participation that require the system to pass an inspection process and not routinely blow its error budget, while the customer must actively participate in reviews and postmortems.

You scratch my back, I’ll scratch yours

Customer Reliability Engineering also comes back to the question of whether Google is ready to handle enterprise demands. It’s one of the biggest knocks against Google as it attempts to catch Amazon and Microsoft in the market, and an image the company has fought hard to reverse under the leadership of Diane Greene. So not only does this program aim to bring a little Google operations to customers, it also aims to bring some of that enterprise know-how back inside the GCP team.

It’s not easy to shift from building tools that focus on consumer life to a business-oriented approach, and this is another sign of how Greene is guiding the company to respond to that challenge, said Sid Nag, research director at Gartner.

“They’re getting a more hardened enterprise perspective,” he said.

There’s also a limit to how many users can participate in the CRE program. Google isn’t saying exactly what that cap is, but it does expect demand to exceed supply — only so many engineers will be dedicated to a program without direct correlation to generating revenues.

Still, participation won’t be selected purely by which customer has the biggest bill. Those decisions will be made by the business side of the GCP team, but with a willingness to partner with teams doing interesting things, Rensin said. To that end, it’s perhaps telling that the first customer wasn’t a well-established Fortune 500 company, but rather Niantic, a gaming company behind the popular Pokémon Go mobile game.

Trevor Jones is a news writer with TechTarget’s Data Center and Virtualization Media Group. Contact him at tjones@techtarget.com.

