The Troposphere


November 15, 2016  4:30 PM

Google cloud consulting service a two-way street

Trevor Jones

Google received plenty of attention when it reshuffled its various cloud services under one business-friendly umbrella, but tucked within that news was a move that also could pay big dividends down the road.

The rebranded Google Cloud pulls together various business units, including Google Cloud Platform (GCP), the renamed G Suite set of apps, machine learning tools and APIs and any Google devices that connect to the cloud. Google also launched a consulting services program called Customer Reliability Engineering, which may have an outsized impact compared to the relatively few customers that will ever get to participate in it.

Customer Reliability Engineering isn’t a typical professional services contract in which a vendor guides its customer through the various IT operations processes for a fee, nor is it aimed at partnering with a forward-leaning company to develop new features. Instead, this is focused squarely on ensuring reliability — and perhaps most notably, there’s no charge for participating.

The reliability focus is not on the platform, per se, but rather on the customers’ applications that run on it. It’s a response to uncertainty about how those applications will behave in these new environments, and to the fact that IT operations teams are no longer in the war room making decisions when things go awry.

“It’s easy to feel at 3 in the morning that the platform you’re running on doesn’t care as much as you do because you’re one of some larger number,” said Dave Rensin, director of the Customer Reliability Engineering initiative.

Here’s the idea behind the CRE program: a team of Google engineers shares responsibility for the uptime and health of a customer’s system, including its service level objectives, monitoring and paging. They inspect all elements of an application to identify gaps and determine the best ways to move from four nines of availability to five or six.
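
The arithmetic behind those availability targets is simple but unforgiving. Below is a minimal Python sketch (an illustration only, not part of Google’s CRE tooling; the 30-day window is an assumption) showing how each extra nine shrinks the monthly downtime allowance that an error budget represents.

```python
# Illustrative only: how an availability SLO maps to a downtime "error budget."
# The target values and the 30-day window are assumptions for this example,
# not figures from Google's CRE program.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day window

def error_budget_minutes(slo: float) -> float:
    """Allowed downtime per 30-day month for a given availability SLO."""
    return MINUTES_PER_MONTH * (1 - slo)

for label, slo in [("three nines", 0.999),
                   ("four nines", 0.9999),
                   ("five nines", 0.99999),
                   ("six nines", 0.999999)]:
    print(f"{label} ({slo:.6f}): {error_budget_minutes(slo):.2f} minutes/month")

# The output shows why each extra nine is hard: four nines allows roughly
# 4.3 minutes of downtime a month, five nines about 26 seconds, and six
# nines under 3 seconds.
```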

There are a couple of ways Google hopes to reap rewards from this new program. While some customers come to Google just to solve a technical problem, such as big data analytics, this program could prove tantalizing for another type of user Rensin describes as looking to “buy a little bit of Google’s operational culture and sprinkle it into some corners of their business.”

Of course, Google’s role here clearly isn’t altruistic. One successful deployment likely begets another, and that spreads to other IT shops as they learn what some of their peers are doing on GCP.

It also doesn’t do either side any favors when resources aren’t properly utilized and a new customer walks away dissatisfied. It’s in Google’s interest to make sure customers get the most out of the platform and to be a partner rather than a disinterested supplier that’s just offering up a bucket of different bits, said Dave Bartoletti, principal analyst with Forrester Research.

“It’s clear people have this idea about the large public cloud providers that they just want to sell you crap and they don’t care how you use it, that they just want you to buy as much as possible — and that’s not true,” Bartoletti said.

Rensin also was quick to note that “zero additional dollars” is not the same as “free” — CRE will cost users effort and organizational capital to change procedures and culture. Google also has instituted policies for participation that require the system to pass an inspection process and not routinely blow its error budget, while the customer must actively participate in reviews and postmortems.

You scratch my back, I’ll scratch yours

Customer Reliability Engineering also comes back to the question of whether Google is ready to handle enterprise demands. It’s one of the biggest knocks against Google as it attempts to catch Amazon and Microsoft in the market, and an image the company has fought hard to reverse under the leadership of Diane Greene. So not only does this program aim to bring a little Google operations to customers, it also aims to bring some of that enterprise know-how back inside the GCP team.

It’s not easy to shift from building tools that focus on consumer life to a business-oriented approach, and this is another sign of how Greene is guiding the company to respond to that challenge, said Sid Nag, research director at Gartner.

“They’re getting a more hardened enterprise perspective,” he said.

There’s also a limit to how many users can participate in the CRE program. Google isn’t saying exactly what that cap is, but it does expect demand to exceed supply — only so many engineers will be dedicated to a program with no direct correlation to revenue.

Still, participation won’t be selected purely by which customer has the biggest bill. Those decisions will be made by the business side of the GCP team, but with a willingness to partner with teams doing interesting things, Rensin said. To that end, it’s perhaps telling that the first customer wasn’t a well-established Fortune 500 company, but rather Niantic, the gaming company behind the popular Pokémon Go mobile game.

Trevor Jones is a news writer with TechTarget’s Data Center and Virtualization Media Group. Contact him at tjones@techtarget.com.

October 31, 2016  8:03 PM

Google’s Stackdriver taps into growing multicloud trend

Kristin Knapp

A clear trend has emerged around public cloud adoption in the enterprise: organizations increasingly employ a mix of different cloud services, rather than go all in with one. As that movement continues, cloud providers who support integration with platforms outside their own – and especially with public cloud titan Amazon Web Services – have the most to gain.

Google seems to have that very thought in mind with the recent rollout of its Stackdriver monitoring tool.

Stackdriver, originally built for Amazon Web Services (AWS) but bought by Google in 2014, became generally available this month, providing monitoring, alerting and a number of other capabilities for Google Cloud Platform. Most notably, though, it hasn’t shaken its AWS cloud roots.

Google’s continued support for AWS shouldn’t come as a big surprise for legacy Stackdriver users, said Dan Belcher, product manager for Google Cloud Platform and co-founder of Stackdriver. His team has attempted for the past two years to assuage any customer concerns about AWS support falling by the wayside.

“[Customers were] looking for assurances that, at the time, we were going to continue to invest in support for Amazon Web Services,” Belcher said. “And I think we have addressed those in many ways.”

Mark Annati, VP of IT at Extreme Reach, an advertising firm in Needham, Mass., has been a Stackdriver user since 2013 and still uses the tool to monitor his company’s cloud deployment, which spans Google, AWS and Azure. He said his company is still evaluating the full impact of Stackdriver’s migration onto Google’s internal infrastructure, but so far it appears to be business as usual.

And, considering his need for AWS monitoring support, that’s a relief.

“I have had no indication from Stackdriver that they would stop monitoring AWS,” Annati said. “If they did, that would cause us significant pain.”

There are a few changes, however, for legacy Stackdriver users post-acquisition. Now that Stackdriver is hosted on Google’s own infrastructure, for example, users need a Google cloud account to access the tool, and to manage user access and billing. In addition, a few features that existed in the tool pre-acquisition — such as chart annotations, on-premises server monitoring and integration with AWS CloudTrail — are unsupported, at least for now, as part of the migration to Google.

Stackdriver pricing options are slightly different, depending on whether you use the tool exclusively for Google, or for both Google and AWS. All Google Cloud Platform (GCP) users, for example, have access to a free Basic tier and a Premium tier, while users who require the AWS integration only have access to the Premium tier. That higher-level tier costs $8 per monitored cloud resource per month and, in addition to the AWS support, offers more advanced monitoring, as well as a larger allotment for log data.
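
For a rough sense of how the Premium tier adds up, here is a small sketch using the article’s quoted $8-per-resource rate; the resource counts are hypothetical.

```python
# Hypothetical example of Stackdriver Premium tier costs at the article's
# quoted rate of $8 per monitored cloud resource per month.
# The resource counts below are invented for illustration.

PREMIUM_RATE = 8.00  # USD per monitored resource per month (per the article)

monitored_resources = {
    "gcp_compute_instances": 40,
    "aws_ec2_instances": 25,
    "aws_rds_databases": 5,
}

total = sum(monitored_resources.values())
print(f"Monitored resources: {total}")
print(f"Estimated Premium cost: ${total * PREMIUM_RATE:,.2f}/month")
# 70 resources * $8 = $560/month in this made-up example.
```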

In general, since the Google acquisition, Stackdriver’s feature set has expanded beyond the tool’s traditional monitoring features, such as alerts and dashboards, to now offer logging, error reporting and debugging tools to both AWS and Google users, Belcher said.

“As an AWS-only customer, your experience using Stackdriver is just as good,” he said.

Moving to a multicloud world

This cross-platform support – particularly for market leader AWS, whose public cloud revenue climbed 55% year over year in the third quarter, totaling over $3 billion – is going to become table stakes for cloud providers, explained Dave Bartoletti, principal analyst at Forrester Research.

“When you are offering a tool that is great for your platform, you’d better support AWS,” Bartoletti said. “What Google recognizes is that it would be stupid to say, ‘We’re going to release a management tool that is only good for our platform.'”

Google stands to gain from this AWS integration in other ways, too. For example, Stackdriver may eventually prompt more AWS users to evaluate Google’s homegrown data analytics tools, such as BigQuery, as a supplement to Stackdriver itself, Bartoletti said.

“It lets Google show off what else it has to offer,” he said.

While he didn’t offer any specifics, Belcher said Google will consider broadening Stackdriver to support other cloud platforms, such as Azure, and potentially on-premises deployments as well.

“There are more than enough customers on AWS and GCP that are running in some hybrid mode with some unsupported platform, so you can imagine we get requests every day to extend the support,” he said.

Annati, for one, would welcome the move.

“It would be great if Stackdriver covered it all,” he said. “That would be an easy decision for us.”


October 28, 2016  4:17 PM

Three IT nightmares that haunted cloud admins in 2016

Kathleen Casey

Cloud doesn’t hand enterprise IT teams treats all the time; in fact, it occasionally throws out a few tricks. While there are many benefits to cloud, sometimes a cloud deployment can go terribly awry, prompting real-life IT nightmares — ranging from spooky security breaches to pesky platform as a service implementations.

We asked the SearchCloudComputing Advisory Board to share the biggest cloud-related IT nightmares they faced, or saw others face, so far in 2016. Here’s a look at their tales of terror:

Bill Wilder

Halloween nightmares came ten days early this year for DNS provider Dyn, as it was hit with a massive DDoS attack. The Internet simply can’t function without reliable DNS, and most cloud applications and services outsource that to companies like Dyn. Among the parties impacted by the attack on Dyn is a “who’s who” of consumer sites, such as Twitter, Spotify and Netflix, and developer-focused cloud services, such as Amazon Web Services, Heroku and GitHub. This news comes about a month after security researcher and journalist Brian Krebs had his own web site attacked by one of the largest ever DDoS attacks, reportedly reaching staggering levels exceeding a half terabit of data per second.

Both attacks appear to have been powered by botnets drawing significant firepower from unwitting internet-connected internet of things (IoT) devices. This is truly frightening, considering that there are already billions of IoT devices in the wild, from video cameras, DVRs and door locks to refrigerators and Barbie dolls. Since internet-exposed IoT devices are easily found through specialized search engines, and IoT exploit code is readily available for download, we can be sure of one thing: we are only seeing the early wave of this new brand of DDoS attack.

Gaurav “GP” Pal

My biggest cloud computing nightmare was the first-hand experience of implementing a custom platform as a service (PaaS) on an infrastructure as a service (IaaS) platform. Many large organizations are pushing the innovation envelope in search of cloud nirvana, including hyper-automation, cloud-platform independence and container everything. Sounds great! But with the lines between IaaS, managed IaaS and PaaS constantly blurring, the path to nirvana is not a straight one. It took way longer to create the plumbing than anticipated, the platform was unable to pass security audits and getting the operational hygiene in place was challenging.

Adding to the list of woes is the lack of qualified talent with real experience in custom PaaS, given that it has been around for only a short time. On top of that, the technology foundation on the container orchestration side is constantly changing. All of this made for a ghoulish mix. Only time will tell whether a custom PaaS on an IaaS platform is a trick or a treat.

Alex Witherspoon

The trend I keep seeing repeated is off-base cost expectations, along with the risk of operating applications that weren’t architected for cloud in a private or public cloud environment that isn’t ideal for them.

Cloud environments are essentially the automated abstraction and utilization of physical resources, and public cloud charges you for that value on top of the physical servers the cloud lives on — without your input into the buying decisions. For some businesses, the public cloud of choice and its cost model align well, so the tradeoffs tabulate in their favor. Many others, Dropbox being a public example, find that public cloud quickly reaches an inflection point where it turns from a savings into an operational cost that only grows with the business and never provides the stable, controlled operational expenditure (OPEX) or capital expenditure (CAPEX) that a private cloud could. Given modern financial mechanisms that take CAPEX investments in private clouds and convert them into flexible OPEX arrangements, the financial models for private cloud are often more economically feasible, at the expense of some additional complexity in managing the private cloud. Often, though, that tradeoff is justified by the control one gains in shaping the private cloud’s architecture to align precisely with the business’s technological and economic needs.
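
Witherspoon’s inflection-point argument is, at bottom, a cumulative-cost comparison. The sketch below is a deliberately simplified model with invented figures (not Dropbox’s actual economics): a public cloud bill that grows with the business versus a private cloud’s upfront CAPEX plus a flatter OPEX.

```python
# Simplified, hypothetical cost model comparing cumulative public cloud spend
# (which grows with the business) against a private cloud's upfront CAPEX plus
# a flatter ongoing OPEX. All figures are invented for illustration.

public_monthly_start = 50_000   # month-1 public cloud bill (USD)
public_growth = 0.03            # bill grows 3% per month with the business

private_capex = 1_200_000       # upfront private cloud build-out
private_monthly_opex = 35_000   # steadier ongoing run cost

public_total, private_total = 0.0, float(private_capex)
for month in range(1, 61):  # five-year horizon
    public_total += public_monthly_start * (1 + public_growth) ** (month - 1)
    private_total += private_monthly_opex
    if public_total >= private_total:
        print(f"Break-even (inflection point) around month {month}")
        break
else:
    print("Public cloud stays cheaper over this horizon")
```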

These optimizations in cloud can be numerous, one of them being support for applications that weren’t architected for cloud. This is important to consider because not all clouds are built alike, and many public cloud providers, such as AWS, Azure and Google, suggest the minimum viable architecture is a widely distributed application that can survive random outages at any single node. Many modern applications do provide for that, but the majority of software in play today still operates on the expectation that the infrastructure underneath it will be 100% reliable, and those applications can be dangerous in a public or private cloud environment that isn’t designed for high availability.

To this end, it’s critically important to consider the risks and the return on investment (ROI) picture throughout the lifecycle of the service. Clouds of all types carry diverse ROI profiles, and being able to quantify specifically how well these offerings fit the business’s needs can avert technological and economic disaster for your business.


October 21, 2016  6:05 PM

Another week, another sign of VMware adapting to a multicloud world

Trevor Jones

VMware made waves last week by partnering with Amazon, but another big-name public cloud integration that flew under the radar this week also highlights where the company is headed as enterprises move to a multicloud strategy.

VMware rolled out a number of updates at VMworld Europe around vSphere, vSAN and vRealize that expand on its strategy of providing a common operating environment across public and private clouds. Among those updates is improved management of Microsoft Azure – a platform built by VMware’s biggest competitor in the virtualization space – providing out-of-the-box support with simplified service blueprints for multi-tier applications running on Azure.

VMware rightly recognizes that it’s a multi-cloud world among enterprise customers and Azure is one of the major players.

“We see many enterprises including Azure as one of several clouds in their catalog,” said Mary Johnston Turner, research vice president at IDC. “To be a credible enterprise multi-cloud management player today, support for Azure is just as important as support for AWS.”

vRealize, meanwhile, continues to be an important element of VMware’s broader multicloud management strategy and the focal point of its cloud management portfolio for private cloud and virtualized infrastructure automation, operations monitoring and log analytics. IDC ranks it as the top cloud systems management software on the market, based on revenue.

The new vRealize capabilities build on VMware’s Cross-Cloud Architecture for running, managing and connecting applications across environments, including Azure and AWS.

Both those efforts, though, fall short of what VMware is working on with AWS.

VMware’s planned integration with Amazon will provide bare metal servers within AWS, on which VMware will manage and sell services so customers can migrate workloads via a software-defined data center. (VMware announced a similar deal with IBM earlier this year.) In many ways, this represents the culmination of a two-year shift from trying to keep everything within its own ecosystem to trying to get its software-defined data center (SDDC) onto as many different platforms as possible.

The added Azure support on vRealize is a positive step, but even better would be something similar to the capabilities being worked on with AWS, said Cory De Arkland, senior cloud engineer at San Francisco-based Pacific Gas and Electric Co. (PG&E).

PG&E, which uses vRealize, already has to manage dev-test workloads set up in AWS, thanks to shadow IT. Extending VMware environments to Azure would be beneficial in case the utility wants to provide its developers with public cloud resources in the future, either because of special feature sets or just to pit vendors against each other on price, De Arkland said.

“It just encourages competitiveness,” he said.

Microsoft says a third of all Azure VMs use Linux. And while there isn’t a lot of crossover between vSphere and Hyper-V users, there is growing demand among customers to integrate with Azure, according to VMware.

Still, there are competitive issues that could limit the depth of any hypothetical partnership between the two companies, said Gary Chen, research manager at IDC. VMware is still treating Azure as another cloud resource it can manage, and while the company should be pursuing deeper partnerships with other cloud providers, each deal will likely be different and may not look exactly like the AWS partnership.

“It would be a stretch to see Azure running VMware software anytime soon for deeper infrastructure integration, which is what the AWS deal, IBM and the rest of VCAN [the vCloud Air Network] has required,” Chen said.

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at tjones@techtarget.com.


October 3, 2016  12:48 PM

Cloud computing employment trends shift from generalists to specialists

Kathleen Casey

Cloud computing has created a plethora of new jobs in the IT industry and shows no signs of slowing down. But what are companies looking for in a potential cloud employee? Job hunters face the difficult choice of zeroing in on a certain cloud service or vendor, or becoming a jack of all trades.

We asked the SearchCloudComputing Advisory Board what they consider to be the biggest cloud computing employment trends — and what employers are looking for. Here’s a look at their answers.

Alex Witherspoon

[In] the older enterprise IT model, there was a drive toward specialists [who] have a deep understanding of the complex systems, like modern storage, servers and networking, needed to operate software. The strength in this is that one can be intimately involved with all elements of the platform. Many of those systems are still there, but in a modern cloud, those infrastructure problems and specialties are abstracted away from the end user, so a modern company can focus much more heavily on the software, the customer and the business.

This trend has led to an Agile-focused mindset — one that is much more concerned with technology as an operating cost and a series of capabilities. This could be called DevOps, but it effectively removes the complexity of infrastructure [from] development and operations teams and turns the focus much more heavily to the software and the design of software platforms, rather than the infrastructure itself. Jobs will still be split between centralized architects and decentralized, general jacks-of-all-trades, such as those found in site reliability engineering teams, but they will commonly be focused on software architecture, rather than infrastructure architecture.

Gaurav “GP” Pal

Given the breadth and scope of the cloud computing marketplace and offerings, we are starting to see specializations by certain lines of services and expertise. For example, until a couple of years ago, we used to have Amazon Web Services (AWS) Solution Architects, but given the scope of services, a Solution Architect can’t cover every topic and must specialize.

We are starting to see specializations, or competencies as they are more commonly called, around DevOps, security, big data and managed services. A further subset of specialization is developing around regulated markets, specifically healthcare (HIPAA), U.S. public sector (FedRAMP), commercial (PCI) and financial services (FFIEC).

Bill Wilder

I expect specialization in cloud computing roles to evolve along with the cloud platforms. The big public cloud platform vendors are supporting common industry approaches, such as the use of containers through vendors such as Docker, and VM configuration tools, such as Chef and Puppet. While valuable in the cloud, these skills are infrastructure as a service (IaaS) focused. You can put some VMs in the cloud so that the infrastructure looks and acts like the on-premises world, except with incredible convenience around scale, pay-as-you-go pricing, great automation support and other aspects. Many of these skills are not significantly tied to any platform, cloud or not, but they are certainly important for the IaaS style of cloud usage, since a high degree of automation is the norm.

But the trend is that the big public cloud platform players are driving services for easy access to databases, messaging, security, scaling, key management, backup management and on and on. The catch is that even though the services are covering a lot of the same ground, they aren’t used in the same ways, and sophisticated use of these services begins to require specialization. For example, AWS Lambda and Azure Functions both support serverless compute models, but they are part of different ecosystems and there is a learning curve for becoming an expert in that particular feature and the broader set of functionality it sits within. I expect skills for cloud platform expertise will increasingly diverge because there is so much active investment and innovation across Amazon, Microsoft and Google.
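
To make the divergence concrete, the sketch below wraps the same trivial logic for the two serverless platforms Wilder names. The handler signatures follow each platform’s published Python model, but the example itself is illustrative, not drawn from either vendor’s documentation.

```python
# The same one-line "greeting" logic wrapped for two serverless platforms.
# Illustrative sketch only: real deployments also differ in packaging,
# triggers, identity/roles and tooling, which is where specialization creeps in.

import azure.functions as func  # Azure Functions Python library


def greet(name: str) -> str:
    return f"Hello, {name}"


# AWS Lambda entry point: a plain dict event plus a context object.
def lambda_handler(event, context):
    return {"statusCode": 200, "body": greet(event.get("name", "world"))}


# Azure Functions entry point: an HttpRequest in, an HttpResponse out.
def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(greet(name), status_code=200)
```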


September 9, 2016  6:43 PM

Can Virtustream succeed as a niche public cloud?

Trevor Jones

Virtustream – not VMware – is the lead public cloud infrastructure provider for the new Dell Technologies, but is its narrow focus enough to sway IT pros?

The company, which officially became part of Dell this week with the close of the protracted EMC acquisition, is taking a markedly different approach to a public cloud market where much of the focus and buzz is around the net-new. Instead, Virtustream is focusing exclusively on the less sexy legacy systems that still make up the vast majority of enterprise IT.

Virtustream tailors its public cloud to mission-critical and highly regulated applications, such as SAP and other ERP systems. The majority of its workloads are brownfield, lift-and-shift applications, and there’s often little re-architecting needed for this transition, said Kevin Reid, president and CTO of Virtustream, who spoke to TechTarget at VMworld in advance of the completion of the Dell deal.

It may be a niche in terms of the services offered by the typical public cloud provider, but it’s a massive one for potential conversions — industry observers estimate it to be a multibillion-dollar market that’s just starting to ramp up.

Going solely after the enterprise market seems to be a good model for Virtustream, said Carl Brooks, an analyst with 451 Research, in New York.

“They have some specialties but mostly they give a highly defined level of service for managed services around the infrastructure,” he said. “Virtustream does it better than any of their customers do and usually by a pretty wide margin.”

Of course, Virtustream, with its circumscribed approach, isn’t alone going after this market segment. Amazon has pushed hard to get customers to offload some of their more burdensome IT assets on to Amazon Web Services (AWS), while IBM and Microsoft already have relationships with customers in this space. Even Oracle is seen by some as a dark horse that could push its way into the market.

Pitch to IT pros: We’re better at this than you

Part of the argument for the public cloud is the ability to build redundancy into the application and scale as needed, and proponents argue that the lift-and-shift model never provides the full benefits of cloud computing. Virtustream freely admits it’s probably not the best place for new applications.

“When doing a greenfield app, let me look at a public cloud model and go cloud-native, build resiliency and scale; but if I’m going to work with something I’ve invested tens or hundreds of millions of dollars into, there’s no justifiable ROI to rewriting that,” Reid said.

What often happens is enterprises make edge applications cloud-native while continuing to run the nucleus systems as stateful applications because of the investments on-premises and the lack of internal skills to rewrite applications to the cloud, Reid said.

Putting complex systems in standard public clouds involves extra integration engineering to make them run properly, Virtustream argues, because most of that infrastructure is standardized and commoditized to provide the lowest cost of entry. Virtustream is increasingly coming up against AWS and other large-scale public clouds, and its pricing is comparable when total cost of ownership is taken into account, Reid said.

So the question becomes: why move your on-premises workloads at all if they aren’t going to be rewritten? Virtustream’s response is that it’s better at running infrastructure than you are. That entails better infrastructure utilization, improved application management via provisioning and automation, and built-in integration for security and compliance.

Virtustream and its place inside Dell

Virtustream has evolved into the primary cloud services arm of Dell Technologies, but that strategy didn’t come about smoothly. Last fall, Virtustream’s and VMware’s cloud assets were to be combined and reshuffled, with the infrastructure components handled by Virtustream and the software pieces overseen by VMware. That plan was ultimately scuttled amid investor concerns, and the two remain separate companies, though Virtustream does plan to add NSX support later this year to better connect to VMware environments.

As a result of the failed merger, Virtustream’s strategy remains largely unchanged, while VMware has tweaked its roadmap to focus on software delivery and connectivity to other cloud providers. Combine that with the private cloud offering from Dell and platform-as-a-service provider Pivotal, also brought over through the EMC Federation, and Dell Technologies has an amalgam of cloud services targeted at enterprises with a hefty share of legacy systems. It’s a model that runs contrary to how larger providers such as AWS are extending beyond infrastructure to a variety of higher-level services, all accessible under one umbrella.

“If they can leverage the SAP workload play and somehow do something [with VMware] that is less tightly coupled that doesn’t raise question around CapEx and opening more data centers, there’s a potential synergy there,” said Sid Nag, research director at Gartner.

Recombining all the sensibly related pieces of the federation would have provided a more materially significant offering as opposed to what is now essentially a service catalogue, Brooks said.

But EMC has served as a good pipeline for Virtustream business, and ideally the various business units under Dell will be able to have some level of connectivity to the other assets, which have such a huge footprint within IT, Brooks said.

“Overall I don’t know if that gives them anything flat-out remarkable,” Brooks said. “There’s not any world-beaters here but you can definitely say that to the extent that they can reduce that friction, it never hurts if it’s easier to get gear and customers.”

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at tjones@techtarget.com.


August 30, 2016  6:22 PM

Google’s SQL Server support is latest bid to win enterprise love

Kristin Knapp

In its latest attempt to shake perceptions that it’s not an enterprise-grade IaaS option, Google is cozying up again to Windows workloads in the cloud.

Starting this month, Google users can launch Google Compute Engine VM images preinstalled with Microsoft SQL Server. Google now offers beta support for three versions of the SQL Server relational database management system: SQL Server Express 2016; SQL Server Standard 2012, 2014 and 2016; and SQL Server Web 2016. The cloud provider said support for SQL Server Enterprise Edition 2012, 2014 and 2016 is “coming soon.”

Organizations could technically run SQL Server workloads on Google Compute Engine before, by spinning up a VM and then installing and managing SQL Server themselves, explained Simon Margolis, director of cloud platform at SADA Systems, a cloud and IT consulting shop and Google partner based in North Hollywood, Calif.

With this expanded support from Google, however, those SQL Server images can now come preinstalled, and with a broader range of baked-in administrative capabilities. That shifts much of the SQL Server deployment, management and support responsibilities away from users and onto Google.

“It brings a good deal of peace of mind that otherwise didn’t exist,” Margolis said.

The move also expands the licensing options for running SQL Server on Google cloud. Since 2014, organizations with license mobility, through the Microsoft Software Assurance program, could move existing SQL Server licenses to a Windows Server instance running on Google, and then manage those licenses themselves.

Now, users can also choose to spin up new SQL Server databases on Google and pay as they go, based on Google’s per-minute billing cycles, just as they would for other Google cloud resources.

“If I run my instance for 90 minutes, I’m not paying the same as I would if I just bought a license from Microsoft for a server I have physically,” Margolis said.
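
Margolis’ point about pay-as-you-go licensing is simple proration. The sketch below uses invented rates (Google’s actual SQL Server image pricing varies by edition and machine type) just to show how 90 minutes of per-minute billing compares with buying a license outright.

```python
# Hypothetical comparison of per-minute, pay-as-you-go SQL Server image pricing
# versus a perpetual license. The rates below are invented for illustration and
# are not Google's or Microsoft's actual prices.

hourly_rate = 0.50    # assumed combined VM + SQL Server image rate (USD/hour)
minutes_used = 90     # the 90-minute example from the article

pay_as_you_go = hourly_rate / 60 * minutes_used
print(f"90 minutes of usage: ${pay_as_you_go:.2f}")   # $0.75 at this assumed rate

perpetual_license = 3_000   # invented upfront license figure for comparison
print(f"Perpetual license upfront: ${perpetual_license:,}")
```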

Google eyes the enterprise

Google’s expanded SQL Server support isn’t a major surprise. Over the past year, the cloud provider has made a series of moves intended to grow its enterprise appeal. For example, it rolled out a number of new cloud security features, including identity and access management (IAM) and the ability for users to bring their own encryption keys, as well as new database and big data services.

The expanded SQL Server support is another “paving stone” for Google to more seamlessly bridge corporate data centers to its cloud, said Dave Bartoletti, principal analyst at Forrester Research, an analyst firm in Cambridge, Mass.

“[This] removes the friction for companies who just want to get their database workloads into the cloud, often before they are ready to modernize them,” Bartoletti said.

But Google’s attempts to win enterprise mindshare haven’t exactly gone off without a hitch (or two). The provider has grappled with a series of service outages and disruptions this year that renewed questions about its reliability, and the platform still trails rivals Amazon Web Services and Microsoft Azure in the public cloud market.

While the vendor’s cloud portfolio has evolved significantly this year, gaining ground in the enterprise is no longer so much about Google’s cloud products as about its performance, support and partner base.

“There are no major gaps in Google’s offerings in terms of developer and infrastructure services,” Bartoletti said. “It’s now about execution with enterprise customers and building out its partner ecosystem.”

For Google, only time will tell if that’s easier said than done.


July 29, 2016  6:06 PM

In the fine print: Dissecting a cloud service agreement

Kristin Knapp

Without a cloud service agreement, public cloud users would be in the dark about crucial factors related to cloud performance, security and data privacy.

Like any contract, though, cloud service agreements aren’t the easiest documents to digest; their length and complexity can tempt even the most diligent cloud users to skim, sign and be done. This complexity has only increased as the industry itself has evolved, to reflect the move toward hybrid cloud, as well as new data privacy requirements.

Failing to scrutinize a cloud service agreement, however, is a major mistake, said Claude Baudoin, owner of cébé IT & Knowledge Management, an IT consulting firm in Austin, Texas.

Baudoin and other members of the Cloud Standards Customer Council (CSCC), a cloud user advocacy group, hosted a webinar this week focused on evaluating cloud service agreements. The overarching message: Don’t ignore the fine print.

The four pillars of a cloud service agreement

A public cloud service agreement, Baudoin explained, is generally broken up into four parts:

  1. Customer agreement document – the overarching, “umbrella” document that the user signs
  2. Acceptable use policy – a document that defines what the user is allowed, and not allowed, to do as a tenant of a public cloud service
  3. Service-level agreement – a document that defines the public cloud provider’s commitment to the user, and the consequences when the provider fails to meet those commitments
  4. Privacy policy – a document that outlines what the public cloud provider can, and can’t, do with the personally identifiable information (PII) that the cloud user might share in the process of contracting the service

The language of the agreement is usually spread between these four documents, and often the initial version the provider proposes won’t match the user’s expectations, Baudoin said. “That is why you cannot just close your eyes and sign on the bottom line. You have to scrutinize this language,” he said.

Data privacy and residency terms may not be presented in a single, well-identified place, for example. Instead, those terms could be scattered throughout these four documents, particularly throughout the acceptable use policy, the service level agreement and the privacy policy.

When evaluating the data privacy and residency terms of an agreement, remember that most security clauses are intended to protect the provider from any potential threats that users pose to the provider, the platform itself and to other users of the platform. In other words, most of the language reflects what the cloud users — not the provider — can and can’t do.

Also make sure the agreement outlines, in detail, the process that occurs in the event of a cloud security breach. Terms should spell out how users will be notified of a breach, how they will be protected and how they will be compensated for data loss or corruption.

“This is where you really want these [terms] to be well-defined,” Baudoin said.

Agreements vary based on deployment model

Terms of a cloud service agreement will vary greatly depending on the deployment model – infrastructure as a service (IaaS), platform as a service (PaaS) or software as a service (SaaS) – and users should look for specific criteria in each.

With a public IaaS agreement, the service revolves around the provider offering IT infrastructure resources – such as compute, networking and storage – and then supporting and securing those resources. The user is responsible for most everything else, said Mike Edwards, cloud computing standards expert and Bluemix PaaS evangelist at IBM, and a CSCC member.

“Largely, the customer is … responsible for all the components that are going to run on that system – your applications, your data, the operating systems you may install, database software, whatever,” Edwards said during the webinar.

This means, in general, commitments made by the IaaS provider to a user will be somewhat limited.

With PaaS, users receive access to a hosted application platform and development services. When navigating a PaaS agreement, it can be difficult to determine which services are native to the PaaS environment, and which aren’t. For example, in most cases, a database service will be part of the fundamental PaaS offering. Other types of services that your application might access – such as those related to social media – might actually direct your apps to services outside the platform.

Of course, the commitments and expectations for services native to the PaaS offering will differ from those outside it. Users should ask their PaaS provider for a complete list or catalog to identify which services fall into which category.

Lastly, with SaaS, users have access to a “complete” application, with all the middleware, database, storage, compute and other associated components, Edwards said. In this case, the cloud agreement will be the most extensive.

“All of these things are really the responsibility of the provider in a software-as-a-service environment,” he said.

However, if the SaaS application deals with personal data – such as a hosted HR application – make sure the privacy statements and data protection policies are clearly defined, as the user is still responsible for the protection of that data.

Negotiation options

When negotiating better terms in a cloud service agreement, there is generally less flexibility with a “bucket” cloud service, such as public IaaS, because the provider expects to give “one-size-fits-all” terms, Baudoin said.

Still, that doesn’t mean there’s no hope in scoring more favorable terms. In most cases, larger cloud users have the most negotiating power, but all cloud users could benefit in the long run.

“Smaller customers are not totally without recourse or help,” Baudoin said. “Over time, if a larger customer demands and obtains certain changes in the terms, that will trickle down to all the customers of this particular cloud service provider.”


July 28, 2016  10:02 PM

Latest Xen bug has limited impact on cloud users

Trevor Jones

Another Xen bug generated some headlines this week, but it’s much ado about nothing for most cloud consumers.

A bug in the Xen open-source hypervisor, which is popular among cloud infrastructure vendors from Amazon Web Services (AWS) to Rackspace to IBM SoftLayer, was publicly identified this week. It allows malicious para-virtualized guests to escalate their privileges to those of the host. But unlike past Xen vulnerabilities, this one involved minimal reboots, with most vendors avoiding them altogether.

Two reasons this bug should cause minimal problems for cloud providers are that the required access level isn’t trivial and the patch is “absurdly simple,” said Carl Brooks, an analyst with 451 Research. In truth, enterprise IT shops face a much bigger threat from someone opening a bad email or clicking on the wrong link.

“Some providers can be affected and some may well be … but if this is a problem for you as a provider, you have more systemic issues around security than this particular exploit,” Brooks said.

Cloud vendors also are generally given advanced notice in these situations so they can address it internally before the vulnerability is made public. And they’ve had plenty of chances to learn how to deal with Xen problems — there have been more than a dozen publicized patches for it in 2016 alone.

Cloud providers have learned to dodge bullets

Xen bugs have dogged AWS in the past, requiring roughly 10% of its instances to be rebooted as recently as October 2014. The last time a major Xen bug came to light in February 2015, however, AWS limited the impact to only 0.1% of EC2 instances.

This time, AWS managed to avoid a reboot entirely, telling customers that data and instances are not affected and no customer action is required. Amazon declined to comment on how it addressed this patch and managed to avoid the reboot.

It’s possible that Amazon is so far off the trunk of the open-source project that this incident didn’t cause any major impact to AWS in the first place, Brooks said. But for customers that are still worried about the bug, the simplest solution is to switch from para-virtualization to full virtualization.

Other Xen-dependent cloud platforms appear to be taking a cue from Amazon’s improvements in addressing these patches to avoid downtime. Details are scarce, but it’s believed some vendors have relied on live migration, while AWS is among those suspected of using hot patching.

Rackspace, which also declined to comment, issued a statement telling customers it had no plans to perform any reboots and that no customer action was needed. That’s a reversal from 2015, when it did need to do a Xen-related reboot, though even then it communicated with customers, giving them advance notice of when their instances would be shut down.

The only major cloud provider that appeared to have significant downtime from this Xen bug was IBM. CloudHarmony, which tracks uptime for a wide range of cloud providers, noticed all 20 of its IBM SoftLayer status VMs were rebooted over the weekend with five to 15 minutes of downtime for each.

IBM, which did not return requests for comment, does not appear to have a public-facing status page for maintenance notifications — its Twitter feed for SoftLayer notifications has been updated once in the past year, to proclaim it is “only used if there is an emergency situation.”

Cloud hosting provider Linode has had to perform maintenance on its servers that rely on Xen. Since the last major bug in early 2015, Linode has moved its newer servers to the KVM hypervisor. In its security notice, it told customers that anyone wanting to move from the legacy Xen virtual machines to the KVM virtual machines would avoid the reboot.

It’s unclear whether Linode’s move to KVM had anything to do with Xen’s spate of patches over the past 18 months. The company declined to comment.


July 27, 2016  3:18 PM

Bust these common cloud computing myths

Kathleen Casey

For most enterprises, it is no longer a question of if they will move to the cloud, but when.

The public cloud services market has steadily grown over the past few years. In 2016 alone, it will increase 16.5% to $204 billion, according to analyst firm Gartner. But even with cloud’s popularity, there’s still a number of cloud computing myths and misconceptions that steer some enterprises in the wrong direction.

We asked the SearchCloudComputing Advisory Board what they consider to be the most common cloud computing myths that influence enterprises’ decisions – and why those misconceptions exist. Here’s a look at their answers.

Christopher Wilder

The first misconception of cloud computing is that it’s a business strategy. It’s not. Cloud has evolved from competitive advantage to competitive parity. However, organizations need to be smart about what they put in the cloud, and what they keep on-premises. I anticipate in the next three years, over 80% of businesses will have at least one application that resides in the cloud.

Another misconception is that cloud will save you money by moving IT costs from Capex to Opex. The upfront costs of moving to the cloud tend to be smaller, but the long-term costs are normally on par with in-house infrastructure costs. In the next two years, cloud deployment costs, especially within the public cloud providers, will equal, if not exceed, on-premises IT deployment costs.

Finally, another misconception is the ease of migration to the cloud from traditional on-premises environments. Although it’s getting better, the lack of viable cloud migration and automation tools makes it more difficult for companies to move seamlessly. These tools have traditionally been very difficult to use and require specific, expensive expertise. Furthermore, moving applications to the cloud also means you must move all integration and supporting elements; if you decide to change providers, that can lead to both a contractual and technical quagmire.

Bill Wilder

One myth is that running applications in the public cloud is less expensive than the on-premises equivalent. Let’s consider a simple scenario: your company is running a line of business (LoB) web application on-premises and is considering moving it to the public cloud, such as Microsoft Azure or Amazon Web Services (AWS).

A LoB application typically requires servers, disk, and database resources — and corresponding staff to install, tune and patch databases, servers and more. A lift and shift approach could move this exact architecture to the cloud. The architecture would map cleanly to cloud features, such as VMs and storage, but you will still need the same staff expertise — plus more, since you’ve added new skills to work with your public cloud platform. There are many valid reasons to move to the cloud with a lift and shift approach, but if cost is the only driver, this configuration in isolation may be more expensive than on-premises.

Now consider a more cloud-native approach. Instead of running your own database on a VM, you choose a provider’s managed database with lower operational complexity. You deploy your web code on platform as a service, further reducing operational complexity and supporting fast deployments. Your LoB web application may be nearly idle overnight and on weekends, so you scale resource usage to match the need at any given time — without any downtime. By taking advantage of these and other efficiencies available in the public cloud, this configuration can be highly cost-efficient.

And the lesson here is that cost efficiency in the cloud requires effort.
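
A back-of-the-envelope model makes Wilder’s point concrete. All figures below are invented; the comparison is between paying for an always-on, lift-and-shift VM and paying only for the capacity a cloud-native deployment actually needs.

```python
# Invented numbers illustrating Wilder's lift-and-shift vs. cloud-native cost
# argument: an always-on VM versus capacity scaled to business hours.

vm_hourly = 0.40            # assumed hourly rate for the lift-and-shift VM
hours_per_month = 730       # average hours in a month

lift_and_shift = vm_hourly * hours_per_month  # runs 24x7

# Cloud-native: full capacity during ~12 business hours on ~22 weekdays,
# scaled to one quarter capacity the rest of the time.
business_hours = 12 * 22
off_hours = hours_per_month - business_hours
cloud_native = vm_hourly * business_hours + (vm_hourly * 0.25) * off_hours

print(f"Always-on lift and shift: ${lift_and_shift:.2f}/month")
print(f"Scaled cloud-native:      ${cloud_native:.2f}/month")
# The savings only materialize because someone did the work to make the app
# scale down, which is the "effort" Wilder describes.
```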

Gaurav Pal

Many IT organizations, especially in large enterprises, continue to believe that commercial cloud platforms, such as AWS and Azure, are not secure or cost-effective alternatives for providing hosting and infrastructure services. We continue to see security concerns cited as the top reason for not migrating to a more cost-efficient and flexible model of enterprise service delivery. However, this is largely a myth that partially stems from a lack of understanding of the new security model that cloud platforms provide. The U.S. federal government recently announced that three cloud platforms meet the Federal Risk and Authorization Management Program’s High Baseline Requirements, which include a comprehensive set of controls to protect sensitive data.

Progressive organizations like Capital One, GE and many others, including large public sector agencies, continue to realize the business benefits of agility and better security through the tools and services provided through standardized interfaces and APIs. I fully expect more large organizations to pivot and direct their investment dollars to business-oriented services, such as machine learning and data science, rather than build data centers or private clouds.

Alex Witherspoon

I think there’s a misconception that cloud is universally cheaper and better. I like cloud, a lot. It is where I am putting a lot of my investments and my business … but every time I would talk about a potential business use case to [a vendor] it wasn’t, “Hey, deploy this thing to solve that,” instead it’s, “Well, get this recipe of seven products we sell, put it together, engineer it and then see how magical it can be.” To do that, I am going to have to invest a whole bunch of engineering effort to actually make my application work that way.

Cloud can be a groundbreaker; it can make you a winner in your business, but be very specific about your intentions. As cloud providers try to differentiate themselves, trying to understand the problem you [need] to solve and the best tool to solve it is going to be really hard and difficult in the coming years. Do you go with a generalist, like AWS, that just tries to be good at everything, or are you going to find someone who specializes in [specific cloud features]?

We really [need to] respect cloud for what it is: a series of hosting providers with different unique qualities that need to be evaluated. I think some folks are just really quick to jump right into their one favorite cloud and just live there. I think folks would do well to give a little more credence to what problems they are really trying to solve, and what cloud providers are trying to specialize in. AWS is obviously heading in many different directions, but some of these smaller providers have a specialty. That’s where they are spending their money, which means that’s where you’re going to get the best return on your investment as a customer.

