The Troposphere


May 8, 2017  2:39 PM

A customer success story, with a twist

James Montgomery

One packed session at last week’s Red Hat Summit in Boston was a case study on an internally built integration of Ansible with Red Hat CloudForms, to orchestrate a hybrid cloud environment with VMware and AWS. Ansible is an especially useful tool to deploy Puppet agents into the environment, noted Phil Avery, senior Unix engineer at BJ’s Wholesale Club.

“I was pretty chuffed about this,” he said.

Then, on the first day of the conference, he heard for the first time about Red Hat’s brand-new CloudForms 4.5 and its native Ansible integration. So he redid his presentation to play down the novelty of his own Ansible work.

Avery hadn’t had an inside track on Red Hat’s CloudForms development, but Red Hat had worked with him for “a long time,” including a visit from top management, confirmed co-presenter Matt St. Onge, senior solutions architect at Red Hat. “I’d like to think that Phil is an upstream contributor,” he quipped during the session.

Welcome to business models at the intersection of open source and “-as-a-service” — where vendors have less insight into what customers are doing, and vice versa.

“Vendors want to build things for hundreds and thousands of customers, so it is a fine balance of building something that is adopted and then shows traction,” said Holger Mueller, principal analyst and vice president at Constellation Research, Inc. in Cupertino, CA.

Business models that attempt to tap the upswell of interest in open source and as-a-service offerings, however, don’t give vendors as much customer visibility — the SaaS model in particular has made older channels of customer insight outdated, Mueller said. Hence the existence of customer advisory boards and special interest groups.

Clever customers that conjure workarounds to solve a pressing need aren’t rare, of course — but especially in the world of open source and as-a-service, birds of a feather tend to flock together.

“This could have been my demo” of Ansible integration for hybrid cloud orchestration, crowed one engineer at a global bank headquartered in the Netherlands, after the session. “Why haven’t I seen this?” said another, a technical architect at a U.S.-based financial services company.

For his part, Avery seems unfazed by how things turned out. He’s using CloudForms 4.2 but will look at version 4.5 and likely upgrade — after all, Ansible integration is native now. Still, as he continues to figure out what improvements help him do his job, communication lines need to improve: “We need to get back on that wavelength,” he said.

April 7, 2017  6:03 PM

With vCloud Air sale, VMware clears cloud computing path

Ed Scannell

With the sale of its long-languishing vCloud Air offering this week, VMware found a way to step away from a product whose future had been uncertain for quite some time.

The company sold its vCloud Air business to OVH, Europe’s largest cloud provider, for an undisclosed sum, handing off its vCloud Air operations, sales team and data centers to add to OVH’s existing cloud services business.

But VMware isn’t exactly washing its hands of the product. The company will continue to direct research and development for vCloud Air, supplying the technology to OVH – meaning VMware still wants to control the technical direction of the product. It also will assist OVH with various go-to-market strategies, and jointly support VMware users as they transfer their cloud operations to OVH’s 20 data centers spread across 17 countries.

The sale of vCloud Air should lift the last veil of mist that has shrouded VMware’s cloud computing strategy for years. VMware first talked about its vCloud initiative in 2008, and six years later re-launched the product as vCloud Air, a hybrid IaaS offering for its vSphere users. It never gained any measurable traction among IT shops, getting swallowed up by a number of competitors, most notably AWS and Microsoft.

The company quickly narrowed its early ambitions for vCloud Air to a few specific areas, such as disaster recovery, acknowledged Raghu Raghuram, VMware’s chief operating officer for products and cloud services, in a conference call to discuss the deal.

Further obscuring VMware’s cloud strategy was EMC’s $1.2 billion purchase of Virtustream in 2015, a company whose offering had every appearance of being a competitor to vCloud Air. This froze the purchasing decisions of would-be buyers of vCloud Air, who waited to see how EMC-VMware would position the two offerings.

Even a proposed joint venture between VMware and EMC, called the Virtustream Cloud Services Business, an attempt to deliver a more cohesive technical strategy, collapsed when VMware pulled out of the deal. Dell’s acquisition of EMC, and by extension VMware, didn’t do much to clarify what direction the company’s cloud computing strategy would take.

But last year VMware realized the level of competition it was up against with AWS and made peace with the cloud giant, signing a deal that makes it easier for corporate shops to run VMware both on their own servers and on servers in AWS’ public cloud. Announced last October and due in mid-2017, the upcoming product, called VMware Cloud on AWS, will let users run applications across vSphere-based private, hybrid and public clouds.

With the sale of vCloud Air, the company removes another distraction for both itself and its customers. Perhaps now the company can focus fully on its ambitious cross-cloud architecture, announced at VMworld last August, which promises to help users manage and connect applications across multiple clouds. VMware delivered those offerings late last year, but the products haven’t created much buzz since.

VMware officials, of course, don’t see the sale as the removal of an obstacle, but rather “the next step in vCloud Air’s evolution,” according to CEO Pat Gelsinger, in a prepared statement. He added the deal is a “win” for users because it presents them with greater choice — meaning they can now choose to migrate to OVH’s data centers, which both companies claim can deliver better performance.

Hmm, well that’s an interesting spin. But time will tell if this optimism has any basis in reality.

After the sale is completed, which should be sometime this quarter, OVH will run the service under the name vCloud Air Powered by OVH. Whether it is wise to keep the vCloud brand, given the product’s less-than-stellar success, again remains to be seen.

Ed Scannell is a senior executive editor with TechTarget. Contact him at escannell@techtarget.com.


March 20, 2017  2:48 PM

Awareness of shared-responsibility model is critical to cloud success

Trevor Jones

When companies move to the cloud, it’s paramount that they know where the provider’s security role ends and where the customer’s begins.

The shared-responsibility model is one of the fundamental underpinnings of a successful public cloud deployment. It requires vigilance by the cloud provider and customer—but in different ways. Amazon Web Services (AWS), which developed the philosophy as it ushered in public cloud, describes it succinctly as knowing the difference between security in the cloud versus the security of the cloud.

And that model, which can be radically different from how organizations are used to securing their own data centers, often creates a disconnect for newer cloud customers.

“Many organizations are not asking the right question,” said Ananda Rajagopal, vice president of products at Gigamon, a network-monitoring company based in Santa Clara, Calif. “The right question is not, ‘Is the cloud secure?’ It’s, ‘Is the cloud being used securely?'”

And that’s a change from how enterprises are used to operating behind the firewall, said Abhi Dugar, research director at IDC. The security of the cloud refers to all the underlying hardware and software:

  • compute, storage and networking
  • AWS global infrastructure

That leaves everything else—including the configuration of those foundational services—in the hands of the customer:

  • customer data
  • apps and identity and access management
  • operating system patches
  • network and firewall configuration
  • data and network encryption

Public cloud vendors and third-party vendors offer services to assist in these areas, but it’s ultimately up to the customers to set policies and track things.
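To make that division of labor concrete, here is a minimal sketch (not from the article, and not any vendor’s official tooling) of the kind of customer-side spot check the model implies, written in Python against AWS’s boto3 SDK. It flags two items squarely on the customer’s side of the line: S3 buckets with no default encryption and security group rules open to the entire internet. Running it as a standalone script, and the specific checks chosen, are illustrative assumptions.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    # Data encryption is the customer's responsibility: flag buckets
    # that have no default server-side encryption configured.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                print(f"S3 bucket without default encryption: {name}")
            else:
                raise

    # Network and firewall configuration is also the customer's responsibility:
    # flag security group rules that are open to the entire internet.
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    port = rule.get("FromPort", "all")
                    print(f"Security group {group['GroupId']} open to 0.0.0.0/0 on port {port}")

The provider keeps the underlying services running; whether checks like these pass is entirely on the customer.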

The result is a balancing act, said Jason Cradit, senior director of technology at TRC Companies, an engineering and consulting firm for the oil and gas industry. TRC, which uses AWS as its primary public cloud provider, turns to companies like Sumo Logic and Trend Micro to help segregate duties and fill the gaps. And it also does its part to ensure it and its partners are operating securely.

“Even though it’s a shared responsibility, I still feel like with all my workloads I have to be aware and checking [that they] do their part, which I’m sure they are,” Cradit said. “If we’re going to put our critical infrastructure out there, we have to live up to standards on our side as much as we can.”

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


December 16, 2016  7:01 PM

Plan to kill Cisco public cloud highlights the investment needed to compete

Trevor Jones

The graveyard of public clouds is littered with traditional IT vendors, and it’s about to get a bit more crowded.

Cisco has confirmed a report by The Register that it will shut down its Cisco Intercloud Services public cloud early next year. The company rolled out Intercloud in 2014 with plans to spend $1 billion to create a global interconnection among data center nodes targeted at IoT and software as a service offerings.

The networking giant never hitched its strategy to being a pure infrastructure as a service provider, instead focusing on a hybrid model based on its Intercloud Fabric. The goal was to connect to other cloud providers, both public and private. Those disparate environments could then be coupled with its soon-to-be shuttered OpenStack-based public cloud, which includes a collection of compute, storage and networking.

“The end of Cisco’s Intercloud public cloud is no surprise,” said Dave Bartoletti, principal analyst at Forrester. “We’re long past the time when any vendor can construct a public cloud from some key technology bits, some infrastructure, and a whole mess of partners.”

Cisco will help customers migrate existing workloads off the platform. In a statement, the company indicated it expects no “material customer issues as a result of the transition” – a possible indication of the limited customer base using the service. Cisco pledged to continue to act as a connector for hybrid environments despite the dissolution of Intercloud Services.

Cisco is hardly the first big-name vendor to enter this space with a bang and exit with a whimper. AT&T, Dell, HPE — twice — and Verizon all planned to be major players only to later back out. Companies such as Rackspace and VMware still operate public clouds but have deemphasized those services and reconfigured their cloud strategy around partnerships with market leaders.

Of course, legacy vendors are not inherently denied success in the public cloud, though clearly the transition to an on-demand model involves some growing pains. Microsoft Azure is the closest rival to Amazon Web Services (AWS) after some early struggles. IBM hasn’t found the success it likely expected when it bought bare metal provider SoftLayer, but it now has some buzz around Watson and some of its higher-level services. Even Oracle, which famously derided cloud years ago, is seen as a dark horse by some after it spent years on a rebuilt public cloud.

To compete in the public cloud means a massive commitment to resources. AWS, which essentially created the notion of public cloud infrastructure a decade ago and still holds a sizable lead over its nearest competitors, says it adds enough server capacity every day to accommodate the entire Amazon.com data center demand from 2005. Google says it spent $27 billion over the past three years to build Google Cloud Platform — and is still seen as a distant third in the market.

Public cloud also has become much more than just commodity VMs. Providers continue to extend infrastructure and development tools. AWS alone has 92 unique services for customers.

“We don’t expect any new global public clouds to emerge anytime soon,” Bartoletti said. “The barriers to entry are way too high.”

Intercloud won’t be alone in its public flogging on the way to the scrap heap, but high-profile public cloud obits will become fewer and farther between in 2017 and beyond — simply because there’s no room left to try and fail.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


November 15, 2016  4:30 PM

Google cloud consulting service a two-way street

Trevor Jones

Google received plenty of attention when it reshuffled its various cloud services under one business-friendly umbrella, but tucked within that news was a move that also could pay big dividends down the road.

The rebranded Google Cloud pulls together various business units, including Google Cloud Platform (GCP), the renamed G Suite set of apps, machine learning tools and APIs and any Google devices that connect to the cloud. Google also launched a consulting services program called Customer Reliability Engineering, which may have an outsized impact compared to the relatively few customers that will ever get to participate in it.

Customer Reliability Engineering isn’t a typical professional services contract in which a vendor guides its customer through the various IT operations processes for a fee, nor is it aimed at partnering with a forward-leaning company to develop new features. Instead, this is focused squarely on ensuring reliability — and perhaps most notably, there’s no charge for participating.

The reliability focus is not on the platform, per se, but rather the customers’ applications that are run on the platform. It’s a response to uncertainty about how those applications will behave in these new environments, and the fact that IT operations teams are no longer in the war room making decisions when things go awry.

“It’s easy to feel at 3 in the morning that the platform you’re running on doesn’t care as much as you do because you’re one of some larger number,” said Dave Rensin, director of the Customer Reliability Engineering initiative.

Here’s the idea behind the CRE program: a team of Google engineers shares responsibility for the uptime and health of a customer’s system, including service level objectives, monitoring and paging. They inspect all elements of an application to identify gaps and determine the best ways to move from four nines of availability to five or six.
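For a sense of what those targets mean in practice, here is a back-of-the-envelope sketch (an illustration, not Google’s tooling) of the downtime each level of “nines” allows, and of how an error budget, a concept the program leans on, gets consumed. The 30-day window and the three-minute outage are made-up figures.

    MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200

    # Allowed downtime per 30-day window at each availability target.
    for target in (0.999, 0.9999, 0.99999, 0.999999):
        budget_minutes = (1 - target) * MINUTES_PER_30_DAYS
        print(f"{target * 100:.4f}% availability -> {budget_minutes:.2f} minutes of downtime allowed")

    # Error budget consumption: a service with a 99.99% SLO that has already
    # seen 3 minutes of downtime this window has spent roughly 69% of its budget.
    slo = 0.9999
    observed_downtime_min = 3.0
    budget_min = (1 - slo) * MINUTES_PER_30_DAYS  # about 4.32 minutes
    print(f"Error budget used: {observed_downtime_min / budget_min:.0%}")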

There are a couple ways Google hopes to reap rewards from this new program. While some customers come to Google just to solve a technical problem such as big data analytics, this program could prove tantalizing for another type of user Rensin describes as looking to “buy a little bit of Google’s operational culture and sprinkle it into some corners of their business.”

Of course, Google’s role here clearly isn’t altruistic. One successful deployment likely begets another, and that spreads to other IT shops as they learn what some of their peers are doing on GCP.

It also doesn’t do either side any favors when resources aren’t properly utilized and a new customer walks away dissatisfied. It’s in Google’s interest to make sure customers get the most out of the platform and to be a partner rather than a disinterested supplier that’s just offering up a bucket of different bits, said Dave Bartoletti, principal analyst with Forrester Research.

“It’s clear people have this idea about the large public cloud providers that they just want to sell you crap and they don’t care how you use it, that they just want you to buy as much as possible — and that’s not true,” Bartoletti said.

Rensin also was quick to note that “zero additional dollars” is not the same as “free” — CRE will cost users effort and organizational capital to change procedures and culture. Google also has instituted policies for participation that require the system to pass an inspection process and not routinely blow its error budget, while the customer must actively participate in reviews and postmortems.

You scratch my back, I’ll scratch yours

Customer Reliability Engineering also comes back to the question of whether Google is ready to handle enterprise demands. It’s one of the biggest knocks against Google as it attempts to catch Amazon and Microsoft in the market, and an image the company has fought hard to reverse under the leadership of Diane Greene. So not only does this program aim to bring a little Google operations to customers, it also aims to bring some of that enterprise know-how back inside the GCP team.

It’s not easy to shift from building tools that focus on consumer life to a business-oriented approach, and this is another sign of how Greene is guiding the company to respond to that challenge, said Sid Nag, research director at Gartner.

“They’re getting a more hardened enterprise perspective,” he said.

There’s also a limit to how many users can participate in the CRE program. Google isn’t saying exactly what that cap is, but it does expect demand to exceed supply — only so many engineers will be dedicated to a program without direct correlation to generating revenues.

Still, participation won’t be selected purely by which customer has the biggest bill. Those decisions will be made by the business side of the GCP team, but with a willingness to partner with teams doing interesting things, Rensin said. To that end, it’s perhaps telling that the first customer wasn’t a well-established Fortune 500 company, but rather Niantic, a gaming company behind the popular Pokémon Go mobile game.

Trevor Jones is a news writer with TechTarget’s Data Center and Virtualization Media Group. Contact him at tjones@techtarget.com.


October 31, 2016  8:03 PM

Google’s Stackdriver taps into growing multicloud trend

Kristin Knapp

A clear trend has emerged around public cloud adoption in the enterprise: organizations increasingly employ a mix of different cloud services, rather than go all in with one. As that movement continues, cloud providers who support integration with platforms outside their own – and especially with public cloud titan Amazon Web Services – have the most to gain.

Google seems to have that very thought in mind with the recent rollout of its Stackdriver monitoring tool.

Stackdriver, originally built for Amazon Web Services (AWS) but bought by Google in 2014, became generally available this month, providing monitoring, alerting and a number of other capabilities for Google Cloud Platform. Most notably, though, it hasn’t shaken its AWS cloud roots.

Google’s continued support for AWS shouldn’t come as a big surprise for legacy Stackdriver users, said Dan Belcher, product manager for Google Cloud Platform and co-founder of Stackdriver. His team has attempted for the past two years to assuage any customer concerns about AWS support falling by the wayside.

“[Customers were] looking for assurances that, at the time, we were going to continue to invest in support for Amazon Web Services,” Belcher said. “And I think we have addressed those in many ways.”

Mark Annati, VP of IT at Extreme Reach, an advertising firm in Needham, Mass., has been a Stackdriver user since 2013 and still uses the tool to monitor his company’s cloud deployment, which spans Google, AWS and Azure. He said his company is still evaluating the full impact of the Stackdriver tool being migrated onto Google’s internal infrastructure, but so far it appears to be business as usual.

And, considering his need for AWS monitoring support, that’s a relief.

“I have had no indication from Stackdriver that they would stop monitoring AWS,” Annati said. “If they did, that would cause us significant pain.”

There are a few changes, however, for legacy Stackdriver users post-acquisition. Now that Stackdriver is hosted on Google’s own infrastructure, for example, users need a Google cloud account to access the tool, and to manage user access and billing. In addition, a few features that existed in the tool pre-acquisition — such as chart annotations, on-premises server monitoring and integration with AWS CloudTrail — are unsupported, at least for now, as part of the migration to Google.

Stackdriver pricing options are slightly different, depending on whether you use the tool exclusively for Google, or for both Google and AWS. All Google Cloud Platform (GCP) users, for example, have access to a free Basic tier and a Premium tier, while users who require the AWS integration only have access to the Premium tier. That higher-level tier costs $8 per monitored cloud resource per month and, in addition to the AWS support, offers more advanced monitoring, as well as a larger allotment for log data.
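For a rough sense of scale, the Premium-tier price cited above works out as follows; the fleet sizes are arbitrary examples, not figures from Google or Extreme Reach.

    PREMIUM_PER_RESOURCE = 8  # USD per monitored cloud resource per month (Premium tier)

    for resources in (25, 100, 500):
        monthly = resources * PREMIUM_PER_RESOURCE
        print(f"{resources} monitored resources -> ${monthly:,} per month")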

In general, since the Google acquisition, Stackdriver’s feature set has expanded beyond the tool’s traditional monitoring features, such as alerts and dashboards, to now offer logging, error reporting and debugging tools to both AWS and Google users, Belcher said.

“As an AWS-only customer, your experience using Stackdriver is just as good,” he said.

Moving to a multicloud world

This cross-platform support – particularly for market leader AWS, whose public cloud revenue climbed 55% year over year in the third quarter, totaling over $3 billion – is going to become table stakes for cloud providers, explained Dave Bartoletti, principal analyst at Forrester Research.

“When you are offering a tool that is great for your platform, you’d better support AWS,” Bartoletti said. “What Google recognizes is that it would be stupid to say, ‘We’re going to release a management tool that is only good for our platform.'”

Google stands to gain from this AWS integration in other ways, too. For example, Stackdriver may eventually prompt more AWS users to evaluate Google’s homegrown data analytics tools, such as BigQuery, as a supplement to Stackdriver itself, Bartoletti said.

“It lets Google show off what else it has to offer,” he said.

While he didn’t offer any specifics, Belcher said Google will consider broadening Stackdriver to support other cloud platforms, such as Azure, and potentially on-premises deployments as well.

“There are more than enough customers on AWS and GCP that are running in some hybrid mode with some unsupported platform, so you can imagine we get requests every day to extend the support,” he said.

Annati, for one, would welcome the move.

“It would be great if Stackdriver covered it all,” he said. “That would be an easy decision for us.”


October 28, 2016  4:17 PM

Three IT nightmares that haunted cloud admins in 2016

Kathleen Casey

Cloud doesn’t hand enterprise IT teams treats all the time; in fact, it occasionally plays a few tricks. While there are many benefits to cloud, sometimes a cloud deployment can go terribly awry, prompting real-life IT nightmares — ranging from spooky security breaches to pesky platform as a service implementations.

We asked the SearchCloudComputing Advisory Board to share the biggest cloud-related IT nightmares they faced, or saw others face, so far in 2016. Here’s a look at their tales of terror:

Bill Wilder

Halloween nightmares came ten days early this year for DNS provider Dyn, as it was hit with a massive DDoS attack. The Internet simply can’t function without reliable DNS, and most cloud applications and services outsource it to companies like Dyn. Among the parties impacted by the attack on Dyn is a “who’s who” of consumer sites, such as Twitter, Spotify and Netflix, and developer-focused cloud services, such as Amazon Web Services, Heroku and GitHub. This news comes about a month after security researcher and journalist Brian Krebs had his own website hit by one of the largest-ever DDoS attacks, reportedly reaching staggering levels exceeding half a terabit of data per second.

Both attacks appear to have been powered by bot armies with significant firepower from unwitting internet-connected internet of things (IoT) devices. This is truly frightening, considering that there are billions of IoT devices in the wild already, from video cameras, DVRs and door locks to refrigerators and Barbie dolls. Since internet-exposed IoT devices are easily found through specialized search engines, and IoT device exploit code is readily available for download, we can be sure of one thing: we are only seeing the early wave of this new brand of DDoS attack.

Gaurav “GP” Pal

My biggest cloud computing nightmare was the first-hand experience of implementing a custom platform as a service (PaaS) on an infrastructure as a service (IaaS) platform. Many large organizations are pushing the innovation envelope in search of cloud nirvana, including hyper-automation, cloud-platform independence and container everything. Sounds great! But with the lines between IaaS, managed IaaS and PaaS constantly blurring, the path to nirvana is not a straight one. It took way longer to create the plumbing than anticipated, the platform was unable to pass security audits and getting the operational hygiene in place was challenging.

Adding to the cup of woes is the lack of qualified talent that truly has experience with custom PaaS, given that it has been around only for a short period of time. On top of that you have a constantly changing technology foundation on the container orchestration side. All of this made for a ghoulish mix. Only time will tell whether a custom PaaS on an IaaS platform is a trick or a treat.

Alex Witherspoon

The trend I keep seeing repeat is off-base cost expectations and the risk of operating non-cloud-architected applications in a private or public cloud environment that is not ideal for them.

Cloud environments should essentially be the automated abstraction and utilization of physical resources. Public cloud additionally charges you for that value, on top of the physical servers that cloud lives on — without your input in the buying decisions. For some businesses, the public cloud of choice and its cost model align well, and the tradeoffs tabulate well. Many others, such as Dropbox, to name a public example, find that public cloud quickly reaches an inflection point where it transforms from a savings into an operational cost that only continues to grow with the business and never provides the stable, controlled operational expenditure (OPEX) or capital expenditure (CAPEX) that a private cloud can. Given modern financial mechanisms that convert CAPEX investments in private clouds into flexible OPEX arrangements, the financial models for private cloud are often more economically feasible, at the expense of some additional complexity in managing it. Often, though, that tradeoff is justified by the control one gains in shaping the private cloud’s architecture to align precisely with the business’s technological and economic needs.
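A toy model makes that inflection point easier to see (this is an illustration, not part of Witherspoon’s answer): public cloud cost scales roughly linearly with the fleet, while a private cloud carries a large fixed baseline for facility and staff plus a lower marginal cost per server-equivalent, so there is a fleet size past which private becomes cheaper. Every number below is an invented assumption, not a figure from Dropbox or anyone else.

    PUBLIC_PER_UNIT_MONTH = 250      # public cloud price per server-equivalent per month
    PRIVATE_FIXED_MONTH = 40_000     # private cloud baseline: facility, staff, licenses
    PRIVATE_PER_UNIT_MONTH = 120     # amortized hardware plus power per server-equivalent

    # Fleet size at which the two monthly bills cross.
    break_even = PRIVATE_FIXED_MONTH / (PUBLIC_PER_UNIT_MONTH - PRIVATE_PER_UNIT_MONTH)
    print(f"Break-even fleet size: ~{break_even:.0f} server-equivalents")

    for units in (100, 300, 500, 1000):
        public = units * PUBLIC_PER_UNIT_MONTH
        private = PRIVATE_FIXED_MONTH + units * PRIVATE_PER_UNIT_MONTH
        cheaper = "private" if private < public else "public"
        print(f"{units:>5} units: public ${public:,} vs private ${private:,} -> {cheaper} is cheaper")

A growing business marches rightward along that curve, which is the dynamic described above: what starts out as a savings becomes the more expensive option as the fleet grows.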

These optimizations in cloud can be numerous, one of them being support for non-cloud architected applications. This is important to consider because not all clouds are built alike, and many public cloud providers like AWS, Azure and Google suggest the minimum viable architecture is a widely distributed application that can survive random outages at any single node. Many modern applications do provide for that, but ultimately, the majority of software in play today is operating with the expectation that the infrastructure underneath it is going to be 100% reliable, and these applications can be dangerous in a public or private cloud environment that isn’t designed for high availability.

To this end, it’s endlessly important to consider the risks, and the return on investment (ROI) picture throughout the lifecycle of the service. Clouds of all types carry diverse ROI profiles, and being able to specifically quantify the strategic fitment of the business needs against these offerings can avert technological and economic disaster for your business.


October 21, 2016  6:05 PM

Another week, another sign of VMware adapting to a multicloud world

Trevor Jones

VMware made waves last week by partnering with Amazon, but another big-name public cloud integration that flew under the radar this week also highlights where the company is headed as enterprises move to a multicloud strategy.

VMware rolled out a number of updates at VMworld Europe around vSphere, vSAN and vRealize that expand on its strategy of providing a common operating environment across public and private clouds. Among those updates is improved management of Microsoft Azure – a platform built by VMware’s biggest competitor in the virtualization space – providing out-of-the-box support with simplified service blueprints for multi-tier applications running on Azure.

VMware rightly recognizes that it’s a multi-cloud world among enterprise customers and Azure is one of the major players.

“We see many enterprises including Azure as one of several clouds in their catalog,” said Mary Johnston Turner, research vice president at IDC. “To be a credible enterprise multi-cloud management player today, support for Azure is just as important as support for AWS.”

vRealize, meanwhile, continues to be an important element of VMware’s broader multicloud management strategy and the focal point of its cloud management portfolio for private cloud and virtualized infrastructure automation, operations monitoring and log analytics. IDC ranks it as the top cloud systems management software on the market, based on revenue.

The new vRealize capabilities build on VMware’s Cross-Cloud Architecture for running, managing and connecting applications across environments, including Azure and AWS.

Both those efforts, though, fall short of what VMware is working on with AWS.

VMware’s planned integration with Amazon will provide bare-metal servers within AWS that VMware will manage and sell services on top of, so customers can migrate workloads via a software-defined data center. (VMware announced a similar deal with IBM earlier this year.) In many ways, this represents the culmination of a two-year shift from trying to keep everything within its own ecosystem to trying to get its software-defined data center (SDDC) onto as many different platforms as possible.

The added Azure support on vRealize is a positive step, but even better would be something similar to the capabilities being worked on with AWS, said Cory De Arkland, senior cloud engineer at San Francisco-based Pacific Gas and Electric Co. (PG&E).

PG&E, which uses vRealize, already has to manage dev-test workloads set up in AWS, thanks to shadow IT. Extending VMware environments to Azure would be beneficial in case the utility wants to provide its developers with public cloud resources in the future, either because of special feature sets or just to pit vendors against each other on price, De Arkland said.

“It just encourages competitiveness,” he said.

Microsoft says a third of all Azure VMs use Linux. And while there isn’t a lot of crossover between vSphere and Hyper-V users, there is growing demand among customers to integrate with Azure, according to VMware.

Still, there are competitive issues that could limit the depth of any hypothetical partnership between the two companies, said Gary Chen, research manager at IDC. VMware is still treating Azure as another cloud resource it can manage, and while the company should be pursuing deeper partnerships with other cloud providers, each deal will likely be different and may not look exactly like the AWS partnership.

“It would be a stretch to see Azure running VMware software anytime soon for deeper infrastructure integration, which is what the AWS deal, IBM and the rest of VCAN [the vCloud Air Network] has required,” Chen said.

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at tjones@techtarget.com.


October 3, 2016  12:48 PM

Cloud computing employment trends shift from generalists to specialists

Kathleen Casey

Cloud computing has created a plethora of new jobs in the IT industry and shows no signs of slowing down. But what are companies looking for in a potential cloud employee? Job hunters face the difficult choice of zeroing in on a certain cloud service or vendor, or becoming a jack of all trades.

We asked the SearchCloudComputing Advisory Board what they consider to be the biggest cloud computing employment trends — and what employers are looking for. Here’s a look at their answers.

Alex Witherspoon

[In] the older enterprise IT model, there was a drive toward specialists [who] had a deep understanding of complex systems, like modern storage, servers and networking, to operate software. The strength in this is that one could be deeply involved with all elements of the platform. Many of those systems are still there, but in a modern cloud, those infrastructure problems and specialties are abstracted away from the end user, so a modern company can focus much more heavily on the software, the customer and the business.

This trend has led to an Agile-focused mindset — one that is much more concerned with technology as an operating cost and a series of capabilities. This could be called DevOps, but it effectively removes the complexity of infrastructure [from] development and operations teams and turns the focus much more heavily toward the software and the design of software platforms, rather than infrastructure itself. Jobs will still be split between centralized architects and decentralized, general jacks-of-all-trades, such as those found on site reliability engineering teams, but they will commonly be focused on software architecture rather than infrastructure architecture.

Gaurav “GP” Pal

Given the breadth and scope of the cloud computing marketplace and offerings, we are starting to see specializations by certain lines of services and expertise. For example, until a couple of years ago, we used to have Amazon Web Services (AWS) Solution Architects, but given the scope of services, a Solution Architect can’t cover every topic and must specialize.

We are starting to see specializations, or competencies as they are more commonly called, around DevOps, security, big data and managed services. A further subset of specialization is developing around specific regulated markets, notably healthcare (HIPAA), the U.S. public sector (FedRAMP), commercial (PCI) and financial services (FFIEC).

Bill Wilder

I expect specialization in cloud computing roles to evolve along with the cloud platforms. The big public cloud platform vendors are supporting common industry approaches, such as use of containers, through vendors such as Docker, and VM configuration tools, such as Chef and Puppet. While valuable in the cloud, these skills are infrastructure as a service (IaaS) focused. You can put some VMs in the cloud so that infrastructure looks and acts like our on-premises world, except with incredible convenience around scale, pay-as-you-go, great automation support and other aspects. Many of these skills are not significantly tied to any platform, cloud or not, but they are certainly important for the IaaS style of cloud usage since a high degree of automation is the norm.

But the trend is that the big public cloud platform players are driving services for easy access to databases, messaging, security, scaling, key management, backup management and on and on. The catch is that even though the services are covering a lot of the same ground, they aren’t used in the same ways, and sophisticated use of these services begins to require specialization. For example, AWS Lambda and Azure Functions both support serverless compute models, but they are part of different ecosystems and there is a learning curve for becoming an expert in that particular feature and the broader set of functionality it sits within. I expect skills for cloud platform expertise will increasingly diverge because there is so much active investment and innovation across Amazon, Microsoft and Google.
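As a small illustration of that divergence (a sketch for illustration, not part of the advisory board’s answers), here are minimal HTTP-style handlers for the two services in Python. Both return a greeting, but the entry points, event shapes and surrounding ecosystems differ; the function names follow each platform’s defaults.

    # AWS Lambda: the runtime invokes a plain handler with an event dict and a context object.
    def lambda_handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}"}

    # Azure Functions (Python): the runtime invokes main() with typed request/response
    # objects from the azure.functions SDK, wired up through a function.json binding.
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}", status_code=200)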


September 9, 2016  6:43 PM

Can Virtustream succeed as a niche public cloud?

Trevor Jones

Virtustream – not VMware – is the lead public cloud infrastructure provider for the new Dell Technologies, but is its narrow focus enough to sway IT pros?

The company, which officially became part of Dell this week with the close of the protracted EMC acquisition, is taking a markedly different approach to a public cloud market where much of the focus and buzz is around the net-new. Instead, Virtustream is focusing exclusively on the less sexy legacy systems that still make up the vast majority of enterprise IT.

Virtustream tailors its public cloud to mission-critical and highly regulated applications, such as SAP and other ERP systems. The majority of its workloads are brownfield, lift-and-shift applications, and there’s often little re-architecting needed for this transition, said Kevin Reid, president and CTO of Virtustream, who spoke to TechTarget at VMworld in advance of the completion of the Dell deal.

It may be a niche in terms of the services offered by the typical public cloud provider, but it’s a massive one for potential conversions — industry observers estimate it to be a multi-billion dollar market that’s just starting to ramp up.

Going solely after the enterprise market seems to be a good model for Virtustream, said Carl Brooks, an analyst with 451 Research, in New York.

“They have some specialties but mostly they give a highly defined level of service for managed services around the infrastructure,” he said. “Virtustream does it better than any of their customers do and usually by a pretty wide margin.”

Of course, Virtustream, with its circumscribed approach, isn’t alone in going after this market segment. Amazon has pushed hard to get customers to offload some of their more burdensome IT assets onto Amazon Web Services (AWS), while IBM and Microsoft already have relationships with customers in this space. Even Oracle is seen by some as a dark horse that could push its way into the market.

Pitch to IT pros: We’re better at this than you

Part of the argument for the public cloud is the ability to build redundancy into the application and scale as needed, and proponents argue that the lift-and-shift model never provides the full benefits of cloud computing. Virtustream freely admits it’s probably not the best place for new applications.

“When doing a greenfield app, let me look at the public cloud model and go cloud-native, build resiliency and scale, but if I’m going to work with something I’ve invested tens of hundreds of millions of dollars into, there’s no justifiable ROI to rewriting that,” Reid said.

What often happens is enterprises make edge applications cloud-native while continuing to run the nucleus systems as stateful applications because of the investments on-premises and the lack of internal skills to rewrite applications to the cloud, Reid said.

Putting complex systems in standard public clouds involves extra integration engineering to make them run properly, Virtustream argues, because most of that infrastructure is standardized and commoditized to provide the lowest cost of entry. Virtustream is increasingly coming up against AWS and other large-scale public clouds, and its pricing is comparable when total cost of ownership is taken into account, Reid said.

So the question becomes, why move your on-premises workloads at all if they aren’t going to be rewritten? Virtustream’s response is that it’s better at running infrastructure than you. That entails better infrastructure utilization, improved application management via provisioning and automation, and built-in integration for security and compliance.

Virtustream and its place inside Dell

Virtustream has evolved into the primary cloud services arm of Dell Technologies, but that strategy didn’t come about smoothly. Last fall, Virtustream’s and VMware’s cloud assets were to be combined and reshuffled, with the infrastructure components handled by Virtustream and the software pieces overseen by VMware. That plan was ultimately scuttled amid investor concerns, and the two remain separate companies, though Virtustream does plan to add NSX support later this year to better connect to VMware environments.

As a result of the failed merger, Virtustream’s strategy remains largely unchanged, while VMware has tweaked its roadmap to focus on software delivery and connectivity to other cloud providers. Lump that with the private cloud offering from Dell and platform-as-a-service provider Pivotal, also brought over through the EMC Federation, and Dell Technologies has an amalgam of cloud services targeted at enterprises with a hefty share of legacy systems. It’s a model that runs contrary to how larger providers such as AWS are extending beyond infrastructure to a variety of higher-level services all accessible through one umbrella.

“If they can leverage the SAP workload play and somehow do something [with VMware] that is less tightly coupled that doesn’t raise question around CapEx and opening more data centers, there’s a potential synergy there,” said Sid Nag, research director at Gartner.

Recombining all the sensibly related pieces of the federation would have provided a more materially significant offering as opposed to what is now essentially a service catalogue, Brooks said.

But EMC has served as a good pipeline for Virtustream business, and ideally the various business units under Dell will be able to have some level of connectivity to the other assets, which have such a huge footprint within IT, Brooks said.

“Overall I don’t know if that gives them anything flat-out remarkable,” Brooks said. “There’s not any world-beaters here but you can definitely say that to the extent that they can reduce that friction, it never hurts if it’s easier to get gear and customers.”

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group. Contact him at tjones@techtarget.com.

