The Troposphere

October 21, 2016  6:05 PM

Another week, another sign of VMware adapting to a multicloud world

Trevor Jones

VMware made waves last week by partnering with Amazon, but another big-name public cloud integration that flew under the radar this week also highlights where the company is headed as enterprises move to a multicloud strategy.

VMware rolled out a number of updates at VMworld Europe around vSphere, vSAN and vRealize that expand on its strategy of providing a common operating environment across public and private clouds. Among those updates is improved management of Microsoft Azure – a platform built by VMware’s biggest competitor in the virtualization space – with out-of-the-box support and simplified service blueprints for multi-tier applications running on Azure.

VMware rightly recognizes that its enterprise customers live in a multicloud world, and that Azure is one of the major players.

“We see many enterprises including Azure as one of several clouds in their catalog,” said Mary Johnston Turner, research vice president at IDC. “To be a credible enterprise multi-cloud management player today, support for Azure is just as important as support for AWS.”

vRealize, meanwhile, continues to be an important element of VMware’s broader multicloud management strategy and the focal point of its automation, operations monitoring and log analytics for private clouds and virtualized infrastructure. IDC ranks it as the top cloud systems management software on the market, based on revenue.

The new vRealize capabilities build on VMware’s Cross-Cloud Architecture for running, managing and connecting applications across environments, including Azure and AWS.

Both those efforts, though, fall short of what VMware is working on with AWS.

VMware’s planned integration with Amazon will put bare-metal servers inside AWS that VMware will manage and sell services on top of, so customers can migrate workloads via a software-defined data center (SDDC). (VMware announced a similar deal with IBM earlier this year.) In many ways, this represents the culmination of a two-year shift from trying to keep everything within its own ecosystem to trying to get its SDDC onto as many different platforms as possible.

The added Azure support on vRealize is a positive step, but even better would be something similar to the capabilities being worked on with AWS, said Cory De Arkland, senior cloud engineer at San Francisco-based Pacific Gas and Electric Co. (PG&E).

PG&E, which uses vRealize, already has to manage dev-test workloads set up in AWS, thanks to shadow IT. Extending VMware environments to Azure would be beneficial in case the utility wants to provide its developers with public cloud resources in the future, either because of special feature sets or just to pit vendors against each other on price, De Arkland said.

“It just encourages competitiveness,” he said.

Microsoft says a third of all Azure VMs use Linux. And while there isn’t a lot of crossover between vSphere and Hyper-V users, there is growing demand among customers to integrate with Azure, according to VMware.

Still, there are competitive issues that could limit the depth of any hypothetical partnership between the two companies, said Gary Chen, research manager at IDC. VMware is still treating Azure as another cloud resource it can manage, and while the company should be pursuing deeper partnerships with other cloud providers, each deal will likely be different and may not look exactly like the AWS partnership.

“It would be a stretch to see Azure running VMware software anytime soon for deeper infrastructure integration, which is what the AWS deal, IBM and the rest of VCAN [the vCloud Air Network] has required,” Chen said.

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group.

October 3, 2016  12:48 PM

Cloud computing employment trends shift from generalists to specialists

Kathleen Casey

Cloud computing has created a plethora of new jobs in the IT industry and shows no signs of slowing down. But what are companies looking for in a potential cloud employee? Job hunters face the difficult choice of zeroing in on a certain cloud service or vendor, or becoming a jack of all trades.

We asked the SearchCloudComputing Advisory Board what they consider to be the biggest cloud computing employment trends — and what employers are looking for. Here’s a look at their answers.

Alex Witherspoon

[In] the older enterprise IT model, there was a drive toward specialists [who] have a deep understanding of complex systems, like modern storage, servers and networking, to operate software. The strength in this is that one can be deeply involved with all elements of the platform. Many of those systems are still there, but in a modern cloud, those infrastructure problems and specialties are abstracted away from the end user, so a modern company can focus much more heavily on the software, the customer and the business.

This trend has led to an Agile-focused mindset — one that is much more concerned with technology as an operating cost and series of capabilities. This could be called DevOps, but it’s effectively removing the complexity of infrastructure [from] development and operations teams and turning the focus much more heavily toward the software and the design of software platforms, rather than the infrastructure itself. Jobs will still range from centralized architects to decentralized, general jacks-of-all-trades, such as those found in site reliability engineering teams, but they will be commonly focused on software architecture, rather than infrastructure architecture.

Gaurav “GP” Pal

Given the breadth and scope of the cloud computing marketplace and offerings, we are starting to see specializations by certain lines of services and expertise. For example, until a couple of years ago, we used to have Amazon Web Services (AWS) Solution Architects, but given the scope of services, a Solution Architect can’t cover every topic and must specialize.

We are starting to see specializations, or competencies as they are more commonly called, around DevOps, security, big data and managed services. A further subset of specialization is developing around specific regulated markets, such as healthcare (HIPAA), the U.S. public sector (FedRAMP), commercial (PCI) or financial services (FFIEC).

Bill Wilder

I expect specialization in cloud computing roles to evolve along with the cloud platforms. The big public cloud platform vendors are supporting common industry approaches, such as containers, through vendors such as Docker, and VM configuration tools, such as Chef and Puppet. While valuable in the cloud, these skills are infrastructure as a service (IaaS) focused. You can put some VMs in the cloud so that the infrastructure looks and acts like the on-premises world, except with incredible convenience around scale, pay-as-you-go, great automation support and other aspects. Many of these skills are not significantly tied to any platform, cloud or not, but they are certainly important for the IaaS style of cloud usage since a high degree of automation is the norm.

But the trend is that the big public cloud platform players are driving services for easy access to databases, messaging, security, scaling, key management, backup management and on and on. The catch is that even though the services are covering a lot of the same ground, they aren’t used in the same ways, and sophisticated use of these services begins to require specialization. For example, AWS Lambda and Azure Functions both support serverless compute models, but they are part of different ecosystems and there is a learning curve for becoming an expert in that particular feature and the broader set of functionality it sits within. I expect skills for cloud platform expertise will increasingly diverge because there is so much active investment and innovation across Amazon, Microsoft and Google.
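To make that divergence concrete, here is a rough Python sketch of how little the two serverless models share beyond the basic idea. The handler shapes are the standard ones for each platform, but the function names, bodies and trigger wiring are illustrative only, and the Azure example assumes the azure-functions Python library rather than any particular release-era tooling.

    # AWS Lambda: the platform invokes a handler with an event dict and a context object.
    def lambda_handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": "Hello, " + name}

The Azure equivalent does the same trivial work, but with a different binding model and response type:

    # Azure Functions (Python worker): the platform passes an HttpRequest and expects
    # an HttpResponse back, configured through a separate function.json binding file.
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse("Hello, " + name)

The event shapes, bindings, deployment tooling and surrounding services all differ, which is exactly where the platform-specific learning curve Wilder describes comes in.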

September 9, 2016  6:43 PM

Can Virtustream succeed as a niche public cloud?

Trevor Jones

Virtustream – not VMware – is the lead public cloud infrastructure provider for the new Dell Technologies, but is its narrow focus enough to sway IT pros?

The company, which officially became part of Dell this week with the close of the protracted EMC acquisition, is taking a markedly different approach to a public cloud market where much of the focus and buzz is around the net-new. Instead, Virtustream is focusing exclusively on the less sexy legacy systems that still make up the vast majority of enterprise IT.

Virtustream tailors its public cloud to mission-critical and highly regulated applications, such as SAP and other ERP systems. The majority of its workloads are brownfield, lift-and-shift applications, and there’s often little re-architecting needed for this transition, said Kevin Reid, president and CTO of Virtustream, who spoke to TechTarget at VMworld in advance of the completion of the Dell deal.

It may be a niche in terms of the services offered by the typical public cloud provider, but it’s a massive one for potential conversions — industry observers estimate it to be a multi-billion dollar market that’s just starting to ramp up.

Going solely after the enterprise market seems to be a good model for Virtustream, said Carl Brooks, an analyst with 451 Research, in New York.

“They have some specialties but mostly they give a highly defined level of service for managed services around the infrastructure,” he said. “Virtustream does it better than any of their customers do and usually by a pretty wide margin.”

Of course, Virtustream, with its circumscribed approach, isn’t alone in going after this market segment. Amazon has pushed hard to get customers to offload some of their more burdensome IT assets onto Amazon Web Services (AWS), while IBM and Microsoft already have relationships with customers in this space. Even Oracle is seen by some as a dark horse that could push its way into the market.

Pitch to IT pros: We’re better at this than you

Part of the argument for the public cloud is the ability to build redundancy into the application and scale as needed, and proponents argue that the lift-and-shift model never provides the full benefits of cloud computing. Virtustream freely admits it’s probably not the best place for new applications.

“When doing a greenfield app, let me look at a public cloud model and go cloud-native, build in resiliency and scale, but if I’m going to work with something I’ve invested tens or hundreds of millions of dollars into, there’s no justifiable ROI to rewriting that,” Reid said.

What often happens is enterprises make edge applications cloud-native while continuing to run the nucleus systems as stateful applications because of the investments on-premises and the lack of internal skills to rewrite applications to the cloud, Reid said.

Putting complex systems in standard public clouds involves extra integration engineering to make them run properly, Virtustream argues, because most of that infrastructure is standardized and commoditized to provide the lowest cost of entry. Virtustream is increasingly coming up against AWS and other large-scale public clouds, and its pricing is comparable when total cost of ownership is taken into account, Reid said.

So the question becomes, why move your on-premises workloads at all if they aren’t going to be rewritten? Virtustream’s response is that it’s better at running infrastructure than you. That entails better infrastructure utilization, improved application management via provisioning and automation, and built-in integration for security and compliance.

Virtustream and its place inside Dell

Virtustream has evolved into the primary cloud services arm of Dell Technologies, but that strategy didn’t come about smoothly. Last fall Virtustream’s and VMware’s cloud assets were to be combined and reshuffled, with the infrastructure components handled by Virtustream and the software pieces overseen by VMware. That plan was ultimately scuttled amid investor concerns and the two remain separate companies, though Virtustream does plan to add NSX support later this year to better connect to VMware environments.

As a result of the failed merger, Virtustream’s strategy remains largely unchanged, while VMware has tweaked its roadmap to focus on software delivery and connectivity to other cloud providers. Lump that with the private cloud offering from Dell and platform-as-a-service provider Pivotal, also brought over through the EMC Federation, and Dell Technologies has an amalgam of cloud services targeted at enterprises with a hefty share of legacy systems. It’s a model that runs contrary to how larger providers such as AWS are extending beyond infrastructure to a variety of higher-level services all accessible through one umbrella.

“If they can leverage the SAP workload play and somehow do something [with VMware] that is less tightly coupled that doesn’t raise questions around CapEx and opening more data centers, there’s a potential synergy there,” said Sid Nag, research director at Gartner.

Recombining all the sensibly related pieces of the federation would have provided a more materially significant offering as opposed to what is now essentially a service catalogue, Brooks said.

But EMC has served as a good pipeline for Virtustream business, and ideally the various business units under Dell will be able to have some level of connectivity to the other assets, which have such a huge footprint within IT, Brooks said.

“Overall I don’t know if that gives them anything flat-out remarkable,” Brooks said. “There’s not any world-beaters here but you can definitely say that to the extent that they can reduce that friction, it never hurts if it’s easier to get gear and customers.”

Trevor Jones is a news writer with TechTarget’s data center and virtualization media group.

August 30, 2016  6:22 PM

Google’s SQL Server support is latest bid to win enterprise love

Kristin Knapp

In its latest attempt to shake perceptions that it’s not an enterprise-grade IaaS option, Google is cozying up again to Windows workloads in the cloud.

Starting this month, Google users can launch Google Compute Engine VM images preinstalled with Microsoft SQL Server. Google now offers beta support for three editions of the SQL Server relational database management system: SQL Server Express 2016; SQL Server Standard 2012, 2014 and 2016; and SQL Server Web 2016. The cloud provider said support for SQL Server Enterprise Edition 2012, 2014 and 2016 is “coming soon.”

Organizations could technically run SQL Server workloads on Google Compute Engine before by spinning up a VM and then installing and managing SQL Server themselves, explained Simon Margolis, director of cloud platform at SADA Systems, a cloud and IT consulting shop and Google partner based in North Hollywood, Calif.

With this expanded support from Google, however, those SQL Server images can now come preinstalled, and with a broader range of baked-in administrative capabilities. That shifts much of the SQL Server deployment, management and support responsibilities away from users and onto Google.

“It brings a good deal of peace of mind that otherwise didn’t exist,” Margolis said.
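For readers who want to see what this looks like in practice, here is a minimal sketch of launching one of the preinstalled images programmatically with the google-api-python-client library. The project ID, zone and machine type are placeholders, and the image family name is an assumption based on Google's published SQL Server image naming; check the current image list before relying on it.

    # Minimal sketch: create a Compute Engine VM from a Google-maintained image that
    # ships with SQL Server preinstalled. Assumes application default credentials.
    import googleapiclient.discovery

    compute = googleapiclient.discovery.build("compute", "v1")

    project = "my-project"      # placeholder project ID
    zone = "us-central1-a"

    config = {
        "name": "sql-server-vm",
        "machineType": "zones/%s/machineTypes/n1-standard-4" % zone,
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                # Image family name is an assumption; Google publishes SQL Server
                # images in the windows-sql-cloud project.
                "sourceImage": "projects/windows-sql-cloud/global/images/family/sql-std-2016-win-2016",
            },
        }],
        "networkInterfaces": [{
            "network": "global/networks/default",
            "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
        }],
    }

    operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
    print("Started operation:", operation["name"])

The point is less the specific call than the division of labor: the SQL Server installation and licensing come baked into the image, so the user's work starts at instance creation rather than at database setup.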

The move also expands the licensing options for running SQL Server on Google’s cloud. Since 2014, organizations with license mobility through the Microsoft Software Assurance program have been able to move existing SQL Server licenses to a Windows Server instance running on Google, and then manage those licenses themselves.

Now, users can also choose to spin up new SQL Server databases on Google and pay as they go, based on Google’s per-minute billing cycles, just as they would for other Google cloud resources.

“If I run my instance for 90 minutes, I’m not paying the same as I would if I just bought a license from Microsoft for a server I have physically,” Margolis said.
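The arithmetic behind that comment is simple, and worth seeing once. The rate below is a made-up placeholder, not Google's actual price, but the shape of the comparison holds: per-minute billing charges only for what runs, while a purchased license costs the same whether the server is busy or idle.

    # Hypothetical per-minute billing comparison; the hourly rate is a placeholder.
    hourly_rate = 0.80            # assumed VM + SQL Server rate, dollars per hour
    minutes_used = 90

    pay_as_you_go = hourly_rate / 60 * minutes_used
    print("90 minutes of usage costs about $%.2f" % pay_as_you_go)   # ~$1.20

    # A perpetual license plus a physical server is a fixed up-front cost,
    # paid whether the database runs for 90 minutes or all year.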

Google eyes the enterprise

Google’s expanded SQL Server support isn’t a major surprise. Over the past year, the cloud provider has made a series of moves intended to grow its enterprise appeal. For example, it rolled out a number of new cloud security features, including identity and access management (IAM) and the ability for users to bring their own encryption keys, as well as new database and big data services.

The expanded SQL Server support is another “paving stone” for Google to more seamlessly bridge corporate data centers to its cloud, said Dave Bartoletti, principal analyst at Forrester Research, an analyst firm in Cambridge, Mass.

“[This] removes the friction for companies who just want to get their database workloads into the cloud, often before they are ready to modernize them,” Bartoletti said.

But Google’s attempts to win enterprise mindshare haven’t exactly gone off without a hitch (or two). The provider has grappled with a series of service outages and disruptions this year that renewed questions about its reliability, and the platform still trails rivals Amazon Web Services and Microsoft Azure in the public cloud market.

While the vendor’s cloud portfolio has evolved significantly this year, gaining ground in the enterprise is no longer so much about Google’s cloud products as about its performance, support and partner base.

“There are no major gaps in Google’s offerings in terms of developer and infrastructure services,” Bartoletti said. “It’s now about execution with enterprise customers and building out its partner ecosystem.”

For Google, only time will tell if that’s easier said than done.

July 29, 2016  6:06 PM

In the fine print: Dissecting a cloud service agreement

Kristin Knapp

Without a cloud service agreement, public cloud users would be in the dark about crucial factors related to cloud performance, security and data privacy.

Like any contract, though, cloud service agreements aren’t the easiest documents to digest; their length and complexity can tempt even the most diligent cloud users to skim, sign and be done. This complexity has only increased as the industry itself has evolved, to reflect the move toward hybrid cloud, as well as new data privacy requirements.

Failing to scrutinize a cloud service agreement, however, is a major mistake, said Claude Baudoin, owner of cébé IT & Knowledge Management, an IT consulting firm in Austin, Texas.

Baudoin and other members of the Cloud Standards Customer Council (CSCC), a cloud user advocacy group, hosted a webinar this week focused on evaluating cloud service agreements. The overarching message: Don’t ignore the fine print.

The four pillars of a cloud service agreement

A public cloud service agreement, Baudoin explained, is generally broken up into four parts:

  1. Customer agreement document – the overarching, “umbrella” document that the user signs
  2. Acceptable use policy – a document that defines what the user is allowed, and not allowed, to do as a tenant of a public cloud service
  3. Service-level agreement – a document that defines the public cloud provider’s commitment to the user, and the consequences when the provider fails to meet those commitments
  4. Privacy policy – a document that outlines what the public cloud provider can, and can’t, do with the personally identifiable information (PII) that the cloud user might share in the process of contracting the service

The language of the agreement is usually spread between these four documents, and often the initial version the provider proposes won’t match the user’s expectations, Baudoin said. “That is why you cannot just close your eyes and sign on the bottom line. You have to scrutinize this language,” he said.

Data privacy and residency terms may not be presented in a single, well-identified place, for example. Instead, those terms could be scattered throughout these four documents, particularly throughout the acceptable use policy, the service level agreement and the privacy policy.

When evaluating the data privacy and residency terms of an agreement, remember that most security clauses are intended to protect the provider from any potential threats that users pose to the provider, the platform itself and to other users of the platform. In other words, most of the language reflects what the cloud users — not the provider — can and can’t do.

Also make sure the agreement outlines, in detail, the process that occurs in the event of a cloud security breach. Terms should spell out how users will be notified of a breach, how they will be protected and how they will be compensated for data loss or corruption.

“This is where you really want these [terms] to be well-defined,” Baudoin said.

Agreements vary based on deployment model

Terms of a cloud service agreement will vary greatly depending on the deployment model – infrastructure as a service (IaaS), platform as a service (PaaS) or software as a service (SaaS) – and users should look for specific criteria in each.

With a public IaaS agreement, the service revolves around the provider offering IT infrastructure resources – such as compute, networking and storage – and then supporting and securing those resources. The user is responsible for most everything else, said Mike Edwards, cloud computing standards expert and Bluemix PaaS evangelist at IBM, and a CSCC member.

“Largely, the customer is … responsible for all the components that are going to run on that system – your applications, your data, the operating systems you may install, database software, whatever,” Edwards said during the webinar.

This means, in general, commitments made by the IaaS provider to a user will be somewhat limited.

With PaaS, users receive access to a hosted application platform and development services. When navigating a PaaS agreement, it can be difficult to determine which services are native to the PaaS environment, and which aren’t. For example, in most cases, a database service will be part of the fundamental PaaS offering. Other types of services that your application might access – such as those related to social media – might actually direct your apps to services outside the platform.

Of course, the commitments and expectations for services native to the PaaS offering will differ from those outside it. Users should ask their PaaS provider for a complete list or catalog to identify which services fall into which category.

Lastly, with SaaS, users have access to a “complete” application, with all the middleware, database, storage, compute and other associated components, Edwards said. In this case, the cloud agreement will be most extensive.

“All of these things are really the responsibility of the provider in a software-as-a-service environment,” he said.

However, if the SaaS application deals with personal data – such as a hosted HR application – make sure the privacy statements and data protection policies are clearly defined, as the user is still responsible for the protection of that data.

Negotiation options

When negotiating better terms in a cloud service agreement, there is generally less flexibility with a “bucket” cloud service, such as public IaaS, because the provider expects to give “one-size-fits-all” terms, Baudoin said.

Still, that doesn’t mean there’s no hope in scoring more favorable terms. In most cases, larger cloud users have the most negotiating power, but all cloud users could benefit in the long run.

“Smaller customers are not totally without recourse or help,” Baudoin said. “Over time, if a larger customer demands and obtains certain changes in the terms, that will trickle down to all the customers of this particular cloud service provider.”

July 28, 2016  10:02 PM

Latest Xen bug has limited impact on cloud users

Trevor Jones

Another Xen bug generated some headlines this week, but it’s much ado about nothing for most cloud consumers.

A bug publicly identified this week in the Xen open-source hypervisor, which is popular among cloud infrastructure vendors from Amazon Web Services (AWS) to Rackspace to IBM SoftLayer, allows malicious para-virtualized guests to escalate their privileges to those of the host. But unlike past Xen vulnerabilities, this one required minimal reboots, with most vendors avoiding them altogether.

Two reasons this bug should cause minimal problems for cloud providers are that the required access level isn’t trivial and the patch is “absurdly simple,” said Carl Brooks, an analyst with 451 Research. In truth, enterprise IT shops face a much bigger threat from someone opening a bad email or clicking on the wrong link.

“Some providers can be affected and some may well be … but if this is a problem for you as a provider, you have more systemic issues around security than this particular exploit,” Brooks said.

Cloud vendors also are generally given advance notice in these situations so they can address the issue internally before the vulnerability is made public. And they’ve had plenty of chances to learn how to deal with Xen problems — there have been more than a dozen publicized patches for it in 2016 alone.

Cloud providers have learned to dodge bullets

Xen bugs have dogged AWS in the past, requiring roughly 10% of its instances to be rebooted as recently as October 2014. The last time a major Xen bug came to light, in February 2015, AWS limited the impact to only 0.1% of EC2 instances.

This time, AWS managed to avoid a reboot entirely, telling customers that data and instances are not affected and no customer action is required. Amazon declined to comment on how it addressed this patch and managed to avoid the reboot.

It’s possible that Amazon is so far off the trunk of the open-source project that this incident didn’t cause any major impact to AWS in the first place, Brooks said. But for customers that are still worried about the bug, the simplest solution is to switch from para-virtualization to full virtualization.
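For AWS customers who want to check their own exposure, the virtualization type of each instance is visible through the EC2 API. Here is a rough boto3 sketch (region and credentials are placeholders assumed to be configured) that flags instances still running as para-virtualized guests, the only type affected by a PV-only escalation bug.

    # Rough sketch: list EC2 instances still using para-virtualization (PV) rather than HVM.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance.get("VirtualizationType") == "paravirtual":
                    # Relaunching these from an HVM AMI sidesteps PV-only Xen issues.
                    print(instance["InstanceId"], instance["InstanceType"])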

Other Xen-dependent cloud platforms appear to be taking a cue from Amazon’s improvements in addressing these patches to avoid downtime. Details are scarce, but it’s believed some vendors have relied on live migration, while AWS is among those suspected of using hot patching.

Rackspace, which also declined to comment, issued a statement telling customers it had no plans to perform any reboots and that no customer action was needed. That’s a reversal from 2015, when it needed to do a Xen-related reboot, though even then it communicated with customers, giving them advance notice of when their instances would be shut down.

The only major cloud provider that appeared to have significant downtime from this Xen bug was IBM. CloudHarmony, which tracks uptime for a wide range of cloud providers, noticed all 20 of its IBM SoftLayer status VMs were rebooted over the weekend with five to 15 minutes of downtime for each.

IBM, which did not return requests for comment, does not appear to have a public-facing status page for maintenance notifications — its Twitter feed for SoftLayer notifications has been updated once in the past year, to proclaim it is “only used if there is an emergency situation.”

Cloud hosting provider Linode has had to perform maintenance on its servers that rely on Xen. Since the last major bug in early 2015, Linode has moved its newer servers to the KVM hypervisor. In its security notice, it told customers that anyone wanting to move from the legacy Xen virtual machines to the KVM virtual machines would avoid the reboot.

It’s unclear if Linode’s move to KVM had anything to do with Xen’s spate of patches over the past 18 months. The company declined to comment.

July 27, 2016  3:18 PM

Bust these common cloud computing myths

Kathleen Casey

For most enterprises, it is no longer a question of if they will move to the cloud, but when.

The public cloud services market has steadily grown over the past few years. In 2016 alone, it will increase 16.5% to $204 billion, according to analyst firm Gartner. But even with cloud’s popularity, there are still a number of cloud computing myths and misconceptions that steer some enterprises in the wrong direction.

We asked the SearchCloudComputing Advisory Board what they consider to be the most common cloud computing myths that influence enterprises’ decisions – and why those misconceptions exist. Here’s a look at their answers.

Christopher Wilder

The first misconception of cloud computing is that it’s a business strategy. It’s not. Cloud has evolved from competitive advantage to competitive parity. However, organizations need to be smart about what they put in the cloud, and what they keep on-premises. I anticipate in the next three years, over 80% of businesses will have at least one application that resides in the cloud.

Another misconception is that cloud will save you money by moving IT costs from Capex to Opex. The upfront costs of moving to the cloud tend to be smaller, but the long-term costs are normally on par with in-house infrastructure costs. In the next two years, cloud deployment costs, especially with the public cloud providers, will equal, if not exceed, on-premises IT deployment costs.

Finally, another misconception is the ease of migration to the cloud from traditional on-premises environments. Although it’s getting better, the lack of viable cloud migration and automation tools makes it more difficult for companies to move seamlessly. These tools have traditionally been very difficult to use and require specific and expensive expertise. Furthermore, moving applications to the cloud also means you must move all integration and supporting elements; if you decide to change providers, it can lead to both a contractual and technical quagmire.

Bill Wilder

One myth is that running applications in the public cloud is less expensive than the on-premises equivalent. Let’s consider a simple scenario: your company is running a line of business (LoB) web application on-premises and is considering moving it to the public cloud, such as Microsoft Azure or Amazon Web Services (AWS).

A LoB application typically requires servers, disk, and database resources — and corresponding staff to install, tune and patch databases, servers and more. A lift and shift approach could move this exact architecture to the cloud. The architecture would map cleanly to cloud features, such as VMs and storage, but you will still need the same staff expertise — plus more, since you’ve added new skills to work with your public cloud platform. There are many valid reasons to move to the cloud with a lift and shift approach, but if cost is the only driver, this configuration in isolation may be more expensive than on-premises.

Now consider a more cloud-native approach. Instead of running your own database on a VM, you choose a provider’s managed database with lower operational complexity. You deploy your web code on platform as a service, further reducing operational complexity and supporting fast deployments. Your LoB web application may be nearly idle overnight and on weekends, so you scale resource usage to match the need at any given time — without any downtime. By taking advantage of these and other efficiencies available in the public cloud, this configuration can be highly cost-efficient.

And the lesson here is that cost efficiency in the cloud requires effort.
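One concrete example of that effort, sketched below with placeholder group names, sizes and schedules, is putting scheduled scaling actions on an Auto Scaling group so an LoB application that idles overnight isn't billed for daytime capacity around the clock. The sketch uses AWS and boto3 purely as an illustration; equivalent mechanisms exist on the other big platforms.

    # Illustrative sketch: schedule an Auto Scaling group to shrink overnight and
    # grow back for business hours. Names, sizes and times are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale in to a single instance every weekday evening (times are UTC).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="lob-web-app",
        ScheduledActionName="scale-in-overnight",
        Recurrence="0 20 * * 1-5",
        MinSize=1,
        MaxSize=1,
        DesiredCapacity=1,
    )

    # Scale back out before the workday starts.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="lob-web-app",
        ScheduledActionName="scale-out-morning",
        Recurrence="0 7 * * 1-5",
        MinSize=2,
        MaxSize=8,
        DesiredCapacity=4,
    )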

Gaurav Pal

Many IT organizations, especially larger ones, continue to believe that commercial cloud platforms, such as AWS and Azure, are not secure or cost-effective alternatives for providing hosting and infrastructure services. We continue to see security concerns cited as the top reason for not migrating to a more cost-efficient and flexible model of enterprise services delivery. However, this is largely a myth that partially stems from a lack of understanding of the new security model that cloud platforms provide. The U.S. federal government recently announced that three cloud platforms meet the Federal Risk and Authorization Management Program’s High Baseline Requirements, which include a comprehensive set of controls to protect sensitive data.

Progressive organizations like Capital One, GE and many others, including large public sector agencies, continue to realize the business benefits of agility and better security through the tools and services provided through standardized interfaces and APIs. I fully expect more large organizations to pivot and direct their investment dollars to business-oriented services, such as machine learning and data science, rather than build data centers or private clouds.

Alex Witherspoon

I think there’s a misconception that cloud is universally cheaper and better. I like cloud, a lot. It is where I am putting a lot of my investments and my business … but every time I would talk about a potential business use case to [a vendor] it wasn’t, “Hey, deploy this thing to solve that,” instead it’s, “Well, get this recipe of seven products we sell, put it together, engineer it and then see how magical it can be.” To do that, I am going to have to invest a whole bunch of engineering effort to actually make my application work that way.

Cloud can be a groundbreaker; it can make you a winner in your business, but be very specific about your intentions. As cloud providers try to differentiate themselves, trying to understand the problem you [need] to solve and the best tool to solve it is going to be really hard and difficult in the coming years. Do you go with a generalist, like AWS, that just tries to be good at everything, or are you going to find someone who specializes in [specific cloud features]?

We really [need to] respect cloud for what it is: a series of hosting providers that have different unique qualities that need to be evaluated. I think some folks are just really quick to jump right into what their one favorite cloud is and just live there. I think folks would do well to give a little more credence to what problems they are really trying to solve, and what cloud providers are trying to specialize in. AWS is obviously heading in many different directions, but some of these smaller providers have a specialty. That’s where they are spending their money, which means that’s where you’re going to get the best return on your investment as a customer.

June 30, 2016  2:54 PM

SearchCloudComputing Advisory Board profile: Alex Witherspoon

Kristin Knapp

In March 2016, SearchCloudComputing formed an Advisory Board to delve deeper into the latest cloud trends. In our last post, we introduced Advisory Board member Bill Wilder. This week, we talked with Alex Witherspoon, vice president of platform engineering at FlightStats, a global data service company in the aviation space, based in Portland, Oregon.

Witherspoon manages IT infrastructure and software engineering teams, which handle global flight data for the company. Witherspoon was also in charge of the company’s migration to Amazon Web Services’ public cloud, and manages the company’s hybrid cloud environment.

In the Q&A below, SearchCloudComputing spoke with Witherspoon about everything from cloud security and management trends to hybrid cloud and drones.

SearchCloudComputing: What drew you toward a career in cloud?

Alex Witherspoon: What really drew me into this whole field is I just have this kind of love/hate relationship with computers and what they do. I originally got my start on mainframes when I was working on an IBM AIX system. It’s kind of funny, in computer science we see trends come and go again, but they tend to repeat. With mainframes, they actually operate a lot like what we refer to as the current cloud. It was a way to interface with a whole bunch of CPUs, memory, storage and network all in one big box. It was kind of elastic and when you needed more you just shoved more in there. And that’s a really cool capability.

It was hard to afford back then, but it was cool, because you could just expand to whatever scale you needed; you could tackle the really hard problems. There was this period of time when you could buy a smaller, cheaper server and people thought, “Well, instead of buying a big monolith, I’ll buy a hundred of these smaller cheaper servers.” We did that as an industry… and at the time, managing and orchestrating those [servers] had to become software-driven and that’s where we see the cloud today, in all of its various facets. [Cloud computing] is a really cool way of managing computers at a scale that we’ve never been able to do before.

What’s one project you’re especially proud of in your cloud career?

Witherspoon: There were a lot of projects around Internet2 earlier in my career when I worked with Wichita State University in Kansas. At the time, we were stringing 10 gig network connections, when most people only had dial-up, so this was considered absolutely blazing fast. We were trying to build a cloud — I mean, what we today would call cloud — and this private cloud was a second rendition of the internet. When we work on cloud, what we are doing is shifting the human effort into a different level of the stack, instead of spending all the human investment; we only have so many waking hours in the day. The project is still alive, except much, much faster now. And from what I’ve heard, those projects have only continued to grow and have enabled these folks to do higher resolution and more rich data exchanges across the world, and Internet2 has become an international effort worldwide.

What are the top challenges organizations face with cloud?

Witherspoon: One of the big issues has always been management, so if you want to deconstruct what cloud really means, it’s the ability to actually manage and orchestrate all of these CPUs, memory, storage, network and all these [other resources]. What we commonly refer to as cloud means that it is all software-orchestrated. It means that we go to AWS and push a button or make an API call, which is a huge fundamental step from just a few years ago when everyone was pushing a button on a server and waiting for it to boot up.

The setup time for an individual service or server could be an hour, or it could be days, and that isn’t someone just twiddling their thumbs; it’s a different way of interfacing with the same classical problem.

Why did you adopt a hybrid cloud at FlightStats?

Witherspoon: We wanted to have high performance compute so we could do things like predict where airplanes would be months from now – to be able to determine and clean up dirty data in a big data set. That required higher performance than what AWS could give us at a reasonable rate, so we created this private colocation facility and endowed it with all those cloud-like qualities, so we could actually have a private cloud to complement a public cloud. It’s programmatically managed, it is amorphous in that I can expand it horizontally as much as our budget allows and have it manage itself. I don’t have engineers working under me, staring at a storage array and managing it day to day.

What cloud trends are you especially interested in?

Witherspoon: At a really broad level, I am watching the investments of these cloud providers because they’re not all lining up. Some of them are making investments that cater to specific use cases and we’re still watching that evolve. What is really interesting is the niches of cloud; these cloud providers are working to provide for very specific niches. So AWS and Azure are trying to be general cloud compute and solve every issue, but we see some other folks making other steps and I am kind of watching those because, as someone managing engineering efforts into cloud, I might choose to pick one if it better caters to my solution at a better rate. So that is kind of the cost-effectiveness piece, but that’s also the technical capability of these clouds — they aren’t all built the same on purpose, they all are a little bit different. On the other hand, I’m looking for any kind of growing pains, and some of the really big ones are with AWS. They have been really struggling to get their support where it needs to be. If anything, it’s a growing pain from the enormity of their success.

To get a little more in the weeds, I am definitely watching cloud security. I am not one of the folks who are simply “tin foil hat” scared of the cloud, however, I am absolutely confident that I can call cloud security less mature on most environments than what I would expect in an enterprise environment. I would expect better reporting [in the cloud], and I would expect to see that security layer. In AWS, we just have to trust that it’s there. So, watching that trend to see how it improves will be very curious to me.

When not working, what do you enjoy doing?

Witherspoon: I am also a businessman, and I find a lot of those work passions manifest in some of my hobbies. For example, I really like to race cars and motorcycles, and so I did semi-pro racing and things like that. So I really enjoy that. I also used to run a little company, as a hobby, that built drones and flew those around. Obviously, that’s a very interesting topic with all kinds of stuff going on right now. So I did long-range drone flights where I would fly from Portland, Oregon, to the coast and back with autonomous flights. And even stuff like gardening, so I am all over the board — you could call me a polymath.

June 21, 2016  4:34 PM

Ten cloud conferences to pencil in for 2016

Kathleen Casey

Cloud conferences and events are the perfect place to gain more industry knowledge and sharpen your skills. For the remainder of 2016, there are numerous opportunities to grow your networking base and meet top influencers. IT pros of all roles and experience levels can benefit from what cloud conferences have to offer by attending talks, sessions and hands-on training, as well as by discovering new tips, tricks and tools. Here are a few of the top cloud conferences and events you should attend in 2016.

HotCloud ’16

June 20-21

The 8th USENIX Workshop on Hot Topics in Cloud Computing, known as HotCloud ’16, covers multiple models of cloud computing, including IaaS, PaaS and SaaS. Researchers and practitioners will discuss current developments, new trends and recent research related to cloud computing. Attendees can join discussions focused on cloud implementation, deployment and design issues, and delve deep into hot, emerging topics, such as serverless computing and big data.

IEEE Cloud 2016

June 27 – July 2
San Francisco

The Institute of Electrical and Electronics Engineers is holding its 9th International Conference on Cloud Computing, providing an opportunity for researchers and industry specialists to come together and discuss recent advances and best practices for cloud computing. Attendees can participate in panel discussions concerning multiple topics, such as big data, mobile and the internet of things (IoT). Other sessions focus on tips for managing cloud computing SLAs, performance and storage systems.

Hadoop Summit

June 28-30
San Jose, Calif.

At the Hadoop Summit, attendees can meet with users, developers, vendors and other members from the IT community to explore Apache Hadoop. This three-day event offers direction for using and developing on Hadoop, as well as sessions that explore how to build an enterprise data architecture with Hadoop. The show has eight technical tracks — including one on Cloud and Operations — and two business tracks, and includes Hadoop case studies from companies including Macy’s and Progressive.

How to Make the Move to the Hybrid Cloud

June 28
Burbank, Calif.

At this TechTarget seminar, sit down with Jon Brown, VP of Market Intelligence at TechTarget, and join other IT pros to talk about hybrid cloud challenges and strategies, including those related to automation, availability and security. While most know the benefits of public and private cloud, few have been able to properly mix them together to support modern apps and workloads. This seminar will discuss how to use the hybrid cloud to host production workloads, rather than just for development and testing.

AWS Global Summit Series

AWS London Summit, July 6-7
AWS Santa Clara Summit, July 12-13
AWS NYC Summit, August 10-11

Open to AWS cloud users of all experience levels, the traveling AWS summit will focus on the latest in AWS innovations and services. New and old AWS users will share insights into topics such as AWS architecture, performance and operations. Other session topics include how to create IoT applications to run on AWS, and how to secure AWS workloads through DevOps automation. IT pros can also gain hands-on AWS experience in a series of technical bootcamps and labs.

Gartner Catalyst Conference

August 15-18
San Diego

Gartner’s Catalyst Conference is an event for technical professionals across various industries and roles, ranging from application development and business intelligence to infrastructure management, operations and security. There are multiple cloud-related tracks, including “Designing Your Cloud-First Architecture and Strategy,” which offers sessions on how to select cloud providers, minimize cloud security risks and deploy cloud applications.

Microsoft Ignite

September 26-30

For IT professionals interested in learning more about Azure, Microsoft Ignite offers over 50 sessions dedicated to the public cloud platform. Topics range from Azure security and encryption to serverless computing with the new Azure Functions. In addition to receiving advice and hands-on experience, learn what Microsoft has in store for Azure Compute services and how it could affect you or your organization. Attendees can also take steps to beef up their cloud resumes with five Certification Exam Prep sessions dedicated to Azure.

Modern Infrastructure and Operations: The Next Era of Hybrid Cloud, Virtualization Management, and Containers

September 27 — Dallas
November 17 — Atlanta
December 6 — San Francisco

Hybrid cloud and containers are hot topics today, and Keith Townsend, an expert in the enterprise virtualization space and a TechTarget contributor, explores both technologies in this two-part seminar. In part one, attendees will learn how to optimize and manage virtualization costs and identify factors that increase those costs, such as networked storage. In part two, Townsend details technologies offered to complement server virtualization, including hybrid cloud and containers, and the benefits they can bring to data management.

Dreamforce ’16

October 4-7
San Francisco

With over 2,000 sessions, this software as a service conference brings together experts, influencers, users and developers to network and discuss Salesforce. The sessions are for all experience levels and offer insights for cloud admins that support and manage Salesforce apps. Topics include agile release management, building a center of excellence and previews from the Salesforce App Cloud roadmap.

AWS re:Invent

November 28 – December 2
Las Vegas

AWS re:Invent is the largest assembly of the AWS cloud community and caters to current customers as well as users and developers new to the platform. There are numerous types of meetings, such as question and answer sessions, technical sessions, hackathons and keynote speeches, that seek to expand your knowledge of AWS features and products. Be the first to know what is coming in the future at the conference’s main event — the announcement of new AWS products and services.

May 31, 2016  5:58 PM

SearchCloudComputing Advisory Board profile: Bill Wilder

Kathleen Casey

In March 2016, SearchCloudComputing formed an Advisory Board, consisting of cloud users and experts, to provide insight into the latest cloud computing trends and technologies. One Advisory Board member is Bill Wilder, CTO at Finomial, a Boston-based software as a service provider for the hedge fund industry. Wilder is also the founder of the Boston Azure Cloud User Group, a community-run group that meets regularly to discuss challenges and best practices associated with Microsoft’s public cloud platform. In addition to Azure, Wilder focuses on cloud security, architecture and platform as a service.

In the Q&A below, SearchCloudComputing spoke with Wilder about his career, top cloud market trends and the various challenges users face in cloud.

What drew you toward a career in cloud?

Bill Wilder: I think it was in 2008 [that] I was at a technical conference — Microsoft had an annual technical conference at the time called the Professional Developers Conference, or PDC. At that conference, Microsoft first publicly unveiled their Azure cloud platform, and that was kind of a turning point for me. Knowing Microsoft’s role in the technology community as being a very influential one, and knowing the resources they have and their staying power, I pretty much decided that the cloud thing was getting real and decided to devote more of my time to it. Within a year, I had started Boston Azure, which is a community group, because I was looking for a vehicle to accelerate my own learning and that gave me a vehicle for bringing in speakers and experts.

In the early days of the public cloud, as that was, it became apparent that an open-minded or non-risk-averse… mind-set was needed to jump into the cloud. So, I left my day job to start consulting full-time, because it was just a more interesting endeavor for me.

So, Microsoft has this recognition program for experts in the community who share their knowledge, and I was sharing my knowledge through my user group and blogging, so they recognized me as what they call a Microsoft MVP for Azure, which was a new specialization at the time. So I was pretty much all in; all I was doing was cloud. I had a number of clients who I was helping — some of them were tiny startups, some of them were big enterprises — [to] either deliver [services] or work out a strategy. And one of my clients was Finomial, a born-in-the-cloud start-up that is addressing the hedge fund industry, and eventually I ended up joining them as CTO.

Is there one memorable project you’re especially proud of in your cloud career?

Wilder: There isn’t one particular project that sticks out, but the thing that really does stick out, thematically, is that the public cloud has done a lot to democratize access to the kinds of sophisticated resources that only big companies used to be able to afford. A number of my clients were start-ups and they probably couldn’t have done what they were doing in a pre-cloud world. So the idea that this stuff was available for anybody with short money, and you could rent it and you didn’t have to have a million dollars in [venture capital] funding to figure out what you’re doing — that sticks out to me as pretty cool.

What trends in the cloud market are you currently following?

Wilder: In the early days of the cloud, the services that were offered were sophisticated, but they weren’t terribly feature-rich. What we’ve been seeing, increasingly over time… is that the services that you can get access to in the cloud are just so sophisticated that it’s becoming less and less appealing not to use the cloud.

So, as a couple of examples, if you go to any of the big public cloud vendors, say Microsoft Azure, there are something like 25 regions in the world where they have data centers that you can access with a couple of mouse clicks or a little programming and you can deploy all over the planet. So if that’s, say, the backend to your mobile app… you couldn’t do that on your own. It’s part of the democratization. Or for enterprise customers, with a small amount of effort… you can have a multi-region disaster recovery strategy, where the cloud vendor does all the heavy lifting.

And one of the things that [the cloud] allows us to do at a company like Finomial — and we’re an early-stage company trying to be efficient with our people — is that we don’t have to have people on staff who are experts in the things that we can buy as a service from the cloud. That is a huge [benefit] for us. So, over time, as the services become more sophisticated, it becomes more turnkey, and we can focus on our business and the software that differentiates us from competitors, and the software that our customers need to get their work done more efficiently, rather than on the infrastructure that a couple of years ago, or a couple of months ago in some cases, would have required a lot of our time and attention.

What are the top challenges organizations face with cloud?

Wilder: One of the most consistent challenges I see is conceptually understanding what the cloud can do… because for a lot of companies, the cloud is different enough that they might misunderstand the best way to use it. A good example of this is that a lot of companies are best off starting with what’s called lift and shift, which [involves taking] existing workloads and [moving] them onto virtual machines in the cloud to kind of get their feet wet, which is great. But… one of the big challenges is that the way you might organize your teams to be fully productive in the cloud is probably different from how you organize your teams in your enterprise. The DevOps movement has a very strong cloud affinity, so if you’re not doing that, your efficiencies in the cloud will be lower, and if your architecture isn’t modern, your efficiencies in the cloud will be lower.

When not working with the cloud, what do you enjoy doing?

Wilder: My wife and I are Patriots season ticket holders, so we are sports fans, for sure. And we like to hang out with our four sons.   
