From Silos to Services: Cloud Computing for the Enterprise


July 7, 2018  2:33 AM

Cloud computing definitions are no longer relevant

Brian Gracely
AWS, Azure, CaaS, Cloud Foundry, containers, FaaS, Google, IaaS, OpenShift, PaaS, SaaS, Serverless computing

Back in 2011, the National Institute of Standards and Technology (NIST) published its Definition of Cloud Computing. At the time, the definitions of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) seemed reasonable, especially given the existing offerings in the market. AWS mostly offered compute and storage resources, so AWS was IaaS. Heroku, Cloud Foundry and OpenShift offered platforms for developers to push code, so they were PaaS. And things like WebEx, Salesforce.com or Gmail were just on-demand versions of software that was previously used within a company, so they were SaaS.

Fast-forward 7 years and those definitions no longer seem very relevant. For example, neither the concept of “containers” (or Containers-as-a-Service, CaaS) nor “serverless” (or Functions-as-a-Service, FaaS) is defined or mentioned in that NIST document.

On one hand, someone might argue that the NIST definition could simply be updated to add CaaS and FaaS as new -aaS categories. On the other hand, someone could argue that CaaS is essentially IaaS with containers instead of VMs, that PaaS platforms have often used containers under the covers, or that FaaS is essentially the same as PaaS – it has just become more granular about what gets pushed, a function instead of an application, but both are just chunks of software. Either of those perspectives could be considered valid. They could also make things confusing for people.

Which brings us back to my original claim – the definitions are no longer really all that useful. AWS is no longer just IaaS. OpenShift is no longer a PaaS. And many things could be considered SaaS, from Salesforce to Slack to Atlassian to a Machine-Learning service on Azure or Google Cloud.

We already see the lines getting very blurry between the CaaS and PaaS market. Should a Kubernetes product only be a CaaS platform (if it allows containers to be deployed), or can it also be a PaaS platform if it enables developers to also push code?

The last point about SaaS was raised recently after several people listened to our Mid-Year 2018 show, in which we discussed which companies are leading in SaaS. It sparked a conversation on Twitter, where a number of people threw out “AWS”. These comments surprised me, as I had previously not thought of AWS as a SaaS provider, but mostly as an IaaS and PaaS provider with a bunch of “tools” or “services” that could be connected together to build applications. But the more I thought about it, the more I realized that many of the services of AWS (or Azure or GCP) are just software, being delivered (and run/managed) as a service. Everything from a VM to storage to an authentication service to an AI/ML service is just software delivered as a service – a bunch of exposed APIs. And developers, or non-developers, can use them any way they need to create a simple service or a complex service. They are no different than a service like Netlify or Auth0 or Okta or GitHub (or thousands of others). When you start thinking about things from that perspective, it sort of explains why Gartner’s IaaS MQ now lists only six web-scale clouds, but it also makes the “IaaS” part sort of irrelevant (from a classification perspective).
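To make the “just a bunch of exposed APIs” point concrete, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes credentials are already configured in the environment; the point is simply that a compute service, a storage service and an identity service are all consumed the same way – as API calls.

```
# Illustrative only: three very different "services" (compute, storage, identity),
# all consumed the same way -- as API calls -- via the AWS SDK for Python (boto3).
import boto3

ec2 = boto3.client("ec2")   # "IaaS-ish" compute service
s3 = boto3.client("s3")     # storage service
sts = boto3.client("sts")   # identity/authentication service

print(ec2.describe_instances()["Reservations"])  # list VMs
print(s3.list_buckets()["Buckets"])              # list storage buckets
print(sts.get_caller_identity()["Arn"])          # who am I?
```

Whether we label the result IaaS, PaaS or SaaS matters far less than how these building blocks get composed into an application.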

So the relevant questions are now:

  • Where do you want to run an application? How critical is it to control the environment?
  • What abstractions do you want to provide to developers, and what abstractions does the operations team want to maintain?
  • If using multiple clouds, or services, how important is it to have consistency of abstraction and operations?

As with every technology, we learn as the technology evolves, and it’s worth questioning whether we should still use the same taxonomies to describe what we’re trying to accomplish. In the case of the cloud *-aaS definitions, the old definitions no longer apply.

June 28, 2018  12:59 PM

Digital Transformation Requires Good Storytelling

Brian Gracely
Digital transformation, Marketing, Meetup

Photo Credit: http://bit.ly/2twDSSP

There’s an interesting dynamic happening in technology circles these days. On one hand, there is the long-standing animosity of technologists toward “marketing”. On the other hand, there is a growing desire by business leaders to accelerate their ability to transform and potentially disrupt how they compete in their given markets. The technologists live in a world of constant technology change, but often bemoan change being forced upon them. The business leaders are using terms like “digital transformation” to articulate the changes they hope to enact through technology, organizational and process change. Both sides are facing change, but far too often they aren’t on the same page.

In the past, getting funding for a new IT project was often based on the ability to build a business case that demonstrated it would reduce costs or improve the speed of an existing process. In some cases, it was just a regularly scheduled upgrade because the equipment had been fully depreciated. But the new projects are different; they require a business case built on a new type of calculus. The new model requires that the company measure its ability to do something very new, often without the required skills in place to accomplish the goals. To a certain extent, this requires a leap of faith, which means that many people are putting their reputations (and maybe their jobs) on the line with the success or failure of the project.

So where does this “storytelling” come into play? On one level, it’s the normal activity of getting “buy-in” from different levels of the organization. This means going to the various stakeholders and convincing them that you’ll help them achieve their individual goals with the project. But on a different level, this storytelling requires someone (or many people) to create a vision and get that vision to permeate across the company. It’s the type of storytelling that gets groups to want to help make it possible, instead of just being recipients of the project.

We recently began seeing more executives exercising their storytelling skills at events, standing on stage to tell their story at a tradeshow or meetup. This didn’t happen 5+ years ago, but as more companies recruit top-level engineers (developers) to staff those new projects, it is happening more frequently. These executives are evolving their ability to talk about their vision, their successes, their challenges/failures, and how they drove change within their company. But for successful companies, it’s not just about an executive giving a good presentation. It’s about them creating a culture that accepts that change is not just necessary, but the new normal.

For the technologist who views this as marketing, consider thinking about it this way. These new projects are (typically) creating the new face of the company – the new way the company will interact with customers, partners and the marketplace. These projects are no longer just IT projects; they are the product of the company. Like any successful product, they need effective marketing to succeed. This marketing, or storytelling, has to be done across many groups in the company in order to get the breadth of buy-in needed to help evolve the company. This marketing needs to not only show measurable results, but inspire people that the things once thought too difficult are now possible. In essence, this marketing (or storytelling) is trying to capture the time, attention and resources of people who have many other available choices, some of which might be in direct conflict with this specific project. And just like the marketing of other products, this storytelling needs to not only explain the value of success, but also defend itself against “competitive” claims of alternative approaches or expectations of failure.

So as we get closer to the next decade, it’s become clear that the core of business differentiation is technology. Starting a business from scratch is difficult, but it has the advantage of limited technical debt. Transforming an existing business means that the debt will either remain, evolve, or be eliminated. Making that happen, one way or another, will begin with someone having a vision and telling a story. Having the skills to craft and tell that story will become ever more critical as people attempt to move existing businesses forward.


June 26, 2018  3:08 PM

Kubernetes is the Platform. What’s next?

Brian Gracely
APIs, CaaS, Google, Kubernetes, OpenShift, PaaS, Public Cloud, Service Broker

This past week, I gave a webinar titled “Kubernetes is the Platform. Now what?“, based on this presentation. I thought it might be useful to provide some additional context beyond what could be explained in 30 minutes. The purpose of the presentation was to explain how Kubernetes has evolved over the past couple of years, what it is capable of doing today, and looking forward to where new innovation is happening around the Kubernetes platform.

A Brief History Lesson on the evolution of the Platform market

Ever since venture capitalist Marc Andreessen uttered the phrase “software is eating the world” to the Wall Street Journal in 2011, companies of all sizes and maturity levels have been in a race to acquire software development talent. That talent enables startup companies to disrupt existing industries and business models, and that talent can also be used by existing companies to reshape how they digitally interact with customers, partners and their markets.

In order to succeed in the race to become a software-centric business, one of the most critical pieces to have in place is an application development and deployment “platform”. In today’s world, this platform is the digital equivalent of the supply chain and factory that enabled successful businesses of the 20th century. The goal of this platform is not only to simplify the ability for developers to rapidly build and deploy new applications and updates, but also to securely scale those deployments as demand grows and changes.

At the time of Andreessen’s original comments, many companies and communities were trying to solve this problem with Platform-as-a-Service (PaaS) platforms. This included Heroku, OpenShift, dotCloud, Google App Engine, Cloud Foundry, AWS Elastic Beanstalk and several others. While PaaS platforms gained some traction, they suffered from several significant challenges:

  • Tied to one specific cloud platform
  • Limited developer applications to specific languages or frameworks
  • Used proprietary or platform-specific application packaging models
  • Provided limited visibility for troubleshooting applications
  • Provided limited visibility to the operators of the platform
  • In some cases, not open source or extensible

Many of these limitations were resolved with two core technologies – Linux containers (docker) and open source container orchestration, specifically Kubernetes. The combination of these two building blocks set the industry on its current path: a unified architecture that allows a broad set of applications to run, and a foundation for continued innovation.

As Kubernetes has evolved since 2015, it has been able to support a wide variety of application types, from new cloud-native applications to existing applications to big data analytics and IoT. The embedded deployment models within Kubernetes allow it to be intelligent and properly manage the deployment and availability of this variety of application types. This ability to support so many applications on a single platform results in better ROI for the platform, but also simplifies overall operations. And as Kubernetes has evolved, matured and stabilized, it has allowed new innovation to happen around Kubernetes to improve the developer experience, support even more application types, and provide better operations for applications running on Kubernetes.
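As a minimal sketch of what those embedded deployment models look like from the API side, the example below uses the official Kubernetes Python client to create a Deployment (the “web” name and nginx image are illustrative, and a working kubeconfig is assumed). The platform, not the operator, keeps the requested replicas running.

```
# A minimal sketch: asking Kubernetes to keep three replicas of a container running.
# Assumes a working kubeconfig; the Deployment name and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes reschedules/restarts pods to maintain this count
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.15")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same declarative model underpins more sophisticated rollout strategies (rolling updates, canaries) without changing how the application itself is packaged.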

 

Adding Services to the Kubernetes Platform

Beyond the core capabilities of Kubernetes, the community has seen opportunities to innovate in important areas such as application workflows, security, developer tools and service brokers. This has led to new projects within the Cloud Native Computing Foundation (CNCF) that augment Kubernetes.

 

Enabling Services off the Kubernetes Platform

While Kubernetes has done an excellent job of enabling many applications to run in containers on the platform, the world still doesn’t run entirely on Kubernetes. This means that there needs to be a common way to reach services that run off the platform. This is where the Kubernetes community has innovated around the Open Service Broker, allowing integration of 3rd-party services through a broker model. This allows applications to integrate with off-platform services, while Kubernetes operators still have visibility into usage patterns. Brokers for services from AWS, Azure and Google Cloud already exist, as well as brokers for Ansible Playbooks. In the future, we expect that the number of brokers will continue to grow, both from cloud providers and from brokers independently built to serve specific business needs.
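As a hedged sketch of what this looks like from the consuming side: assuming the Kubernetes Service Catalog (servicecatalog.k8s.io) is installed and a broker advertises the hypothetical class and plan shown below, an off-cluster database is requested simply by creating a ServiceInstance resource.

```
# A hedged sketch: requesting an off-cluster service through the Open Service Broker
# model. Assumes the Kubernetes Service Catalog is installed; the class/plan names
# ("example-dbaas", "small") and instance name are hypothetical.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

instance = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceInstance",
    "metadata": {"name": "orders-db"},
    "spec": {
        "clusterServiceClassExternalName": "example-dbaas",  # hypothetical broker class
        "clusterServicePlanExternalName": "small",           # hypothetical plan
    },
}

api.create_namespaced_custom_object(
    group="servicecatalog.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="serviceinstances",
    body=instance,
)
```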

 

Extending the Kubernetes API via Custom Resources

At some point in its evolution, every project must decide how broad its scope will be. Every project wants to be able to add new functionality, but this must always be balanced against future stability. While the Kubernetes community is still innovating around the core, it made a conscious decision to make the Kubernetes API extensible, allowing new innovations to be Kubernetes compatible without expanding the Kubernetes core. This extensibility comes in the form of Custom Resource Definitions (CRDs), and it is already enabling significant extensions to Kubernetes. For example, most of the “Serverless” or Functions-as-a-Service (FaaS) projects – such as Kubeless, Fission, OpenFaaS, Riff, etc. – integrate with Kubernetes through CRDs.
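To make the mechanism concrete, here is a hedged sketch (again via the Kubernetes Python client) that registers a hypothetical “Function” custom resource, similar in spirit to what the FaaS projects above do. The group and resource names are purely illustrative, not taken from any of those projects.

```
# A hedged sketch: registering a hypothetical "Function" custom resource type.
# The group "example.com" and the resource names are illustrative only.
from kubernetes import client, config

config.load_kube_config()

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "functions.example.com"},   # <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "functions", "singular": "function", "kind": "Function"},
        "versions": [{
            "name": "v1alpha1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "x-kubernetes-preserve-unknown-fields": True,
            }},
        }],
    },
}

client.ApiextensionsV1Api().create_custom_resource_definition(body=crd)
```

Once the CRD is registered, the new resource type behaves like any built-in Kubernetes object: it can be created, watched and listed through the same API machinery.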

 

Simplifying Operations with Operators

While Kubernetes does include powerful and granular “deployment” models, those models don’t include all the things that complex applications might need for Day 2 operations. To help fill this gap, the Operator Framework was created to enable applications not only to be deployed (directly or in conjunction with other tools, such as Helm charts), but also to codify the best practices for operating and managing those applications – in essence, building automated operations around those applications. The Operator Framework can be used for core elements of the Kubernetes platform (e.g. etcd, Prometheus, Vault), or for applications that run on the Kubernetes platform (e.g. many examples here). ISVs are already beginning to adopt the Operator Framework, as they realize that it allows them to write their operational best practices once for Kubernetes, which in turn allows their application’s Operator to run on any cloud that has Kubernetes.
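At its heart, an Operator is a controller: a loop that watches a custom resource and reconciles the real world toward the desired state. Real Operators are typically written in Go with the Operator SDK; the Python sketch below only illustrates the shape of that loop, watching the hypothetical “Function” resource from the previous example.

```
# A rough sketch of the watch/reconcile loop at the heart of an Operator.
# Real Operators are typically built in Go with the Operator SDK; this only
# illustrates the shape, using the hypothetical "Function" resource defined above.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

for event in watch.Watch().stream(
    api.list_namespaced_custom_object,
    group="example.com", version="v1alpha1",
    namespace="default", plural="functions",
):
    kind, obj = event["type"], event["object"]   # ADDED / MODIFIED / DELETED
    name = obj["metadata"]["name"]
    # Reconcile: compare the desired state in obj["spec"] with what is actually
    # running (Deployments, Services, backups, etc.) and converge toward it.
    print(f"{kind}: reconciling Function {name}")
```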

 

Kubernetes – A Unified Platform for Innovation

When all of these elements are put together, it becomes clear that Kubernetes has not only established itself as the leading container orchestration standard, but also as the foundation of a unified platform for innovation. The core Kubernetes services are able to run a broad set of business applications, and the extensibility is enabling innovation to happen both on the platform and off the platform. This unified approach means that operations teams will be able to establish a common set of best practices. It also means that a Kubernetes-based platform, such as Red Hat OpenShift, provides the kind of application platform that Andreessen described nearly a decade ago as critical for any business that wants to be a disruptor rather than one of the disrupted.


June 10, 2018  9:40 PM

Looking Ahead – Less Focus on Nodes

Brian Gracely
CoreOS, Immutable infrastructure, Lambda, operators

Back in 2013, I was introduced to a young kid who was the CTO of a startup called CoreOS. His name was Brandon Philips, and he told us that he was building a new type of Linux that had a very small footprint, was focused on containers, and was automatically updated for security. The founders at CoreOS wanted to treat Linux hosts like tabs in a browser, where they could be started or stopped very easily and would always be secure. In essence, they wanted to take the SysAdmin complexities out of the hosts. At the time, the concept didn’t really connect with me, because I was still thinking about hosts as hosts. I couldn’t quite grasp the idea of not having to actively manage (and upgrade) core infrastructure. Maybe it was because he was still calling it “Linux” instead of some other term.

Fast forward to late 2014, and AWS introduced a new service called Lambda, which would eventually become known by the broader concept of “Serverless”. Lambda promised to let developers just write code, while AWS managed all of the underlying infrastructure resources, including security updates and scalability.
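From the developer’s point of view, the entire “server” collapses down to a handler function. A minimal, illustrative Python sketch (the event shape and field names depend on whatever service triggers the function):

```
# From the developer's perspective, this handler IS the application. AWS invokes it
# per event and manages the underlying hosts, patching and scaling.
# (Illustrative only; the event contents depend on the triggering service.)
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```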

In November 2017, AWS introduced a new service called “Fargate“, which attempts to abstract the node-level services under the AWS Elastic Kubernetes Service (EKS) and Elastic Container Service (ECS).

In December 2017, Microsoft introduced the Virtual Kubelet, which lets external services (such as Azure Container Instances) appear to a Kubernetes cluster as a single virtual node, so workloads can be scheduled there without managing the underlying machines.

In May 2018, CoreOS open sourced the Operator Framework, which includes Operators for the elements of a Kubernetes platform, as well as a way to embed Day 2 operations capabilities for applications.

All of these evolutions and new services (along with several others) highlight an emerging trend that takes the automation and operation of nodes to a new level. Not only are they creating “immutable infrastructure” (pets vs. cattle), but they are also hiding many of the administrative tasks needed to manage every aspect of node-level computing. Some of these new capabilities will be valuable to developers (e.g. writing Lambda functions), while others (e.g. Operators) will be valuable to both operations teams and application ISVs.

These new technologies don’t eliminate the concepts of the operating system, or the underlying security capabilities, but they are focused on simplifying how they are managed.

While these new concepts may or may not immediately apply to your existing applications, they may apply to new applications or operational models that will alter how developers or operators do their day-to-day tasks.


May 30, 2018  8:20 PM

The Many Choices for Deploying Applications to Kubernetes

Brian Gracely
containers, DevOps, Kubernetes, OpenShift, operators, Service Broker

This past week on the PodCTL podcast (@PodCTL on Twitter), we received a listener question about a frequent point of confusion: with all the different tools that exist for deploying a containerized application to Kubernetes, how is someone supposed to know which one to choose?

Tools are an interesting space in the open source world. No matter what the broader project is, two things are always true: [1] people love their tools and like them to be specific to their needs, and [2] people love to build new tools, even when a previous tool already did about 75-90% of what they put into their new tool. And since tools are often outside the scope of the main project, sprawl happens easily, without disrupting the main project.

In the case of Kubernetes, there is actually a good reason why there are so many tools available to help deploy applications. In the simplest terms, it’s because many different types of people and groups interact with Kubernetes. Kubernetes co-creator Joe Beda discussed this at a recent KubeCon, and we discussed it on the PodCTL #28 episode. There are developers who deploy applications to a Kubernetes cluster, and operators who also deploy applications to a Kubernetes cluster. Some applications are deployed entirely within the Kubernetes cluster, and others have to interact with services or applications that reside outside the cluster. These various tools and frameworks, from Helm Charts to OpenShift Templates to Service Brokers to Operators, are all focused on the Day 1 and Day 2 aspects of getting containerized applications onto a Kubernetes cluster.

But beyond those tools (and more) is an emerging set of tools focused on making it simpler for a developer to write an application that is Kubernetes-aware. Tools such as these are now beginning to be lumped into a category called “GitOps”.

It’s important to understand that the earlier tools used with Kubernetes tended to be replacements for, or augmentations of, existing config-management tools such as Chef, Puppet, Ansible, Salt and Terraform. The newer generation of tools, such as the Operator Framework, is more advanced, taking into consideration the context of the application (deployments, upgrades, failure scenarios) and leveraging the native APIs within Kubernetes to align with its deployment models. Expect to see more ISVs begin to package their applications as Operators, as it makes it easier to “embed” operational knowledge and gives them a consistent way to deploy their application into a Kubernetes cluster running in any private cloud or public cloud environment.


May 28, 2018  6:08 PM

The Digitally Transformed Business and Ecosystem

Brian Gracely
Business model, containers, culture, DevOps, Digital transformation, Kubernetes

A few weeks ago I had the opportunity to speak at the Gateway 2 Innovation event in St. Louis. For a regional event, it drew a very large audience (1200+) and a diverse set of speakers and topics of discussion. Since it was an executive-level event, I was asked to try to connect the dots between a technology topic (e.g. Containers, DevOps) and a business framework, to help the attendees explain it to their teams or take actions to influence a business challenge.

Since we were in St. Louis, and not Silicon Valley, I wanted to make sure that we took a pragmatic approach, instead of the usual unicorns and rainbows – hence my contrast between the colorful skyline and the muddied waters of the mighty Mississippi River.

Source: Brian Gracely, Red Hat (2018)

The framework of the talk was to compare a traditional business structure/organization, with the evolution of how today’s disruptors see opportunity in the established models.

Source: Brian Gracely, Red Hat (2018)

 

As we begin to compare the traditional structure to a more digitally transformed business, we see that several changes have occurred:

  • Marketing is often distributed (omni-channel) and optimized for each specific channel (Web SEO, Mobile App, etc.)
  • The locality of physical stores, and the value of physical proximity to customers, can be displaced by the Internet.
  • Partners and Suppliers have often merged or are served through a smaller number of entities.
  • Product creation becomes split between in-house creation and those things that can be outsourced, obtained via OEM/ODM partnerships, or are created through open communities.

In addition to the transformation of the end-to-end value-chain, the paths and availability of information and feedback loops evolve. The advantages gained in the traditional model through information asymmetry begin to go away and other forms of differentiation will need to be created.

Source: Brian Gracely, Red Hat (2018)

The next area of focus was looking at how to map the evolution of these supply-chains to the underlying technologies, allowing the audience to begin to connect their technology initiatives to the challenges and changes facing the business.

Source: Brian Gracely, Red Hat (2018)

We finished the talk by looking at an example of a digitally transformed (and continually transforming) industry: online travel. As you can see in the diagram, you can start mapping the companies to the “newer” diagrams and see the new interactions within the industry. Many of these companies have spoken publicly about how they use Containers, Kubernetes and DevOps in production.

Source: Brian Gracely, Red Hat (2018)


May 1, 2018  8:39 PM

Understanding Kubernetes

Brian Gracely
containers, Digital transformation, Docker, Interop, Kubernetes, Microservices

This week at Interop 2018 in Las Vegas, I have the opportunity to speak on the topic of “Understanding Kubernetes“. The Kubernetes ecosystem has grown so large over the last few years that it’s sometimes valuable to step back and look at not only how this community evolved, but also what basic problems the technology solves. In addition, I’ll spend some time explaining the basics of what is included in Kubernetes and what is NOT included. This is an area that is often confusing for many people, as they just assume that the Kubernetes project includes an SDN, storage, container registry, monitoring, logging, and many other elements. This confusion often arises because the CNCF (Cloud Native Computing Foundation), which manages the governance of the Kubernetes project, also hosts a number of other projects that address those additional requirements. The CNCF represents those projects in its Cloud Native Landscape.

Given the amount of time allocated for this talk, we just focused on the basics of how Kubernetes interacts with some of those core elements (e.g. registry, networking, storage, etc.). For a much broader discussion of how the newest projects interact (e.g. Prometheus, Istio, Grafeas, OpenWhisk), it’s best to keep up with all the things happening at KubeCon this week in Copenhagen (and past KubeCon events).


April 30, 2018  10:47 PM

Kubernetes as the Unified Application Platform

Brian Gracely
ai, Big Data, Cloud Foundry, Docker, HPC, iot, Kubernetes, OpenShift

Three years ago, I was working as a technology analyst and was asked to do a technical evaluation of a number of application platforms – at the time it was the intersection of mature PaaS platforms and emerging CaaS platforms.

At the time, most of these platforms were built on fragmented, homegrown orchestration technologies that packaged applications in containers and then made sure they would operate with high availability. Most of these platforms were based on some element of open source, but (for the most part) they all differed in how applications needed to be packaged and how applications would be orchestrated on the platform. Docker was just beginning to gain wide acceptance as a packaging standard for applications, and the market was heavily fragmented between container schedulers (Cloud Foundry/Diego, Kubernetes, Mesos, Docker Swarm, HashiCorp Nomad and a few proprietary others). The platforms all had limitations – from proprietary extensions, to limited language or framework support, to cloud deployment options.

At the time, the Cloud Foundry ecosystem had the largest following, but the platform was limited to supporting only 12-factor stateless apps and didn’t yet support applications packaged as docker containers. Google had just open sourced the Kubernetes project, and Red Hat OpenShift v3 was the only commercial platform to include Kubernetes for orchestration, after replacing the homegrown packaging and orchestration technologies in OpenShift v2 with docker and Kubernetes. Given the immaturity of Kubernetes and Google’s lack of experience with public open source projects, it was hard to imagine the growth that would happen in Kubernetes and that community over the next three years.

Fast forward less than one year and my perspective was beginning to change. I went from believing that “Structured” platforms would dominate Enterprise adoption to seeing that “Composable” platforms were winning in the market. Highly opinionated (“Structured”) platforms were only able to address roughly 5% of customer applications, while more flexible (“Composable”) platforms could not only address those 5% of new applications, but also large percentages of migrations of existing applications. This flexibility led to significantly better ROI for companies adopting these platforms. And recently released market numbers show that the market has been choosing Composable platforms, based on Kubernetes, at more than a 2:1 ratio vs. other platform architectures over the past three years.

A Unified Platform for Any Application

Fast forward three years and it has become clear that Kubernetes is positioned to be the unified application platform for many types of applications, deployed across hybrid-cloud environments. These three years have not only seen Kubernetes mature, but also seen the industry begin to reconcile that the artificial division between PaaS and CaaS was unnecessary. OCI-compliant containers are proving to be the dominant standard for application packaging, either directly by developers, or indirectly through CI/CD pipelines or integrated build features within application platforms. And each day, companies are moving new types of applications into production on Kubernetes.

Source: Brian Gracely (2018)

It took a little while for the market to realize that Kubernetes application platforms could run more than just new cloud-native applications, but now it is realizing that containers are also a great vehicle for migrating existing (stateful) applications. For many companies, this not only provides a modernization path for 60-80% of their application portfolio, but it also unlocks the ability to significantly reduce costs from previous infrastructure decisions (e.g. virtualization, lack of automation, etc.). Beyond those applications, which drive business-critical functions today, we’re seeing new classes of applications being deployed on unified Kubernetes platforms – from Big Data to IoT to Mobile to HPC to Serverless. And the recently released “Custom Resource Definitions” and “Open Service Broker” extensions are expected to unlock a whole new set of vertical-market opportunities, as well as integration with emerging public cloud services such as AI/ML.

As the operational experience with a breadth of application classes on Kubernetes application platforms grows, these learnings will soon be codified and automated as native services within the Kubernetes platform. This will not only enable applications to be consistently deployed across any cloud environment, but will deliver as-a-Service experiences for developers that are not dependent on a specific cloud platform. Driving consistent Application Lifecycle Management across any cloud environment will significantly increase the efficiency and agility for both developers and operations teams.

AND Better Operations

While the developer and application communities have gotten onboard with Kubernetes application platforms, the ability of operations teams to consolidate many types of applications onto a consistent set of operational tools (automation, networking, storage, monitoring, logging, security, etc.) is a huge benefit to their businesses. Not only are these operational tools based on lower-cost open source software, but they are also driven by broad communities of experience and expertise. The ability to learn from the community about a broad set of deployment, upgrade and Day-2 operational scenarios will help accelerate the learning curves of all operational teams, reducing their costs and time to expertise. Application platforms based on multiple orchestration technologies drive up cost and complexity for operational teams by not allowing them to standardize on tools and infrastructure, in addition to driving up expenses in hiring and training for experienced personnel.

The new standard for operational teams will be to significantly reduce the gap between the speed and efficiency of current private cloud (data center) operations and similar experiences in the public cloud. The growing use of immutable infrastructure, from the OS to the platform, will be a key part of closing that gap.

Standard Foundations, Consistent Operations

In the world of security professionals, one of the most significant recent challenges has been the decomposition of the security perimeter. New devices, new cloud services, and new working patterns have forced security teams to drive consistent security policies and processes from the data center edge to every operating environment (office, branch, teleworkers, partners, cloud).

For application platforms, the same challenges are being faced as more and more companies choose to leverage both private cloud and public cloud resources to deliver on business goals. This decomposition of the data center, from a single operational model to a hybrid cloud operational model, will be a critical success factor for many IT organizations.


April 22, 2018  11:06 AM

Being a Technical Advocate / Evangelist

Brian Gracely
Blogs, cloud, Github, Open source, podcasts, YouTube

Back when I started working for technology vendors, the landscape was much different than it is today:

  • Vendors tended to focus on a specific technology area; hence their portfolio was primarily made up of various sizes and price-points of the same technology (e.g. security, database, storage, networking, etc.).
  • The ecosystem consisted of Vendors – Distributors/Resellers – Customers. Everything originated from the vendors (e.g. products, documentation, reference architectures). The flow of information was primarily a one-way path, with only the largest customers giving direct feedback (and exerting buying influence) back to vendors.
  • Blogs, podcasts, free video distribution and GitHub didn’t exist. Meetups weren’t independently organized. Most conferences were very large and happened once or twice a year.

During that time, the people most responsible for the intersection of technology knowledge, content creation and interacting with the marketplace were “Technical Marketing Engineers” (TMEs). Vendor or reseller “Systems Engineers” also did this, but they carried a sales quota, so they often didn’t have much free time to create content. The TME job was a strange combination of technology expert, jack-of-all-trades communicator and content creator, and a role that never fit neatly into an organizational chart. The oxymoronic platypus combination of “Technical” and “Marketing” meant that the product engineering teams never really thought they did engineering, and the product marketing team never thought they had proper marketing skills. And TMEs usually needed lab equipment to be able to validate features, create demos, and write reference architectures, but ownership of lab funding was always a hot-potato topic when annual budgets came around. The engineering teams would fund them if they could get some DevTest cycles out of them, and Marketing teams always wanted the work done without the burden of taking any dollars away from their typical marketing spend. This strange dichotomy of responsibility, funding and expectations led to frequent shuffling in org charts, and frustration about career paths.

Fast forward a decade+ and the markets changed significantly:

  • There is still quite a bit of information that flows from Vendors – Resellers – Customers, but there is now a ton of independently created content that originates from communities of people who share a common interest in various technology subjects. This could be independent consultants, independent developers, industry analysts, bloggers/podcasters, resellers, customers or any number of other sources.
  • The rise of blogs, podcasts, free video distribution and GitHub makes sharing and distributing ideas and code incredibly simple. People with knowledge can come from anywhere, with very little investment other than their passion and time.
  • Whether it’s open source software, free tiers of public cloud services, or vendor trials, the ability to get access to technology has never been easier. The need for massive, hardware-centric labs is a thing of the past.
  • More and more technology is being created in open source communities, so vendors have to be much more attuned to the input and demands of these communities.
  • These new technologies are emerging much faster than before. Keeping up with the new technologies and changes is proving to be more and more difficult for many companies.

With these changes taking place, companies decided that they needed to create roles that would be more involved with these communities and customer challenges. This created the rise of “Evangelist” and “Advocate” roles. Some of these roles focus on operations-centric technologies, while others focus on developer-centric technologies. Just like the mythical Full-Stack Engineer, it is very difficult to find many people who can be an Advocate/Evangelist for both operations and developer interests.

The functions of an Advocate/Evangelist can vary significantly, based on experience, skills and interest of the hiring company. In general, they will perform these functions:

  • Have deep technical skills in a specific area, or group of technologies
  • Create presentation material and provide talks at industry events (conferences, meetups, etc.).
  • Create technology demonstrations and present them at industry events (conferences, meetups, etc.).
  • Have the ability to speak to a wide variety of operators, developers, and prospective customers about a wide variety of topics related to your focus domain (and sometimes others).
  • Create on-going content for blogs, podcasts, videos, and demos that are widely shared.
  • Be willing and able to travel to events around the country or around the world.
  • Be able to work on vendor-centric activities at conferences and meetups (e.g. staffing the booth at tradeshows).
  • Be able to join customer calls with account teams to be a subject-matter-expert.

And like TMEs, these roles can be great for people that enjoy wearing many hats.

  • Travel to interesting cities
  • Meet lots of interesting people
  • Work on leading-edge (and often bleeding-edge) technologies
  • Educate and inspire people

And like TMEs, these roles can be very difficult to explain and sustain, as they don’t always create obvious metrics or results that can be immediately tied back to ROI-justified business outcomes.

  • Travel is (typically) a constant part of the role. And the travel can often involve many, many hours of journey for very few hours of interaction. This can be hard on your health and your family.
  • The cost of the role can become quite expensive when ongoing T&E is included. It’s not unusual to incur $30-100k in expenses, depending on travel distances (international), conference attendance (tickets) and the frequent lack of planning (e.g. “we need you in Europe in 2 days”).
  • It’s not always clear where the roles report in an organization. It could be sales (e.g. “Field CTO”), marketing, product groups, office of CTO, etc. Each type of group may have different expectations and success metrics.

Having been an Advocate/Evangelist and managed teams of both TMEs and Advocates/Evangelists, I can say that dealing with all of these dualities can be very complicated. How do you justify budgets for teams that don’t carry a direct sales quota or marketing funnel? How do you create a career path for people/groups that are appreciated within a given organization? How do you prevent burnout of people who are in high demand and often don’t have enough time to create content/demos/technology? How do you keep them motivated to stay at your company when other companies with newer technologies come along?

I don’t have the answers to these questions. The answer was often “it depends”, based on lots of individual reasons and contexts. It was often “you need to control your career path, but I’ll provide you the opportunities to learn the necessary next skills to get there”. It was often “I know they love your work, but apparently not enough to recognize it with the proper budgets.”

These roles, whatever the name, can be great for a few years at a time. The trick is to be aware that they are fundamentally “overlay” roles that are put in place until the knowledge of a few can be transferred to the knowledge of many. The other trick is knowing when that transition is coming and planning your next steps accordingly. Maybe it’s getting on the next wave (pioneering), and maybe it’s being part of the expansion and stabilization of that trend (settler).


March 31, 2018  8:57 PM

Adding Artificial Intelligence into Software Platforms

Brian Gracely
ai, Artificial intelligence, Machine learning, ml, Zugata

Image Source: FreeImages

It’s difficult to go a day in the tech industry without hearing a prediction (e.g. here, here, here, here, and many, many more) about Artificial Intelligence or Machine Learning. Jobs for these skills are in high demand and companies in nearly every industry are trying to figure out how to embed these capabilities into their products and platforms before their competition.

The question for many CIOs or Product Managers is, “How do we add AI into our platforms?”. Do they hire a few PhDs or Data Scientists? Do they try using one of the AI/ML services from public cloud providers like Google, AWS or Azure? Or is there some other path to success?

Recently I asked this question of Srinivas “SK” Krishnamurti (@skrishna09; Founder/CEO of Zugata), as his company recently announced a new AI service to augment its existing SaaS platform. I wanted to understand the complexity of the technology, the difficulty in finding engineering talent, and how to make the product more attractive with the embedded AI capabilities.

The first thing we discussed was what types of business (or customer) problems AI/ML could potentially solve. SK highlighted that it was important to understand who would be using the software and how much experience or expertise could be assumed about their use-cases. Once this was a well-understood domain, it was possible to understand if or how AI/ML should be used.

The next thing we discussed was how AI/ML advances would be perceived in those use-cases. Would they create a measurable difference from what could be accomplished manually today, and would the difference be perceived as valuable enough to justify the investment?

Once we got past the business value and use-cases, we began to focus on how to find the right staff to start the AI/ML process. SK shared with me that their journey lasted well over a year before they began to feel confident that their efforts would be valuable. This included hiring talent, looking at data models, building the models, and the long process of training the models with data. He said that the longest amount of time was spent training the models, as they had to frequently ask themselves if they were biasing the system to get the answers they believed were needed vs. the system coming to those answers by itself.
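One simple discipline implied by SK’s point about bias is to evaluate models on held-out data rather than on the data used to build and tune them. The sketch below is a minimal, generic illustration of that idea using scikit-learn; the dataset and model are placeholders, not anything from Zugata’s system.

```
# A minimal, generic sketch of held-out evaluation (cross-validation), one guard
# against "training the model until it gives the answers we expected".
# The synthetic dataset and logistic regression model are placeholders only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                          # placeholder feature matrix
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)    # placeholder labels

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)             # 5-fold cross-validation
print(f"held-out accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```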

Finally, we talked about the challenges of building AI/ML models that were influencing non-human systems (e.g. electronic trading or IT Observability systems) vs. systems that would directly impact human decisions (e.g. hiring, firing, evaluating emotional state, etc.). He said that this added yet another layer of complexity into their analysis of their models, as again they needed to make sure that a broad set of scenarios were being properly evaluated by the system.

It was clear to me that there is no single way to add AI/ML into a software platform, but the guidelines and guidance from SK may prove to be valuable to the readers as they begin to explore their journey to improve their software platforms. I’d love to hear of any experience that the readers have about their own systems.

