When you attend quite a few technology conferences, you tend to hear the same messages and narratives over and over again. Stop me if you’ve heard this one – To drive a digital transformation within your business, you’re going to have to become a software company that uses DevOps to build cloud-native applications using microservices and continuous integration on immutable cloud infrastructure. Your company needs to become like Uber, Netflix or Airbnb in order to avoid getting Uber’d in your industry.
At a recent show, an Enterprise Architect came up to me and asked a straightforward question – “Assuming I could figure out how to make all that technology work, how would I explain it to our business leaders in a way that they understand it… in business terms, not technical terms?” I took a stab at trying to explain that here, along with an ROI model and a real transition example. [Disclosure: I work for Red Hat and had all those examples handy – this blog isn’t supposed to be a Red Hat advertisement.]
I’m reasonably versed in how to talk to technical and business audiences because I’m a weird mutt: I’ve been a solution architect and I have an MBA. All that means is that I know that sometimes you need to talk about Agility vs. ROI vs. Cost of Capital vs. Internal Rate of Return. They all relate to similar things – are we getting measurable value out of the money and effort we spend, in the context of the other areas where we could spend that time and money?
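To make those finance terms a bit more concrete, here is a toy calculation showing how the same project can be framed as simple ROI or as NPV against a cost of capital. All the dollar figures and the discount rate are invented for illustration.

```python
# Toy comparison of two ways to evaluate the same hypothetical project.
# The cash flows and the 10% cost of capital are made-up example numbers.

def roi(gain, cost):
    """Simple ROI: net gain relative to cost."""
    return (gain - cost) / cost

def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical automation project: $100k up front, $40k saved per year for 4 years.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]

print(roi(sum(flows[1:]), -flows[0]))   # 0.6 -> 60% undiscounted ROI
print(round(npv(0.10, flows), 2))       # positive NPV at a 10% cost of capital
```

The point of the two framings: ROI ignores the time value of money, while NPV tells you whether the project beats simply deploying the capital elsewhere at your cost of capital.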
So this brings me back to that question. If, as a technology industry, we’re asking engineers to be interested in digital transformations, then we need to give them the basic tools and language to be able to explain to a business leader how some piece of new technology (or process) will improve the business. Not just how it saves technology or process costs, but how it truly impacts the company’s revenue growth and profitability. These are essentially “Technical MBA” skills, for lack of a better term. And I’m not aware of any place that currently offers these frameworks or skills. I’m sure there are 50-page documents or spreadsheets that a high-priced consultant would provide you for a large fee, but shouldn’t this be something that is freely available to help our industry succeed and expand?
I’m willing to start building some of this content and putting it on GitHub, but I’m curious whether other people think this is a needed set of knowledge. If so, what do you think should be included? I have some ideas, but I’d love to get your feedback or suggestions. And if you’re interested in participating, please reach out to me and we’ll figure out a way to collaborate.
This isn’t a deep-thoughts piece or a “hot take” on the election. There will be no shortage of those to fill your time, if you choose. This is simply an observation based on a few things I’ve seen and heard over the last few months – and then connected some dots while watching election coverage the other night.
For all the reasons* that people will claim for why the results of the election happened, one thing that appears to be “true” is that a large portion of the population is aligned to an economically struggling model (especially around manufacturing) and they have felt neglected or marginalized by another group of people. This is sometimes tough for the technology crowd to understand, especially if you’ve never driven through a Rust Belt town that has been decimated because of a factory closing. “Just get re-trained” isn’t really a viable option for many of these people, for a wide variety of reasons. These people were told that the business world moves fast and that they should get on board with models that seem to replace the need for them.
* NOTE: I’m fully aware that there are many other issues/causes that impacted the results of the election. I’m not trying to minimize them or debate them here.
As I was attending several tech conferences recently, the topic of “DevOps” came up frequently. The discussions were about customers that wanted to “do some DevOps” or “add some DevOps”, usually because their management wanted them to understand that the business world moves fast (and they need faster software, or better quality software). Now if you’re a fairly skilled SysAdmin, focused on the Infra/Ops for compute, then the DevOps push to automate all the things isn’t that big of a leap for you. You most likely have some of the basic skills needed to make this transition – understanding of Linux, basic scripting skills, etc. But if you’re from the rest of the Infra/Ops team, responsible for things like Networking, Storage, Virtualization, Security, etc., then you might be feeling like one of those Rust Belt workers. Your vendors haven’t really given you the tools to do all the necessary automation, and in some cases, they are also struggling to stay viable as these new DevOps approaches impact their existing customers (e.g. “Software Defined Everything”). Those people keep hearing that the skills and tools they have worked with for 10+ years are now “commodities” and should be marginalized or ignored.
I’m not sure if DevOps is the way forward for many IT organizations, mostly because I can rarely find two people that have the same definition or model of what DevOps is. There’s the Gene Kim “Phoenix Project” model, but I don’t see that out in the wild as much as I see the book on people’s desks. There are probably lots of reasons for that, but it seems like one of them might be that the DevOps world tends to treat the non-experts as a marginalized class of IT. The “they just don’t want to learn” set of people.
It’s not a perfect parallel or analogy, but since DevOps likes to draw Lean Manufacturing parallels to software development, I believe that we also need to be cognizant of the people that are part of the factory – not just the processes. They are being told that they should get on board with models that seem to replace the need for them.
Last week, VMware and AWS announced that they are working on a new service to deliver VMware technology from AWS’ cloud – called “VMware on AWS”.
We talked about this with Greg Knieriemen and Keith Townsend on The Cloudcast.
The strategies for VMware and AWS are becoming clearer:
VMware has been looking for more ways to control how their SDDC stack is deployed, as well as ways to downplay the role of the underlying hardware. They are focused on displacing the functionality of hardware-centric compute, networking and storage, and downplaying the focus on cloud management (e.g. the vRealize Suite). They have been getting pressure from customers to better define an IaaS cloud strategy, and they now have solid partnerships in place with IBM and AWS.
AWS has attracted developers and startups, but struggled to attract the “traditional IT” that is aligned to VMware, Oracle and Microsoft workloads. This partnership now provides a way for customers to potentially migrate entire sets of data center resources to AWS, as well as getting VMware to endorse AWS as a viable destination for Enterprise workloads.
What is Hybrid Cloud between now and 2018?
The VMware on AWS offering is still in beta/preview, with GA scheduled for some time in 2017. Since this is being targeted at Enterprise customers, you can expect any uptake to happen later in 2017 or into 2018. Many details still need to be filled in by VMware, especially in the areas of pricing, licensing transitions (for migrated workloads), and integrations with AWS services.
For many companies, this offering will be compared to the Microsoft Azure Stack, which is also supposed to GA in mid-2017 (after originally being scheduled for late-2016). This will require updates to customers’ on-premises Windows Server environments, which traditionally lag behind the GA dates.
This means that both offerings realistically have 2018 timelines before we hear about mainstream adoption. And both of these offerings are primarily based on simplified IaaS services (compute, storage, networking), but we’re seeing more and more C-level executives that are focused on Digital Transformations and evolution of how they develop software applications. Will we see greater adoption of PaaS and CaaS (e.g. CloudFoundry, Docker DataCenter, Kubernetes, Red Hat OpenShift, etc.) platforms before these offerings become viable in 2018?
Will more MSPs move to AWS?
If you follow the writings of Ben Thompson (Stratechery.com; Exponent podcast), you know that Amazon will often experiment with a new platform idea before expanding its reach at greater scale. The VMware on AWS offering is much closer to a Co-Lo or Managed Services offering than a Public Cloud offering. Is AWS using the VMware on AWS model as an experiment to attract more and more existing Co-Lo and MSP customers to their platform? The MSP market is highly fragmented and many of them don’t have the resources to continue to invest in non-differentiated data center facilities. Is this deal just the precursor to AWS becoming the de facto server provider to the MSP ecosystem?
What is the Dell/EMC stance on AWS?
Even before the Dell-EMC merger, it was often difficult to figure out the strategic focus of the EMC Federation of companies. EMC wanted to sell hardware on-premises. VMware wanted to commoditize hardware and wanted to create a homogeneous “cloud” ecosystem of all VMware SDDC. Pivotal wanted to abstract away any infrastructure or cloud and focus on a platform for developers. In general, their one commonality was a competitive disdain for AWS, either directly or indirectly. And Dell generally shared that competitive posture, choosing to be more closely aligned to Microsoft. But now that has changed. One of the most valuable brands within Dell Technologies is now aligned with AWS and IBM cloud offerings. Both Pivotal and Dell-EMC are getting more aligned to Azure or Azure Stack, but VMware has no current alignment to their one-time foes in Redmond. So where does this leave a customer that has interest in potentially using Azure in a Hybrid Cloud environment?
Back in 2007-2009, as the awareness of cloud computing was growing, you couldn’t go a couple days without hearing about the killer use-case for cloud – “cloud bursting”. This magical ability of the cloud to make sure that your website could manage the rush of Black Friday shoppers.
Many years later, we’re (mostly) past the talk of cloud bursting. But now the buzzword universe is obsessed with the Internet of Things and the trillions of dollars of value and insight it will unlock for future generations. And with this promise of technology nirvana come the new use-cases that will help you understand why it’s needed for your business.
Let’s take a look at a couple of examples that I’ve recently seen that have left me scratching my head.
The Internet Connected Appliance
At first glance, this is a very interesting approach to leveraging Artificial Intelligence (AI), Serverless Computing, and IoT to create a maintenance program for the filters within refrigerators. Using sensors to do predictive maintenance on remote devices is potentially a “killer application” of IoT and AI. And the serverless angle is very appealing as well. We’re actually spinning off a new podcast (The Serverlesscast, @serverlesscast) soon to explore this area in more depth.
But the thing about this example that had me questioning it was the actual value to the end-customer. We’ve all heard about connected homes for at least a decade, but actually making that work has proven to be extremely complicated – and left many of us having to play tech-support for our friends and parents. In this case, the following things are needed:
- Networking on a device, which also needs a UI to program it to join the local WiFi (and hopefully use secure passwords and protocols to connect). Plus an extended tech support model to answer questions from non-techies that just want a new water filter.
- All of the serverless elements to be programmed and integrated together.
- All of the AI logic to be programmed to “be trained” on the behaviors of the refrigerator over time.
Within my own home, I recently bought a new refrigerator. For the water filter replacement, GE gave me the option to have a replacement sent every 6 months for a fixed fee. The model works great – I don’t have to worry about the filter AND I don’t have to worry about any of the networking or applications that could break when it’s time to get a new filter. Might I only need it every 7 months instead of 6? Sure, that’s a possibility. But it’s a frictionless model for the consumer, hence there is value for me.
The Roads will Brake the Cars
I saw this one in The Register this week and I just don’t know what to think about it. It’s one thing to have Tesla build a nationwide network of self-owned superchargers for electric cars. It’s another thing to think that our highway system, which is massively underfunded as is (and constantly under repair), is going to get “embedded braking systems”. This to me is the new cloud bursting example for IoT.
All of this might sound a little bit cynical about IoT. Fair enough. And just so you don’t think I’m leaving you with nothing but bad demo ideas, here’s one that seems pretty powerful and useful – http://devpost.com/software/hazel
There are lots of good things happening with IoT these days – just be careful which types of stories you believe.
Earlier today I received an email from a friend that contained this simple, yet complicated question. The person doesn’t work for any vendor, so the focus of the question wasn’t specific. The question was more of a response to the breadth of announcements from Oracle, Microsoft and Google over the last couple of weeks.
Oracle announced that they were (soon) going to launch a brand new IaaS cloud and attempt to compete directly with AWS. As expected, the Twitterati was very skeptical of their ability to deliver, as was Ben Thompson of Stratechery. If you don’t already listen to Ben’s “Exponent” podcast, be sure to add it to your favorite pod-catcher. Oracle had been making steady progress in SaaS and PaaS revenues, but IaaS isn’t really their core area of focus. Are they getting distracted?
Microsoft announced that AzureStack will eventually ship, but before that, they are strengthening their partnership with Docker by embedding Windows containers in Server 2016. This got some people excited, but Richard Seroter made a good point on the Pivotal Conversations podcast by highlighting that Windows Server 2003, 2008 and 2012 currently hold 87% market share – meaning that it might be 10 years before Windows Server 2016 becomes mainstream in the Enterprise. Given that Microsoft Azure is already 33%+ Linux, and products like PowerShell and SQL Server are moving to Linux, will Windows Server 2016 ever gain major traction in the Enterprise?
Google announced a bunch of new technology enhancements to the Google Cloud Platform, including major database and AI capabilities. They also announced that Google Container Engine (GKE), based on Kubernetes, was running the popular Pokemon Go! platform – requiring massive scalability. But then Google proceeded to do Google things by renaming “Google for Work” to “G Suite”. Not only were Nate Dogg and Warren G not consulted, but Google also has to try and convince Enterprise IT that their massive scale is relevant to Enterprise-scale problems.
And we’re still about 2 months from AWS re:Invent.
While all of these large cloud providers have massive cash assets and are making announcements, none of them have really delivered a home run recently. Azure seems to be gaining Enterprise mindshare, but they still haven’t fully realized how to leverage their massive installed base and sales force. Oracle also has a massive installed base, but getting them to migrate to the cloud will not be an easy process. Years of customization will be difficult to move to standardized cloud environments. And Google has awesome technology, but the market continues to ask them if they are still serious about delivering cloud services.
So to answer my friend, the market is evolving with offerings that will compete with AWS. Not all of them will be effective, but we’re moving into a new stage of public cloud usage where more Enterprises view it as a viable option. It’s still unclear if those Enterprises will have the same needs or affinity for AWS as customers of the last 5-7 years have.
As everyone’s favorite genius Sir Isaac Newton once said, “for every action, there is an equal and opposite reaction.” Back then, software was not eating the world, but obviously he had the foresight to realize that Marc Andreessen’s famous proclamation would have ramifications on the hardware side of the technology industry.
It would be easy to look at the recent rash of Private Equity transactions and assume that they were driven by the general commoditization of hardware. But IMHO, there are more things at work here.
The software-eating-the-world trend is being driven by (and is driving) three key factors:
- The availability of relatively frictionless public cloud resources.
- The growth of open source software projects which enable powerful access to technology which drives Big Data, Mobile and Web scale applications and architectures.
- Startup companies “disrupting” existing industries by putting the Internet between customers and their service, removing the friction of many layers of sales channels and distribution.
The guys over at the Software Defined Talk podcast did an excellent job of reviewing several of the recent Private Equity transactions (Dell/EMC, HPE Software, Rackspace, etc.). As you can see, they aren’t all hardware-centric businesses, but the cause of their disruption is tightly coupled to the three elements I mentioned above.
In essence, these companies that get acquired or are doing deals with Private Equity (directly or indirectly) are struggling with the transition where those three factors are impacting their business, or struggling to manage the breadth of their portfolios. Over time, as more companies attempted to build (often via acquisition) a “complete stack” set of solutions, many have struggled to also create a sales and marketing model that targeted the expanded list of buyers at large customers. Their models attempted to mix hardware, software, professional services and various consumption models (CAPEX, OPEX, Subscription).
Now all of these Private Equity deals are attempting to provide cash back to the vendor companies. How the vendors will deal with the new capital is still TBD. Will they use it to make new acquisitions that are more closely aligned to their core business? Or will they use the money to do financial engineering for shareholders or debt holders? If nothing else, it will make the transparency of these companies much different for the market and customers.
Not only does this leave the market with many questions about the future of these operating models, but it also creates several new questions:
- What happens to the technologies that were sold to the Private Equity companies?
- Will we see the Private Equity trend expand to some of the larger, traditional companies who have seen top line growth rates near “0” or negative for the last few years?
- Are there any great opportunities for young leaders that want to revitalize a business that was sold to Private Equity?
These large shifts in ownership will make it very interesting to watch the levels of investment and innovation over the next 3-5 years. At a time when many end-user customers are trying to drive their technology agenda faster, many vendors are taking a step back to try and figure out how to adjust to this faster paced market.
It wasn’t long ago that if you wanted to get some perspective on the size of the revenues of the cloud computing market, you had the following options:
- AWS didn’t report their numbers, so you could make some educated guesses about their “Other” number in SEC filings.
- Microsoft didn’t break out the individual products, but rather they lumped them into broader categories.
- Gartner provided detailed capabilities reports as part of their IaaS MQ, as well as some trajectory concepts, but didn’t offer a breakdown of revenues.
- Many publicly traded technology vendors talked about “cloud” solutions and leadership in cloud, but almost never broke out cloud revenues as a specific number.
- Many mid-tier cloud providers are privately owned, or part of a larger conglomerate, so they tend to not break out cloud-specific revenues.
So if you wanted to gauge the size of public cloud or private cloud market, the options were somewhat fragmented and the results tended to be foggy. Some analyst firms attempted to size portions of the market, but that often brought criticism from technology vendors that didn’t believe they had counted enough of their technology portfolio revenues – without giving any additional guidance to the analyst community.
Some of this is understandable, as many companies do not break out revenues below a certain level (e.g. $100M, $500M, or $1B), depending on the size of their company. Still others may not be disclosing this information because their products aren’t doing as well in the market as they would like you to believe.
But some things are beginning to change – some for the good and some for the bad – at least from the transparency perspective.
- Amazon now breaks out AWS revenues each quarter.
- Microsoft now breaks out Office 365 revenues, as well as putting their Azure revenues into the Intelligent Cloud bucket – which also includes SQL Server and Windows Server (on-premises) revenues.
- Oracle breaks out their SaaS and PaaS revenues for Oracle Cloud. They sort of break out their IaaS revenues, but these will likely start including revenues from the Oracle Cloud Machine, which lives on-premises – which leads us to the challenge of what counts as “Oracle Cloud” (on-prem, public cloud, some hybrid combination??)
- Google/Alphabet does not break out Google Cloud Platform revenues.
- Dell Technologies, owners of VMware (vCloud Air), Pivotal and Virtustream, no longer have to disclose their revenues to the public after their merger with EMC closed on September 7th.
- Rackspace was recently acquired by Private Equity firm Apollo, so they no longer have to disclose their revenues to the public.
- HPE recently sold their software portfolio to Micro Focus, but is apparently keeping their Helion Cloud business under HPE. HPE does not break out the revenues for the Helion business.
So now we have AWS as the most transparent guidepost of cloud revenues, and the hardware vendors moving more towards private ownership and limited (if any) revenue disclosure. I talked about this (and many other industry topics) with Keith Townsend (@CTOAdvisor) on a recent Cloudcast podcast.
Given the changing landscape in financial transparency, it will be interesting to see how customers adapt to working with vendors. Do they continue to believe vendor claims about market-share, or do they begin to shift more focus towards open-source projects and track community participation as a more transparent metric of growth trajectory?
These days, you can’t attend any event without seeing some slides about “Digital Transformation” and how companies like Airbnb, Uber and Netflix are disrupting all sorts of existing industries. At the core of these discussions is the premise that all companies need to become software companies (e.g. “software is eating the world”). And the types of software applications that companies need to build are called “cloud-native” and the applications are architected using “microservices”.
All of this sounds cool, but I find that so many of the people that discuss this on an everyday basis are very technical and assume that the audience has a similar level of experience and knowledge. So I thought I would take a functioning cloud-native application, using microservices (and lots of other stuff), and break it down into simple terms that hopefully make it easier for people to understand.
[Disclosure: This video/demo is from the Red Hat Summit 2016 keynote. My day job is with Red Hat. The reason I chose this video is not because of the company/products, but because it is long enough to see all of the moving parts – both on the developer and operator side of a set of applications.]
The applications come from this video, and start around the 43:00 mark.
This is a customer-facing application. The application has the following characteristics. Think of these as the basic inputs that your product team might provide to the development team:
- It can be accessed as a browser-based application or a mobile-application.
- It involves user-interaction which updates the experience in real-time.
- All user-interactions are tracked and stored in real-time, and based on the interactions, the system will apply business-logic to personalize the experience.
- Both the technology teams and the business teams have visibility into the data coming from the user-interactions.
- The application must have the ability to be frequently updated to be able to add new features, or be modified to adjust the experience based on interaction data.
Looking at these requirements, they include some basic elements that should be at the core of all modern applications. [Note: I created this framework at Wikibon, based on interactions with many companies that have been leading these digital transformations and disruptions.]
- The API is the product. While parts of any application may live in a local client, the core elements should be interacting with a set of APIs in the backend. This allows user flexibility, and broader opportunities to integrate with 3rd-party applications or partners.
- Ops are the COGS. For a digital product, the on-going operational costs are the COGS (Costs of Goods Sold). Being able to optimize those COGS is a critical element to making this digital product a success.
- Data Feedback Loops. It’s critical to collect insight into how customers interact with the business digitally, and this data must be shared across technical and business team to help constantly improve the product.
- Rapidly Build and Test. If the product is now software, it is important to be able to build that software (the applications) quickly and with high quality. Systems must be in place to enable both the speed and the quality. Using data as the feedback loop, knowing where to build new aspects of the software will be clearer than before.
- Automated Deployments. Once the application has been built (or updated) and tested, then it’s critical to be able to get it to the customer quickly and in a repeatable manner. Automation not only makes that process repeatable (and more secure), but it helps with the goal of reducing the Ops COGS.
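The “Ops are the COGS” idea above can be made concrete with a toy unit-economics calculation. All of the per-user figures below are invented for illustration, not real benchmarks:

```python
# Toy unit economics for a digital product, treating operations spend
# as the cost of goods sold. All dollar figures are invented examples.

def gross_margin(revenue_per_user, ops_cost_per_user):
    """Gross margin when ongoing operational cost is the COGS."""
    return (revenue_per_user - ops_cost_per_user) / revenue_per_user

# Before automation: $10/user revenue, $4/user in ops (compute, support, deploys).
print(gross_margin(10.0, 4.0))   # 0.6
# After automating deployments, ops drops to a hypothetical $2.50/user.
print(gross_margin(10.0, 2.5))   # 0.75
```

This is why optimizing operational costs is not just an IT concern: every dollar shaved off ops flows directly into the gross margin of the digital product.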
The Developer’s Perspective (starting at 50m 0s)
The demonstration starts by looking at this application from the perspective of the application developer. In this case, the developer will be building a Java application on their laptop. In order to make this simpler for them, they use a few tools. Some of the tools are directly on their laptop and some of them are running as a cloud service (could be private cloud or public cloud):
- “Project” – The local representation of the application on the developer tools.
- “IDE” (Integrated Development Environment) – The local tools on the developer’s laptop + tools running on a server to help them build the application.
- “Stacks” – The groups of software/languages/frameworks that the developer will use to build the application.
The tools provided to the developer are there to make sure that they have all the things needed to build the application, as well as making sure that their software stack aligns to what is expected by operations once the application eventually gets to production.
Once that is in place, the developer can then begin writing (or updating) the application. When they have completed a portion of the application, they can move it to the system that will help integrate it with the other pieces of the application:
- “Microservices” – The smaller elements of an application that perform specific tasks within a broad application experience.
- “Pushing to Git” – Updating the application into a Git or GitHub repository, so that it can then be stored, tested and eventually integrated with other pieces of the application.
- “Continuous Integration” (CI) – A set of automated tests that look at the updated application and make sure that it will run as expected.
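Conceptually, a CI gate boils down to a simple rule: run every automated check against the updated code, and only integrate the change if all of them pass. A minimal sketch, where the check functions are stand-ins rather than any real CI tool’s API:

```python
# Minimal sketch of a CI gate: a change is integrated only if every
# automated check approves it. The checks below are illustrative stand-ins.

def unit_tests_pass(change):
    # Stand-in for actually running the test suite against the change.
    return change.get("tests_green", False)

def style_check_passes(change):
    # Stand-in for a lint/static-analysis step.
    return change.get("lint_clean", False)

def ci_gate(change, checks=(unit_tests_pass, style_check_passes)):
    """Return True only if every automated check approves the change."""
    return all(check(change) for check in checks)

change = {"id": "abc123", "tests_green": True, "lint_clean": True}
print(ci_gate(change))  # True: the change can be integrated
```

Real CI systems add a lot around this core (build isolation, artifact storage, notifications), but the "all checks must pass before integration" rule is the essence.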
NOTE: There was a small section in the demo where the developer made a small change/patch to the application. The small change then went through the “pipeline” and eventually got integrated and moved to production. This was done to show that updates to applications don’t have to be large (or any specific size) and can now safely be done at any time of the day – no more need for outage windows.
The “Pipeline” (starting at 56m 0s)
This part of the demonstration is focused on the guts of the “bits factory”, the place where the software gets tested and reassembled before it’s ready for production use.
- “Continuous Integration / Continuous Deployment” (CI/CD) – A set of tools that take application updates from developers, automate the testing of that software with other pieces of the broad application, and package the validated software in a way that can be run in production.
- “Pipelines” – The on-going flow of application updates from developers. In an agile development model, these updates could be occurring many times per day.
- “QA” – Quality Assurance – The part of a pipeline responsible for quality testing. Hopefully this is where software bugs are found, before going into production.
- “Staging” – An intermediate step in a pipeline between QA and Production, where a company will try and emulate the stress of production. Sort of like a dress-rehearsal for a theater play.
- “Blue / Green Deployment” – Even after being QA tested, when application updates are available, there needs to be a way to deploy them and make sure they work properly before completely removing the old versions. Blue/Green is a way to validate that the application update works before eliminating the old version. It also allows for “roll backs” if some chaos occurs.
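The blue/green idea can be sketched in a few lines: two complete environments exist, a router points at one of them, “deploying” is flipping the pointer, and rollback is flipping it back. The class and version names below are illustrative only:

```python
# Hedged sketch of blue/green deployment: two environments, one live.
# Cutting over and rolling back are both just a pointer flip.

class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"

    def deploy_to_idle(self, version):
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version  # new version runs alongside the old
        return idle

    def cut_over(self):
        # Flip traffic to the other environment; the old version keeps running.
        self.live = "green" if self.live == "blue" else "blue"

    def rollback(self):
        self.cut_over()  # previous version is still up, so just flip back

router = BlueGreenRouter()
router.deploy_to_idle("v1.1")
router.cut_over()
print(router.environments[router.live])  # v1.1
router.rollback()
print(router.environments[router.live])  # v1.0
```

Because the old version is never torn down until the new one is proven, a bad release costs seconds of pointer-flipping rather than a restore-from-backup exercise.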
The Application and User-Experience (at 1hr 0min)
Now that the application has been written and moved from being tested to being deployed into production, on a container application platform, the actual interaction with customers can begin. In this demo, we can see the real-time user interactions and how the interactions are tracked by the backend databases (and business logic). While the demo only showed the basic user interaction, the overall experience was made up of a number of microservices (login/user-authentication; user profile; game interactions; scoring dashboards, user status levels, etc.). Each of those microservices could be independently updated without breaking the overall application experience.
The Business Analyst (at 1hr 3min)
Previously, I talked about how it’s important for both the technical and business teams to have visibility to customer/market interactions with these new digital products/applications. In this part of the demo, we see how a “business analyst” (or some might call it a “data scientist”, depending on the scope of the work) would interact with the data model of the application. They are able to see the interaction data from the application, as well as make changes to how data relationships can be modeled going forward. They could also make adjustments to business logic (e.g. marketing incentives, customer loyalty programs, etc.). As they make changes to the business rules, those changes would be validated, tested and pushed back into the application, just like a developer that had made changes to the core application.
Platform Operations (at 1hr 6min)
This part of the demonstration is somewhat out of order, because the platform operations team (a.k.a. sysadmin team, network/storage/server team, infrastructure team) would have set up all of the platform components prior to any of the application work happening. But in this part of the demonstration, they showcase several critical operations elements:
- How to use automation to quickly, consistently and securely deploy the platform infrastructure that all of the applications will run on.
- How to easily scale the platform up (or down) based on the needs of the application and the business. “Scale” meaning, how does the system add (or remove) resources without causing outages and without the end-user customers knowing that it’s happening. In essence, ensuring that the user experience will be great, and costs will be optimized to support the right level of experience.
- How the operations teams can work more closely together with the applications teams to deliver a better overall experience to the end-user customer.
- How to manage and monitor the system, in real-time, to correct problems and failures.
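That scale-up/scale-down decision can be sketched in a few lines. This mirrors the common “target utilization” approach (the target and bounds below are made-up example values, not anything specific to the demo):

```python
# Illustrative autoscaling decision: given the current replica count and the
# average utilization across replicas, compute a new replica count that keeps
# utilization near a target, clamped to min/max bounds so the system never
# scales to zero or runs away.

import math

def desired_replicas(current, avg_utilization, target=0.6, lo=2, hi=20):
    if current == 0:
        return lo
    raw = math.ceil(current * avg_utilization / target)
    return max(lo, min(hi, raw))

# 4 replicas running hot at 90% utilization -> grow to 6
# 10 replicas coasting at 30% utilization -> shrink to 5
```

Because resources are added or removed a few at a time while the rest keep serving traffic, the end-user never notices the adjustment; this is essentially the calculation that platform autoscalers perform on a loop.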
A/B Testing the Market (at 1hr 12min 20sec)
In an early segment of the demonstration, they showed how application updates can go through the “pipeline” and be deployed without causing any disruption to the user-experience. But sometimes the business isn’t sure that they want the entire market to see any update, so they’d like to do some market experimentation – or A/B testing. In this case, they are able to test a specific experiment with a subset of the users in order to collect specific digital feedback about a change/new-feature/new-service. The application platform is able to provide that level of granularity about which users to test and how to collect that data feedback for the business analyst teams (or marketing teams, or supply chain teams, etc.). These experiments are treated just like any other change to the application, in that they go through the pre-deployment QA testing and integration, as any other update to the microservices of the application.
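A common way to implement that kind of user subsetting is deterministic hash-based bucketing. Here is an illustrative sketch (the experiment name and percentage are invented for the example): each user is hashed into one of 100 buckets, so the same user always sees the same variant, and a fixed slice of the audience sees the experiment.

```python
# Illustrative A/B bucketing: deterministically assign each user to a variant
# by hashing a stable user id together with the experiment name. The same user
# always lands in the same bucket, and roughly b_percent of users see "B".

import hashlib

def variant(user_id, experiment="new-checkout-flow", b_percent=10):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # stable bucket in the range 0..99
    return "B" if bucket < b_percent else "A"
```

Hashing the experiment name into the bucket means that different experiments slice the audience independently, so one long-running test doesn’t bias the next.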
Agile for Developers AND Operators (at 1hr 14min)
Just as application developers are trying to go faster to keep up with business demand, so too must the operations teams be able to move faster to keep up with the developers. In this part of the demo, we see them using similar technologies and processes to update the platform infrastructure when a security threat or software bug emerges. It is important for them to be able to test new software before deploying it live into production, as well as experiment and make sure that something new does not create unexpected problems in the overall environment.
There were a lot of moving pieces to this demonstration. From a technology perspective, it included:
- Containers (Docker)
- Container Application Platform (OpenShift/Kubernetes)
- Middleware (JBoss)
- Many development languages (Java, Node.js, .NET)
- Microservices / Cloud-native applications
- “Legacy” applications (SQL databases)
- Continuous Integration (Jenkins)
- Automation (Ansible)
- and lots of other stuff…
But what I hope you were able to see was that these new environments are much more agile, much more flexible (for both developers and operators), and require closer collaboration between the business and technology teams to deliver a great end-user experience.
The last few weeks have been very interesting in the world of container “standards”. For some historical context, check out this excellent infographic about the history of containers.
After CoreOS released the “rkt” (Rocket) container specification in 2015, the industry was concerned about competing “standards”, which led to the creation of the Open Container Initiative. At the time, Docker donated the libcontainer code to the OCI. If you weren’t paying close attention, you might have believed that the Docker contribution would have created a “container standard”. But in reality, Docker only donated a portion of what is needed to make containers work. Here is a good explanation of the various layers of Linux that are needed to get containers working properly, both now and historically. But for a while, the industry seemed to be able to deal with both Docker and the OCI, with many implementations and commercial products shipping with the ability to work with Docker containers.
[NOTE – It’s important to remember that there is a difference between docker the open source application container engine project, and Docker, Inc. the commercial company that is the lead contributor to docker.]
At DockerCon 2016, Docker, Inc. released v1.12, which embedded their container scheduling engine (Swarm | Swarm Mode) into the container runtime (Docker Engine). This integration was intended to help Docker, Inc. expand their commercial offerings (Docker Datacenter). But the work was done outside of the normal, open community process, which raised some concerns from companies that partner or integrate with docker.
A few weeks ago, I had a conversation with Kelsey Hightower (@kelseyhightower, Google) about his concerns regarding the evolution of Docker, Inc. and docker, as well as his desire to see the community evolve to have a standard implementation that wasn’t commercially controlled by a single company.
Since then, a number of blogs and discussion forums (here, here, here, here, here, here) have been written expressing further concern about the velocity, stability, and community involvement with docker.
With containers gaining so much traction with developers, it will be very interesting to watch how the community evolves. As some people have said, it might be useful to have “boring infrastructure” to underlie all of the rapid changes that are happening in the application world.
With the CNCF hosting KubeCon and CloudNativeCon in November (Seattle) and Docker hosting DockerCon Europe (Barcelona) on back-to-back weeks, we’ll all be watching to see how the container landscape finds a balance between open communities and commercial offerings.
If you give a techie a whiteboard and a marker, regardless of the topic, they tend to want to draw “a stack”. They like to build layers upon layers of technology, showing how each is dependent and interacting with other layers.
When people talk about the cloud computing stack, they often start with the basic layers, as defined by NIST – IaaS, PaaS and SaaS. The IaaS layer is fairly easy to understand – it’s programmatic access to compute (usually a VM), storage and networking. The SaaS layer is equally easy to understand, as we’ve all used software applications delivered via the Internet (Gmail, WebEx, Google Search, Salesforce.com, etc.). SaaS is the rental model for software applications.
But the PaaS layer is more complicated and nuanced to explain. PaaS is the layer that is supposed to make things easier for developers. PaaS includes all of the things that applications care about – databases, queueing, load-balancers, and middleware-like-things. PaaS is also the layer that has been through the most evolution and starts/stops, including things like Heroku, Google App Engine, AWS Simple Queuing Service (the 1st service that AWS launched), Cloud Foundry, OpenShift, Parse (MBaaS), Google Firebase (DBaaS) and various DBaaS services.
What should begin to become obvious is that PaaS is not really a layer; instead it should be thought of as a set of services. And it’s not a specifically defined set of services, but rather a set of evolving services:
- Database services – these could begin as SQL or NoSQL databases, but the services quickly evolve to simplifying the tasks around sizing, I/O management, and backup and replication management.
- Queuing, Notification, Messaging services – a whole set of modular services that are replacing traditional enterprise messaging middleware.
- Functions services – The whole set of capabilities that are being called #Serverless (see here, here) or Functions-as-a-Service (FaaS). In some cases, these will be isolated services, but in other cases they will be features of a broader PaaS platform.
- “Application-helping” services – There is a whole list of tools that help developers get applications built faster – whether these are considered “middleware” or “mobile development” or “runtime” (e.g. build packs, S2I, etc.) or something else.
- “Stuff-around-the-edges” services – This can be everything from authentication services to API gateways to CI/CD tools.
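Of the services above, the Functions-as-a-Service idea is perhaps the easiest to sketch in code. Here is a purely illustrative version (not any real FaaS provider’s API): developers register small handlers keyed by event type, and the platform invokes them on demand, with no server for the developer to manage.

```python
# Minimal sketch of the Functions-as-a-Service pattern: a registry maps event
# types to small handler functions, and the "platform" routes incoming events
# to the right handler on demand.

HANDLERS = {}

def on(event_type):
    """Decorator that registers a handler function for an event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("image.uploaded")
def make_thumbnail(event):
    # The developer writes only this logic; scaling, scheduling and
    # infrastructure are the platform's problem.
    return f"thumbnail generated for {event['key']}"

def invoke(event_type, event):
    """The platform side: dispatch an incoming event to its handler."""
    handler = HANDLERS.get(event_type)
    if handler is None:
        raise KeyError(f"no function registered for {event_type}")
    return handler(event)
```

In a real FaaS offering the registry, dispatch and scaling all live inside the provider, which is exactly why these show up sometimes as standalone services and sometimes as features of a broader PaaS platform.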
Where things start to get a little bit fuzzy is when the discussion of containers comes up. On one hand, containers are somewhat of an infrastructure-level technology. They represent compute, they need networking, they need storage. On the other hand, they are primarily going to be used by developers or DevOps teams to package applications or code. They also require quite a bit of technology to get working properly, at scale.
Many times, people will blur the lines between CaaS (Containers-as-a-Service) and PaaS (Platform-as-a-Service), using the existence (or lack) of an embedded runtime (within the platform) as the distinction. Technically, that might be a fair distinction. But it can also confuse the marketplace.
Still other people want to drop the term “PaaS” and replace it with “Platform”. This makes sense when the conversation is about the business value of a platform, with all its extensibility and the ability for 3rd parties to equally create value. It’s less useful when that approach is used to avoid any discussion about what technologies are embedded within the platform.
At the end of the day, the PaaS / Platform space is rapidly evolving. But as more companies try to understand how to compete in a “software is eating the world” marketplace, it becomes all the more important to understand.