From Silos to Services: Cloud Computing for the Enterprise


August 31, 2016  6:18 PM

Cloud-Native Applications in Plain English

Brian Gracely
Ansible, Automation, containers, Docker, Java, Jenkins, Kubernetes, Microservices, OpenShift, Red Hat
Image Source: Pixabay

These days, you can’t attend any event without seeing some slides about “Digital Transformation” and how companies like AirBnB, Uber and Netflix are disrupting all sorts of existing industries. At the core of these discussions is the premise that all companies need to become software companies (e.g. “software is eating the world“). The types of software applications that companies need to build are called “cloud-native”, and the applications are architected using “microservices”.

All of this sounds cool, but I find that so many of the people that discuss this on an everyday basis are very technical and assume that the audience has a similar level of experience and knowledge. So I thought I would take a functioning cloud-native application, using microservices (and lots of other stuff), and break it down into simple terms that hopefully make it easier for people to understand.

[Disclosure: This video/demo is from the Red Hat Summit 2016 keynote. My day job is with Red Hat. The reason I chose this video is not because of the company/products, but because it is long enough to see all of the moving parts – both on the developer and operator side of a set of applications.]

The applications come from this video, and start around the 43:00 mark.

An Overview

This is a customer-facing application. The application has the following characteristics. Think of these as the basic inputs that your product team might provide to the development team:

  • It can be accessed as a browser-based application or a mobile application.
  • It involves user-interaction which updates the experience in real-time.
  • All user-interactions are tracked and stored in real-time, and based on the interactions, the system will apply business-logic to personalize the experience.
  • Both the technology teams and the business teams have visibility into the data coming from the user-interactions.
  • The application must have the ability to be frequently updated to be able to add new features, or be modified to adjust the experience based on interaction data.

These basic requirements include some basic elements that should be at the core of all modern applications [note: I created this framework at Wikibon based on interactions with many companies that have been leading these digital transformations and disruptions]:

  1. The API is the product. While parts of any application may live in a local client, the core elements should be interacting with a set of APIs in the backend. This allows user flexibility, and broader opportunities to integrate with 3rd-party applications or partners. (A minimal sketch follows this list.)
  2. Ops are the COGS. For a digital product, the on-going operational costs are the COGS (Costs of Goods Sold). Being able to optimize those COGS is a critical element to making this digital product a success.
  3. Data Feedback Loops. It’s critical to collect insight into how customers interact with the business digitally, and this data must be shared across technical and business teams to help constantly improve the product.
  4. Rapidly Build and Test. If the product is now software, it is important to be able to build that software (the applications) quickly and with high quality. Systems must be in place to enable both the speed and the quality. Using data as the feedback loop, knowing where to build new aspects of the software will be clearer than before.
  5. Automated Deployments. Once the application has been built (or updated) and tested, then it’s critical to be able to get it to the customer quickly and in a repeatable manner. Automation not only makes that process repeatable (and more secure), but it helps with the goal of reducing the Ops COGS.
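To make points 1 and 3 a bit more concrete, here’s a minimal sketch of what “the API is the product” plus a data feedback loop can look like. This is my own illustration, not code from the demo – the routes, the choice of Python/Flask, and the in-memory store are all assumptions:

```python
# A minimal sketch of "the API is the product": browser, mobile and
# 3rd-party clients all hit the same backend API, and the interaction
# data is exposed for business-side visibility. Hypothetical routes.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real interaction store.
interactions = []

@app.route("/api/v1/interactions", methods=["POST"])
def record_interaction():
    """Track a user interaction so business logic can personalize later."""
    event = request.get_json()
    interactions.append(event)
    return jsonify({"status": "recorded", "count": len(interactions)}), 201

@app.route("/api/v1/interactions/summary", methods=["GET"])
def summary():
    """Shared data feedback loop: the same data feeds business dashboards."""
    return jsonify({"total_interactions": len(interactions)})

if __name__ == "__main__":
    app.run(port=8080)
```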

The Developer’s Perspective (starting at 50m 0s)

The demonstration starts by looking at this application from the perspective of the application developer. In this case, the developer will be building a Java application on their laptop. In order to make this simpler for them, they use a few tools. Some of the tools are directly on their laptop and some of them are running as a cloud service (could be private cloud or public cloud):

  • “Project” – The local representation of the application on the developer tools.
  • “IDE” (Integrated Development Environment) – The local tools on the developer’s laptop, plus tools running on a server, that help them build the application.
  • “Stacks” – The groups of software/languages/frameworks that the developer will use to build the application.

The tools provided to the developer are there to make sure that they have all the things needed to build the application, as well as making sure that their software stack aligns to what is expected by operations once the application eventually gets to production.

Once that is in place, the developer can then begin writing (or updating) the application. When they have completed a portion of the application, they can move it to the system that will help integrate it with the other pieces of the application:

  • “Microservices” – The smaller elements of an application that perform specific tasks within a broad application experience.
  • “Pushing to Git” – Pushing the application update to a Git or GitHub repository, so that it can be stored, tested and eventually integrated with other pieces of the application.
  • “Continuous Integration” (CI) – A set of automated tests that look at the updated application and make sure that it will run as expected.
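To put the CI idea in plain terms, here’s a tiny, hypothetical example of the kind of automated check that runs on every push to Git – the function, the business rule and the test are all made up for illustration:

```python
# A sketch of the kind of automated test a CI system runs against
# every code push. The discount logic is a hypothetical example of
# business logic living inside one small microservice.
def apply_discount(price: float, loyalty_level: str) -> float:
    """Return the price after applying a loyalty discount."""
    rates = {"bronze": 0.0, "silver": 0.05, "gold": 0.10}
    return round(price * (1 - rates.get(loyalty_level, 0.0)), 2)

def test_apply_discount():
    assert apply_discount(100.0, "gold") == 90.0
    assert apply_discount(100.0, "unknown") == 100.0  # unknown levels pay full price

if __name__ == "__main__":
    test_apply_discount()
    print("CI checks passed")
```

If a push breaks a test like this, the pipeline stops the update before it ever reaches production.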

NOTE: There was a small section in the demo where the developer made a small change/patch to the application. The small change then went through the “pipeline” and eventually got integrated and moved to production. This was done to show that updates to applications don’t have to be large (or any specific size) and can now safely be done at any time of the day – outage windows are no longer needed.

The “Pipeline” (starting at 56m 0s)

This part of the demonstration is focused on the guts of the “bits factory”, the place where the software gets tested and reassembled before it’s ready for production use.

  • “Continuous Integration / Continuous Deployment” (CI/CD) – A set of tools that take application updates from developers, automate the testing of that software with other pieces of the broad application, and package the validated software in a way that can be run in production.
  • “Pipelines” – The on-going flow of application updates from developers. In an agile development model, these updates could be occurring many times per day.
  • “QA” – Quality Assurance – The part of a pipeline responsible for quality testing. Hopefully this is where software bugs are found, before going into production.
  • “Staging” – An intermediate step in a pipeline between QA and Production, where a company will try and emulate the stress of production. Sort of like a dress-rehearsal for a theater play.
  • “Blue / Green Deployment” – Even after being QA tested, when application updates are available, there needs to be a way to deploy them and make sure they work properly before completely removing the old versions. Blue/Green is a way to validate that the application update works before eliminating the old version. It also allows for “roll backs” if some chaos occurs.
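Here’s a simplified sketch of the blue/green idea in Python – the Router class is a stand-in for a real load balancer or service router, and the health checks are stubs, so treat this as the shape of the technique rather than a real implementation:

```python
# A simplified blue/green cutover sketch. The Router and healthy()
# function are illustrative stand-ins, not a real load balancer API.
import time

class Router:
    def __init__(self):
        self.active = "blue"            # current production version
    def switch_to(self, color: str):
        self.active = color

def healthy(color: str) -> bool:
    """Stand-in for real smoke tests against an environment."""
    return True

def blue_green_deploy(router: Router, new_color: str):
    # 1. The new version is already deployed to the idle environment.
    # 2. Validate it before sending it any production traffic.
    if not healthy(new_color):
        print("new version failed validation; old version keeps serving")
        return
    old = router.active
    router.switch_to(new_color)         # 3. Flip traffic to the new version.
    time.sleep(1)                       # 4. Watch for errors for a while...
    if not healthy(new_color):
        router.switch_to(old)           # 5. ...and roll back if chaos occurs.

router = Router()
blue_green_deploy(router, "green")
print(f"traffic now served by: {router.active}")
```

The old version is only eliminated once the new one has proven itself under real traffic.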

The Application and User-Experience (at 1hr 0min)

Now that the application has been written, tested and deployed into production on a container application platform, the actual interaction with customers can begin. In this demo, we can see the real-time user interactions and how the interactions are tracked by the backend databases (and business logic). While the demo only showed the basic user interaction, the overall experience was made up of a number of microservices (login/user-authentication; user profile; game interactions; scoring dashboards; user status levels; etc.). Each of those microservices could be independently updated without breaking the overall application experience.

The Business Analyst (at 1hr 3min)

Previously, I talked about how it’s important for both the technical and business teams to have visibility into customer/market interactions with these new digital products/applications. In this part of the demo, we see how a “business analyst” (or some might call it a “data scientist”, depending on the scope of the work) would interact with the data model of the application. They are able to see the interaction data from the application, as well as make changes to how data relationships can be modeled going forward. They could also make adjustments to business logic (e.g. marketing incentives, customer loyalty programs, etc.). As they make changes to the business rules, those changes would be validated, tested and pushed back into the application, just like a developer that had made changes to the core application.

Platform Operations (at 1hr 6min)

This part of the demonstration is somewhat out of order, because the platform operations team (a.k.a. sysadmin team, network/storage/server team, infrastructure team) would have set up all of the platform components prior to any of the application work happening. But in this part of the demonstration, they showcase several critical operations elements:

  • How to use automation to quickly, consistently and securely deploy the platform infrastructure that all of the applications will run on.
  • How to easily scale the platform up (or down) based on the needs of the application and the business. “Scale” meaning: how does the system add (or remove) resources without causing outages and without the end-user customers knowing that it’s happening? In essence, ensuring that the user-experience will be great, and that costs will be optimized to support the right level of experience. (A minimal sketch of this logic follows the list.)
  • How the operations teams can work more closely together with the applications teams to deliver a better overall experience to the end-user customer.
  • How to manage and monitor the system, in real-time, to correct problems and failures.
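As a rough illustration of the scaling point above, here’s a minimal control-loop sketch. The Platform class, the metrics and the thresholds are all assumptions for illustration, not anything from the demo:

```python
# A hedged sketch of a scaling control loop: compare observed load to a
# target and add/remove capacity before users notice. Illustrative only.
class Platform:
    def __init__(self, nodes: int = 3):
        self.nodes = nodes
    def current_load(self) -> float:
        return 0.85   # stand-in for real metrics (CPU, requests/sec, etc.)

def reconcile(platform: Platform, target_utilization: float = 0.70):
    load = platform.current_load()
    if load > target_utilization and platform.nodes < 10:
        platform.nodes += 1               # scale up before users feel it
        print(f"scaled up to {platform.nodes} nodes")
    elif load < target_utilization / 2 and platform.nodes > 1:
        platform.nodes -= 1               # scale down to optimize cost
        print(f"scaled down to {platform.nodes} nodes")

reconcile(Platform())
```

Real platforms run this kind of reconciliation continuously; the point is that capacity follows demand in both directions.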

A/B Testing the Market (at 1hr 12min 20sec)

In an earlier segment of the demonstration, they showed how application updates can go through the “pipeline” and be deployed without causing any disruption to the user-experience. But sometimes the business isn’t sure that they want the entire market to see an update, so they’d like to do some market experimentation – or A/B testing. In this case, they are able to test a specific experiment with a subset of the users in order to collect specific digital feedback about a change/new-feature/new-service. The application platform is able to provide that level of granularity about which users to test and how to collect that data feedback for the business analyst teams (or marketing teams, or supply chain teams, etc.). These experiments are treated just like any other change to the application, in that they go through the same pre-deployment QA testing and integration as any other update to the application’s microservices.
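A common way to get that per-user granularity is deterministic bucketing. Here’s a minimal sketch – the experiment name and the 10% split are arbitrary examples, not details from the demo:

```python
# A minimal A/B bucketing sketch: hash the user + experiment name so each
# user lands in the same bucket on every visit, with no state to store.
import hashlib

def variant_for(user_id: str, experiment: str, rollout_pct: int = 10) -> str:
    """Return 'B' for the experimental subset of users, 'A' for everyone else."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100        # stable bucket in [0, 100)
    return "B" if bucket < rollout_pct else "A"

for user in ["alice", "bob", "carol"]:
    print(user, "->", variant_for(user, "new-loyalty-widget"))
```

Because the assignment is deterministic, the feedback data collected per variant stays clean across sessions.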

Agile for Developers AND Operators (at 1hr 14min)

Just as application developers are trying to go faster to keep up with business demand, so too must the operations teams move faster to keep up with the developers. In this part of the demo, we see them using similar technology and processes to update the platform infrastructure when a security threat or software bug arises. It is important for them to be able to test new software before deploying it live into production, as well as to experiment and make sure that something new does not create unexpected problems in the overall environment.

In Summary

There were a lot of moving pieces to this demonstration. From a technology perspective, it included:

  • Containers (Docker)
  • Container Application Platform (OpenShift/Kubernetes)
  • Middleware (JBoss)
  • Many development languages (Java, Node.js, .NET)
  • Microservices / Cloud-native applications
  • “Legacy” applications (SQL databases)
  • Continuous Integration (Jenkins)
  • Automation (Ansible)
  • and lots of other stuff…

But what I hope you were able to see was that these new environments are much more agile, much more flexible (for both developers and operators), and require closer collaboration between the business and technology teams to deliver a great end-user experience.

August 30, 2016  9:57 PM

Tracking Container Standards

Brian Gracely
containers, Docker, Kubernetes, Linux, RunTime, Scheduling, standards
Image Source: Pixabay

The last few weeks have been very interesting in the world of container “standards”. For some historical context, check out this excellent infographic about the history of containers.

After CoreOS released the “rkt” (Rocket) container specification in 2015, the industry was concerned about competing “standards”, which led to the creation of the Open Container Initiative (OCI). At the time, Docker donated the libcontainer code to the OCI. If you weren’t paying close attention, you might have believed that the Docker contribution would have created a “container standard”. But in reality, Docker only donated a portion of what is needed to make containers work. Here is a good explanation of the various layers of Linux that are needed to get containers working properly, both now and historically. But for a while, the industry seemed to be able to deal with both Docker and the OCI, with many implementations and commercial products shipping with the ability to work with Docker containers.

[NOTE – It’s important to remember that there is a difference between docker the open source application container engine project, and Docker, Inc. the commercial company that is the lead contributor to docker.]

At DockerCon 2016, Docker, Inc. released v1.12, which embedded their container scheduling engine (Swarm | Swarm Mode) into the container runtime (Docker Engine). This integration was intended to help Docker, Inc. expand their commercial offerings (Docker Datacenter). But the work was done outside of the normal, open community process, which raised some concerns from companies that partner or integrate with docker.

A few weeks ago, I had a conversation with Kelsey Hightower (@kelseyhightower, Google) about his concerns regarding the evolution of Docker, Inc. and docker, as well as his desire to see the community evolve to have a standard implementation that wasn’t commercially controlled by a single company.

Since then, a number of blogs and discussion forums (here, here, here, here, here, here) have been written expressing further concern about the velocity, stability, and community involvement with docker.

With containers gaining so much traction with developers, it will be very interesting to watch how the community evolves. As some people have said, it might be useful to have “boring infrastructure” to underlie all of the rapid changes that are happening in the application world.

With the CNCF hosting KubeCon and CloudNativeCon in November (Seattle) and Docker hosting DockerCon Europe (Barcelona) on back-to-back weeks, we’ll all be watching to see how the container landscape finds a balance between open communities and commercial offerings.


August 21, 2016  5:05 PM

What’s the future of the PaaS “layer”?

Brian Gracely
CaaS, Cloud Foundry, containers, DBaaS, DevOps, Docker, FaaS, Kubernetes, MBaaS, Mobile development, OpenShift, PaaS, platform, Platform as a Service model

Cloud Computing Stack – IaaS/PaaS/SaaS (Image Source: JEE Tutorials)

If you give a techie a whiteboard and a marker, regardless of the topic, they tend to want to draw “a stack”. They like to build layers upon layers of technology, showing how each is dependent and interacting with other layers.

When people talk about the cloud computing stack, they often start with the basic layers, as defined by NIST – IaaS, PaaS and SaaS. The IaaS layer is fairly easy to understand – it’s programmatic access to compute (usually a VM), storage and networking. The SaaS layer is equally easy to understand, as we’ve all used software applications delivered via the Internet (Gmail, WebEx, Google Search, Salesforce.com, etc.). SaaS is the rental model for software applications.

But the PaaS layer is more complicated and nuanced to explain. PaaS is the layer that is supposed to make things easier for developers. PaaS includes all of the things that applications care about – databases, queueing, load-balancers, and middleware-like-things. PaaS is also the layer that has been through the most evolution and starts/stops, including things like Heroku, Google App Engine, AWS Simple Queuing Service (the 1st service that AWS launched), Cloud Foundry, OpenShift, Parse (MBaaS), Google Firebase (DBaaS) and various DBaaS services.

What should begin to become obvious is that PaaS is not really a layer; instead it should be thought of as a set of services. And it’s not a specifically defined set of services, but rather a set of evolving services:

  • Database services – these could begin as SQL or NoSQL databases, but the services quickly evolve toward simplifying the tasks around sizing, I/O management, and backup and replication management.
  • Queuing, Notification, Messaging services – a whole set of modular services that are replacing traditional, monolithic middleware.
  • Functions services – The whole set of capabilities that are being called #Serverless (see here, here) or Functions-as-a-Service (FaaS). In some cases, these will be isolated services, but in other cases they will be features of a broader PaaS platform.
  • “Application-helping” services – There is a whole list of tools that help developers get applications built faster – whether these are considered “middleware” or “mobile development” or “runtime” (e.g. build packs, S2I, etc.) or something else.
  • “Stuff-around-the-edges” services – This can be everything from authentication services to API gateways to CI/CD tools.

Where things start to get a little bit fuzzy is when the discussion of containers comes up. On one hand, containers are somewhat of an infrastructure-level technology. They represent computing, they need networking, they need storage. On the other hand, they are primarily going to be used by developers or DevOps teams to package applications or code. They also require quite a bit of technology to get working properly, at scale.

Many times, people will blur the lines between CaaS (Containers-as-a-Service) and PaaS (Platform-as-a-Service), using the existence (or lack) of an embedded runtime within the platform as the distinction. Technically, that might be a fair distinction. But it can also confuse the marketplace.

Still other people want to drop the term “PaaS” and replace it with “Platform”. This makes sense when the conversation is about the business value of a platform, with all its extensibility and the ability for 3rd-parties to equally create value. It’s less useful when that approach is used to avoid any discussion about what technologies are embedded within the platform.

At the end of the day, the PaaS / Platform space is rapidly evolving. But as more companies try to understand how to compete in a “software is eating the world” marketplace, this space becomes the most important one to understand.


August 13, 2016  12:01 PM

Slack, GitHub and Container Registries – The New IT UIs

Brian Gracely
Ansible, API, Bots, Chef, CLI, Datadog, DevOps, Github, IT, Jenkins, Operations, plugins, Puppet, Slack, UI

Years ago, in the dusty old days of IT, when we used to rack and stack our own equipment, there were some common interfaces to equipment. If you worked in networking, that was most likely the Cisco IOS CLI. Most other domains had a similar CLI interface that was directly linked to boxes from F5, Checkpoint, Dell, HP, Juniper, etc.

And for the most part, we worked in relative isolation. It was just you and the CLI. In some cases, it would be you and a script and the CLI, and hopefully you kept that script in a centralized location so it could be versioned and shared across teams.

But things are starting to change, and change quite rapidly. We first discussed this a couple years ago with Mark Imbriaco, then at GitHub, about the new tools they were using to manage their environment. He told us about a tool they created called “HuBot“, which allowed them to collaborate around Ops issues and make automated tasks simpler to understand. Since then, we’ve seen tons of companies integrate their technologies with both source-control systems (e.g. GitHub, etc.) and chat systems (e.g. Slack, HipChat, etc.) – see the above list for a small sample of example integrations.

As more and more people talk about “DevOps”, there always comes a point where people realize that, in order to achieve some of the benefits, there needs to be better interworking between groups of people and the underlying technology. There is a cultural element that must evolve. But as we’ve learned from books like Made to Stick and Switch, there is always a push and pull dynamic between getting change implemented and getting change adopted. So while it’s true that no individual tool is going to move a group or company toward the benefits of a collaborative DevOps culture, the tools plus the newer behaviors they drive can help move the progress in a positive direction.

These new UIs for IT are beginning to make that progress a reality for many companies, beyond just the ones that speak at specialized DevOps events. These tools are building upon the basic premises of:

  • Centralized, versioned information
  • Built-in automation of tasks that can be integrated into broader workflows and process
  • Centralized, open collaboration between team members or across teams
  • Logged actions of what happened and the context of the decision-making process around those actions
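Here’s a toy sketch of the ChatOps pattern built on those premises – the `!deploy` command and the deploy function are hypothetical, but the shape (shared channel in, automated task out, everything logged) is the point:

```python
# A toy ChatOps sketch: a bot command runs an automated task and records
# who asked, what they asked, and what happened - in one shared place.
# The command syntax and deploy() are illustrative assumptions.
import datetime

audit_log = []   # centralized, logged actions (the fourth premise above)

def deploy(env: str) -> str:
    return f"deployed latest build to {env}"   # stand-in for real automation

def handle_chat_message(user: str, text: str) -> str:
    """Called for each message in the ops channel, HuBot-style."""
    if text.startswith("!deploy "):
        env = text.split(" ", 1)[1]
        result = deploy(env)
        audit_log.append((datetime.datetime.now(), user, text, result))
        return f"@{user}: {result}"
    return ""

print(handle_chat_message("brian", "!deploy staging"))
```

Because the whole exchange happens in the open channel, the team gets the context of the decision along with the action itself.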

The third leg of the new IT UI stool is container registries, such as Docker Hub (and many locally deployed versions – often integrated into container platforms). Containers are becoming a big deal. These registries act as another building block for tracking critical elements of infrastructure or applications, as well as serving as an integration point for multi-domain collaboration.

All of these tools and platforms are evolving the product-centric CLI into a set of open APIs that allow developers and operators to integrate them into a set of workflows that help them better achieve their technical and business goals.

While the CLI will most likely be with us for quite a while, we’re quickly seeing a new set of IT UIs evolve to better serve the needs of fast-moving, rapidly-changing applications, and the collaboration between the teams that build and support them.


July 31, 2016  11:17 AM

Cloud-native Platform Focus Areas

Brian Gracely
Application portfolio, CaaS, Cloud Foundry, containers, Developers, Docker, Kubernetes, OpenShift, PaaS

Earlier this month, I wrote a piece about the Architectural Considerations for Open Source PaaS and Container platforms. It was a follow-up to a series I wrote about 12 months earlier, looking at various aspects of these types of Cloud-native application platforms.

Changes are happening quickly in the PaaS market. These platforms were previously known as PaaS (Platform-as-a-Service), but many of the offerings tend to be shifting their focus more towards Containers-as-a-Service (CaaS).

Cloud-native Platform Hierarchy of Needs (Image Source: Brian Gracely)

I tried to put this in the perspective of a “hierarchy of needs”, which evolved from basic stability, to basic developer and application needs, to the scalability and flexibility needs as usage of the platform grows within a company.

  • Platform Stability and Security – Before any applications can be onboarded onto the platform, how does the platform itself provide a level of operational stability and security, including the underlying OS that runs the platform?
  • Application Portfolio – Does the platform support a broad range of customer applications, including both stateless and stateful applications? The latter part is critical because most customers will need to either re-purpose existing applications, or interconnect new and existing applications.
  • Developer Adaptability – How much flexibility does the platform provide for developers to get their code / applications onto the platform? Does it mandate that they must move to a single tool for onboarding, or is it flexible in terms of how applications get onboarded? How much re-training is needed for developers to effectively use the platform?
  • Scalability – As more applications are added to the platform, how well will it scale? This scalability looks at areas such as # of containers under management, # of projects, # of applications, # of groups on a multi-tenant platform. It also looks at the scalability of any associated open source community (e.g. Cloud Foundry, Docker, Kubernetes, Mesos, etc.) that is contributing to the projects associated with a platform.
  • Flexibility – In the spectrum between DIY, Composable and Structured platforms, there are trade-offs in how flexible the system is today vs. in the future. Given the rapid pace that platforms and the associated technology are evolving, IT organizations and developers need to consider where they expect their usage of a platform to evolve over time. Will the POC experience extend into the future, as usage expands? Will the needs of the “pioneer” team extend to “settlers” and “town planners”?

NOTE: There will probably be people that will wonder where the cultural aspects of cloud-native fit into these hierarchical needs. They actually fit into each layer and probably could be represented as a vertical bar that sits to the edge of the diagram.

As the pace of change in the platform market continues to accelerate, it is important to have a framework to evaluate how the changes impact the needs of both the developer and operator groups within a company. With so many changes happening so quickly, it’s easy to be confused about what is important and what is just technology noise. Being able to prioritize how something new impacts platform considerations will be a critical consideration for IT organizations and developers looking to build cloud-native applications, as well as evolving aspects of their existing application portfolio.


July 26, 2016  10:35 PM

The Appeal of Serverless Computing

Brian Gracely
FaaS, MBaaS, Mobile applications

Over 5 years ago, we started The Cloudcast podcast. That show is focused on the trends in Cloud Computing. Several months back, we decided to have a couple shows (here, here, here and here) focused on this emerging trend called “Serverless Computing”. Those shows turned out to be some of the most popular we’d ever had. It got us thinking….maybe there’s something here. 

There were a couple of comments in those months that piqued my interest. The first came from Joe Emison (@JoeEmison), who was using a bunch of Serverless services, and he said that it had reduced his AWS bill by 80%. Then a couple weeks later, during an AWS keynote (AWS Summit), AWS talked about a customer that redesigned a MapReduce application and saved 80% on their AWS bill. So this sort of set off a lightbulb in my head. Real customers…and AWS wasn’t afraid to talk about it.

Source: The ServerlessCast (c) 2016

People have a tendency to make everything binary for a new technology – does it kill the old thing or not? In reality, what we’ve seen is that newer stuff typically tries to solve a specific problem, and then the use-cases expand because people get comfortable with it and are tolerant of its drawbacks. Server virtualization is a great example of this. Nobody really needed virtualization, but it became valuable because it could save costs (improve efficiency) for companies that overbought server capacity. It wasn’t perfect then (and still isn’t), but it solved a measurable problem at the time (server and licensing costs). Since then, it’s created many other problems, but entire segments of the industry have sprung up around it.

Serverless is just an extension of that philosophy – some people have a specific application need to just execute functions, and they’d really prefer not to have to deal with all the operational planning that goes with it. It’s definitely not for every application (as currently written), but it can serve a specific purpose for certain types of applications:

  • Single functions that don’t need to be recursive (run in a loop)
  • Auto-scalable (up and down)
  • Charged on a per-usage basis (don’t pay for idle time)
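Here’s what such a function can look like in practice – a minimal Python handler in the AWS Lambda style. The (event, context) signature is Lambda’s real convention; the event shape and the task itself are made-up examples:

```python
# A minimal Lambda-style function: single, non-looping, invoked per
# event, billed per use. The event payload here is hypothetical.
import json

def lambda_handler(event, context):
    """Do one small unit of work per invocation, then go away."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test - no servers needed to reason about the function itself:
if __name__ == "__main__":
    print(lambda_handler({"name": "serverless"}, None))
```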

Serverless could be its own type of system, or it could just be a feature of a “PaaS” platform, depending on how it’s implemented. The various implementations are all a little bit different, with a bunch of vendors creating their own.

There are a couple of types of apps where people are starting to use Serverless.

But Serverless creates concerns from the Ops world, because some vendors have started throwing out phrases like “No Ops”. But we’ve seen this before, in things like PaaS or CaaS, where stuff is supposed to be easier for Devs and the Ops functions are hidden. In that world, somebody still has to think about stuff like:

  • Having programmable infrastructure under the covers
  • Managing the authentication system
  • Managing the logging and monitoring systems
  • Managing data
  • Managing security
  • Etc..

But Serverless isn’t just for Devs. It can also be very helpful to Ops teams, especially for the types of tasks that Ops will often do – check the status of things, repeatedly poll something, take an action based on an input, etc. Serverless doesn’t have to be just about developer tasks.
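As a hedged sketch of that Ops-style use, here’s a small function a scheduler could invoke every few minutes to poll a health endpoint and take action on failure. The URL and the “page-oncall” action are hypothetical; urllib is standard-library Python, so the shape runs anywhere:

```python
# A sketch of an Ops-style serverless task: check status, act on the
# result, exit. The endpoint and the action taken are assumptions.
import urllib.request

def check_status(event=None, context=None):
    url = "https://example.com/healthz"     # hypothetical health endpoint
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:                         # covers timeouts and HTTP errors
        ok = False
    if not ok:
        return {"action": "page-oncall", "url": url}  # act on the input
    return {"action": "none"}

if __name__ == "__main__":
    print(check_status())
```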

Not all Serverless is the same. There are architectural differences between the various services. For example:

  • AWS Lambda mostly requires a front-end to get to Lambda (IoT Gateway, API Gateway)
  • Microsoft Azure Functions allows you to use programming languages and scripting languages (e.g. PowerShell)
  • IBM OpenWhisk allows the functions to be Docker containers, so almost anything could run
  • Some of the MBaaS (Mobile Backend as-a-Service) things have really, really simple integrations – Google’s Firebase, Auth0, Netlify, etc.
  • Some things can run on-premises (e.g. Iron.io), so Ops will have to be involved – sometimes they run standalone, sometimes they are integrated with an IaaS or PaaS

All of this seemed like interesting stuff to us, so we’ve decided to start a new podcast that will be focused on Serverless. Hopefully people will find the content interesting….


June 20, 2016  2:04 AM

Bots, Automation and Jobs

Brian Gracely
Automation, Bots, jobs

Source: Pixabay

Bots are the latest buzzword, with all the mandatory “2016 is the year of bots” articles popping up all over the place. In some cases, people expect to see bots replace jobs like call-center attendants and tech support. Still others foresee a future where they might replace many knowledge-worker jobs. So I thought it might be interesting to go through my list of past (and present) jobs and see where bots or automation could have an impact.

  • Newspaper Delivery – well, newspaper circulations are way down, thanks to everybody getting their news via the Internet. But bots are having an impact on writers of common stories.
  • Lawnmower – for just a couple thousand dollars, you too can replace that kid that you pay a few bucks to sweat in the hot sun so that you can have a weekend hobby or drive your kids to games on the weekend.
  • Grocery store clerk – I still see humans stocking the shelves when I go to the grocery store, but the checkout aisle has already jumped the shark with automated checkout. And drone delivery is just around the corner for all you Amazon Prime members.
  • Janitor – Yep, that was me, purveyor of the custodial arts. Yes, there are those motion-sensing towel dispensers in the bathrooms, but automation hasn’t taken over this world yet.
  • Deli counter – The grab-n-go sandwich is popular, but people still like to have some say in what they order.
  • Golf Caddy – The glory days of Caddyshack are over, with these novelties overtaking courses everywhere (here, here). And these things will tell you everything you need to know on the course.
  • Camp Counselor (sports) – Apparently eSports are a thing, and getting bigger. Will kids ever play outside again?
  • House painter – This one might not catch on as much, since it sounds like machine guns outside your house.
  • Clothing store at mall – people still go to malls instead of Amazon?
  • Home builder – replaced by a 3D printer.
  • Truck driver – Those jobs will be gone soon.
  • Delivery service – Adios, jobs!
  • Inside Sales – Got product or pricing questions, we’ve got BOTS!!
  • Technical Support – This is a prime Bots use-case
  • Product Manager – As distribution channels get shorter with increased usage of public cloud, we’ll see more direct customer usage data becoming an input to build roadmaps which will go directly to the engineering teams. Bad news for the people with people skills…..
  • Blogger – (see “Newspaper” above) if they can do it for real journalists, they can definitely do it for bloggers.
  • BBQ Pitmaster – Yes, unfortunately stuff like this exists…..yuck!!

The good news….somebody is going to have to develop the software for all those bots.


May 30, 2016  1:58 PM

Looking at Changes in the PaaS Market

Brian Gracely
AWS, Azure, Cloud Foundry, CoreOS, Docker, Kubernetes, Networking, OpenShift, Pivotal, Red Hat

The Platform-as-a-Service (PaaS) market has been very interesting over the last 9-12 months. Let’s recap some of the highlights:

You get funding, and you get funding, and you get funding!!

  • Pivotal: $253M – Round C
  • Docker: $113M – Round D
  • DataDog: $94.5M – Round D
  • Apprenda: $24M – Round D
  • Mesosphere: $73.5M – Round C
  • CoreOS: $28M – Round B
  • Rancher Labs: $20M – Round B
  • Weave: $15.5M – Round B
  • Sysdig: $15M – Round B

I’m sure I’ve missed a few other deals, but that’s $600M+ in VC funding into a space that is essentially going through a v2.0 evolution (v1.0 being the earlier versions of Heroku and Google AppEngine). Throw on top of that the $1B/qtr that AWS, Google and Microsoft put into their clouds, and the IBM “$1B bets” and the market is moving in the right direction.

Is the funding turning into revenues? 

This is where things get more complicated to evaluate, since Pivotal (via EMC) is the only one of those companies that publicly reports their numbers – sort of. None of Google (AppEngine), Salesforce (Heroku), IBM, Microsoft or AWS disclose any details around their PaaS/Platform revenues.

Looking at Pivotal’s numbers, we can determine a few things:

Source: EMC 10-Q Statement – May 2016

  • Pivotal does not break out Pivotal Cloud Foundry (PCF) revenues. Their reporting includes all aspects of Pivotal’s business, including PCF, Pivotal Data and Pivotal Labs. Pivotal’s CEO said that PCF is on a $200M annualized bookings run-rate. NOTE: Annualized Recurring Revenue (ARR) and Bookings are two different accounting metrics.
  • From the current 10-Q, the business is still about 2/3 services and 1/3 software sales. This doesn’t seem that unusual as they (and most PaaS companies) are targeting enterprises that will need quite a bit of help getting up to speed on using these new cloud-native technologies.
  • Gross profit margins are 41.2%, which is low for a typical software company (typically in the 80% range), but Pivotal is still a younger company and the cloud-native and big-data transitions are very people-intensive.
  • The overall business operates at a loss (-$58M) as R&D and SG&A costs are still higher than revenues. While Pivotal does operate Pivotal Web Services, they seem to be primarily targeting on-premises deployments with large Enterprise and Gov’t customers, which have higher sales costs and longer sales cycles. And because the revenues and expenses are not broken out by product, we could infer, but can’t assume, that the split is similar to the bookings/revenues highlighted by the Pivotal CEO.

But the go-to-market approaches for PaaS/Platform offerings are still quite diverse (on-premises software, managed on-premises services, public cloud services, etc.), so making any assumptions about the overall market based on one company’s financial reporting would be a mistake. What is needed is much more financial disclosure about the various public cloud services (e.g. IBM Bluemix, AWS services, Google Cloud Platform / AppEngine, Microsoft Azure services) to give us a much better understanding of the state of the PaaS/Platform market.

Architectural approaches are varied, but beginning to consolidate

While some people want to claim that their architecture is the de-facto choice, or declare themselves the winner in this market, IMHO it’s still way too early for those claims. Nobody is even close to $1B in revenues yet, and technology is a tres commas world.

Just 9 months ago, I wrote that the market was Structured vs. Unstructured. At the time, it was a decent attempt at segmenting the market. But in that short period of time, that framework has gone through significant changes. Now, the major PaaS religions seem to be:

  • Structured (Highly Opinionated): Cloud Foundry
  • Semi-Structured or Composable: The platforms that are migrating towards Kubernetes (Apcera, Apprenda, CoreOS, Google Cloud Platform, Red Hat OpenShift, etc.)
  • Container Services: AWS Elastic Container Service, Azure Container Service, Docker Data Center, Rancher Labs

There are still some powerful (technology) outliers, such as Mesosphere/Mesos/Marathon and Hashicorp/Nomad that will be interesting to watch.

Building cloud-native, microservices applications is still complicated

This is a topic that definitely needs its own series of posts, but the TLDR is that it’s still very, very early days for tools that will help a broad number of developers build these cloud-native, microservices applications. While things like SpringBoot/SpringCloud, NetflixOSS, Micro and a few others exist, there still seem to be more books about microservices than tools to simplify things for developers.

Oh yeah, and now the Serverless movement is beginning to gain traction… (stay tuned!)

Networking is still complicated

Most developers don’t care about networking. To them, it either works or it’s a convenient thing to blame. Unfortunately, it needs to work and someone needs to figure out how to make it work. And ever since workloads moved to virtual machines, and now containers, networking has become much more complicated.

The good news is that some people (e.g. Weave, Romana, Docker/Socketplane, Project Calico…plus all the big networking companies) are focused on making it easier to network all these containers and microservices. The less good news is that it’s still evolving and new architectures still have to be created.

A long way to go…

The PaaS/Platform market is still in very early days and is still rapidly evolving. The good news is that we’re still seeing VC funding flowing into the space (even if funding markets might be getting tighter) and we’re seeing the technologies mature and evolve. The other good news is that we’re seeing more end-user companies (e.g. “customers”) taking a more involved role in what technology will impact their business going forward.

The less good news is that the scoreboards and balance sheets are still pretty fuzzy, so betting on a winner is still complicated. I suspect that we’ll continue to see many companies stand up at multiple keynotes over the next year, talking about their deployments with various technologies and companies.


May 21, 2016  11:01 AM

Can Enterprises Keep Up with Frequent Upgrades?

Brian Gracely
Cloud Foundry, DIY, Docker, Kubernetes, Managed Services, OpenStack, Public Cloud, VMware

For many years, I worked for technology vendors in roles that involved building new products and then trying to get businesses to buy and use them. A good portion of my time was spent talking to IT organizations about all the new features we had available and why they would want to use those features to solve a problem they had. To support this effort, we made tons of PPT slides and brought out extremely detailed roadmaps of what the next 6-12-18 months would look like.

But here’s where reality used to set in:

  • About 70-80% of customers were running software that was at least 2-3 releases behind the “latest version”.
  • Regardless of what version of software the customers were running, the majority of them would only turn on about 30% of the features (mostly the defaults).
  • It was not unusual to see customers take 3-6 months, and sometimes longer, to test and validate a new release before it would go into production. And then there was the waiting period before an “outage window” was available for the update.

While this was equally frustrating for the vendors and the customers, the give and take of features vs. deployments sort of settled into a groove, and the industry learned to deal with these realities.

Then software-defined-<whatever> and open source started to become more mainstream, which brought with them a completely different update model.

For example:

  • VMware hypervisor has a major release every 12 months, with a minor bug-fix release every 6 months.
  • OpenStack has a major release every 6 months.
  • Docker has updates every 1-2 months.
  • Technologies like Kubernetes and Mesos are releasing updates every 1-2 months.

Houston….we might have a problem.

The good news is that these newer technologies bring with them lots of tools and best-practices to adjust to the increased pace of updates. Stuff like:

  • Continuous Integration and Continuous Deployments (CI / CD) – tools like GitHub and Jenkins and JFrog and others that help send new software into a pipeline of tests and have it deployed into Dev, Test, Staging and Production.
  • Automation – lots of tools like Docker, Chef, Puppet, Ansible, SaltStack to help automate deployments.
  • DevOps – the cultural phenomenon where Dev and Ops groups (sometimes integrated into one group) work more closely together to collaborate on deployments.
  • Blue / Green Deployments – the model where updates are deployed to a small % of the available resources to validate that the changes work in production – and, if things are good, the updates can then be deployed to more (or all) of the resources.

The bad news is that IT organizations will need to learn all these new techniques and tools if they want to take advantage of these new technologies.

…or they can look to various vendors and cloud providers that will deliver those technologies as a service.

…or they can just use public cloud services and not worry about maintaining any of them (updates included).

So before IT organizations start evaluating these new technologies, they need to evaluate how well they can absorb a very rapid learning curve of new operational models. Just figuring out upgrades might be enough to make them think twice about any DIY projects.


May 14, 2016  5:25 PM

Are VCs a Funding Bridge for Open Source Projects?

Brian Gracely
CoreOS, Docker, Kubernetes, Mirantis, Open source software, VCs

Lately, I’ve been trying to connect a few dots that seem to be complicated to connect:

  • VCs (and others) have publicly said that there will never be another Red Hat in the open source software world – namely that there will never be another company that makes large amounts of money (at $1B+ levels) supporting open source software.
  • The most widely used services on the Internet wouldn’t exist without open source software.
  • The majority of the revenue that comes from activities associated with open source software is driven by companies where open source is just an input into a cloud-delivered service.
  • VCs continue to pour millions of dollars into companies that lead open source projects (e.g. Pivotal ($250M+), CoreOS ($28M), Rancher Labs ($20M), Weave ($15M)) – $300M+, and that’s just in the last week. Fairly recently, we’ve seen Mesosphere get $73M, Mirantis get $200M….and the list goes on and on.
  • It’s not clear that these same VCs have any idea what business models are viable when the core technology is based on open source software. The only model they discussed in the podcast was a SaaS vendor that included aspects of open source in their offering – e.g. managed open source.

As I wrote last week, the 2016 Open Source Jobs Report highlights that 87% of companies are struggling to find talent for emerging open source technologies. And as I talk to more and more of these companies, many of them will tell you that they initially thought they were selling to developers, but it was often the operations or security teams that held the budget to make the buying decisions.

Here’s where I’m confused:

  • It seems like the best way to monetize open source is to either deliver it as a cloud service, or be in services/support (e.g. Red Hat, Chef, Puppet, etc.). The services/support model is limited in scale because it’s people-centric.
  • The largest cloud providers (AWS, Azure, Google) are getting more proficient at taking the open source projects and turning them into services (e.g. container schedulers, etc.). They are eliminating the operations skills-gap that’s called out in the Open Source Jobs Report.
  • The large cloud providers are not acquiring the open source startups, especially in the infrastructure domains.
  • The largest contributors to open source projects tend to be the largest vendors, who can afford to pay engineers to stay focused on open source projects (e.g. Intel, IBM, Red Hat, Cisco, HPE).

So why do we continue to see all this VC funding?

  • Do the VCs expect the large traditional vendors (except Red Hat) to try and use open source acquisitions as a way to prevent the on-going commoditization of hardware? This doesn’t seem to work, as most of these acquisitions have been < $200M (except Citrix buying Cloud.com for $400M) and most don’t align to their existing go-to-market models.
  • Do the VCs expect the cloud providers to start acquiring the startups for talent? That could work, but probably not at the large valuations that now exist.
  • Do the VCs expect that developers, who have traditionally not held large budgets, will start becoming large buying centers? And will they be using that software on-premises, or in the public cloud?
  • Do the VCs expect the startups to reach IPO? At least for the infrastructure companies, that path has not yet shown success.

So many of these investments are in areas that have overlapping technology and present way too many choices to Enterprises that don’t have the skill-sets in-house to make those long-term decisions.

At the end of the day, the only thing I can logically think of is that the VCs see these investments as relatively small, but strategic enough to kick-start development that the cloud providers will someday add as a valuable service. Then the VC investments in application-centric services can take advantage of those services and hopefully scale faster.

I continue to not be able to connect some of these dots together. Would be interested in hearing how others see these investments evolving…


