From Silos to Services: Cloud Computing for the Enterprise


October 17, 2016  9:00 PM

4 Takeaways from the VMware and AWS Partnership

Brian Gracely
AWS, Azure, Dell, Dell EMC, Hybrid cloud, Managed Services, MSP, VMware

Last week, VMware and AWS announced that they are working on a new service to deliver VMware technology from AWS' cloud – called "VMware on AWS".


We talked about this with Greg Knieriemen and Keith Townsend on The Cloudcast.

The strategies for VMware and AWS are becoming clearer:

VMware has been looking for more ways to control how their SDDC stack is deployed, as well as ways to downplay the role of the underlying hardware. They are focused on displacing the functionality of hardware-centric compute, networking and storage, and downplaying the focus on cloud management (e.g. the vRealize Suite). They have been getting pressure from customers to better define an IaaS cloud strategy, and they now have solid partnerships in place with IBM and AWS.

AWS has attracted developers and startups, but struggled to attract the "traditional IT" that is aligned to VMware, Oracle and Microsoft workloads. This partnership now provides a way for customers to potentially migrate entire sets of data center resources to AWS, as well as an endorsement from VMware that AWS is now a viable destination for Enterprise workloads.

What is Hybrid Cloud between now and 2018?

The VMware on AWS offering is still in beta/preview, with GA scheduled for some time in 2017. Since this is being targeted at Enterprise customers, you can expect any uptake to happen later in 2017 or into 2018. Many details still need to be filled in by VMware, especially in the areas of pricing, licensing transitions (for migrated workloads), and integrations with AWS services.

For many companies, this offering will be compared to the Microsoft Azure Stack, which is also supposed to GA in mid-2017 (after originally being scheduled for late-2016). This will require updates to customers' on-premises Windows Server environments, which traditionally lag behind the GA dates.

This means that both offerings realistically have 2018 timelines before we hear about mainstream adoption. And both of these offerings are primarily based on simplified IaaS services (compute, storage, networking), but we’re seeing more and more C-level executives that are focused on Digital Transformations and evolution of how they develop software applications. Will we see greater adoption of PaaS and CaaS (e.g. CloudFoundry, Docker DataCenter, Kubernetes, Red Hat OpenShift, etc.) platforms before these offerings become viable in 2018?

Will more MSPs move to AWS?

If you follow the writings of Ben Thompson (Stratechery; Exponent podcast), you know that Amazon will often experiment with a new platform idea before expanding its reach at greater scale. The VMware on AWS offering is much closer to a Co-Lo or Managed Services offering than a Public Cloud offering. Is AWS using the VMware on AWS model as an experiment to attract more and more existing Co-Lo and MSP customers to their platform? The MSP market is highly fragmented and many of them don't have the resources to continue to invest in non-differentiated data center facilities. Is this deal just the precursor to AWS becoming the de facto server provider to the MSP ecosystem?

What is the Dell/EMC stance on AWS?

Even before the Dell-EMC merger, it was often difficult to figure out the strategic focus of the EMC Federation of companies. EMC wanted to sell hardware on-premises. VMware wanted to commoditize hardware and wanted to create a homogeneous "cloud" ecosystem of all VMware SDDC. Pivotal wanted to abstract away any infrastructure or cloud and focus on a platform for developers. In general, their one commonality was a competitive disdain for AWS, either directly or indirectly. And Dell generally shared that competitive posture, choosing to be more closely aligned to Microsoft. But now that has changed. One of the most valuable brands within Dell Technologies is now aligned with AWS and IBM cloud offerings. Both Pivotal and Dell-EMC are getting more aligned to Azure or Azure Stack, but VMware has no current alignment to their one-time foes in Redmond. So where does this leave a customer that has interest in potentially using Azure in a Hybrid Cloud environment?

October 16, 2016  11:12 PM

Beware the new IoT use cases

Brian Gracely
AWS, Cisco, Cloud bursting, Edge computing, fog computing, IBM, iot, Machine learning

Back in 2007-2009, as the awareness of cloud computing was growing, you couldn't go a couple of days without hearing about the killer use-case for cloud – "cloud bursting", the magical ability of the cloud to make sure that your website could handle the rush of Black Friday shoppers.

Many years later, we're (mostly) past the talk of cloud bursting. But now the buzzword universe is obsessed with the Internet of Things and the trillions of dollars of value and insight it will unlock for future generations. And with this promise of technology nirvana come the new use-cases that will help you understand why it's needed for your business.

Let’s take a look at a couple of examples that I’ve recently seen that have left me scratching my head.

The Internet Connected Appliance

At first glance, this is a very interesting approach to leveraging Artificial Intelligence (AI), Serverless Computing, and IoT to create a maintenance program for the filters within refrigerators. Using sensors to do predictive maintenance on remote devices is potentially a “killer application” of IoT and AI. And the serverless angle is very appealing as well. We’re actually spinning off a new podcast (The Serverlesscast, @serverlesscast) soon to explore this area in more depth.

But the thing about this example that had me questioning it was the actual value to the end-customer. We’ve all heard about connected homes for at least a decade, but actually making that work has proven to be extremely complicated – and left many of us having to play tech-support for our friends and parents. In this case, the following things are needed:

  • Networking on a device, which also needs a UI to program it to join the local WiFi (and hopefully use secure passwords and protocols to connect). Plus an extended tech support model to answer questions from non-techies that just want a new water filter.
  • All of the serverless elements to be programmed and integrated together (a rough sketch follows this list).
  • All of the AI logic to be programmed to “be trained” on the behaviors of the refrigerator over time.
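
To make the serverless piece above a bit more concrete, here is a minimal, purely illustrative Python sketch of a filter-replacement function. The event fields, the threshold, and the order_replacement_filter helper are all hypothetical placeholders – this is not any real appliance's API, just the general shape of an AWS-Lambda-style handler.

```python
import json

# Hypothetical threshold: assume the appliance reports estimated filter life as a percentage.
FILTER_LIFE_THRESHOLD_PCT = 10

def order_replacement_filter(device_id):
    """Placeholder for whatever ordering/fulfillment integration the appliance vendor would use."""
    print(f"Ordering a replacement filter for device {device_id}")

def handler(event, context):
    """AWS-Lambda-style entry point, invoked when the refrigerator publishes a telemetry message.

    The payload fields below are assumptions for illustration; a real system would also feed
    these readings into the trained model mentioned above.
    """
    payload = event.get("body")
    payload = json.loads(payload) if isinstance(payload, str) else event
    device_id = payload.get("device_id", "unknown")
    filter_life_pct = payload.get("filter_life_pct", 100)

    if filter_life_pct <= FILTER_LIFE_THRESHOLD_PCT:
        order_replacement_filter(device_id)
        action = "replacement_ordered"
    else:
        action = "no_action"

    return {"statusCode": 200, "body": json.dumps({"device_id": device_id, "action": action})}
```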

Within my own home, I recently bought a new refrigerator. For the water filter replacement, GE gave me the option to have a replacement sent every 6 months for a fixed fee. The model works great – I don’t have to worry about the filter AND I don’t have to worry about any of the networking or applications that could break when it’s time to get a new filter. Might I only need it every 7 months instead of 6? Sure, that’s a possibility. But it’s a frictionless model for the consumer, hence there is value for me.

The Roads will Brake the Cars

I saw this one in The Register this week and I just don't know what to think about it. It's one thing to have Tesla build a nationwide network of its own superchargers for electric cars. It's another thing to think that our highway system, which is massively underfunded as it is (and constantly under repair), is going to get "embedded braking systems". This to me is the new cloud bursting example for IoT.

All of this might sound a little bit cynical about IoT. Fair enough. And just so you don’t think I’m leaving you with nothing but bad demo ideas, here’s one that seems pretty powerful and useful –

There are lots of good things happening with IoT these days – just be careful which types of stories you believe.

September 30, 2016  11:05 AM

What will it take to compete with AWS?

Brian Gracely
AWS, Azure, IaaS, Kubernetes, Microsoft, Oracle, Oracle Cloud

Earlier today I received an email from a friend that contained this simple, yet complicated, question. The person doesn't work for any vendor, so the question wasn't aimed at any specific vendor's offering. The question was more of a response to the breadth of announcements from Oracle, Microsoft and Google over the last couple of weeks.

Oracle announced that they were (soon) going to launch a brand new IaaS cloud and attempt to compete directly with AWS. As expected, the Twitterati was very skeptical of their ability to deliver, as was Ben Thompson of Stratechery. If you don't already listen to Ben's "Exponent" podcast, be sure to add it to your favorite pod-catcher. Oracle had been making steady progress in SaaS and PaaS revenues, but IaaS isn't really their core area of focus. Are they getting distracted?

Microsoft announced that AzureStack will eventually ship, but before that, they are strengthening their partnership with Docker by embedding Windows containers in Server 2016. This got some people excited, but Richard Seroter made a good point on the Pivotal Conversations podcast by highlighting that Windows Server 2003, 2008 and 2012 currently hold 87% market share – meaning that it might be 10 years before Windows Server 2016 becomes mainstream in the Enterprise. Given that 33%+ of Microsoft Azure already runs Linux, and products like PowerShell and SQL Server are moving to Linux, will Windows Server 2016 ever gain major traction in the Enterprise?

Google announced a bunch of new technology enhancements to the Google Cloud Platform, including major database and AI capabilities. They also announced that Google Container Engine (GKE), based on Kubernetes, was running the popular Pokémon Go platform – requiring massive scalability. But then Google proceeded to do Google things by renaming "Google for Work" to "G Suite". Not only were Nate Dogg and Warren G not consulted, but Google also has to try and convince Enterprise IT that their massive scale is a fit for Enterprise-scale problems.

And we’re still about 2 months from AWS re:Invent.

While all of these large cloud providers have massive cash assets and are making announcements, none of them have really delivered a home run recently. Azure seems to be gaining Enterprise mindshare, but they still haven’t fully realized how to leverage their massive installed base and sales force. Oracle also has a massive installed base, but getting them to migrate to the cloud will not be an easy process. Years of customization will be difficult to move to standardized cloud environments. And Google has awesome technology, but the market continues to ask them if they are still serious about delivering cloud services.

So to answer my friend, the market is evolving with offerings that will compete with AWS. Not all of them will be effective, but we're moving into a new stage of public cloud usage where more Enterprises view it as a viable option. It's still unclear if those Enterprises will have the same needs or affinity for AWS as the customers of the last 5-7 years have.

September 20, 2016  12:39 PM

Private Equity is Eating the World

Brian Gracely
Capex, Dell, EMC, Opex, Rackspace, VMware
Image Source: Pixabay

As everyone's favorite genius Sir Isaac Newton once said, "for every action, there is an equal and opposite reaction." Back then, software was not eating the world, but obviously he had the foresight to realize that Marc Andreessen's famous proclamation would have ramifications on the hardware side of the technology industry.

It would be easy to look at the recent rash of Private Equity transactions and assume that they were driven by the general commoditization of hardware. But IMHO, there are more things at work here.

The software-eating-the-world trend is being driven by (and is driving) three key factors:

  1. The availability of relatively frictionless public cloud resources.
  2. The growth of open source software projects, which enable powerful access to the technology that drives Big Data, Mobile and Web-scale applications and architectures.
  3. Startup companies “disrupting” existing industries by putting the Internet between customers and their service, removing the friction of many layers of sales channels and distribution.

The guys over at the Software Defined Talk podcast did an excellent job of reviewing several of the recent Private Equity transactions (Dell/EMC, HPE Software, Rackspace, etc.). As you can see, they aren’t all hardware-centric businesses, but the cause of their disruption is tightly coupled to the three elements I mentioned above.

In essence, these companies that get acquired or are doing deals with Private Equity (directly or indirectly) are struggling with the transition where those three factors are impacting their business, or struggling to manage the breadth of their portfolios. Over time, as more companies attempted to build (often via acquisition) a “complete stack” set of solutions, many have struggled to also create a sales and marketing model that targeted the expanded list of buyers at large customers. Their models attempted to mix hardware, software, professional services and various consumption models (CAPEX, OPEX, Subscription).

Now all of these Private Equity deals are attempting to provide cash back to the vendor companies. How the vendors will deal with the new capital is still TBD. Will they use it to make new acquisitions that are more closely aligned to their core business? Or will they use the money to do financial engineering for shareholders or debt holders? If nothing else, it will significantly change the level of transparency these companies offer the market and customers.

Not only does this leave the market with many questions about the future of these operating models, but it also creates several new questions:

  • What happens to the technologies that were sold to the Private Equity companies?
  • Will we see the Private Equity trend expand to some of the larger, traditional companies who have seen top line growth rates near “0” or negative for the last few years?
  • Are there any great opportunities for young leaders that want to revitalize a business that was sold to Private Equity?

These large shifts in ownership will make it very interesting to watch the levels of investment and innovation over the next 3-5 years. At a time when many end-user customers are trying to drive their technology agenda faster, many vendors are taking a step back to try and figure out how to adjust to this faster paced market.

September 11, 2016  11:00 PM

Cloud Computing – Revenue Transparency?

Brian Gracely
AWS, Cloud Computing, Gartner, HPE, Managed Services, Open source software, Oracle Cloud, Pivotal, Public Cloud, Rackspace, VMware
Image Source: Pixabay

It wasn’t long ago that if you wanted to get some perspective on the size of the revenues of the cloud computing market, you had the following options:

  • AWS didn’t report their numbers, so you could make some educated guesses about their “Other” number in SEC filings.
  • Microsoft didn’t break out the individual products, but rather they lumped them into broader categories.
  • Gartner provided detailed capabilities reports as part of their IaaS MQ, as well as some trajectory concepts, but didn’t offer a breakdown of revenues.
  • Many publicly traded technology vendors talked about “cloud” solutions and leadership in cloud, but almost never broke out cloud revenues as a specific number.
  • Many mid-tier cloud providers are privately owned, or part of a larger conglomerate, so they tend to not break out cloud-specific revenues.

So if you wanted to gauge the size of public cloud or private cloud market, the options were somewhat fragmented and the results tended to be foggy. Some analyst firms attempted to size portions of the market, but that often brought criticism from technology vendors that didn’t believe they had counted enough of their technology portfolio revenues – without giving any additional guidance to the analyst community.

Some of this is understandable, as many companies do not break out revenues below a certain level (e.g. $100M, $500M, or $1B), depending on the size of their company. Still others may not be disclosing this information because their products aren't doing as well in the market as they would like you to believe.

But some things are beginning to change – some for the good and some for the bad – at least from the transparency perspective.

  • Amazon now breaks out AWS revenues each quarter.
  • Microsoft now breaks out Office 365 revenues, as well as putting their Azure revenues into the Intelligent Cloud bucket – which also includes SQL Server and Windows Server (on-premises) revenues.
  • Oracle breaks out their SaaS and PaaS revenues for Oracle Cloud. They sort of break out their IaaS revenues, but these will likely start including revenues from the Oracle Cloud Machine, which lives on-premises – which leads us to the challenge of what counts as “Oracle Cloud” (on-prem, public cloud, some hybrid combination??)
  • Google/Alphabet does not break out Google Cloud Platform revenues.
  • Dell Technologies, owners of VMware (vCloud Air), Pivotal and Virtustream, no longer have to disclose their revenues to the public after their merger with EMC closed on September 7th.
  • Rackspace was recently acquired by Private Equity firm Apollo, so they no longer have to disclose their revenues to the public.
  • HPE recently sold their software portfolio to Micro Focus, but is apparently keeping their Helion Cloud business under HPE. HPE does not break out the revenues for the Helion business.

So now we have AWS as the most transparent guidepost of cloud revenues, and the hardware vendors moving more towards private ownership and limited (if any) revenue disclosure. I talked about this (and many other industry topics) with Keith Townsend (@CTOAdvisor) on a recent Cloudcast podcast.

Given the changing landscape in financial transparency, it will be interesting to see how customers adapt to working with vendors. Do they continue to believe vendor claims about market-share, or do they begin to shift more focus towards open-source projects and track community participation as a more transparent metric of growth trajectory?

August 31, 2016  6:18 PM

Cloud-Native Applications in Plain English

Brian Gracely
Ansible, Automation, containers, Docker, Java, Jenkins., Kubernetes, Microservices, OpenShift, Red Hat
Image Source: Pixabay

These days, you can't attend any event without seeing some slides about "Digital Transformation" and how companies like AirBnB, Uber and Netflix are disrupting all sorts of existing industries. At the core of these discussions is the premise that all companies need to become software companies (e.g. "software is eating the world"). And the types of software applications that companies need to build are called "cloud-native", with the applications architected using "microservices".

All of this sounds cool, but I find that so many of the people that discuss this on an everyday basis are very technical and assume that the audience has a similar level of experience and knowledge. So I thought I would take a functioning cloud-native application, using microservices (and lots of other stuff), and break it down into simple terms that hopefully make it easier for people to understand.

[Disclosure: This video/demo is from the Red Hat Summit 2016 keynote. My day job is with Red Hat. The reason I chose this video is not because of the company/products, but because it is long enough to see all of the moving parts – both on the developer and operator side of a set of applications.]

The applications come from this video, and start around the 43:00 mark.

An Overview

This is a customer-facing application. The application has the following characteristics. Think of these as the basic inputs that your product team might provide to the development team:

  • It can be accessed as a browser-based application or a mobile-application.
  • It involves user-interaction which updates the experience in real-time.
  • All user-interactions are tracked and stored in real-time, and based on the interactions, the system will apply business-logic to personalize the experience.
  • Both the technology teams and the business teams have visibility into the data coming from the user-interactions.
  • The application must have the ability to be frequently updated to be able to add new features, or be modified to adjust the experience based on interaction data.

Looking at these basic requirements, they include some basic elements that should be at the core of all modern applications [note: I created this framework at Wikibon based on interactions with many companies that have been leading these digital transformations and disruptions]:

  1. The API is the product. While parts of any application may live in a local client, the core elements should be interacting with a set of APIs in the backend. This allows user flexibility, and broader opportunities to integrate with 3rd-party applications or partners.
  2. Ops are the COGS. For a digital product, the on-going operational costs are the COGS (Costs of Goods Sold). Being able to optimize those COGS is a critical element to making this digital product a success.
  3. Data Feedback Loops. It’s critical to collect insight into how customers interact with the business digitally, and this data must be shared across technical and business team to help constantly improve the product.
  4. Rapidly Build and Test. If the product is now software, it is important to be able to build that software (the applications) quickly and with high quality. Systems must be in place to enable both the speed and the quality. Using data as the feedback loop, knowing where to build new aspects of the software will be clearer than before.
  5. Automated Deployments. Once the application has been built (or updated) and tested, then it's critical to be able to get it to the customer quickly and in a repeatable manner. Automation not only makes that process repeatable (and more secure), but it helps with the goal of reducing the Ops COGS.

The Developer’s Perspective (starting at 50m 0s)


The demonstration starts by looking at this application from the perspective of the application developer. In this case, the developer will be building a Java application on their laptop. In order to make this simpler for them, they use a few tools. Some of the tools are directly on their laptop and some of them are running as a cloud service (could be private cloud or public cloud):

  • “Project” – The local representation of the application on the developer tools.
  • "IDE" (Integrated Development Environment) – The local tools on the developer's laptop + tools running on a server to help them build the application
  • “Stacks” – The groups of software/languages/framework that the developer will use to build the application.

The tools provided to the developer are there to make sure that they have all the things needed to build the application, as well as making sure that their software stack aligns to what is expected by operations once the application eventually gets to production.

Once that is in place, the developer can then begin writing (or updating) the application. When they have completed a portion of the application, they can move it to the system that will help integrate it with the other pieces of the application:

  • “Microservices” – The smaller elements of an application that perform specific tasks within a broad application experience.
  • “Pushing to Git” – Updating the application into a Git or GitHub repository, so that it can then be stored, tested and eventually integrated with other pieces of the application.
  • "Continuous Integration" (CI) – A set of automated tests that look at the updated application and make sure that it will run as expected (a toy sketch of a push-triggered CI hook follows this list).
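
As a rough illustration of how "Pushing to Git" and "Continuous Integration" fit together, here is a toy Python webhook receiver. The payload fields and the run_ci_pipeline function are hypothetical stand-ins; a real setup would use GitHub/GitLab webhooks feeding a CI server such as Jenkins.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_ci_pipeline(repo, branch):
    """Hypothetical stand-in for kicking off the real CI system (e.g. a Jenkins job)."""
    print(f"CI started for {repo}@{branch}: clone, build, run the automated test suite...")

class PushEventHandler(BaseHTTPRequestHandler):
    """Receives a Git 'push' webhook and triggers CI for the updated microservice."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Assumed payload shape: {"repository": "...", "branch": "..."}
        run_ci_pipeline(payload.get("repository", "unknown"), payload.get("branch", "main"))
        self.send_response(202)  # accepted: the pipeline runs asynchronously
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PushEventHandler).serve_forever()
```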

NOTE: There was a small section in the demo where the developer made a small change/patch to the application. The small change then went through the "pipeline" and eventually got integrated and moved to production. This was done to show that updates to applications don't have to be large (or any specific size) and can now safely be done at any time of day – there is no longer a need for outage windows.

The “Pipeline” (starting at 56m 0s)


This part of the demonstration is focused on the guts of the “bits factory”, the place where the software gets tested and reassembled before it’s ready for production use.

  • “Continuous Integration / Continuous Deployment” (CI/CD) – A set of tools that take application updates from developers, automate the testing of that software with other pieces of the broad application, and package the validated software in a way that can be run in production.
  • “Pipelines” – The on-going flow of application updates from developers. In an agile development model, these updates could be occurring many times per day.
  • “QA” – Quality Assurance – The part of a pipeline responsible for quality testing. Hopefully this is where software bugs are found, before going into production.
  • “Staging” – An intermediate step in a pipeline between QA and Production, where a company will try and emulate the stress of production. Sort of like a dress-rehearsal for a theater play.
  • "Blue / Green Deployment" – Even after being QA tested, when application updates are available, there needs to be a way to deploy them and make sure they work properly before completely removing the old versions. Blue/Green is a way to validate that the application update works before eliminating the old version. It also allows for "roll backs" if some chaos occurs (see the sketch after this list).
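
To illustrate the Blue/Green idea from the list above, here is a simplified Python sketch of the cutover logic. The deploy, smoke-test, and traffic-switch functions are placeholders; on a real platform these steps map to pipeline stages and router/load-balancer changes, not Python functions.

```python
def deploy(version, color):
    """Placeholder: roll the new application version out to the idle ('blue' or 'green') environment."""
    print(f"Deploying {version} to the {color} environment")
    return True

def smoke_test(color):
    """Placeholder: run a quick set of checks before any real traffic is sent."""
    print(f"Running smoke tests against {color}")
    return True

def switch_traffic(to_color):
    """Placeholder: repoint the router/load-balancer at the newly validated environment."""
    print(f"Routing 100% of traffic to {to_color}")

def blue_green_release(new_version, live_color="blue"):
    idle_color = "green" if live_color == "blue" else "blue"

    if not (deploy(new_version, idle_color) and smoke_test(idle_color)):
        print(f"{idle_color} failed validation; traffic stays on {live_color} (automatic roll back)")
        return live_color

    switch_traffic(idle_color)
    # The old environment is kept around briefly so a fast roll back is still possible.
    return idle_color

if __name__ == "__main__":
    blue_green_release("v1.0.1")
```

The key design point is that the old version keeps serving traffic until the new one has proven itself, which is what makes updates at any time of day safe.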

The Application and User-Experience (at 1hr 0min)


Now that the application has been written and moved from being tested to being deployed into production, on a container application platform, the actual interaction with customers can begin. In this demo, we can see the real-time user interactions and how the interactions are tracked by the backend databases (and business logic). While the demo only showed the basic user interaction, the overall experience was made up of a number of microservices (login/user-authentication, user profile, game interactions, scoring dashboards, user status levels, etc.). Each of those microservices could be independently updated without breaking the overall application experience.

The Business Analyst (at 1hr 3min)


Previously, I talked about how it’s important for both the technical and business teams to have visibility to customer/market interactions with these new digital products/applications. In this part of the demo, we see how a “business analyst” (or some might call it a “data scientist”, depending on the scope of the work) would interact with the data model of the application. They are able to see the interaction data from the application, as well as make changes to how data relationships can be modeled going forward. They could also make adjustments to business logic (e.g. marketing incentives, customer loyalty programs, etc.). As they make changes to the business rules, those changes would be validated, tested and pushed back into the application, just like a developer that had made changes to the core application.

Platform Operations (at 1hr 6min)


This part of the demonstration is somewhat out of order, because the platform operations team (a.k.a. sysadmin team, network/storage/server team, infrastructure team) would have set up all of the platform components prior to any of the application work happening. But in this part of the demonstration, they showcase several critical operations elements:

  • How to use automation to quickly, consistently and securely deploy the platform infrastructure that all of the applications will run on.
  • How to easily scale the platform up (or down) based on the needs of the application and the business. "Scale" meaning: how does the system add (or remove) resources without causing outages and without the end-user customers knowing that it's happening? In essence, ensuring that the user-experience will be great, and that costs will be optimized to support the right level of experience (a simplified sketch follows this list).
  • How the operations teams can work more closely together with the applications teams to deliver a better overall experience to the end-user customer.
  • How to manage and monitor the system, in real-time, to correct problems and failures.
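
As a simplified sketch of the "scale up or down without outages" point above, the snippet below shows the kind of decision loop an automated platform might run. The thresholds, the utilization source, and the node add/remove functions are assumptions for illustration; a real platform would do this through its own APIs and automation tooling (Ansible playbooks, autoscalers, etc.).

```python
SCALE_UP_THRESHOLD = 0.80    # assumed: add capacity above 80% average utilization
SCALE_DOWN_THRESHOLD = 0.30  # assumed: remove capacity below 30%

def average_utilization(nodes):
    """Placeholder: in practice this would come from the platform's monitoring system."""
    return sum(node["cpu"] for node in nodes) / len(nodes)

def add_node(nodes):
    print("Provisioning a new node and joining it to the platform (no user-visible disruption)")
    nodes.append({"cpu": 0.0})

def drain_and_remove_node(nodes):
    print("Draining workloads off a node, then removing it")
    nodes.pop()

def reconcile(nodes, min_nodes=2):
    """One pass of the scaling loop: compare observed utilization to the thresholds."""
    utilization = average_utilization(nodes)
    if utilization > SCALE_UP_THRESHOLD:
        add_node(nodes)
    elif utilization < SCALE_DOWN_THRESHOLD and len(nodes) > min_nodes:
        drain_and_remove_node(nodes)
    return nodes

if __name__ == "__main__":
    cluster = [{"cpu": 0.90}, {"cpu": 0.85}, {"cpu": 0.88}]
    reconcile(cluster)
```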

A/B Testing the Market (at 1hr 12min 20sec)


In an early segment of the demonstration, they showed how application updates can go through the “pipeline” and be deployed without causing any disruption to the user-experience. But sometimes the business isn’t sure that they want the entire market to see any update, so they’d like to do some market experimentation – or A/B testing. In this case, they are able to test a specific experiment with a subset of the users in order to collect specific digital feedback about a change/new-feature/new-service. The application platform is able to provide that level of granularity about which users to test and how to collect that data feedback for the business analyst teams (or marketing teams, or supply chain teams, etc.). These experiments are treated just like any other change to the application, in that they go through the pre-deployment QA testing and integration, as any other update to the microservices of the application.
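
One common way to implement the "subset of the users" part of A/B testing is deterministic bucketing: hash the user id so the same user always lands in the same cohort. Below is a small Python sketch of that idea; the experiment name, the percentage, and the event-recording function are illustrative assumptions, not details from the demo.

```python
import hashlib

def in_experiment(user_id, experiment, percent):
    """Deterministically place `percent` of users into the experiment cohort."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def record_event(user_id, experiment, variant, event):
    """Placeholder: ship the interaction data back to the analytics / business-analyst tooling."""
    print(f"{user_id} | {experiment} | {variant} | {event}")

def handle_checkout(user_id):
    # Assumed experiment: show a new loyalty offer to 10% of users.
    variant = "new-loyalty-offer" if in_experiment(user_id, "loyalty-offer", 10) else "control"
    record_event(user_id, "loyalty-offer", variant, "checkout_viewed")
    return variant

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, "->", handle_checkout(uid))
```

Because the bucketing is deterministic, each user sees a consistent experience for the life of the experiment, while the collected events flow back to the business analyst teams described above.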

Agile for Developers AND Operators (at 1hr 14min)


Just as application developers are trying to go faster to keep up with business demand, so too must the operations teams be able to move faster to keep up with the developers. In this part of the demo, we see them using similar technology and process to update the platform infrastructure when a security threat or software bug happens. It is important for them to be able to test new software before deploying it live into production, as well as experiment and make sure that something new does not create unexpected problems in the overall environment.

In Summary

There were a lot of moving pieces to this demonstration. From a technology perspective, it included:

  • Containers (Docker)
  • Container Application Platform (OpenShift/Kubernetes)
  • Middleware (JBoss)
  • Many development languages (Java, nodeJS, .NET)
  • Microservices / Cloud-native applications
  • “Legacy” applications (SQL databases)
  • Continuous Integration (Jenkins)
  • Automation (Ansible)
  • and lots of other stuff…

But what I hope you were able to see was that these new environments are much more agile, much more flexible (for both developers and operators), and require closer collaboration between the business and technology teams to deliver a great end-user experience.

August 30, 2016  9:57 PM

Tracking Container Standards

Brian Gracely
containers, Docker, Kubernetes, Linux, RunTime, Scheduling, standards
Image Source: Pixabay

The last few weeks have been very interesting in the world of container “standards”. For some historical context, check out this excellent infographic about the history of containers.

After CoreOS released the "rkt" (Rocket) container specification in 2015, the industry was concerned about competing "standards", which led to the creation of the Open Container Initiative (OCI). At the time, Docker donated the libcontainer code to the OCI. If you weren't paying close attention, you might have believed that the Docker contribution would have created a "container standard". But in reality, Docker only donated a portion of what is needed to make containers work. Here is a good explanation of the various layers of Linux that are needed to get containers working properly, both now and historically. But for a while, the industry seemed to be able to deal with both Docker and the OCI, with many implementations and commercial products shipping with the ability to work with Docker containers.

[NOTE – It's important to remember that there is a difference between docker, the open source application container engine project, and Docker, Inc., the commercial company that is the lead contributor to docker.]

At DockerCon 2016, Docker, Inc released v1.12, which embedded their container scheduling engine (Swarm | Swarm Mode) into the container runtime (Docker Engine). This integration was intended to help Docker, Inc. expand their commercial offerings (Docker Datacenter). But the work was done outside of the normal, open community process, which raised some concerns from companies that partner or integrate with docker.

A few weeks ago, I had a conversation with Kelsey Hightower (@kelseyhightower, Google) about his concerns regarding the evolution of Docker, Inc. and docker, as well as his desire to see the community evolve to have a standard implementation that wasn't commercially controlled by a single company.

Since then, a number of blogs and discussion forums (here, here, here, here, here, here) have been written expressing further concern about the velocity, stability, and community involvement with docker.

With containers gaining so much traction with developers, it will be very interesting to watch how the community evolves. As some people have said, it might be useful to have "boring infrastructure" to underlie all of the rapid changes that are happening in the application world.

With the CNCF hosting KubeCon and CloudNativeCon in November (Seattle) and Docker hosting DockerCon Europe (Barcelona) on back-to-back weeks, we’ll all be watching to see how the container landscape finds a balance between open communities and commercial offerings.

August 21, 2016  5:05 PM

What’s the future of the PaaS “layer”?

Brian Gracely
CaaS, Cloud Foundry, containers, DBaaS, DevOps, Docker, FaaS, Kubernetes, MBaaS, Mobile development, OpenShift, PaaS, platform, Platform as a Service

Cloud Computing Stack – IaaS/PaaS/SaaS (Image Source: JEE Tutorials)

If you give a techie a whiteboard and a marker, regardless of the topic, they tend to want to draw “a stack”. They like to build layers upon layers of technology, showing how each is dependent and interacting with other layers.

When people talk about the cloud computing stack, they often start with the basic layers, as defined by NIST – IaaS, PaaS and SaaS. The IaaS layer is fairly easy to understand – it's programmatic access to compute (usually a VM), storage and networking. The SaaS layer is equally easy to understand, as we've all used software applications delivered via the Internet (Gmail, WebEx, Google Search, etc.). SaaS is the rental model for software applications.

But the PaaS layer is more complicated and nuanced to explain. PaaS is the layer that is supposed to make things easier for developers. PaaS includes all of the things that applications care about – databases, queueing, load-balancers, and middleware-like-things. PaaS is also the layer that has been through the most evolution and starts/stops, including things like Heroku, Google App Engine, AWS Simple Queue Service (the 1st service that AWS launched), Cloud Foundry, OpenShift, Parse (MBaaS), Google Firebase (DBaaS) and various DBaaS services.

What should begin to become obvious is that PaaS is not really a layer; instead it should be thought of as a set of services. And it’s not a specifically defined set of services, but rather a set of evolving services:

  • Database services – these could begin as SQL or NoSQL databases, but the services quickly evolve toward simplifying the tasks around sizing, I/O management, and backup and replication management.
  • Queuing, Notification, Messaging services – a whole set of modular services that are replacing traditional, heavyweight middleware.
  • Functions services – The whole set of capabilities that are being called #Serverless (see here, here) or Functions-as-a-Service (FaaS). In some cases, these will be isolated services, but in other cases they will be features of a broader PaaS platform.
  • "Application-helping" services – There is a whole list of tools that help developers get applications built faster – whether these are considered "middleware", "mobile development", "runtime" (e.g. buildpacks, S2I, etc.) or something else.
  • “Stuff-around-the-edges” services – This can be everything from authentication services to API gateways to CI/CD tools.
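
From the application's point of view, most of these services show up as endpoints and credentials injected into its environment rather than infrastructure to manage. Here is a minimal Python sketch of that consumption pattern; the variable names (DATABASE_URL, QUEUE_URL) are common conventions used here as assumptions, not any specific platform's contract.

```python
import os
from urllib.parse import urlparse

def service_from_env(var_name, default_url):
    """Read a backing-service endpoint that the platform injects into the application's environment."""
    url = urlparse(os.environ.get(var_name, default_url))
    return {"scheme": url.scheme, "host": url.hostname, "port": url.port, "path": url.path}

if __name__ == "__main__":
    # Hypothetical defaults, standing in for values a platform would bind to the application.
    database = service_from_env("DATABASE_URL", "postgres://db.internal:5432/orders")
    queue = service_from_env("QUEUE_URL", "amqp://mq.internal:5672/events")
    print("Database service:", database)
    print("Queue service:", queue)
```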

Where things start to get a little bit fuzzy is when the discussion of containers comes up. On one hand, containers are somewhat of an infrastructure-level technology. They represent computing, they need networking, they need storage. On the other hand, they are primarily going to be used by developers or DevOps teams to package applications or code. They also require quite a bit of technology to get working properly, at scale.

Many times, people will blur the lines between CaaS (Containers-as-a-Service) and PaaS (Platform-as-a-Service), using the existence or lack of an embedded runtime (within the platform) as the distinction. Technically, that might be a fair distinction. But it can also confuse the marketplace.

Still other people want to drop the term "PaaS" and replace it with "Platform". This makes sense when the conversation is about the business value of a platform, with all its extensibility and the ability for 3rd-parties to equally create value. It's less useful when that approach is used to avoid any discussion about what technologies are embedded within the platform.

At the end of the day, the PaaS / Platform space is rapidly evolving. But as more companies try to understand how to compete in a "software is eating the world" marketplace, it becomes the most important layer to understand.

August 13, 2016  12:01 PM

Slack, GitHub and Container Registries – The New IT UIs

Brian Gracely
Ansible, API, Bots, Chef, CLI, Datadog, DevOps, Github, IT, Jenkins., Operations, plugins, Puppet, Slack, UI

Years ago, in the dusty old days of IT, when we used to rack and stack our own equipment, there were some common interfaces to equipment. If you worked in networking, that was most likely the Cisco IOS CLI. Most other domains had a similar CLI interface that was directly linked to boxes from F5, Checkpoint, Dell, HP, Juniper, etc.

And for the most part, we worked in relative isolation. It was just you and the CLI. In some cases, it would be you and a script and the CLI, and hopefully you kept that script in a centralized location so it could be versioned and shared across teams.

But things are starting to change, and change quite rapidly. We first discussed this a couple years ago with Mark Imbriaco, then at GitHub, about the new tools they were using to manage their environment. He told us about a tool they created called “HuBot“, which allowed them to collaborate around Ops issues and make automated tasks simpler to understand. Since then, we’ve seen tons of companies integrate their technologies with both source-control systems (e.g. GitHub, etc.) and chat systems (e.g. Slack, HipChat, etc.) – see the above list for a small sample of example integrations.

Slack Chat

As more and more people talk about "DevOps", there always comes a point where people realize that, in order to achieve some of the benefits, there needs to be better interworking between groups of people and the underlying technology. There is a cultural element that must be evolved. But as we've learned from books like Made to Stick and Switch, there is always a push and pull dynamic between getting change implemented and getting change adopted. So while it's true that no individual tool is going to help a group or company towards the benefits of a collaborative DevOps culture, the tools plus the newer behaviors they drive can help move the progress in a positive direction.

These new UIs for IT are beginning to make that progress a reality for many companies, beyond just the ones that speak at specialized DevOps events. These tools are building upon the basic premises of:

  • Centralized, versioned information
  • Built-in automation of tasks that can be integrated into broader workflows and processes (a toy sketch follows this list)
  • Centralized, open collaboration between team members or across teams
  • Logged actions of what happened and the context of the decision-making process around those actions
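
To make the "built-in automation" premise a bit more concrete, here is a toy, HuBot-inspired Python sketch of a chat command triggering an operational task. It does not use the real Slack or HuBot APIs; the bot name, the commands, and the deploy function are hypothetical.

```python
def deploy_service(service, version):
    """Placeholder for the automation that actually runs (a CI/CD job, an Ansible playbook, etc.)."""
    return f"Deploy of {service} {version} started; progress will be posted back to this channel."

def service_status(service):
    return f"{service}: all checks passing (placeholder data)"

# Map chat commands to automated tasks; every invocation is visible to (and logged for) the whole team.
COMMANDS = {
    "deploy": lambda args: deploy_service(*args),
    "status": lambda args: service_status(args[0]) if args else "usage: status <service>",
}

def handle_message(user, text, bot_name="@opsbot"):
    """Tiny ChatOps-style dispatcher, e.g. '@opsbot deploy payments v1.2'."""
    if not text.startswith(bot_name + " "):
        return None
    command, *args = text[len(bot_name) + 1:].split()
    action = COMMANDS.get(command)
    reply = action(args) if action else f"Unknown command: {command}"
    print(f"[#ops] {user}: {text}")
    print(f"[#ops] opsbot: {reply}")
    return reply

if __name__ == "__main__":
    handle_message("brian", "@opsbot deploy payments v1.2")
    handle_message("keith", "@opsbot status payments")
```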

The third leg of the new IT UI stool is container registries, such as Docker Hub (and many locally deployed versions – often integrated into container platforms). Containers are becoming a big deal. These registries act as another building block for tracking critical elements of infrastructure or applications, as well as serving as an integration point for multi-domain collaboration.

All of these tools and platforms are evolving the product-centric CLI into a set of open APIs that allow developers and operators to integrate them into a set of workflows that help them better achieve their technical and business goals.

While the CLI will most likely stick around for quite a while, we're quickly seeing a new set of IT UIs evolve to better serve the needs of fast-moving, rapidly-changing applications and the collaboration between the teams that build and support them.

July 31, 2016  11:17 AM

Cloud-native Platform Focus Areas

Brian Gracely
Application portfolio, CaaS, Cloud Foundry, containers, Developers, Docker, Kubernetes, OpenShift, PaaS

Earlier this month, I wrote a piece about the Architectural Considerations for Open Source PaaS and Container platforms. It was a follow-up to a series I wrote about 12 months earlier, looking at various aspects of these types of Cloud-native application platforms.

Changes are happening quickly in the PaaS market. These platforms were previously known as PaaS (Platform-as-a-Service), but many of the offerings tend to be shifting their focus more towards Containers-as-a-Service (CaaS).

Cloud-native Platform Hierarchy of Needs (Image Source: Brian Gracely)

I tried to put this in the perspective of a “hierarchy of needs”, which evolved from basic stability, to basic developer and application needs, to the scalability and flexibility needs as usage of the platform grows within a company.

  • Platform Stability and Security – Before any applications can be onboarded onto the platform, how does the platform itself provide a level of operational stability and security, including the underlying OS that runs the platform?
  • Application Portfolio – Does the platform support a broad range of customer applications, including both stateless and stateful applications? The latter part is critical because most customers will need to either re-purpose existing applications, or interconnect new and existing applications.
  • Developer Adaptability – How much flexibility does the platform provide for developers to get their code / applications onto the platform? Does it mandate that they must move to a single tool for onboarding, or is it flexible in terms of how applications get onboarded? How much re-training is needed for developers to effectively use the platform?
  • Scalability – As more applications are added to the platform, how well will it scale? This scalability looks at areas such as # of containers under management, # of projects, # of applications, # of groups on a multi-tenant platform. It also looks at the scalability of any associated open source community (e.g. Cloud Foundry, Docker, Kubernetes, Mesos, etc.) that is contributing to the projects associated with a platform.
  • Flexibility – In the spectrum between DIY, Composable and Structured platforms, there are trade-offs in how flexible the system is today vs. in the future. Given the rapid pace that platforms and the associated technology are evolving, IT organizations and developers need to consider where they expect their usage of a platform to evolve over time. Will the POC experience extend into the future, as usage expands? Will the needs of the “pioneer” team extend to “settlers” and “town planners”?

NOTE: There will probably be people that will wonder where the cultural aspects of cloud-native fit into these hierarchical needs. They actually fit into each layer and probably could be represented as a vertical bar that sits to the edge of the diagram.

As the pace of change in the platform market continues to accelerate, it is important to have a framework to evaluate how the changes impact the needs of both the developer and operator groups within a company. With so many changes happening so quickly, it's easy to be confused about what is important and what is just technology noise. Being able to prioritize how something new impacts those platform needs will be critical for IT organizations and developers looking to build cloud-native applications, as well as for those evolving aspects of their existing application portfolio.

