From Silos to Services: Cloud Computing for the Enterprise


February 19, 2018  11:19 PM

Understanding the Challenges of Kubernetes

Brian Gracely
certification, Compatibility, containers, DevOps, Kubernetes, Microservices, platform

Last week I spoke about some of the critical factors that IT leaders should consider if they are bringing containers and Kubernetes into their environments. These decision criteria will have longer-term impacts on how well their application developers and IT operations teams are able to deploy and scale their containerized applications.

But the Kubernetes technology and community are moving very quickly, with lots of new features and entrants coming into the market. And anytime a market moves quickly, short-term myths and misunderstandings can confuse both the education process and the decision-making process. For the last couple of weeks, we’ve been looking at some of the more common misperceptions that exist around the Kubernetes community, and at how to rationalize them against the technology available today. We did this in a two-part series (Part I, Part II).

We covered the following topics:

Applications

Myth/Misunderstanding 1 – Kubernetes is a platform.
Myth/Misunderstanding 2 – Containers are only for microservices
Myth/Misunderstanding 3 – Microservices are always “micro” (small in size)
Myth/Misunderstanding 4 – Kubernetes is only for stateless apps

Architecture

Myth/Misunderstanding 5 – Architecture – Kubernetes Multi-Tenancy
Myth/Misunderstanding 6 – Architecture – Kubernetes is only for Operators

Compatibility and Certification

Myth/Misunderstanding 7 – What does “GKE Compatible” mean?
Myth/Misunderstanding 8 – Enterprises should run Kubernetes as trunk version

Open Source Communities

Myth/Misunderstanding 9 – Are OSS stats important? How to interpret them?

In going through this list, we often found that the misperceptions were not created by vendor FUD, but mostly came from a lack of experience with a broad range of potential applications or deployment models. In other cases, the misperception is untrue now, but had previously been true in the early days of Kubernetes (e.g. v1.0, several years ago).

We know that it can be difficult to always keep up with all the changes happening in the Kubernetes community, so we hope that those two shows help to eliminate some of the confusion. Going forward, it may be useful to look at resources such as “Last Week in Kubernetes Development” or the KubeWeekly newsletter from the CNCF.

February 5, 2018  11:38 PM

5 Critical Criteria for Choosing Kubernetes Platforms

Brian Gracely
Code, communities, Kubernetes, Multi-cloud, UX

If 2017 began with a market full of choices for managing containers, it ended with a fairly unanimous vote from the community – Kubernetes has won the container orchestration battles. So now that nearly every vendor is claiming to support Kubernetes in 2018, what are the critical criteria that companies will use to determine which Kubernetes offering to choose for the applications that will drive their digital transformations? Let’s take a look at five that will be top-of-mind for decision-makers (in no particular order).

User-Experience / Developer-Experience

Ultimately, companies deploy modern infrastructure (e.g. containers) and application platforms to improve their ability to deploy and support modern applications; applications that are focused on improving the business (e.g. better customer experiences, better delivery of services, increased profitability, etc.). In order for this to happen, developers must be willing to use the platform. A good developer experience gives developers the freedom to use the tools and interfaces they believe will make them most productive. In some cases, these will be tools that are container-native. In other cases, developers just want to write code and push it into code repositories and automated testing systems. The best platforms will allow developers to operate in either mode, seamlessly. And subsequently, the operations teams shouldn’t have to think about these different modes as independent “silos”. They are just two different ways to access the same platform.

Engineering Commitment

If a company is considering an investment in a commercial platform, it is fundamentally looking for assistance in augmenting its internal development or operations teams. This means it wants to understand how a vendor’s (or cloud provider’s or system integrator’s) engineering staff and experience will make it successful. Do they commit code to the core projects? Do they support the code for extended periods of time? Will they create bug fixes? Will they backport those fixes into upstream projects? Do they only support the projects they lead, or will they also support additional projects that give customers more flexibility and choice?

Multi-Cloud

Unlike some of the IaaS platforms/projects, which are primarily targeted at “private cloud” environments, Kubernetes has always been designed to be multi-cloud and to allow application portability. This is where it’s important to find reference architectures for the various cloud implementations (AWS, Azure, GCP, VMware, OpenStack, bare metal), as well as tools to help automate and upgrade those environments. This is also an area where some implementations offer support for Open Service Broker capabilities that allow Kubernetes platforms to integrate with 3rd-party services from various cloud providers. Vendors such as AWS, Red Hat, Ansible, GCP, Microsoft and Pivotal have implemented open service brokers.

Ecosystem

While a large number of companies already have certified Kubernetes implementations, it’s still important to know which implementations are making the effort to test and validate that they work together. This means not just press-release partnerships, but proven working code, automation playbooks, and implementations that are out in the open.

Operational Skills

One of the last things to consider is how well a company’s existing operational skills will match with the needs of the new platform requirements. Are they able to integrate existing build tools (CI/CD pipelines), or does that get replaced? Can they reuse existing automation or virtualization skills? Can they get training and certifications to improve their skills, or is it mostly just code examples to sift through? Or does the vendor offer options to augment existing operational skills with managed services or automated updates directly delivered by the vendor?


February 1, 2018  7:29 PM

Understanding Helm – The Kubernetes Package Manager

Brian Gracely
Kubernetes, Microsoft

Virtual machines are a fairly easy concept to understand. A virtual machine is software that emulates an entire computer (server). It allows a physical server to be logically “segmented” into multiple software computers, or virtual machines. They give Infrastructure/Operations teams a way to run more applications on a single physical computer, more efficiently than before. For the most part, virtual machines are a technology tool that is primarily used (and managed) by the Infrastructure/Operations teams, with Developers being more focused on the applications that run inside the virtual machine.

In contrast, containers are a technology that is applicable to both developers and operators, depending on the task at hand. For developers, containers provide a standard way to package an application along with its dependencies (additional libraries, security credentials, etc.). They also allow developers to run and validate applications locally on their laptops. For operators, container platforms allow applications to be immutably deployed, with their scaling and availability characteristics deterministically described.

So how does a developer “describe” to the operations team how they want the application to be deployed and run? One technology that makes this possible is called “Helm”, the Kubernetes Package Manager. Helm is an official CNCF project, with a large development community supporting it. Helm packages applications into “Helm charts”, which describe the application, its dependencies, and how the Kubernetes controllers should treat the application. Helm charts can be version controlled, to allow for rollback if a problem occurs after deployment. Helm charts can also be centrally stored in an online registry-type service, such as KubeApps from Bitnami.
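
To make the chart idea concrete, here is a toy Python sketch of the core pattern a chart captures: a template that describes the application, rendered against a separate set of values that can be swapped per environment. This is only an analogy (Helm itself uses Go templates and YAML manifests, not Python), and all of the names and values below are made up for illustration.

    from string import Template

    # Stand-in for a chart template: the application description,
    # with the tunable settings left as placeholders.
    deployment_template = Template("""\
    name: $app_name
    image: $image:$tag
    replicas: $replicas
    """)

    # Stand-in for a values file: the per-environment settings.
    values = {"app_name": "my-api",
              "image": "registry.example.com/my-api",  # hypothetical registry
              "tag": "1.2.0",
              "replicas": 3}

    # "Rendering" the chart produces the concrete description that the
    # Kubernetes controllers act on; rolling back amounts to re-rendering
    # with a previously recorded template/values revision.
    print(deployment_template.substitute(values))

Because the values are data rather than code, they can be recorded per release, which is what makes the version-control and rollback story above possible.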

We recently sat down with Taylor Thomas, Software Engineer at Nike and @HelmPack maintainer. We walked through the basics of the Helm technology, how it interacts with commonly used container tools and Kubernetes, as well as how the Helm community is planning for the upcoming Helm v3 release. We also talked about the agenda for the upcoming Helm Summit in Portland.

During that discussion, we highlighted some great resources for anyone that wants to learn more about Helm, or start using the technology for their containerized applications.


January 8, 2018  9:45 AM

Understanding Service Meshes for Microservices

Brian Gracely
containers, Kubernetes, Load balancing, Microservices, Proxy

One of the most popular topics coming out of the CNCF’s KubeCon event in Austin was the concept of a “Service Mesh”.

There were a number of great sessions (videos) at KubeCon about Service Mesh technologies (including Istio, Envoy, Linkerd and Conduit).

This week we discussed the basics of Service Meshes on the PodCTL podcast, and I’ve previously discussed Istio and Linkerd on The Cloudcast.

What is a Service Mesh?

If you look at the origin of the service mesh projects that have emerged over the last year, most of them began as a necessity in the webscale world. Linkerd was created by engineers who had worked at Twitter (and have since founded Buoyant, which also created the Conduit project). Envoy was created by the engineering team at Lyft. And Istio started as a joint project between Google, IBM and Lyft, and has since seen large contributions from Red Hat and many others.

In its most basic definition, a service mesh is application-layer routing technology for microservice-centric applications. It provides a very granular way to route, proxy and load-balance traffic at the application layer. It also provides the foundation for application-framework-independent routing logic that can be used (at a platform layer) by any microservice. This article from the Lyft engineering team does an excellent job of going in-depth on the basic use-cases and traffic flows where microservices might benefit from having a service mesh, as opposed to just using the native (L2-L4) routing from a CaaS or PaaS platform + application-specific logic.
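
As a rough illustration of what “application-layer routing” means, here is a toy Python sketch of a weighted routing rule, the kind of per-request decision a mesh makes on behalf of the application. This is not Istio, Envoy or Linkerd code; the service names, port and weights are invented for illustration.

    import random

    # A toy routing table: the kind of L7 rule a service mesh applies
    # outside of the application code. Weights are percentages.
    routes = [
        {"upstream": "reviews-v1:9080", "weight": 90},
        {"upstream": "reviews-v2:9080", "weight": 10},  # canary version
    ]

    def pick_upstream(routes):
        """Choose a backend in proportion to its configured weight."""
        upstreams = [r["upstream"] for r in routes]
        weights = [r["weight"] for r in routes]
        return random.choices(upstreams, weights=weights, k=1)[0]

    # Roughly 10% of requests land on the canary version.
    print(pick_upstream(routes))

The point is that a rule like this lives in platform configuration, not in any one service, so every microservice gets the same routing behavior regardless of the language it is written in.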

Why are Service Meshes now gaining attention?

The biggest reason that we’re now hearing about Service Meshes is the broader adoption of microservice architectures for applications. As more microservices are deployed, it becomes more complicated to route traffic between them, discover new services, and instrument various types of operational tooling (e.g. tracing, monitoring, etc.). In addition, some companies wanted to remove the burden of certain functionality from their application code (e.g. circuit breakers, various types of A/B or canary deployments, etc.), and Service Meshes can begin to move that functionality out of the application and into platform-level capabilities. In the past, libraries like the Netflix OSS services were language-specific (e.g. Java), which gave teams similar functionality, but only if they were writing applications in the same language. As more types of applications emerge (e.g. mobile, analytics, real-time streaming, serverless, etc.), language-independent approaches become even more desirable.
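
To show how small a piece of this logic can be once it is pulled out of the application, here is a minimal circuit breaker sketch in Python. It is an illustrative assumption, not code from any of the mesh projects, and the thresholds are made up.

    import time

    class CircuitBreaker:
        """Fail fast after repeated errors; retry after a cooldown period."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures  # consecutive failures before opening
            self.reset_after = reset_after    # cooldown in seconds
            self.failures = 0
            self.opened_at = None

        def call(self, fn):
            # While the circuit is open, fail fast instead of
            # hammering a service that is already unhealthy.
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # cooldown elapsed; try again
            try:
                result = fn()
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()
                raise
            self.failures = 0  # a success closes the circuit
            return result

A mesh implements the same idea in the proxy layer, so a Java service, a Python service and a Go service all get identical circuit-breaking behavior without any of them embedding a library like this.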

Want to Get Started with Service Meshes?

Consider working through the tutorial for Istio on Kubernetes over on Katacoda. Also consider listening to these webinars about how to get Istio working on OpenShift (here, here).


January 2, 2018  10:32 PM

The Top 10 Stories from 2017

Brian Gracely
ai, AWS, Azure, Bitcoin, Blockchain, containers, Docker, HCI, Kubernetes, ml, VMware

It goes without saying that a lot happened in 2017, as tech and politics and culture intersected in many different areas. Before the year came to a close, we looked back at 10 of the stories that had the biggest impact on the evolution of IT.

Container Stuff

1. Docker >> New CEO; docker >> Moby

The company and open source project that was once the darling of the tech media went through some significant changes. Not only did the company replace its long-time CEO, Ben Golub, with Steve Singh, but it also changed the identity of the project that created such a devoted open source community. In May, Docker decided to take control of the intellectual property around the name “Docker”, and changed the name of the popular project from “docker” to “moby”. This move provided a more distinct line between Docker, Inc (a.k.a. “big D Docker”) and the open source community (a.k.a. “little d docker”), and clarified the company’s future monetization plans.

2. Kubernetes winning the Container Orchestration Wars – but do we now only talk about it being boring and irrelevant?

Just a few years ago, there was a great deal of debate about container orchestration. CoreOS’s Fleet, Docker Swarm, Cloud Foundry Diego, HashiCorp’s Nomad, Kontena, Rancher’s Cattle, Apache Mesos, and Amazon ECS were all competing with Kubernetes to gain the favor of developers, DevOps teams and IT operations. As we closed out 2017, Kubernetes had run away from the field as the clear leader. Docker, Cloud Foundry, Rancher and Mesosphere added support for Kubernetes, putting their alternative offerings on the back burner. AWS and Azure brought new Kubernetes services to market, and even Oracle jumped in the game. And early adopter Red Hat continues to do Red Hat things in the Kubernetes space with OpenShift.

And with that standardization came some level of boredom from the early adopter crowd. The annual KubeCon event was littered with talks about how Kubernetes has become “boring” and “a commodity”. They have all moved on to 2018’s newest favorite, “service meshes”.

Cloud Stuff – When do we hit the Cloud Tipping Point?

3. Azure getting bigger, but big enough? ($20B run-rate; includes O365)

As Microsoft continues to evolve its business toward more cloud-based services and support of open source technologies, it continues to grow its Azure and Office 365 businesses. But is it growing fast enough to keep up with AWS? Microsoft claims to be on a $20B run-rate for its cloud business, but it doesn’t break out individual elements such as Azure vs. O365 vs. Windows Server or management tools.

4. AWS re:Invent – can it be stopped? ($18B run-rate)

With dozens of new announcements at AWS re:Invent, the industry continues to chase the leader in cloud computing. The Wall Street Journal recently looked at ways that Amazon (and AWS?) may eventually slow down, but it’s hard to see that happening any time soon.

5. Google Cloud – still ??? (says almost 2,500 employees)

Given the growth of Alibaba Cloud in Asia and continued growth from Oracle Cloud (mostly SaaS business), does Google Cloud still belong in the discussion about “the Big 3”? Apparently Google’s cloud business is now up to 2500 employees, and has made some critical new hires on the executive staff, but when will we start to see them announce revenues or capture critical enterprise customers?

Infrastructure Stuff

6. New Blueprint Emerging for Infrastructure?

Over the last 3-5 years, no segment of the market has undergone as much change and disruption as IT infrastructure. VMware is a networking company, Cisco is a server and security company, HPE isn’t sure what it once was anymore, and DellEMC is focused on “convergence” of its storage and server business. Hyper-Converged Infrastructure (e.g. Nutanix, VSAN, etc.) and “Secondary Storage” (e.g. Rubrik, Cohesity, etc.) are bringing Google and Facebook DNA to infrastructure, with an element of public cloud integration. Throw in the transition from VMs to containers, and infrastructure is setting up for more consolidation, more software-defined and more disruption in 2018.

7. VMware’s cloud direction?

The Big 3 Public Clouds have all placed their bets on some form of a hybrid cloud story – AWS + VMware; GCP + Nutanix or Cisco or VMware/Pivotal; and Azure + Azure Stack. Will VMware continue to have a fragmented story of VMs to AWS and containers to GCP/GKE, or will it create new partnerships to make this story more consumable by the marketplace? And whatever happened to the VMware + IBM Cloud partnership from 2016?

8. Where does Serverless Fit? Does it? Is it a Public Cloud only play?

Serverless, or FaaS (Functions-as-a-Service), is rapidly climbing up the hype curve, with AWS Lambda leading the way. AWS has been taking serverless in broader directions lately, with a “serverless” Aurora database offering, and “Fargate”, a serverless-like offering for the infrastructure under its container services. But what about the rest of the market? Will a strong open-source option emerge for multi-cloud or on-prem offerings (e.g. OpenWhisk, Kubeless, Fission, OpenFaaS), or will serverless be the one technology that never really translates from the public cloud into other environments? Expect 2018 to be a big year for serverless, as more and more examples of applications and usage patterns begin to emerge from Enterprise companies.

Stuff we don’t understand 

9. Bitcoin / Blockchain – did you bet your mortgage on it?

Banks were allowing people to take out 2nd mortgages and HELOCs against their houses to buy Bitcoin. Bitcoin was up 16,000%+ in 2017, and some “experts” were saying that it wasn’t a bubble. And the underlying technology, Blockchain, is still somewhat of a mystery to many people, although it’s supposed to be the next big thing for almost every industry. What’s happening here?

10. AI & ML – Everyone’s including AI in their products

These things we know:

  • The hype around AI and ML is huge
  • When areas of AI or ML improve (e.g. auto-fill in searches), the industry assumes they’re simple and moves the goalposts for what miracles of computer science we expect next.
  • Everyone is afraid of SkyNet


January 2, 2018  7:05 PM

Kubernetes Year in Review

Brian Gracely
AWS, Cloud Foundry, containers, Docker, Kubernetes, OpenShift, Oracle, Red Hat

Over at the PodCTL podcast, we’ve been discussing Kubernetes and Containers for the last 6 months on a weekly basis. Before the break, we looked back on how far the Kubernetes community had evolved in just a short period of time.

  • SETUP | GETTING STARTED: A few years ago, people said that getting started with Docker Swarm was easier than with Kubernetes. Since then, the Kubernetes community has created tools like Minikube and Minishift to run locally on a laptop, along with automation playbooks in Ansible; services like Katacoda provide simple online tutorials for learning; and multiple cloud offerings (GKE, AKS, EKS, OpenShift Online|Dedicated) make it simple to get a working Kubernetes cluster immediately.
  • ENSURING PORTABILITY: Nearly every Enterprise customer wants a Hybrid Cloud environment, but they need to understand how multiple cloud environments will impact this decision. The CNCF’s Kubernetes Conformance model is the only container-centric framework that can assure customers that Kubernetes will be consistent from one cloud environment to another. And since it’s built entirely on the APIs and tools used to build the Kubernetes technology itself, it allows companies to include conformance testing as part of their day-to-day development.
  • INFRASTRUCTURE BREADTH: Other container orchestrators had ways to integrate storage and networking, but only Kubernetes created standards (e.g. CNI, CSI) that have gained mainstream adoption, allowing dozens of networking and storage vendors (and open source projects) to easily integrate with Kubernetes and the breadth of conforming Kubernetes platforms.
  • APPLICATION BREADTH: The community has evolved from supporting stateless apps to supporting stateful applications (and containerized storage), serverless applications, batch jobs, and custom resource definitions for vertical-specific application profiles.
  • IMPROVING SECURITY: A year ago, there were concerns about Kubernetes security. Since then, the community has responded with better encryption and management of secrets (see the short sketch after this list), and improved Kubernetes-specific container capabilities like CRI-O and OCI standardization. In addition, the security ecosystem has embraced new innovations around continuous monitoring, scanning, and signing of images within the container registry.
  • IMPROVING PERFORMANCE: Red Hat (and others) have started the Performance SIG in the Kubernetes community to focus on high-performance applications (HPC, Oil & Gas, HFT, etc) and profiling the required performance characteristics of these applications in containerized environments.
  • IMPROVING THE DEVELOPER EXPERIENCE: One of the themes of KubeCon 2017 (Berlin) was developer experience, and in just a few months we’re seeing standardization around the Helm format (for application packaging), Draft to streamline application development, and Kubeapps to simplify getting started with apps from a self-service catalog. We’ve also seen Bitnami build a parallel to their existing container catalog, with applications that are packaged specifically for OpenShift’s security model of non-root containers (vs. the Docker model of root-enabled containers).
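
On the secrets point above, one concrete detail is worth illustrating: by default, Kubernetes stores Secret values base64-encoded rather than encrypted, which is exactly why encryption of secrets at rest was such a notable addition. A quick Python sketch, with a made-up password:

    import base64

    # What lands in etcd for a Secret value is just base64 text...
    encoded = base64.b64encode(b"s3cr3t-db-password").decode()
    print(encoded)

    # ...so anyone who can read etcd can trivially decode it, which is
    # why encrypting Secrets at rest (and locking down etcd) matters.
    print(base64.b64decode(encoded).decode())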

All in all, it was an amazing evolution of the Kubernetes community, from roughly 1,000 people at KubeCon 2016 in Seattle to more than 4,300 at KubeCon 2017 in Austin. 2018 will bring increased competition and innovation, as well as many more customers running production applications on Kubernetes – both in their own data centers and in the public cloud.


November 30, 2017  12:17 AM

AWS Expands the Definition of Serverless

Brian Gracely
Aurora, AWS, Kubernetes, Lambda

The keynote of the AWS re:Invent show has become a firehose of new announcements and new technologies. This year was no different. But instead of focusing on the laundry list of new features or services, I wanted to focus on one specific aspect of several of the announcements.

When AWS Lambda (e.g. “Serverless”) was announced in 2014, it opened several discussions. The first was focused on how this new, lighter-weight programming paradigm would be used. The second was that using the service didn’t require (nearly) any operational involvement from the perspective of planning the underlying infrastructure resources (capacity, performance, scalability, high availability); in essence, it was truly on-demand. The third was the beginning of an extremely granular pricing model, which is truly pay-per-usage, rather than paying from the moment an instance starts.

During the keynote, AWS made several announcements that appear to extend the concept of “serverless-style operations” to non-Lambda AWS services.

In particular, both Fargate and Aurora Serverless help remove the concerns of underlying infrastructure planning from more complex services – Kubernetes/container cluster management and database management, respectively.

AWS is by no means the only cloud provider starting to drive this trend. Azure has a service called “Azure Container Instances” that does something similar to AWS Fargate. Google Cloud Platform has several data-centric services (e.g. BigQuery, Dataflow, Dataproc, Pub/Sub) that don’t require any planning of the underlying infrastructure resources.

These new (or enhanced) services shouldn’t be confused with the Serverless or Functions-as-a-Service development paradigm. But they do show the beginnings of what many in the Serverless community are starting to call “DiffOps” (or Different Ops). It’s not the all-or-nothing claim of “NoOps”, but a recognition that the requirements on Ops teams could change significantly with these new services. What DiffOps will look like is still TBD. We discussed it briefly on The Cloudcast with Ryan Brown a few weeks ago.

For Enterprise Architects and Operations teams, this will definitely be a trend to watch, as it has the potential to significantly change or disrupt the way that you do your day-to-day jobs going forward.


November 19, 2017  11:58 AM

The Evolution of the Kubernetes Kommunity and Konformance

Brian Gracely
API, certification, Cisco, Compliance, CoreOS, Google, Kubernetes, Linux, Microsoft, Red Hat, VMware

This past week the Cloud Native Computing Foundation (CNCF) announced their “Certified Kubernetes Conformance Program”, with 32 distributions or platforms on the initial list. This allows companies to validate that their implementations do not break the core Kubernetes APIs, and it grants them the right to use Kubernetes naming and logos.

Given that the vast majority of vendors are either now offering Kubernetes, or plan to offer Kubernetes in 2018, this is a valuable step forward to help customers reduce concerns they have about levels of interoperability between Kubernetes platforms.

[NOTE: We’ve been covering all aspects of Kubernetes on the new PodCTL podcast. The show is available via RSS feed, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players.]

Beyond the confidence this can provide the market, the Kubernetes community should be credited for doing this in a transparent way. Each implementation needs to submit their validated test results via GitHub, and the testing uses the same automated test suite that is used for all other Kubernetes development. There’s no 3rd-party entity involved.

Terminology

This may seem like a nitpick, but I think it’s important to get some terminology correct, as this has confused the market with previous open source projects. We dissected an open source release in previous posts. While open source projects have been around for quite a while, and usage has grown significantly over the years, the market is still inconsistent in how it speaks about them. Let’s look at a couple of examples:

#1 – “The interoperability that this program ensures is essential to Kubernetes meeting its promise of offering a single open source software stack supported by many vendors that can deploy on any public, private or hybrid cloud.” (via CNCF Announcement)

#2 – Dan Kohn said that the organization has been able to bring together many of the industry’s biggest names onboard. “We are thrilled with the list of members. We are confident that this will remain a single product going forward and not fork,” he said. (via TechCrunch)

You’ll notice that terms like product and stack are interchanged with project. This happens quite a bit, and it can set the wrong expectations for customers who are used to certain support or lifecycle expectations from the software they use. We often saw this confusion with OpenStack, which was actually many different projects pulled together under one umbrella name, but which could be used together or independently (e.g. “Swift” Object Storage).

It’s important to remember that Kubernetes is an open source project. The things that passed the conformance test are categorized as either “distributions” or “platforms”, which means they are vendor products (or cloud services). And this program doesn’t cover things that plug into non-Kubernetes-API aspects like the Container Network Interface (CNI), the Container Storage Interface (CSI) or container registries.

Beyond the Conformance Tests (and Logos)

While there are very positive aspects of this program, there are other elements that still need to evolve.

Projects vs. Products (Distributions & Platforms)

It is somewhat unusual to have a certification around an open source project, especially a fast-moving one like Kubernetes, since the project isn’t actually certified, but rather the vendor implementations of that project (in product form). Considering that Kubernetes comes out with a new release every 3 months, it will be interesting to watch how the community (and the marketplace) reacts to constantly having to re-certify, as well as the questions that will arise about backwards compatibility.

Another area which is somewhat unique is that vendors have been allowed to submit offerings before they are Generally Available in the market.

A third aspect that will be interesting to watch is how certain vendors handle support for implementations if they don’t really contribute to the Kubernetes project. For example, Pivotal/VMware, Cisco and Nutanix have all announced partnerships with Google to add Kubernetes support to their platforms. Given that those three vendors have made very few public contributions to the Kubernetes project, these appear to be more like “OEM” partnerships. So how will a customer get support for these offerings? Will they always need a fix from Google, or will they be able to make patches themselves?

Long-Term Support (LTS)

One last area that will be part of the community discussion in 2018 is an LTS (Long-Term Support) strategy. With new Kubernetes releases coming out every three months, many companies have expressed an interest in a model that is more focused on stability. This eventually happened with the Linux community, and is beginning to happen with OpenStack distributions. It will be an interesting topic to watch, as many people within the community say that LTS models stifle innovation. On the flip side are customers who need or want to run the software, but struggle to keep up with frequent upgrades and patching.


November 19, 2017  8:34 AM

Looking ahead to AWS re:Invent 2017

Brian Gracely
Aurora, AWS, containers, GPUs, ISVs, Kubernetes, Lambda, Oracle, Service Broker, SIS

As we get closer to the annual AWS re:Invent event, it’s time for all of us prognosticators to speculate on what new products and services AWS might announce.

What I’ve learned over the years is that their announcements tend to follow a few common rules:

  • It’s never a great idea to be a top-level sponsor, as it means that your business is successful, which puts you on AWS’s radar. I know that sounds weird, but AWS has much better insight into what happens on its platform than most IT vendors have into their ecosystems. In past years, there have usually been 1-2 of these companies that have their service replicated as a new AWS service.
  • If you’re a business that hasn’t adapted its business model in a while, you’re potentially vulnerable to a new AWS service. Last year it was both managed services and virtual private servers.
  • If there is a popular technology that is primarily being used as DIY (Do-It-Yourself) today, it’s not uncommon for AWS to create a bundled, more managed offering (e.g. AWS Aurora for databases).
  • AWS is less interested in maintaining the status quo, especially within IT, than it is in unlocking new potential for “builders” and business owners. This means that IT Ops teams may often feel threatened by new services that automate tasks that used to require highly skilled (and certified) IT personnel.
  • AWS chips away at highly complex problems piece-by-piece. Things like Big Data, Data Science, Machine Learning, Artificial Intelligence are huge challenges. AWS has been trying to make them modular and simplified with each new service they add.
  • Data is sticky. Data has gravity and is difficult to move. So AWS is always looking for new ways to get customer data within their services. The ingestion fees are usually $0, and the fees to take action on the data or send it back out of the system (e.g. interact with customer applications) is where AWS makes their money.
  • AWS has created a large portfolio of services and capabilities. They always like to talk about how many new features they have created. This is sometimes overblown, as any large IT vendor with a broad portfolio is creating 100s of new features each year across many products – they just usually don’t frame them in terms of a feature count. Last Week in AWS and Top Stories from AWS this Week are two excellent sources of information to keep up with new updates each week.

So given all of that, what might we expect to be announced at re:Invent?

  • More CPU types and adjusted pricing for Compute or Storage.
  • More Regions and Availability Zones, especially in Europe and Asia.
  • New networking capabilities, with a focus on higher bandwidth access into the cloud and across clouds.

Enterprise Partnerships – The biggest revenues in IT are in the Enterprise, which has been AWS’ focus for the last 3-4 years. Expect to see them continue to highlight SI partnerships to help scale delivery. Expect them to highlight some new ways that they’ll create hybrid cloud environments (example: AWS Service Broker with Red Hat OpenShift).

Lots of Data and Lots of Lambda: CTO Werner Vogels’ keynote is supposedly going to focus on the intersection of data and serverless, two areas where AWS is extremely focused and two areas where their services are very sticky (read: much potential for lock-in to the AWS cloud). I expect to see many early-adopter customer stories and use-cases highlighted.

Going after “the old guard” – AWS likes to refer to large, existing IT vendors as “the old guard”. Their favorite seems to be Oracle. They have been aggressively trying to offer alternatives to the Oracle DB (e.g. Aurora), as well as database migration tools into the AWS cloud. They’ve also gone after Oracle data warehouses with AWS Redshift. I’d expect to see them begin to target AWS Lambda at the edges of common Oracle DB capabilities (e.g. batch processing).

Containers – Containers have been a hot technology for the past 2-3 years. Many surveys show that customers are already running containers (e.g. docker) on AWS, along with homegrown Kubernetes clusters. AWS has a managed container service (AWS Elastic Container Service – ECS), but with the rising popularity of Kubernetes, I’d expect to see AWS offer a managed Kubernetes service to compete with Azure AKS and Google GKE.

Talk about Open Source: AWS has had a mixed track record on open source. They consume a lot of it, but their contributions have been scattered. Google and Microsoft have been highlighting their commitment to OSS. AWS’ Adrian Cockcroft has been more visible this year, growing his open source team, so I’d expect them to highlight their commitment to open source.


October 22, 2017  3:56 PM

Is Per-Minute Billing the Next Step to Unlimited Cloud Plans?

Brian Gracely
AWS, Azure, cloud, EC2, Public Cloud

As human beings, we either like having a level of certainty about how much we’re going to pay for something, or we don’t like to think about it at all. We buy an all-day pass at Disney World, or we sign up for bills to automatically be charged to our credit cards each month. Constantly having to think about the cost of an activity eventually takes away from the experience of any given activity.

In that context, one of the most complicated elements of cloud computing is pricing. We’ve covered it on The Cloudcast many times (here, here, here, here).

How much does anybody actually pay for a cloud service? There are fees to use a given service (e.g. computing), as well as fees to access the service (e.g. bandwidth), and then some fees are only charged based on certain usage patterns (e.g. outbound traffic vs. inbound traffic; intra-region vs. inter-region traffic). And then there are all the options about how to pay for the cloud service: on-demand, reserved instances, spot market, sustained usage discount, pre-emptive services, etc.
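
To see how those pieces combine, here is a hedged back-of-the-envelope model in Python. The rates are placeholders invented for illustration, not any provider’s actual pricing, and real bills include many more line items:

    # Hypothetical rates -- placeholders, not real AWS/Azure/GCP pricing.
    COMPUTE_PER_HOUR = 0.10   # fee to use the service
    EGRESS_PER_GB = 0.09      # fee charged only on outbound traffic
    INGRESS_PER_GB = 0.00     # inbound traffic is typically free

    def monthly_bill(instance_hours, gb_out, gb_in):
        """Usage fee plus direction-dependent bandwidth fees."""
        return (instance_hours * COMPUTE_PER_HOUR
                + gb_out * EGRESS_PER_GB
                + gb_in * INGRESS_PER_GB)

    # One instance running all month (~730 hours), pushing 500 GB out
    # while ingesting 2 TB for free:
    print(f"${monthly_bill(730, 500, 2000):.2f}")  # $118.00

And that’s before choosing among on-demand, reserved or spot pricing for the compute portion.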

Per-Second Billing. Who needs it?

Recently, AWS made some noise by announcing that their EC2 (compute) and EBS (local storage) services would have the option for per-second billing. Google Cloud quickly followed along by announcing similar pricing options, as well as reminding the market that some of their existing Big Data services were already billed at this granular level.

The initial reaction from the mainstream part of the market was similar to the infamous 8-minute abs vs. 7-minute abs debate. They asked who really needed per-second billing when there was already per-hour billing. Per-hour already feels very granular, especially in a market where the majority of companies buy computing in 3-5 year intervals.

But then we look at the rise of Serverless computing, with its per-second billing for millions of transactions. The framework for these types of applications is already in place, albeit only used by a small fraction of the market.

And then think about all the short-run batch jobs that take place in the evenings or off-hours. Somebody has probably been looking at their cost spreadsheet from AWS and noticing that many of those runs lasted less than an hour, and yet they were still being billed for the full hour of EC2 or EBS usage. There is an opportunity to save money for those customers that have gotten sophisticated about their usage patterns.
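
The potential savings are easy to quantify. In the hedged Python sketch below, the $0.10/hour rate is a made-up placeholder rather than a real EC2 price, and real offerings may impose minimum billing increments that are ignored here:

    RATE_PER_HOUR = 0.10  # hypothetical on-demand rate

    def cost_hourly_billing(seconds):
        """Per-hour billing: every started hour is charged in full."""
        hours_billed = -(-seconds // 3600)  # ceiling division
        return hours_billed * RATE_PER_HOUR

    def cost_per_second_billing(seconds):
        """Per-second billing: pay only for time actually used."""
        return seconds / 3600 * RATE_PER_HOUR

    # A nightly batch job that runs for 7 minutes:
    job_seconds = 7 * 60
    print(cost_hourly_billing(job_seconds))                # 0.1 (a full hour)
    print(round(cost_per_second_billing(job_seconds), 4))  # 0.0117

Run hundreds of those short jobs a night and the difference between the two models stops being a rounding error.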

So what’s next?

Maybe the end game is just very granular billing models, where the cloud providers can incentivize additional usage by bundling a certain amount for free each month. They already do this for a number of services. Maybe the serverless model has unlocked a new psychological threshold around cost modeling that AWS believes is the next frontier, similar to how we no longer think about the annual cost of Amazon Prime; we just enjoy getting FREE shipping with each order.

But maybe there is something else on the horizon. Maybe there are more bundled offerings coming. Maybe there will be an “unlimited” plan for enterprise IT, similar to how many Enterprises currently buy ELAs (Enterprise License Agreements) today. The early approaches to all-you-can-eat cloud have had mixed reviews (and some failures), but failures have never stopped AWS in the past (remember the Fire Phone, before the Echo/Alexa?). Most CFOs would love some level of certainty around IT spending, so maybe the next frontier is to just buy blocks of “any cloud service”, with some concept of “unlimited” usage.

In the bigger picture of things, the public cloud has already attacked “maintenance is hard”. Maybe the next item on their attack list is “cost modeling is hard”.


