From Silos to Services: Cloud Computing for the Enterprise

Jan 8 2018   9:45AM GMT

Understanding Service Meshes for Microservices

Brian Gracely

Tags:
containers
Kubernetes
Load balancing
Microservices
Proxy

One of the most popular topics coming out of the CNCF’s KubeCon event in Austin was the concept of a “Service Mesh”.

There were a number of great sessions (videos) at KubeCon about Service Mesh technologies (including Istio, Envoy, Linkerd and Conduit).

This week we discussed the basics of Service Meshes on the PodCTL podcast, and I’ve previously discussed Istio and Linkerd on The Cloudcast.

What is a Service Mesh?

If you look at the origin of the service mesh projects that have emerged over the last year, most of them began out of necessity in the webscale world. Linkerd was created by engineers who had worked at Twitter (and have since founded Buoyant, which also created the Conduit project). Envoy was created by the engineering team at Lyft. And Istio started as a project at IBM, but has since seen large contributions from Google, Lyft, Red Hat and many others.

In its most basic definition, a service mesh is application-layer routing technology for microservice-centric applications. It provides a very granular way to route, proxy and load-balance traffic at the application layer. It also provides the foundation for application-framework-independent routing logic that can be used (at a platform layer) by any microservice. This article from the Lyft engineering team does an excellent job of going in-depth on the basic use cases and traffic flows where microservices might benefit from having a service mesh, as opposed to just using the native (L2-L4) routing from a CaaS or PaaS platform plus application-specific logic.
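To make the application-layer routing idea concrete, here is a minimal sketch in Go (not Envoy, Istio or Linkerd code, just an illustration of the concept) of the kind of L7 decision a sidecar proxy makes: it inspects each incoming request and forwards it to one of several backends using a weighted split, with a header-based override for testing. The service names, port numbers and weights are hypothetical.

```go
// Minimal sketch of L7 routing in a sidecar-style proxy: weighted traffic
// splitting between two versions of a service, plus a header override.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// weightedBackend pairs a service instance with a traffic weight,
// mimicking a canary-style split (e.g. 90% v1, 10% v2).
type weightedBackend struct {
	target *url.URL
	weight int
}

// pick chooses a backend in proportion to its weight.
func pick(backends []weightedBackend) *url.URL {
	total := 0
	for _, b := range backends {
		total += b.weight
	}
	n := rand.Intn(total)
	for _, b := range backends {
		if n < b.weight {
			return b.target
		}
		n -= b.weight
	}
	return backends[0].target
}

func main() {
	// Hypothetical service addresses for two versions of the same microservice.
	v1, _ := url.Parse("http://reviews-v1:9080")
	v2, _ := url.Parse("http://reviews-v2:9080")
	backends := []weightedBackend{{v1, 90}, {v2, 10}}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Route on L7 metadata: force the canary when a test header is set,
		// otherwise apply the weighted split.
		target := pick(backends)
		if r.Header.Get("x-canary") == "true" {
			target = v2
		}
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In a real mesh, this logic lives in the proxy and is driven by declarative routing rules rather than being compiled into each service.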

Why are Service Meshes now gaining attention?

The biggest reason that we’re now hearing about Service Meshes is the broader adoption of microservice architectures for applications. As more microservices are deployed, it becomes more complicated to route traffic between them, discover new services, and instrument various types of operational tooling (e.g. tracing, monitoring, etc.). In addition, some companies wanted to remove the burden of certain application functionality from their application code (e.g. circuit breakers, various types of A/B or Canary deployments, etc.), and Service Meshes can begin to move that functionality out of the application and into platform-level capabilities. In the past, capabilities like the Netflix OSS services were language-specific (e.g. Java), which allowed teams to get similar functionality, but only if they were writing applications in the same language. As more types of applications emerge (e.g. mobile, analytics, real-time streaming, serverless, etc.), language-independent approaches become increasingly desirable.
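As a rough illustration of what gets moved out of the application, here is a sketch of the kind of hand-rolled circuit breaker teams used to embed in each service (in the spirit of Netflix OSS Hystrix for Java); the thresholds and the failing dependency call are hypothetical. With a service mesh, equivalent fail-fast behavior can be configured once at the platform layer instead of being re-implemented in every language.

```go
// Sketch of a circuit breaker embedded in application code: after too many
// consecutive failures the circuit "opens" and calls fail fast for a cooldown
// period, protecting the caller from a struggling downstream service.
package main

import (
	"errors"
	"fmt"
	"time"
)

type breaker struct {
	failures    int           // consecutive failures seen
	maxFailures int           // trip threshold
	openUntil   time.Time     // while in the future, fail fast
	cooldown    time.Duration // how long the circuit stays open
}

var errOpen = errors.New("circuit open: failing fast")

// call runs fn unless the breaker is open, and trips it after too many failures.
func (b *breaker) call(fn func() error) error {
	if time.Now().Before(b.openUntil) {
		return errOpen
	}
	if err := fn(); err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openUntil = time.Now().Add(b.cooldown)
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &breaker{maxFailures: 3, cooldown: 5 * time.Second}
	flaky := func() error { return errors.New("upstream timeout") } // hypothetical dependency call
	for i := 0; i < 5; i++ {
		fmt.Println(b.call(flaky))
	}
}
```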

Want to Get Started with Service Meshes?

Consider working through the tutorial for Istio on Kubernetes over on Katacoda. Also consider listening to these webinars about how to get Istio working on OpenShift (here, here).
