This is a special guest post for Computer Weekly written by Lars Herrmann, GM of the integrated solutions business unit at Red Hat. Herrmann writes specifically for Open Source Insider to detail the six most common misconceptions that have arisen surrounding the subject of ‘container’ technologies.
Containers, as we know by now, are best described as independently deployable chunks of software application code (in the form of discrete components of application logic) that can be used to build wider (very often Agile) applications. Containers are ‘intelligent’ enough to request the application resources they need in order to function, which they do through Application Programming Interfaces (APIs).
Red Hat’s Herrmann writes from this point:
1 — Containers are exciting, but only in cloud-native application development
While much of the buzz and early adoption of containers has centred on developers using them to build cloud-native applications, the benefits and use cases of containers reach far beyond that. Containers provide a practical path for an organisation to adopt the hottest macro-trends such as hybrid cloud, DevOps and microservices. The combination of a general-purpose OS technology with built-in abstraction, automation and separation of concerns, baked into a set of prescriptive workflows for building, deploying, running and managing applications and services, forms a new operational model that allows enterprise IT to realise certain business benefits: increased agility, efficiency and innovation across a broad range of applications and environments. It also defines a technology system around which organisations can build processes and structure, helping to overcome the complex inter-human interactions that prevent these benefits today.
2 — Container technology is ‘new’
Containers are often perceived to be a new technology. True, many of their use cases are only emerging now, but most of the technologies inherent to Linux containers have been around for years and have provided the foundation of many first-generation PaaS offerings. The new part is the ability to run and manage a broad set of applications, from cloud-native microservices to traditional applications, with an image-based delivery model.
Equally, the idea of sharing an operating system instance by isolating different parts of an application is not a new concept. Solutions have been available for splitting up and dedicating system resources efficiently for some time now.
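Those long-standing building blocks are visible on any Linux machine today, container runtime or not: kernel namespaces and control groups (cgroups) have been part of mainline Linux for years, and they are what modern container runtimes build on. A quick sketch, assuming a Linux host with /proc mounted:

```shell
# Kernel namespaces: every process, containerised or not, already
# belongs to one namespace of each type the kernel supports
# (pid, net, mnt, uts, ipc and so on).
ls /proc/self/ns

# Control groups: the kernel's resource-accounting mechanism that
# container runtimes use to limit CPU and memory per container.
# This shows which cgroups the current shell already belongs to.
cat /proc/self/cgroup
```

A container runtime essentially combines these existing primitives with an image format and a management workflow, which is why the underlying technology predates the current wave of adoption.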
3 — Containers can/will replace virtual machines (VMs)
Containers aren’t the same as VMs and therefore cannot replace them entirely, largely because virtualisation and containerisation address different problems: virtualisation provides independence from the underlying hardware, while containers provide speed and agility through lightweight application packaging and isolation.
Also, some enterprise workloads lend themselves to running as containers, while others are better served by the hardware abstraction provided by VMs.
For these reasons, we like to think of container technology as a complementary solution to VMs, rather than an out-and-out replacement.
4 — Containers are just that, self-contained
Contrary to what the name suggests, containers aren’t completely self-contained. Each individual container leverages the same host operating system and its services. The upshot is that businesses can greatly reduce overheads and improve performance; the downside is that this sharing creates potential security and interoperability issues, which leads us nicely on to the next misconception.
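The shared-kernel point is easy to verify: a process dropped into fresh namespaces, which is essentially what a container runtime does, still reports the host's kernel rather than bringing its own. A minimal sketch using util-linux's unshare, with a fallback for hosts where unprivileged user namespaces are disabled:

```shell
# The kernel release as seen by an ordinary host process.
host_kernel="$(uname -r)"
echo "Host kernel:     $host_kernel"

# The same query from inside a fresh user namespace: the answer is
# identical, because the namespaced process still runs on the host
# kernel. (--map-root-user allows this without root on kernels with
# user namespaces enabled; the fallback covers hosts where the call
# is blocked.)
ns_kernel="$(unshare --user --map-root-user uname -r 2>/dev/null || uname -r)"
echo "Namespaced view: $ns_kernel"
```

Because every container leans on the one host kernel, a kernel-level fault or compromise affects all containers on the machine, which is exactly why the security question in the next section arises.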
5 — Containers are watertight in terms of security
Linux containers can rely on a very secure foundation: Linux. Because containers share a host OS, with all resources therefore managed by that OS, security needs to be addressed differently than with VMs. There are two entities that need to be secured: the OS running the containers, which might itself run in a VM, and the software payload of each individual container.
Out of the box, Linux offers technologies to isolate containers, such as process isolation and namespaces. However, despite their effectiveness, they cannot shut down every route malicious code could take to reach other containers in the same environment. Additional layers of security are necessary to create a completely locked-down environment, such as SELinux, which provides military-grade security by enforcing mandatory access control policies.
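On a host with SELinux, the enforcement status and the per-file labels that mandatory access control policies act on can be inspected directly. The sketch below degrades gracefully on hosts without the SELinux userspace tools installed:

```shell
# Report whether SELinux is present and enforcing on this host.
# getenforce ships with the SELinux userspace; the fallback covers
# hosts without it.
if command -v getenforce >/dev/null 2>&1; then
    getenforce    # prints Enforcing, Permissive or Disabled
else
    echo "SELinux tools not installed on this host"
fi

# On an SELinux host, files and processes carry security labels that
# the policy is enforced against; -Z shows a file's label (a plain
# '?' appears where no SELinux context is available).
ls -Z /etc/hostname 2>/dev/null || true
```

Container runtimes on SELinux-enabled distributions assign each container its own label, so even a process that escapes namespace isolation is still confined by the mandatory access control policy.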
Often overlooked, the container payloads carry most of the security risk in a containerised environment, driven by usage patterns that allow development teams to define what goes into these containers and when and how they change. Industry best practice is to run only trusted components inside a container, complemented by scanning techniques that create actionable insight into potential security risks such as viruses, known vulnerabilities, or weak configurations and default settings.
6 — Containers will be universally portable
This isn’t the case… yet. For containers to be truly portable, there needs to be an integrated application delivery platform built on open standards that provides consistent execution across different environments. Containers rely on the host OS and its services for compute, network, storage and management, across physical hardware, hypervisors, private clouds and public clouds. The ecosystem is key here: there need to be industry standards for image format, runtime and distribution before universal portability becomes possible.
The industry and the relevant communities recognise this need and have formed bodies to define and evolve these standards, such as the Open Container Initiative and the Cloud Native Computing Foundation.
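The image-format standard maintained by the Open Container Initiative is a concrete example of such a standard: an OCI image manifest is a small JSON document that any compliant runtime can consume. In the sketch below, the mediaType values come from the published OCI image specification, while the digest and size values are illustrative placeholders only:

```shell
# An illustrative OCI image manifest. The mediaType strings are the
# ones defined by the OCI image specification; the digests and sizes
# here are placeholders, not a real image.
manifest='{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 2811478
    }
  ]
}'
printf '%s\n' "$manifest"
```

Because the manifest, config and layer formats are standardised, an image built by one vendor's tooling can, in principle, be pulled and run by another's, which is the portability the ecosystem is working towards.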