How to speak like a data center geek: Software containers
Thu, 29th Jun 2017

We're tackling software containers in this entry of our long-running “How to Speak Like a Data Center Geek” series because containers are huge right now.

Why?

They enable app development and operations at a level of cost-efficiency, scalability and optimization that's downright revolutionary.

Container technology was first introduced more than 16 years ago, and IT departments have enthusiastically embraced it ever since.

Given the rise of more contemporary container platforms from Docker, CoreOS, and public cloud providers such as AWS, Google and Microsoft, it seems like this is a technology that can't be contained. (My apologies).

Let's start at the ground floor.

Containers

Containers exist to solve a problem: developers needed applications to run reliably when moved between systems and computing environments, in the cloud and elsewhere.

But differences between those environments, in supporting software, security policies and network interfaces, made that a tricky proposition. Containers solve it by isolating an application in its own runtime environment, packaged with everything needed to run it, all in one portable unit.
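To make that concrete, here's a minimal sketch using the Docker SDK for Python. It assumes Docker and the "docker" Python package are installed, and the image and command are purely illustrative:

    import docker  # pip install docker

    client = docker.from_env()

    # The image bundles the app plus its runtime dependencies; the same
    # package runs unchanged on a laptop, a server or a cloud host.
    output = client.containers.run(
        "python:3-slim",
        ["python", "-c", "print('hello from an isolated runtime')"],
        remove=True,  # clean up the container after it exits
    )
    print(output.decode())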

Kernel

A kernel is the computer program at the core of an operating system (OS), and it has complete control over every function of the OS. Containers are often called lightweight because they don't need a full OS or a virtual copy of the host server's hardware; instead, they share the kernel of the host OS.

This minimal use of resources lets a single server host many more containers, increasing utilization and efficiency. And because containers are portable, developers can move them between hosts and run the same app on any server with a compatible kernel.
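You can see that kernel sharing for yourself with a rough sketch like the one below; it assumes a Linux host with Docker and the Python "docker" package (on macOS or Windows, containers actually share the kernel of a hidden Linux VM, so the comparison is against that VM's kernel, not the host's).

    import platform
    import docker

    client = docker.from_env()

    host_kernel = platform.release()  # kernel release of the host OS
    container_kernel = (
        client.containers.run("alpine", ["uname", "-r"], remove=True)
        .decode()
        .strip()
    )

    # On a Linux host both values match: the container has no kernel of
    # its own, it shares the host's.
    print(host_kernel, container_kernel, host_kernel == container_kernel)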

Orchestration

The word “orchestration” invites musical analogies, so we'll define orchestration as the way individual containers are deployed and managed so they can function in harmony.

Accordingly, orchestration software (e.g., Kubernetes, Docker Swarm) simplifies and systematizes the deployment of containers to deliver a desired set of functions, such as network routing.

Kubernetes, for instance, clusters the containers that are the building blocks of a given application into logical units that are easier to find and manage. Most public cloud providers offer cluster management and orchestration capabilities for Docker containers.
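Real orchestrators are far more sophisticated, but the heart of the idea is a reconciliation loop: compare the actual state to the desired state and converge the two. Here's a toy sketch with the Python "docker" package; the image, label and replica count are made up for illustration.

    import docker

    client = docker.from_env()

    DESIRED_REPLICAS = 3           # desired state
    LABELS = {"app": "demo"}       # hypothetical label marking our "service"

    def reconcile():
        # Actual state: running containers carrying our label
        running = client.containers.list(filters={"label": "app=demo"})
        diff = DESIRED_REPLICAS - len(running)
        for _ in range(max(0, diff)):       # too few: scale up
            client.containers.run("nginx:alpine", detach=True, labels=LABELS)
        for c in running[: max(0, -diff)]:  # too many: scale down
            c.remove(force=True)

    reconcile()  # an orchestrator runs this kind of loop continuously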

Virtual Machines

We include virtual machines (VMs) because they are similar enough to containers to invite frequent comparison, and the differences help define containers more sharply. VMs and containers are both ways to deploy a variety of discrete applications on a single hardware host, but they go about it differently.

A VM is an operating system or application environment installed on a hypervisor, software that emulates physical hardware.

Like containers, VMs are isolated from each other and use hardware resources more efficiently than dedicated physical servers, making them ideal for testing software and porting apps to other operating systems. But each VM includes an entire operating system, so it consumes far more resources than a container.

For instance, a container might be 10 megabytes in size, while a VM can be several gigabytes. Containers also start up much more quickly, which can make infrastructures that use them more responsive and flexible.
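You can get a feel for that startup speed with a quick, unscientific timing sketch; it assumes Docker, the Python "docker" package and a locally cached alpine image.

    import time
    import docker

    client = docker.from_env()

    start = time.perf_counter()
    client.containers.run("alpine", ["true"], remove=True)  # create, start, run, exit, remove
    elapsed = time.perf_counter() - start

    # Typically a second or two once the image is cached; booting a full
    # VM with its own OS usually takes far longer.
    print(f"container round trip: {elapsed:.2f}s")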

Direct and secure interconnection is a huge advantage for any dynamic technology like containers and their orchestration tools, especially when they are accessed as cloud services and integrate latency-sensitive software components.

This is what an Interconnection Oriented Architecture (IOA) strategy can deliver: proximate, low-latency virtualized or physical connections to cloud-based container and orchestration services.

Article by Jim Poole, Equinix Blog Network