How the evolution of containers is changing app development
Wed, 5th Apr 2017

The operating system is the mothership for every application hosted across the web. The kernel, the heart of the operating system, has gained a rich set of features over the years and plays a vital role in managing processes and keeping running applications isolated from one another.

Isolation and scalability are two driving factors that keep DevOps teams busy in today's multi-cloud world. And though application isolation is nothing new in the IT landscape, it is the precursor to today's container technologies, and it has become a standard industry-wide practice for building scalable, self-recovering microservices and micro-applications.

Evolution of containers from process isolation (1979 – 2014):

Running every process in isolation, with its own resources, is best achieved at the operating system level. The first such isolation was achieved using "chroot".

The chroot system call was introduced in 1979 in Unix V7 to change the root directory of a process and its children to a new location in the file system.
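To illustrate the primitive, here is a minimal C sketch (not from the original article) of how a process can confine itself with chroot. It assumes root privileges and a prepared directory such as /srv/jail (a hypothetical path) containing whatever binaries the confined process needs:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* /srv/jail is a hypothetical directory prepared with the files
       the confined process needs (for example, a static shell). */
    if (chroot("/srv/jail") != 0) {
        perror("chroot");        /* requires root privileges */
        return EXIT_FAILURE;
    }
    if (chdir("/") != 0) {       /* step into the new root */
        perror("chdir");
        return EXIT_FAILURE;
    }

    /* From here on, "/" refers to /srv/jail for this process and
       its children; paths outside it are unreachable. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");             /* only reached if exec fails */
    return EXIT_FAILURE;
}

Because the process and its children can no longer resolve paths outside the new root, everything outside the jail directory becomes unreachable to them.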

This was the first step towards process isolation, and it was adopted by BSD in 1982. In 2000, FreeBSD introduced jails, an advanced form of chroot that separates services and processes from one another, enhancing security and easing administration.

Application isolation gained broader adoption when Sun Microsystems created Solaris Zones, shipped with the Solaris 10 OS in 2004. A zone is a virtualized operating system environment created within a single instance of the operating system, and it can leverage ZFS data-replication features such as snapshots and cloning.

Oracle Solaris Zones are still widely used today for scalability, security and process isolation. Solaris Zones were followed by OpenVZ, a kernel-level isolation technology for Linux that ships as a set of patches to the Linux kernel.

Along come containers

Building on this kernel-level work, Google designed "process containers" for limiting and isolating the resource usage (CPU, memory, disk I/O, network) of a collection of processes. Process containers were later renamed "cgroups" (control groups) and were merged into the Linux kernel from version 2.6.24 onward.
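As a rough sketch of the interface cgroups expose, the C snippet below creates a group, caps its memory and moves the calling process into it. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup, root privileges, and a hypothetical group name of "demo":

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Write a string to a cgroup control file. */
static int write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s", value);
    fclose(f);
    return 0;
}

int main(void)
{
    char pid[32];

    /* Create a new cgroup named "demo" (hypothetical name). */
    if (mkdir("/sys/fs/cgroup/demo", 0755) != 0)
        perror("mkdir");   /* may already exist */

    /* Limit the group to 256 MB of memory. */
    write_file("/sys/fs/cgroup/demo/memory.max", "268435456");

    /* Move this process into the group; the kernel now enforces the
       limit for it and for any children it forks. */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_file("/sys/fs/cgroup/demo/cgroup.procs", pid);

    return 0;
}

Container runtimes automate exactly this kind of bookkeeping, creating and tearing down groups as containers start and stop.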

The first full Linux container manager, LXC (Linux Containers), was implemented in 2008 using cgroups and Linux namespaces. LXC is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a control host that shares a single Linux kernel.
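The namespace half of that combination can be seen in a short C sketch (again, not LXC's own code): it uses clone() to start a shell in new UTS, PID and mount namespaces, assuming a Linux host and root privileges:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)
static char child_stack[STACK_SIZE];

/* Entry point of the "contained" process: it gets its own hostname,
   its own PID numbering (it sees itself as PID 1) and its own
   mount table. */
static int child_main(void *arg)
{
    (void)arg;
    sethostname("container", 9);
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}

int main(void)
{
    /* New UTS, PID and mount namespaces for the child. */
    int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;

    pid_t child = clone(child_main, child_stack + STACK_SIZE, flags, NULL);
    if (child == -1) {
        perror("clone");    /* typically requires root privileges */
        return EXIT_FAILURE;
    }

    waitpid(child, NULL, 0);
    return 0;
}

Inside the child, the shell believes it is PID 1 with its own hostname and mount table, which is essentially what a container sees; LXC, and later Docker, wrap these namespaces together with cgroups, an isolated filesystem and networking.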

Docker used LXC in its initial releases and later replaced it with its own container management library, libcontainer, which has since been contributed to the Open Container Initiative (OCI).

Libcontainer provides a native implementation for creating containers with namespaces, cgroups, capabilities and file system access controls. It lets you manage the lifecycle of a container by performing additional operations after the container has been created.

Docker spearheaded its offerings by creating an entire ecosystem for container management. Today, Docker is changing DevOps practices as well as application development and lifecycle management, and it is smoothing the way for enterprises to adopt microservice-based, scalable architectures.

In addition, Oracle Solaris and the Docker Engine take advantage of the proven Solaris Zones technology and native ZFS support to enable a truly enterprise-class solution for containers.

The integration will enable enterprise customers to use the Docker open platform to easily distribute applications built and deployed in Oracle Solaris Zones.

Container technologies have been revolutionary in enabling fast application migration to the cloud. They also allow applications and processes to be deployed consistently across multiple clouds, which fosters faster and more confident enterprise adoption of cloud services.

Application isolation fosters microservices

Application isolation plays an integral role in the world of microservices, and both microservices and micro-apps benefit greatly from it.

One of the primary goals of moving to a microservices architecture is to be able to deploy changes to one microservice or feature without affecting another. If a microservice fails, it does not bring down the entire application, only the feature it provides.

This is achieved by deploying each microservice in complete isolation, whether in a virtual machine, on a bare-metal server, in the cloud or in a container.

Isolation also helps with dependency management, control, privilege separation, compliance, recovery and compatibility, and it makes it easier to upgrade or migrate to any technology stack.

The security benefits of application isolation

Using separate browsers for different sites provides entry-point restriction and state isolation. When these concepts are combined into a general application isolation mechanism, they can deliver stronger security.

High-volume transaction and other high-value applications can opt in to application isolation to gain defenses against different attack patterns. The open source Chromium browser has been used to implement application isolation and to verify its security properties using finite-state model checking.

Researchers have also measured the performance overhead of app isolation and conducted a large-scale study to evaluate its adoption complexity for various types of sites, demonstrating that application isolation mechanisms are suitable for protecting a number of high-value web applications.

Our use of application isolation, containers and microservices continues to evolve at Equinix to streamline and scale the development of our interconnection solutions and colocation data center monitoring services, such as IBX SmartView, for our more than 8,000 customers.

Article by Balasubramaniyan Kannan and Kavitha Jeyaraj, Equinix blog network