
How the evolution of containers is changing app development

05 Apr 17

The operating system is the mothership for every application hosted across the web. The kernel, the heart of an operating system, has matured significantly over the years, and it plays a vital role in managing processes and maintaining the state of running applications in isolation.

Isolation and scalability are two driving factors that keep DevOps teams busy in today’s multi-cloud world. And though the concept of application isolation is not new to the IT landscape, it is the precursor to today’s container technologies, and it has become a mandatory industry-wide practice for developing scalable, auto-recoverable microservices and micro-applications.

Evolution of containers from process isolation (1979 – 2014)

Running every process in isolation with its own resources is best achieved at the operating system level. The first such isolation was achieved using “chroot.”

The chroot system call was introduced in 1979 in Unix V7 to change the root directory of a process and its children to a new location in the file system.

This was the first achievement in process isolation. It was then absorbed into BSD in 1982. In 2000, FreeBSD introduced jails, an advanced version of chroot that separated services and processes from one another, enhancing security and easing administration.
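To make the idea concrete, here is a minimal sketch (Go is used here purely for illustration) of what chroot-based confinement looks like on Linux. The directory /srv/jail is a hypothetical, pre-populated root filesystem, and the program assumes it is run with root privileges.

```go
// Minimal chroot sketch: confine this process to a new root directory.
// Assumes /srv/jail exists and contains a usable filesystem (hypothetical),
// and that the program runs as root.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	newRoot := "/srv/jail" // illustrative path to a prepared mini root filesystem

	// Change the root directory of this process; paths outside newRoot
	// become unreachable for it and for any children it spawns.
	if err := syscall.Chroot(newRoot); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed:", err)
		os.Exit(1)
	}

	// Move the working directory inside the new root so relative paths resolve there.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}

	// From this process's point of view, "/" is now /srv/jail on the host.
	entries, err := os.ReadDir("/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir failed:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```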

Application isolation gained much wider adoption when Sun Microsystems created Solaris Zones, which shipped with the Solaris 10 OS in 2004. A zone is a virtualized operating system environment created within a single instance of an operating system, able to leverage data replication features such as snapshots and cloning from the ZFS file system.

Oracle Solaris Zones are still widely used today for scalability, security and process isolation. Zones were followed by OpenVZ, an operating system-level virtualization technology for Linux built on a modified Linux kernel.

Along come containers

Building on the Linux kernel, Google designed “Process Containers” for limiting and isolating the resource usage (CPU, memory, disk I/O, network) of a collection of processes. Google’s process container concept was later renamed “cgroups” (control groups) and was merged into the Linux kernel from version 2.6.24 onward.
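As a rough illustration of how cgroups are driven in practice, the sketch below uses the modern cgroup v2 filesystem interface (rather than the original v1 interface that shipped with 2.6.24) to cap the memory of the current process. The group name “demo” and the 256 MiB limit are arbitrary, and the code assumes cgroup v2 is mounted at /sys/fs/cgroup and root privileges.

```go
// Hedged sketch: place the current process under a cgroup v2 memory limit.
// Assumes a modern Linux with cgroup v2 mounted at /sys/fs/cgroup and root
// privileges; the group name "demo" and the limit value are illustrative.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	group := "/sys/fs/cgroup/demo"

	// Create the control group; the kernel populates its interface files.
	if err := os.MkdirAll(group, 0o755); err != nil {
		panic(err)
	}

	// Cap memory usage for every member of the group at 256 MiB.
	if err := os.WriteFile(filepath.Join(group, "memory.max"), []byte("268435456"), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the group; its children inherit the limit.
	pid := []byte(fmt.Sprintf("%d", os.Getpid()))
	if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), pid, 0o644); err != nil {
		panic(err)
	}

	fmt.Println("this process now runs under a 256 MiB memory cap")
}
```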

The first full Linux container manager, LXC (Linux Containers), was implemented in 2008 using cgroups and Linux namespaces. LXC is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a control host that share a single Linux kernel.
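Namespaces are the other half of that recipe. The hedged sketch below (Linux-only, root required, and not LXC’s actual implementation) starts a shell inside fresh UTS, PID and mount namespaces; combining this kind of namespace setup with cgroup limits like the ones above is, in essence, what LXC automates.

```go
// Rough namespace sketch: run a shell in its own UTS, PID and mount
// namespaces so it gets a private hostname, process tree and mount table.
// Linux-only, requires root; this is an illustration, not how LXC is built.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel for fresh namespaces when it clones the child process.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // private hostname
			syscall.CLONE_NEWPID | // private process IDs (the shell becomes PID 1)
			syscall.CLONE_NEWNS, // private mount table
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```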

Docker used LXC in its initial stages and later replaced it with its own container management library, libcontainer, which has since moved under the Open Container Initiative (OCI).

Libcontainer provides a native implementation for creating containers with namespaces, Cgroups, capabilities and file system access controls. It allows you to manage the lifecycle of the container by performing additional operations after the container is created.

Docker spearheaded its offerings by creating an entire ecosystem for container management. Today, Docker is changing DevOps practices, along with application development and lifecycle management, and is smoothing the way for enterprises to adopt microservices and scalable architectures.

In addition, Oracle Solaris and the Docker Engine take advantage of the proven Solaris Zones technology and native ZFS support to enable a truly enterprise-class solution for containers.

The integration will enable enterprise customers to use the Docker open platform to easily distribute applications built and deployed in Oracle Solaris Zones.

Container technologies have been revolutionary in their ability to enable fast application migration to the cloud. They also allow applications and processes to be deployed consistently across multiple clouds, which fosters faster and more confident enterprise adoption of cloud services.

Application isolation fosters microservices

Application isolation plays an integral role in the world of microservices, and both microservices and micro-apps benefit greatly from it.

One of the primary goals of moving to a microservices architecture is to be able to deploy changes to one microservice or feature without affecting another. If a microservice fails, it will not bring down the entire application, only the feature that microservice provides.

This is achieved by deploying each microservice in complete isolation within a virtual machine, a bare-metal server, a cloud or a container.
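As a purely hypothetical sketch of what that looks like with containers, the snippet below shells out to the Docker CLI to run one microservice in its own container with resource caps and an automatic restart policy. The image name orders-svc:1.0 and the limit values are invented; the point is that a crash or memory leak in this service stays contained to this single container.

```go
// Hypothetical sketch: run one microservice in its own container with
// resource limits and an automatic restart policy via the Docker CLI.
// The image name and limit values are illustrative, not from the article.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "run",
		"-d",                      // run detached in the background
		"--name", "orders",        // one container per microservice
		"--memory", "256m",        // a memory leak cannot starve neighbouring services
		"--cpus", "0.5",           // cap CPU for the same reason
		"--restart", "on-failure", // recover this service without touching the others
		"orders-svc:1.0")          // hypothetical microservice image
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```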

Isolation also helps teams efficiently manage dependencies, control, privilege separation, compliance, recovery and compatibility, and it preserves the ability to upgrade or migrate to any technology stack.

The security benefits of application isolation

Using separate browsers for different sites helps with entry-point restriction and state isolation. When these two concepts are combined into a general application isolation mechanism, they can provide stronger security guarantees.

High-volume transactional and other high-value applications can opt in to application isolation to gain defenses against different attack patterns. Researchers have implemented application isolation in the open source Chromium browser and verified its security properties using finite-state model checking.

They have also measured the performance overhead of app isolation and conducted a large-scale study of its adoption complexity for various types of sites, demonstrating that application isolation mechanisms are well suited to protecting a number of high-value web applications.

Our use of application isolation, containers and microservices continues to evolve at Equinix to streamline and scale the development of our interconnection solutions and colocation data center monitoring services, such as IBX SmartView, for our more than 8,000 customers.

Article by Balasubramaniyan Kannan and Kavitha Jeyaraj, Equinix blog network
