
The rise of Container-as-a-Service for faster app delivery

10 Jan 2017

Building and evolving a pervasive, global, digital IT delivery service demands a multi-disciplined approach that balances business requirements against service availability, low latency, data replication, and compute capacity and efficiency.

Companies also need to consider globalization and delve deeper into the architectural patterns that enable seamless multi-cloud, multi-region traffic management, fast and reliable data and application deployment propagation, and an efficient IT service infrastructure.

The ongoing shift in application hosting toward containerization technologies underlines the need for every software company to have a flexible digital platform to manage its application deployment and load distribution.

“Containerization” is an OS-level virtualization method for deploying and running distributed applications without launching an entire virtual machine for each application. Instead, multiple isolated systems (“containers”) run on a single control host and access a single kernel.

Containers hold the components necessary to run the desired software, such as application files, environment variables and libraries. Because the host OS also constrains each container’s access to physical resources (e.g., CPU and memory), a single container cannot consume all of a host’s physical resources.
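As a concrete illustration of the packaging described above, a minimal Dockerfile bundles an application’s files, environment variables and library dependencies into one image (the service, file names and base image here are illustrative assumptions, not from the article):

```dockerfile
# Hypothetical Python web service; names are illustrative only.
FROM python:3.6-slim
WORKDIR /app
# Libraries the application depends on travel inside the image
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application files and environment variables are part of the container
COPY app.py .
ENV APP_ENV=production
CMD ["python", "app.py"]
```

The host can then cap the container’s share of physical resources at launch, e.g. `docker run --memory=256m --cpus=1 my-service`, so that no single container can exhaust the host’s CPU or memory.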

Containers enable software teams within enterprise or start-up organizations to develop and deploy digital business needs faster, which in turn enables Software-as-a-Service (SaaS) solutions at scale.

With the rise of containers, there has been an increased need for a unified platform to manage the deployment, scalability, resilience, fault tolerance, service registration and service discovery of hosted containers, particularly in the cloud. Containers-as-a-Service (CaaS) solutions, such as Red Hat OpenShift, Apcera and Google Container Engine, enable engineers to quickly build and deploy applications and support the unprecedented scaling of application workloads.

What is CaaS?

Containers-as-a-Service (CaaS) is a form of container-based virtualization in which container engines, orchestration and the underlying compute resources are delivered to users as a service from a cloud provider.

CaaS offerings provide users with the agility required for architecting solutions using containers and enable DevOps teams to automate the “CHECKIN to GO-LIVE” process for any containerized application, which significantly reduces the “TIME to DEPLOY,” as well as the “TIME to GO-LIVE” into production.
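One hedged sketch of what automating that “CHECKIN to GO-LIVE” flow can look like is a CI/CD pipeline that builds a container image on every check-in and promotes it to production; the pipeline below is a hypothetical GitLab-CI-style configuration, and the registry, app and stage names are illustrative assumptions:

```yaml
# Hypothetical pipeline: check-in -> build image -> deploy to production.
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # Tag the image with the commit SHA so every check-in is traceable
    - docker build -t registry.example.com/my-app:$CI_COMMIT_SHA .
    - docker push registry.example.com/my-app:$CI_COMMIT_SHA

deploy-production:
  stage: deploy
  script:
    # Roll the newly built image out to the container platform
    - kubectl set image deployment/my-app my-app=registry.example.com/my-app:$CI_COMMIT_SHA
```

Because the whole path from source check-in to a running production container is scripted, “TIME to DEPLOY” and “TIME to GO-LIVE” shrink to the pipeline’s run time.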

Containers have created a great advantage for infrastructure and DevOps teams, enabling them to focus on keeping the digital IT platform “alive” and equipped with underlying hardware resources for hosted applications and platforms to scale.

Enterprise DevOps teams have multiple options here to manage and host the containerized software, including:

  • Building their own CaaS using tools like Deis, Flynn, Tsuru, Dawn and Octohost
  • Using “out-of-the-box” solutions like Red Hat OpenShift and Cloud Foundry

CaaS can enable technology teams to achieve a competitive advantage via their digital IT infrastructures by doing more with less and enabling applications to scale very quickly. Cluster management is another advantage of CaaS solutions, with some having built-in intelligence for zero-downtime upgrades using clusters of containers.
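The zero-downtime cluster upgrades mentioned above are typically expressed as a rolling update over a cluster of identical container replicas. A minimal Kubernetes Deployment sketch shows the idea (the app name, image and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # cluster of identical containers
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never take a serving container down early
      maxSurge: 1        # add one upgraded container at a time
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v2
```

Updating the image then replaces containers one at a time, while the remaining replicas keep serving traffic, so the upgrade is invisible to users.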

CaaS-Enabled Multi-Cloud Hosting

CaaS solutions can be built on top of IaaS platforms such as AWS, Microsoft Azure, Oracle Cloud Platform, Red Hat OpenStack and Docker Cloud. The scalability and resource “elasticity” of these solutions are achieved automatically, enabling technology teams to effectively plan and deploy their multi-cloud strategy. CaaS, alongside managing the containers, images, applications and scaling/fault-tolerance options, helps to eradicate the complexities of:

  • cross-cloud deployment
  • multi-cloud hosting and load balancing
  • instrumentation of distributed workloads

As workloads, data and processes shift across multiple on-premises and cloud services, there will be a need for a new approach toward managing multi-cloud deployments alongside intelligent tools and provisioning systems.

The enterprise will also require capacity management and cost control that will enable it to strategically allocate workloads for the best execution and management of business continuity. Containers of the future will be shared across hosts and will have clustered file systems, which will make storage the next big thing in the container ecosystem.

As with most digital technologies that require the integration of components, tools, workloads, and on-premises and cloud infrastructures, CaaS is dependent on direct and secure interconnection to achieve the IT transformation required to effectively and efficiently deploy multiple applications.

Article by Ramchandra Koty and Balasubramaniyan Kannan, Equinix blog network 
