
Colocation providers have options for integrating Open Compute Project data center architectures

14 Mar 2017

There’s been a lot of buzz on our blogs about the Open Compute Project (OCP) and how products that adhere to its design tenets can improve data center efficiency.

While I mentioned it in a recent post, I wanted to more clearly make the case for why colocation companies – in addition to Internet Giants – should pay close attention to OCP.

The topic came to mind as I viewed a recently released video of a talk given by Brice Martinot-Lagarde, Schneider Electric’s Global Solution Architect for Cloud & Service Providers, at the International Colocation Club 2016 event in Paris.

He described the potential benefits that OCP-inspired data center design can provide in terms of efficiency and flexibility.

A little background for folks not familiar with OCP: the Open Compute Project was officially launched in 2011 by companies that know a thing or two about building large data centers, including Facebook, Intel, Goldman Sachs and Rackspace, with Microsoft joining a few years later.

It started from the design work that went into Facebook’s groundbreaking Prineville, Oregon data center, as we covered in a post back in 2011.

OCP has come a long way since then, and now includes hundreds of active members contributing their expertise, including banks, telecom and IT equipment manufacturers, software vendors, and colocation providers.

Of particular interest to colo providers is the work OCP has done around simplifying data center power infrastructure. OCP server designs call for building redundancy into server power supplies and into the racks that house the servers.

That makes a shift in mindset possible: instead of supplying fully redundant facility power to every server power supply all of the time, operators can lean on the inherent redundancy of the server power supplies themselves at least some of the time.
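To make that redundancy arithmetic concrete, here is a minimal, illustrative sketch in Python. The availability figures and the two architectures being compared are placeholder assumptions for the sake of the example, not numbers from the talk or from Schneider Electric.

```python
# Illustrative only: the availability figures below are assumed placeholders,
# not figures from the talk or from any vendor.

def series(*availabilities):
    """Availability of components in series: every component must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities):
    """Availability of redundant components: at least one must be up."""
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)
    return 1.0 - all_down

UPS = 0.9999   # assumed availability of one UPS-fed power path
PSU = 0.999    # assumed availability of one server power supply

# Traditional approach: two independent UPS-fed paths, each feeding its own PSU.
traditional = parallel(series(UPS, PSU), series(UPS, PSU))

# OCP-style approach: a single upstream path, with redundancy pushed to paired PSUs.
ocp_style = series(UPS, parallel(PSU, PSU))

print(f"Two independent UPS paths:   {traditional:.6f}")
print(f"Single path, redundant PSUs: {ocp_style:.6f}")
```

The point of the sketch is not the exact numbers but the trade-off it exposes: pushing redundancy into the rack keeps availability high while eliminating a duplicated upstream power train.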

In his talk, Martinot-Lagarde walks through several potential architectures offering varying levels of power redundancy, highlighting how they can save money both up front and over time compared with traditional designs.

The more standard OCP design uses only a single upstream UPS instead of dual UPSs, along with a new rack design with a built-in UPS and reduced AC/DC conversions.

Additionally, fans are removed from individual servers in favor of fans on the rear door of the rack. “You have more efficiency and reliability because the fans are optimized according to the density of the rack,” he says.
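The efficiency argument comes down to multiplying fewer conversion-stage losses. The short sketch below illustrates that arithmetic; the stage efficiencies and the two chains compared are assumptions for illustration, not measured OCP or Schneider Electric data.

```python
# Illustrative only: stage efficiencies are assumed placeholders, not measured data.

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency of a power chain as the product of its stages."""
    total = 1.0
    for e in stage_efficiencies:
        total *= e
    return total

# Traditional chain: double-conversion UPS (AC->DC, DC->AC), then the server PSU (AC->DC).
traditional = chain_efficiency([0.96, 0.96, 0.94])

# OCP-style chain: fewer conversion stages, with the rack distributing power to the servers.
ocp_style = chain_efficiency([0.97, 0.95])

print(f"Traditional chain: {traditional:.1%} end-to-end")
print(f"OCP-style chain:   {ocp_style:.1%} end-to-end")
```

Even a few percentage points of end-to-end gain compound quickly at colocation scale, which is why removing conversion stages features so prominently in OCP-inspired designs.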

Another design, somewhat less aggressive in terms of the amount of equipment it eliminates, offers 2N redundancy through a hybrid architecture.

The simplified low-voltage power train, combined with the integrated OCP rack, gives colocation providers the flexibility to scale capacity as their business requirements grow without taking on the capex risk up front.

Given that many Internet Giants are adopting this decentralized architecture, and that these hyperscale providers are taking more and more colocation space, it seems likely we will see some level of adoption in the colo market.

“We have open source for software, why not have open source for hardware as well?” says Martinot-Lagarde. “There’s no reason why colocation providers cannot be an adopter of Open Compute.”

Article by Greg Jones, Schneider Electric Blog Network
