
Redefining efficiencies for downsized, on-premise enterprise data centers

15 Jun 2017

Over the last five years, three major transformations have rocked the data center industry.

These transformations have occurred so quickly that many data center owners have been caught flat-footed, uncertain of the best way to shape their data center modernization strategies. The first transformational phase involved a rapid, virtualization-driven consolidation of on-premise data centers.

In a typical consolidation example, five medium-sized data centers would consolidate into one larger data center. This approach was replicated across many companies to varying degrees.

In the midst of all of this, the second transformational phase occurred: the mass exodus of applications to the cloud. This left behind thousands of downsized on-premise data centers that were mere shells of their former selves.

Today, the industry is undergoing a third transformational phase: a retrenchment, and even a renewed growth, of enterprise on-premise data centers.

What is driving this unanticipated third phase of transformation? Major market and technology trends such as the Internet of Things (IoT) have driven exponential growth in the amount of data that needs to be captured, stored, analyzed and connected. That data is being gathered and analyzed to create both business value and competitive advantage.

As a result, analysts are forecasting growth for both cloud/colo and on-premise data centers (in the case of on-premise, the growth forecast is 5% over the next five years).

On the enterprise side a second driver is emerging: entrenched applications that integrate with many on-premise business systems (like Lotus Notes) are staying put. These applications are difficult to break apart cost-effectively and migrate to the cloud, so stakeholders deem them simpler and less costly to maintain on-premise. And these applications are growing.

The downsized data centers that have been left behind are quite different from their pre-cloud predecessors. As a result, the approach to managing and operating them has to be different. If not, stakeholders will be forced to sustain high OPEX costs as they preside over what are essentially very inefficient data centers.

Consider the example of power and cooling systems. Even though 50-75% of the servers may have been displaced and those applications moved to the cloud, oversized power and cooling systems remain. When downsizing IT, the utilization of power and cooling systems can drop as low as 10%.

Power and cooling systems are not proportionally reduced in an IT downsizing scenario. Oversized gear is not energy efficient and is costly to maintain.
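The utilization drop described above is simple arithmetic: the IT load shrinks while installed power and cooling capacity stays fixed. A minimal sketch, using invented numbers (the 500 kW capacity and the before/after loads are assumptions chosen to match the article's 10% figure, not measurements):

```python
# Hypothetical illustration: power/cooling utilization after an IT downsizing.
# Capacity and load figures are invented for illustration.

def utilization(it_load_kw: float, installed_capacity_kw: float) -> float:
    """Fraction of installed power/cooling capacity actually in use."""
    return it_load_kw / installed_capacity_kw

# Before the move to cloud: 400 kW of IT load against 500 kW of capacity.
before = utilization(400, 500)   # 0.8 -> 80% utilized

# After most servers are displaced, the load falls but capacity stays fixed.
after = utilization(50, 500)     # 0.1 -> 10% utilized

print(f"before: {before:.0%}, after: {after:.0%}")
```

The point the numbers make is that utilization is a ratio: halving the denominator (right-sizing the gear) recovers efficiency just as surely as growing the load.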

Therefore, the challenge within downsized data centers is to determine which pieces of equipment are inefficient (and how inefficient they are) and to measure how much these inefficiencies are inflating operational costs. Then, once reliable data is gathered and analyzed, decisions can be made regarding changes that render the downsized data center more efficient.

Monitoring and analytics are the keys for improvement

Both on-premise and cloud-based data center infrastructure management (DCIM) tools can assist in fixing inefficiencies in downsized data centers. On-premise tools can record the power draw from all of the components of the data center physical infrastructure.

Then, through benchmarking, opportunities for improvement are identified. These tools are also effective in planning capacity (forecasting how much power and cooling is really needed to address the current data center requirements).
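The benchmarking step described above can be sketched very simply: collect an average power draw per device, establish a fleet baseline, and flag the gear that runs well above it. This is a toy illustration with invented device names, readings, and threshold, not the logic of any actual DCIM product:

```python
# Hypothetical sketch of DCIM-style benchmarking: compare each device's
# average power draw against a fleet baseline to flag inefficient gear.
# Device names, readings, and the 1.5x threshold are all invented.

from statistics import mean

readings_kw = {          # average power draw per device, in kW
    "CRAC-1": 42.0,
    "CRAC-2": 18.5,
    "UPS-A":  30.0,
    "UPS-B":  61.0,      # oversized unit drawing well above the baseline
}

baseline = mean(readings_kw.values())

flagged = {name: kw for name, kw in readings_kw.items()
           if kw > 1.5 * baseline}   # simple threshold; real tools use richer models

print(f"baseline: {baseline:.1f} kW, flagged: {sorted(flagged)}")
```

In practice the baseline would come from many readings over time (and, for cloud-based tools, from comparable data centers), but the identify-by-deviation idea is the same.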

New, cloud-based tools are also emerging that are capable of capturing data center physical infrastructure asset performance data. These systems not only remotely monitor equipment performance, but they also perform predictive diagnostics that can leverage data from multiple similar data centers to create more precise performance benchmarks.

By recording factors such as operating temperature and the number of battery discharge operations sustained (e.g., in the case of a UPS), the predictive analytics can estimate the probability of failure within a given window of time.
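One common way to turn such recorded factors into a failure probability is a logistic model. The sketch below is purely illustrative: the coefficients are invented, whereas a real predictive-diagnostics system would fit them to fleet data gathered from many similar data centers:

```python
# Illustrative toy model only: estimating UPS battery failure probability
# from average operating temperature and accumulated discharge cycles.
# All coefficients are assumptions, not values from any real product.

import math

def failure_probability(avg_temp_c: float, discharge_cycles: int) -> float:
    """Probability of failure within the next service window (toy logistic model)."""
    # Assumed weights: risk rises above ~25 C and with accumulated discharges.
    score = 0.15 * (avg_temp_c - 25) + 0.02 * discharge_cycles - 3.0
    return 1 / (1 + math.exp(-score))

cool_unit = failure_probability(avg_temp_c=22, discharge_cycles=10)
hot_unit = failure_probability(avg_temp_c=35, discharge_cycles=120)

print(f"cool unit: {cool_unit:.2f}, hot unit: {hot_unit:.2f}")
```

The useful property of a model like this is that it ranks units by risk, letting operators replace the hot, heavily cycled UPS before it fails rather than on a fixed calendar schedule.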

Article by Steven Carlini, Schneider Electric Data Center Blog Network
