
You can’t burn it down and start over: how to modernize a data center in 4 steps

22 May 2017

It’s no secret among data center managers trying to bridge the gap between legacy systems and today’s business demands that many would rather (figuratively) burn it all down and build anew.

But unless you’ve come into a huge reserve of capital or have an exceptionally strong business case, the reality for most enterprises is not starting over but modernizing the existing data center, and these four steps show how.

4 Steps to Modernization

A conventional data center tends to be more static and more manual in its operation with a potentially limited lifecycle. A modern data center is dynamic, automated and sustainable.

Getting from conventional to smart may seem overwhelming, but the steps are clear, and following them makes the whole process more straightforward.

Assessment

When embarking on modernizing, you’ll likely begin by asking, “How do I know where to spend my next $1 if I don’t know what I have or where it will have the most impact?”

To figure this out, you first have to assess what you have. This will enable you to align spend with business objectives.

For example, are you looking to make the data center more resilient? More efficient? More redundant?

These answers will drive where you can optimize your investment.

Today’s technology enables digital data collectors that provide a highly scalable, low-cost alternative to clipboards and manpower, making the assessment process much easier.

These lightweight, cloud-based applications are deployed on the network and provide metadata for both the physical infrastructure and the IT environment.

This is when big data becomes a reality: all the collected data is centralized and analytics are applied to create actionable intelligence.

Real-time and standardized results enable benchmarks to be created and opportunities for improvement uncovered.
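
As a rough illustration of what that analytics step can look like, here is a minimal sketch that averages a handful of collected power readings into a Power Usage Effectiveness (PUE) benchmark. The field names and figures are hypothetical, not taken from any particular collection tool.

```python
# Minimal sketch: turning raw facility readings into a simple benchmark (PUE).
# The data layout and figures below are hypothetical.

from statistics import mean

# Hypothetical hourly readings gathered by digital data collectors (in kW).
readings = [
    {"total_facility_kw": 520.0, "it_load_kw": 310.0},
    {"total_facility_kw": 505.0, "it_load_kw": 298.0},
    {"total_facility_kw": 540.0, "it_load_kw": 325.0},
]

def pue(sample):
    """Power Usage Effectiveness: total facility power divided by IT load."""
    return sample["total_facility_kw"] / sample["it_load_kw"]

avg_pue = mean(pue(s) for s in readings)
print(f"Average PUE over {len(readings)} samples: {avg_pue:.2f}")
# Tracking this number over time is one way benchmarks reveal opportunities for improvement.
```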

Fixing the Basics

Even as some operations shift to the cloud, fixing the basics sets a path that makes further modernization possible. The actions here are low cost and low risk, and perhaps ones you’ve heard before.

It bears repeating, because even a small change to the basics can make a big difference.

One basic example is containment: arranging racks in a hot aisle/cold aisle layout so that exhaust air and intake air don’t mix.

You’ll gain efficiency and possibly free up capacity. Analyzing and modifying cooling is another simple basic that can yield quite a return. Too often, the focus in a data center is on deploying the latest widget.

New and innovative technology is great, but the thermodynamic cycle of the data center remains the same: power enters, IT work is performed, heat is generated and must be removed.
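
To make that cycle concrete, here is a back-of-the-envelope sketch that treats the electrical power drawn by the IT equipment as the heat the cooling plant must reject (a common simplification); the 300 kW load is a made-up figure.

```python
# Back-of-the-envelope heat load estimate. Simplifying assumption: essentially
# all electrical power consumed by IT equipment ends up as heat to be removed.

IT_LOAD_KW = 300.0           # hypothetical IT load
BTU_PER_HR_PER_KW = 3412.14  # 1 kW of heat ~ 3,412 BTU/hr
KW_PER_TON = 3.517           # 1 ton of refrigeration ~ 3.517 kW of heat

heat_btu_hr = IT_LOAD_KW * BTU_PER_HR_PER_KW
cooling_tons = IT_LOAD_KW / KW_PER_TON

print(f"{IT_LOAD_KW:.0f} kW of IT load ~ {heat_btu_hr:,.0f} BTU/hr "
      f"~ {cooling_tons:.0f} tons of cooling capacity")
```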

So, to be ready for the next big thing, you’ve got to be methodical about getting the basics right first. Then you are headed in the right direction: towards optimization.

Optimizing

At this stage, the not-so-secret wish to rip and replace gets some satisfaction: no matter what, there comes a time when a system is so outdated that no amount of fixing can help.

The good news is that optimizing takes a data center from static to dynamic.

For example, old air conditioners were switched on and ran at one pace regardless of fluctuations in heat load. New air conditioning technology can ramp up and down depending on the temperature in the room.

The same goes for UPSes, which used to run at a single steady setting and now have more efficient eco-modes. Pumps that once only ran at full throttle now spin up and down with demand.
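
The payoff of that variable-speed behaviour is easiest to see through the ideal fan and pump affinity laws, under which power drawn scales with the cube of speed. The sketch below assumes a hypothetical 15 kW motor and ignores the ways real equipment deviates from the ideal curves.

```python
# Why variable-speed fans and pumps save energy: under the ideal affinity laws,
# flow scales with speed while power scales with the cube of speed.
# Real equipment deviates from the ideal, so treat these numbers as indicative.

FULL_SPEED_POWER_KW = 15.0  # hypothetical motor power at 100% speed

for speed_fraction in (1.0, 0.8, 0.6):
    power_kw = FULL_SPEED_POWER_KW * speed_fraction ** 3
    print(f"{speed_fraction:.0%} speed -> ~{power_kw:.1f} kW "
          f"({power_kw / FULL_SPEED_POWER_KW:.0%} of full power)")
```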

Yes, these new pieces of equipment require capital expenditure, but that is inevitable in any data center. The practical, gradual installation of new platforms is what enables the future.

Automation and Control

As full optimization is realized, the ultimate features of a smart data center must be put in place: automation and control. Without these systems, you’ll never know if what you’ve fixed and optimized is actually operating correctly.

For example, Data Center Infrastructure Management (DCIM) software enables parameters to be set and notifications communicated to prevent a small issue from becoming catastrophic.
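
Conceptually (this is not any vendor’s actual API, and the metric names and limits below are illustrative), that threshold-and-notify behaviour boils down to something like this:

```python
# Conceptual sketch of DCIM-style threshold alerting; names and limits are made up.

THRESHOLDS = {
    "rack_inlet_temp_c": 27.0,  # upper end of commonly cited recommended inlet range
    "ups_load_pct": 80.0,
    "humidity_pct": 60.0,
}

def check(readings: dict) -> list:
    """Return a notification message for every reading that exceeds its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {metric} = {value} exceeds limit {limit}")
    return alerts

for message in check({"rack_inlet_temp_c": 29.5, "ups_load_pct": 72.0}):
    print(message)  # in practice this would go to email, SMS or a ticketing system
```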

Plus, as data center staffing shrinks and the knowledge gap widens, automation and control tools help ensure reliability and availability.

Article by Russell Senesac, Schneider Electric Data Center Blog
