
How advanced analytics and predictive simulation help data centers enhance performance

03 Jul 17

According to the Uptime Institute (an independent division of the research company 451 Group), “the market for Data Center Management Systems, of which Data Center Infrastructure Management (DCIM) systems represent the largest share, will grow by $1.18 Billion by 2020.”

What will drive this growth?

IT and business executives have realized that millions of dollars in data center energy and operational costs can be saved through improved physical infrastructure planning, aided by state-of-the-art data monitoring and advanced data analytics.

On-premise DCIM tools can proactively identify potential physical infrastructure problems and predict how they might impact specific IT loads by correlating power, cooling and space resources to individual servers.

Through model-based simulation, these tools simplify complex issues such as capacity planning and server placement.

Simulation programs factor in variables such as power utilization, heat dispersion, and network access, and can answer questions such as, “What would be the impact if I moved that server?” or “What would happen if this component were to fail?”

In the case of a loss of cooling capacity, for example, simulations can answer the question of what happens if the data center temperature rises past a given threshold.
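The physics behind that question can be approximated with a simple lumped model: with cooling lost, the full IT load heats the room air until the alarm threshold is reached. The sketch below illustrates the idea; the load, air mass, and temperatures are invented for illustration and are far simpler than what a real DCIM simulation would use (which also models airflow, thermal mass of equipment, and containment).

```python
# Rough lumped-capacitance estimate of how long a data hall has to react
# after a total cooling failure. All figures are illustrative assumptions.

def minutes_to_threshold(it_load_kw, room_air_kg, start_c, threshold_c,
                         cp_air_kj_per_kg_c=1.005):
    """Time (minutes) for room air to warm from start_c to threshold_c
    if the full IT load heats the air and no cooling is available."""
    heat_rate_kj_per_s = it_load_kw  # 1 kW = 1 kJ/s
    energy_needed_kj = room_air_kg * cp_air_kj_per_kg_c * (threshold_c - start_c)
    return energy_needed_kj / heat_rate_kj_per_s / 60.0

# Example: 200 kW of IT load in a hall holding ~12,000 kg of air,
# starting at 22 C with a 32 C alarm threshold.
print(round(minutes_to_threshold(200, 12000, 22, 32), 1))
```

Even this crude estimate shows why the question matters: a fully loaded hall can have only minutes of headroom, which is exactly the kind of result operators want surfaced before an incident, not during one.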

These modern DCIM tools help to achieve three key benefits:

  • Improved system uptime
  • Lower energy consumption
  • The agility needed to manage constant capacity changes and dynamic loads

In addition to on-premise tools, a new class of “cloud-based DCIM” or “Infrastructure Management as a Service” (IMaaS) tools is gaining prominence. These tools monitor, gather data and perform analysis so that data center administrators can understand, at a component level, how their data center is operating.

One example of these tools is an offering from Schneider Electric called StruxureOn, which continuously collects raw machine data from the physical infrastructure.

As a cloud-based data center monitoring solution, it looks for patterns, detects anomalies, and draws conclusions about future equipment behavior.
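One common approach to anomaly detection on telemetry streams is to flag readings that deviate sharply from a rolling baseline. The sketch below is a generic illustration of that idea, not StruxureOn's actual algorithm; the temperature values and thresholds are invented.

```python
# Flag telemetry readings that sit more than z_threshold standard
# deviations away from the mean of the preceding `window` samples.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, z_threshold=3.0):
    """Return indices of readings that are statistical outliers
    relative to their immediately preceding window."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Example: steady UPS inlet temperatures with one sudden spike.
temps = [24.0, 24.1, 23.9, 24.0, 24.2, 24.0, 23.8, 24.1, 24.0, 23.9,
         24.1, 31.5, 24.0]
print(flag_anomalies(temps))  # the spike at index 11 is flagged
```

Production systems layer far more sophistication on top (seasonality, cross-sensor correlation, learned baselines), but the core pattern-versus-baseline comparison is the same.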

Access to more (lots more) performance data is the new critical success factor

It is now possible for any data center, whether colo, on-premise, or cloud, to capture performance data on a daily basis. The potential also exists to benchmark that data against similar outside data centers.

Past efforts at tracking and benchmarking this data have been both limited and cost-prohibitive. However, IMaaS tools can provide much larger-scale data collection.

By leveraging performance data from a larger quantity of data centers, owners and operators will be able to make more informed decisions regarding which parts of their data center need improvement.

How might such a benchmarking system be deployed?

A third party could gather the data from multiple data centers and then use that data to provide anonymized benchmarking information.

That information would then help participating data center owners to gain access to more precise, field-tested physical infrastructure performance data.
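At its simplest, such a benchmark reports where a facility ranks against anonymized peer metrics without exposing any individual site's data. The sketch below uses PUE (power usage effectiveness) as the metric; the figures are invented for illustration.

```python
# Anonymized benchmarking sketch: the third party holds only peer
# metric values and reports a facility's rank among them.

def percentile_rank(own_value, peer_values):
    """Fraction of peers with a strictly worse (higher) value.
    For PUE, lower is better, so a higher fraction is a better rank."""
    worse = sum(1 for v in peer_values if v > own_value)
    return worse / len(peer_values)

# Invented annual PUE figures from anonymized peer facilities.
peer_pue = [1.9, 1.7, 2.1, 1.5, 1.8, 2.0, 1.6, 1.85]
print(percentile_rank(1.65, peer_pue))  # fraction of peers this site beats
```

A real service would benchmark many metrics at once (energy, uptime, capacity utilization) and segment peers by climate, size, and tier, but the anonymized-aggregation principle is the same.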

The current and future benefit of big data and predictive simulation

Both on-premise DCIM simulation tools and IMaaS tools improve IT room allocation of power and cooling, provide predictive impact analysis of various IT room components, and leverage historical data to improve future IT room performance.

One benefit of incorporating both on-premise DCIM and IMaaS is the possibility of performing predictive maintenance.

The ability to say “all the signs tell us that this UPS will fail within the next 3 months so I’m going to do something about it now” saves money through reduced downtime.
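The reasoning behind that statement can be sketched as trend extrapolation: fit a line to a degrading health metric and estimate when it crosses a failure threshold. The UPS battery capacity figures and the 80% threshold below are invented assumptions, and real predictive-maintenance models are considerably richer than a straight-line fit.

```python
# Minimal predictive-maintenance sketch: least-squares linear fit to a
# monthly health metric, extrapolated to a failure threshold.

def months_until_threshold(history, threshold):
    """Fit a line to (month_index, value) samples and return months
    from the latest sample until the line hits threshold, or None
    if the metric is not degrading."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope >= 0:
        return None  # capacity is stable or improving
    intercept = y_mean - slope * x_mean
    crossing = (threshold - intercept) / slope  # month index at threshold
    return crossing - (n - 1)

# Monthly UPS battery capacity (%) drifting toward an 80% threshold.
capacity = [96, 95, 93, 92, 90, 89]
print(round(months_until_threshold(capacity, 80), 1))
```

The payoff is exactly the one described above: a replacement scheduled months ahead of the projected crossing costs far less than an unplanned outage.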

Article by Torben Nielsen, Schneider Electric Data Center Blog Network
