
Data centre maintenance keeping you up? HPE says automation's the answer

05 Jun 2018

Article by HPE South Pacific vice president & general manager Raj Thakur

Always-on availability in the data centre is essential to business success, and ensuring uninterrupted service requires constant vigilance and maintenance. This need for constant upkeep, and the reliance on infrastructure behind it, only looks set to increase as organisations deploy more business-critical applications.

While new infrastructure management tools are introduced continuously, many still fall short of the enhanced automation and lowered maintenance requirements that the industry covets. As a result, many IT professionals are still losing days and nights – possibly even missing important birthdays and anniversaries – dealing with issues that require manual tuning.

A major pain point that continually surfaces in conversations with customers is how much human intervention maintenance cycles still require. Maintenance is also a large drain on operating budgets, with data centre operators devoting a huge proportion of their spend simply to keeping the lights on.

This raises the question: why is maintenance still keeping operators up at night despite the constant introduction of new tools to deal with the problem? What are we really missing?

The shortfalls of traditional infrastructure tools

Truly removing the burden of managing infrastructure requires the foresight to predict problems before they occur, along with deep insight into the underlying workloads and resources so the infrastructure can be better optimised.

Before you lose any more sleep over data centre maintenance, consider these four factors to determine whether your tools are falling short in overcoming frustrating maintenance problems:

1) They don’t learn from others

Analytics that simply report on local system metrics tend to offer limited value. What you should look for instead is a tool's ability to learn from the behaviour of thousands of peer systems, aiding the detection and diagnosis of developing issues. If two minds are better than one, a thousand are better still.

A holistic approach to data collection and analysis can pool observations from an immense variety of workloads. This allows rare events identified at one site to be pre-emptively avoided at another, and more common events to be detected more quickly and with greater accuracy.
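To make the idea concrete, here is a minimal, purely illustrative Python sketch – not a description of any vendor's implementation – of how an issue signature learned at one site could be shared and matched against telemetry at another. The metric names and thresholds are assumptions for the example.

```python
# Illustrative sketch only (not HPE's implementation): an issue signature
# observed at one site is added to a shared library, so the same developing
# pattern can be flagged pre-emptively at every other site in the fleet.

from dataclasses import dataclass

@dataclass
class IssueSignature:
    name: str
    conditions: dict  # metric name -> threshold the metric must exceed

def matches(signature: IssueSignature, telemetry: dict) -> bool:
    """True if every metric in the signature exceeds its learned threshold."""
    return all(telemetry.get(metric, 0.0) >= threshold
               for metric, threshold in signature.conditions.items())

# Signature learned from a rare incident at one customer site...
shared_library = [
    IssueSignature("controller saturation under mixed workload",
                   {"read_latency_ms": 25.0, "cpu_util_pct": 90.0}),
]

# ...is checked against local telemetry at a completely different site.
local_sample = {"read_latency_ms": 31.2, "cpu_util_pct": 94.5, "cache_hit_pct": 71.0}

for sig in shared_library:
    if matches(sig, local_sample):
        print(f"Pre-emptive alert: known pattern '{sig.name}' detected locally")
```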

2) Failing to see the whole picture

Traditional tools often provide analytics in a siloed fashion, reporting system status per device – just one part of the overall story. With problems that disrupt applications able to pop up anywhere in the infrastructure stack, it is important to be able to conduct cross-stack analytics across multiple layers to get the bigger picture. That means spanning crucial components such as applications, compute, virtualisation, databases, networks and storage.
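As a rough illustration of what cross-stack analytics means in practice, the sketch below joins per-layer telemetry for the same time window and ranks each layer by how far it has drifted from its own baseline, instead of raising separate per-device alerts. The layer metrics and numbers are hypothetical.

```python
# Hypothetical cross-stack view: the same time window seen from every layer,
# ranked by deviation from that layer's own baseline. All names and numbers
# are invented for illustration.

window = "02:00-02:05"

telemetry = {
    "application":    {"metric": "p99_latency_ms",   "observed": 480, "baseline": 120},
    "compute":        {"metric": "cpu_util_pct",     "observed": 62,  "baseline": 55},
    "virtualisation": {"metric": "cpu_ready_pct",    "observed": 2.1, "baseline": 2.0},
    "database":       {"metric": "lock_wait_ms",     "observed": 15,  "baseline": 12},
    "network":        {"metric": "retransmit_pct",   "observed": 0.1, "baseline": 0.1},
    "storage":        {"metric": "write_latency_ms", "observed": 65,  "baseline": 4},
}

def deviation(layer: dict) -> float:
    """How many times its own baseline the layer's headline metric has reached."""
    return layer["observed"] / layer["baseline"] if layer["baseline"] else float("inf")

# Rank layers by deviation so one application slowdown points at one culprit.
ranked = sorted(telemetry.items(), key=lambda kv: deviation(kv[1]), reverse=True)
culprit, metrics = ranked[0]
print(f"Window {window}: slowdown correlates most strongly with the {culprit} layer "
      f"({metrics['metric']} at {deviation(metrics):.1f}x baseline)")
```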

3) They don’t know enough

Predictive modelling requires deep domain experience – understanding all the operating, environmental and telemetry parameters within each system in the infrastructure stack. General-purpose analytics can only go so deep. Pairing domain experts with AI, however, enables machine-learning algorithms to identify causation from historical events and, in turn, predict the most complex and damaging problems.
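The sketch below illustrates the general idea under simple assumptions: domain experts choose the telemetry features that matter, historical incidents supply the labels, and an off-the-shelf classifier (scikit-learn here) learns to score live samples. It is a toy example, not the modelling approach of any particular product.

```python
# Toy sketch of expert-guided predictive modelling: expert-selected features,
# labels taken from historical incidents, and a simple classifier that scores
# live telemetry. Feature names and values are invented. Requires scikit-learn.

from sklearn.linear_model import LogisticRegression

# Expert-selected features per sample: [queue_depth, temp_delta_C, media_errors]
history_features = [
    [4,  1.0, 0],   # healthy periods...
    [6,  0.5, 0],
    [5,  2.0, 1],
    [48, 9.5, 3],   # ...and periods that preceded a recorded incident
    [55, 8.0, 5],
    [61, 11.0, 4],
]
history_labels = [0, 0, 0, 1, 1, 1]  # 1 = incident followed within 24 hours

model = LogisticRegression().fit(history_features, history_labels)

# Score live telemetry: a high probability is surfaced as a prediction,
# ideally long before the problem becomes visible to users.
live_sample = [[52, 10.0, 2]]
risk = model.predict_proba(live_sample)[0][1]
print(f"Predicted probability of an incident within 24h: {risk:.2f}")
```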

4) They can’t act without you

Perhaps the biggest drawback of traditional tools is their inability to act. In the ideal state of autonomous operations, the data centre would be self-managing, self-healing and self-optimising – in essence, able to avoid a problem or improve the environment without intervention from an administrator. Achieving this level of automation requires a proven history of automated recommendations that builds the necessary trust and confidence.
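One simple way to picture that trust-building step is a gate that only lets a type of recommendation run autonomously once it has a clean, sufficiently long track record under human supervision. The sketch below is hypothetical; the recommendation names and thresholds are invented for illustration.

```python
# Hypothetical "trust before autonomy" gate: a recommendation type is applied
# without human sign-off only after enough successful, operator-approved runs.

AUTO_APPLY_AFTER = 20  # successful supervised runs required before autonomy

track_record = {
    # recommendation type      (successes, failures) under human supervision
    "rebalance_volumes":        (34, 0),
    "expand_cache_partition":   (7, 1),
}

def should_auto_apply(recommendation: str) -> bool:
    successes, failures = track_record.get(recommendation, (0, 0))
    return failures == 0 and successes >= AUTO_APPLY_AFTER

for rec in ("rebalance_volumes", "expand_cache_partition"):
    if should_auto_apply(rec):
        print(f"{rec}: applying automatically (proven history)")
    else:
        print(f"{rec}: queued for administrator approval")
```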

The future of data centre maintenance 

To overcome the limitations of traditional tools, convincingly reduce maintenance requirements and better automate the data centre, one has to embrace a new generation of AI solutions. This means leveraging tools that are able to observe, learn, predict, recommend and, ultimately, automate.

Through observation, AI can develop a steady-state understanding of the ideal operating environment for each workload and application. Deep system telemetry coupled with global connectivity allows for rapid cloud-enabled machine learning, so AI tools can quickly predict problems through pattern-matching algorithms. Application performance can even be modelled and tuned for new infrastructure based on historical configurations and workload patterns.
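A toy version of that observe-learn-predict loop might look like the following: a steady-state baseline is learned from healthy telemetry, and new samples that drift well outside it are flagged early. The figures and tolerance are illustrative assumptions only.

```python
# Illustrative observe-learn-predict loop: learn a steady-state baseline from
# healthy latency samples, then flag readings that drift far outside it.

from statistics import mean, stdev

# Observe: latency samples (ms) collected while the workload was healthy.
observed = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.4, 4.0, 3.9]

# Learn: the steady-state understanding of this workload.
baseline_mean, baseline_spread = mean(observed), stdev(observed)

def predict_trouble(sample_ms: float, tolerance: float = 4.0) -> bool:
    """Flag samples more than `tolerance` deviations away from steady state."""
    return abs(sample_ms - baseline_mean) > tolerance * baseline_spread

for sample in (4.2, 4.5, 9.7):
    status = "drifting from steady state" if predict_trouble(sample) else "normal"
    print(f"{sample:>5.1f} ms -> {status}")
```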

Based on these predictive analytics, AI solutions can determine the appropriate responses required to improve the data centre environment. The pressure is then taken off IT teams – they no longer have to work through the night to find the source of a problem when managing infrastructure. More importantly, once the AI's recommendations prove effective, they can be applied automatically without the intervention of IT administrators. That, to me, is the holy grail of automation.

At HPE, we have seen customers using AI tools predict and resolve issues automatically 86 per cent of the time. They also spend 85 per cent less time on storage issues and enjoy a 79 per cent reduction in IT storage operating expenditure. The advantages of deploying AI to help manage data centre infrastructure are undeniable.

Furthermore, with technological advancements set to invigorate all sectors of the Asia Pacific economy, the highly diverse region is expected to face a shortage of 2 million IT professionals by 2030 (Korn Ferry – The Global Talent Crunch). I'm certainly looking forward to the not-so-distant future where automation is the next frontier in data centre management – and, of course, to getting a good night's rest.
