HPE exec discusses how AI is transforming storage
Article by HPE South Pacific Data and Storage Optimisation business manager Brett Lobwein
IT infrastructures are undeniably complex, and they are becoming more so every day as the digitisation of applications accelerates.
So, what do these increasingly complicated infrastructures mean for businesses? Along with an increased risk of system disruption, how do IT teams manage these applications?
According to a report by IDC Australia, more than 50 percent of the downtime problems experienced in data centers over the past 12 months were due to system failures.
This is a colossal waste of time and resources – Gartner even puts a dollar figure on it, estimating the cost of network downtime for an organisation at roughly $58,802 (AUD) per hour.
And, with businesses averaging 175 hours of downtime each year, this could easily mean losses in the millions – not to mention the reputational damage and loss of customers that come hand in hand with IT meltdowns.
So, what can organisations do to try and mitigate this problem?
Previous efforts to achieve reliability, performance and availability across this increasing number of applications have been focused on watertight control of IT processes, as well as over-capacity and hardware redundancy.
However, this tactic is becoming increasingly untenable: data storage technology has grown so complex that it can no longer be managed effectively with conventional data center management tools.
What is needed, therefore, is a new generation of management solutions – an autonomous data center – that relieves administrators of arduous day-to-day work through automation and analytics, freeing their valuable time for genuine value-adding activities.
A new generation of storage has arrived
A data center infrastructure, powered by Artificial Intelligence (AI), can overcome the limitations of traditional approaches by using intelligent algorithms, powered by sensor data from the systems, to effectively run itself.
This intelligent AI engine can automatically detect malfunctions, bottlenecks and faulty configurations, and has the potential to resolve them autonomously, removing the need for time-consuming human intervention. It can even blacklist problems it has previously detected, avoiding repetition and stopping customers from hitting issues they have experienced before.
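The blacklisting idea can be sketched in a few lines. This is a minimal, hypothetical illustration – the signature fields, fix names and matching logic are assumptions for the sake of the example, not HPE's actual implementation:

```python
# Hypothetical fingerprints of previously diagnosed problems:
# (firmware version, driver version, observed symptom).
KNOWN_ISSUE_BLACKLIST = {
    ("fw-4.2.1", "nic-drv-1.8", "rx_ring_stall"),
    ("fw-3.9.0", "hba-drv-2.2", "latency_spike"),
}

def check_telemetry(firmware: str, driver: str, symptom: str) -> str:
    """Return an action for a symptom observed on a given configuration."""
    if (firmware, driver, symptom) in KNOWN_ISSUE_BLACKLIST:
        # Seen before somewhere in the installed base: resolve automatically.
        return "apply-known-fix"
    # Novel combination: escalate for analysis and, once diagnosed, blacklist it.
    return "open-analysis-case"

print(check_telemetry("fw-4.2.1", "nic-drv-1.8", "rx_ring_stall"))  # apply-known-fix
```

Once an issue is diagnosed anywhere in the installed base, its signature joins the blacklist, so every other connected system benefits immediately.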
Not only could AI in the data center detect and repair issues, but it also has the potential to proactively provide suggestions for improvements.
By leveraging the data and insights generated, it can identify opportunities for systems optimisation and better performance, which in turn has a positive impact on business processes, the effectiveness of the IT team and – ultimately – customer experience.
How does it do this? Put simply, AI in the data center allows for simultaneous monitoring of every system in an installed base.
This enables the system to develop an understanding of the ideal operating environment for every workload and application, and then spot abnormal behaviour through recognition of the regular, underlying ideal operating patterns.
In other words, as the depth and breadth of data generated within your business increases, so too does the effectiveness of the AI system as it recognises regular data patterns.
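The core of this approach – learning the regular operating pattern, then flagging deviations from it – can be illustrated with a simple rolling-baseline check. The window size, threshold and latency figures below are illustrative assumptions, not values from any real system:

```python
import statistics

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag points that deviate more than `threshold` standard deviations
    from the rolling baseline learned over the previous `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against zero variance
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency around 5 ms, then a single outlier at index 25.
latency_ms = [5.0, 5.1, 4.9, 5.2, 5.0] * 5 + [50.0]
print(find_anomalies(latency_ms))  # [25]
```

The more history the system accumulates, the more stable the baseline becomes, which is exactly why the article notes that effectiveness grows with the depth and breadth of the data.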
This, in turn, extends the life of the AI system and means that it will continuously look to improve your IT infrastructure, either by patching new problems that emerge, or suggesting new ways to optimise and improve processes.
The system can then use deep telemetry data to build a foundation of knowledge and experience, shared across every system connected to its AI engine globally.
This allows the technology to analyse and predict if any other system in the installed base will be susceptible to similar issues by using pattern-matching algorithms.
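One simple way to realise such pattern matching is to compare each system's workload profile against the profile of a system that hit a known issue. The feature vectors and system names below are invented for illustration; real telemetry would be far richer:

```python
import math

# Hypothetical per-system workload profiles, normalised to [0, 1]:
# (random-read ratio, cache-hit rate, queue depth, dedupe ratio).
FLEET = {
    "array-A": [0.9, 0.3, 0.8, 0.2],    # system that hit a known bottleneck
    "array-B": [0.88, 0.32, 0.79, 0.21],
    "array-C": [0.1, 0.9, 0.2, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two workload profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def at_risk(issue_profile, fleet, threshold=0.99):
    """List systems whose workload pattern closely matches a known-issue profile."""
    return [name for name, vec in fleet.items()
            if cosine(issue_profile, vec) >= threshold]

print(at_risk(FLEET["array-A"], FLEET))  # ['array-A', 'array-B']
```

Here array-B's workload closely resembles the affected array-A, so it is flagged as susceptible before the issue ever occurs there, while array-C's very different profile keeps it off the list.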
Additionally, this insight allows application performance to be modelled and tuned for new infrastructure based on historical configurations and workload patterns, reducing risk for new IT deployments and cutting down implementation costs.
Faster, better, stronger
Based on the predictive analytics and the shared knowledge of how to optimise system performance, the AI can determine the appropriate recommendations needed to ensure the ideal operating environment and apply these changes automatically on behalf of IT administrators.
When automation is not available, specific recommendations can be delivered through support case automation. This frees IT staff from a lot of the manual work required to identify the causes of system glitches and eliminates the guesswork in managing the infrastructure.
Bendigo Telco, a leading Australian telecommunications provider, has recently implemented AI technology in its data center operations to support its growth ambitions and the storage needs of its customers.
Bendigo Telco was able to shrink its data center footprint by 3.5 racks while quadrupling capacity to 1.2 petabytes, reducing both physical space and electricity requirements.
“Previously we had engineers managing several different types of platforms across different data centers who were really hands-on to ensure the platform was operational,” says Jarrod Draper, the company’s general manager of technology services.
Draper continued, “We set up a holistic storage strategy across multiple data centers whilst having a single viewing plane. This gave us visibility of what was happening at any one time across the storage architecture.”
By putting AI at the heart of data center infrastructure management, organisations will be able to predict, prevent and resolve issues faster than ever before.
This can drive significant efficiency gains and operational improvements, while making the infrastructure smarter and more reliable.
Most importantly, businesses will be able to minimise service disruption and speed up the resolution of IT issues, allowing their IT teams to focus on tasks that add value and improve the quality of the customer experience.