Vertiv & SUSE warn AI data centres face energy strain
Australia-based executives from Vertiv and SUSE have warned that the rapid build-out of AI-ready data centres is creating new strains on infrastructure design and energy use. Their comments come as industry groups mark International Data Centre Day and highlight the sector's growing economic and environmental footprint.
Operators across Australia are expanding capacity as cloud providers and enterprises deploy more generative AI workloads. Hyperscale facilities are driving a new wave of investment in power, land and grid connections in markets such as Sydney and Melbourne.
Suppliers say this expansion is reshaping how facilities are planned and operated. They point to rising rack power densities, the spread of specialised AI chips, and the need for new approaches to cooling and energy efficiency.
AI drives redesign
Vertiv's Head of Country and Senior Director of Sales for Australia and New Zealand, Lulu Shiraz, said AI adoption is changing the core design assumptions of the data centre. Operators are starting to treat compute, power and cooling as a more tightly integrated system.
"With Australia's data centre footprint rapidly expanding to support AI-driven workloads, there's growing focus on how operators manage increasing power density, cooling requirements and infrastructure scalability.
"As International Data Centre Day highlights the critical role of digital infrastructure in powering the global economy, it also underscores how rapidly the data centre is evolving in the age of AI.
"As AI adoption accelerates, the data centre is becoming a highly integrated system where chips, servers and infrastructure must work in lockstep. The growing reliance on GPUs and AI accelerators is pushing compute demands well beyond traditional environments, increasing pressure on power density, cooling and scalability.
"This is shifting how organisations approach infrastructure planning, with chip requirements now shaping decisions from rack design to energy strategy, as well as driving the need for higher-density deployments and advanced cooling approaches such as liquid cooling.
"In Australia, where demand for AI-ready infrastructure is rising alongside hyperscale investment, this moment presents an opportunity to move towards more coordinated, flexible and energy-efficient data centre models. Success will depend on how early enterprises align compute needs with infrastructure design, ensuring systems can scale with rapidly evolving AI hardware without requiring significant redesign."
Her comments reflect a shift from traditional air-cooled server halls to higher-density zones built for GPU clusters and other accelerators. Many operators are evaluating liquid cooling, rear-door heat exchangers and other techniques that can support racks drawing far more power than legacy systems.
Rising chip power requirements are also shaping grid and energy planning. Developers are securing larger power allocations and exploring on-site energy sources as AI clusters drive sustained high loads.
Software under scrutiny
While much of the focus is on concrete, steel and electrical infrastructure, SUSE argues that the software stack is an underused lever for energy efficiency.
Ben Henshall, SUSE's General Manager for Australia and New Zealand, said the emphasis on physical expansion risks overlooking the efficiency gains available inside the racks.
"I think International Data Centre Day is going to matter more with each passing year. If data centres didn't exist, the innovation and improved lives the world enjoys would not be where we are today.
"Most people hear 'the cloud' and picture something weightless and abstract. The reality is racks of servers drawing serious power. A single generative AI query uses around 10 times the electricity of a standard web search. Scale that across millions of users and you start to see why energy is the real bottleneck for this industry.
"The response has been to build more capacity, source more renewables, and improve the cooling. All necessary. But what's being forgotten is the software. Efficient infrastructure software can do for data centres what smarter engine management did for fuel economy in cars. The building matters. What's running inside it matters more. Get that right, and everything above it runs leaner."
His comparison with automotive engine management reflects growing interest in workload placement, orchestration and operating system tuning. These tools can consolidate tasks onto fewer servers, shut down idle resources and smooth power consumption.
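The consolidation idea can be illustrated with a simple first-fit-decreasing placement sketch. This is a hypothetical toy example, not any vendor's scheduler: real orchestrators weigh CPU, memory, network and service-level constraints, but the principle is the same: pack workloads onto as few servers as possible so the remainder can be powered down.

```python
# Illustrative sketch: first-fit-decreasing consolidation of workloads
# onto servers so that idle machines can be powered down. All figures
# are hypothetical.

def consolidate(workloads, server_capacity):
    """Greedily pack workload sizes (e.g. CPU cores) onto servers,
    returning how many servers end up in use."""
    servers = []  # each entry is the remaining capacity on that server
    for load in sorted(workloads, reverse=True):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load  # place on an already-running server
                break
        else:
            servers.append(server_capacity - load)  # power on a new server
    return len(servers)

# Ten small workloads that naively run one-per-server...
loads = [8, 6, 6, 5, 4, 4, 3, 2, 1, 1]
used = consolidate(loads, server_capacity=16)
# ...fit on far fewer machines; the rest can be shut down or slept.
print(f"servers needed: {used} (vs {len(loads)} unconsolidated)")  # → 3
```

The same greedy idea underlies bin-packing scoring strategies in mainstream orchestrators, which prefer filling busy nodes before waking idle ones.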
The rise of generative AI intensifies the issue. Each query consumes far more compute than a traditional search or web transaction, translating into higher electricity use at scale.
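Back-of-envelope arithmetic shows why the multiplier matters at scale. The figures here are illustrative only: the roughly 10x ratio comes from Henshall's comment above, while the ~0.3 Wh per conventional web search is a commonly cited external estimate, not a figure from this article.

```python
# Rough illustration of the scale effect. Assumed inputs:
# ~0.3 Wh per standard web search (commonly cited estimate, an assumption
# here), and the ~10x multiplier for a generative AI query quoted above.
search_wh = 0.3                 # Wh per standard web search (assumption)
ai_query_wh = search_wh * 10    # ~10x per the article's quote
queries_per_day = 100_000_000   # hypothetical user base

extra_kwh_per_day = (ai_query_wh - search_wh) * queries_per_day / 1000
print(f"extra energy vs plain search: {extra_kwh_per_day:,.0f} kWh/day")
# At these assumed figures, 100M AI queries draw on the order of
# 270,000 kWh/day more than the same traffic served as plain searches.
```

Even with generous uncertainty in the per-query figure, the order of magnitude explains why suppliers describe energy as the sector's binding constraint.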
Australian market pressure
Australia's role as a regional cloud and AI hub is feeding into the debate. Hyperscale operators are signing long-term renewable energy deals and building large campuses on the fringes of major cities.
Shiraz pointed to demand from both global and local organisations for AI-ready facilities. These customers want infrastructure that can handle frequent refresh cycles as chipmakers release more power-hungry GPUs and accelerators.
The comments from Vertiv and SUSE underline the industry's split focus. On one side, engineers are redesigning racks, power trains and cooling loops for dense AI clusters. On the other, software specialists are urging operators to rethink how workloads run on that hardware.
Vendors and operators in Australia are likely to face tighter scrutiny of energy use as governments pursue climate and grid stability targets. That will place greater emphasis on both better physical design and improved use of existing resources.
The two executives point to different levers but the same constraint: energy and heat are emerging as central limits on how far AI-driven growth can go inside the data centre.