DataCenterNews Asia Pacific - Specialist news for cloud & data center decision-makers
Darrenwatkins0012

When AI meets infrastructure: designing the data centres that can keep up

Thu, 20th Nov 2025

Artificial intelligence (AI) is driving the next wave of digital transformation - but the infrastructure needed to support it is under immense pressure. As models grow in complexity and data volumes multiply, enterprises across Europe are finding that legacy environments can no longer sustain the physical and operational demands of AI.

According to S&P Global Market Intelligence, more than 40 per cent of organisations have already delayed or abandoned AI projects due to the cost and complexity of running them at scale. While most early adopters focused on software, algorithms and data science, the real bottleneck has proved to be infrastructure.

The new workload reality

AI workloads are unlike anything data centres have managed before. Traditional applications such as ERP or office systems run predictable loads that fluctuate within a known range. AI is inherently unpredictable and vastly more intensive.

Training a large model can involve thousands of graphics processing units (GPUs) running continuously for weeks, consuming tens of kilowatts or more per rack and generating vast amounts of heat. Even in production, AI inference workloads (powering fraud detection, search or personalised services) run around the clock.
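To see where "tens of kilowatts per rack" comes from, a back-of-envelope estimate helps. The figures below are illustrative assumptions (a ~700 W training accelerator, eight per server, four servers per rack), not vendor specifications:

```python
# Back-of-envelope AI rack power estimate. All figures are
# illustrative assumptions, not vendor specifications.
GPU_WATTS = 700          # assumed per-accelerator draw for a high-end training GPU
GPUS_PER_SERVER = 8      # assumed dense training server
SERVERS_PER_RACK = 4     # assumed rack layout
OVERHEAD = 1.15          # assumed CPUs, networking, fans and conversion losses

rack_kw = GPU_WATTS * GPUS_PER_SERVER * SERVERS_PER_RACK * OVERHEAD / 1000
print(f"Estimated rack load: {rack_kw:.1f} kW")  # roughly 26 kW on these assumptions
```

Even this conservative layout lands well above the 5-10 kW racks legacy halls were built around, and denser configurations push past 50 kW.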

Legacy enterprise and colocation environments were never designed for this type of load. Organisations can't simply retrofit cooling or add power and expect to reach the efficiency or reliability that AI requires. The engineering needs are completely different.

At rack densities exceeding 50 kW, the airflow, power distribution and even floor strength must be re-engineered. Cooling systems built for traditional IT can no longer cope, and existing electrical infrastructure struggles to maintain the required stability.

From retrofit to redesign

For many operators, AI has triggered a fundamental rethink of how new facilities are designed and built. Liquid cooling, high-density power distribution and modular architecture are becoming the baseline requirements for AI-ready data centres.

Liquid cooling is especially transformative. Whether direct-to-chip or immersion-based, it enables operators to remove heat up to 3,000 times more efficiently than air. Designing it in from the start allows for cleaner integration of pipework, pumps and containment systems, which are essential for both operational stability and sustainability.
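The "up to 3,000 times" figure reflects the difference in volumetric heat capacity between water and air, which a quick calculation with approximate room-temperature property values illustrates:

```python
# Rough comparison of how much heat a unit volume of coolant can carry.
# Property values are approximate room-temperature figures.
AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005            # J/(kg*K), specific heat of air
WATER_DENSITY = 1000     # kg/m^3
WATER_CP = 4186          # J/(kg*K), specific heat of water

ratio = (WATER_DENSITY * WATER_CP) / (AIR_DENSITY * AIR_CP)
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
```

The ratio works out to roughly 3,500, which is why a modest flow of liquid can replace enormous volumes of moving air.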

Structural considerations are also critical. Fully populated AI racks can weigh several tonnes, requiring reinforced flooring and wider containment aisles. Electrical systems must provide redundancy at every level, while uninterruptible power supply (UPS) configurations and switchgear need to handle dynamic, sustained loads.

New AI campuses are being built for flexibility as much as capacity. Data centres might start with a 30-megawatt footprint, but the infrastructure must be ready to scale five or ten times without redesigning the entire site. That scalability is key to long-term value.

The importance of location

The rise of AI is also reshaping the geography of data centre demand. For many applications, performance now depends not only on compute power but on proximity to data.

If models are trained or deployed far from their datasets, latency increases and responsiveness drops, especially for time-sensitive tasks such as financial transactions, health diagnostics or real-time content delivery. This is prompting enterprises to prioritise sites closer to major network nodes and user populations.
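The latency penalty of distance is easy to quantify: light travels at roughly two thirds of its vacuum speed in optical fibre, so every 100 km of path adds about a millisecond of round-trip time before any routing or processing delay. A minimal sketch:

```python
# Round-trip propagation delay over a fibre path, ignoring routing hops
# and queuing. The two-thirds slowdown factor is a standard approximation.
C_KM_PER_S = 300_000        # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3        # approximate slowdown in glass fibre

def round_trip_ms(distance_km: float) -> float:
    """Best-case fibre round-trip time in milliseconds."""
    return 2 * distance_km / (C_KM_PER_S * FIBRE_FACTOR) * 1000

for km in (50, 500, 2000):
    print(f"{km:>5} km -> {round_trip_ms(km):.1f} ms round trip")
```

For a latency budget of a few milliseconds, as in trading or interactive inference, this physics alone rules out sites more than a few hundred kilometres from the data.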

The data centre, once seen as a background utility, has become a strategic extension of the AI value chain. Facilities near large cities, financial districts and innovation clusters are in growing demand, while secondary locations are being developed as regional hubs.

This distributed model, in which high-capacity central campuses connect to smaller edge sites that process data closer to where it is generated, supports both performance and resilience.

Sustainability in the AI era

AI's energy demands have placed the industry under heightened scrutiny. Training a single large model can consume the equivalent annual electricity usage of hundreds of homes, prompting questions from regulators, investors and customers alike.

Sustainability is no longer a separate agenda pursued for competitive advantage - it's fundamental to data centre design. Facilities that can deliver AI-ready density efficiently will set the standard for the next decade.

Modern campuses are integrating renewable power sourcing, waste-heat reuse, and intelligent cooling management to reduce carbon intensity. Liquid-cooled systems, for example, not only enhance thermal efficiency but can make it easier to capture and reuse heat in local energy networks.

AI is also playing a role in improving sustainability. Operators are using predictive analytics to fine-tune temperature control, airflow and energy use across their sites to maximise efficiency hour by hour.

Designing for change

The pace of change in AI technology is relentless. Hardware cycles are shortening, cooling technologies are evolving, and the balance between cloud, edge and on-premises computing is shifting. Static infrastructure risks becoming outdated before it's even fully operational.

This is driving a preference for modular construction and adaptable systems. Facilities are now designed with scalability built in, enabling operators to expand power and cooling incrementally, without shutting down workloads. Electrical rooms, cooling loops and hall layouts can be reconfigured to support the next generation of GPUs or liquid-cooling technologies as they emerge.

Operational flexibility is equally important. Workloads may need to move between halls or even regions to meet compliance, performance or sustainability objectives. Data centres that enable this agility will be best positioned to support long-term AI growth.

A new definition of readiness

AI has elevated the role of the data centre from background infrastructure to strategic enabler. The organisations moving fastest are those aligning their digital ambitions with physical readiness by investing in facilities that can handle high-density compute, low latency and sustainable operation simultaneously.

The lessons are clear:

  • AI workloads demand density and precision far beyond traditional enterprise IT.
  • Retrofitting older sites offers only temporary relief; purpose-built design is now essential.
  • Proximity to data is becoming a performance differentiator.
  • Sustainability and flexibility are no longer optional; they define long-term viability.

As AI reshapes industries from healthcare to finance, the data centres powering it are evolving just as rapidly. Facilities that can deliver scalable, efficient and responsible infrastructure will form the backbone of the AI economy, enabling innovation not just for today's workloads, but for the ones that haven't been imagined yet.