DataCenterNews Asia Pacific - Specialist news for cloud & data center decision-makers

The world is heating up, but data centres should keep their cool

Tue, 27th Sep 2022

In many parts of the world, including Asia, we are experiencing some of the hottest temperatures on record. Intense heat waves have swept across China, prompting the government to issue red alerts advising people to limit their time outdoors. In Singapore, temperatures in April this year came close to the island's all-time high. With a warming planet, the heat is set to get worse, with some climate simulations predicting that 600 million to one billion people in Asia will be living in areas with lethal heat waves by 2050.

With the world heating up, the challenge of keeping data centres cool becomes more complex, expensive and power intensive. Data centres are already known as big consumers of electricity: globally, they used 200-250 TWh in 2020, or around 1% of global final electricity demand. With data volumes growing, this need is only going to expand. The region's green regulations requiring data centres to be more energy efficient further compound the cooling challenge.

Keeping cool is not a new challenge for those in the data storage and processing world. Any data centre manager will be familiar with the need to balance efficient power consumption and consistent temperatures with answering a business's needs. While there's plenty of high-end tech out there that can help with cooling components, these can be hard to implement or retrofit into existing data centres. Thankfully, there are some pragmatic, sustainable strategies to explore as part of a holistic solution. 

Keeping cooler air circulating 

It should go without saying, but good air conditioning should be a mainstay of all data centres. This is particularly important for facilities operating in tropical climates like Southeast Asia, where the urban heat island effect is driving temperatures to new highs. In fact, cooling accounts for 35 to 40% of total data centre energy consumption in Southeast Asia.

Making sure that Heating, Ventilation and Air Conditioning systems have a stable power supply is a basic stipulation. For business continuity and contingency planning, back-up generators are a necessary precaution—for cooling technologies as well as compute and storage resources. Business continuity and disaster recovery plans should already include provisions for what to do if power (and back-up power) cuts out.  

If temperatures do spike, then it pays to be running hardware that's more durable and reliable. Flash storage, for instance, is typically far better able to handle increases in temperatures than mechanical disk solutions. That means data stays secure and performance remains consistent, even at high temperatures. 

Power reduction suggestions

Here are three strategies IT organisations should be considering. When combined, they can help to reduce the power and cooling requirements for data centres: 

More efficient solutions:

This is stating the obvious – every piece of hardware uses energy and generates heat. Organisations should look for hardware that can do more for them in a smaller data centre footprint. In Singapore, this is even more critical, with the government mandating that all new data centres achieve a power usage effectiveness (PUE) of 1.3 or lower. Increasingly, IT organisations are considering power efficiency when selecting what goes in their data centre. In the world of data storage and processing, for example, key metrics for evaluation now include capacity per watt and performance per watt. With data storage representing a significant portion of the hardware in data centres, upgrading to more efficient systems can significantly reduce the overall power and cooling footprint of the whole data centre.
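To make these metrics concrete, here is a minimal sketch of how PUE and capacity per watt are calculated. All the figures below are hypothetical examples for illustration, not vendor data or official thresholds beyond the 1.3 PUE target mentioned above.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (lower is better)."""
    return total_facility_kw / it_equipment_kw


def capacity_per_watt(capacity_tb: float, power_w: float) -> float:
    """Usable terabytes delivered per watt consumed (higher is better)."""
    return capacity_tb / power_w


# A facility drawing 1,300 kW in total to run 1,000 kW of IT load
# sits exactly at the 1.3 PUE target.
facility_pue = pue(total_facility_kw=1300, it_equipment_kw=1000)
print(f"PUE: {facility_pue:.2f}, meets 1.3 target: {facility_pue <= 1.3}")

# Comparing two hypothetical 500 TB storage systems with different draws:
legacy_disk = capacity_per_watt(capacity_tb=500, power_w=4000)
all_flash = capacity_per_watt(capacity_tb=500, power_w=1200)
print(f"Legacy disk: {legacy_disk:.3f} TB/W, all-flash: {all_flash:.3f} TB/W")
```

The same pattern extends to performance per watt: divide a benchmark figure (IOPS or GB/s) by the measured draw, and compare candidates on a like-for-like basis.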

Disaggregated architectures:

Consider direct-attached storage and hyperconverged systems. Many vendors talk about the efficiencies of combining compute and storage systems in hyperconverged infrastructure (HCI). That's absolutely fair, but that efficiency is mainly to do with fast deployments and reducing the number of teams involved in deploying these solutions. It doesn't necessarily mean energy efficiency. In fact, there's quite a bit of wasted power from direct-attached storage and hyperconverged systems. For one thing, compute and storage needs rarely grow at the same rate. Some organisations end up over-provisioning the compute side of the equation in order to cater to their growing storage requirements. The same occasionally happens in reverse, and in either scenario, a lot of power is wasted. If compute and storage are separated, it's easier to reduce the total number of infrastructure components needed and, therefore, cut the power and cooling requirements too. Additionally, direct-attached storage and hyperconverged solutions tend to create silos of infrastructure. Unused capacity in a cluster is very difficult to make available to other clusters, and this leads to even more over-provisioning and waste of resources.
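A toy model makes the over-provisioning effect visible. The node and shelf sizes and power figures below are assumptions chosen for illustration only: in the HCI case, growing storage demand forces whole nodes to be bought, so the bundled compute draws power whether or not it is needed; in the disaggregated case, storage shelves scale independently.

```python
import math

# Assumed characteristics of each building block (hypothetical figures):
NODE_STORAGE_TB = 50    # storage bundled into each HCI node
NODE_POWER_W = 800      # power draw per HCI node (compute + storage)
SHELF_STORAGE_TB = 100  # storage per disaggregated shelf
SHELF_POWER_W = 400     # power draw per storage shelf


def hci_power(storage_needed_tb: float) -> float:
    """Power drawn when storage growth forces buying whole HCI nodes."""
    nodes = math.ceil(storage_needed_tb / NODE_STORAGE_TB)
    return nodes * NODE_POWER_W


def disaggregated_power(storage_needed_tb: float) -> float:
    """Power drawn when storage scales independently of compute."""
    shelves = math.ceil(storage_needed_tb / SHELF_STORAGE_TB)
    return shelves * SHELF_POWER_W


# An organisation whose storage need grows to 400 TB while its
# compute need stays flat:
need = 400
print(f"HCI: {hci_power(need)} W, disaggregated: {disaggregated_power(need)} W")
```

The absolute numbers are invented, but the shape of the result holds whenever storage demand outpaces compute: coupling the two means every extra terabyte drags idle compute (and its cooling load) along with it.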

Just-in-time provisioning:

The legacy approach of provisioning based on the requirements of the next 3 to 5 years is not fit for purpose anymore. This approach means organisations end up running far more infrastructure than they immediately need. Instead, modern on-demand consumption models and automated deployment tools let companies scale the infrastructure in their data centres easily over time. Infrastructure is provisioned just-in-time instead of just-in-case, avoiding the need to power and cool components that won't be needed for months or even years.  
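A back-of-envelope comparison shows why just-in-time provisioning saves energy. The rack counts, growth curve and power draw below are hypothetical assumptions: one organisation powers its full five-year forecast from day one, the other adds racks only as demand arrives.

```python
# Assumed figures for illustration only:
RACK_POWER_KW = 5.0
HOURS_PER_YEAR = 8760


def just_in_case_kwh(racks_for_year5: int, years: int = 5) -> float:
    """All racks for the five-year forecast are powered from day one."""
    return racks_for_year5 * RACK_POWER_KW * HOURS_PER_YEAR * years


def just_in_time_kwh(racks_per_year: list[int]) -> float:
    """Only the racks actually deployed each year draw power."""
    return sum(r * RACK_POWER_KW * HOURS_PER_YEAR for r in racks_per_year)


# Demand grows from 2 to 10 racks over five years (assumed growth curve):
demand = [2, 4, 6, 8, 10]
print(f"Just in case: {just_in_case_kwh(10):,.0f} kWh")
print(f"Just in time: {just_in_time_kwh(demand):,.0f} kWh")
```

Every kilowatt-hour not consumed is also heat not generated, so the saving compounds: less power drawn by idle racks means less work for the cooling systems as well.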

Keeping data centres cool depends, most of the time, on reliable air conditioning and solid contingency planning. But in every facility, each fraction of a degree that the temperature rises is also a fractional increase in the stress on equipment. Cooling systems relieve that stress on racks and stacks, but no data centre manager wants to put those systems under additional strain - which is exactly what rising temperatures have been doing.

With global warming likely to reach 1.5 degrees Celsius as early as 2030, organisations have to find solutions that let them cut running costs, simplify and cool their data centres, and reduce their energy consumption - all at the same time. For that to happen, it is high time they took big steps toward reducing equipment volumes and heat generation in the first place.
