
Whitepaper: Warmer water can reduce the cost of data centre cooling

02 Aug 2018

How Higher Chilled Water Temperatures Can Reduce the Cost of Data Centre Cooling

Article by Paul Lin, senior research analyst at the Schneider Electric Data Center Science Center

Energy efficiency is a continual focus for data centre designers and operators seeking to reduce costs and improve return on investment (ROI). The drivers are many, including concern for environmental impact and its effect on corporate reputation, and the need to comply with existing and anticipated government regulation, but all boil down in the end to the perennial need to control costs. Reducing one’s energy bill is never a bad idea.

Apart from the IT equipment itself, the cooling function is the largest drain on energy in any data centre. Typically, the operating temperature of the computer space is maintained by chilled water coils, through which water cooled by an external chiller is circulated. The coils remove heat from the white space and return the now-warmed water to the chiller, where it is cooled once again, with the extracted heat rejected via a dry cooler, before the cycle is repeated.

The chiller itself is a major consumer of energy in any data centre, typically accounting for between 65% and 80% of that required by the cooling process. The heat rejection mechanism, usually a cooling tower or dry cooler, is much less power hungry. When conditions permit, and the dry cooler alone is sufficient to maintain optimum operating temperatures, an economiser mode can be deployed in which the chiller is bypassed completely in favour of the dry cooler. Naturally, given the power savings that ensue, data centre operators try to maximise the amount of “free cooling”, or time the cooling plant can spend in economiser mode.
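As an illustrative sketch (not Schneider Electric’s actual control logic), the economiser decision can be modelled as a comparison of outdoor temperature against the chilled-water setpoint minus the dry cooler’s approach temperature; the approach value and function names below are hypothetical.

```python
def cooling_mode(outdoor_temp_c, chw_setpoint_c, dry_cooler_approach_c=4.0):
    """Return which cooling mode a simple controller might select.

    A dry cooler can only chill water to roughly the outdoor
    temperature plus its approach, so full free cooling is possible
    when outdoor_temp + approach <= CHW setpoint. (Illustrative
    threshold logic; real plants add hysteresis and partial modes.)
    """
    if outdoor_temp_c + dry_cooler_approach_c <= chw_setpoint_c:
        return "economiser"   # chiller bypassed entirely
    return "mechanical"       # chiller required

# A higher CHW setpoint widens the outdoor-temperature band in
# which the chiller can be bypassed:
print(cooling_mode(5.0, chw_setpoint_c=7.0))    # mechanical (5 + 4 > 7)
print(cooling_mode(5.0, chw_setpoint_c=17.0))   # economiser (5 + 4 <= 17)
```

The same comparison, applied hour by hour against weather data, is what determines the free-cooling hours discussed below.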

One way to ensure efficiency is through system design that allows the temperature of chilled water (CHW) to be raised above the 7C traditionally used for air-conditioning systems. If higher CHW temperatures can be permitted without impairing the safe operation of the data centre’s core IT equipment, costs can be reduced: less effort is required by the chiller when it is operating, and the number of free cooling hours that can be utilised increases.

A systematic analysis of the overall chiller plant using higher CHW temperatures shows how operating costs can be reduced, thanks to more free cooling hours and a reduced load on the chillers. However, this must be offset against increased capital expenditure (CAPEX) elsewhere in the cycle. Nonetheless, there are several options available for operators looking to reduce energy costs, and the optimal solution will depend on the specific situation, with the location of the data centre and the prevailing climate being key factors.

Maximising efficiency in temperate climates

Consider the case of a data centre in a temperate climate, in this case Frankfurt, Germany, with a typical cooling system comprising an external chiller and dry cooler from which water is pumped to cooling coils inside the data centre. In economiser mode, when outside air conditions are within specified set points, the dry cooler is the only element used to cool the CHW, as the chiller itself is bypassed.

Let us assume that the air inlet temperature in the IT space is kept at a constant 23C, sufficient for effective operation of the IT equipment. If the CHW temperature is allowed to rise in a series of increments between 7C and 17C, the energy expended by the chiller decreases steadily because of improved efficiency and longer periods spent in economiser mode.
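The link between CHW setpoint and free-cooling hours can be illustrated by counting, over a year of hourly outdoor temperatures, how many hours fall at or below the setpoint minus a dry-cooler approach. The sinusoidal temperature profile and approach value below are purely illustrative stand-ins, not Frankfurt weather data.

```python
import math

def free_cooling_hours(hourly_temps_c, chw_setpoint_c, approach_c=4.0):
    """Count the hours in which outdoor air alone can produce water
    at the CHW setpoint (simplified illustrative model)."""
    return sum(1 for t in hourly_temps_c if t + approach_c <= chw_setpoint_c)

# Synthetic annual profile: a sinusoid around 10C with a +/-10C swing
# (hypothetical, standing in for real hourly weather data).
year = [10.0 + 10.0 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

for setpoint in (7, 12, 17):
    print(setpoint, free_cooling_hours(year, setpoint))
```

Even with this crude model, each increase in setpoint adds a substantial block of free-cooling hours, which is the mechanism behind the chiller energy savings described above.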

However, with CHW temperatures above 15C, additional computer room air handling (CRAH) units must be deployed inside the IT space to maintain the required 23C air inlet temperature. Naturally this requires an increase in capital expenditure, but reduces energy consumption. With more CRAH units installed, the speed of their fans can be reduced, which further cuts operating costs, albeit for an additional capital outlay.

A recent study carried out by Schneider Electric on such a facility in Frankfurt found that, starting from a baseline CHW temperature of 7C and calculating total energy requirements as CHW increased in steady increments, savings of 39% over baseline were achieved at a CHW temperature of 17C.

If CHW is to be increased to even higher temperatures, further modifications to the cooling system are necessary. Although extra CRAH units can compensate for the reduction in cooling effort resulting from higher CHW temperatures, once the CHW hits 20C it becomes necessary to redesign the cooling coils in the CRAH. This of course entails further capital expense, but in the case of the Frankfurt data centre, using modified cooling coils allowed total energy savings of 50% over baseline when CHW was allowed to rise to 21C.

The case for adiabatic coolers

Further improvements in energy efficiency can be achieved by deploying adiabatic cooling, improving device efficiency, refining the control methodology, and adopting a more efficient hydraulic architecture.

Adiabatic cooling is a natural physical process by which air is passed through water droplets. The heat transfer from air to water causes the water to evaporate and the air temperature to fall. This is known as a constant enthalpy process because the overall energy within the air does not change.

Water is misted into the air stream entering the condenser and evaporates, reducing the air temperature. Lowering the condenser inlet temperature in this way reduces chiller energy consumption and increases economiser hours. Adiabatic cooling is typically used in warm, dry climates, provided there is a plentiful source of water. For the Frankfurt data centre, adding adiabatic cooling improved energy savings by a further 6% over the baseline.
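The temperature drop achievable by adiabatic pre-cooling can be approximated with the standard evaporative-cooler effectiveness model, where outlet temperature approaches the wet-bulb temperature. The 0.8 effectiveness used here is a typical illustrative figure, not a measured value for any particular product.

```python
def adiabatic_outlet_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.8):
    """Approximate outlet air temperature of an adiabatic (evaporative)
    pre-cooler using the effectiveness model:
        T_out = T_db - eff * (T_db - T_wb)
    where eff = 1.0 would cool the air all the way to the wet-bulb
    temperature (illustrative value, not a product specification).
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A 30C day with an 18C wet-bulb temperature: air entering the
# dry cooler is pre-cooled well below ambient dry bulb.
print(adiabatic_outlet_temp(30.0, 18.0))  # -> 20.4
```

The drier the climate, the larger the gap between dry-bulb and wet-bulb temperatures, which is why the technique pays off best in warm, dry regions.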

Data centre infrastructure vendors continue to develop their power and cooling products to maximise efficiency. Examples of recent advancements in the cooling space include variable frequency drives for existing chillers and variable speed fans in CRAHs. By selecting and deploying such energy-conscious equipment, an operator can help to improve overall efficiency throughout a facility.

Cooling devices are often controlled in a standalone, decentralised manner by operators who manually adjust chilled water setpoints and turn chillers on and off depending on the situation. However, centralised systems management and Data Center Infrastructure Management (DCIM) software can greatly increase the accuracy and responsiveness of the cooling equipment to changing conditions, which in turn helps increase energy efficiency.

Conclusion: The findings of White Paper 227

To assess the overall cost savings possible using higher CHW temperatures, Schneider Electric’s Data Center Science Center studied the operations of two data centres, similar in terms of cooling architecture but located in two very different climatic regions. The first was the previously mentioned facility in Frankfurt, Germany; the second was in Miami, Florida, a city with what is typically described as a tropical monsoon climate.

By enabling higher CHW temperatures, the Frankfurt facility saw total energy savings of 61% compared with traditional cooling methods. When adiabatic cooling was added, savings improved further to 63% over baseline.

In Miami, with its hotter climate, warmer CHW temperatures provided energy savings of 34% over baseline, and adding adiabatic cooling raised the savings to 39% over baseline.

In each case, capital expenditure increased by 13% to install the new systems, including extra fans, needed to permit higher CHW temperatures. But the improvement in total cost of ownership (TCO) over three years, thanks to reduced energy consumption, was 16% for the Frankfurt facility and 12% for Miami.
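The shape of this trade-off can be reproduced with simple arithmetic. Only the 13% CAPEX increase and the 61% energy saving below are from the study; the absolute cost figures and the simplified TCO model (capital plus three years of energy, ignoring maintenance and discounting) are hypothetical.

```python
def tco_improvement(capex, annual_energy_cost, energy_saving_frac,
                    capex_increase_frac, years=3):
    """Relative TCO improvement from an efficiency upgrade, using a
    simplified model: TCO = CAPEX + years * annual energy cost.
    (Ignores maintenance, discounting, and other cost lines.)"""
    baseline = capex + years * annual_energy_cost
    upgraded = (capex * (1 + capex_increase_frac)
                + years * annual_energy_cost * (1 - energy_saving_frac))
    return 1 - upgraded / baseline

# Hypothetical cost split chosen for illustration only.
saving = tco_improvement(capex=10_000_000,
                         annual_energy_cost=2_000_000,
                         energy_saving_frac=0.61,
                         capex_increase_frac=0.13)
print(f"{saving:.1%}")
```

Under these assumed costs the three-year TCO improves by roughly 15%, in the same range as the study’s 16% and 12% results; the exact figure depends entirely on the CAPEX-to-energy cost ratio of the facility.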

Clearly, permitting higher CHW temperatures can yield long-term cost and efficiency benefits, provided that adequate investment is made in the cooling infrastructure, including the management and software systems necessary to accommodate such temperatures. The outcome will of course vary depending on the architecture of the cooling systems and the climatic region in which the data centre is located, but payback on investment is rapid and builds a compelling case for increasing energy efficiency.
