Optimising cooling within the data centre

06 May 2021

Alex Thompson, sales engineer, technology sector, Airedale International

The world has been catapulted into virtual working at a far faster pace than anyone envisaged. The move of our day-to-day lives online, in response to the pandemic, has required the data centre industry to respond fast to maintain service and match demand.

Since the age of computers began, effective cooling to keep IT operational has been a much-debated topic. At Airedale, we invest heavily in R&D to ensure we meet the objectives and solve the issues faced by data centre managers, at both strategic and operational levels. With a wealth of expertise in this field, we can share some of our most frequently covered topics of discussion and their resolutions.

Perhaps frustratingly, there isn’t a one-size-fits-all solution to data centre cooling. Every operation is different, and the power/cooling load, physical footprint and geographical location of the data centre will always influence the critical cooling solution employed. There are, however, some common themes that will assist with decision making, and here we look at three key points.

1. Aisle Containment: Historically, data centres were cooled using CRAHs (computer room air handlers) to condition the general space. However, poor air distribution meant this approach produced hotspots, leaving some crucial areas inadequately cooled. By employing a more strategic server layout, hot exhaust air can be segregated within specific aisles and cool air channelled towards the servers that need it. Ensuring the cool server intake air and the hot exhaust air do not mix increases the efficiency of the system: hot air is returned to the CRAHs (making them more efficient) and cool air is directed to the servers as required. This separation of conditioned air from returning exhaust air is a crucial first step in optimising efficiency.
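The effect of mixing can be illustrated with a simple model of the return temperature a CRAH sees when some cold supply air short-circuits past the servers. The supply temperature, server temperature rise and leakage fractions below are hypothetical illustrative values, not Airedale figures.

```python
# Illustrative sketch (hypothetical numbers): effect of hot/cold air mixing
# on the return temperature seen by a CRAH. With containment, less cold
# supply air bypasses the servers, so the CRAH sees a hotter, more useful
# return stream.

def crah_return_temp(supply_c: float, server_delta_t: float,
                     recirculation_fraction: float) -> float:
    """Mixed return temperature at the CRAH.

    supply_c: cold-aisle supply temperature (deg C)
    server_delta_t: temperature rise across the servers (deg C)
    recirculation_fraction: share of supply air that bypasses the servers
        and dilutes the hot return (0 = perfect containment).
    """
    exhaust_c = supply_c + server_delta_t
    f = recirculation_fraction
    return (1 - f) * exhaust_c + f * supply_c

# Uncontained room: 40% of cold air short-circuits back to the CRAH.
print(crah_return_temp(24.0, 12.0, 0.40))  # about 31.2 degC return
# Contained hot aisle: only 5% leakage.
print(crah_return_temp(24.0, 12.0, 0.05))  # about 35.4 degC return
```

With tight containment the CRAH sees a hotter return stream, which increases the useful temperature difference across its coil and hence its capacity and efficiency.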

2. Energy Efficiency: Cooling a data centre consumes a significant amount of energy, so efficiency is key to minimising waste, reducing carbon footprint and lowering lifecycle costs. Free cooling chiller technology, which uses the external ambient air to reject heat from a chiller rather than relying on the refrigeration process, is a highly effective method of reducing waste and operational costs. Within an optimised system, free cooling can provide significant energy savings, and it can take effect when the difference between the outside ambient temperature and the return temperature is as little as 1°C. This means that, in a 24/7 data centre with a typical server inlet temperature of 24°C, free cooling can be active for over 95% of the year. Other energy efficiency measures can be employed alongside free cooling, such as part-load operation when full capacity isn’t demanded by the servers, reducing energy consumption in line with IT demand. Airedale’s DeltaChill Azure, which uses the lower-GWP refrigerant R32, provides one such free cooling solution.
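As a rough illustration of how free-cooling availability might be estimated, the sketch below counts the hours in a year where a synthetic ambient temperature profile falls below the return water temperature minus a 1°C approach. The temperature profile, return water temperature and approach are assumptions for illustration only, not site or product data; a real study would use measured weather data and the chiller's actual approach temperatures.

```python
# Hedged illustration: estimating free-cooling availability from hourly
# ambient temperatures. All figures here are synthetic assumptions.
import math

def free_cooling_hours(ambient_temps_c, return_water_c, approach_c=1.0):
    """Count hours where ambient is low enough for (partial) free cooling."""
    threshold = return_water_c - approach_c
    return sum(1 for t in ambient_temps_c if t < threshold)

# Synthetic temperate-climate year: seasonal swing around a 10 degC mean.
hours = range(8760)
temps = [10 + 8 * math.sin(2 * math.pi * h / 8760) for h in hours]

available = free_cooling_hours(temps, return_water_c=20.0)
print(f"{available / 8760:.0%} of hours with free cooling active")
```

In a mild climate with warm return water, the threshold is rarely exceeded, which is why high free-cooling utilisation figures are achievable at elevated server inlet temperatures.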

3. Intelligent Controls: Investing in technology can only be effective if its operation is monitored on an ongoing basis. Intelligent controls offer 24/7 demand adjustment for precision control of temperature, airflow and air pressure differential, maximising uptime and optimising efficiency. Airedale’s Helix control can work in conjunction with the data centre infrastructure management (DCIM) system to ensure continuity across the whole site. Within the controls system, any leakages, failures and downtime can be quickly recognised, and in systems that employ an N+1 or N+N redundancy strategy, downtime is minimised.
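The redundancy point can be sketched in a few lines (this is not the Helix control logic; unit counts and capacities are hypothetical): in an N+1 set, when one unit drops out, the remaining units pick up its share of the duty, and the control system flags the case where remaining capacity is insufficient.

```python
# Illustrative sketch of N+1 load redistribution. Figures are hypothetical,
# loosely echoing the 750 kW example discussed later in this article.

def redistribute_load(duty_kw: float, units_online: int,
                      unit_capacity_kw: float) -> float:
    """Per-unit duty after failures; raises if remaining capacity is short."""
    if units_online * unit_capacity_kw < duty_kw:
        raise RuntimeError("insufficient capacity - alarm and load shed")
    return duty_kw / units_online

# 750 kW duty on a three-unit set (two duty + one standby) of 382 kW each.
print(redistribute_load(750, 3, 382))  # all healthy: 250.0 kW per unit
print(redistribute_load(750, 2, 382))  # one failed: 375.0 kW, still within 382 kW
```

The same principle applies on the airside, where pump or fan speeds can be ramped up on the surviving units to cover a failed CRAH.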

So whilst there is no standard fit, an example solution for a 750kW data centre might be to select three chillers, such as the DeltaChill Azure, with two chillers each providing 382kW of cooling and the third providing N+1 resilience. These would work alongside four downflow AHUs, such as Airedale SmartCool chilled water units, each providing 187.25kW, supplying air via a floor void, with a contained hot aisle and ceiling void return to the CRAHs. To mitigate the risk of failure, controls could be applied to the circulation pumps to ramp up pump speed in the event of a CRAH failure and overcome the increased coolant pressure drop. The system described offers an annualised efficiency of 11.0 (i.e. 11kW of cooling per 1kW of power) with a partial PUE (chillers + CRAHs) of 1.091*.
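The headline figures in this example can be sanity-checked with a few lines of arithmetic (a sketch only; it assumes 382kW per duty chiller and that cooling delivered matches the IT load):

```python
# Arithmetic check of the example selection, using figures from the text.
# Assumes 382 kW per duty chiller; "partial PUE" covers chillers + CRAHs only.

duty_kw = 750.0
chiller_kw = 382.0    # per chiller; two on duty plus one standby (N+1)
crah_kw = 187.25      # per CRAH; four units

assert 2 * chiller_kw >= duty_kw      # 764 kW from two duty chillers
print(4 * crah_kw)                    # 749.0 kW total CRAH capacity

efficiency = 11.0                     # kW of cooling per kW of power
partial_pue = 1 + 1 / efficiency      # cooling power as overhead on IT load
print(round(partial_pue, 3))          # 1.091
```

An annualised efficiency of 11.0 implies roughly 1kW of cooling-plant power for every 11kW of heat removed, which is where the 1.091 partial PUE comes from.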

There are many day-to-day aspects of data centre operation that need to be considered when developing a cooling strategy. It is important to consult with an experienced cooling team who can identify risks and advise on design to maximise efficiencies within the setting. At Airedale we have dedicated teams established to work with enterprise, edge and large data centre operators alike, to develop the most appropriate and advanced solutions for our clients.

*All partial PUE figures above are based on set parameters and do not take into account any other inefficiencies (other power losses).