04 November 2023
Gordon Johnson, senior CFD manager, Subzero Engineering
Data centres (DCs) are experiencing increasing power density per IT rack. In 2020, the Uptime Institute found that due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many DCs.
This has left DC stakeholders wondering if air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits.
Moving forward it’s expected that DCs will transition from 100% air cooling to a hybrid model encompassing air and liquid-cooled solutions. Those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed.
One might ask why the debate over air versus liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what’s driving the need for liquid cooling, what the other options are, and how we can evaluate those options while continuing to use air as the primary cooling mechanism.
Air and liquid cooling successfully coexisted until the industry shifted primarily to CMOS technology in the 1990s.
With air being the primary medium used to cool DCs, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, with the participation of ITE and cooling system manufacturers, it has published a common set of criteria for cooling IT servers: ‘TC9.9 Thermal Guidelines for Data Processing Environments.’
ASHRAE focused on the efficiency and reliability of cooling the ITE in the DC. Several revisions have been published, the latest being released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses on cooling high-density servers and racks, with a trade-off in energy efficiency because lower cooling supply air temperatures are recommended to cool the ITE.
As to the question of whether air and liquid cooling can coexist, it’s done so for decades already.
It’s easy to assume that, when it comes to cooling, one size will fit all in terms of power and cooling consumption, but it’s more important to focus on the actual workload of the DC we’re designing or operating.
A common assumption with air cooling was that once you went above 25kW per rack it was time to transition to liquid cooling. But the industry has made some changes, enabling DCs to cool up to and even exceed 35kW per rack with traditional air cooling.
Up to around 2010, businesses utilized single-core processors; once multi-core processors became available, they transitioned to those. However, power consumption remained relatively flat with these dual- and quad-core processors, which enabled server manufacturers to design for lower airflow rates when cooling ITE, resulting in better overall efficiency.
Around 2018, with processors continually shrinking, higher core counts became the norm. With these processors reaching their performance limits, the only way for compute-intensive applications to achieve new levels of performance is by increasing power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, some DCs were having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions, such as liquid.
Manufacturers have also been increasing the temperature delta across servers for several years now, which again has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. Here too, however, server manufacturers are reaching their limits.
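To make that airflow-versus-delta-T relationship concrete, here is a minimal sketch based on the standard sensible-heat equation for air (P = ρ · cp · V̇ · ΔT). The rack power and temperature deltas below are illustrative values, not figures from this article:

```python
# Sketch: cooling airflow required for an air-cooled rack.
# Assumes sea-level air: density ~1.2 kg/m^3, specific heat ~1005 J/(kg·K).
RHO = 1.2      # air density, kg/m^3
CP = 1005.0    # specific heat of air, J/(kg·K)

def airflow_m3_per_s(heat_load_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow needed to remove heat_load_kw at a given delta-T."""
    return heat_load_kw * 1000.0 / (RHO * CP * delta_t_c)

def to_cfm(m3_per_s: float) -> float:
    """Convert m^3/s to cubic feet per minute."""
    return m3_per_s * 2118.88

# Illustrative 35kW rack: doubling the delta-T halves the required airflow.
for dt in (10, 15, 20):
    flow = airflow_m3_per_s(35, dt)
    print(f"35 kW rack, dT={dt} C: {flow:.2f} m^3/s ({to_cfm(flow):,.0f} CFM)")
```

The inverse relationship is why wider server temperature deltas have been such an efficiency win: the same heat load can be carried away on roughly half the airflow when the delta-T doubles.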
There are several approaches the industry is embracing to successfully cool power densities up to and even greater than 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.
What about lowering temperatures? In 2021, ASHRAE released its fifth-generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that will need more restrictive supply temperatures than previous classes of servers.
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor in the next few years. But this transition is not automatic and isn’t going to be for everyone.
Liquid cooling is not the ideal solution for all future cooling requirements. The selection of liquid cooling over air cooling has to do with a variety of factors, including specific location, climate (temperature/humidity), power densities, workloads, efficiency, performance, heat reuse, and physical space available.
This highlights the need for DC stakeholders to take a holistic approach to cooling their critical systems. It will not, and should not, be a case of considering only air or only liquid cooling moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.