06 November 2024
Alan Beresford, Managing Director, EcoCooling Ltd
Choosing the right cooling solution for your data centre is essential to protect your own and your stakeholders’ assets and reputation. Overheating components can lead to costly system failures and interruptions to business, while well-cooled data centres are bastions of productivity and efficiency.
But the right solution today could be woefully inadequate for the demands of technology in five or 10 years. High Performance Computing (HPC) has increased in power exponentially, and its cooling demands have grown in step.
Future-proofing data centres with flexible designs is vital: they must accommodate rapid change and meet efficiency standards without compromising capital cost or speed of deployment.
Where will your business be?
Before selecting a cooling solution for your data centre, think seriously about what your business is doing now and what it might be doing in the future. State-of-the-art data centres built five to 10 years ago are now under-engineered for emerging technologies, with their high power densities and associated cooling demand. This is not a small evolution: there’s a risk you could get the decimal point in the wrong place.
Consider space and power
The latest AI equipment can draw 10 times the power per rack, and demand 10 times the cooling, of the HPC of yesteryear. That requires more infrastructure and space to accommodate it.
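To put that in concrete – and purely illustrative – numbers:

```python
# Purely illustrative rack-density figures, not vendor specifications.
legacy_rack_kw, ai_rack_kw = 10, 100  # assumed ~10x jump in power per rack
racks = 40                            # a modest data hall

print(f"Legacy hall: {racks * legacy_rack_kw:,} kW")  # 400 kW
print(f"AI hall: {racks * ai_rack_kw:,} kW")          # 4,000 kW, i.e. a 4 MW hall
```

The same floorplan suddenly needs an order of magnitude more power distribution and heat rejection.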
Ask yourself: what cooling redundancy am I going to have – N, N+1 or 2N? Some Nvidia equipment requires 3N power! With that in mind, it’s important to consider the size of cooling modules. A set of smaller modules can typically be adapted quickly and cost-effectively. Any liquid cooling solution needs to be very carefully designed to provide appropriate resilience. Don’t forget, there is a big difference between redundancy and concurrent maintainability of cooling systems.
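As a back-of-the-envelope sketch (the load and module ratings below are illustrative assumptions, not vendor specifications), simple arithmetic shows how module size changes the real cost of each redundancy scheme:

```python
import math

# Illustrative sketch: how cooling module size affects the cost of redundancy.
# The 1 MW load and module ratings are assumptions, not vendor specifications.

def modules_required(load_kw: float, module_kw: float, scheme: str) -> int:
    """Number of cooling modules needed under a given redundancy scheme."""
    n = math.ceil(load_kw / module_kw)  # modules needed just to meet the load (N)
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1   # one spare module
    if scheme == "2N":
        return 2 * n   # a fully duplicated system
    raise ValueError(f"Unknown redundancy scheme: {scheme}")

load_kw = 1000  # 1 MW of heat to reject
for module_kw in (500, 100):  # a few large modules vs. many small ones
    for scheme in ("N", "N+1", "2N"):
        count = modules_required(load_kw, module_kw, scheme)
        spare = count * module_kw / load_kw - 1
        print(f"{module_kw} kW modules, {scheme}: {count} modules "
              f"({spare:.0%} spare capacity)")
```

With 100 kW modules, N+1 costs only 10% in spare capacity; with 500 kW modules, the same scheme strands 50%. Smaller modules buy flexibility as well as economy – though, as noted, redundancy alone does not guarantee concurrent maintainability.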
Measurement and reporting
The race to net zero is going to affect how businesses operate. Data centres already have to meet reporting requirements to demonstrate they are efficient, particularly in their use of electricity and water. Making sure your designs comply with existing standards while anticipating future ones is essential.
Pay close attention to the measurement and reporting capabilities of new equipment. Not only do you have to be efficient – you need to prove efficiency. Also, consider whether your data centre will meet the standards when running at 10% or at 90% capacity. Your designs must accommodate the whole range of load demands.
Air cooling systems, if correctly controlled, operate at a far higher efficiency at low utilisation because the fans use less energy. It is quite common to see a cooling partial Power Usage Effectiveness (pPUE) of less than 1.02 for a pure ventilation system.
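For reference, the cooling pPUE is simply the ratio of total power (IT load plus cooling load) to IT load. A minimal sketch, with assumed figures, shows what a pPUE of 1.02 means in practice:

```python
# Illustrative pPUE calculation; the power figures are assumptions.

def cooling_ppue(it_kw: float, cooling_kw: float) -> float:
    """Partial PUE for the cooling subsystem: (IT + cooling) / IT."""
    return (it_kw + cooling_kw) / it_kw

# A pure ventilation system whose fans draw 2 kW per 100 kW of IT load:
print(f"{cooling_ppue(it_kw=100, cooling_kw=2):.3f}")    # 1.020

# Fan power falls steeply with airflow (roughly a cube law), so assume the
# fans draw only 0.05 kW when the hall is running at 10% of its IT capacity:
print(f"{cooling_ppue(it_kw=10, cooling_kw=0.05):.3f}")  # 1.005
```

This is why a well-controlled air system stays efficient across the 10% to 90% range mentioned above.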
A hybrid solution is often best
In the past, you could deploy one cooling system for an entire data centre. Today, there is a distinct possibility that you will have two different cooling requirements. HPC will probably need a form of liquid cooling but a considerable proportion of the load will still be everyday computing that only requires conventional cooling.
Ultimately, you end up with a hybrid solution – part liquid, part air – and then have to decide on the proportion between the two. Think carefully about how much HPC you might be doing in a few years.
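A simple capacity-split calculation can frame that decision. The facility size and workload mix below are placeholder assumptions, not recommendations:

```python
# Illustrative sizing of a hybrid (liquid + air) cooling plant.
# The 2 MW facility and HPC fractions are placeholder assumptions.

def hybrid_split(total_it_kw: float, hpc_fraction: float) -> tuple[float, float]:
    """Split the IT load into liquid-cooled (HPC) and air-cooled portions."""
    liquid_kw = total_it_kw * hpc_fraction
    return liquid_kw, total_it_kw - liquid_kw

total_it_kw = 2000
for hpc_fraction in (0.2, 0.5):  # today's mix vs. a plausible future mix
    liquid, air = hybrid_split(total_it_kw, hpc_fraction)
    print(f"HPC at {hpc_fraction:.0%}: {liquid:.0f} kW liquid, {air:.0f} kW air")
```

If HPC grows from 20% to 50% of that facility, the liquid-cooled plant has to grow from 400 kW to 1,000 kW. That is the headroom a flexible design must allow for.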
Designing a facility that can accommodate the rapid integration of new equipment is essential if you are to evolve as technology progresses. Speed is of the essence.
Take inspiration from cryptocurrency pioneers
Cryptocurrency mining equipment has increased in performance about a hundredfold over the last 10 years. The industry has moved into a zone where the only way to survive is by being huge. Hundreds of megawatts.
By building data centres in remote areas with naturally cold climates and cheap power, crypto pioneers are able to deploy giant data centres in a quarter of the time it takes to build a conventional data centre – and for less than 10% of the cost.
In these locations, cooling can be achieved with fresh air and no refrigeration, which challenges traditional thinking.
There is currently a trend for converting cryptocurrency facilities into high-performance data centres. Mining cryptocurrency is not easy – it requires enormous computational power, energy consumption and specialised hardware – and the infrastructure built to support it now offers a viable route to the rapid deployment of HPC.
Plus, a lot of AI is not latency-sensitive; requests do not have to come back in milliseconds. While it requires a lot of computational power, the actual traffic demands are, in some cases, much lower. Your choice of location will depend on the particular latency demands of your business.