Cooling and power tips – the basics

11 June 2020


Robert Staines, data centre management specialist

Over the past two decades I have been involved in the design, fit-out and ongoing maintenance of data centres and special equipment rooms all over the UK. Some of these spaces have been small and, of course, some extremely large.

The main concern for pretty much all of these rooms was how they were going to be cooled and how much power could be used. Some rooms needed nothing more than a small air conditioning unit with no redundancy, while others required full redundancy should any of the installed units fail, as well as full-sized power distribution units (PDUs).

When you hear the term “free air cooling” you might think this option would cost you the least. That may be the case in the longer term, but only if the room is set up correctly and the standards for loading the room with IT kit are adhered to. When you hear the term 16/32 amp, do not think this is the power you can use per outlet; it is normally the maximum that can be drawn before that circuit breaker trips.
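
To make that concrete, here is a minimal sketch (not from the article) of turning a breaker rating into a planning figure. The 230V supply voltage and the 80% continuous-load headroom factor are illustrative assumptions only; use the values that actually apply to your installation.

# Rough conversion from a breaker rating to a safe continuous planning load.
# The voltage and headroom figures are assumptions for illustration, not fixed rules.
def usable_kw(breaker_amps: float, voltage: float = 230.0, headroom: float = 0.8) -> float:
    """Approximate continuous load (kW) to plan for on one circuit."""
    return breaker_amps * voltage * headroom / 1000.0

if __name__ == "__main__":
    for amps in (16, 32):
        print(f"{amps} A breaker: plan for roughly {usable_kw(amps):.1f} kW continuous")

On those assumptions, a 16 amp circuit works out at roughly 2.9kW of continuous load and a 32 amp circuit at roughly 5.9kW, well below the raw breaker rating.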

Now, a lot of companies do not have a specialist team who solely look after managing these rooms. Hopefully, someone will still take care of placing kit in the right, or should I say most efficient, position within the racks. In simple terms, this can be as straightforward as placing the kit that consumes the most power and gives off the most heat closest to the air cooling units.

There are plenty of cooling units on the market, and you would be unwise to install a system without understanding how it works, how it is maintained and the approximate lifespan of the chosen installation. Everything has a service life: how long will parts be available, how easy is it to carry out maintenance, and can repairs be done without impacting the kit in the room?

As the current trend is leaning more towards managed data centres and cloud infrastructures, it is important to fully understand the environment that you may be acquiring. If you have critical kit that needs cooling, it is crucial to have some sort of monitoring tool that notifies you once the temperature starts to move outside the safe operating range of the least tolerant kit in the room. In general, servers can run a lot hotter than network kit (unless it is the main backbone switch).
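
As a simple illustration of that point, the sketch below checks each reading against that kit's own safe limit, so the least tolerant kit in the room is what triggers the earliest alert. The read_rack_temperatures() function, the kit names and the limit values are all placeholders for whatever your own sensors or DCIM tool provide.

# Minimal monitoring sketch: alert when any kit exceeds its own safe limit.
# The limits and readings below are illustrative placeholders only.
SAFE_LIMITS_C = {
    "backbone_switch": 27.0,  # assumed limit for the least tolerant kit
    "server_rack_a": 32.0,    # assumed limit for more tolerant server kit
}

def read_rack_temperatures() -> dict[str, float]:
    # Placeholder: replace with readings from your own sensors or monitoring tool.
    return {"backbone_switch": 28.1, "server_rack_a": 29.0}

def check_temperatures() -> list[str]:
    """Return an alert message for each piece of kit above its safe limit."""
    alerts = []
    for name, temp_c in read_rack_temperatures().items():
        limit = SAFE_LIMITS_C.get(name)
        if limit is not None and temp_c > limit:
            alerts.append(f"{name}: {temp_c:.1f} C exceeds the {limit:.1f} C limit")
    return alerts

if __name__ == "__main__":
    for alert in check_temperatures():
        print("ALERT:", alert)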

Things I would recommend you look out for when it comes to cooling are as follows:

• If a water-cooled system is in use, check that there is a leak detection system in place and that the pipes are monitored by area, so any potential leaks can be identified quickly and resolved.

• You will need at least three units of the same rating to have N+1 redundancy (for example, should any one of the three units fail, the remaining two can still cool the space); a simple version of this check is sketched after this list.
• Maintenance can be done without causing major disruption to the room (some data centres keep the cooling units in a separate room to minimise this).
• Do not exceed the recommended cooling rating for the space you are using. Should things go wrong, it only takes a few minutes before the temperature starts rising at a rapid rate.
• Remove any kit that is not required as it is a waste of energy having kit powered on that is no longer in use.
• How much power are you allowed to install in your allocated rack space? Up to 7kW can be installed in a 47U x 700mm rack space using contained cooling systems, while the figure is approximately 4kW for a standard hot/cold aisle system where the air is either pushed through a raised floor or supplied by wall-mounted units (wall-mounted units are normally used for smaller rooms only). A simple check against these figures is included in the sketch after this list.
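
Two of the points above, N+1 redundancy and the per-rack power budget, lend themselves to a quick sanity check. The sketch below is only illustrative: the 7kW and 4kW figures come from the bullet above, while the unit capacities, unit count and room load in the example are assumptions.

# Quick sanity checks for N+1 cooling redundancy and per-rack power budgets.
def has_n_plus_1(unit_capacity_kw: float, unit_count: int, room_load_kw: float) -> bool:
    """True if the room can still be cooled with any one unit failed."""
    return (unit_count - 1) * unit_capacity_kw >= room_load_kw

def within_rack_budget(rack_load_kw: float, contained_cooling: bool) -> bool:
    """True if the rack load fits the 7kW (contained) or 4kW (hot/cold aisle) budget."""
    budget_kw = 7.0 if contained_cooling else 4.0
    return rack_load_kw <= budget_kw

if __name__ == "__main__":
    # Three assumed 20kW units cooling an assumed 35kW room load: any one unit can fail.
    print(has_n_plus_1(unit_capacity_kw=20, unit_count=3, room_load_kw=35))  # True
    # An assumed 5kW rack fits a contained cooling budget but not a hot/cold aisle one.
    print(within_rack_budget(5.0, contained_cooling=True))   # True
    print(within_rack_budget(5.0, contained_cooling=False))  # False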

I’ve deliberately NOT used too much jargon, as these are just my high-level thoughts. There is a lot more to cooling and power, and the two are closely interlinked.