Cooler Data Centres For A Warming World


While heatwaves have made headlines across parts of Europe and the United States, soaring summer temperatures rarely attract the same attention in the Middle East. But with Kuwait recording a temperature of 53.2 degrees Celsius in June this year, and the UAE, Saudi Arabia and Oman all experiencing temperatures above 50 degrees, it's clear that climate change is pushing the mercury higher in the region too.

As it does, the challenge of keeping data centres cool becomes more complex, expensive, and power intensive. This threatens to hinder the impressive pace of digitalisation the region has witnessed in recent years. According to a recent market report, the Middle East data centre market is projected to grow at a CAGR of 12.4 per cent between 2021 and 2027. So whether the goal is to reduce the environmental impact or the cost of operating data centres, finding effective ways to keep them cool in the face of rising temperatures is the need of the hour. Thankfully, there are some pragmatic, sustainable strategies to explore as part of a holistic solution.

Keeping cooler air circulating

It should go without saying, but good air conditioning should be a mainstay of every data centre. While operators in other geographies have the luxury of reducing the cooling burden by building data centres in regions with colder climates, this isn’t a practical option for Middle Eastern nations.

Ensuring that HVAC systems have a stable power supply is a primary requirement. For business continuity and contingency planning, backup generators are necessary for cooling technologies as well as computing and storage resources. Business continuity and disaster recovery plans should also include provisions for what to do if both mains and backup power fail.

If temperatures do spike, then it pays to be running hardware that’s more durable and reliable. Flash storage, for instance, is typically far better able to handle temperature increases than mechanical disk solutions. That means data stays secure and performance remains consistent, even at high temperatures.

Power reduction suggestions

Here are three strategies that IT organisations should consider. When combined, they can help to reduce the power and cooling requirements for data centres:

  • More efficient solutions – Every piece of hardware uses energy and generates heat. Organisations should look for hardware that can do more in a smaller data centre footprint, which immediately helps to keep temperatures—and, as a result, cooling costs—down. Increasingly, IT organisations are considering power efficiency when selecting what goes in their data centre. In data storage and processing, for example, key metrics now being evaluated include capacity per watt and performance per watt. With data storage representing a significant portion of the hardware in data centres, upgrading to more efficient systems can significantly reduce the overall power and cooling footprint of the whole facility.
  • Disaggregated architectures – Many vendors talk about the efficiencies of combining compute and storage in hyper-converged infrastructure (HCI). That’s fair, but the efficiency mainly concerns fast deployments and reducing the number of teams involved in rolling out these solutions; it doesn’t necessarily mean energy efficiency. In practice, a lot of power is wasted by direct attached storage and hyper-converged systems.

For one thing, compute and storage needs rarely grow at the same rate. Some organisations end up over-provisioning the compute side of the equation to cater to their growing storage requirements. Occasionally the same thing happens from a storage point of view, and in either scenario, a lot of power is being wasted. If compute and storage are separated, it’s easier to reduce the total number of infrastructure components needed—and therefore cut the power and cooling requirements too. Additionally, direct attached storage and hyper-converged solutions tend to create silos of infrastructure. Unused capacity in a cluster is very difficult to make available to other clusters, leading to even more over-provisioning and waste of resources.

  • Just-in-time provisioning – The legacy approach of provisioning infrastructure for the next three to five years of anticipated requirements is no longer fit for purpose. It means organisations end up running far more infrastructure than they immediately need. Instead, modern on-demand consumption models and automated deployment tools let companies easily scale the infrastructure in their data centres over time. Infrastructure is provisioned just-in-time instead of just-in-case, avoiding the need to power and cool components that won’t be needed for months or even years.
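To make the efficiency metrics above concrete, here is a minimal sketch of how capacity per watt can be compared across storage options, and how the power gap compounds over a year of continuous operation. All figures are hypothetical placeholders for illustration, not vendor data.

```python
# Illustrative sketch (hypothetical figures): comparing storage options by
# capacity per watt, one of the efficiency metrics mentioned above.

def capacity_per_watt(capacity_tb: float, power_w: float) -> float:
    """Usable capacity delivered per watt of power draw (TB/W)."""
    return capacity_tb / power_w

# Hypothetical systems with the same usable capacity -- not real products.
systems = {
    "mechanical-disk array": {"capacity_tb": 500, "power_w": 4000},
    "all-flash array": {"capacity_tb": 500, "power_w": 1200},
}

for name, spec in systems.items():
    eff = capacity_per_watt(spec["capacity_tb"], spec["power_w"])
    print(f"{name}: {eff:.3f} TB/W")

# Rough annual energy difference for the same usable capacity,
# before even counting the extra cooling load the hotter system adds.
HOURS_PER_YEAR = 24 * 365
delta_kwh = (4000 - 1200) * HOURS_PER_YEAR / 1000
print(f"Difference: {delta_kwh:,.0f} kWh/year")
```

Even with these made-up numbers, the point stands: a system that delivers the same capacity at a lower wattage saves twice, once on the power it draws and again on the cooling needed to remove the heat it no longer generates.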

Most of the time, keeping data centres cool depends on reliable air conditioning and solid contingency planning. But in every facility, each fraction of a degree that the temperature rises is also a fractional increase in the stress on equipment. Cooling systems alleviate that stress from racks and stacks, but no DC manager wants to put those systems under additional stress—which is what the recent heat waves have been doing.

So why wouldn’t we take steps to reduce equipment volumes and heat generation in the first place? If we can cut running costs, simplify and cool our data centres, and reduce our energy consumption—all at the same time—then I’m not sure that’s even a question to ask.
