
Safeguarding the future data centre: Keeping pace with technology

Wed 3 Jul 2024

In this opinion piece, Phil Beale, Board Director at RED Engineering, a DCA Platinum Partner, explores the urgent need for forward-thinking investment decisions in the race to expand data centre capacity.

As rapid technological advancements outpace historical best practices, many current models risk obsolescence. Beale addresses the complexities introduced by artificial intelligence (AI) compute rates, the challenge of maintaining service continuity without frequent upgrades, and the necessity for adaptable design to future-proof data centres.

The Current State

The race to bring new data centre capacity into production is accelerating. However, today’s investment decisions often lean more on past practices than on future foresight.

While most new building financial models and design frameworks draw from historical best practices, accredited research warns that data centres designed and constructed in the last five to seven years are at risk of obsolescence.

With increasing levels of data rate performance, existing technical standards, engineering specifications, and drawing templates are fast becoming outdated.

Rate of Change

The emergence of AI compute rates is introducing new layers of complexity to the engineering landscape. Rapid technological evolution makes it increasingly challenging to predict the shape, size, and operating conditions required for the next generation of graphics chips.

AI application platforms demand heightened server density, expanded space, enhanced containment, power, and chip-aligned cooling systems.

Where service level continuity cannot be compromised, minimising the frequency of power-down refurbishments and ‘forklift’ upgrades is imperative.


It is relatively straightforward to design based upon conventional assumptions. For the past 20 years, the convention of 19in cabinets with front-to-back airflow and hot-aisle containment has provided a globally accepted, ultra-flexible approach that has accommodated every make of file server, storage device, and networking component. It has enabled Dell, HP, Cisco, and Juniper to deliver their air-cooled products into a known environment without any need for adaptation or specialisation.

This conventional practice has been adopted across the industry, from the humblest low-density co-location site to the hyperscale operators with their vast server farms. Initially there was a natural limit set by 32A single-phase PDUs capable of delivering around 7kW per cabinet; more recently, three-phase 32A PDUs have taken this up to around 22kW per cabinet.
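The arithmetic behind those per-cabinet figures can be sketched directly. A minimal illustration, assuming the standard European 230V line-to-neutral / 400V line-to-line supply and unity power factor:

```python
import math

def single_phase_kw(voltage_v: float, current_a: float) -> float:
    """Power available from a single-phase PDU, in kW."""
    return voltage_v * current_a / 1000.0

def three_phase_kw(line_voltage_v: float, current_a: float) -> float:
    """Power available from a three-phase PDU (line-to-line voltage), in kW."""
    return math.sqrt(3) * line_voltage_v * current_a / 1000.0

single = single_phase_kw(230, 32)  # roughly the ~7 kW per-cabinet figure
three = three_phase_kw(400, 32)    # roughly the ~22 kW per-cabinet figure
print(f"single-phase 32A: {single:.2f} kW, three-phase 32A: {three:.2f} kW")
```

In practice PDUs are often derated (e.g. to 80% continuous load), so the usable figures sit a little below these theoretical maxima.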

In practice, high-density loads have not usually exceeded 15kW per cabinet, because this loading approaches the practical limit of conventional air cooling. It becomes difficult to move the necessary volume of air through the cabinet, and the cold aisle may need to be widened, which of course reduces the number of rows of server cabinets in the data hall. Safeguarding for tomorrow, however, is not so easy. How do we resolve this conflict?
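Why the air-cooling limit bites can be seen from the heat-removal equation Q = P / (ρ·cp·ΔT). A minimal sketch, assuming typical values for air density, specific heat, and a 12K temperature rise across the cabinet (all illustrative assumptions, not design figures):

```python
def airflow_m3s(load_kw: float, delta_t_k: float = 12.0,
                rho: float = 1.2, cp: float = 1005.0) -> float:
    """Volumetric airflow (m^3/s) needed to remove load_kw of heat
    at a given air temperature rise, density rho (kg/m^3) and
    specific heat cp (J/kg.K)."""
    return load_kw * 1000.0 / (rho * cp * delta_t_k)

for kw in (7, 15, 22):
    q = airflow_m3s(kw)
    print(f"{kw} kW cabinet: {q:.2f} m^3/s ({q * 2118.88:.0f} CFM)")
```

At 15kW a single cabinet already needs over a cubic metre of air per second; pushing towards 22kW demands roughly half as much again, which is where fan noise, pressure drop, and aisle width become limiting.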

The Future State

Ideally, the data centre facilities we introduce to the market today should offer the agility to accommodate a diverse range of compute, storage, and networking systems well into the future.

Data centres have always had to respond to day/night and summer/winter external conditions, but the IT load has remained remarkably constant. This is in part due to poor server/chip design, which has a very high idle load: the server/chip consumes a lot of power and puts out a lot of heat even when not doing any useful computing. There is a small increase in power load and heat output when processing, but even fully virtualised servers rarely exceed 30% processing load, and then only in bursts.

In contrast, AI computing loads are much more dynamic: there are significant differences between generative AI training mode and the ongoing use of a trained system. The hardware is also so expensive that providers like Microsoft have invested in the technology to dynamically move AI workloads around the globe at a moment's notice to make the best economic use of their systems.

As a result, future data centres can expect much more short-term variation in IT power and cooling loads, which will require control and monitoring systems to be re-engineered. We need to deliver solutions now that offer upscaling of containment, power provision, and heat rejection without the need to frequently re-engineer the underlying technical architecture.

Impediments to Change

While today’s delivery pressures often foster a risk-averse culture, the greater risk lies in introducing facilities to the market that lack easy saleability.

There are multiple solutions for providing power and cooling up to 100kW per cabinet, most of them based on some form of direct-to-chip liquid cooling. This is a total revolution for data centre design: with conventional air cooling it has been possible to keep all liquids out of the data hall, even to the extent of placing chilled water pipes and the associated mechanical plant in separate service corridors and fire compartments. This has enabled facilities maintenance to work outside the data halls, which have much more stringent security and work-permit regulatory requirements.

There is at present no accepted way of delivering direct-to-chip liquid cooling in a data hall, and the major server manufacturers like Dell and HP are waiting to see the way forward. NVIDIA is introducing a liquid-cooled server, the GB200 NVL72, but is offering it as a complete liquid-cooled system.

The GB200 NVL72 will be available on Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud but cannot yet be considered as a universal, flexible solution comparable to the air-cooled technology adopted by data centres for the past 20 years.

Our decision-support processes are grounded in existing knowledge or reasonable predictions. How do we economically design and build products that withstand the test of time?

Our Response

Drawing from lessons learned across various industries, we understand the necessity of comprehensive modelling before implementation. Just as an energy company would not construct a new oil rig without a detailed model, we recognise the importance of modelling the data centre engineering environment.

Leveraging industrial-scale modelling, RED ICT, in collaboration with its central engineering team, constructs models representing different operational assumptions of data centre computer architecture.

Scenario Modelling

Before a design brief is frozen, we consider the engineering implications of different computational stacks. We introduce example high density processors, active networking equipment, and mass data storage products. We use these examples to run calculations based upon various data hall floor plate configurations.

By exploring ‘what if’ variations for different scenarios, we can assess the implications of compute density upon cabinet and aisle assembly, power, and cooling. By tweaking the model, we can better understand the implications of AI processors.
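The shape of such a ‘what if’ sweep can be sketched in a few lines. The hall power budget, number of cabinet positions, and the 15kW air-cooling threshold below are all hypothetical figures for illustration, not RED's actual model parameters:

```python
# Hypothetical scenario sweep: for a fixed hall power budget and floor plate,
# how many cabinets can be powered at each per-cabinet density, and which
# cooling approach does that density imply? All figures are illustrative.

HALL_POWER_KW = 2000       # assumed IT power budget for one data hall
HALL_POSITIONS = 200       # assumed cabinet positions on the floor plate
AIR_COOLING_LIMIT_KW = 15  # practical ceiling for conventional air cooling

results = {}
for density_kw in (7, 15, 22, 50, 100):
    cabinets = min(HALL_POSITIONS, HALL_POWER_KW // density_kw)
    cooling = "air" if density_kw <= AIR_COOLING_LIMIT_KW else "direct-to-chip liquid"
    results[density_kw] = (cabinets, cooling)
    print(f"{density_kw:>3} kW/cabinet: {cabinets:>3} cabinets, {cooling} cooling")
```

Even this toy version shows the trade-off: at low densities the floor plate is the constraint, while at AI densities the power budget dominates and the cooling technology changes entirely.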

RED is actively developing the first direct-to-chip water-cooled data hall designs in anticipation of growing demand. This work will help to establish practical solutions for data centre operators who need to offer their users flexible, practical, maintainable, and cost-effective direct-to-chip water cooling in their data halls.

Our embodied carbon calculator reports provide a corresponding whole-life CO2 figure for a given configuration. When applied to an existing facility, this approach can inform investment priorities for legacy data centres.
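The basic structure of a whole-life carbon figure is embodied carbon plus operational carbon accumulated over the facility's life. A minimal sketch of that combination, with every input (PUE, lifetime, grid intensity) a placeholder assumption rather than a value from RED's calculator:

```python
def whole_life_co2_tonnes(embodied_t: float, it_load_kw: float, pue: float,
                          years: float, grid_kgco2_per_kwh: float) -> float:
    """Embodied carbon (tonnes) plus operational carbon over the
    facility's life, assuming a constant IT load and grid intensity."""
    annual_kwh = it_load_kw * pue * 8760  # hours per year
    operational_t = annual_kwh * years * grid_kgco2_per_kwh / 1000.0
    return embodied_t + operational_t

# Example with illustrative inputs: 5,000 t embodied, 2 MW IT load,
# PUE 1.3, 15-year life, 0.2 kgCO2/kWh grid intensity.
total = whole_life_co2_tonnes(5000, 2000, 1.3, 15, 0.2)
print(f"whole-life CO2: {total:,.0f} t")
```

Even in this toy form, the operational term dwarfs the embodied term over a multi-year life, which is why configuration choices that affect PUE carry so much weight in investment comparisons.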

Safe Place

By combining extensive data centre engineering experience with advanced software tools, we model the future state data centre with precision. The ability to explore diverse options within a virtual workshop ensures that long-term data centre viability remains uncompromised by the urgency to bring new capacity to the market.
