The Stack Archive Expert View

Building an efficient data centre

Mon 9 Nov 2015 | Matthew Baynes

Matthew Baynes, Enterprise Sales Director, Schneider Electric

Energy efficiency has become a key consideration when designing and fitting out a data centre. Environmental concerns and corporate social responsibility may be two of the reasons for this, but the overriding driver is cost. Energy costs are cyclical and no matter how cheap they may appear to be at any one time, the only guarantee is that they will rise again soon. They also represent the main operational cost for most data centres, so keeping them under control is paramount.

An important metric for assessing the power efficiency of a data centre is Power Usage Effectiveness (PUE), defined as the ratio of the total energy consumed by a facility to the energy consumed by its IT equipment. It should be noted that PUE is calculated independently of how the electrical power is produced in the first place. If you are fortunate enough to be able to derive the entire power requirement of a data centre from a local hydro-electric or solar source, then so much the better, but PUE is concerned with how a facility uses power, not how it is produced.
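As a minimal sketch (the metered figures below are hypothetical), PUE can be computed directly from two measurements:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    A perfectly efficient facility would score 1.0; every kilowatt-hour
    spent on cooling, lighting or power conversion pushes the ratio higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,500 MWh drawn by the whole facility,
# of which 1,000 MWh was consumed by the IT equipment.
print(pue(1_500_000, 1_000_000))  # 1.5
```

The same ratio applies whether the inputs are instantaneous power readings or energy totals over a billing period, provided both figures cover the same interval.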

Although a data centre design can seek to reduce the energy expended by the IT equipment at its core (through higher utilisation, virtualisation and switching off ghost machines), the real opportunity for cutting cost lies in efficient deployment of the infrastructure technology that supports a facility's key function. With larger IT installations in particular, this means rightsizing the power and cooling equipment required to keep the servers and disk arrays operating smoothly.

At the most basic level, free cooling can be provided to a data centre by a cold local climate. A data centre on a remote site with no adjoining buildings provides an excellent opportunity to cool the entire building. In Scandinavia, some companies have taken this to the extreme of placing large data centres near bodies of water such as fjords, which act as a giant natural heat sink. Such extreme measures are of limited use in this country; however, a data centre located in a large detached building away from a built-up area is a good start for reducing cooling costs.

Regardless of how much ambient cooling one can achieve, there is nonetheless a requirement to deliver a targeted cooling effort where it is needed most: in the racks holding the IT equipment. In particular the major effort must be directed at those elements that generate the most heat, such as densely packed server racks and large storage arrays.

For these sorts of applications, containment together with in-row cooling equipment provides targeted cooling where it is needed. In addition, a carefully considered layout of the IT racks and cooling equipment allows hot and cold aisles to establish an efficient flow of air from hot to cold regions, so that energy expended on cooling one piece of equipment is not wasted by an unwisely positioned heat source nearby.

Containment solutions such as EcoAisle from Schneider Electric help to establish thermally efficient rack layouts so that the cooling effort is expended in the most cost-effective manner.

As well as the cooling equipment itself, careful selection of energy-efficient power equipment, and in particular the backup power or UPS (uninterruptible power supply) infrastructure in a data centre, can minimise the overall power requirement.

Such backup systems are typical in data centres requiring a power output of 10 kVA to 10 MW and beyond.

They provide guaranteed instantaneous backup power in the event of a power outage, but they are also inherently power hungry. Because of this, many now offer an Eco-mode option, which sacrifices a minuscule amount of backup response time to achieve a 2 to 3% improvement in power efficiency. For all but the most mission-critical applications requiring zero downtime, deploying Eco-mode on a UPS could be considered to maximise power efficiency.
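To put that 2 to 3% figure in context, here is a rough sketch. All the numbers are assumptions for illustration: real UPS efficiency curves vary with load and by model.

```python
HOURS_PER_YEAR = 8760

def annual_ups_saving_kwh(it_load_kw: float,
                          normal_eff: float = 0.94,
                          eco_eff: float = 0.97) -> float:
    """Energy saved per year by switching a UPS to Eco-mode.

    Grid input power is load / efficiency, so the saving is the
    difference in input energy at the two assumed efficiencies.
    """
    return it_load_kw * HOURS_PER_YEAR * (1 / normal_eff - 1 / eco_eff)

# A hypothetical 500 kW IT load running around the clock:
# roughly 144 MWh of input energy saved per year.
print(round(annual_ups_saving_kwh(500)))
```

Even a few percentage points of efficiency compound into a substantial figure when applied to a load that runs 8,760 hours a year.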

Although designing an efficient data centre is a complex task which will always require an element of customisation specific to the location and layout of the building and the exact functional requirements of the data centre itself, there is enough commonality between most data centres to allow a degree of advance design and prefabrication to take place.

Schneider Electric, for example, has put together a library of more than 100 reference designs, essentially blueprints for a data centre operator to follow to set up a tried and tested, validated and documented data centre in a fraction of the time it would take to design such a facility from scratch.

Reference designs greatly reduce the time and expense needed to build and fit out a data centre, and also provide the comfort of benefiting from the expertise and experience of others. Reference designs are conceptual plans for how the physical infrastructure of a data centre should be laid out. They cover power, cooling and the IT space itself, and include standardised descriptions of all the components and systems needed.

They also describe typical options for electrical power lines, piping plans for cooling equipment and optimal floor layouts. Data centre operators can mix and match between the various reference designs and perform whatever inevitable customisation is required for their particular requirements, but the existence of tried and trusted, validated and documented designs results in significantly shortened lead times.

As well as the reference designs themselves, there is the option of deploying prefabricated data centre building blocks. These can take the form of an entire data centre building, sited on available ground and containing all the infrastructure one needs. The advantages of a prefabricated data centre are primarily that one can deploy the infrastructure one needs quickly and in a modular fashion.

Building on the library of reference designs that Schneider Electric has already published, prefabricated modular data centre designs can be put together using a wide range of IT, power, cooling and hydronics modules, available in ISO and non-ISO containers as well as skid-mounted form factors and modular room construction.

A fully custom approach to data centre design is inherently complex and time consuming; it can also lead to oversized data centres, which are expensive in terms of capital expenditure and inefficient in terms of operating costs. Typically one has to design for expansion, so the initial data centre will generally be oversized and over-specified for the early period of its operation.

Using prefabricated modules, a data centre can not only be up and running quickly and reliably, but can also be built up in stages so that it uses only the equipment it needs, both in terms of IT and the supporting infrastructure. By right-sizing infrastructure to the IT requirement, the data centre operator benefits from a lower cost of operations, from energy to upkeep.
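The right-sizing argument can be sketched numerically. Assuming, purely for illustration, that fixed infrastructure overhead (fans, conversion losses, standby plant) scales with installed capacity rather than live load, adding 250 kW modules as demand grows wastes far less energy than provisioning 1 MW on day one:

```python
import math

def installed_capacity(demand_kw: float, module_kw: float = 250) -> float:
    """Smallest whole number of modules that covers current demand."""
    return math.ceil(demand_kw / module_kw) * module_kw

def fixed_overhead_kwh(capacity_kw: float, hours: float = 8760,
                       overhead_fraction: float = 0.05) -> float:
    """Overhead assumed to track installed capacity, not the live IT load."""
    return capacity_kw * overhead_fraction * hours

# Demand grows from 200 kW to 800 kW over four years.
yearly_demand = [200, 400, 600, 800]
modular = sum(fixed_overhead_kwh(installed_capacity(d)) for d in yearly_demand)
oversized = sum(fixed_overhead_kwh(1000) for _ in yearly_demand)
print(f"modular overhead:   {modular:,.0f} kWh")   # 1,095,000 kWh
print(f"oversized overhead: {oversized:,.0f} kWh") # 1,752,000 kWh
```

In this sketch the modular build incurs a little over 60% of the oversized facility's fixed overhead across the four years, because installed capacity tracks demand instead of sitting idle.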

Finally, regardless of how big or how prefabricated a data centre may be, there is the inevitable task of monitoring and controlling the operation of its equipment. This requires the deployment of a reliable Data Centre Infrastructure Management (DCIM) software product such as StruxureWare for Data Centers from Schneider Electric. These management tools allow constant monitoring and adjustment of the cooling and power infrastructure so that efficient operation can be maintained at all times.

StruxureWare for Data Centers also integrates well with the Building Management Systems typically used by facilities managers to monitor other infrastructure equipment not specific to the operation of a data centre. In this way it allows data centre operators to fine-tune the operation of their infrastructure and ensure that everything is operating at maximum efficiency.



Experts featured:

Matthew Baynes

Datacentre Strategy & Business Development Director
Schneider Electric
