Schneider Electric publishes new white paper specifying data centre pod architectures
Mon 26 Jun 2017
- Standardised building blocks of IT equipment and associated power and cooling infrastructure help data centres to scale up rapidly in response to changing load demands.
- Although standardisation exists at individual rack level, there are no comparable standards for the building blocks required by larger data centres.
- A new White Paper from Schneider Electric describes the factors that need to be considered when specifying IT Pods.
London, United Kingdom – June 22nd, 2017 – Standardisation of IT equipment and support infrastructure at rack level helps to simplify the process of scaling up data centres to meet increasing load demands. Fully integrated racks loaded with IT that can be rolled into place provide a familiar and reliable way of upgrading data centre resources quickly.
For centralised data centres, including the latest hyperscale facilities, there is a need to develop larger increments of IT resource, known as Pods, comprising a group of racks in one or two rows to facilitate rapid upscaling. With no equivalent industry standards for such deployments as yet, operators have to design and specify their own Pod architectures.
A new white paper from Schneider Electric, the global specialist in energy management and automation, explains how to specify the physical infrastructure for an IT Pod. The paper, White Paper No 260, entitled “Specifying Data Center IT Pod Architectures”, describes the optimum configurations, based on available power feeds, physical space and average rack power densities, that should be considered when designing an IT Pod.
IT Pods present a number of advantages. Apart from the obvious convenience for hyperscale data centres of being able to expand in larger increments, a Pod can be used as a logical grouping of business applications and can be assigned in its entirety to a single significant client or line of business. Pods can also be used to vary the technologies available in a large data centre, for example housing Open Compute Project (OCP) racks in one Pod and traditional server racks in another.
Electrical redundancy can be varied Pod by Pod so that critical high-availability applications requiring dual power feeds can be kept separate from less critical ones, thereby maximising investment where it is needed and reducing cost where it is not.
The three main drivers determining Pod architecture are the choice of electrical feed, the physical space available for a Pod (i.e. the number of racks) and the average rack density required. The White Paper suggests guidelines for the power requirements of a Pod, recommending that each should be treated as either a low-power assembly running at 150kW or a high-power version capable of 250kW. Grouping racks together into Pods, each with a dedicated electrical feed, helps avoid the complex power distribution that often emerges in a data centre when some racks have to “borrow” breaker space from Power Distribution Units (PDUs) that are not physically close.
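The relationship between these three drivers can be sketched with simple arithmetic: a Pod's electrical budget divided by the average rack density gives the number of racks its feed can support. The following Python sketch uses the 150kW and 250kW budgets cited above; the function name and the example density of 10kW per rack are illustrative assumptions, not figures from the White Paper.

```python
# Illustrative sketch only: the 150 kW / 250 kW Pod budgets come from
# the press release; the helper and example density are assumptions.

POD_BUDGETS_KW = {"low-power": 150, "high-power": 250}

def racks_per_pod(pod_budget_kw: float, avg_rack_density_kw: float) -> int:
    """Number of racks a Pod's dedicated feed can support at a given
    average rack power density (kW per rack)."""
    if avg_rack_density_kw <= 0:
        raise ValueError("rack density must be positive")
    return int(pod_budget_kw // avg_rack_density_kw)

# At an assumed average of 10 kW per rack:
for label, budget_kw in POD_BUDGETS_KW.items():
    print(f"{label}: {racks_per_pod(budget_kw, 10)} racks")
```

Run at 10kW per rack, this yields 15 racks for a low-power Pod and 25 for a high-power one, showing how feed choice and density together fix Pod size.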
To make the best use of physical space, the longest practical Pod for the room, balancing power and rack density, should be designed. But each individual IT room presents its own challenges, such as room shape, building columns and ducting, which must be considered. The paper suggests best-practice guidelines to follow in every case.
When determining rack density, the paper recommends that designers and operators underestimate the expected rack density, because it is more expensive to deploy IT below the data centre design density than above it.
Standardising Pod designs and limiting the number of configurations can make Pod-level deployments quicker and easier. Organising IT racks into Pods also makes it easier to vary power and cooling redundancies and architectures based on specific business needs.