The latest data centre developments set to transform the industry
Thu 23 Mar 2017
On a recent visit to Schneider Electric’s global Innovation and R&D centre in St Louis, Missouri, The Stack heard from Victor Avelar, Director and Senior Research Analyst at the company’s Data Centre Science Centre.
From innovations in power and cooling to the introduction of AI and analytics, the expert discussed the emerging IT trends that will define the modern data centre. Below, he provides a summary of his current areas of research.
Role of the micro data centre
Beyond its potential for edge computing, Avelar considers a future data centre model which would see businesses deploy a series of micro data centres in aggregate to form a distributed IT estate, delivering the equivalent capacity of a single MW data centre.
‘Let’s say you take a 1MW data centre and break it up into one hundred 10kW micro data centres. These micro sites could be within existing offices and buildings in a business’s real estate portfolio. It could be possible to connect these micro facilities so that they logically appear like a 1MW data centre,’ he suggests.
While Amazon and other internet giants are adopting a similar distributed approach, enterprise customers should not ignore the benefits. The model offers significant savings on installation, with access to existing generators, switchgear and cooling. Avelar explains that where this equipment does not exist, equipment such as in-row cooling or DX systems could easily be bolted on.
‘Our team found that if you can break a 1MW data centre up into 100 pieces, distribute them out and connect them over the internet, you can save around 40-50% on the CAPEX of the data centre,’ he says.
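The arithmetic behind this model can be sketched as follows. The per-kW cost figures are hypothetical assumptions chosen only to land inside the 40-50% range Avelar cites; they are not Schneider Electric’s published numbers.

```python
# Illustrative sketch of the distributed micro data centre arithmetic.
# All cost figures are hypothetical assumptions for demonstration only.

SITE_COUNT = 100
SITE_CAPACITY_KW = 10

# Aggregate capacity of the distributed estate equals one 1MW facility.
aggregate_kw = SITE_COUNT * SITE_CAPACITY_KW
assert aggregate_kw == 1000  # 1 MW

# Assumed CAPEX per kW: a purpose-built 1MW site bears the full cost of
# generators, switchgear and cooling; micro sites reuse existing building
# infrastructure, so their per-kW build cost is assumed lower.
CENTRAL_CAPEX_PER_KW = 10_000   # assumed, USD
MICRO_CAPEX_PER_KW = 5_500      # assumed, USD

central_capex = aggregate_kw * CENTRAL_CAPEX_PER_KW
micro_capex = aggregate_kw * MICRO_CAPEX_PER_KW

saving = 1 - micro_capex / central_capex
print(f"CAPEX saving: {saving:.0%}")  # 45%, within the cited 40-50% range
```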
Avelar further argues that the model makes sense from a redundancy point of view. He asks why a company would build a large 1MW data centre, centralising its mission-critical equipment and creating a single point of failure.
Businesses could take the money they would otherwise invest in fortifying one building and instead distribute those assets across multiple locations and geographies – ‘it’s like a RAID array, you take one out and you may see a little latency but you’re not completely out.’
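The RAID analogy can be made concrete with a small expected-capacity calculation. The availability figures below are illustrative assumptions, not measured values for any real estate.

```python
# Expected availability: one centralised 1MW site vs. 100 micro sites.
# Availability figures are illustrative assumptions only.

CENTRAL_AVAILABILITY = 0.9999   # assumed for a single hardened 1MW site
MICRO_AVAILABILITY = 0.999      # assumed for each 10kW micro site
SITE_COUNT = 100

# Centralised: all-or-nothing. The site is either fully up or fully down.
p_total_outage_central = 1 - CENTRAL_AVAILABILITY

# Distributed: an independent site failure removes only 10kW of capacity.
# A total outage requires every site to be down at the same time.
p_total_outage_micro = (1 - MICRO_AVAILABILITY) ** SITE_COUNT

print(f"P(total outage), centralised: {p_total_outage_central:.1e}")
print(f"P(total outage), distributed: {p_total_outage_micro:.1e}")
```

Under these assumed (and independent) failure rates, losing a single micro site trims capacity slightly — the ‘little latency’ Avelar describes — while a simultaneous total outage becomes vanishingly unlikely.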
That said, Avelar references the major outage which hit Google’s cloud services in 2016. Although Google follows a distributed model, concurrent maintenance events triggered a rebalancing of data by the distributed storage system, which resulted in elevated latency and errors in one of its zones. ‘This example raised by Google demonstrates that while we may have a distributed architecture, there still remains a point of failure in the processing. However, over time these problems can be resolved.’
The IT admin community is not quite ready to roll out this distributed micro data centre approach, says Avelar, but, as with other technologies such as virtualisation, confidence will grow over time and the business benefits could be enormous.
Condition-based maintenance
Condition-based maintenance (CBM) could change the face of data centre operations, with particularly significant ramifications for traditional calendar-based maintenance services. Avelar argues that the approach, which has been discussed in research circles for over 10 years, is finally resonating with data centre managers.
The technology involves using data collected from sensors positioned around the data centre to predict when something is going to fail. Avelar says that while businesses will need to invest more time and money installing these sensors to gather the information, the system will ultimately lead to reduced maintenance costs and improved reliability.
The researcher also suggests that a condition-based approach could prevent human error caused by maintenance operations. He refers to the dilemma of routine maintenance: every intervention carries a certain probability of introducing a new defect, whether or not anything was broken to begin with.
‘We should all aspire as data centre managers to reach a point where we can receive a call or an email notifying us that a technician is showing up tomorrow to replace the capacitor in our UPS because, with 90% certainty, it has another month before it fails.’
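One minimal way such a prediction could be made is to fit a trend to periodic sensor readings and extrapolate to a failure threshold. The capacitor readings (rising equivalent series resistance) and the end-of-life threshold below are invented for illustration; real CBM systems use far richer models.

```python
# Minimal condition-based maintenance sketch: fit a least-squares linear
# trend to periodic sensor readings and extrapolate the time until an
# assumed failure threshold is crossed. All values are hypothetical.

def days_until_threshold(days, readings, threshold):
    """Fit y = slope*x + intercept, then solve for threshold crossing."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(days, readings)) \
        / sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # not degrading; no predicted failure
    return (threshold - intercept) / slope - days[-1]

# Hypothetical UPS capacitor ESR readings (milliohms), monthly checks.
days = [0, 30, 60, 90, 120]
esr = [20.0, 21.1, 22.0, 23.2, 24.1]
FAILURE_ESR = 30.0  # assumed end-of-life threshold

remaining = days_until_threshold(days, esr, FAILURE_ESR)
print(f"Predicted days to failure: {remaining:.0f}")
```

In practice the ‘90% certainty’ Avelar describes would come from a confidence interval around such a trend, informed by fleet-wide failure data rather than a single sensor.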
Artificial intelligence will play a big part in this predictive capability, Avelar surmises, adding that the more data that is collected, the more confident providers will be in selling and leasing their data centre equipment.
Indirect air economisers versus chilled water
Next, Avelar turns to his preliminary analysis of energy consumption in indirect air economiser technology compared to chilled water systems.
From initial research, the Schneider Electric Data Centre Science Centre found that if you start with the same 3000-amp switchgear and work out how many IT amps are left over after accounting for mechanical and electrical components, you can gain between 100 and 200kW of extra IT capacity using indirect air economisers, depending on the location.
‘If you’re in Miami, Florida, which is hot and humid, you’re going to have to size the compressors in the indirect air economiser at 100%, so you tend to lose some advantage. You don’t have to deal with large chilled water pumps, so you still have a certain advantage, perhaps a 50kW advantage. Whereas in Seattle, Washington, you could get an extra 150kW,’ explains Avelar.
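The ‘leftover IT amps’ comparison can be sketched numerically. The 480V feed and the cooling overhead fractions below are assumptions chosen to reproduce the illustrative 50kW and 150kW advantages Avelar mentions; they are not Schneider Electric’s measured figures.

```python
# Sketch of the "IT amps left over" comparison for a fixed 3000A feed.
# Voltage and overhead fractions are assumptions picked to mirror the
# article's illustrative 50kW (Miami) / 150kW (Seattle) advantages.
import math

VOLTAGE = 480   # assumed line-to-line voltage, V (3-phase)
AMPS = 3000     # fixed switchgear rating

# Total power available from the switchgear: sqrt(3) * V * I.
total_kw = math.sqrt(3) * VOLTAGE * AMPS / 1000  # ~2494 kW

def it_kw(overhead_fraction):
    """IT load left after mechanical/electrical overhead."""
    return total_kw * (1 - overhead_fraction)

# Assumed overhead fractions: chillers and large pumps vs. economiser
# compressors and fans sized for the local climate.
chilled_water = 0.40
indirect_miami = 0.38     # compressors sized at 100%, but no large pumps
indirect_seattle = 0.34   # economiser carries most of the load

print(f"Miami advantage:   {it_kw(indirect_miami) - it_kw(chilled_water):.0f} kW")
print(f"Seattle advantage: {it_kw(indirect_seattle) - it_kw(chilled_water):.0f} kW")
```

The point of the comparison is that the switchgear rating is fixed, so every kW the cooling plant does not consume becomes leasable IT capacity.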
The expert suggests that such technology could provide a great advantage for colocation providers which are buying the 3000-amp gear. ‘These businesses have to buy the equipment anyway, but if they select an indirect air economiser cooling system, which may cost a little more initially, they could get an extra 100 or 250kW of IT load which they can then lease out to customers.’
Schneider Electric is an official Knowledge Partner for The Stack, providing industry expertise on many aspects of data centre infrastructure. To read more from Schneider Electric, please visit its Partner Page.