Augmenting operations at the edge
Mon 18 Mar 2019 | Kevin Brown

Kevin Brown, VP Global Data Center Strategy and Technology at Schneider Electric, speaks to Techerati on meeting data centre challenges at the edge and how emerging analytics capabilities can help augment operations
There is a huge amount of hype about the edge. It is touted as a multi-billion-dollar opportunity and is expected to make things massively more efficient, but for the most part the industry is still struggling to define what the edge really is.
The way I like to talk about it is as a hybrid environment which encompasses three types of data centre.
Firstly, there is the centralised data centre owned by the cloud giants such as Amazon, Google and Microsoft. Secondly, there is the regional edge. These facilities are managed by the big players too, but mainly by enterprises – especially legacy companies.
These businesses generally have legacy apps that they cannot, or do not want to, put into the cloud. They may have their own data centres, or they may choose to opt for colocation.
Lastly, there is the local edge – yesterday’s server rooms and wiring closets. As the first point of connection, and a key factor in the resiliency of the whole system, this is a very important part of the equation.
Local edge micro data centres are typically historic facilities that nobody really pays much attention to – wires hang all over the place and the rooms are rarely secured to any degree. They are not what you would consider best practice.
The reliability of this edge, therefore, is much lower than that of the big hyperscale centres, and if you start looking at the availability of the whole system, the availability of the local edge starts to dominate the calculation.
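To see why the weakest site ends up dominating, consider a simple series-availability calculation. The figures below are illustrative assumptions for the sake of the example, not numbers from the interview:

```python
# A minimal sketch of series availability: if traffic must pass through the
# core, a regional site and the local edge, the end-to-end figure is roughly
# the product of the individual availabilities.

def chain_availability(availabilities):
    """End-to-end availability when every component in the chain must be up."""
    total = 1.0
    for a in availabilities:
        total *= a
    return total

# Hypothetical figures: a hyperscale core at "four nines", a regional site at
# "three nines", and a neglected local edge room at roughly 99%.
core, regional, local_edge = 0.9999, 0.999, 0.99

overall = chain_availability([core, regional, local_edge])
print(f"Overall availability: {overall:.4%}")  # ~98.89% -- pulled down to the local edge figure
```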
What needs to change?
To meet this challenge at the edge, the ecosystem needs to change the way it works together, re-architect inadequate management tools, and consider how new technologies such as AI and analytics can help to augment operations.
Looking at traditional management tools, there is a fundamental problem at the edge – with a host of different sites, how can you effectively control such diverse operations?
The solution is that all management tools need to become cloud-based. When companies migrate to a true cloud-architected management system, they can pay as they grow, maintenance is automatic, and they don’t have to worry about upgrades. Cybersecurity is also always up to date.
With cloud-architected management tools, data is also automatically stored in a data lake, meaning that the IT players, service providers and the customer are all able to look at the same data at the same time, act on it immediately and increase stability.
You might have UPS experts looking at your data logs and environments. Through their own experience and knowledge, they would then be able to see which units are at risk.
We need data scientists working with subject matter experts like these to develop the algorithms and, in time, turn them into machine learning systems.
Normalising this data is a big challenge. With UPS, for example, there are several different ways of reporting power. Is it the input power to the UPS, or is it output power? Is it reporting watts or is it reporting VA? What is the context of the data? How accurate is the data?
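As a rough illustration of what that normalisation involves, the sketch below maps two hypothetical UPS readings onto a common schema. The field names, devices and default power factor are assumptions made for the example, not a real Schneider Electric data model:

```python
# A minimal sketch of normalising UPS power readings that arrive in different
# conventions (input vs output power, watts vs VA).

def normalise_ups_reading(raw: dict) -> dict:
    """Map a vendor-specific UPS reading onto a common schema in watts."""
    side = raw.get("side", "output")   # input or output power
    value = raw["value"]
    unit = raw.get("unit", "W")

    # Apparent power (VA) needs a power factor to estimate real power (W).
    if unit == "VA":
        value = value * raw.get("power_factor", 0.9)  # assumed default

    return {
        "metric": "ups_output_power_w" if side == "output" else "ups_input_power_w",
        "value": round(value, 1),
        "source_unit": unit,
    }

# Two devices reporting the "same" quantity in different conventions.
print(normalise_ups_reading({"side": "output", "value": 4200, "unit": "W"}))
print(normalise_ups_reading({"side": "output", "value": 5000, "unit": "VA", "power_factor": 0.85}))
```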
Once we conquer the issues around normalisation, we will be able to do something truly meaningful with AI and help simplify customers’ experience. For AI to work, we have to make a focused effort to keep improving our analysis and ultimately become more predictive.
A further challenge is getting people to understand the opportunities AI gives us and to change their behaviour. The technical challenges are real, but the more painful conversations are internal – it is much more about the larger cultural shift that we need to tackle.