The edge has been “real” for years. What’s new is the scale and scope of mission-critical edge deployments.
It’s safe to say the data centre industry expects a lot from “the edge”. The short, sharp, edgy term evokes the bleeding edge and the frontier of innovation.
On the typical view, the edge is a (relatively) new IT trend in which specific, high-tech innovations are needed to process data at remote enterprise sites away from the data centre, driven by an uptick in demand for ultra-low-latency applications, such as those required to orchestrate autonomous cars.
The argument goes that the need for low latency in a growing number of scenarios means enterprises don’t have time to make repeated trips back and forth to the cloud or central data centre. Edge processing in these applications combines monitoring and analytics. Monitoring demands “IoT”-type devices to sense critical environments, such as oil and gas rigs, while analytics requires solutions such as edge-based AI inference and training that can analyse data and return results in real time, say, to proactively warn personnel of danger.
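To make that pattern concrete, here is a minimal sketch of the monitoring-plus-analytics split described above. It is illustrative only, not Schneider’s stack: the sensor feed, anomaly model and function names are all hypothetical stand-ins. The point is the two paths: scoring and alerting happen locally so no cloud round trip sits between a dangerous reading and the warning, while raw data is batched back to the cloud on a slower, latency-tolerant path.

```python
import random
import time
from collections import deque

ALERT_THRESHOLD = 0.9     # assumed anomaly score above which personnel are warned
UPLOAD_BATCH_SIZE = 100   # readings buffered before a slower cloud sync

def read_sensor() -> dict:
    """Hypothetical stand-in for an IoT gauge on a rig; here just simulated values."""
    return {"pressure": random.gauss(200.0, 5.0), "ts": time.time()}

def local_anomaly_score(reading: dict) -> float:
    """Hypothetical edge-hosted model; here a crude deviation-based score in [0, 1]."""
    return min(abs(reading["pressure"] - 200.0) / 20.0, 1.0)

def raise_local_alarm(reading: dict, score: float) -> None:
    """Real-time path: trigger an on-site warning with no network dependency."""
    print(f"ALERT score={score:.2f} reading={reading}")

def upload_to_cloud(batch: list) -> None:
    """Slow path: deferred sync for fleet-wide analytics and model retraining."""
    print(f"Uploading {len(batch)} readings to the cloud...")

buffer = deque()
for _ in range(500):                       # stand-in for an always-on monitoring loop
    reading = read_sensor()
    score = local_anomaly_score(reading)   # analysed at the edge, not in the cloud
    if score > ALERT_THRESHOLD:
        raise_local_alarm(reading, score)  # milliseconds, stays on-site
    buffer.append(reading)
    if len(buffer) >= UPLOAD_BATCH_SIZE:
        upload_to_cloud(list(buffer))      # tolerant of WAN latency
        buffer.clear()
```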
As an edgy name for edgy technologies, “the edge” is without equal from a marketing standpoint. The term’s inherent marketability has also led to what you might call EFS (edge fatigue syndrome): it’s not that edge architectures aren’t being deployed in specific scenarios where real-time response is a necessity. The problem is that the law of low latency is not as generalisable as we are told. Can the average enterprise justify a significant outlay on new edge infrastructure to save a few milliseconds, rather than relying on predictive analytics in the cloud?
Last week Schneider Electric welcomed journalists to its Boston facility for a mini-conference named ‘Life at the Edge’. Over the three days, we were systematically guided through Schneider’s edge strategy — encompassing integrated systems, a cloud-based software stack, and an ecosystem of partners — alongside examples of the edge in action. Schneider is in little doubt that a considerable number of commercial, industrial and telco businesses will soon be dependent on all three ingredients. That’s because Schneider has a slightly different take on what the edge is.
Defining the edge
As mentioned, it has been commonly accepted that edge computing simply describes the processing of data close to where the data is being produced. What tied together the wide-reaching presentations and demonstrations across the three days in Boston was that Schneider views it as a mistake to think of the edge as a necessarily new breed of distributed infrastructure. Au contraire. The edge has been around for years: in server rooms, in operating theatres, in research facilities, if you look at it primarily through the lens of mission-criticality.
Kevin Brown, CTO and Senior VP of Innovation at Schneider Electric’s power division, told attendees that the mainstream definition is unnecessarily narrow. The edge, he said, is simply the “first point you’re plugging into the network”, which encompasses any computing enclosure, space or facility physically closer to the point of origin of data or to the user base, such as server rooms or wiring closets.