Edge networking without compromising on efficiency
Mon 21 Mar 2016
There is a very specific reason for Edge. We deploy workloads at the Edge when the application, whether it is driven by a sensor or by an individual, is highly sensitive to latency and bandwidth, and when the required performance cannot be delivered by a centralised model.
As operational best practices and operational tools like IT asset management and IT service management develop quickly, one could argue that it doesn't actually matter whether the infrastructure sits in one big central place or in hundreds of small distributed locations. If your management of infrastructure, technologies and processes is good, it should be able to cope with a hybrid model.
How the edge advanced
Ask yourself: ten years ago, could you have managed both a highly centralised architecture and a highly distributed Edge network at the same time? Probably not. But those management platforms and processes are significantly more mature now than they have ever been. It is reasonable to say that you can deploy at the Edge and still maintain cost-efficiency, reliability and high performance.
In many ways that's exactly how the telco market has operated for decades, and it has done that by understanding how to deploy infrastructure in both a centralised and a distributed way. For example, with cellular communications you wouldn't build one huge radio mast in the middle of a country like the UK: it would be incredibly expensive, it would have to be incredibly powerful, and most importantly it wouldn't give you the performance if you were a long way away from it. That's why operators have hundreds of thousands of mobile repeater sites.
By contrast, they don't have hundreds of customer databases; they have one central one. The challenge is how to do Edge without compromising on cost-efficiency and everything else.
In many ways Edge is a perfect example of deploying something you know has a significant lifecycle: at the very start of the process you have to think about how you are going to manage it, and how you are going to make sure its performance is both reliable and predictable.
Resilience through distribution
Modern databases, modern applications and even modern IT infrastructure are much more able to live in that distributed world than they used to be. Things like database replication, latency tolerance in applications and having thousands of instances of the same application running in sync with each other are all much easier to manage now than they ever have been.
So the industry could start to deploy applications in data centres that are less resilient individually, but could compensate with increased redundancy and resilience by using multiple sites. What’s more you don’t have to build resilience at every level because the infrastructure as a whole becomes more reliable as a result of distribution and failover mechanisms.
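As a rough illustration of that trade-off (the availability figures here are hypothetical assumptions, not numbers from the article): if each small site is, say, only 99% available on its own, the chance that all the sites you can fail over to are down at the same moment shrinks geometrically as you add sites.

```python
# Hypothetical sketch: availability gained by failing over across multiple
# independently-failing sites. The 99% per-site figure is an assumption
# for illustration, not data from the article.

def combined_availability(site_availability: float, sites: int) -> float:
    """Probability that at least one of `sites` independent sites is up."""
    return 1.0 - (1.0 - site_availability) ** sites

HOURS_PER_YEAR = 8760

for n in (1, 2, 3):
    a = combined_availability(0.99, n)
    downtime = (1.0 - a) * HOURS_PER_YEAR
    print(f"{n} site(s): availability {a:.6f}, ~{downtime:.2f} h/yr unavailable")
```

Under this simple independence assumption, a single 99% site is down roughly 88 hours a year, while a pair with failover is jointly down less than an hour, which is the sense in which individually weaker sites can add up to a stronger whole.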
It’s also one of the ways you can make an Edge infrastructure more efficient than a centralised one because you don’t have to do all of that over-engineering in the centre that you used to do. You can use the fact that you’ve got a distributed infrastructure to create resilience.
In other words, if you have 100 sites and you want to run a 2N+1 or an N+1 architecture model, the extra site is not as expensive as it would be with a huge monolithic infrastructure where the N+1 would be extremely expensive.
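Putting rough numbers on the point above (site counts are illustrative assumptions): the spare capacity an N+1 model carries becomes a smaller and smaller fraction of the estate as the number of sites grows.

```python
# Hypothetical arithmetic for N+1 spare-capacity overhead. The site counts
# are illustrative assumptions, not figures from the interview.

def n_plus_1_overhead(n_sites: int) -> float:
    """Fractional overhead of carrying one spare site alongside n working ones."""
    return 1 / n_sites

# A monolithic facility doubled for redundancy carries 100% extra capacity;
# 100 distributed sites plus one spare carry only 1% extra.
print(f"monolithic N+1 (n=1): {n_plus_1_overhead(1):.0%} extra capacity")
print(f"100 sites, N+1:       {n_plus_1_overhead(100):.0%} extra capacity")
```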
Responding to IoT drivers
The tipping point for Edge computing is going to be the need to increase data centre capacity quickly in response to drivers like IoT. It will be easier to deploy greater numbers of micro data centres than to build a smaller number of massive data centres. Another thing that is changing is that the infrastructure, especially the mechanical and electrical infrastructure that you need to build smaller facilities is much more affordable than it used to be.
In the past you only really reached economies of scale when you were building at multi-megawatt size. It's much easier now because you can buy small pieces of cooling equipment or smaller blocks of power equipment off the shelf. In the past the sweet spot was often at 1MW, so it cost about the same to build a 0.5MW data centre as a 1MW facility; today it's not far off half the cost.
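The shift described above can be sketched with simple numbers (all prices here are hypothetical assumptions, not figures from the article): if a 0.5MW build once cost about the same as a 1MW build, the cost per megawatt doubled as you went smaller; if it now costs about half, the small-build penalty largely disappears.

```python
# Hypothetical cost-per-MW comparison. All build costs are illustrative
# assumptions, not figures from the article.

def cost_per_mw(build_cost: float, capacity_mw: float) -> float:
    """Unit cost of capacity for a given build."""
    return build_cost / capacity_mw

# "In the past": a 0.5MW facility cost about the same as a 1MW one,
# so the smaller build paid twice as much per megawatt.
past_1mw    = cost_per_mw(10.0, 1.0)
past_halfmw = cost_per_mw(10.0, 0.5)

# "Today": the 0.5MW build is not far off half the cost, so the
# per-megawatt penalty for building small largely disappears.
now_halfmw = cost_per_mw(5.0, 0.5)

print(f"past 1MW:   {past_1mw} per MW")
print(f"past 0.5MW: {past_halfmw} per MW")
print(f"now 0.5MW:  {now_halfmw} per MW")
```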
The fact that you can now buy all of that infrastructure in one small box, at an affordable price, is important. You can assume high levels of quality and reliability from the manufacturing process. By contrast, five years ago, if you wanted to build a one-rack solution you had to do it yourself, which made it more expensive and probably quite hard to maintain.
Read part one of Arun’s interview on immaturity in the data centre industry here.