
How Cloud and IoT are shifting data centre focus to the Edge

Thu 5 Jan 2017


Kevin Brown, senior vice president for data centre solutions at Schneider Electric, discusses why new tech trends are requiring businesses to rethink resiliency at the Edge…

With the emergence of trends such as Big Data, IoT, and the wider hybrid computing environment, we’re seeing a three-tier data centre architecture developing. This network includes large, cloud-focused data centres and regional facilities, as well as localised, or edge data centres. The historic idea of server rooms and wiring closets being different from data centres is disappearing. Businesses operating in a hybrid environment are beginning to realise that it is the smaller localised data centres that define availability, and not the big centralised data centres.

This trend stems from a recent demographic and cultural shift. Younger generations view network availability the way that we previously viewed the availability of electricity. With this change, businesses need to extend their concerns from simply supplying power to the IT rack, to actually delivering the experience that customers want. Poor latency in online gaming, for example, is just not acceptable. As this generation comes in, so will the expectations, and businesses need to be prepared.

Previously an operator’s focus would be tuned to the big, centralised data centre – where the mission-critical data is stored – and there was little concern for the smaller server rooms. The new model that I see evolving almost reverses this consideration. The small, localised data centres will become better equipped in terms of resiliency and redundancy compared to the large cloud data centres. Just because they are small facilities does not mean that the level of infrastructure supporting them should be any less important.

Vertically-orientated implementations

Taking IoT as an example, in the field of oil and gas exploration the level of availability and the amount of compute demanded locally are enormous. In this instance the only connection to the internet is via satellite, and reaching the cloud over satellite would demand huge bandwidth at a cost that is simply not feasible. This means that a tremendous amount of computing power will need to be localised to support the operation.

Localised data centres will crop up in various verticals: on factory floors, in bank branches and in retail stores. While similar tools will be available, the final configurations will vary depending on requirements. Using the example of the gas rig again, perhaps there is a spare room available, so security would not need to be built into the rack itself. Conversely, for a retail store the rack would need to be very robust and physically secure.

Challenges to deployment

As companies come to this realisation, they face considerable technical challenges. While rolling out these smaller, localised data centres sounds straightforward at first, matching the configuration to the requirement is a lot more complicated. Companies are looking for support in how best to build, deploy and service localised centres.
What’s more, CIOs and technical staff at major companies are struggling with the cultural change. Asked if they would consider implementing dual networks at their localised data centre, they typically respond that first of all it is not a data centre, it is a server room, and secondly that they do not equip server rooms with these technologies. Even when they understand the importance of these small data centres themselves, they are still not sure that they can convince their boss.

A further stumbling block in rolling out a high-grade localised data centre is managing the cybersecurity threat around physical access. Generally, smaller server rooms are not nearly as physically secure as you would expect at bigger facilities.

For example, when visiting a colocation data centre, you have to provide ID, wear an ID badge, and leave any baggage at the entrance. Whereas with a smaller wiring closet, you can walk straight in, and all you have to do is talk to the janitor.

Calculating availability

The critical consideration involves assessing business functions and the number of people impacted by the data centre. If one individual gets disconnected, it’s not too disastrous, but if an entire call centre goes down there is an immediate impact. Businesses therefore have to put together a methodology for availability.

Using figures from the Uptime Institute, a Tier I data centre offers 99.671% availability, while a Tier IV provides 99.995%. Although that does not sound like a big difference, it is hugely significant once the percentages are translated into hours. Looked at this way, a Tier IV data centre has roughly 25-30 minutes of downtime a year, whereas 99.671% availability at a Tier I facility translates into 28.8 hours of downtime a year. That is more than a 50 times difference.
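As a rough illustration of that arithmetic, the short sketch below simply converts an availability percentage into annual downtime hours; the tier percentages are the ones quoted above, not additional Uptime Institute data.

```python
# Rough sketch: convert an availability percentage into annual downtime hours.
# The tier percentages used here are the illustrative figures quoted above.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

print(annual_downtime_hours(99.671))  # Tier I  -> ~28.8 hours a year
print(annual_downtime_hours(99.995))  # Tier IV -> ~0.44 hours (~26 minutes) a year
```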

Therefore, if a localised data centre suffers 30 hours of downtime each year and a thousand people rely on it, that is 30,000 man hours lost. By contrast, even if 10,000 people are affected by 30 minutes of downtime at the centralised data centre, that is only 5,000 man hours.

Using this calculation model, businesses can rank departments and functions by severity of failure, multiplying expected downtime by the number of people affected to establish a ranking system. On this basis, we can learn which edge data centres need particular attention.
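A minimal sketch of that ranking approach is shown below; the site names, downtime figures and headcounts are the hypothetical values from the example above, used only to show the downtime-times-people calculation.

```python
# Hypothetical sketch of the ranking methodology described above:
# impact (man hours) = annual downtime hours x number of people relying on the site.

sites = [
    # (name, annual downtime in hours, people relying on the site) - illustrative values
    ("Localised edge data centre", 30.0, 1_000),
    ("Centralised data centre", 0.5, 10_000),
]

# Rank sites by man hours of lost productivity, worst first.
ranked = sorted(sites, key=lambda s: s[1] * s[2], reverse=True)

for name, downtime, people in ranked:
    print(f"{name}: {downtime * people:,.0f} man hours lost per year")
# Localised edge data centre: 30,000 man hours lost per year
# Centralised data centre: 5,000 man hours lost per year
```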


Schneider Electric is an official Knowledge Partner for The Stack, providing industry expertise on many aspects of data centre infrastructure. To read more from Schneider Electric, please visit its Partner Page.
