The fifth edition of Data Centre World Asia Singapore took place at the Marina Bay Sands last week. Day two featured a much-anticipated panel discussion on the future of edge data centres. In this article, panellist and A*STAR Head of Data Centre Joshua Au explores in more detail the role of edge data centres in a hyperscale world.
Hyperscalers have fundamentally changed the way data centres are designed. Granted, the design of each data centre ought to be based on the business needs of its customers, but what hyperscalers have demonstrated is that designing data centres for intensive scaling brings matchless benefits.
They have the economies of scale to customise the application layer, the servers and networks, and the facilities. They can experiment to find, replicate, and tweak optimal configurations, allowing them to maximise hardware and power density while minimising the overall costs of powering, cooling, and administering the data centre.
Once the application stack is designed for distributed computing, the impact of individual server nodes, racks, and perhaps even data halls failing is reduced significantly. This is what gives hyperscalers the liberty to handle wider ranges of temperatures and power to support exponential needs – albeit at the expense of hardware lifespan.
Perhaps most importantly, from a business perspective, hyperscalers have full oversight and control. They can make informed trade-offs to maximise their investments in a way a multi-tenanted facility would never be able to.
Space for the edge
Edge and hyperscale are not one and the same. However, they both seek the holy grail of maximising compute, and edge data centres can learn from the hyperscale design philosophy: designing the application layer, the servers and networks, and the facilities to maximise compute while achieving acceptable levels of safety, security, and availability.
It’s all down to creating the most value – recognising that design customisation and optimisation, and full-stack integration, come with their own sets of opportunities and risks – and making sure that value is passed on to the customer.
Once we set our eyes on the opportunities (while also being mindful of the risks), it’s easy to understand what’s driving edge data centre deployments. Smart nation projects, for instance, require dispersed compute nodes across the city to manage massive amounts of IoT sensor data; to provide high fidelity, near-real time response for driverless cars and drones; and to improve end-user gaming and live-streaming experiences.
Prospects for edge growth
It’s not just about technological solutions. Business acumen is needed to create and monetise that value. The business case for effective edge data centres becomes more apparent due to the growing need to manage exponential amounts of data, and the opportunity to improve end-user experience once 5G kicks in.
We must match this enthusiasm with caution, as there are a number of cases where edge compute affects human safety. To ensure driverless cars and drones in flight do not pose a threat to people, safe, secure, and reliable real-time data processing over a wide coverage area needs to be the priority.
Network and storage challenges
As mentioned above, the main goal of edge computing is to maximise the compute delivered per unit of transmission. Reducing the need for data to travel from geographically dispersed locations to the core delivers huge latency and bandwidth gains.
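To make the latency gain concrete, a rough sketch can estimate round-trip propagation delay over fibre for an edge node versus a distant core site. The distances (10 km to a local edge node, 1,000 km to a regional core) are illustrative assumptions, not figures from the panel, and the sketch ignores queuing and processing delays, which often dominate in practice.

```python
# Back-of-envelope round-trip propagation delay over optical fibre.
# Light in fibre travels at roughly two-thirds the speed of light in vacuum.
FIBRE_SPEED_KM_PER_S = 200_000  # ~0.67c, a common planning approximation

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

# Assumed distances: a nearby edge node vs a regional core data centre.
edge_rtt = round_trip_ms(10)     # 0.1 ms
core_rtt = round_trip_ms(1000)   # 10 ms

print(f"Edge (10 km):   {edge_rtt:.1f} ms round trip")
print(f"Core (1000 km): {core_rtt:.1f} ms round trip")
```

Even before adding switching and server time, the propagation floor alone differs by two orders of magnitude, which is why near-real-time workloads such as driverless cars favour compute placed close to the data source.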
However, there are many challenges. For one, we need to properly figure out how to power and transmit information to the end nodes (e.g. sensors, actuators, and IT devices). Several network protocols commonly used in IoT – such as Zigbee or BLE – are notoriously insecure. Edge expansion is an opportunity for telcos to rethink how the systems that interconnect mobile networks are distributed across 5G coverage areas. Getting this right could alleviate security risks.
There are also storage challenges. We can only expect the volume of data handled to increase exponentially. It has been suggested that each connected car could generate up to 4 TB of data daily. It will be interesting to see how network and storage requirements evolve for edge deployments in places where we would not typically expect to see them. We may see train stations, moving private cars, public trains, and public buses increasingly used to maximise opportunity. Recognising that edge compute may operate at wider temperature ranges, Nokia and Samsung have proposed interesting solutions to address storage needs at the edge with improved abilities to handle thermal load.
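The suggested figure of 4 TB of data per connected car per day implies a substantial sustained throughput if that data had to be backhauled to the core rather than processed at the edge. A quick back-of-envelope sketch (assuming the 4 TB figure and uniform generation over 24 hours, purely for illustration):

```python
# Rough sustained-throughput estimate for one connected car
# generating 4 TB/day, assuming uniform data generation.
TB = 1e12                 # bytes in a terabyte (decimal convention)
SECONDS_PER_DAY = 86_400

def sustained_mbps(tb_per_day: float) -> float:
    """Average throughput in megabits per second for a given TB/day volume."""
    bits_per_day = tb_per_day * TB * 8
    return bits_per_day / SECONDS_PER_DAY / 1e6

per_car = sustained_mbps(4)
print(f"One car at 4 TB/day averages about {per_car:.0f} Mbit/s")
```

At roughly 370 Mbit/s per vehicle on average, backhauling every car's raw data to a distant core is clearly impractical at fleet scale, which strengthens the case for filtering and processing most of it at the edge and sending only summaries upstream.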
There are many untapped opportunities arising from heightened expectations with regard to safety, security, availability, and user experience. Given what’s at stake, we do not want to leave the future of edge to chance. It is up to industry to act shrewdly and rise to the occasion. Although this will require the improved engagement of government, academia, consumers, and supply and distribution chains – the incentives are there for all. Edge data centres should learn from the hyperscale design philosophy: maximising compute while achieving acceptable levels of safety, security, and availability.