The Stack Archive

Keeping up with Data Centre Interconnect

Tue 3 Mar 2015

Stu Elby, senior vice president of cloud network strategy and technology at Infinera, discusses the growing demand for inter-data centre traffic and the efficient solutions that can respond to this challenge.

Cloud growth and the early adoption cycle

Cloud-based services have emerged as a critical part of communications. From a consumer perspective, an ever-increasing share of communication, whether from laptops or smartphones, has a cloud component. Businesses have also been driven towards cloud adoption by the cost and complexity of operating an IT back office. More companies than ever before are finding it far more cost-efficient to outsource at least some of their IT infrastructure and deploy cloud solutions, whether Software as a Service (SaaS), Infrastructure as a Service (IaaS), hosting, colocation, or even building their own data centre facilities. This trend across both markets has pushed traffic volumes enormously and has driven bandwidth consumption in urban areas in particular.

Virtualisation has been critical in these early adoption stages. It would be nice to think that cloud and data centre services sit close to the end-user and hold everything that is required – that would be very efficient, but sadly it’s not the reality. Facebook, for example, is a social environment where the user constantly accesses multimedia content from numerous data centres around the world, yet all of that data is still brought back to the user in real time. The main way to reach that level of scale and distribution is through virtualisation, such as VMs and Linux containers. This structure creates a compute and storage environment that can handle the large volume as, at such a scale, content and processing have to be spread over multiple data centres.

Data Centre Interconnect

Here at Infinera, and before that at Verizon, one of the largest telcos in the US, I have observed the rise of social media and of machine-to-machine communication. These types of services have caused a bandwidth amplification effect. When a piece of information is put into a data centre by an end-user, perhaps through social media or machine-to-machine communication, it generates a multiplicative amount of bandwidth between the computers, servers and storage. Facebook has quoted this multiplication as over 900x – if one bit is put in from a single query on a smartphone, it generates 900 times that traffic between the servers to fulfil the request.

Much of that traffic stays within a single data centre, but even if only 10-15% is transferred between data centres, that still equates to around 100x – one inputted bit creates roughly 100 bits of traffic between the networked data centres. Because of this amplification effect, bandwidth between data centres is now outpacing the bandwidth required to connect users to a data centre.
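The amplification arithmetic above can be sketched in a few lines. A minimal back-of-the-envelope model follows: the 900x multiplier and the 10-15% inter-data-centre share are the figures quoted in the article, while the specific 12% share used here is an illustrative assumption within that range, not a quoted number.

```python
# Back-of-the-envelope model of the bandwidth amplification effect.
# AMPLIFICATION (900x) and the 10-15% inter-DC range come from the
# article; the 12% share chosen here is an illustrative assumption.

AMPLIFICATION = 900     # bits of server-to-server traffic per input bit
INTER_DC_SHARE = 0.12   # assumed share of that traffic crossing data centres

def inter_dc_bits(input_bits: float) -> float:
    """Bits of inter-data-centre (DCI) traffic generated per input bits."""
    return input_bits * AMPLIFICATION * INTER_DC_SHARE

# One inputted bit creates roughly 100 bits of DCI traffic:
print(inter_dc_bits(1))  # -> 108.0
```

Any share in the quoted 10-15% range gives a figure of the same order, which is why the article rounds it to "around 100x".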

Large operators are therefore starting to build their own dedicated networks between data centres to be more cost-effective and to handle this increase in Data Centre Interconnect (DCI) traffic. DCI becomes its own network, separate from local networks which cover end-user traffic.

Big tech firms with data centres all over the world have networking demands measured in terabits per second. While access and aggregation networks are only starting to consider migrating to 100 gigabit wavelengths, larger companies with huge amounts of DCI traffic are already thinking in terms of terabits per second. Businesses are therefore looking for equipment that can handle this pressure, and vendors need to respond with cost-effective, easily and rapidly scalable platforms.

In the early days of cloud, the same equipment was used for both DCI and end-user traffic. However, with the rise of social media and machine-to-machine communication, bandwidth requirements across DCI have taken off and require a new class of equipment.

With this shift in speed and demand for terabits, operators cannot provide a solution that chews up half a rack of equipment and requires a kilowatt of power – that model simply would not fit a data centre’s real estate or energy profile. Running a data centre is all about power and space efficiency, as operators are under incredible growth pressure. The physical building cannot always expand, so every new generation of technology has to squeeze into the same amount of space and consume the same amount of power. It is therefore important to design systems that meet modern data centre footprint and energy goals.

At Infinera we have attacked this challenge by applying our photonic integrated circuit technology. Like any other integrated circuit, this allows us to achieve very efficient power and space profiles. We were the first company to enter the market with a purpose-built box just for the DCI market, called Cloud Xpress, which started shipping last December. The whole box consumes about one watt per gigabit per second. Successive generations of chips will continue to improve in power efficiency as bandwidth scales upwards, so integrated circuits will be a critical part of meeting the demand from data centre companies.
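The ~1 W per Gb/s figure quoted above implies a simple power budget for a given interconnect capacity. A rough sketch follows; the efficiency constant is the article's quoted figure, while the terabit capacity used in the example is an illustrative assumption, not a product specification.

```python
# Rough power-budget sketch based on the ~1 W per Gb/s efficiency
# quoted for Cloud Xpress in the article. The capacity value in the
# example below is illustrative, not a product specification.

WATTS_PER_GBPS = 1.0  # quoted efficiency: ~1 W per Gb/s of capacity

def transport_power_watts(capacity_gbps: float) -> float:
    """Estimated optical transport power draw at a given DCI capacity."""
    return capacity_gbps * WATTS_PER_GBPS

# A terabit (1,000 Gb/s) of interconnect at this efficiency:
print(transport_power_watts(1000))  # -> 1000.0, i.e. ~1 kW per Tb/s
```

This is why efficiency per bit matters: at constant watts per Gb/s, power draw scales linearly with capacity, so each chip generation must lower the constant for total consumption to stay flat as bandwidth grows.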

Cloud’s rapid growth rate and the need to keep power profiles constant are forcing rapid innovation across the industry. Whatever great solution exists today will be outdated in 18 months to two years. This is hugely different to the telco market, where a platform is created, placed in the network and sits there for 20 years. In this industry, new technologies come in quick steps, and data centre companies look to harness each iteration as it is released, in line with their efficiency strategies.

As a supplier into that space, you cannot offer products that take three years to get off the ground and are then expected to sit in the network for five to ten years. Innovation needs to happen on a 12-to-18-month cycle so that the market can quickly adopt a solution and jump on the next price-performance curve.

The hurdle for data centre companies is adopting these new technologies quickly and smoothly. Data centres are built to be very efficient from a software management and operations point of view, as operators are always adding new servers and storage. This should not be hindered by their network choices; it is therefore essential that new network solutions are completely programmable, extremely simple, and fit into a familiar IT operating environment just like another piece of storage equipment.

Infinera’s Cloud Xpress will be on display at the Fibre Technologies stand, #G68A, at Data Centre World, 11th – 12th March at London Excel.

