Facebook designs Express Backbone for cross-data center traffic
Tue 2 May 2017
Facebook has designed a new data center backbone network, called the Express Backbone. Built for speed and efficiency of data transfer, the Express Backbone manages cross-data center traffic that has been separated from internet-facing traffic.
Demand for bandwidth is always rising as users become more comfortable sending and storing ‘rich content’ such as photos and videos. This demand, along with unscheduled large-scale traffic bursts, strained the classic backbone network. Rather than devoting energy to optimizing the existing network, the team decided to split the traffic into two separate streams, creating a new kind of backbone structure for transferring data.
This is an improvement over Facebook's classic backbone, a single wide-area network that carried both types of traffic. Splitting the traffic into separate cross-data center and internet-facing streams allows each type of traffic to be optimized individually, increasing the speed and efficiency of data transfer in a new, flexible system.
The Express Backbone, or EBB, borrows a basic tenet of the Facebook data center network fabric, which runs on a ‘four plane’ topology of parallel networks. It uses a hybrid model for engineering data traffic, running both a central controller and distributed control agents. This creates a flexible network that can immediately redirect traffic in case of congestion or failure in a segment of the network.
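The hybrid model can be sketched roughly as follows: a central controller periodically assigns traffic splits across the four parallel planes, while a per-node agent can reroute locally the moment a plane fails, without waiting for the controller. This is a minimal illustrative sketch, not Facebook's actual implementation; all names and the even-split policy are assumptions.

```python
# Hypothetical sketch of EBB's hybrid control model: the central
# controller pushes traffic splits, while the local agent handles
# plane failures immediately on its own.

PLANES = ["plane-1", "plane-2", "plane-3", "plane-4"]

class ControlAgent:
    def __init__(self):
        # Splits assigned by the central controller; start with an
        # even spread across the four parallel planes.
        self.splits = {p: 1.0 / len(PLANES) for p in PLANES}

    def apply_controller_update(self, splits):
        """Accept a new traffic split computed centrally."""
        self.splits = dict(splits)

    def on_plane_failure(self, failed):
        """Local, immediate reroute: move the failed plane's share
        onto the surviving planes without waiting for the controller."""
        share = self.splits.pop(failed, 0.0)
        survivors = list(self.splits)
        for p in survivors:
            self.splits[p] += share / len(survivors)

agent = ControlAgent()
agent.on_plane_failure("plane-2")
print(agent.splits)  # plane-2's 25% share spread over the other three
```

The design point this illustrates is the division of labor: the controller optimizes globally on a slower timescale, while the agents keep reaction to failures fast and local.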
The team created the software that runs the central controller and the distributed control agents, as well as a traffic estimator. The traffic estimator collects traffic samples from the network and aggregates them by class into a traffic matrix, which is then relayed to the controller.
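The estimator's job can be sketched as a simple aggregation: fold sampled flows into a per-class matrix keyed by source and destination data center, the structure handed to the controller. The field names and traffic classes below are illustrative assumptions, not EBB's actual schema.

```python
from collections import defaultdict

def build_traffic_matrix(samples):
    """Aggregate sampled flows into a traffic matrix.

    samples: iterable of (src_dc, dst_dc, traffic_class, bytes_sampled).
    Returns {(src_dc, dst_dc): {traffic_class: total_bytes}}.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for src, dst, cls, nbytes in samples:
        matrix[(src, dst)][cls] += nbytes
    # Convert to plain dicts before relaying to the controller.
    return {pair: dict(classes) for pair, classes in matrix.items()}

# Hypothetical samples from two traffic classes between two sites.
samples = [
    ("dc1", "dc2", "replication", 500),
    ("dc1", "dc2", "batch", 1200),
    ("dc1", "dc2", "replication", 300),
]
print(build_traffic_matrix(samples))
# {('dc1', 'dc2'): {'replication': 800, 'batch': 1200}}
```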
While the EBB involved a significant amount of innovation, programming and testing, it took less than one year to complete the first iteration of the new network. Upon implementation, the team found that the EBB network improved visibility into, and response to, changing traffic demands, all within a simple multi-plane architecture.
Future improvements will include extending the controller to reserve bandwidth per service, which would let services react better to data transfer congestion, and adding a scheduler for large bulk data transfers.
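The two planned improvements can be sketched together as simple admission control on a link: services reserve bandwidth up front, and a bulk transfer is scheduled only if the unreserved headroom can absorb it. The class, service names, and capacities below are illustrative assumptions, not a description of EBB's actual design.

```python
class LinkScheduler:
    """Toy sketch of per-service reservations plus bulk admission."""

    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.reservations = {}  # service name -> reserved Gbps

    def reserve(self, service, gbps):
        """Reserve bandwidth for a service if capacity allows."""
        if sum(self.reservations.values()) + gbps > self.capacity:
            return False
        self.reservations[service] = self.reservations.get(service, 0) + gbps
        return True

    def headroom(self):
        """Unreserved capacity available for bulk traffic."""
        return self.capacity - sum(self.reservations.values())

    def admit_bulk(self, gbps):
        """Admit a bulk transfer only into unreserved headroom."""
        return gbps <= self.headroom()

link = LinkScheduler(capacity_gbps=100)
link.reserve("photo-replication", 40)
link.reserve("warm-storage-sync", 30)
print(link.admit_bulk(25))  # True: 30 Gbps of headroom remains
print(link.admit_bulk(35))  # False: would exceed the headroom
```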