MIT Announces New System Capable of Reducing Data-Transmission Delays by Half

Researchers at the Massachusetts Institute of Technology have developed a new network-management system that, in experiments, reduced the average queue length of routers in a Facebook data center by 99.6% — virtually doing away with queues. When network traffic was heavy, the average latency — the delay between the request for an item of information and its arrival — shrank nearly as much, from 3.56 microseconds to 0.23 microseconds.

Like the internet, most data centers use decentralized communication protocols: Each node in the network decides, based on its own limited observations, how rapidly to send data and which adjacent node to send it to. Decentralized protocols have the advantage of being able to handle communication over large networks with little administrative oversight.

The MIT system, dubbed Fastpass, instead relies on a central server called an “arbiter” to decide which nodes in the network may send data to which others during which periods of time.

“It’s not obvious that this is a good idea,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of the paper’s co-authors.

With Fastpass, a node that wishes to transmit data first issues a request to the arbiter and receives a routing assignment in return. The researchers’ experiments indicate that an arbiter with eight cores, or processing units, can keep up with a network transmitting 2.2 terabits of data per second. That’s the equivalent of a 2,000-server data center with gigabit-per-second connections transmitting at full bore all the time.
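The request/grant exchange described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: the real arbiter uses a pipelined maximal-matching scheme, while the greedy loop below simply grants each sender the earliest timeslot in which both endpoints are idle. The `Arbiter` class and its `request` method are invented names for this sketch.

```python
from collections import defaultdict

class Arbiter:
    """Toy central arbiter in the spirit of Fastpass (illustrative only)."""

    def __init__(self):
        # busy[t] holds the set of nodes already transmitting or
        # receiving during timeslot t
        self.busy = defaultdict(set)

    def request(self, src, dst, n_slots):
        """Grant src -> dst the earliest timeslots where both are free."""
        grants, t = [], 0
        while len(grants) < n_slots:
            if src not in self.busy[t] and dst not in self.busy[t]:
                self.busy[t].update((src, dst))
                grants.append(t)
            t += 1
        return grants

arb = Arbiter()
print(arb.request("A", "B", 2))  # slots [0, 1]: the network is empty
print(arb.request("A", "C", 1))  # slot [2]: A is busy in slots 0 and 1
print(arb.request("C", "D", 1))  # slot [0]: C and D are both idle there
```

Because every transmission is scheduled before it happens, no two flows ever contend for the same link in the same timeslot, which is why queues at the routers nearly disappear.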

“This paper is not intended to show that you can build this in the world’s largest data centers today,” Balakrishnan says. “But the question as to whether a more scalable centralized system can be built, we think the answer is yes.” Moreover, “the fact that it’s two terabits per second on an eight-core machine is remarkable,” Balakrishnan says. “That could have been 200 gigabits per second without the cleverness of the engineering.”

Today, to avoid latencies in their networks, most data center operators simply sink more money into them. Fastpass “would reduce the administrative cost and equipment costs and pain and suffering to provide good service to the users,” Balakrishnan says. “That allows you to satisfy many more users with the money you would have spent otherwise.”

For more details, visit