Supercomputer, data center nets diverging

Technology News | By eeNews Europe



Researchers from China, Japan and the U.S. will give first looks at their supercomputer interconnects at the Hot Interconnects conference. Fujitsu will describe the Tofu interconnect used in the K computer, ranked in June as the world’s fastest system at 8 petaflops on the Linpack benchmark.

Chinese researchers will describe for the first time Galaxy, the network used inside their Tianhe-1a system, which held the top spot last year at 2.5 petaflops. IBM will disclose details of the interconnect used in its Sequoia supercomputer, which is expected to deliver 20 petaflops when it is completed in 2012.

All three networks draw on broadly similar concepts: the IBM interconnect uses a five-dimensional torus structure, Japan’s Tofu a 6-D torus/mesh network and China’s Galaxy a hierarchical fat tree.
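To make the torus idea concrete, the sketch below computes the nearest neighbours of a node in an n-dimensional torus, the topology family behind the IBM and Fujitsu designs. It is a minimal illustration only; the dimension sizes and function name are invented for this example and do not reflect any vendor's actual routing code. Each node gets two links per dimension, which is why a 5-D torus wires ten links out of every node.

```python
# Minimal sketch (not any vendor's actual code): the nearest neighbours of a
# node in an n-dimensional torus, where each dimension wraps around.

def torus_neighbors(coord, dims):
    """Return the 2*n neighbouring coordinates of `coord` in a torus
    whose dimension i wraps around modulo dims[i]."""
    neighbors = []
    for i, size in enumerate(dims):
        for step in (-1, 1):
            nxt = list(coord)
            nxt[i] = (nxt[i] + step) % size  # wrap-around link
            neighbors.append(tuple(nxt))
    return neighbors

# Example: a node in a hypothetical 4x4x4x4x4 five-dimensional torus
# has ten neighbours, two along each dimension.
print(torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 4)))
```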

These highly complex nets emerge at a time when large data centers are flattening their network structures to cut complexity and cost.

"In ten years there could be supercomputer networks using 17 degrees of torus, but data centers are trying to converge on one link, they are trying to be as flat as possible," said Torsten Hoefler, a researcher at the University of Illinois and technical co-chair of the event.

The trend represents a 180-degree turn. Years ago data centers adopted the kind of clustering interconnects widely used in supercomputers as a cheap and effective way of building out large, powerful systems based on networks of many small nodes.

The network in the IBM Sequoia system is expected to break new ground in delivering high bandwidth at very low power consumption. To achieve that goal, IBM’s interconnect chip uses a multicore messaging unit and special packaging techniques to be described at the event.

China’s Galaxy and Japan’s Tofu networks, along with other interconnects described at the event, all pack increasing amounts of intelligence into the network itself. They handle functions such as queuing and the addition of floating-point numbers stored in data packets in hardware, boosting performance and lowering power consumption.
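The sketch below illustrates that in-network reduction idea in software: a toy "switch" sums the floating-point values carried in packets arriving on its ports and forwards a single combined packet upstream. The packet format and function names are invented for this illustration and are not taken from any of the systems described here.

```python
# Hedged illustration of in-network reduction: a software stand-in for a
# switch that adds the floats carried in incoming packets and emits one
# combined packet. The 8-byte payload format is an assumption of this sketch.

import struct

def pack_value(x: float) -> bytes:
    """Encode one float as an 8-byte big-endian payload."""
    return struct.pack(">d", x)

def reduce_in_switch(packets):
    """Sum the floats stored in the incoming packets and return a single
    packet carrying the result, mimicking what the hardware does at line rate."""
    total = sum(struct.unpack(">d", p)[0] for p in packets)
    return pack_value(total)

# Four nodes each contribute a partial sum; the switch forwards one packet.
incoming = [pack_value(v) for v in (1.5, 2.25, -0.75, 4.0)]
print(struct.unpack(">d", reduce_in_switch(incoming))[0])  # 7.0
```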

"The trend we see is for efficacy reasons networks are becoming smarter, and more of the networking stack is moving into the network chip–even parts of the message-passing interface library," said Fabrizio Petrini, chairman of the event and a researcher at IBM.

"You can execute algorithms in the network in nanoseconds at line rate–this is very powerful and fast compared to sending operations to the compute node," said Petrini.

At Hot Interconnects, the supercomputer designers will share thoughts about future networks to enable systems to hit tens of petaflops and even eventually crack the exascale performance barrier.

Separately, GNodal will describe its 64-port 10 Gbit/s Ethernet switch at the event. The company makes 10 and 40G switch systems based on its ASICs and was founded in part by former engineers from interconnect specialist Quadrics.
