
Gigabit serial links: Gateways to multicore scalability
As multicore processors power a growing number of high-performance, data-intensive applications, such as wireless basestations and high-performance compute platforms, system scalability can be achieved only through high-capacity embedded interconnect. Gigabit serial links help enable that scalability by reducing system cost, board area and pin count while delivering greater parallelism, performance and capacity.
Gigabit serial links define the physical layer of the high-speed communication link. The serializer/deserializer (serdes) at the heart of the gigabit serial link transforms the parallel data inside the device into serial data streams for communication to the external world. Compared with parallel interfaces, serdes-enabled serial links shrink the device area and package size while reducing cost and power consumption, enabling higher system performance.
Figure 1 offers a high-level overview of serdes operation. In the transmit direction, parallel data from inside the device is encoded byte by byte, and the serializer then converts the encoded parallel words into a serial bit stream driven onto the link.
The most common encoding scheme is 8b/10b, which maps each 8-bit data byte to a 10-bit code. The extra bits guarantee frequent signal transitions and embed framing control characters, so the receiver can recover the transmit clock and align to byte boundaries.
In some cases, such as 10-, 40- and 100-Gbit/s Ethernet, 64b/66b encoding is used instead, because its lower coding overhead leaves more of the line rate for data payload.
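Because both line codes trade payload for robustness, the usable data rate is always below the raw line rate. A quick sketch of that arithmetic (the 3.125- and 10.3125-Gbit/s lane rates are illustrative, borrowed from XAUI and 10GBASE-R respectively):

```python
def payload_rate(line_rate_gbps: float, data_bits: int, code_bits: int) -> float:
    """Usable data rate (Gbit/s) after line-code overhead."""
    return line_rate_gbps * data_bits / code_bits

# 8b/10b: 8 data bits carried in every 10 line bits (80% efficient)
print(payload_rate(3.125, 8, 10))     # XAUI-style lane -> 2.5 Gbit/s payload
# 64b/66b: 64 data bits in every 66 line bits (~97% efficient)
print(payload_rate(10.3125, 64, 66))  # 10GBASE-R lane -> 10.0 Gbit/s payload
```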
In the receive direction, a clock and data recovery (CDR) block first extracts the clock embedded in the serial input and retimes the data. The recovered bit stream is then deserialized, aligned to symbol boundaries and decoded by an 8b/10b or 64b/66b decoder into parallel data for internal processing.
Figure 1. The serializer/deserializer is the foundation of gigabit serial links.
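Stripped of the encoding, equalization and clock recovery described above, the core serdes job is a parallel-to-serial and serial-to-parallel conversion. A toy Python sketch of that round trip (the LSB-first bit ordering is purely illustrative; real serdes macros define their own):

```python
def serialize(byte_data: bytes) -> list:
    """Flatten bytes into a serial bit stream, LSB first within each byte."""
    bits = []
    for b in byte_data:
        for i in range(8):
            bits.append((b >> i) & 1)
    return bits

def deserialize(bits: list) -> bytes:
    """Reassemble the serial bit stream into parallel bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j, bit in enumerate(bits[i:i + 8]):
            byte |= bit << j
        out.append(byte)
    return bytes(out)

data = b"\xa5\x3c"
assert deserialize(serialize(data)) == data  # lossless round trip
```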
Many communications protocols can be built on top of the serdes function for various data-intensive applications. Figure 2 shows a typical system-on-chip integrating the CPU and digital signal processor, as well as hardware accelerators for application processing. Gigabit interconnects that can be built on top of the serdes function include Gigabit Ethernet, Common Public Radio Interface/Open Base Station Architecture Initiative (CPRI/OBSAI), JESD204B, Serial RapidIO and PCI Express (PCIe).
Figure 2. Communications protocols can be built on top of the serdes function in data-intensive applications.
These interconnects greatly enrich SoCs for today’s high-performance computation needs.
The options in detail
Gigabit Ethernet is a widely adopted data link layer for wired data communications. The standard's interface rate has scaled from 1 Gbit/s to 10, 40 and 100 Gbits/s to meet growing bandwidth requirements. 10G Ethernet has become popular in recent years and can connect to various physical layers (PHYs) over optical fiber or copper media.
In 2010, the IEEE 802.3ba standard was established to support 40G and 100G Ethernet. 40G Ethernet aggregates four lanes of 10-Gbit/s signaling; 100G Ethernet aggregates either 10 lanes at 10 Gbits/s or four lanes at 25 Gbits/s.
Gigabit Ethernet can serve as a backhaul connection for either short- or long-reach data transport, as it delivers packet-based, non-real-time data for applications that can tolerate the communication latencies. Latency can be reduced in certain cases through cut-through operation in Layer 2 switches, where a packet is forwarded as soon as its destination MAC address has been received.
Low-cost, low-pin-count PCI Express is a standard bus architecture widely used in consumer, server and industrial applications, primarily for computer expansion to peripherals such as graphics cards, server motherboard interconnects and computer-based control systems. Created in 2004 by Dell, Hewlett-Packard, IBM and Intel, PCIe can support up to 32 lanes. Each lane in PCIe version 2.x supports a 5-Gbit/s raw data rate; each lane in version 3.0 supports 8 Gbits/s. PCIe version 4.0, now going through the specification process, is expected to support 16 Gbits/s per lane.
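The per-lane figures above are raw signaling rates; usable bandwidth depends on the line code, which changed from 8b/10b to the more efficient 128b/130b at version 3.0. A rough sketch of the resulting per-lane and x16 payload rates (the 4.0 rate follows the draft figure cited above):

```python
# name: (raw rate in GT/s, data bits, code bits of the line code)
GENERATIONS = {
    "2.x": (5.0, 8, 10),      # 8b/10b coding
    "3.0": (8.0, 128, 130),   # 128b/130b coding
    "4.0": (16.0, 128, 130),  # per the draft rate cited in the article
}

def lane_payload_gbps(gen: str) -> float:
    """Per-lane payload bandwidth after line-code overhead."""
    raw, data_bits, code_bits = GENERATIONS[gen]
    return raw * data_bits / code_bits

for gen in GENERATIONS:
    per_lane = lane_payload_gbps(gen)
    print(f"PCIe {gen}: {per_lane:.2f} Gbit/s per lane, "
          f"{16 * per_lane:.1f} Gbit/s for a x16 link")
```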
PCIe can form a tree topology (Figure 3), with nodes connecting to one another via point-to-point links. Visualize the root node as the root complex, the leaf nodes as endpoints and the nodes that connect multiple devices to each other as switches.
Figure 3. Example of a PCIe standard bus architecture tree topology.
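The tree in Figure 3 can be modeled with a few lines of Python, with bus enumeration reduced to a walk that collects the endpoints. All device names here are made up for illustration:

```python
class Node:
    """One device in the PCIe tree: root complex, switch or endpoint."""
    def __init__(self, name: str, kind: str):
        self.name, self.kind = name, kind  # kind: "root", "switch", "endpoint"
        self.children = []

    def attach(self, child: "Node") -> "Node":
        """Connect a downstream device via a point-to-point link."""
        self.children.append(child)
        return child

def endpoints(node: Node) -> list:
    """Walk the tree and collect every endpoint, as enumeration would."""
    if node.kind == "endpoint":
        return [node.name]
    found = []
    for child in node.children:
        found.extend(endpoints(child))
    return found

root = Node("root-complex", "root")
sw = root.attach(Node("switch0", "switch"))
sw.attach(Node("gpu", "endpoint"))
sw.attach(Node("nic", "endpoint"))
root.attach(Node("nvme", "endpoint"))

print(endpoints(root))  # ['gpu', 'nic', 'nvme']
```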
The Common Public Radio Interface and the Open Base Station Architecture Initiative both target wireless basestation applications and are used for baseband interconnects to RF radio heads. CPRI and OBSAI have similar radio interfaces but with different feature sets; OBSAI enables interoperability among different vendors’ radios, while CPRI is widely adopted by major basestation OEMs and is more focused on the PHY and link layers.
CPRI and OBSAI both support line rates up to 6.144 Gbits/s per lane; the latest CPRI version, 4.2, extends this to 9.8304 Gbits/s per lane.
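All CPRI line-rate options are integer multiples of a 614.4-Mbit/s basic rate; the multipliers below reflect, to the best of my knowledge, the options defined through version 4.2:

```python
BASIC_RATE_MBPS = 614.4           # CPRI basic rate (option 1)
MULTIPLIERS = [1, 2, 4, 5, 8, 10, 16]  # rate options 1 through 7

rates = [BASIC_RATE_MBPS * m for m in MULTIPLIERS]
print([f"{r / 1000:.4g} Gbit/s" for r in rates])
# option 6 corresponds to 6.144 Gbit/s; option 7 (v4.2) to 9.8304 Gbit/s
```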
Traditionally, data converters have used high-speed low-voltage differential signaling (LVDS) or low-speed JESD207 parallel interfaces; but as systems require more bandwidth and more antenna paths, the parallel interface puts a heavy burden on SoC package size, pin count and cost.
The JESD204 serial standard provides gigabit serial links to support a high sampling rate, as well as more antennas, with greater area and cost efficiency.
JESD204B supports one link with multiple aligned lanes, each carrying up to a 12.5-Gbit/s data rate with deterministic latency.
An example application would be the use of JESD204B as a serial link between the wireless small cellular basestation processor and the integrated DAC/ADC analog RF front end.
As a result, the basestation can be built on a much smaller footprint with much lower power requirements, offering a cost-effective small cell solution.
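A first-order way to size such a link is to divide the converters' total payload bit rate, inflated by the 8b/10b coding JESD204B uses, across the available lanes. The sketch below ignores frame and control-word overhead, so treat its result as a lower bound; the example sample rate is illustrative:

```python
import math

def jesd204b_lanes(sample_rate_msps: float, bits_per_sample: int,
                   n_converters: int, lane_rate_gbps: float = 12.5) -> int:
    """First-order JESD204B lane count: total converter payload,
    inflated by 8b/10b coding, divided across lanes."""
    payload_gbps = sample_rate_msps * 1e-3 * bits_per_sample * n_converters
    line_gbps = payload_gbps * 10 / 8   # 8b/10b coding overhead
    return math.ceil(line_gbps / lane_rate_gbps)

# e.g. two 16-bit converters sampled at 983.04 Msps
print(jesd204b_lanes(983.04, 16, 2))  # -> 4 lanes
```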
Texas Instruments’ HyperLink multicore architecture uses a proprietary protocol on top of the serdes function, with four links each running at 12.5 Gbits/s, for a total of 50 Gbits/s. HyperLink not only supports high throughput between devices, but does so without requiring a complex software protocol. Each linked device can be simply viewed as a memory-mapped device separate from the others, and can access the memory and peripherals accordingly.
This greatly simplifies interchip communications and allows systems to scale easily by interconnecting multiple KeyStone-multicore-based devices for applications such as wireless basestations, media gateways and cloud computing servers, which all require multiple chips on a single board.
Another serial I/O architecture is RapidIO, a packet-based interconnect largely used in embedded systems such as DSP-based applications. It provides high-speed data transfer with low latency, as well as the ability to interconnect multiple endpoints.
Serial RapidIO is widely used in wireless infrastructure, video and image processing, military radar, server and industrial applications. The layered architecture includes logical, transport and physical layers to facilitate message passing, intercore communications through shared memory, data streaming and traffic flow control. Serial RapidIO supports up to 16 lanes, each running at up to 6.25 Gbits/s.
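Putting the interconnects covered in this article side by side, raw aggregate bandwidth is simply lanes times per-lane rate. These are raw line rates before coding overhead, with lane counts taken from the figures cited above (the 10.3125-Gbit/s Ethernet lane rate is the usual 64b/66b line rate):

```python
# name: (max lanes, raw Gbit/s per lane)
LINKS = {
    "Serial RapidIO": (16, 6.25),
    "HyperLink":      (4, 12.5),
    "PCIe 3.0 x16":   (16, 8.0),
    "100G Ethernet":  (10, 10.3125),
}

for name, (lanes, per_lane) in LINKS.items():
    print(f"{name}: {lanes} x {per_lane} = {lanes * per_lane:.1f} Gbit/s raw")
```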
Other serial links include InfiniBand, popular in server and high-performance computing installations, and Serial Advanced Technology Attachment (SATA), often found in storage devices.
Whether connecting devices within a piece of equipment, devices to backplanes or equipment to equipment, gigabit serial links are the gateways to meeting next-generation data bandwidth requirements with lower cost, simplified design and scalable capacity.
About the author
Zhihong Lin is strategic marketing manager for Texas Instruments’ wireless basestation infrastructure business, responsible for defining and planning key requirements for multicore SoCs for basestation applications. Lin holds an MS in electrical engineering from the University of Texas at Dallas.
