Cisco rolls 16nm ASICs

By Julien Happich

The new ASICs enable a more flexible set of interfaces for ports carrying 100 Mbit/s to 100 Gbit/s Ethernet and 32 Gbit/s Fibre Channel traffic. The company claims the chips are the first to pack 36 100G ports into a system that fits in a single rack unit. The ASICs also implement flow tables to monitor all traffic running across leaf and spine switches.

Cisco designed a family of three closely related ASICs for its systems, with aggregate bandwidth ranging from 1.6 to 3.6 Terabits/second. Two of the chips, made in TSMC’s 16FF+ process, started shipping in systems in February; a third will ship within two months.

“We wanted the performance and cost advantages” of going to 16nm, Thomas Scheibe, senior director of product management for Cisco’s data center switch group told EE Times.

The process helps Cisco pack 20 to 40 MBytes of memory into the ASICs, eliminating the need for external memory. The cost of external memory “is significantly higher with lower reliability and higher power consumption” than embedded memory, said Scheibe.

Cisco, long one of the world’s largest ASIC designers, has its own approach for dynamically partitioning the memory into shared or private buffers as needed. The memory serves the flow tables, the largest new blocks in the ASICs. The tables help calculate average flow completion times, a key metric for avoiding congestion and getting full use out of data center networks.
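As a rough illustration of the metric those flow tables feed, the sketch below computes an average flow completion time from exported flow records. The record fields, names, and nanosecond timestamps are assumptions made for illustration, not Cisco’s export format.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """One entry exported from a switch flow table (fields are illustrative)."""
    flow_id: str
    start_ns: int  # timestamp of the flow's first packet, in nanoseconds
    end_ns: int    # timestamp of the flow's last packet, in nanoseconds

def average_fct_ms(records):
    """Average flow completion time, in milliseconds, across exported records."""
    if not records:
        return 0.0
    total_ns = sum(r.end_ns - r.start_ns for r in records)
    return total_ns / len(records) / 1e6

records = [
    FlowRecord("a", 0, 2_000_000),        # a 2 ms flow
    FlowRecord("b", 500_000, 4_500_000),  # a 4 ms flow
]
print(average_fct_ms(records))  # 3.0
```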

“Today no top-of-rack switch has a flow table for cost reasons and most chips can’t export the flow data fast enough,” said Scheibe.

The ASICs also let users select the data rates at which they want to run Ethernet and Fibre Channel traffic. They support SFP interfaces for 10G or 25G traffic and QSFP links for 40G or 100G links.
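The rate pairings described above can be summarized in a few lines; the mapping and function names below are hypothetical shorthand for the article’s claims, not Cisco’s configuration interface.

```python
# Hypothetical mapping from transceiver cage type to the Ethernet rates
# (in Gbit/s) the article says the ASICs support on that cage.
SUPPORTED_RATES_G = {
    "SFP": {10, 25},
    "QSFP": {40, 100},
}

def validate_port(cage: str, rate_g: int) -> bool:
    """Return True if the requested rate is valid for the given cage type."""
    return rate_g in SUPPORTED_RATES_G.get(cage, set())
```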

The switch team was the first to use 16nm process technology inside Cisco. Scheibe claims the ASICs were the first switch chips of any kind produced in a 16nm process.

The ASICs are relatively large, about the same size as Cisco’s prior 28nm switch chips. Die size is one of the gating items for switch chips, Scheibe said. He credits Cisco’s ASIC simulation tools for helping the company go from “first prototypes to shipping products in 6-8 months.”

Cisco is making two versions of its new switch systems. One uses Broadcom’s Tomahawk chips as a system fabric and line card switch; the other uses the new ASICs. Some customers demand merchant chips because they modify the software running on the systems to program their own networks, he said.

Fig. 1: Cisco dominates in data center switches, according to International Data Corp. (Image: IDC)

“Cisco believes a significant proportion of its datacenter-networking installed base will be receptive to the price-performance attributes and capabilities of the Nexus switches based on its new proprietary ASIC technology,” said Brad Casemore, an analyst following the sector for International Data Corp.

“They emphasized use cases that spotlight the flow tables, analytics, and flexible port configurations to position the technology for where the market is going…private and hybrid cloud, containers and microservices [and] distributed IP-based storage,” he said in an email exchange.

Cisco commands a whopping 60% of the data center switch market. Its closest rival, Arista Networks, a heavy user of Broadcom chips, took just 8% of the market by revenue in 2015, according to IDC. Cisco, Arista and a group of ODMs each gained a percentage point of share last year, while Hewlett Packard fell from second place, it said.

Fig. 2: Cisco implements a NAND flash caching layer in its new HyperFlex servers. (Image: Cisco Systems)

Storage software from startup Springpath is one of the key ingredients in Cisco’s latest servers, announced along with the switches. The code manages solid-state and hard-disk storage distributed across server nodes and uses solid-state drives as a new tier of cache memory.

Cisco is an investor in Springpath, founded in 2012 by members of VMware. The startup provides a software interface for storage as well as a proprietary file system to optimize flash reads and writes for performance and chip endurance.

The new servers support continuous, real-time data de-duplication and compression. “Other vendors grabbed open source code; [Springpath] did it right, adding a ton of technology differentiation,” said Todd Brannon, who manages Cisco’s data center server group.
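Content-addressed deduplication of the kind described can be sketched as follows: hash each data block, store each unique block’s compressed bytes exactly once, and keep a manifest of hashes recording the original block order. This is a generic illustration of the technique, not Springpath’s code.

```python
import hashlib
import zlib

def dedupe_and_compress(blocks, store=None):
    """Content-addressed dedup: identical blocks are stored (compressed) once.

    Returns (manifest, store), where manifest lists the content hash of each
    input block in order, and store maps hashes to compressed bytes.
    """
    store = {} if store is None else store
    manifest = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)  # new content: store once
        manifest.append(digest)                   # duplicates only add a hash
    return manifest, store

blocks = [b"a" * 64, b"b" * 64, b"a" * 64]  # first and last blocks identical
manifest, store = dedupe_and_compress(blocks)
```

Here three blocks yield only two stored entries, since the duplicate costs just one more manifest entry.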

In December, rival Hewlett Packard Enterprise rolled out its latest data center servers, claiming they let users more easily configure compute, storage and networking resources in software. Like HPE, Cisco claims its latest systems simplify the job of flexibly managing complex collections of data center gear.

Cisco entered the server business in 2009 and now claims it has more than 52,000 customers. It used ASICs in its first servers to more than double the DRAM memory linked to a Xeon processor.

Cisco’s new server nodes pack multiple solid-state drives (SSDs) and hard disks. They use a 120 GB SSD to store data logs, a 480 GB to 1.6 TB SSD for caching, two SD cards as boot drives and up to 23 1.2 TB 10,000-rpm SAS hard drives.

The SSDs use PCI Express interfaces and are based on commodity flash. Cisco did not adopt 3-D NAND drives for the new systems. The 2U servers use one or two Xeon E5-2600 v3 CPUs.


About the author:

Rick Merritt is Silicon Valley Bureau Chief at EE Times

