TI, Stanford look to optimise networking chips for OpenFlow
Software-defined networks (SDNs) aim to cut through the complexities of a growing variety of protocols and techniques, often coded into silicon, that make today’s nets difficult to manage. SDNs also aim to bring to networks the same kind of virtualization capabilities now widely used in computers.
The OpenFlow specification developed at Stanford is leading the charge, gaining support in products from Hewlett-Packard, IBM, NEC and a handful of startups. But some say it is just one approach to creating a software-defined network.
The trend could be disruptive for established companies such as Cisco Systems that have deep investments in systems and chips geared to handle today’s protocols.
In an effort to get ahead of the curve, Cisco recently announced a broad initiative called Open Network Environment that it says goes beyond OpenFlow. Meanwhile, the company is also developing systems for OpenFlow as part of an internal startup called Insieme, according to a New York Times report.
“In short, Cisco is planning on offering multiple approaches to program networks across its portfolio,” said Nick Lippis, a network analyst, writing for a GigaOM Pro report.
Meanwhile at Stanford, TI and others are collaborating on “a paper design of an OpenFlow optimized switch fabric,” said Martin Izzard, director of systems and applications research at TI.
The project, expected to run through the end of the year, aims to define in silicon “a classification pipeline variable in length and able to look at different kinds of labels or tags,” said Izzard.
The packet lookup and classification techniques in an OpenFlow switch are expected to be relatively generic so they can be flexibly configured as needed by network managers. Indeed, engineers are hammering out a high-level applications programming interface for routers and switches as the next big step in the evolution of the OpenFlow specification.
The goal of OpenFlow is to let end users program network systems using servers as controllers. However, the switches and routers will still require some embedded processing to interpret the APIs and carry out their jobs.
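The match-action model described above can be sketched in a few lines of code. This is a hypothetical illustration of the general idea, not any vendor's implementation or the OpenFlow wire protocol: a controller installs generic match/action rules into a flow table, and the switch classifies each packet against them in priority order, punting unmatched packets back to the controller.

```python
# Hypothetical sketch of OpenFlow-style flow-table classification.
# Names (FlowRule, FlowTable, install, classify) are illustrative only.

from dataclasses import dataclass

@dataclass
class FlowRule:
    priority: int
    match: dict    # generic header fields, e.g. {"eth_type": 0x0800, "ip_dst": "10.0.0.2"}
    actions: list  # e.g. ["output:2"]

class FlowTable:
    def __init__(self):
        self.rules = []

    def install(self, rule):
        """Controller-side call: add a rule, keeping highest priority first."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def classify(self, packet):
        """Switch-side lookup: first rule whose match fields all agree wins."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]  # table miss: send the packet to the controller

# A controller installs a specific forwarding rule plus a low-priority default.
table = FlowTable()
table.install(FlowRule(priority=10,
                       match={"eth_type": 0x0800, "ip_dst": "10.0.0.2"},
                       actions=["output:2"]))
table.install(FlowRule(priority=1, match={}, actions=["drop"]))

print(table.classify({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}))  # ['output:2']
print(table.classify({"eth_type": 0x0806}))                        # ['drop']
```

Because the match fields are generic key/value pairs rather than protocol-specific logic, the same lookup machinery can serve different protocols and tags, which is the flexibility the classification pipeline described above aims to bake into silicon.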
“My gut feel is the [OpenFlow] machines will be more efficient than ones that have to serve a large number of protocols,” said Izzard. “These boxes could be software configured for different protocols and topologies so they could be less power hungry,” he said.
“This is an inflection point in how networks are developed, and that means there’s a chance for markets to change,” Izzard said. “My timeline is this year to understand this market well and see which way it’s going,” he added.
“In data centers there’s a clear case for OpenFlow, but how quickly it moves into the WAN is a big discussion,” he said. “It’s not completely clear if it will move into public networks, but I expect this year it will become clear,” he added.