
Inphi starts sampling 100-Gigabit Ethernet CMOS PHY solutions to de-risk the development of next generation line cards


Interviews |
By eeNews Europe



This month Inphi Corporation unveiled what the company claims are the industry's first and lowest-power 100 Gigabit Ethernet (GbE) CMOS PHY solutions supporting the IEEE 802.3ba standard and targeting next-generation high-density 100G line cards. The company's new 100 GbE CMOS chipsets aim to deliver roughly a third of the power and twice the level of integration of what is already on the market.

Inphi's new IN112510 100 GbE CMOS Gearbox (GB) and IN012525 100 GbE CMOS Clock Data Recovery (CDR) chipsets have been developed to accelerate time-to-market for higher-aggregate-bandwidth systems while containing the cost of next-generation 100 GbE line cards targeted at data center and enterprise networks.

Based on Inphi's iPHY architecture, announced in March 2011, the cost-effective, energy-efficient 100 GbE links are aimed at data center and service provider networks, which are struggling to satisfy the global economy's relentless hunger for more bandwidth. With service providers and data centers demanding low power consumption, Inphi's latest iPHY CMOS PHY solutions are intended to let them upgrade to 100 GbE networks while keeping a lower carbon footprint. By integrating multiple channels along with the transmit and receive functions on a single IC, Inphi claims to double the level of integration available from existing 100 GbE PHY and CDR offerings.

The iPHY IN112510 is a single-chip, low-power PHY for 10:4 gearbox applications on high-density 100 GbE and OTU4 100 Gbps line cards with 25-28 Gbps electrical interfaces.
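For readers unfamiliar with the gearbox function, it simply repacks the same aggregate bit rate onto fewer, faster lanes. The sketch below illustrates the 10:4 arithmetic using the standard 100GBASE-R and OTU4 lane rates; these figures are industry conventions assumed here for illustration, not values taken from Inphi's datasheets.

# Minimal sketch of the 10:4 gearbox lane arithmetic performed by a
# 100 GbE PHY such as the IN112510. Lane rates are the standard
# 100GBASE-R (10 x 10G to 4 x 25G) and OTU4 figures, assumed for
# illustration rather than quoted from the article.

def gearbox_rates(total_gbps, in_lanes=10, out_lanes=4):
    """Return (per-lane input rate, per-lane output rate) in Gbps."""
    return total_gbps / in_lanes, total_gbps / out_lanes

# 100 GbE: 103.125 Gbps aggregate after 64b/66b line coding.
eth_in, eth_out = gearbox_rates(103.125)
print(f"100 GbE: 10 x {eth_in:.5f} Gbps -> 4 x {eth_out:.5f} Gbps")

# OTU4: roughly 111.81 Gbps aggregate, landing near the top of the
# 25-28 Gbps electrical range mentioned above.
otu_in, otu_out = gearbox_rates(111.81)
print(f"OTU4:    10 x {otu_in:.3f} Gbps -> 4 x {otu_out:.3f} Gbps")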

The iPHY IN012525 is a low-power CDR for 100 GbE and OTU4 next generation 100G modules.

eeNews Europe:  What is the significance of the  new 100 GbE CMOS PHY solutions?

Sheth: In March 2011, at OFC/NFOEC 2011 in Los Angeles, we announced our iPHY architecture, which allows 100-Gigabit PHY and CDR solutions to be built in a CMOS process.

Now we have officially sampled 100 GbE CMOS PHY and 100 GbE CDR solutions to our customers. This means Inphi is the first to sample the industry's lowest-power CMOS 100 GbE gearbox and CDR PHY semiconductors for the line cards and modules enabling next-generation 100G platforms.

The customers will now have the silicon in their hands to do more detailed evaluation and testing.  

eeNews Europe:  How long has this development work taken so far?

Sheth: About two and a half years' worth of work has gone into this so far. We are initially announcing a chipset solution. One chip is a gearbox, or 100 GbE PHY, that typically resides on a line card. The other is a 100 GbE CDR in CMOS, which will typically reside inside a CFP-2 module.

eeNews Europe:  What are the key application areas that you are targeting with these solutions?

Sheth: The smaller form factor, lower-power and lower-cost CFP-2 module is expected to be very popular in data center and enterprise applications, and those are the primary applications we are going after.

The CDR chip would sit inside the CFP-2 module and the PHY chip would sit outside on a line card, behind the CFP-2 module. That chipset solution allows us to own both ends of the link, because you have a PHY chip sitting on a line card driving the CDR chip inside the module through the connector.

eeNews Europe:  What do you see as the key attraction for designers of Inphi’s new chipset solution?

Sheth: With this chipset solution we essentially own the link, which allows us to de-risk the design. Our customers will find that very attractive, because the CDR chip inside the module talks to a PHY chip that we guarantee works over that channel.

That's the big attraction from a design-risk or design-development standpoint on the system side. The other advantage is that we are the first vendor in the marketplace to have a CMOS solution for the 100 GbE PHY and 100 GbE CDR. There have been other announcements but no real silicon. We announced a real product at ECOC 2011 in Geneva, Switzerland.

The CMOS solution offers a lot of advantages over existing SiGe solutions. The current solutions on the market are based on SiGe process technology, which typically consumes far more power, is not as area-efficient, and offers lower levels of integration.

What CMOS offers is much lower power, roughly a third of what is available on the market today. The current SiGe solutions are about 8 W, but our solution comes in at about 2.5 W, so it is a substantial advantage from a power standpoint.

Existing SiGe solutions have typically been two-chip solutions, with a transmit chip and a receive chip. What we have done is integrate the transmit and receive functions into a single chip, making it a true transceiver. That obviously saves area, especially for enterprise and data center applications where board area is a scarce resource.

eeNews Europe:  What is the benefit of using a digital CMOS process technology?

Sheth: We are manufacturing this in a generic digital CMOS process at TSMC. This is the highest-volume process at TSMC. There are various flavours of process technology around the 40 nm node: for instance, there is a low-power variant and a high-voltage variant, and we have basically gone with the generic digital CMOS process. This is the process that sees the maximum volume from companies like Nvidia and Broadcom, because they push a lot of large gate-count digital SoCs through it.

We have tweaked that process a little and made it analog-friendly, because our chip has a lot of analog circuitry inside it. We essentially had to tweak the libraries and the cells for performance and power; that became the core IP of the company, and then we built this chip around those libraries and cells. It still allows us to manufacture the chip on a generic digital CMOS process. Because of the volume it sees, this process tends to mature the fastest and gets to high yield numbers the quickest, which allows us to take advantage of that and ride down the cost curve. That is another advantage we bring to the table compared with SiGe, which is still a boutique process. You will never see the kind of yields, or the cost efficiencies, with SiGe that you see with CMOS.

No one has solved these problems at 100 GbE yet. We are the first, and that allows us to go out and win a lot of the first-generation designs that want to take advantage of the low power, smaller area and cost efficiencies of CMOS.

eeNews Europe:  How big is the investment Inphi has made to develop the new solutions?

Sheth: This has taken substantial investment. We have a team of seven or eight people in the UK, in Northampton. We also have a team in Westlake Village, near Los Angeles, and a team in Santa Clara, California. Across the three sites there are about 30 people.

We are designing everything on a 40 nm CMOS process, and a full-mask tape-out typically costs a few million dollars. It is a multi-million-dollar investment and we are not done yet; it is going to be a long-term commitment. This is a business in which the design cycles are long and you earn the revenue over a period of five years.

eeNews Europe: Did you look at SiGe as a solution?

Sheth: We did not go with SiGe, even though the company has a lot of experience with SiGe as well as with CMOS. We have two other product lines. One is a full-blown CMOS product line, and the other is 40 to 100 GbE products such as modulator drivers. Those are typically built in a non-CMOS process, in GaAs, SiGe or other III-V materials, so that experience is there too. That is the original product line that started Inphi as a company back in 2000.

So there is plenty of experience in the company to do SiGe-based design, and there are a lot of people with a GaAs background, but we very deliberately chose not to use GaAs or SiGe. Those process technologies have much better raw performance than CMOS, but they just don't have the cost structure, the power, area and latency advantages, or the higher levels of integration that CMOS can offer. Besides, once any technology moves to a mainstream or higher-volume application it tends to move to CMOS very quickly. We wanted to lead that transition rather than wait for the industry to make it and then follow. We wanted to be leaders, not followers.

eeNews Europe:  Do you think SiGe will disappear?

Sheth: I think SiGe will always be present in some boutique applications, but for these 100 GbE PHY solutions SiGe is already history. The first-generation designs are served by SiGe technology: what is deployed in the field today, the first generation of 100 GbE, is mostly SiGe, and that has allowed the 100 GbE technology to be validated and customers to deploy it. AT&T and Verizon have been able to deploy core and edge routers from Juniper, Alcatel-Lucent and Cisco in the field, and all of these systems have 100 GbE PHY boards that typically contain SiGe technology.

That is quickly going to come to an end, because the next-generation core and edge router boards already want to move to higher densities. The current SiGe technology allows a maximum of only two ports of 100 GbE on a single line card. If you want to move to a four-port solution, or an eight- or ten-port solution, you have to go to CMOS, because there is no way the power and the area will scale to those densities with SiGe. So all the next-generation designs are going to be CMOS. That goes without saying.
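To put rough numbers on that scaling argument, the sketch below uses the approximate per-port PHY power figures quoted earlier in the interview (about 8 W for SiGe and 2.5 W for CMOS); the port counts and the comparison itself are illustrative assumptions, not Inphi data.

# Rough illustration of the line-card power scaling argument, using the
# approximate per-port PHY power figures quoted earlier (~8 W SiGe,
# ~2.5 W CMOS). Port counts are assumptions for illustration only.

SIGE_W_PER_PORT = 8.0
CMOS_W_PER_PORT = 2.5

for ports in (2, 4, 8, 10):
    sige = ports * SIGE_W_PER_PORT
    cmos = ports * CMOS_W_PER_PORT
    print(f"{ports:2d} x 100 GbE ports: SiGe PHYs ~{sige:5.1f} W, "
          f"CMOS PHYs ~{cmos:5.1f} W")

# At eight to ten ports the SiGe PHYs alone would draw 64-80 W per card,
# against 20-25 W for the CMOS chipset.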

We have already engaged with a lot of customers who are bringing up the silicon in their labs and have started designs around this. I think SiGe will go away for 100 GbE specifically, but I'm sure tomorrow there will be a 400 Gigabit Ethernet spec and a 100-Terabit Ethernet spec, and there are many other applications where SiGe will lead the way, get the first generation of the technology deployed, and then transition out and hand it over to CMOS.

eeNews Europe:  What were some of the design challenges you faced when opting for a digital CMOS process technology?

Sheth: SiGe by its very nature is a very forgiving design process. If you are designing a 100 GbE PHY, you can put a bunch of transistors together, put the design together, and it is very likely that you will hit the performance metrics the spec requires. With the margin that SiGe offers, it is a very forgiving process from a performance standpoint.

CMOS is not as forgiving. You really have to drill into the detail of every single transistor and every single cell, and make sure that when you integrate those cells onto a single chip you have worked out every part.

CMOS also tends to be a noisier process than SiGe. A lot of noise can couple in from other sources, and the chip has almost 40% digital content, which means there is noise coming from those portions too.

So you need to work out how to decouple the analog portions of the design from the digital portions, how to put the different cells together, how much decoupling to add, and how to make sure there is no cross-talk. These are the kinds of design challenges you run into when you are putting a chip into CMOS, and while you are doing all this you are also trying to make sure you are genuinely taking advantage of the CMOS process.

You need to make sure that when you put all of that together you get the requisite performance, and at the same time hit that low power number and integrate more so that you keep the advantages. What is a strength can also become a weakness: CMOS brings low power, high levels of integration and area optimization, but that same area optimization can lead to more noise, and you might have to take a hit on performance.

So you have to balance the two and that’s where really good circuit designers or architects come into the picture.

eeNews Europe:  How have you achieved the high signal integrity performance?

Sheth: Signal integrity is always a huge challenge when you transition from one speed of Ethernet to the next, and this is the other part of the equation.

The design would not have been successful without the extensive work that had already been done on the signal integrity side, and we are very proud of the work we have done here.

In the two years before we even got started with the design, we had a team of core signal integrity experts working with our customers and partners to create a link simulation set-up.

When you move to 100 GbE data rates, you realise that you are not just going to build a point solution, put it out into the field and expect it to work. That is a 'Hail Mary' approach. It might have worked at 1 GbE, and may even have worked at 10 GbE, but it was never going to work at 100 GbE. So you need to pre-empt all of that ahead of time, starting with the chip models.

We took the S-parameter models of our customers' channels, connector models from our connector vendors, and optics models from the optics vendors. We recreated the whole link, from our chip to the receiving chip with all the elements in between, and put it through a detailed signal integrity simulation tool. Building that tool flow took our design team about a year and a half. The tool allows us to take all the different parameters, plug them into the model, and extrapolate how our chip will perform in our customers' environment. We have various knobs we can turn in that tool flow.

The tool is a statistical simulator as opposed to an exhaustive one, so it statistically covers all the different corners and extrapolates how the chip would perform in a customer's environment without having to physically run each and every simulation.

That saves you a lot of time and gets you a lot of data and although it is not the real thing it still gives you a great deal of confidence in terms of how well the chip is going to perform.  That is the basis of how we designed our transmitters, our driver circuits and our receiver circuits.  A lot of this work was done before we started putting the design together.
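As a very rough illustration of what a statistical link simulator does, the sketch below takes a sampled single-bit pulse response of a channel and bounds the receive eye opening both deterministically and statistically. The pulse response values and the simple ISI-enumeration method are generic assumptions for illustration; they are not Inphi's proprietary tool flow.

import numpy as np

# A minimal, generic sketch of statistical link analysis: rather than
# simulating every bit pattern, treat each ISI cursor of the channel's
# sampled pulse response as a +/- contribution and combine the
# possibilities to bound the eye opening. The pulse response here is
# synthetic; in practice it would come from cascaded channel, connector
# and optics models like those described above.

pulse = np.array([0.05, 1.00, 0.25, 0.12, 0.06, 0.03])  # 1 sample per UI
main_idx = int(np.argmax(pulse))
main = pulse[main_idx]
isi = np.delete(pulse, main_idx)

# Worst-case ("peak distortion") eye opening: every ISI cursor lines up
# against the main cursor at the same time.
worst_case_eye = main - np.sum(np.abs(isi))

# Statistical view: for random data each cursor contributes +c or -c
# with probability 1/2, so enumerate the exact ISI distribution.
levels = np.zeros(1)
for c in isi:
    levels = np.concatenate([levels + c, levels - c])
eye_openings = main - np.abs(levels)

print(f"worst-case eye opening : {worst_case_eye:+.3f}")
print(f"mean eye opening       : {eye_openings.mean():+.3f}")
print(f"1st-percentile opening : {np.percentile(eye_openings, 1):+.3f}")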

eeNews Europe:  What do you see as the growth prospects for 100 GbE?

Sheth: 100 GbE growth is going to go gangbusters. Although it is coming off a very small base, I think it is going to be huge. There is a real pull from the marketplace, as opposed to us trying to push the marketplace.

At 10 GbE in my previous company we had a chip ready in 2003 and we pushed it into the marketplace for almost three years and yet it was really 2006 when we started seeing some traction. 

In that three-year period there was no killer app that needed that kind of bandwidth and pulled the marketplace forward. That is not the case this time: bandwidth is now the challenge.

Customers want to deploy new bandwidth and they want to deploy it at the right price points and that is the big challenge.  I don’t think it is a question of if.  It is now a question of when. 

I think the key challenge is ‘Can 100 GbE get to the right price point?’  If it does I don’t think there is any doubt that there will be a huge pull from the marketplace because of what is happening with all the different sources of bandwidth like tablets and mobile broadband and such like.  I think that is the fundamental difference between then and now.

We expect 100 GbE to grow like crazy and I think the industry’s challenge is going to be to hit the right price points for the end customers to deploy.  That is really the problem we are trying to solve. 

I think SiGe would not be able to get there.  But with CMOS we are hoping to get there and we are going to be able to take this technology and make it mainstream.  By building it on a generic process and packaging technology we will allow the industry to get down to the price points it needs to meet.

 
eeNews Europe: What about the future? Is it going to be CFP-4?

Sheth: I think CFP-2 is next, and I'm sure there will be CFP-4; that is where the industry will go. It is just the same as what happened with 10 GbE. There was XENPAK, XPAK and X2, which are the equivalent of the CFP today. Then came the XFP, which is the equivalent of CFP-2 today, and then came the SFP+, which is the equivalent of the CFP-4.
 

SFP+ is obviously the most popular one, but the industry had to take steps to get there, and that is exactly what is going to happen here. CFP-2 is next. The X2 solution at 10 GbE was very compelling and that is why it became very popular. Unfortunately the CFP solution for 100 GbE is not a very good solution from a power and area standpoint: customers want far more bandwidth on a single line card than CFP can fit. The XFP at 10 GbE was not a very popular form factor; it was X2 that was really popular, and then SFP+ that was hugely popular. XFP was a step, but not a hugely popular one, especially not with the data center and enterprise guys.

I don't expect that to be the case with CFP-2. I expect CFP-2 to be quite popular, and then the industry will convert to CFP-4. CFP-4 is where the industry will end up and where the maximum volume will be. But I expect CFP-4 to really happen in the 2014 timeframe and CFP-2 in the 2012 to 2013 timeframe.

eeNews Europe:  What do you see as being the greatest future design challenges?


Sheth: From a PHY standpoint I think the challenge is really going to be the receivers, because as you transition from CFP-2 to CFP-4 you are essentially going from a fully re-timed interface to a non-re-timed one.

Basically, there is no CDR inside the optical module. The signal comes off the fiber, hits the O-to-E converters, and the electrical signal is fed directly into the PHY over the module connector and across the PCB, so it gets distorted even further. What you need there is a receive equalizer circuit that can address the attenuation and noise as well as the dispersion, and equalize the signal to recover the data. That is going to be the design challenge for the next generation of PHYs.
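To make that last point concrete, the sketch below shows a generic receive-side equalizer of the kind alluded to: a feed-forward equalizer (FFE) whose taps are adapted with LMS to undo channel ISI. The channel model, tap count, step size and training scheme are all illustrative assumptions and say nothing about Inphi's actual receiver design.

import numpy as np

# Generic sketch of receive-side equalization: an LMS-adapted
# feed-forward equalizer (FFE) recovering NRZ data distorted by a
# dispersive channel. All parameters here are illustrative assumptions.

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 20000) * 2 - 1          # random NRZ symbols, +/-1
channel = np.array([0.05, 1.0, 0.45, 0.2, 0.1])   # toy dispersive channel
rx = np.convolve(bits, channel)[: len(bits)]      # received, ISI-distorted signal
rx = rx + 0.02 * rng.standard_normal(len(rx))     # additive noise

n_taps, mu, delay = 9, 0.01, 5                    # FFE length, LMS step, decision latency
taps = np.zeros(n_taps)
taps[delay - 1] = 1.0                             # start as a rough pass-through

for k in range(n_taps, len(rx)):
    window = rx[k - n_taps:k][::-1]               # most recent samples first
    y = taps @ window                             # equalizer output
    err = bits[k - delay] - y                     # error vs known symbol (training-mode LMS)
    taps += mu * err * window                     # LMS tap update

# Check: count sign errors on the last 5000 equalized symbols.
errors = 0
for k in range(len(rx) - 5000, len(rx)):
    y = taps @ rx[k - n_taps:k][::-1]
    errors += int(np.sign(y) != bits[k - delay])
print(f"symbol errors after adaptation: {errors} / 5000")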

Inphi unveils industry’s first 100-Gigabit Ethernet CMOS PHY solutions for next generation line cards

Inphi’s iPHY architecture announcement

Visit Inphi at www.inphi.com
