Reclaiming lost yield through methodical power integrity optimization

Technology News |
By eeNews Europe



As designs move to 28nm and beyond, designers fully experience the effects of much higher power density and the diminishing effectiveness of decoupling capacitance at these geometries: failures due to dynamic power noise integrity issues are a significant contributor to yield loss in many designs. Synchronous switching and increasing di/dt at advanced process nodes (Figure 1) make it increasingly challenging for designers to deal with on-chip dynamic voltage drop (DVD) and high-frequency electromagnetic interference (EMI). Neither is to be taken lightly; studies have shown that DVD fluctuations introduce sizable gate delays, causing timing-related yield loss, and that EMI from digital switching similarly causes mixed-signal yield loss through compromised noise integrity.
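To get a feel for where dynamic voltage drop comes from, a simple first-order model sums the resistive IR drop and the inductive L·di/dt drop across the power delivery path. The sketch below uses assumed, illustrative component values (not figures from the article or from any specific process node):

```python
# Illustrative only: back-of-the-envelope dynamic voltage drop (DVD)
# estimate. All component values below are assumptions for illustration.

def dynamic_voltage_drop(i_avg_a, r_ohm, di_dt_a_per_s, l_h):
    """First-order DVD: resistive IR drop plus inductive L*di/dt drop."""
    return i_avg_a * r_ohm + l_h * di_dt_a_per_s

# Assumed numbers: 2 A average current, 50 mOhm effective grid resistance,
# 1 nH effective package/grid inductance, current ramping 1 A in 10 ns.
drop = dynamic_voltage_drop(2.0, 0.05, 1.0 / 10e-9, 1e-9)
print(f"estimated DVD: {drop * 1000:.0f} mV")  # 200 mV
```

On a sub-1V supply, a drop of this magnitude is a substantial fraction of the noise budget, which is why the steeper di/dt of newer nodes dominates the picture even when grid resistance is kept constant.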

Figure 1: Transient current (di/dt) has increased significantly for each new process node, causing higher dynamic voltage drop and digital switching noise (Source: INSA, Toulouse, France).

Test-mode yield loss due to DVD

Designers already work within a DVD 'budget'. In a study by Mentor Graphics and SMIC measuring failures across 25 designs, 49% of the yield loss was attributed to SCAN/ATPG/functional-fail defects (Figure 2).

To save implementation cost, the on-chip Power Delivery Network (PDN) is often dimensioned for functional modes, not the worst-case test mode. Because scan test produces a higher dynamic power density than functional operation, chips that would work properly in functional mode fail in scan test due to the increased DVD. It is somewhat ironic that the design methods and tests that should guarantee more good product and higher profit themselves contribute to yield loss.


Figure 2: SCAN/ATPG failure is a significant share of the total yield loss. (Source: Mentor Graphics, Taipei, Taiwan).

Mixed-signal issues

The integration of digital and analog/RF circuits in mixed-signal designs has always caused substrate noise problems, even for the best designs and the best designers. The impact of substrate noise depends on the distance between the noise aggressor and the victim circuit, the layout of the circuit, and the behavior of the digital circuit (the magnitude and spectrum of the noise). Advanced process nodes, originally adopted only for digital designs, are now being used in mixed-signal products too, putting further pressure on specifications and yield. Digital blocks are growing in size and hence generating more noise. Furthermore, designers have to deal with the high-frequency effects of a much steeper di/dt at newer process nodes.

As an example, moving from a 180nm to a 65nm technology, analog designers have seen substrate noise from synchronously switching digital blocks go up by 18 dB, purely from the change of process, without any change of architecture or design.

Meanwhile, new product performance requirements place even higher demands on mixed-signal design; consider the evolution of modulation schemes in wireless LAN (Figure 3):

Figure 3: Evolution of wireless standard complexity over the last decade. Going from QPSK to 256 QAM, noise requirements have increased by 27dB (Source: Teradyne Inc.).

The evolution from 802.11b (QPSK) to 802.11ac (256QAM) demonstrated significant advances in modulation efficiency and data density. However, the Error Vector Magnitude (EVM) requirement – a measure of the allowed deviation of constellation points from their ideal positions – has tightened significantly: it now needs to be 27 dB better than the original 802.11b specification.

So, while tens of dBs of additional noise are introduced by scaling the process nodes, new applications and standards simultaneously require much lower on-chip noise than previous generations. This combination of high-performance mixed-signal requirements in increasingly noisy environments is a cocktail doomed for yield degradation.

However, for both mixed-signal and digital designs it is possible to use a systematic optimization methodology to shape the design’s digital power noise signature, and counteract both dynamic voltage drop and on-chip noise to recover the lost yield.

Improving dynamic power integrity and power noise yield

From a dynamic power integrity and power noise perspective, the fundamental problem with a synchronous design is exactly that: it is synchronous. Having all flops clocked at (ideally) the same time creates significant power noise integrity problems. But by using a modern EDA back-end optimization tool like FloorDirector, designers can attack this problem in a systematic and controlled manner. FloorDirector works in the time and frequency domains concurrently. By analyzing the available timing slack on every path in every scenario, and the dynamic power in every use case, the tool can derive clock scheduling solutions that shape the current pulse and noise spectrum to match the designer's power or frequency optimization targets. As all optimizations are within-cycle optimizations, the cycle-to-cycle functionality of the design remains unaltered.
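FloorDirector's internals are proprietary, but the underlying idea of within-cycle clock scheduling can be sketched with a toy model: flop groups that all switch at t=0 stack their current pulses into one large spike, while staggering each group's clock within its available timing slack flattens the peak. The group sizes and offsets below are invented for illustration:

```python
# Toy model of within-cycle clock scheduling (not FloorDirector itself).
# Each flop group draws a rectangular current pulse; offsets are in picoseconds.

def peak_current(groups, timeline_ps=1000, pulse_ps=100):
    """Sum rectangular current pulses on a 1 ps grid; return the peak total."""
    current = [0.0] * timeline_ps
    for amp_a, offset_ps in groups:
        for t in range(offset_ps, min(offset_ps + pulse_ps, timeline_ps)):
            current[t] += amp_a
    return max(current)

# Four equal flop groups, 1 A each.
synchronous = [(1.0, 0)] * 4                                    # all clocked at t = 0
staggered = [(1.0, 0), (1.0, 100), (1.0, 200), (1.0, 300)]      # spread within slack

print(peak_current(synchronous))  # 4.0 A peak
print(peak_current(staggered))    # 1.0 A peak
```

The key constraint, which the toy model glosses over, is that each offset must stay within the path's timing slack so that cycle-to-cycle behavior is unchanged; the real tool must solve this scheduling problem across every path and scenario simultaneously.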

A recent mixed-signal design example, which was low yielding due to severe functional and scan-test power integrity failures, revealed a sizable recoverable yield: if DVD in scan test mode could be reduced by 20%, and a specific frequency band of the noise spectrum by 14 dB, a significant share of the test failures would be removed, increasing yield by 6.4%. The optimization was successful; both the DVD and noise targets were well within reach within three days of work.

The earnings rationale is straightforward:

Yield improvement => less scrap product => manufacturing savings

Manufacturing savings – optimization cost = bottom-line profit

The full calculation is of course complex and governed by multiple factors, but we can walk through an example and let everyone adjust it to their own organization's approach and variables. Let's start with some conservative assumptions to keep the calculation fair:

First, it is reasonable to assume that some programs will not benefit significantly, so let's eliminate the program outliers and focus on plus/minus one sigma of a normal distribution, meaning the bulk 68.2% of total semiconductor revenue. Second, while the yield improvement of 6.4% mentioned earlier might not apply to all designs, it is reasonable to assume that at least half of that improvement (3.2%) would be achievable.

Now, back to our example. Assume a small/mid-size semiconductor company with 300M$ of revenue from 20 tape-outs, 80% yield, and an average material cost of 50% of revenue, meaning 150M$ worth of good product is needed. To get 150M$ of good material, the company must manufacture 150M$ / 80% = 187.5M$ of material.

We conservatively assumed an average improvement of 3.2% on 68.2% of the volume, which equals a 2.2% yield improvement across the full 100% of the volume. The company then only needs to produce 150M$ / 82.2% = 182.5M$ of material. Compared to the original 187.5M$, the end result is an additional 5M$ of profit.
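The walk-through above can be reproduced in a few lines; all figures come from the article's own example:

```python
# Reproduces the article's yield-economics example (all figures from the text).

good_material_needed = 150.0e6      # $: 50% material cost on $300M revenue
baseline_yield = 0.80
yield_uplift = 0.032 * 0.682        # 3.2% improvement on 68.2% of volume (~2.2%)

baseline_spend = good_material_needed / baseline_yield
improved_spend = good_material_needed / (baseline_yield + yield_uplift)
savings = baseline_spend - improved_spend

print(f"baseline spend: ${baseline_spend / 1e6:.1f}M")   # $187.5M
print(f"improved spend: ${improved_spend / 1e6:.1f}M")   # $182.5M
print(f"savings:        ${savings / 1e6:.1f}M")          # ~$5.0M
```

Substituting your own revenue, yield, and material-cost figures gives a quick first-order estimate for any organization.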

As the tool license cost and the engineering time required to optimize 20 different tape-outs are, by any measure, considerably less than that, the result is a quantifiable and absolutely worthwhile effort.

It is worth noting that we started by excluding the outliers to make the calculation conservative and widely applicable. But it is also among the outliers that we find the extremes. At one end, there will be the one design that sees no improvement from optimization; there never was a problem to begin with, and you would have 'wasted' engineering hours and a small amount of money.

At the other end of the range, however, there will be a design going through too many tape-outs and test-and-debug re-spins, resulting in embarrassing customer relations and a profit-crushing late market introduction. Even after all that, the product might still not meet its yield expectations.

The moral: the benefits of a structured, methodical approach before tape-out greatly outweigh frantic problem solving after the first tape-out.

Figure 4: Optimization result of dynamic peak current demand. Peak has been reduced by 60% (!) without affecting implementation cost or system timing. Notice the softer slope which reduces noise significantly (Source: Teklatech A/S).

Manual yield optimization by design is a daunting task; most designs are simply too large and complex to fully fathom. By using automated design optimization solutions to reduce power and noise integrity problems, it is possible to recover a sizable percentage of the yield loss. The designer's main task is to identify the challenging areas of the design and set the optimization targets, then let the tools do the heavy lifting and systematically reclaim the lost yield. Teklatech's FloorDirector solution improves digital power noise integrity to reduce failures during scan testing and dynamic functional faults, and to improve analog performance in an increasingly noisy digital environment.

About the author

Christian Petersen has headed multiple sales and marketing efforts to target markets and drive new business strategies within mixed-signal, power and wireless semiconductors. He joined Teklatech in 2009 from a position as Sales Director at Wolfson Microelectronics, where he headed Tier 1 mobile manufacturer penetration. He holds a Bachelor's degree in Business Administration focused on management, marketing, leadership, technology and finance, drawing on faculty from the premier business schools Harvard, Haas and Berkeley.
