
Tools and techniques for the implementation of variation-aware custom IC designs
Not only can designers use the Cadence Virtuoso ADE Product Suite to analyze simulation results and verify that a design is specification compliant, they can also use it to reduce the effect of process variation on a design. Solving this problem requires more than fast simulation; it requires adopting new tools and methodologies.
Minimizing the effect of process variation is an important consideration because it directly impacts the cost of a design. From Pelgrom's Law, device mismatch due to process variation decreases as the square root of device area increases, see Note 1. For example, reducing the standard deviation, sigma, of the input offset voltage of a differential pair from 6mV to 3mV means that the transistor area needs to be increased by a factor of four.
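As a rough sketch of this trade-off, the snippet below applies the Pelgrom relation, sigma(dVt) = A_vt / sqrt(W x L), to compute the area penalty; the mismatch coefficient used here is an assumed, illustrative value rather than data for any particular process.

```python
import math

# Pelgrom relation: sigma(dVt) = A_vt / sqrt(W * L).
# A_vt is a process-dependent mismatch coefficient; the value below is an
# assumed, illustrative number, not data for any real process.
A_VT_MV_UM = 3.0   # mV*um (assumed)

def sigma_offset_mv(gate_area_um2):
    """Standard deviation of the differential-pair offset for a given gate area."""
    return A_VT_MV_UM / math.sqrt(gate_area_um2)

def area_scale_factor(sigma_old_mv, sigma_new_mv):
    """Area increase needed to reduce the offset sigma from sigma_old to sigma_new."""
    return (sigma_old_mv / sigma_new_mv) ** 2

print(area_scale_factor(6.0, 3.0))                   # -> 4.0: halving sigma costs 4x area
print(sigma_offset_mv(0.25), sigma_offset_mv(1.0))   # -> 6.0 mV vs 3.0 mV
```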
Increasing transistor size also increases die cost, since die cost is proportional to die (and transistor) area. In addition, larger devices may degrade performance because of their increased parasitic capacitances. To maintain performance, power dissipation may need to be increased, using more current to drive those larger parasitic capacitances. The result is that analog circuits do not scale down as quickly as digital circuits; historically, maintaining the same level of analog performance has required roughly the same die area from process generation to process generation.
If an ADC requires 20% of the die area of a product at 180nm, then after two process generations, at the 90nm node, the digital area has shrunk by roughly a factor of four while the ADC area has not, so the ADC and digital areas are roughly equivalent. After two more process generations, at the 45nm node, the ADC requires 4x the area of the digital blocks, see Note 2. This example is exaggerated, since the logic in an ADC also scales; however, the basic point stands: process variation is an important design consideration for analog design.
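The area bookkeeping behind this example can be reproduced in a few lines; the figures below are the illustrative ones from the text, a fixed ADC area and a roughly 4x digital area shrink every two process generations, not measurements from a real product.

```python
# Illustrative area bookkeeping for the ADC example above. The numbers are the
# assumed ones from the text: the ADC area stays fixed while the digital area
# shrinks by roughly 4x every two process generations.
adc_area = 20.0       # arbitrary units: 20% of the 180nm die
digital_area = 80.0   # the remaining 80% of the 180nm die

for node in ("90nm", "45nm"):
    digital_area /= 4.0   # two generations of roughly 0.5x area shrink each
    print(node, "ADC area / digital area =", adc_area / digital_area)
# -> at 90nm the ratio is about 1 (roughly equal), at 45nm it is about 4
```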
Traditionally, the main focus of block-level design has been on parasitic closure, that is, verifying that the circuit meets specification after layout is complete and parasitics from the layout have been accounted for in simulation. This focus on parasitic closure meant that there was only limited support for analyzing the effect of process variation on a design. During the design phase, sensitivity analysis allowed a designer to quantitatively analyze the effect of process parameters on performance. During verification, designers have used corner analysis or Monte Carlo analysis to verify performance across the expected device variation, environmental, and operating conditions.
In the past, these analysis tools were sufficient because an experienced designer already understood their circuit architecture, its capabilities, and its limitations. Designers could achieve performance specifications by judiciously applying overdesign. However, ever-decreasing feature size has increased the effect of process variation, while market requirements for better performance at lower power mean that designers have less margin available for guard banding their designs. The decreasing feature size has other implications for designers; for example, power supply voltage scaling creates the need for new circuit architectures.
As an example of how power supply voltage affects circuit architecture, consider the evolution of ADC design, where there has been a move from pipeline ADCs at legacy nodes such as 180nm to successive approximation ADCs (SAR ADCs) at advanced nodes such as 45nm. This change has occurred for several reasons: a SAR ADC can operate at lower power supply voltages than a pipeline ADC, it dissipates less power, and it benefits from the performance scaling of digital gates. The change to SAR ADC designs has other implications. For a pipeline ADC, the matching requirements reduce to the problem of analyzing differential pair mismatch; the DAC in a SAR ADC, however, requires more sophisticated analysis. Let's look at an example of statistical analysis of the DAC used in a SAR ADC. Shown below is the signal-to-noise and distortion ratio (SNDR, or SINAD) of a capacitor D/A converter (CAPDAC). A CAPDAC is used in a successive approximation ADC to generate the reference voltage levels against which the input voltage is compared in order to determine the digital output code. The SINAD of the CAPDAC determines the overall ADC accuracy.

On the left is the distribution of the capacitance variation and on the right is the CAPDAC signal-to-noise ratio (SNR) distribution. From the SNR distribution, the mean and standard deviation of the CAPDAC SNR can be calculated. If the specification requires the SNR to be greater than 60dB, does this result mean that the yield will be 100%? Another question to consider is whether the SNR distribution is Gaussian, since the analysis of the results is affected by the type of distribution. Or we might want to quantify the process capability, Cpk, a parameter used in statistical quality control to measure the design margin. In the past, this type of detailed statistical analysis was not available, and designers needed to export the data and perform the analysis with tools such as Microsoft Excel.
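As a sketch of the kind of post-processing that used to be done outside the simulator, the snippet below computes the mean, standard deviation, Cpk against the 60dB limit, and a normality check; the SNR samples it operates on are synthetic, assumed values standing in for exported Monte Carlo results.

```python
import numpy as np
from scipy import stats

# snr_db stands in for the per-sample CAPDAC SNR values exported from a
# Monte Carlo run; the array below is synthetic data, used only so that the
# sketch is runnable (assumed mean of 62dB and sigma of 0.5dB).
rng = np.random.default_rng(0)
snr_db = rng.normal(loc=62.0, scale=0.5, size=200)

spec_lsl = 60.0                                   # lower spec limit: SNR > 60dB
mean, sigma = snr_db.mean(), snr_db.std(ddof=1)

# Process capability for a one-sided (lower) specification limit
cpk = (mean - spec_lsl) / (3.0 * sigma)

# Shapiro-Wilk normality test: a small p-value suggests the distribution is
# not Gaussian, in which case yield estimates based only on mean and sigma
# should be treated with caution.
w_stat, p_value = stats.shapiro(snr_db)

print(f"mean={mean:.2f}dB  sigma={sigma:.2f}dB  Cpk={cpk:.2f}  "
      f"Shapiro-Wilk p={p_value:.3f}")
```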
In Monte Carlo analysis, the values of statistical variables are perturbed based on the distributions defined in the transistor model. The method of selecting the sample points determines how quickly the results converge statistically. Let's start with a quick review: in the CAPDAC example, we ran 200 simulations and all of them passed. Does that mean the yield is 100%? The answer is no.
It means that for the sample set used for the Monte Carlo analysis, the yield is 100%. In order to know what the manufacturing yield will be, we need to define a target yield, for example, a yield greater than 3 standard deviations, or 99.73%, and a level of confidence in the result, for example 95%. Then we can use a statistical tool called the Clopper-Pearson method to determine whether the Monte Carlo results have a >95% chance of having a yield of 99.73%.
The Clopper-Pearson method produces a confidence interval, the minimum and maximum possible yield, given the current yield, the number of Monte Carlo iterations, and so on. By using a confidence interval, we know how reliable the statistical results are. Another consequence of this rigorous approach to statistical analysis is that more Monte Carlo iterations may be required. As a result, designers need better sampling methods that reduce the number of samples, that is, Monte Carlo simulation iterations, required to trust the results.
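A minimal sketch of the Clopper-Pearson calculation for the CAPDAC case, 200 passing points out of 200, is shown below; it is a generic implementation of the method, not the code used inside Virtuoso ADE.

```python
from scipy.stats import beta

def clopper_pearson(passes, trials, confidence=0.95):
    """Two-sided Clopper-Pearson confidence interval for the yield."""
    alpha = 1.0 - confidence
    lower = 0.0 if passes == 0 else beta.ppf(alpha / 2, passes, trials - passes + 1)
    upper = 1.0 if passes == trials else beta.ppf(1 - alpha / 2, passes + 1, trials - passes)
    return lower, upper

# 200 Monte Carlo points, all of them passing the 60dB SNR specification
low, high = clopper_pearson(200, 200, confidence=0.95)
print(f"yield is between {low:.4f} and {high:.4f} with 95% confidence")
# The lower bound comes out near 0.982, so 200 passing points are not yet
# enough to claim a 99.73% (3-sigma) yield at this confidence level.
```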
Random sampling is the reference method for Monte Carlo sampling since it replicates the actual physical processes that cause variation; however, it is also inefficient, requiring many iterations, or simulations, to converge. New sampling methods have been developed to improve the efficiency of Monte Carlo analysis by selecting sample points more uniformly. Shown in Figure 2 is a comparison of samples selected for two random variables, for example, n-channel mobility and gate oxide thickness. The plots show the samples generated by random sampling and by a new sampling algorithm called low-discrepancy sampling, or LDS. Looking at the sample points, it is clear that LDS produces more uniformly spaced samples. This means that the sample space has been more thoroughly explored and, as a result, the statistical results converge more quickly. This translates into fewer samples being required to correctly estimate the statistical results: yield, mean value, and standard deviation.

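The effect can be illustrated with a short sketch that compares pseudo-random samples with a Sobol low-discrepancy sequence and scores both with a discrepancy metric; Sobol is assumed here only as a stand-in for the LDS algorithm described above, and is not necessarily the one implemented in Virtuoso ADE.

```python
import numpy as np
from scipy.stats import qmc

# Two statistical variables, e.g. n-channel mobility and gate-oxide thickness,
# sampled 256 times each way.
n_points = 256

rng = np.random.default_rng(1)
random_pts = rng.uniform(size=(n_points, 2))

sobol = qmc.Sobol(d=2, scramble=True, seed=1)
lds_pts = sobol.random_base2(m=8)       # 2**8 = 256 points

# Discrepancy measures how far a point set is from perfectly uniform coverage;
# lower is better, and the LDS set should score noticeably lower.
print("random sampling discrepancy:", qmc.discrepancy(random_pts))
print("LDS (Sobol) discrepancy:    ", qmc.discrepancy(lds_pts))
```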
The LDS sampling method replaces Latin Hypercube sampling because it is just as efficient and also supports Monte Carlo auto-stop. Monte Carlo auto-stop is an enhancement to Monte Carlo analysis that optimizes simulation time.
Statistical testing is used to determine whether the design meets some test criteria; for example, for the CAPDAC, assume that you want to know with a 90% level of confidence that the SNR yield is greater than 99.73%. You define these criteria at the start of the Monte Carlo analysis, and the results are checked after every iteration. The analysis stops if one of two conditions occurs. First, the analysis stops if the minimum yield from the Clopper-Pearson method is greater than the target, that is, the SNR yield is greater than 99.73%.
More importantly, the Monte Carlo analysis will also stop if Virtuoso ADE XL finds that the maximum yield from the Clopper-Pearson method cannot exceed 99.73%. Since failing this test means the design has an issue that needs to be fixed, this result is just as important. It also turns out that failure usually becomes apparent quickly, after only a few iterations. As a result, using statistical targets to automatically stop Monte Carlo can significantly reduce the simulation time. In practice, what does this look like?
Consider the plot in Figure 3, which shows the upper bound (maximum yield), the lower bound (minimum yield), and the estimated yield of the CAPDAC as a function of the iteration number. The green line is the lower bound of the confidence interval on the estimated yield. By the 300th iteration, we know that the yield is greater than 99% with a confidence level of 90%; in other words, we can be very confident that the CAPDAC yield will be high. In addition, thanks to Monte Carlo auto-stop, we only needed to run the analysis once.

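A minimal sketch of such a statistics-driven stopping rule is shown below; run_point() and passes_spec() are hypothetical placeholders for one Monte Carlo simulation and its pass/fail check, and the loop is not the Virtuoso ADE XL implementation.

```python
from scipy.stats import beta
import random

def clopper_pearson(passes, trials, confidence=0.90):
    """Two-sided Clopper-Pearson confidence interval for the yield."""
    alpha = 1.0 - confidence
    lo = 0.0 if passes == 0 else beta.ppf(alpha / 2, passes, trials - passes + 1)
    hi = 1.0 if passes == trials else beta.ppf(1 - alpha / 2, passes + 1, trials - passes)
    return lo, hi

def monte_carlo_autostop(run_point, passes_spec, target_yield,
                         confidence=0.90, max_iters=1000):
    """Stop as soon as the confidence interval proves or rules out the target yield."""
    passes = 0
    for n in range(1, max_iters + 1):
        result = run_point()              # one Monte Carlo point (placeholder)
        passes += passes_spec(result)     # pass/fail check (placeholder)
        lo, hi = clopper_pearson(passes, n, confidence)
        if lo >= target_yield:
            return "PASS", n, (lo, hi)    # yield proven above the target
        if hi < target_yield:
            return "FAIL", n, (lo, hi)    # target yield can no longer be met
    return "UNDECIDED", max_iters, (lo, hi)

# Illustrative run: a synthetic 99.99% pass rate against the 99% yield /
# 90% confidence target from Figure 3 (both numbers are assumptions).
verdict, n, ci = monte_carlo_autostop(
    run_point=lambda: random.random(),
    passes_spec=lambda r: r < 0.9999,
    target_yield=0.99)
print(verdict, "after", n, "points; yield confidence interval:", ci)
```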
To summarize, the two improvements to Monte Carlo sampling are LDS sampling and Monte Carlo auto-stop. LDS sampling uses a new algorithm to select the sampling points for Monte Carlo analysis more effectively. Monte Carlo auto-stop uses the statistical targets, yield and confidence level, to determine when to stop the Monte Carlo analysis. As a result of these two new technologies, the time required for Monte Carlo analysis can be significantly reduced.
Note 1: Remember that in analog design, designers rely on good matching to achieve high accuracy. Designers can start with a resistor whose absolute accuracy may vary +/-10% and take advantage of its good relative accuracy, that is, the matching between adjacent resistors, to achieve highly accurate analog designs. For example, the matching between adjacent resistors may be as good as 0.1%, allowing designers to build data converters with 10-bit (about 1000 parts per million, ppm), 12-bit (about 250ppm), or even 14-bit (about 60ppm) accuracy.
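For reference, one LSB of an N-bit converter expressed as a fraction of full scale gives the matching targets quoted above:

```python
# One LSB of an N-bit converter as a fraction of full scale, in ppm and percent
for bits in (10, 12, 14):
    lsb = 1.0 / (2 ** bits)
    print(f"{bits}-bit: 1 LSB = {lsb * 1e6:.0f} ppm = {lsb * 100:.3f}% of full scale")
# -> roughly 977 ppm, 244 ppm, and 61 ppm respectively
```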
Note 2: In reality, only the components in the design that are sensitive to process variation do not scale, so the area of the digital blocks will scale and the area of some of the analog blocks may also scale. The solution designers typically adopt to maintain scaling is to implement techniques such as digitally assisted analog (DAA) design to compensate for process variation. While adopting DAA may enable better scaling of the design, it also increases schedule risk and verification complexity.
About the author:
Art Schaldenbrand is Senior Product Marketing Manager at Cadence Design Systems – www.cadence.com
