Improve DAC integral nonlinearity through gain correction
The static absolute accuracy of a digital-to-analog converter (DAC) can be described in terms of three fundamental error types: offset error, gain error, and nonlinearity. Of the three, linearity errors are the most challenging to handle: in many applications the user can null out offset and gain errors, or compensate for them by building end-point auto-calibration into the system design, but linearity errors require more complex correction.
A DAC (Figure 1) converts digital input codes to proportional analog output signals, which can be either currents or voltages. The resolution of a DAC refers to the number of unique output levels that the DAC is capable of producing. For example, a DAC with a resolution of 8 bits is capable of producing 2^8 (256) different output levels. Ideally, each digital code step produces an equal analog step; in reality the steps are unequal because of non-idealities.
Figure 1: 8-bit DAC symbol and role
Linearity of a DAC
Before looking at how to improve the integral nonlinearity (INL) of a DAC, it would be best to review how we determine its linearity, as shown in Figure 2. In a DAC, we focus primarily on two measures of its linearity: differential nonlinearity (DNL) and integral nonlinearity (INL).
DNL is the maximum deviation of an actual analog output step between adjacent input codes from the ideal step value (Δ). INL is the maximum deviation, at any point in the transfer function, of the actual output level from its ideal value, where the ideal values lie on a straight line drawn between the actual zero and full-scale outputs of the DAC.
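The two measures above can be computed directly from a table of measured output levels. The sketch below is illustrative only; the end-point normalization and the choice to report the worst-case signed deviation are assumptions, not taken from the article:

```c
#include <math.h>

/* Worst-case DNL, in LSB: largest deviation of an actual step between
 * adjacent codes from the ideal step size `lsb`. */
double max_dnl_lsb(const double *measured, int n, double lsb) {
    double worst = 0.0;
    for (int code = 1; code < n; ++code) {
        double step = measured[code] - measured[code - 1];
        double dnl = (step - lsb) / lsb;   /* deviation from ideal step */
        if (fabs(dnl) > fabs(worst))
            worst = dnl;
    }
    return worst;
}

/* Worst-case INL, in LSB: largest deviation of an actual output level
 * from the end-point line drawn between measured zero and full scale. */
double max_inl_lsb(const double *measured, int n, double lsb) {
    double slope = (measured[n - 1] - measured[0]) / (n - 1);
    double worst = 0.0;
    for (int code = 0; code < n; ++code) {
        double ideal = measured[0] + slope * code;
        double inl = (measured[code] - ideal) / lsb;
        if (fabs(inl) > fabs(worst))
            worst = inl;
    }
    return worst;
}
```

In production test, these loops run over all 256 codes of an 8-bit DAC; the functions above are generic in `n` so they work for any resolution.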
Figure 2: DAC linearity errors, DNL and INL
The conventional end-point calibration technique is used to remove gain error in DACs. However, the gain error is typically not constant across the full-scale range of the DAC because of various systematic non-idealities in silicon. These non-idealities can produce unidirectional gradients that result in poor INL performance.
The major non-idealities that cause systematic patterns are as follows:
- Edge effects, e.g., Length of Diffusion (LOD)
- Doping gradients
- Oxide thickness gradients resulting in a threshold shift across die
- Thermal gradients
- Voltage drops in the supply lines
Therefore, an end-point calibration technique is not enough to remove gain error completely and can result in poor INL performance. Applications that demand absolute output accuracy may need a much lower INL.
One way of improving INL performance is a firmware technique that takes advantage of working with a SoC to build two-point auto-calibration into the system. For this example, we will use the PSoC® 3 family, which has up to four multiple-range 8-bit voltage/current DACs, each with an INL of about 1.5 LSB. The on-chip 20-bit delta-sigma analog-to-digital converter (ADC) has an INL of less than 1 LSB when operating in 12-bit mode, which is more than enough accuracy to calibrate an 8-bit DAC. Firmware is required to complete a feedback loop between the DAC output and the ADC (Figure 3).
Figure 3: Current-output DAC (IDAC) with ADC feedback
INL typically reaches its maximum value near the middle of the full range, as shown in Figure 4. If we could bring this peak down, we would improve INL significantly. This observation leads us to use two-point calibration instead of the end-point or single-point techniques, which by themselves may not be enough to remove gain error completely.
Figure 4: DAC linearity with different gain regions
The first calibration point is used to calibrate the first half of the full range (Equation 1).
Similarly, the second calibration point calibrates the second half of the full range (Equation 2).
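Equations 1 and 2 themselves are not reproduced in this text. A plausible reconstruction, consistent with the surrounding description (all symbol names here are assumptions), is a gain-correction ratio at each calibration point:

    G1 = Y_ideal(mid)  / Y_meas(mid)     (first-half gain correction)
    G2 = Y_ideal(full) / Y_meas(full)    (second-half gain correction)

where Y_meas is the DAC output at the mid-scale or full-scale code as measured through the ADC feedback loop, and Y_ideal is the corresponding ideal output.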
The algorithm, shown in Figure 5, works as follows. Initially, the two gain-correction values are calculated and saved, one at the mid-point and one at the end-point of the DAC's digital input range. This is the only time the ADC is used, so the time for measurement and computation is spent only once, during calibration.
Figure 5: Flow chart of the two-point gain-correction algorithm
During normal operation, if a digital code less than the mid-point is passed to the DAC, it is calibrated using the first gain correction value before conversion. If a digital code greater than the mid-point is passed to the DAC, it is calibrated using the second gain correction value before conversion.
The calibration is done on the fly by updating the gain trim registers, as shown in Figure 3. A direct memory access (DMA) module, if available on the SoC, can be used to update the registers even faster.
Changing the gain correction value in the middle of the full range will create a trim offset (Equation 3).
This trim offset needs to be compensated for in the second half of the range, as shown in the algorithm of Figure 5.
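The whole correction can be sketched as a small pre-distortion routine. Because Equations 1–3 are not reproduced in the text, the exact arithmetic below (ratio-based gain factors and a continuity-preserving trim offset at mid-scale) is a reconstruction from the description above, and all names are hypothetical:

```c
#include <stdint.h>

#define MID_CODE  128.0   /* mid-scale code of an 8-bit DAC */
#define FULL_CODE 255.0   /* full-scale code of an 8-bit DAC */

typedef struct {
    double gain_lo;      /* first-half gain correction (per Equation 1) */
    double gain_hi;      /* second-half gain correction (per Equation 2) */
    double trim_offset;  /* step created at mid-scale (per Equation 3) */
} two_point_cal;

/* meas_mid, meas_full: DAC output at the mid and full-scale codes,
 * measured through the ADC feedback loop, expressed in code units. */
two_point_cal compute_cal(double meas_mid, double meas_full) {
    two_point_cal c;
    c.gain_lo = MID_CODE / meas_mid;
    c.gain_hi = FULL_CODE / meas_full;
    /* Switching gain values at mid-scale would create a step in the
     * transfer function; this offset removes it in the second half. */
    c.trim_offset = MID_CODE * (c.gain_hi - c.gain_lo);
    return c;
}

/* Pre-distort a digital input code before it is written to the DAC. */
uint8_t correct_code(uint8_t code, const two_point_cal *c) {
    double corrected;
    if (code <= (uint8_t)MID_CODE)
        corrected = code * c->gain_lo;
    else
        corrected = code * c->gain_hi - c->trim_offset;
    if (corrected < 0.0)   corrected = 0.0;     /* clamp to valid codes */
    if (corrected > 255.0) corrected = 255.0;
    return (uint8_t)(corrected + 0.5);          /* round to nearest code */
}
```

On the PSoC 3 itself the correction is applied through the gain trim registers rather than by rewriting the input code, but the arithmetic is the same; the floating-point math here would typically be replaced by fixed-point scaling in production firmware.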
For comparison, the INL of the 8-bit current DAC (IDAC) was first measured with conventional end-point calibration, before the proposed algorithm was applied. The INL was about 1.5 LSB, as shown in Figure 6.
Figure 6: INL of IDAC with conventional end-point gain correction
After the proposed algorithm is implemented, the INL is reduced to 0.8 LSB (Figure 7).
Figure 7: INL of IDAC with two-point gain correction
A firmware technique for improving DAC integral nonlinearity in a SoC is practical and realistic. It has been demonstrated in an implementation example using the PSoC® 3 family, and the results are tangible. The proposed technique achieved INL improvement of 85% for voltage and current DACs in the PSoC® 3.
About the author
Onur Ozbek is an Electrical Design Engineer on the staff of Cypress Semiconductor Corp., which he joined in 2005. He has an MSEE from the University of Washington and 8 years of analog/mixed-signal and embedded system design experience.