
# ADS8860: How do DNL and INL influence ADC system performance (quantitatively)?

Hi there

I am starting to watch the Precision Labs videos; the first section introduces some DC and AC parameters. I found that it clearly explains the definitions of those parameters, but I did not gain much quantitative sense of how they influence ADC system performance.

For example, from the DNL definition we can straightforwardly understand that DNL ≤ -1 LSB means there is a missing code, so the error will be large. Same for INL: the larger the INL, the larger the error will be.

Are there any TI application notes that illustrate these parameters in more depth than just their basic definitions? Even better if they include some real cases as examples.

This question arises from the experience that when we choose an ADC, parameters like the number of bits, SNR, and VREF are relatively easy to confirm from system requirements. But it is not so easy to identify how much INL or DNL we may need.

Those videos are great learning materials, and I will keep going. Thanks very much!

• Hello,

Generally, DNL is important to guarantee no missing codes. If you are using a data converter in a feedback control system, then making sure the DNL magnitude is less than 1 LSB (i.e., DNL > -1 LSB) ensures monotonicity, which is important for control-loop stability.

INL can be thought of as a change in gain over the input voltage range. For DC signals, you can simply add the maximum INL to the offset. For AC signals, however, this change in gain distorts the sinewave and contributes directly to THD. There is a correlation between the two, but it is best to measure THD directly with an AC input and INL with a DC input.
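To make the DC side of this concrete, here is a minimal sketch (not from this thread) of how DNL and INL are computed from measured code-transition voltages. The 3-bit resolution, 2.5 V reference, and transition data are made up purely for illustration; an end-point fit is used for INL, which removes offset and gain error so that only the curvature remains:

```python
# Hypothetical 3-bit example (made-up data): compute DNL and INL in LSBs
# from measured code-transition voltages, using an end-point fit for INL.
vref = 2.5
bits = 3
lsb_ideal = vref / 2**bits          # ideal code width (0.3125 V here)

# Voltages at which the output code increments; a 3-bit ADC has 7 transitions.
transitions = [0.31, 0.64, 0.91, 1.26, 1.58, 1.87, 2.21]

# DNL[k]: actual width of code k relative to the ideal 1 LSB width.
dnl = [(transitions[k + 1] - transitions[k]) / lsb_ideal - 1
       for k in range(len(transitions) - 1)]

# End-point INL: deviation of each transition from a straight line through
# the first and last transitions, normalized by the fitted (actual) LSB.
span = transitions[-1] - transitions[0]
lsb_actual = span / (len(transitions) - 1)
inl = [((transitions[k] - transitions[0]) - k * lsb_actual) / lsb_actual
       for k in range(len(transitions))]

print("DNL (LSB):", [round(d, 3) for d in dnl])
print("INL (LSB):", [round(i, 3) for i in inl])
```

A DNL value of -1 LSB or below at any code would indicate a missing code; the end-point INL is zero at both ends by construction, so nonzero values in the middle reflect the bow that distorts AC signals.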

Here is a link to another e2e post that discusses how to make INL measurements:

https://e2e.ti.com/support/data-converters/f/73/p/944642/3490213?tisearch=e2e-quicksearch&keymatch=INL#3490213

Here are some additional links that may provide some more insight:

https://www.ti.com/lit/an/slaa013/slaa013.pdf?ts=1603416432091

Regards,
Keith Nicholas

• Hi Keith, thanks for the reply.

Another question: in section 2 of the TI Precision Labs video, the offset error (Eo) is defined as "the y-axis intercept", where the y-axis is the output code and the x-axis is the input analog voltage. According to this, I think Eo should be a digital value. However, I checked the ADS8860 datasheet (page 6) and found its Eo is specified as typical ±1 mV, which seems to be defined on the x-axis.

Can the user just treat them as equivalent? That is, with 1 LSB = 38 µV, ±1 mV ≈ 26 LSB, so we could use 26 LSB as the Eo value when doing Eo calibration?

Thanks.

• Hello,

Yes, you are correct. The measured ADC offset is in codes (LSBs) but is usually specified in voltage. To get the equivalent number of LSBs, simply divide the specified offset voltage by 1 LSB. In the case of the ADS8860, 1 LSB is equal to Vref/2^16.

The typical offset of the ADS8860 is specified as ±1 mV. The equivalent offset in LSBs when using a 5 V reference is then ±0.001 V / (5 V / 2^16) ≈ ±13 LSB.
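The arithmetic above can be captured in a few lines; this sketch is my own (the function name is not from TI documentation), and it simply applies 1 LSB = Vref/2^16 for a 16-bit converter like the ADS8860:

```python
# Convert a datasheet offset spec (in volts) to LSBs for an N-bit ADC,
# given the reference voltage used in the system.
def offset_in_lsb(offset_v, vref, bits=16):
    lsb = vref / 2**bits        # one code width in volts
    return offset_v / lsb

# With Vref = 5 V (as in the example above): +/-1 mV is about +/-13 LSB.
print(round(offset_in_lsb(1e-3, 5.0)))   # -> 13
# With Vref = 2.5 V (1 LSB = 38 uV): +/-1 mV is about +/-26 LSB.
print(round(offset_in_lsb(1e-3, 2.5)))   # -> 26
```

This also shows why the earlier 26 LSB figure and the 13 LSB figure are both right: the same 1 mV spec maps to a different number of codes depending on the reference voltage.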

Regards,
Keith