ADS1282-SP: FSR Utilization, ENOB and Accuracy


Part Number: ADS1282-SP

 Hello,

I have a couple of questions about how to interpret the data from the ADS1282-SP, specifically the ENOB and the error budget analysis.

First of all, I have read all of the documentation at training.ti.com/adc-noise

I have read this presentation (by the way, it really helped me separate the types of errors introduced by the ADC).

In our design, we have Vsignal_diff = 50mV applied differentially to an ADS1282-SP. This signal varies slowly over time (essentially a DC signal).

From your table below, since I have a DC signal at the input of the ADC, should I only consider the effective resolution, noise-free resolution, and noise-free counts to calculate the resolution I can achieve?

We have VREF = VREFP - VREFN = 5V - 0V = 5V. We plan to use only the sinc filter (without the FIR filter) at 128kSPS.

FSR = VREF/PGA = 5V/1 = 5V.

We can calculate the system resolution:

Resolution Loss:

Resolution Loss = |log2(%Utilization)| = log2(ADC FSR / System FSR) = log2(5V / 50mV) = 6.7 bits

To calculate the effective resolution (bits), I don't know which SNR to use. The only SNR value provided in the datasheet is for high-resolution mode at 4kSPS (see page 17). I tried to extrapolate the SNR to higher sampling rates and different PGA gains (extrapolated values in the table below). Using those extrapolated values, I get an SNR of 102dB at 128kSPS with PGA = 1.

Extrapolated SNR (dB) vs. data rate and PGA gain:

Data Rate (SPS)   PGA = 1   PGA = 64
250               130       114
500               127       111
1000              124       108
2000              121       106
4000              118       103
8000              115       100
16000             111       97
32000             108       94
64000             105       91
128000            102       88

FSR_rms = (VREFP-VREFN)/(2 x sqrt(2) x PGA) = 1.7677 Vrms (formula from the datasheet, if PGA = 1)

I can calculate Vn,RMS = FSR_rms / 10^(SNR/20) = 1.7677V / 10^(102dB/20) = 14.04uVrms (using the extrapolated table above).

Effective Resolution = log2(System FSR / Vn,RMS) = log2(50mV / 14.04uVrms) = 11.79 bits

System Resolution = Effective Resolution - Resolution Loss = 11.79 - 6.7 = 5.1 bits. (Does this number represent the actual resolution I can achieve in my system, before considering errors like INL, offset error, gain error, gain error drift, and offset error drift?)

However, if I want to use a PGA of 64:

Resolution Loss = log2(ADC FSR / System FSR) = log2((5V/64) / 50mV) = 0.644 bits

FSR_rms = (VREFP-VREFN)/(2 x sqrt(2) x PGA) = 0.02762 Vrms

Vn,RMS = FSR_rms / 10^(SNR/20) = 0.02762V / 10^(88dB/20) = 1.0996uVrms (88dB is the extrapolated value from the table above)

Effective Resolution = log2(PGA x System FSR / Vn,RMS) = log2(64 x 50mV / 1.0996uVrms) = 21.47 bits

System Resolution = Effective Resolution - Resolution Loss = 21.47 - 0.644 = 20.82 bits. (Again, does this number represent the actual resolution I can achieve in my system, before considering errors like INL, offset error, gain error, gain error drift, and offset error drift?)
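
As a cross-check of the arithmetic above, here is the same calculation as a short Python sketch. It follows my method exactly (so if the method is wrong, the numbers will be too), and the SNR values are my extrapolations, not datasheet specifications:

import math

VREF = 5.0
V_SIGNAL = 50e-3   # differential input signal range (System FSR)

def my_system_resolution(pga, snr_db):
    adc_fsr = VREF / pga                             # ADC full-scale range
    fsr_rms = VREF / (2 * math.sqrt(2) * pga)        # FSR(rms), datasheet formula
    vn_rms = fsr_rms / 10 ** (snr_db / 20)           # input-referred RMS noise
    res_loss = math.log2(adc_fsr / V_SIGNAL)         # resolution loss from FSR utilization
    eff_res = math.log2(pga * V_SIGNAL / vn_rms)     # effective resolution as I computed it
    return eff_res - res_loss

print(my_system_resolution(pga=1, snr_db=102))       # ~5.1 bits
print(my_system_resolution(pga=64, snr_db=88))       # ~20.8 bits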

 

Taking the last case with PGA = 64, my system resolution is 20.82 bits. To calculate my error budget (INL, offset error, drift, gain error, gain error drift, noise RTI), should I use 31 bits or 20.82 bits to express the errors in terms of LSB?

If I take 31 bits, 1 LSB is 2.32nV (0.000465 ppm of FSR per LSB), and I get these errors with a gain of 64 and SNR = 88dB:

Source of Error                           Value         Error (%)     Error (LSB)   Error (ppm)   LSB (ppm)
DNL                                       0             0.000         0             0.00          0
Quantization error                        0             0.000         0             0.00          0
Noise RTI (nVp-p)                         3110.21       0.000062      1336          0.62          1336
INL (%FSR)                                0.000001953   0.000001953   42            0.02          42
Offset error (uV), PGA = 1: -38           38            0.049         1044536       486.40        1044536
Offset drift (uV/°C), PGA = 1: -0.01      0.01          0.001         16493         7.68          16493
Gain error (%): -1.05                     1.05          1.050         22548578      6720.00       14431090
Gain error drift (ppm/°C), PGA = 1: -1    1             0.004         82463         38.40         82463

                    Error (%)   Error (LSB)   Error (ppm)   LSB (ppm)
Total error (WCA)   1.38        29706402      10053         21588914
Total error (RSS)   1.09        23360054      7296          15668762
Bits lost (WCA)     24.36
Bits lost (RSS)     23.90

I'm losing 23.90 bits out of 31 bits, so my system resolution is 7.044 bits.

 

If I take 20.82 bits (from the previous part), 1 LSB is 2.701uV (0.540 ppm of FSR per LSB), and I get these errors with a gain of 64 and SNR = 88dB:

Source of Error                           Value         Error (%)     Error (LSB)   Error (ppm)   LSB (ppm)
DNL                                       0             0.000         0             0.00          0
Quantization error                        0             0.000         0             0.00          0
Noise RTI (nVp-p)                         3110.21       0.000062      1             0.62          1
INL (%FSR)                                0.000001953   0.000001953   0             0.02          0
Offset error (uV), PGA = 1: -38           38            0.049         900           486.40        900
Offset drift (uV/°C), PGA = 1: -0.01      0.01          0.001         14            7.68          14
Gain error (%): -1.05                     1.05          1.050         19437         6720.00       12440
Gain error drift (ppm/°C), PGA = 1: -1    1             0.004         71            38.40         71

                    Error (%)   Error (LSB)   Error (ppm)   LSB (ppm)
Total error (WCA)   1.38        25607         10053         18610
Total error (RSS)   1.09        20137         7296          13507
Bits lost (WCA)     14.18
Bits lost (RSS)     13.72

I'm losing 13.72 bits out of 20.82 bits, so my system resolution is 7.1 bits. The resolution is then 1 LSB = VREF (5V) / 2^7.1 = 0.0364V.
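
To make the question concrete, this is how I converted one term (the 38uV offset) into ppm and LSBs, with the word size as the open parameter. This is a Python sketch of my own spreadsheet, not a TI formula:

VREF, PGA = 5.0, 64
ADC_FSR = VREF / PGA          # 78.125mV with PGA = 64

def error_terms(v_error, n_bits):
    # Express an error voltage as ppm of the ADC FSR and as LSBs for an n_bits word.
    ppm = v_error / ADC_FSR * 1e6
    lsb = ppm * 1e-6 * 2 ** n_bits
    return ppm, lsb

print(error_terms(38e-6, 31))      # ~486 ppm, ~1.0e6 LSB (31-bit word)
print(error_terms(38e-6, 20.82))   # ~486 ppm, ~900 LSB   (20.82-bit system resolution)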

Does that make sense?

 

Thanks for your help.

 

6 Replies

  • For the first Table

     

     

  • In reply to Jeremy Chambon:

    Hello Jeremy,

    I am glad the ADC Noise training was helpful.  Please take a look at the Analog Engineer's Pocket Reference guide, which includes much of this information in a quick reference format.

    http://www.ti.com/lit/slyw038

    First, you state that you have an input signal of 50mV, and it is slow moving.  So yes, I would agree that the effective and noise free resolution is more applicable in your case.  ENOB takes into consideration AC signals, and includes the effects of harmonic distortion caused by the non-linearity in the ADC transfer curve. 

    My assumption is that your full-scale signal swing is from 0V to 50mV.  If it is from -50mV to +50mV, then your effective full-scale input signal range will be 100mV, which will change your calculations.  For example, if your input signal swing is 0V to 50mV, then your Resolution Loss calculation of 6.7b is correct.  However, if your input signal swing is -50mV to +50mV, then your Resolution Loss would be log2(5V/100mV) = 5.7 bits.

    Regarding the SNR at the higher data rates using only the sinc5 filter, the best way to determine this is to measure the noise with shorted inputs on an evaluation board.  I would do this, but I do not currently have one available.  In any case, I think your estimates are not very far off since the noise is largely due to thermal noise, and each doubling of the data rate results in roughly a 2x increase in bandwidth, which results in a 3dB reduction in overall SNR. 

    20*log(SQRT(2))=3dB

    Also, when moving from 4ksps to 8ksps, you are also switching from the wideband FIR filter to just the sinc5 filter.  The -3dB bandwidth of the wideband filter is 0.413*Fdata, and the sinc5 filter is closer to 0.23*Fdata (see Figure 31 in the ADS1278-SP datasheet).  This will result in about 20*log(SQRT(0.413/0.23)) = 2.5dB higher SNR numbers.  However, your estimates are more conservative, and are still useful for your analysis.
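
    As a rough sketch of that scaling (the 3dB-per-doubling assumption and the 0.413/0.23 bandwidth ratio are the estimates described above; measured numbers may differ):

    import math

    def estimated_snr_db(data_rate_sps, snr_4ksps_db=118.0, sinc5_only=True):
        # Start from the 4ksps high-resolution SNR and subtract ~3dB per doubling of
        # data rate (thermal-noise-limited assumption).
        snr = snr_4ksps_db - 3.0 * math.log2(data_rate_sps / 4000.0)
        if sinc5_only and data_rate_sps > 4000:
            # Add back ~2.5dB because the sinc5 bandwidth (0.23*Fdata) is narrower than
            # the wideband FIR bandwidth (0.413*Fdata).
            snr += 20 * math.log10(math.sqrt(0.413 / 0.23))
        return snr

    print(estimated_snr_db(128000))    # ~105.5dB, slightly better than the 102dB estimate above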

    FSR_rms = (VREFP-VREFN)/(2 x sqrt(2) x PGA) = 1.7677 Vrms (formula from the datasheet, if PGA = 1)

    I can calculate the Vn,RMS =  FSR_rms / 10^(SNR/20) = 1.7677 / (10^(102 dB/20)) = 14.04uVrms (by using the extrapolate table above)

    Effective Resolution = Log2(System FSR/Vn,RMS) =  Log2(50mV/14.04uVrms) = 11.79bits


    I agree with your above calculations.  However, your effective resolution of 11.79bits is your system resolution since you have referred your noise to your input voltage range of 50mV.

    The ADC effective resolution is log2(ADC FSR/Vn,RMS) = log2(5V/14.04uVrms) = 18.44b.  You can get back to your system resolution by subtracting the resolution loss from the ADC effective resolution; 18.4b - 6.7b = 11.7b.

    The same process applies with a PGA gain of 64.  The full-scale input range of the ADC is now 5V/64 = 78.125mV, and your resolution loss will be log2(78.125mV/50mV) = 0.64b.

    Your calculation for FSR_rms is correct at 0.02762Vrms, and your input noise based on 88dB is also correct at 1.1uVrms.

    The effective resolution of your ADC is now based on the input noise relative to the FSR of 5V/64.

    Effective Resolution = Log2(ADC FSR/Vn,RMS) =  Log2(78.125mV/1.0996uVrms ) = 16.1b.

    Your system resolution relative to your full scale signal range of 50mV is now 16.1b-0.64b=15.47b.

    Based on the above calculations, you will get much better system resolution by using the PGA gain=64.  (15.47b vs 11.79b)
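
    To make the bookkeeping explicit, here is the same calculation as a short Python sketch (it uses your extrapolated SNR values, so treat the results as estimates):

    import math

    VREF, V_SIGNAL = 5.0, 50e-3

    def adc_and_system_resolution(pga, snr_db):
        adc_fsr = VREF / pga
        vn_rms = (VREF / (2 * math.sqrt(2) * pga)) / 10 ** (snr_db / 20)
        adc_eff_res = math.log2(adc_fsr / vn_rms)       # referenced to the ADC FSR
        res_loss = math.log2(adc_fsr / V_SIGNAL)        # loss from FSR utilization
        return adc_eff_res, adc_eff_res - res_loss      # (ADC resolution, system resolution)

    print(adc_and_system_resolution(pga=1, snr_db=102))   # ~(18.4, 11.8) bits
    print(adc_and_system_resolution(pga=64, snr_db=88))   # ~(16.1, 15.5) bits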

    Normally, when referring to Error terms relative to LSB, the LSB is based on the code that the ADC produces.  In the case of the ADS1282, this is a 32 bit word, so all values should be based on a 32b word size.  For example, the offset error of 50uV with PGA=1 and VREF=5V will be 50uV/5V=0.001%, or 10ppm of full scale range.  The LSB's would be 10e-6*2^32=42,950LSB.  This is a large number because of the word size, which is why LSB's are not commonly used for very high resolution ADC's.  

    Regarding your overall error analysis, for high accuracy systems, you would typically perform a system level calibration to eliminate the offset and gain errors.  At this point, your remaining errors will be due to noise, INL, and temperature related drift errors in gain and offset.

    Regards,
    Keith Nicholas
    Precision ADC Applications

  • In reply to Keith Nicholas:

    Hello Keith,

    I really appreciate your help, and it makes sense to me.

    "Also, when moving from 4ksps to 8ksps, you are also switching from the wideband FIR filter to just the sinc5 filter.  The -3dB bandwidth of the wideband filter is 0.413*Fdata, and the sinc5 filter is closer to 0.23*Fdata (see Figure 31 in the ADS1278-SP datasheet).  This will result in about 20*log(SQRT(0.413/0.23)) = 2.5dB higher SNR numbers.  However, your estimates are more conservative, and are still useful for your analysis."

    Does that mean my analysis is more conservative than reality? For example, instead of an SNR of 102dB at 128kSPS, I am more likely to see an SNR of 105.5dB? And the best way to know exactly is to measure directly on the board?

    Can I calculate the FSR utilization this way:

    - PGA = 1: FSR utilization (%) = System FSR / ADC FSR = 50mV/5V = 1%. However, if the signal swings from -50mV to +50mV, it will be 100mV/5V = 3%

    - PGA = 64: FSR utilization (%) = 50mV/0.078125V = 64% (a significant improvement from setting the PGA to 64).

    "Based on the above calculations, you will get much better system resolution by using the PGA gain=64.  (15.47b vs 11.79b)"

    - If I choose a gain of 64, I will get a system resolution of 15.47 bits. Does that mean 1 LSB = 5V/2^15.47 = 110.160uV? Is that the resolution I can achieve?

    "The offset error of 50uV with PGA=1 and VREF=5V will be 50uV/5V=0.001%".

    - If I'm using PGA = 64, will the offset error be 50uV/(5V/64) = 0.06%?

    From the following block diagram and from my understanding, if I want to use the calibration inside the ADC, I have to select the FIR filter (which limits the output data rate to a maximum of 4kSPS)? Is it possible to calibrate the ADC using the commands (offset and gain) if I want to use only the sinc filter?

    Thank you very much,

  • In reply to Jeremy Chambon:

  • In reply to Jeremy Chambon:

    Hello Jeremy,

    Does that mean my analysis is more conservative than reality? For example, instead of an SNR of 102dB at 128kSPS, I am more likely to see an SNR of 105.5dB? And the best way to know exactly is to measure directly on the board?


    Yes, your understanding is correct.  I am fairly certain this will be the case, but I have not looked at the sinc5-only filter on this device, which is why I suggest measuring to confirm.

    However, if the signal swings from -50mV to +50mV, it will be 100mV/5V = 3%


    Yes, but I think you have a typo; for an input signal swing of +/-50mV with a 5V reference, your FSR utilization will be 2%.

    PGA = 64: FSR utilization (%) = 50mV/0.078125V = 64% (a significant improvement from setting the PGA to 64).

    Yes, with a gain of 64, the ADC input full scale range is now +/-2.5/64=+/-39mV, or FSR=0.078V.  FSR utilization is 50mV/78mV=64%.

    If I choose a gain of 64, I will get a system resolution of 15.47 bits. Does that mean 1 LSB = 5V/2^15.47 = 110.160uV? Is that the resolution I can achieve?

    Not quite; the system resolution of 15.47 bits has been adjusted to a 50mV FSR input range.  1 LSB in this case will be 50mV/2^15.47 = 1.1uV.
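
    A two-line check of that distinction (plain Python, just to show the two reference ranges):

    print(50e-3 / 2 ** 15.47)   # ~1.1e-06 V: 1 LSB referred to the 50mV signal range
    print(5.0 / 2 ** 15.47)     # ~1.1e-04 V: the 110uV figure comes from using the 5V reference instead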

    If I'm using PGA = 64, will the offset error be 50uV/(5V/64) = 0.06%?


    The offset error for the ADC core will be divided by the PGA gain, but there will be additional offset error in the PGA.  Voff=50uV is only valid for a PGA gain of 1.  Similar to the noise, the best way is to measure, but since we do not have data for offset at GAIN=64, we can estimate using the data in Figure 30 of the datasheet which does provide an offset for PGA=8.

    Voff (pga=1) = ~50uV.

    Voff (pga=8) = ~10uV.

    Voff ~=Voff-adc/PGA+Voff-amp

    50uV=Voff-adc/1+Voff-amp

    10uV=Voff-adc/8+Voff-amp

    Solving gives Voff-adc = 45.7uV and Voff-amp = 4.3uV, so the estimated offset at PGA = 64 will be around 5uV, or 5uV/(5V/64) = 0.006%.
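
    The same estimate in Python form (the two-term offset model and the 50uV/10uV readings from Figure 30 are approximations):

    voff_pga1, voff_pga8 = 50e-6, 10e-6                  # approximate offsets read from Figure 30

    # Solve the two equations above for the ADC-core and amplifier offset contributions.
    voff_adc = (voff_pga1 - voff_pga8) / (1 - 1.0 / 8)   # ~45.7uV
    voff_amp = voff_pga1 - voff_adc                      # ~4.3uV

    voff_pga64 = voff_adc / 64 + voff_amp                # ~5uV estimated offset at PGA = 64
    print(voff_pga64 / (5.0 / 64) * 100)                 # ~0.006% of the PGA = 64 full-scale range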

    You are correct, the internal calibration registers only work with the FIR filter, not the sinc filter.  This is not difficult to implement inside your processor.  After calculating the offset and gain correction factors, you would simply multiply the conversion result by the gain correction and then add the offset, as in the sketch below.
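
    A minimal sketch of that correction running in the host processor (the function and variable names are illustrative, not part of any TI driver):

    def corrected_result(raw_code, gain_correction, offset_correction):
        # Apply the gain correction factor and then add the offset correction, both
        # determined during your own system-level calibration, as described above.
        return raw_code * gain_correction + offset_correction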

    There is a TI Precision Labs training that goes over how to do this in your system.

    https://training.ti.com/ti-precision-labs-adcs-understanding-and-calibrating-offset-and-gain-adc-systems?context=1139747-1140267-1128375-1139104-1134080

    Also, the output word can be either 24b or 32b.  When using 32b, the LSB is simply the sign bit of the result.  If you want to express the errors in LSB's, I would suggest you use the 24b word option, and scale your errors relative to this level.

    For VREF=5V, PGA=64 and a 24b word length, 1LSB=5V/64/2^24=4.66nV.

    Regards,
    Keith

  • In reply to Keith Nicholas:

    Hello Keith,

    Thank you for all the explanations and the detailed procedure for the offset. However, I'm confused because I have two different analyses and I don't know how to correlate one with the other.

    - The first analysis determines the system resolution. Above, we have a system resolution of 15.47 bits for PGA = 64. From my understanding, this system resolution is ideal and does not consider offset, gain, DNL, and drift errors. Is that correct?

    - The second analysis determines the errors introduced by the ADC itself: gain, offset, INL, DNL, and drift. This analysis is based on 1 LSB = 5V/64/2^24 or 1 LSB = 5V/64/2^32. First, how do we select whether the output word is 24 bits or 32 bits? I'm assuming it is an SPI setting for 24 bits or 32 bits?

    "Also, the output word can be either 24b or 32b.  When using 32b, the LSB is simply the sign bit of the result.  If you want to express the errors in LSB's, I would suggest you use the 24b word option, and scale your errors relative to this level." For VREF=5V, PGA=64 and a 24b word length, 1LSB=5V/64/2^24=4.66nV.

    In the previous message, we have 0.006% of error due to the offset with a PGA = 64.

    - How can I convert this 0.006% error into a voltage?

    - How can I correlate this 0.006% error with the system resolution of 15.47 bits? How can I apply the errors from the second analysis to the first analysis?

    Thank you,
    Jeremy