
ADS114S08: Noise free resolution

Part Number: ADS114S08


Hello,

I am using a reference voltage other than 2.5 V, so I have calculated the NFR bits from the formula given in the datasheet. How should I interpret this NFR? Is there a way for me to translate this NFR into a delta loss of accuracy? This goes back to the trade-off in IDAC selection. I want to avoid any IDAC higher than 1000 µA, since the self-heating temperature rise is not acceptable. But to choose between 250 µA, 500 µA, and 750 µA, I was hoping to get a number for the loss in measurement resolution by interpreting the NFR as a delta change in the input reading.

Thanks.

  • Hi Harini,

    If you look at the noise tables that start on page 22 of the ADS114S08 datasheet, in particular the tables that give noise in voltage (µVpp) rather than ENOB, you can use these values for the desired data rate, filter, and PGA settings. This value is the conversion noise with a shorted input, where the reference voltage has an insignificant impact.

    What you will see is that the NFR changes with both the noise voltage and the reference voltage. As an example, let's use the low-latency filter at 20 SPS and a PGA gain of 8. From the table we see 9.5 µVpp, which for a 2.5 V reference gives a 16-bit NFR. Now change the reference to 1 V. If you take a look at Equation 2 on page 22 of the datasheet, you can calculate the NFR. Substituting the values into Equation 2 gives about 14.68 noise-free bits. Keep in mind the LSB size becomes smaller while the noise stays the same, so the result is fewer noise-free bits. So what has changed in the end? The full-scale range.
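    The calculation above can be sketched in a few lines. This is a minimal illustration of the datasheet's Equation 2 (noise-free resolution from the full-scale range and the peak-to-peak noise from the table); the 9.5 µVpp figure is the table value quoted above for the low-latency filter at 20 SPS with PGA = 8.

    ```python
    import math

    def noise_free_bits(vref, gain, vnoise_pp):
        """Noise-free resolution per datasheet Equation 2:
        NFR = log2(FSR / Vnoise_pp), with FSR = 2 * VREF / gain."""
        fsr = 2.0 * vref / gain
        return math.log2(fsr / vnoise_pp)

    VNOISE_PP = 9.5e-6  # Vpp from the table: low-latency filter, 20 SPS, PGA = 8
    print(round(noise_free_bits(2.5, 8, VNOISE_PP), 2))  # ~16.0 bits
    print(round(noise_free_bits(1.0, 8, VNOISE_PP), 2))  # ~14.68 bits
    ```

    Note that the noise voltage stays fixed while the full-scale range shrinks with the reference, which is exactly why the smaller reference costs noise-free bits.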

    If you have a small signal range to measure, then the considerations are the effects of external noise sources and errors such as self-heating. The larger the current, the more signal the RTD outputs, but the self-heating causes an error. If you use a smaller current and add gain, then you also have the potential of gaining up noise from external sources such as EMI/RFI. So achieving the best overall temperature resolution means choosing a current that limits self-heating effects, and then a combination of a reference voltage that allows a proper common-mode for the PGA and a gain that uses as much of the full-scale range as possible for the desired temperature range.
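    As a rough way to attach numbers to the IDAC choice, here is a hypothetical comparison for a PT100-style RTD (sensitivity ≈ 0.385 Ω/°C, an assumption, not from this thread), taking the same ~9.5 µVpp input-referred noise for all three currents. In practice the noise figure changes with the PGA setting needed at each current, so treat this only as a first-order sketch.

    ```python
    # Hypothetical sketch: noise-free temperature resolution vs. IDAC current.
    # Assumptions (not from the thread): PT100 sensitivity ~0.385 ohm/degC,
    # input-referred noise fixed at 9.5 uVpp regardless of PGA setting.
    SENS_OHM_PER_C = 0.385   # PT100 sensitivity, ohm per degC (assumed)
    VNOISE_PP = 9.5e-6       # input-referred noise, Vpp (table value, assumed fixed)

    for idac in (250e-6, 500e-6, 750e-6):
        # Signal slope at the ADC input is IDAC * sensitivity (V per degC),
        # so the noise-free temperature step is Vnoise_pp / slope.
        dT = VNOISE_PP / (idac * SENS_OHM_PER_C)
        print(f"{idac * 1e6:.0f} uA -> {dT * 1000:.1f} mC noise-free")
    ```

    The pattern is the expected one: tripling the current triples the signal slope and cuts the noise-free temperature step to a third, which is exactly the resolution-versus-self-heating trade being discussed.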

    Best regards,
    Bob B
  • Hi Bob,

    Thanks for the information.

    What I clearly understand is that maintaining full-scale utilization is the key to getting the best resolution, provided my current selection does not cause too much self-heating. So, with a 1.7 V reference, 500 µA, and a gain setting of 16, my FSR utilization is around 90.9%. I think this will work fine.
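    For anyone following along, the utilization figure above can be sanity-checked with a short calculation. The maximum RTD resistance used here (~386 Ω) is an assumption chosen to reproduce roughly the quoted ~91%; substitute your sensor's resistance at the top of its temperature range.

    ```python
    # Sanity check of the quoted setup: VREF = 1.7 V, gain = 16, IDAC = 500 uA.
    VREF, GAIN, IDAC = 1.7, 16, 500e-6
    R_MAX = 386.0  # ohm, hypothetical max RTD resistance (not stated in the thread)

    fsr = 2 * VREF / GAIN   # full-scale input range: 0.2125 V
    v_max = IDAC * R_MAX    # peak RTD voltage at the ADC input: 0.193 V
    print(f"FSR utilization: {v_max / fsr * 100:.1f}%")  # ~90.8%
    ```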


    Regards,
    Harini