
MSP430AFE231: 24-bit Sigma-Delta ADC Features?

Other Parts Discussed in Thread: MSP430AFE231

Hi, I'd like to know one simple thing about the MSP430AFE231, for a measurement application.

What is the effective resolution of its 24-bit sigma-delta ADC? The only information I have found is an error figure of 0.2%... but a percentage of what? Is this an error relative to the full-scale range voltage?

Thank you.      

 

  • Ing.Prozak said:
    but a percentage of what? Is this an error relative to the full-scale range voltage?

    It wouldn't make sense for the error to sit in the 9th bit of a 24-bit ADC.

    It is relative to the reading, so the smaller the voltage, the smaller the error.
    The point of a delta-sigma converter (not "sigma-delta"; only the Sigma river can have a sigma delta) is to cover a large span of values, not to give high absolute resolution at large values.

    With an error of 0.2%, the 9th bit of the result is uncertain, but counted down from the first set bit in the result, not from the MSB. So if the voltage you apply is 1% of full scale, the absolute error is 20 ppm of full scale.
    Put the other way around: while a delta-sigma gives you really fine resolution on a small signal, it won't give you the same usable resolution on a large signal.

    SAR ADCs, however, have an absolute error, independent of the signal.
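The relative-error arithmetic described above can be sketched in a few lines. This is illustrative only; the 0.2% figure and the full-scale voltage are taken from the discussion, not looked up in the datasheet here.

```python
# A delta-sigma error spec that is relative to the reading: the smaller
# the signal, the smaller the absolute error. Values assumed from the
# thread above, not from a datasheet.
FULL_SCALE_V = 1.2   # assumed full-scale range
REL_ERROR = 0.002    # 0.2%, relative to the reading

def absolute_error(reading_v):
    """Absolute error of one reading, given a reading-relative spec."""
    return reading_v * REL_ERROR

# A signal at 1% of full scale:
small_signal = 0.01 * FULL_SCALE_V
err = absolute_error(small_signal)
err_ppm_of_fs = err / FULL_SCALE_V * 1e6
print(err_ppm_of_fs)  # ~20 ppm of full scale, as stated above
```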

  • The data sheet says: EOS, offset error: 0.2 %FSR.

    This translates to a zero-voltage offset of 1 mV; the full-scale range (FSR) is 500 mV.

    Peter
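The offset figure quoted above follows directly from the spec. A minimal check, assuming the 500 mV FSR stated in the post:

```python
# Offset error spec: EOS = 0.2% of FSR (numbers taken from the post above).
fsr_mv = 500.0       # assumed full-scale range in millivolts
eos_percent = 0.2    # offset error as a percentage of FSR
offset_mv = fsr_mv * eos_percent / 100.0
print(offset_mv)     # 1.0 mV
```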

  • Thank you for your explanation. My problem is: "how many effective resolution bits can I count on when using this delta-sigma converter?"

    Tell me where I'm making a mistake.

    In practice, I want to use 100% of the full-scale range in my application (1.2 V). To work out my "resolution weight" I normally compute 1.2 V / 2^(effective resolution bits), so in the ideal case (24 bits) I would expect a step of about 7.15 × 10^-8 V per sample.

    In this case, should I assume an error of 2 × 10^-3 (2000 ppm)? Does that mean I must expect an error of 2.4 mV over the full range?! Too much... How can I translate this error into effective ADC bits, so I can calculate my sample weight? Is the converter not usable this way?

    Thank you again, Jens.

    Alessia
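The arithmetic in the question above can be double-checked in a few lines. This is a sketch that applies the 0.2% to full scale, as the question does; the replies below explain that the figure is actually an offset specification, so the effective-bits number here is a worst-case reading, not the real ENOB.

```python
import math

FS_V = 1.2    # assumed full-scale range from the question
BITS = 24

lsb_v = FS_V / 2**BITS   # ideal step per code at 24 bits
err_v = 0.002 * FS_V     # 0.2% taken of full scale, worst-case reading
# If the 0.2% really were a full-scale accuracy limit, the equivalent
# usable bit count would be log2(1 / 0.002):
eff_bits = math.log2(1 / 0.002)
print(lsb_v)     # ~7.15e-08 V per code
print(err_v)     # ~0.0024 V, i.e. 2.4 mV
print(eff_bits)  # ~8.97 bits
```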

     

     

  • Peter Dvorak said:
    EOS, offset error: 0.2 %FSR

    Okay, an offset error may indeed be absolute, and it is something you can easily calibrate to zero.
    It is most likely caused by the input offset of the OpAmp.

    Peter Dvorak said:
    The full scale range (FSR) is 500mV

    AFAIK, it is 1.2 V (or rather ±600 mV), as 1.2 V is the reference. However, specified performance is not guaranteed above ±500 mV; the FSR itself is ±600 mV. That comes down to a maximum offset error of 1.2 mV. But as I said, offset errors can be easily compensated. Especially if

    you have a symmetrically working ADC: offset compensation then limits the usable range a bit (from ±500 mV to ±499.4 mV) but does not remove 0 V from the metering range.
    On common-mode SARs an offset is more critical, especially a negative one.

    Ing.Prozak said:
    In this case, should I assume an error of 2 × 10^-3 (2000 ppm)? Does that mean I must expect an error of 2.4 mV over the full range?! Too much... How can I translate this error into effective ADC bits, so I can calculate my sample weight? Is the converter not usable this way?


    As I said above, if it is an offset error, it is just a value that can be read once with 0 V input and then subtracted from every result as a constant. No need to mess with bits. IIRC, the SD module has an offset-calibration mode where the inputs are internally shorted to GND. Just take one measurement and subtract its result from any further real measurement. This can even be done before converting the raw result to a voltage.
    If in offset-calibration (shorted-input) mode the converter gives you a reading of 125, then just subtract 125 from any other reading and the offset error is gone.
    The thing to worry about is the gain error, which in the best case is a fixed factor on the measured signal across the full scale.
    Here you can also calibrate: apply a known, precise voltage and look at the difference between the real and the theoretical result. This gives you a fixed factor you need to multiply (or divide) all results by.
    Gain error is usually caused by tolerances in the feedback resistors that set the input gain, or by an error in the reference voltage.
    It may also be caused by your external circuitry, so you can compensate for both in one step.
    However, this factor may drift with temperature.
    If it is not the same factor over the full scale, then you have a nonlinearity error, which is trickier to compensate for. The most precise approach is a compensation/mapping table that contains the correction factor for different input ranges.
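The two-step calibration described above (subtract a zero-input reading, then scale by a factor derived from a known reference) can be sketched as follows. All the numbers here are invented for illustration, apart from the shorted-input reading of 125 used as an example in the post.

```python
# Hedged sketch of offset-then-gain calibration on raw ADC codes.
def calibrate(raw_codes, offset_code, gain_factor):
    """Subtract the shorted-input offset, then apply the gain correction."""
    return [(code - offset_code) * gain_factor for code in raw_codes]

# Suppose shorted-input mode reads 125 counts (the example above), and a
# known reference voltage shows readings come out 1% low, giving a gain
# correction factor of 1/0.99 (an assumed value).
offset_code = 125
gain_factor = 1.0 / 0.99
corrected = calibrate([125, 1125], offset_code, gain_factor)
print(corrected)  # [0.0, ~1010.1]
```

Note that the offset subtraction happens on the raw codes, before any conversion to volts, exactly as described above.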

    However, 24 bits on 600 mV give a resolution of about 36 nV, i.e. roughly 7 decades, or about 0.06 ppm per code. A non-constant, unknown error in the 2000 ppm range wouldn't make much sense.
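The closing figures above are straightforward to verify. A quick check, assuming the 600 mV span stated in the post:

```python
import math

# 24 bits over a 600 mV span: step size, dynamic range, relative resolution.
span_v = 0.6
codes = 2**24
lsb = span_v / codes
print(lsb * 1e9)          # ~35.8 nV per code, i.e. roughly 36 nV
print(math.log10(codes))  # ~7.2 decades of dynamic range
print(1e6 / codes)        # ~0.06 ppm per code
```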
