
MSP430F6723 ADC calibration

How should customers calibrate the 24-bit A/D for a typical application?

  1. If they calibrate the A/D during production, can they assume that it will stay fairly accurate over the life of the device (even with fairly large storage and use temperature fluctuations)?

  2. If not, how often will calibration be required?

  3. The data sheet says little about the performance of the A/D beyond noting that it's fairly linear, and it gives no specs over temperature except gain error. What is the typical or maximum bit error over temperature?

  • Hi Charles,

    When high accuracy is important, temperature and power supply changes should trigger re-calibration during use. The amount of change really depends on the specific requirements of the application. It's also a good idea to re-calibrate after power-on (and perhaps after waking up from sleep mode). If high accuracy is not that important, the customer could calibrate the device once during production using the two-point method (discussed here), assuming that the device stays within the recommended operating conditions throughout the life of the device.
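    To make the two-point method concrete, here is a minimal C sketch (illustrative only, not from the datasheet): apply two known reference voltages, capture the raw SD24 results, and derive gain/offset corrections from them. The function and type names below are hypothetical.

    ```c
    #include <stdint.h>

    /* Two-point calibration sketch (names are hypothetical).
     * raw_lo/raw_hi:     SD24 results captured at two known reference inputs.
     * ideal_lo/ideal_hi: the codes those inputs should ideally produce. */
    typedef struct {
        float gain;    /* multiplicative correction         */
        float offset;  /* additive correction, in ADC codes */
    } sd24_cal_t;

    static sd24_cal_t sd24_two_point_cal(int32_t raw_lo, int32_t raw_hi,
                                         int32_t ideal_lo, int32_t ideal_hi)
    {
        sd24_cal_t cal;
        cal.gain   = (float)(ideal_hi - ideal_lo) / (float)(raw_hi - raw_lo);
        cal.offset = (float)ideal_lo - cal.gain * (float)raw_lo;
        return cal;
    }

    /* Correct a raw conversion result with the derived constants. */
    static int32_t sd24_correct(int32_t raw, const sd24_cal_t *cal)
    {
        return (int32_t)(cal->gain * (float)raw + cal->offset);
    }
    ```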

    Keep in mind that there are different types of calibration, such as self-calibration and system calibration. For example, you may calibrate the gain and offset of the internal SD24 in the MSP430 independently (self), but other external hardware on the board could have much larger offsets than the SD24 (system). Thus, it would be best to calibrate the SD24 manually during production, after all other components have been connected/populated, to account for this (interesting discussion in this thread).
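    Since production-time system calibration usually stores the constants in nonvolatile memory, and the MSP430 has no FPU, a fixed-point version of the correction is common at run time. A sketch under those assumptions (the constants and values below are placeholders, not real calibration data):

    ```c
    #include <stdint.h>

    /* Hypothetical constants written during production, e.g. to information
     * memory: a Q15 gain correction (32768 == 1.0) and an offset in codes. */
    #define CAL_GAIN_Q15   ((int32_t)32910)   /* placeholder: ~1.0043 gain */
    #define CAL_OFFSET     ((int32_t)-57)     /* placeholder offset, codes */

    /* Apply the system-level correction to a raw 24-bit SD24 result using
     * integer math only; same form as the float sketch above. */
    static int32_t apply_system_cal(int32_t raw)
    {
        return (int32_t)(((int64_t)raw * CAL_GAIN_Q15) >> 15) + CAL_OFFSET;
    }
    ```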

    Regarding performance over temperature, page 73 of the datasheet includes both the gain- and offset-error variation over temperature, expressed as coefficients that were calculated using the box method.
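    To get a feel for how such a coefficient translates into drift, a quick worked example (the coefficient below is an assumed placeholder; use the datasheet value for real estimates):

    ```c
    #include <stdio.h>

    int main(void)
    {
        const double gain_tc_ppm_per_c = 20.0;  /* assumed ppm/degC, NOT the datasheet value */
        const double delta_t_c         = 50.0;  /* e.g. a 25 degC -> 75 degC swing */
        const double drift_ppm = gain_tc_ppm_per_c * delta_t_c;
        /* ppm / 1e4 converts parts-per-million to percent */
        printf("Gain drift: %.0f ppm (%.2f %% of reading)\n",
               drift_ppm, drift_ppm / 1e4);
        return 0;
    }
    ```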

    Bit error is a term normally associated with SAR ADCs. For a sigma-delta ADC, the signal-to-noise ratio (SNR) essentially quantifies the quantization noise/error (discussed in Section 29.2.1 of the user's guide), but I'm not sure where/how this is specified in the datasheet for this device. I'll look into this and let you know.
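    For reference, the standard textbook relationship between SNR and effective resolution is ENOB = (SNR_dB - 1.76) / 6.02, so a measured SNR can be translated into "effective bits" even when no bit-error figure is specified. A quick check:

    ```c
    #include <stdio.h>

    /* Standard conversion from SNR (in dB) to effective number of bits. */
    static double snr_to_enob(double snr_db)
    {
        return (snr_db - 1.76) / 6.02;
    }

    int main(void)
    {
        /* e.g. 90 dB of SNR corresponds to about 14.7 effective bits */
        printf("ENOB at 90 dB SNR: %.1f bits\n", snr_to_enob(90.0));
        return 0;
    }
    ```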

    Regards,

    James

    MSP Customer Applications
