
MSP430 ADC Accuracy

Other Parts Discussed in Thread: MSP430FR5739

I am currently looking to use the MSP430FR5739 microcontroller's ADC channels on ports 3.0-3.3 to read 4 analog signals.  I have the functionality of the code working; however, I notice that if I feed the same input voltage to all 4 ADC channels, the readings across them vary by as much as 43 counts out of 1024, and I am interested in much higher precision.


I was looking into the built-in TLV ADC calibration values (section 1.14 of the family datasheet), but I am not sure how to write the CCS code to apply the desired calibrations.  Could someone share with me how to use/combine these ADC correction values stored in the microcontroller with the ADC10MEM0 value to get a more accurate ADC reading?

  • The ADC10 is a single ADC with an input multiplexer. While the ADC and the input multiplexer of course have some non-ideal influence on the signal, that influence should be the same for all channels (so all channels are equally wrong regarding offset and gain error).
    To compensate for this, the MSP has calibration values stored in a TLV structure that correct for offset, gain error, and reference-voltage tolerance.
    However, these calibration values of course cannot compensate for differences in the external circuitry beyond the MSP pin.
    So maybe the attached circuitry is different for the four channels. If you short the pins directly together, there should of course be no difference.

    Then the analog part is sensitive to ripple on Vcc and (worse) Vss. If available, DVss and AVss should be routed separately to the supply GND point. Also, a small resistor (10-100 Ohm) between DVcc and AVcc is a good idea, together with a 100 nF ceramic / 10 µF tantalum combo between AVcc and AVss. This gives a clean environment. If the analog and digital supplies are tied together, the operating-current ripple of the CPU and other digital modules may influence the conversion, maybe even forming patterns (as the code used to do the conversion is always the same).

    The formula for using the calibration values is provided in the user's guide (see the sketch below). But first you should improve the readings so they are identical for all channels (except perhaps a few LSB of incoming signal noise).
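
As a minimal sketch of that formula in CCS-style C, assuming the calibration words have already been located in the device's TLV table: the macro names and addresses below are placeholders only (there are no standard header defines, as noted further down) and must be replaced with the addresses given in the device-specific datasheet; the reference step applies only when the internal 1.5 V reference is used.

```c
#include <msp430.h>
#include <stdint.h>

/* Placeholder access to the TLV calibration words. The real addresses must
 * be taken from the TLV table in the device-specific datasheet; the values
 * below are examples only and are NOT verified for the MSP430FR5739. */
#define CAL_ADC_GAIN_FACTOR   (*(uint16_t *)0x1A16)  /* example address */
#define CAL_ADC_OFFSET        (*(int16_t  *)0x1A18)  /* example address */
#define CAL_ADC_15VREF_FACTOR (*(uint16_t *)0x1A28)  /* example address */

/* Apply the user's-guide correction to a raw 10-bit result:
 *   gain:      corrected = raw * CAL_ADC_GAIN_FACTOR / 2^15
 *   offset:    corrected = corrected + CAL_ADC_OFFSET
 *   reference: corrected = corrected * CAL_ADC_15VREF_FACTOR / 2^15
 * The reference step is only needed when the internal 1.5 V reference
 * is selected as the conversion reference. */
static uint16_t adc_corrected(uint16_t raw)
{
    int32_t result = ((int32_t)raw * CAL_ADC_GAIN_FACTOR) / 32768L;
    result += CAL_ADC_OFFSET;
    result  = (result * CAL_ADC_15VREF_FACTOR) / 32768L;

    if (result < 0)    result = 0;      /* clamp to the 10-bit range */
    if (result > 1023) result = 1023;
    return (uint16_t)result;
}

/* Usage after a conversion has completed: */
/*   uint16_t value = adc_corrected(ADC10MEM0); */
```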

  • Thanks for the info. I am using the register containing the corrected values and am seeing better results.  I know the address from the datasheet, but do you know which .h file CAL_ADC25REF_FACTOR and the other similar calibration values are defined in?  I checked msp430FR5739.h and could not find them.  Thanks!

  • I don't know where they are defined. I use MSPGCC, and since these names are not given in the user's guide, there is no common 'standard' for how the calibration value 'registers' should be named, and MSPGCC does it differently.
    Also, I don't use them at all, as I run a two-point calibration where required. This calibrates the external circuitry as well as the reference and any other aberrations.
    For our laser power supply, I fitted all inputs with analog switches (and OpAmps); for calibration, I switch the input to GND or to a known reference voltage and can then calculate offset and gain, including the OpAmps and whatever else lies in the signal path (a rough sketch follows below). On the other, cheaper devices, I have calibration software that assumes a certain sequence of input signals on first run and adjusts gain and offset accordingly. This is more or less what TI does at the factory to gather these calibration values, but again including the external circuitry in the process.
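
A minimal sketch of such a two-point calibration in C; the names and the millivolt scaling are illustrative, not the poster's actual code, and it assumes the input can be switched to GND and to a known reference voltage as described above.

```c
#include <stdint.h>

/* Two-point calibration: cal_zero is the raw ADC reading with the input
 * switched to GND, cal_ref the raw reading with the input switched to a
 * known reference voltage of vref_mV millivolts. */
typedef struct {
    int32_t offset;      /* raw counts measured at 0 V           */
    int32_t gain_num;    /* reference voltage in mV (numerator)  */
    int32_t gain_den;    /* raw span between the two points      */
} two_point_cal_t;

static two_point_cal_t cal_from_points(uint16_t cal_zero, uint16_t cal_ref,
                                       uint16_t vref_mV)
{
    two_point_cal_t c;
    c.offset   = cal_zero;
    c.gain_num = vref_mV;
    c.gain_den = (int32_t)cal_ref - cal_zero;   /* must be non-zero */
    return c;
}

/* Convert a raw reading into millivolts using the stored calibration.
 * Offset and gain of the external circuitry are corrected implicitly,
 * because both calibration points were measured through it. */
static int32_t raw_to_mV(uint16_t raw, const two_point_cal_t *c)
{
    return ((int32_t)raw - c->offset) * c->gain_num / c->gain_den;
}

/* Example with a 2500 mV reference:
 *   two_point_cal_t c  = cal_from_points(raw_at_gnd, raw_at_ref, 2500);
 *   int32_t millivolts = raw_to_mV(ADC10MEM0, &c);
 */
```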
