MSP430 ADC calibration using TLV correction values

Other Parts Discussed in Thread: MSP430WARE, CC430F5137, MSP430F5508, MSP430G2553

Hi,

We have a customer with a question about ADC calibration.  Their question follows:

We have been using various flavors of the MSP430 on several projects, but I have not seen a well-documented software solution for doing ADC calibration using the TLV structure values.  I have working code to properly get the correction factors from the table.  I also have working code to perform a reference calibration, and gain and offset correction, on the ADC conversion value.

I’m trying to document the order in which these corrections should take place.  Things work fine in the middle of the ADC range, but the upper and lower regions of the ADC range could be a problem. If the corrections are not done in the proper order, we could effectively reduce the available ADC range.

My question… What is the recommended sequence to properly adjust an ADC conversion for reference, gain and offset errors?

REFCAL first, then ADC gain and offset?  Or the other way around?  It depends on how the calibration is done at the factory.  Unfortunately I could not find this particular information on the TI website.

Regards,

John

 

  • I agree this isn't well documented.

    Looking in MSP430Ware 1.20.1.8 I can't find any examples which apply the ADC calibration.

    The CC430 Family User's Guide SLAU259B shows how to apply REFCAL or ADC gain and offset, but not all three together (you didn't say which devices you are using).

    The User's Guide does have the note:

    If both gain and offset are corrected, the gain correction is done first.

    John Wiemeyer said:
    REFCAL first, then ADC gain and offset?
    Given that REFCAL is a scaling factor correction, applying REFCAL first does seem to be the correct order in line with the User's Guide note.

    Failing a definitive answer from TI, do you have a precision DC voltage source you could apply to an ADC input, and try applying the calibration in different orders to see which order gives the most accurate result?
     

  • John,

    The VREF calibration should be done first on the raw values out of the ADC.

    This is because the VREF factor in the TLV structure is obtained by measuring the VREF voltage with the internal ADC and normalizing it by the ideal value (1.5V for example).

    Basically, the VREF calibration value does not take any ADC offset or gain calibration into account.

    On the other hand, the ADC offset and gain values are obtained with an external reference voltage, so they have no dependence on VREF, and the VREF calibration should not be applied to them.

    1. Apply VREF calibration.
    2. Then apply gain calibration.
    3. Last apply offset calibration.

    ADC(calibrated) = ( (ADC(raw) x CAL_ADC15VREF_FACTOR / 2^15) x (CAL_ADC_GAIN_FACTOR / 2^15) ) + CAL_ADC_OFFSET

    It doesn't matter in which order you apply the VREF and gain corrections, since both are multiplications, but the offset should definitely be applied last of all.
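
    A minimal sketch of that sequence in C (the parameter names are just placeholders for the CAL_ADC15VREF_FACTOR, CAL_ADC_GAIN_FACTOR and CAL_ADC_OFFSET words read from the TLV; the raw result is widened so the intermediate products don't overflow, and a 12-bit range is assumed for the final clamp):

    unsigned int adc_correct(unsigned int adcRaw,      // raw conversion result
                             unsigned int calRef,      // CAL_ADC15VREF_FACTOR from the TLV
                             unsigned int calGain,     // CAL_ADC_GAIN_FACTOR from the TLV
                             int calOffset)            // CAL_ADC_OFFSET from the TLV (signed)
    {
       unsigned long tmp;
       long corrected;

       tmp = (unsigned long)adcRaw * calRef;            // 1. reference correction
       tmp >>= 15;                                      //    ... / 2^15
       tmp = tmp * calGain;                             // 2. gain correction
       tmp >>= 15;                                      //    ... / 2^15
       corrected = (long)tmp + calOffset;               // 3. offset correction last

       if (corrected < 0)    corrected = 0;             // clamp to the conversion range
       if (corrected > 4095) corrected = 4095;

       return (unsigned int)corrected;
    }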

  • Austin Miller said:
    This is because the VREF factor in the TLV structure is obtained by measuring the VREF voltage with the internal ADC and normalizing it by the ideal value (1.5V for example).

    I wonder how you can measure the internal VRef with the internal ADC without knowing the ADC offset and scaling? Sure, you can measure a known VCC (or other external voltage). Then you'd get offset (two measurements) and gain, but the gain would include both reference and ADC gain error, and the two can't be separated. (Well, no need to anyway, as it is unimportant whether the gain error comes from the ADC or the reference.) Knowing the reference error alone is IMHO only required if the reference is used externally.

    However, I had assumed that VRef is output on a pin and measured externally (the obvious way to eliminate the ADC from the equation). But maybe this is too time-consuming for factory calibration and there's a different way.

  • I'd like to clarify one thing regarding the ADC offset: I always get ADC offset = 1 on all CC430 chips.

    I wonder what I'm doing wrong. I read everything from the TLV data, including the offset, according to the datasheet. However, I couldn't figure out why the offset is always 1.

    regards

  • Ideally, the offset is 0. In the real world, it isn't. But there is no reason why the offset should be large. After all, the larger the offset, the smaller the number of different ADC readings and the rougher the resolution. Imagine an offset of 4094 on the ADC12: 4094 would be 0V and 4095 would be 2.5V, giving you a 1-bit ADC.

    An offset of 1 sounds reasonable. Since the offset is caused by internal effects, chances are that it is very similar across devices. And 1 +/-10% remains 1 (rounded).

    However, the offset caused by your external circuitry might be much higher. And since it depends on external components, the internal offset won't help you much.

    If you care about things like offset, I recommend a calibration run of your own on every device. It will also eliminate the need for 0%-tolerance components outside the MSP :)
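
    For what it's worth, a sketch of such a two-point calibration in C (the reference voltages, the averaging and the storage of the results are up to your application; the fixed-point format and names here are only an example):

    // Two calibration measurements with known inputs, e.g. near 10% and 90% of
    // full scale. adcLow/adcHigh are the (averaged) readings, idealLow/idealHigh
    // the codes an ideal signal chain would produce for those inputs.
    static long chainGainQ15;   // gain of the whole chain in Q15 format
    static int  chainOffset;    // offset of the whole chain in ADC counts

    void calibrate_chain(long adcLow, long adcHigh, long idealLow, long idealHigh)
    {
       chainGainQ15 = ((idealHigh - idealLow) << 15) / (adcHigh - adcLow);
       chainOffset  = (int)(idealLow - ((adcLow * chainGainQ15) >> 15));
    }

    unsigned int correct_sample(unsigned int adcRaw)
    {
       long v = (((long)adcRaw * chainGainQ15) >> 15) + chainOffset;
       if (v < 0) v = 0;                  // the offset can push readings below zero
       return (unsigned int)v;
    }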

  • Jens-Michael, thank you for your reply. I was checking the TLV data on the die of the CC430F5137, and it's as follows:
    ADC gain calibration --> address 0x1A16
    ADC offset --> address 0x1A18
    ADC reference calibration --> address 0x1A2C
    The ADC corrected value (gain and VREF correction) has better linearity than the raw data, but always has a DC offset (a few mV). Applying the ADC offset correction did not help; the ADC offset value is always 1.
    I checked many controllers and all have the value of 1 for the ADC offset.
    At first I suspected the software, but after checking the TLV data on the die I found that it really is stored as 1.
    Could it be a manufacturing problem? I am using the CC430F5137 G4.

  • Fri_coder said:
    Could it be a manufacturing problem? I am using the CC430F5137 G4.

    That I really don't know.

    However, I usually do my own calibration anyway, completely ignoring these values. This removes any offset and gain error of the external parts too, so I can live with cheap 1% parts rather than expensive 0.1% or better parts, and still have good (even better) results.

    But maybe that's because I started on the 1x family, where factory calibration values weren't available at all.

  • Fri_coder said:
    I always get ADC Offset = 1 on all CC430 chips?

    It could just be the lot of chips that you have. I am using the MSP430F5508 and I have noticed a variance in the values.

    Note that the offset is a 16-bit signed value and the values will (should) be close to 0, plus or minus. Thus, a -1 offset is 0xffff.

    This has implications when the raw ADC values are close to 0 and full scale.
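
    In code that simply means reading the word through a signed type and clamping after the addition. A sketch (tlvOffsetWord is a placeholder for the offset word read from the TLV, adcGainCorrected for the value after the VREF and gain steps, and a 12-bit range is assumed):

    int calOffset = (int)tlvOffsetWord;     // 0xFFFF is then seen as -1
    long v = (long)adcGainCorrected + calOffset;
    if (v < 0)    v = 0;                    // possible underflow near code 0
    if (v > 4095) v = 4095;                 // possible overflow near full scale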

  • Austin Miller said:
    1. Apply VREF calibration.
    2. Then apply gain calibration.
    3. Last apply offset calibration.

    Indeed. This discussion came up about a month ago and we speculated about the order. Good to see TI chime in this time and confirm.

    Code snippet....

    UNSIGNED16 convertADCtoMillivolts(BatterySelect_e battery)
    {
       UNSIGNED64 result64 = 0;
       UNSIGNED16 result16;

       if (battery == BATTERY1)
       {
          result64 = SampleBuffer[PFM_BATTVOLTS1];
       }
       else if (battery == BATTERY2)
       {
          result64 = SampleBuffer[PFM_BATTVOLTS2];
       }

       // Compensate for the reference voltage
       result64 *= calREF20;

       // Compensate for the ADC gain
       result64 *= calADC_Gain;

       // Normalize the previous two compensations and check for overflow
       result16 = (UNSIGNED16)(result64 / POW2L(30));

       // Factor in the ADC offset (Offset is a 16-bit signed value) and check for
       // overflow/underflow
       result16 += calADC_Offset;
       if (result16 & 0x8000)  // result is negative, we had underflow
       {
          result16 = 0;
       }
       else if (result16 > PFM_ADC_MAX_SAMPLE) // overflow happened
       {
          result16 = PFM_ADC_MAX_SAMPLE;
       }

       // Now take the corrected ADC value and multiply by the conversion factor
       // to get value in milliVolts
       result16 = (UNSIGNED16)((result16 * ADC_TO_MV_MULT) / ADC_TO_MV_DIV);

       return(result16);
    }

  • Brian Boorman said:
       result16 = (UNSIGNED16)(result64 / POW2L(30));


    I think,
    result16 = result64>>30;
    gives the same result with less code and in far fewer CPU cycles.
    Also, using an intermediate result32 for the first multiplication speeds up the code if the MSP has an MPY32, as it avoids an unnecessary 64-bit multiplication.

    Besides this, thanks for sharing the code.

  • Jens-Michael Gross said:
    I think,
    result16 = result64>>30;
    gives the same result with less code and in far fewer CPU cycles.

    I don't. :-)

    What I forgot to include are the macro definitions.

    #define POW2L(x)   (1L<<x)
    #define POW2U(x)   (1U<<x)

    The intent was to force the bit-width of the arithmetic.

    The compiler substitutes a right shift for the division since the constant is known.  (I checked when I wrote the code).

  • Brian Boorman said:
    The intent was to force the bit-width of the arithmetic.

    Okay. Nice idea for other situations, but since result64 is already 64-bit, it rather obfuscates what's going on.

    Brian Boorman said:
    The compiler substitutes a right shift for the division since the constant is known. 

    Must be a new feature. Some time ago, a division was a division and wasn't replaced by a shift even if the constant was a power of two.
    Maybe it's the fact that your divisor is an expression that itself contains a shift, so the compiler unfolds it: a/(b<<c) -> (a>>c)/b (depending on the division algorithm, this is faster), where the /b is eliminated if b==1.
    I remember that a "/2" was executed as a division rather than a shift.
    It also might depend on the optimization level. But that's just guessing.
    If you say you checked it and the compiler is that smart, then fine. (But this isn't a portable finding: if you ever switch the compiler, you might get a penalty you cannot explain at first.)

  • Austin Miller said:
    ADC(calibrated) = ( (ADC(raw) x CAL_ADC15VREF_FACTOR / 2^15) x (CAL_ADC_GAIN_FACTOR / 2^15) ) + CAL_ADC_OFFSET

    Regarding this information I have a question about accuracy of ADC and VREF data in Flash memory.

    I am using the MSP430G2553 device and the datasheet gives an accuracy of +/-6% for the voltage
    reference. The calibration correction data in flash memory (probably measured after production and
    programmed individually), CAL_ADC_25VREF_FACTOR, is stored with 15-bit resolution, i.e. 1/32768
    or about 30 ppm, at room temperature.

    Regardless of effects from power supply, temperature and aging: is it safe to assume that the
    accuracy of the ADC data measured using the internal voltage reference is significantly better
    than the +/-6%, let's say at least 0.1% or 1000 ppm?

    And what about the ADC calibration data like offset and gain? The datasheet talks about
    +/-1 LSB. Is this without the calibration data? Because these values are measured, or let's say
    expressed, with 15-bit resolution and may have an accuracy in the range of 30 ppm (effects from
    voltage, temperature and aging not considered)? +/-1 LSB means about +/-1000 ppm for a 10-bit ADC.

    So could I expect an overall accuracy in the range of +/-1 LSB or +/-1000 ppm when using
    all calibration data correctly, with a perfect 3.0V power supply, at 25°C, on new
    devices?

  • Hello everyone,

    What is the value of the offset and gain error of the ADC in the MSP430F55xx? How do I calculate or measure the offset and gain error of the ADC in that controller, and how can I remove the offset and gain error with a software algorithm?

    thanks

    sunil 

  • Did you actually read the thread before posting? I'd say, all of this has been covered above.

  • What .h files contain the addresses of these calibration registers?  They aren't in the device-specific microcontroller .h file.

  • Hi Sean, you posted this a while back, but as I had the same question and initially couldn't find the answer anywhere I thought I'd go ahead and tack on my 2p.

    I couldn't find any pre-defined names for these calibration factors either, so just added them manually via a handful of pointers:

    // Pointers for calibration data - NOTE: MEMORY ADDRESSES ONLY VALID FOR MSP430G2XX2
    unsigned int *CAL_ADC_25VREF_FACTOR = (unsigned int *)0x10E6;
    unsigned int *CAL_ADC_GAIN_FACTOR = (unsigned int *)0x10DC;
    unsigned int *CAL_ADC_OFFSET = (unsigned int *)0x10DE;

    You can now refer to them as usual in your code, though don't forget the pointer * before the name! E.g.

    ADC_calib = (unsigned int)(((unsigned long)ADC_raw * *CAL_ADC_25VREF_FACTOR) / 32768);   // See the rest of this thread for more efficient ways of handling this math.

  • Oops. In relation to the above post, of course, the offset calibration is actually a 2's complement number, so the pointer type needs to reflect this:

    // Pointers for calibration data - NOTE: MEMORY ADDRESSES ONLY VALID FOR MSP430G2XX2
    unsigned int *CAL_ADC_25VREF_FACTOR = (unsigned int *)0x10E6;
    unsigned int *CAL_ADC_GAIN_FACTOR = (unsigned int *)0x10DC;
    int *CAL_ADC_OFFSET = (int *)0x10DE;
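
    With those pointers in place, a sketch of the complete correction for a G2xx2-class part (10-bit ADC assumed; the casts to 32-bit intermediates keep the /32768 from truncating everything to 0 or 1, which is what happens in plain 16-bit math):

    unsigned long tmp;
    long corrected;

    tmp = (unsigned long)ADC_raw * *CAL_ADC_25VREF_FACTOR;   // reference correction
    tmp >>= 15;                                              // /32768
    tmp = tmp * *CAL_ADC_GAIN_FACTOR;                        // gain correction
    tmp >>= 15;                                              // /32768
    corrected = (long)tmp + *CAL_ADC_OFFSET;                 // signed offset last

    if (corrected < 0)    corrected = 0;                     // clamp to the 10-bit range
    if (corrected > 1023) corrected = 1023;
    ADC_calib = (unsigned int)corrected;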
