
How to do ADC calibration in MSP430G2553

Other Parts Discussed in Thread: MSP430G2553, ADS1248

I have to calibrate my ADC results.

Suppose I have an ADC input and measure 1.4V, but the actual value was 1.5V.

So what should the calibration factor be?

1. 1.5/1.4 = 1.071. So whenever I read an ADC voltage, I multiply it by 1.071.

2. Or: 1.5 - 1.4 = 0.1. Whenever I measure a voltage, I add 0.1 to it to get the correct value.

3. Or I measure at the two extremes of the range. Say I have a min value = 0V and a max value = 2.5V.

On measuring I get the values x and y respectively. So how do I get the mathematical relation between them?

  • Not so easy. There are more ADC errors than the offset error you are trying to address. First you need to know the possible ADC errors, then you need to properly characterize the ADC, and only then compensate for the errors. This could be a good starting point:

    http://www.maximintegrated.com/app-notes/index.mvp/id/748

    Sorry, TI, perhaps there's a similar appnote from TI or BB, but this one popped up first ;)

  • Indeed, there are more errors than just an offset error. However, most of them are negligible compared to the two most important ones: offset error and gain error. Especially since external circuitry adds to these too, and it can be calibrated out in the same step.

    The best way is to do two measurements. Say, 0.5V and 2V. The difference between the two readings (R) corresponds to 1.5V. So divide it by 1500 and you know the number of counts per mV. That's your gain (G).
    Now multiply it by 500 and subtract the result from the 0.5V reading. What's left is your offset (O).

    Now all you have to do to get the measured voltage is to do

    U(mV) = (R-O)/G

    This compensates offset and gain for all analog circuitry, voltage dividers and whatever, between the point where you applied the 0.5/2V and the ADC output.

  • Hi Jens,

    I will use (Vm = measured voltage) and (Va = actual voltage).
    I have to measure a voltage range (0 - 2.5V) with a 12-bit ADC and the 2.5V internal reference.

    So I obtain digital data for:
    1. Vm_2.5 = 4090
    2. Vm_0.5 = 814
    3. Vm_2.0 = 3271

    4. Va_2.5 = 4095
    5. Va_0.5 = 819
    6. Va_2.0 = 3276

    A) The difference between the two readings (R) corresponds to 1.5V. So divide it by 1500 and you know the number of counts per mV. That's your gain (G).

    G = (Vm_2.0 - Vm_0.5)/1500
    = (3271-814)/1500
    = 1.638

    B) Now multiply it by 500 and subtract it from the 0.5V value. The result is your offset (O).

    O = Va_0.5 - (G * 500)
    = 819 - 819
    = 0

    C) U(mV) = (R-O)/G

    So let Vm_2.5 = 4090.
    So U = (4090 - 0)/1.638
    = 2496mV


    My questions:
    1. The offset comes out to be zero. Am I doing it correctly?
    2. Or is the offset zero because the error gets compensated by the gain?

  • Aamir Ali said:
    1. Offset comes out to be zero. Am I doing it correct.

    Almost. For the gain you used the measured values, but for the offset you used the actual (Va) value. You should use the same set of data for the whole calculation. The offset seems to be -5 counts in this case (814 - 819). So you have an offset of about -3mV.
    Perhaps you should take multiple readings and work with the average values. There's always some noise, or even current ripple on GND.

    Aamir Ali said:
    2. Or offset is zero as error gets compensated by gain.

    No. Gain and offset errors are independent. That's why I suggested not using 0V as the lower value (to avoid clipping at a reading of 0).
    If you have a multimeter, its precision is usually specified as, e.g., ±3% + 10 digits, where the (up to) 10 digits are the possible offset error and the ±3% is the gain error.

  • But in the calculation method you suggested, we haven't considered any actual value (Va); we get the offset and gain errors from the measured values alone. So how does it compensate the error when no actual value is considered while calibrating?

  • The 0.5V and 2V values are actual values too. And they are known. So by knowing the resulting readings for two points on the input range, you can calculate the reading for any point on the input range (presuming that the transfer curve is linear - the calibration does not remove any nonlinearity error).

    The theoretical resolution of the ADC is 2500/4095 = 0.6105mV/count. So '0' means 0V, '1' means 0.61mV, '2' means 1.221mV, etc.

    However, with the calculation described above, we know that we have 1.638 counts per mV, or 1/1.638 = 0.6105mV/count. Which is, wow, exactly the theoretical value. On this single specific MSP.
    But we also know that the result is up to 5 counts (= 3.05mV) higher than it should be. So if you read 3.05mV, it is really 0V; 5.05mV is really 2mV; and so on. That's the offset.

    Since you know the value of each count, and the number of counts reported even if there is no voltage, you know what any possible ADC reading means: subtract the offset and multiply the remaining counts with the measured resolution.

    This kind of two-point calibration is pretty much standard for all linear systems.

  • Thanks for your detailed reply.

    One last question: does the same apply to a bipolar ADC? I have an ADS1248 attached to an MSP430G2553 to measure a thermocouple, and for ambient temperature compensation I am using a TC1047A, which is attached to the 12-bit ADC of the MSP430.

    So to compensate the error for the bipolar ADC, I have to measure between +60 and -60mV, considering it's linear.

    So first I select the lower and higher end voltages, i.e. -40 and +40mV.

    After that I get the gain error from these, and then the offset error.

  • Aamir Ali said:
    Does same apply for bipolar adc.

    Yes. Bipolar ADCs just have a different input voltage range; other properties are similar.
