
Constant for one-point-calibrated temperature calculation

Other Parts Discussed in Thread: MSP430G2553

As I continue my ongoing fight to fully comprehend the MSP430 calibration constants and how to use and recalculate them so measurements correspond to the real world:

The msp430g2553 data sheet has the usual footnote describing how to convert ADC measurements from channel 10 into a temperature:

The following formula can be used to calculate the temperature sensor output voltage:
VSensor,typ = TCSensor × (273 + T [°C]) + VOffset,sensor [mV]

where TCSensor is 3.55 mV/Cel.  VOffset,sensor is presumably a part-specific calibration value.

The msp430g2x33_adc10_temp.c example has the following calculation to convert an ADC10 measurement of INCH10 to a temperature:

    // oC = ((A10/1024)*1500mV - 986mV)/3.55mV = A10*423/1024 - 278
    temp = ADC10MEM;
    IntDegC = ((temp - 673) * 423) / 1024;

Where did the 986mV come from?  It corresponds to an offset of 278 degrees, and seems to suggest the zero volt measurement would be at about 5 degrees below absolute zero.  The equation from the data sheet suggests it should be 969mV (273 Cel * 3.55 mV/Cel).

Is it a one-point calibration constant from a specific device that's been folded into the example equation?  An undocumented adjustment for self-heating?  A mistake?

  • Hi, it seems that TI is only describing the temperature sensor characteristic within the temp range -40 degC up to 85 degC. The offset error seems to be a measured value that fits quite well for the specified temp range... In some other MSP430 data sheets the 986mV is still spec'd. However, because of the big variation you have to do at least a one-point calibration. Once you do a one-point calibration, the offset error parameter is not so important anymore (because the calibration absorbs the offset ;o) )...


  • Sure, that works, and I'm happily proceeding based on a one-point calibration at 30 Cel. (It would be nice if I could trust the factory-calibrated two-point calibration values at 30 Cel and 85 Cel and interpolate, but at least one MSP430G2553 was delivered with a zero value for the 85 Cel calibration, so I've completely given up on trusting the factory calibrations, and will be doing my own with a DS18B20.)

    The theoretical question, though, still stands: why is the value 986 mV instead of 969 mV?  If it's a measured value, under what conditions was it measured?

    In practice, treating the INCH_10 sample as a standard ADC value, adjusting it for offset and gain error, and then plugging it into the equation, 986 produces the right temperature on at least one part and 969 doesn't.  I understood the purpose of the one-point calibration to be to eliminate ADC-specific gain and offset errors when all you care about is the temperature.  If there's a part-specific error isolated to INCH_10 that also applies, it can't be packed into that constant.

    A plausible explanation would be that TC_sensor is temperature-sensitive, with the value 3.55 mV/Cel valid only from -40 Cel to 85 Cel, and the 986 mV correcting for non-linearities outside that range, which may be what you were saying.

    I'll go with that theory unless somebody can teach me better.
