
Linux/TMP422: Linux driver temperature calculation

Part Number: TMP422

Tool/software: Linux

Hello team,

I have a question about the TMP422 Linux driver provided on the TI website (http://www.ti.com/tool/TMP421SW-LINUX).

When we obtain temperature readings through the driver, we get strange temperature values.

Looking at the source code, I found that a calculation like (temp * 1000 + 128) / 256 is performed in the driver when returning the temperature data.

1) Could you please tell me the reason for this calculation?

2) What is the best way to calculate the actual temperature in degrees C from the output data?

Best regards,

  • Hi Taketo-san,

    These drivers were made by the open source community, so TI is not able to provide much support for them.

    Can you tell me what values you received?

    The temperature may be reported in deci, centi, or milliCelsius in order to avoid the use of floating point (fractional) variables.

    You can look at the first byte (8 bits) of the result to get an integer temperature directly, without any conversion. This is true only when RANGE=0 in the Configuration register. When RANGE=1, you must subtract 64.
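    As a minimal sketch (assuming you already have the 16-bit result register value in hand; the variable names are illustrative, not from the driver):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t raw = 0x1950;   /* example result register value */
        int range = 0;          /* RANGE bit from the Configuration register */
        int temp_c;

        /* The high byte alone is the integer temperature in °C. */
        if (range)
            temp_c = (int)(uint8_t)(raw >> 8) - 64;  /* RANGE=1: subtract 64 */
        else
            temp_c = (int8_t)(raw >> 8);             /* RANGE=0: signed byte */

        printf("%d C\n", temp_c);                    /* prints 25 */
        return 0;
    }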

    Refer to pages 11 and 12 of the datasheet for more information.

    Thanks,
    Ren
  • Hello Ren-san,

    What I received was, for example, 25000 when measuring at 25°C.
    The output data is calculated as (16-bit temp data * 1000 + 128) / 256. I would like to know why the driver performs this calculation.

    I know the first byte represents the integer temperature directly, so I would really just like the raw data. However, the driver performs this calculation anyway, and I want to understand the reason for it. Do you know why the driver includes this calculation?

    Best regards,
  • Taketo-san,

    See Table 2 of the datasheet. Notice that the temperature result is binary weighted. Bits 0-3 are always zero, so they can be ignored or discarded. Bit 7 has a weight of 0.5°C, bit 6 has a weight of 0.25°C, and so on. For this reason, the 16-bit value can be converted to a floating point temperature by performing a right shift of 8.
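    For example, take the value 0x1950: the high byte 0x19 is 25, and in the low byte 0x50 only bits 6 and 4 are set, contributing 0.25°C + 0.0625°C = 0.3125°C, so the reading is 25.3125°C.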

    2^1 = 2 and x << 1 is equivalent to x * 2
    2^-1 = 0.5 and x >> 1 is equivalent to x * 0.5 and x / 2

    If we have the raw 16-bit result from the temperature register stored correctly in a signed 16-bit integer datatype, then negative numbers are naturally handled correctly under 2's complement rules. If this same value were stored in a 32-bit datatype without sign extension, it would not resolve as negative, but I digress.
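    For example, a minimal sketch of assembling the two register bytes so the sign is preserved (the byte values are illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t hi = 0xE4, lo = 0x00;   /* example bytes: -28°C at RANGE=0 */
        int16_t raw = (int16_t)(((uint16_t)hi << 8) | lo);
        printf("%d\n", raw / 256);      /* prints -28 */
        return 0;
    }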

    If we wanted to take that 16-bit signed integer and convert it to °C using only integer math operations, we could emulate a right shift of 8 by dividing by 256. However, the fractional bits would then fall off the right side and be lost unless we stored the result in a float datatype. (Additionally, float datatypes won't allow you to >> or <<.) Instead, if we scale the value by 1000 before dividing by 256, we are left with the temperature in milliCelsius. If the temperature sensor is configured for RANGE=1, then you would need to subtract 64°C afterwards. As for the +128: since 128 is half of 256, adding it before the division most likely just rounds the result to the nearest milliCelsius instead of truncating. In summary, here are correct conversions:

    long mC;                                /* temperature in milliCelsius */

    if (RANGE) {
        signed short x = 0x5950;            /* example raw value, RANGE=1 */
        mC = (x * 1000L) / 256 - 64000;     /* remove the 64°C offset */
    }
    else {
        signed short x = 0x1950;            /* example raw value, RANGE=0 */
        mC = (x * 1000L) / 256;
    }

    In both cases, mC = 25312, or 25.312°C (the exact value is 25.3125°C; the integer division truncates).
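    If the +128 in the driver is indeed a rounding term, then the formula Taketo-san quoted can be reproduced like this (a sketch mirroring the quoted expression, not code taken from the driver source):

    #include <stdio.h>

    int main(void)
    {
        signed short x = 0x1950;            /* same example value as above */
        long mC = (x * 1000L + 128) / 256;  /* add half the divisor to round */
        printf("%ld\n", mC);                /* prints 25313: 25312.5 rounded up */
        return 0;
    }

    Note that with C's truncating division this rounding is only exact for positive readings; for values that can go negative, kernel code typically uses the DIV_ROUND_CLOSEST macro instead.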

    Thanks,
    Ren