Hello,
We are having difficulty calculating the Sensitivity factor S0.
After following the calculations presented in the TMP006 User's Guide (SBOU107.pdf), we end up with a sensitivity factor that is negative and outside the typical range specified in the guide. As part of the calibration procedure, we've taken a set of raw temperature and voltage readings from the TMP006 (see table below). The object being measured is a large anodized aluminum block placed about 1/16 inch in front of the sensor, blocking the entire field of view. The block's temperature is controlled and accurately measurable, and we have painted the block with lampblack paint to ensure high emissivity.
Here is the sample data we are using:
Tobj meas (deg C) | Vraw  | Traw
28.2              | 65173 | 3572
31                | 64820 | 3728
35                | 64588 | 4052
40                | 64447 | 4436
45                | 64272 | 4784
50                | 64094 | 5226
We've had two engineers at our company independently calculate the sensitivity factor from this data following the User's Guide, and they both arrived at exactly the same result, which doesn't make sense: the plot of the slope of the calibration function vs. (Tobj^4 - Tdie^4) has a negative slope, instead of the positive slope shown on page 11 of the User's Guide. We are confident our math exactly matches what is presented there. We are using kelvin everywhere a temperature appears in the calculations, even though the User's Guide doesn't explicitly say to.
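For reference, here is a minimal sketch of how we reduce the raw readings and fit the slope. The register conversions (16-bit two's-complement sensor voltage with a 156.25 nV LSB, and die temperature as the register value shifted right by 2 with a 0.03125 deg C LSB) are our reading of the TMP006 datasheet, and this simplified fit ignores the Vos and c2 correction terms in the full calibration function, so treat it as an approximation rather than the exact SBOU107 procedure:

```python
# (Tobj measured in deg C, Vraw, Traw) triples from the table above.
data = [
    (28.2, 65173, 3572),
    (31.0, 64820, 3728),
    (35.0, 64588, 4052),
    (40.0, 64447, 4436),
    (45.0, 64272, 4784),
    (50.0, 64094, 5226),
]

def vsensor_volts(vraw):
    """Sensor voltage register: 16-bit two's complement, LSB = 156.25 nV (assumed)."""
    if vraw >= 0x8000:
        vraw -= 0x10000
    return vraw * 156.25e-9

def tdie_kelvin(traw):
    """Die temp register: value >> 2, LSB = 0.03125 deg C (assumed), then to kelvin."""
    return (traw >> 2) * 0.03125 + 273.15

# Least-squares slope of Vsensor vs (Tobj^4 - Tdie^4), everything in kelvin.
xs, ys = [], []
for tobj_c, vraw, traw in data:
    tobj = tobj_c + 273.15
    tdie = tdie_kelvin(traw)
    xs.append(tobj ** 4 - tdie ** 4)
    ys.append(vsensor_volts(vraw))

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
s0 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"S0 estimate: {s0:.3e} V/K^4")
```

With this data set the fitted slope comes out negative, which matches what both of our engineers found by hand.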
Has anyone here had a similar experience or has any suggestions for what might be going on?
Thanks,
Chip Lukes