Need details on LMP90098 background calibration

Other Parts Discussed in Thread: LMP90100, LMP90098

I have read application note "AN-2180 LMP90100 True Continuous Background Calibration" but I still have a few questions that directly affect my project, so I'd like some input from TI.

I have an application where I'm trying to measure a resistive sensor of value ~6K ohm with 0.25 ohm resolution, using the LMP90098. Biasing for the sensor consists of a resistive divider with a 100K ultra-low-TCR resistor (< 1 ppm/°C) on top and my sensor at the bottom, with the sensor's lower end connected to ground. The measurement is made across the sensor, and filtering is provided by a 100 nF capacitor in parallel with the sensor. The reference voltage for the LMP90098 is the same voltage applied to the top of the divider, so the circuit should be ratiometric as I understand it. If it matters, the reference voltage is 5V supplied by a TPS71750 regulator (since the circuit is ratiometric, my understanding is that a more precise reference is not warranted).
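
For concreteness, here are the numbers I'm designing around (a minimal sketch using only the ideal divider and ADC transfer function, ignoring the LMP90098's own error terms; this is also where the ~10 uV figure in question 1 below comes from):

```python
# Sanity check of the divider numbers (ideal math only, ignoring the
# LMP90098's own offset/gain/INL errors).

VREF = 5.0        # V, also drives the top of the divider (ratiometric)
R_TOP = 100e3     # ohm, ultra-low-TCR resistor
GAIN = 8          # PGA gain
FS_CODES = 2**23  # positive full scale of the 24-bit two's-complement output

def sensor_voltage(r_sensor):
    """Voltage across the sensor at the bottom of the divider."""
    return VREF * r_sensor / (R_TOP + r_sensor)

def adc_code(r_sensor):
    """Ideal output code: Vin * gain / Vref scaled to 2^23."""
    return sensor_voltage(r_sensor) * GAIN / VREF * FS_CODES

dv_uV = (sensor_voltage(6000.25) - sensor_voltage(6000.0)) * 1e6
print(f"V(6K) = {sensor_voltage(6000):.6f} V, ideal code = {adc_code(6000):.0f}")
print(f"0.25 ohm at 6K ~= {dv_uV:.1f} uV ~= {adc_code(6000.25) - adc_code(6000):.0f} codes")
```

So 0.25 ohm at the 6K operating point is worth roughly 11 uV at the ADC input, or about 150 codes at gain 8.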

The physical process I'm trying to measure causes the resistance to first rise along a slow exponential curve (from 2K ohm to about 7K ohm over 2 minutes); then I trigger an external event which makes it fall from 7K ohm to 6K ohm, again along an exponential curve that should settle in roughly 40 seconds. After this last settling, I measure the sensor's voltage.

The LMP90098 is configured for gain 8, buffer on, 1.6775 SPS data rate, reading a single channel in ScanMode2, using background calibration mode 2.

It appears that the background calibration feature of the LMP90098 is introducing some imprecisions in my measurements, so much so that I've chosen to turn it off for now, which seems to have improved the repeatability of my measurements.

With this in mind, here are my questions:

1. AN-2180 says the calibration procedure depends on the input signal being "approximately DC". From my description above, could my signal be considered "approximately DC", enough that the calibration does not introduce an error above 10 uV? After the signal is fully settled I assume the calibration will eventually settle as well, but my process must be measured very shortly after it settles so I just cannot wait.
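
For context, here is a rough estimate of how fast the signal is still moving at the instant I take my reading. The "five time constants in 40 s" assumption (tau ≈ 8 s) is mine, not a measured value:

```python
import math

# Rough "how DC is my signal" estimate at the measurement instant.
# Assumption on my part: the 7K -> 6K exponential settles in ~40 s, which I
# take as roughly five time constants, i.e. tau ~= 8 s.
TAU = 8.0               # s (assumed)
STEP_OHM = 1000.0       # 7K -> 6K
VREF, R_TOP = 5.0, 100e3
T_CONV = 1.0 / 1.6775   # s, one conversion at 1.6775 SPS

def dR_dt(t):
    """Rate of change of the sensor resistance during the final exponential."""
    return (STEP_OHM / TAU) * math.exp(-t / TAU)

def dV_dR(r_sensor):
    """Divider sensitivity in V/ohm."""
    return VREF * R_TOP / (R_TOP + r_sensor) ** 2

drift_ohm = dR_dt(40.0) * T_CONV            # movement during one conversion at t = 40 s
drift_uV = drift_ohm * dV_dR(6000.0) * 1e6
print(f"~{drift_ohm:.2f} ohm (~{drift_uV:.0f} uV) of movement per conversion at t = 40 s")
```

Under that assumption the input is still moving by roughly 20 uV per conversion when I measure, which is why I'm not sure it qualifies as "approximately DC".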

2. How often is the calibration performed? Also, if I'm scanning multiple channels, is offset calibration performed on a single channel or on every channel? What about gain calibration? If the answer is "single channel" to either of these questions, is a specific channel used in the calibration (say the first or the last in the scan sequence) or is it random?

The datasheet says this:

If operating in BgcalMode2, four channels (with the same ODR) are being converted, and FGA_BGCAL = 0 (default), then the ODR is reduced by:

1. 0.19% of 1.6775 SPS
2. 0.39% of 3.355 SPS
3. 0.78% of 6.71 SPS
4. 1.54% of 13.42 SPS
5. 3.03% of 26.83125 SPS
6. 5.88% of 53.6625 SPS
7. 11.11% of 107.325 SPS
8. 20% of 214.65 SPS

I had a theory (which fits the reported ODR reductions exactly) that calibration is performed exactly once every full scan, on a single channel, using a single conversion at the 214.65 SPS data rate. At a 1.6775 SPS data rate, the LMP90098 would then read 4 channels at 1.6775 SPS and make one extra reading at 214.65 SPS, and the combined rate is indeed 0.19% below the rate of reading 4 channels at 1.6775 SPS alone (a quick numeric check is sketched below). I mention this because it would suggest that one channel out of the 4 is singled out to perform the calibration, and I'd like to know whether it is a specific channel or a random one.

Say it's always the last channel in the sequence. I have devised a strategy whereby, instead of scanning a single channel, I'd scan two channels, where the second one would have both inputs tied to ground. That is a DC signal that the calibration could use to be more accurate. It would cut my data rate in half, but my application can tolerate that if it means less noise in the calibration. However, to do that I need to be sure the calibration is performed on the channel I'd create for this specific purpose.
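
Here is the arithmetic behind that theory. The "one extra conversion at 214.65 SPS per scan" model is purely my guess; only the ODR and percentage lists come from the datasheet quote above:

```python
# Check: does "4 conversions per scan plus one extra conversion at 214.65 SPS"
# reproduce the ODR reductions quoted above? The model itself is my guess;
# only the ODR and percentage lists come from the datasheet.

ODRS = [1.6775, 3.355, 6.71, 13.42, 26.83125, 53.6625, 107.325, 214.65]  # SPS
QUOTED = [0.19, 0.39, 0.78, 1.54, 3.03, 5.88, 11.11, 20.0]               # %
N_CH = 4

for odr, quoted in zip(ODRS, QUOTED):
    t_scan = N_CH / odr            # time for 4 conversions at the channel ODR
    t_cal = 1.0 / 214.65           # one extra conversion at the fastest ODR
    reduction = 100.0 * t_cal / (t_scan + t_cal)
    print(f"{odr:9.5f} SPS: model {reduction:5.2f} %  vs  datasheet {quoted:5.2f} %")
```

The model reproduces all eight quoted percentages to within rounding, which is why I find it plausible.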

Also, what would be the effect on the gain calibration of using a 0V input? AN-2180 says this:

Gain calibration for the PGA (programmable gain amplifier, 1x to 8x) in the modulator is done by obtaining the output at alternate input samples with the FGA and buffer OFF. The PGA gain coefficient is obtained by dividing the difference of these outputs by the difference of these alternate input samples. The digital output code is multiplied by the PGA gain factor to correct for the PGA error. 

The wording suggests that reading a full-scale input would produce better resolution for the gain correction scheme, and conversely, that reading 0V might introduce huge errors (on the order of %, not ppm). Is my interpretation correct? That would suggest I'd be better off not performing gain calibration on a channel with both inputs grounded.
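
To illustrate the concern numerically (a toy simulation, assuming the gain coefficient really is delta-output divided by delta-input as the quoted passage describes; the gain error and noise figures below are made up, purely for illustration):

```python
import random

# Toy model of the concern above: if the gain coefficient is estimated as
# (difference of outputs) / (difference of inputs), then a near-zero input
# makes both differences tiny and the ratio very sensitive to noise.
# The gain error and noise figures below are made up, purely for illustration.

random.seed(0)
TRUE_GAIN = 1.001       # pretend the PGA is 0.1% high
NOISE_RMS_UV = 5.0      # assumed uV RMS noise per conversion

def gain_estimate(delta_in_uv):
    """One delta-output / delta-input estimate with additive noise."""
    noise = random.gauss(0.0, NOISE_RMS_UV) * 2 ** 0.5   # noise on a difference of two samples
    return (TRUE_GAIN * delta_in_uv + noise) / delta_in_uv

for delta_in_uv in (500_000.0, 5_000.0, 50.0):   # near full scale ... near 0 V
    errs = [gain_estimate(delta_in_uv) / TRUE_GAIN - 1 for _ in range(10_000)]
    rms_pct = 100.0 * (sum(e * e for e in errs) / len(errs)) ** 0.5
    print(f"delta_in = {delta_in_uv:>9.0f} uV -> gain error ~ {rms_pct:.4f} % RMS")
```

With these made-up numbers the estimated gain is good to ppm levels near full scale but degrades to percent levels as the input approaches 0V, which is exactly what I'm worried about.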

3. Although this might not be directly related to the calibration, it is a problem I've faced with this part: say I'm using a gain of 8 and reading an ADC code of 4,000,000 on a fully DC signal. If I change the gain to 1 and restart the conversion, I'd expect to read an ADC code of 500,000. However, I usually have to discard the first result after changing the gain, because it lands somewhere between 500,000 and 4,000,000 even though I restarted the conversion. There is more detail in the EE Stack Exchange question that I posted. Is this expected? Could the delay in getting a "good" result be due to the calibration?
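
In pseudo-Python, this is the workaround I've ended up with (the driver interface here is hypothetical, just standing in for my real SPI code; it is not an LMP90098 API):

```python
from typing import Protocol

class Lmp90098Driver(Protocol):
    """Hypothetical interface standing in for my SPI driver -- not a TI API."""
    def write_channel_gain(self, gain: int) -> None: ...
    def restart_conversion(self) -> None: ...
    def wait_for_drdy(self) -> None: ...
    def read_adc_code(self) -> int: ...

def read_after_gain_change(drv: Lmp90098Driver, new_gain: int) -> int:
    """Change the gain, restart, and throw away the first (stale-looking) result."""
    drv.write_channel_gain(new_gain)     # e.g. gain 8 -> gain 1
    drv.restart_conversion()
    drv.wait_for_drdy()
    _stale = drv.read_adc_code()         # in practice: somewhere between old- and new-gain codes
    drv.wait_for_drdy()
    return drv.read_adc_code()           # ~4,000,000 / 8 = 500,000 expected for the example above
```

I'd prefer not to waste that first conversion if there is a cleaner way to do this.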

4. This is a more open-ended question, but I'd still like to get some input if possible. In one experiment I switched to background calibration mode 3 (offset and gain estimation) before starting a measurement, and then at some point during the final stage of my process (the exponential from 7K ohm to 6K ohm) I switched back to background calibration mode 2. When I do this, I see step changes in the signal that cannot be attributed to noise or to a physical phenomenon of my process; it must be a recalibration. The calibration was off for only a few minutes and environmental conditions didn't change significantly (maybe a couple degrees C at most), yet the step change was on the order of tens of uV, well above the noise plus whatever could be explained by small temperature changes. Between measurements, when background calibration is enabled, my sensor is exposed to the environment and its reading is likely to fluctuate by a few ohms (at a baseline value of 2K ohms). Could a calibration error, due to my signal having too much "noise", explain the step changes above?
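
For scale, here is what that baseline fluctuation means at the ADC input (same ideal divider math as earlier; the drift values are illustrative, not measurements):

```python
# How much input movement does the calibration see between measurements,
# when the sensor sits near 2K and drifts by a few ohms? Same ideal divider
# math as above; the drift values are just examples, not measurements.
VREF, R_TOP = 5.0, 100e3

def sensor_voltage(r_sensor):
    return VREF * r_sensor / (R_TOP + r_sensor)

for drift_ohm in (1.0, 3.0, 5.0):
    delta_uV = (sensor_voltage(2000.0 + drift_ohm) - sensor_voltage(2000.0)) * 1e6
    print(f"{drift_ohm:.0f} ohm of drift at a 2K baseline ~= {delta_uV:.0f} uV at the ADC input")
```

A few ohms at the 2K baseline works out to tens or hundreds of uV of input movement, which is comparable to the step changes I see and well above my 10 uV budget.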

I may have a few more questions later but that is all for now. I'd really appreciate a response from TI, especially to questions 1 and 2.

Thanks