
ADS1256: Temperature drift

Part Number: ADS1256
Other Parts Discussed in Thread: LP2981, ADS1262

Hi!

I'm trying to build a temperature compensation system for an ADC breakout board, the "Raspberry Pi AD/DA Expansion" from Waveshare.
It uses an ADS1256 as the ADC chip, with an external LM285-2.5 reference on its PCB.

So, I'm trying to compensate for the temperature changes in the room where a magnetic sensor is monitored. Because there are variations of about 10°C, I think my readings could be more accurate if I add a temperature compensation system.

Looking at the specifications on both ADS1256 and LM285-2.5, we have:

LM285-2.5 - Temperature coefficient: 80 ppm/°C
ADS1256 - Offset drift: ±100 nV/°C
ADS1256 - Gain drift: ±0.8 ppm/°C

So, can I measure the room temperature with a high-precision thermometer (let's say it reads 30°C) and, every minute, subtract a fixed "no drift" optimal value from it (let's say 20°C)?
This way I get a list of values every minute (30°C - 20°C = 10°C, 31°C - 20°C = 11°C, and so on), and I multiply each result by the sum of the three drift values above (80 ppm/°C + 100 nV/°C + 0.8 ppm/°C = 0.0000809 V/°C). So I can get a correction value of, let's say, 11 * 0.0000809 = 0.0008899 V. This final value would then be subtracted from the original reading of the sensor monitored by the ADC.

Does this make sense, or is it wrong?

  • Hi Fábio,

    Your idea for correcting for temperature related effects is a good one; however, I might suggest implementing the temperature corrections a bit differently...

     

    1) First, I'd say that it is important for you to distinguish between the offset and gain errors.
    By grouping these errors together, you won't get as good of a result. The big difference is that offset errors are generally constant (ignoring drift) and don't depend on the applied input signal. Gain errors are dependent on the applied signal and will scale proportionally. Just as the ADS1256 calibration allows for separate correction of offset and gain errors, I would recommend compensating for offset drift and gain error drift separately. Note: The reference drift error will look like an ADC gain error drift.
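    To make the distinction concrete, a correction applying the two error terms separately might look like this (a sketch with made-up numbers; `correct_reading` and its arguments are illustrative names, and the real offset and gain values would have to be measured, not taken from the datasheet):

```python
def correct_reading(raw_volts, offset_volts, gain_error_ppm):
    """Apply offset and gain corrections as two separate steps.

    offset_volts   -- measured system offset (input-referred), in volts
    gain_error_ppm -- measured gain error, in parts-per-million
    """
    # Offset error is constant and independent of the input: subtract it.
    no_offset = raw_volts - offset_volts
    # Gain error scales with the signal: divide by (1 + error) to undo it.
    return no_offset / (1.0 + gain_error_ppm * 1e-6)

# Hypothetical example: 2 uV of offset and 40 ppm of gain error.
corrected = correct_reading(1.25, offset_volts=2e-6, gain_error_ppm=40.0)
```

    Note that reference drift would fold into the gain term here, per the point above that reference drift looks like an ADC gain error drift.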
     

    2) Using the datasheet's typical specifications for drift is risky.
    The typical drift specifications are the average drift performance (absolute value) of a large population of devices. Your specific device may or may not actually match the typical specifications. It could be better or it could be worse, and, even more importantly, there is no telling whether the direction of the error will be positive or negative. In fact, each of these errors may have a different sign, so it would be unlikely for them to add directly. I'd even go so far as to say that if you're not able to measure the actual temperature drift errors, then there would likely be little to no benefit from guessing at the correction factors.

    A general strategy for removing temperature related errors:

    To calibrate for temperature related errors, you would likely need an accurate calibration source that is stable across temperature. You could then perform calibration as normal, but do it at multiple temperatures. This way, you determine actual system offset and gain error corrections that can be recalled at different ambient temperatures. Obviously, the more temperatures you use for calibration the better; however, it's impractical (and impossible) to perform calibration at every ambient temperature. Therefore, you might consider calibrating at just a few different temperatures and interpolating between the calibration points (using a piece-wise linear approximation). So for example, if at 20°C you measure the offset error to be 1 uV, and then measure 2 uV @ 30°C, you would be fairly safe to assume a 1.5 uV offset @ 25°C. The same logic could also be applied to the gain error.
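    A piece-wise linear lookup over those calibration points could be sketched as follows (the `interpolate` helper and the table are hypothetical, using the 20°C/30°C offset numbers from the example):

```python
import bisect

def interpolate(cal_points, temp_c):
    """Piece-wise linear interpolation over sorted (temp_C, error) pairs."""
    temps = [t for t, _ in cal_points]
    # Clamp to the nearest calibration point outside the measured range.
    if temp_c <= temps[0]:
        return cal_points[0][1]
    if temp_c >= temps[-1]:
        return cal_points[-1][1]
    # Find the bracketing pair and interpolate linearly between them.
    i = bisect.bisect_right(temps, temp_c)
    (t0, e0), (t1, e1) = cal_points[i - 1], cal_points[i]
    frac = (temp_c - t0) / (t1 - t0)
    return e0 + frac * (e1 - e0)

# Offsets measured at 20 C and 30 C, as in the example above:
offset_table = [(20.0, 1e-6), (30.0, 2e-6)]
interpolate(offset_table, 25.0)   # -> 1.5e-6 V
```

    The same table-plus-interpolation approach would be used for the gain error corrections, just with gain values instead of offsets.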

    This method isn't perfect, but it does provide better accuracy than simply calibrating at one ambient temperature (that is, assuming that your calibration source is more accurate than the error you're trying to account for and remove). Also, this is not the only option to remove temperature related errors (you could put everything in a more tightly temperature controlled room or oven to avoid additional drift errors); however, this is a common and cost-effective way to do it.

     

    Best Regards,
    Chris

  • Hi Chris, 

    First of all, thank you very much for all the information! Unfortunately, my magnetic sensors are buried in the ground right now, some kilometers away from me, but at least I can take daily readings from them in real time, and I do have a reference sensor at another remote location. There may be some natural differences between the reference sensor and mine, but I think I can use it for calibration. About the temperatures, though: I only have the temperature values from my own sensor, not from the reference one.

    Please take a look at the pictures below. We know the "correction" formula I showed you before is not the most correct way, but at least I'm getting a better result than leaving the data uncorrected. Explaining the temperature sensors: when temperature sensor 1 (installed around the ADC and its voltage reference chip) goes up, the magnetic sensor output goes down, so I need to "lift" the magnetic sensor voltage to compensate. With temperature sensor 2 (installed around the magnetic sensors, buried with them), it's the opposite: when it shows a higher temperature, the magnetic sensor reading goes up too, so I need to lower the magnetic sensor value to compensate. The ADC and its voltage reference are installed outdoors, while the magnetic sensors are buried, which is why the variation on temperature sensor 2 is much smaller.

    In some tests I did months before installing it, I artificially simulated a temperature increase at the ADC (temperature sensor 1) and at the magnetic sensor (temperature sensor 2) to test the linearity of the drifts: the ADC (temperature sensor 1) seems to drift linearly, but the magnetic sensor (temperature sensor 2) seems to react like a capacitor charge/discharge curve, exactly like this: https://static.lwn.net/images/cpumemory/cpumemory.57.png

    So, in your opinion, what can I change in the formula I posted to at least help correct these drifts in the ADS1256 ADC and the LP2981, along with the magnetic sensor? Maybe adding logarithms, I don't know... One problem is that I currently need to apply this compensation in real time, while the ADC is acquiring new values...

      

  • Hi Fábio,

    I'm a bit confused by what you mean by your "reference sensor". Is this just another remote sensor; what makes it a reference sensor? Is the temperature at the reference sensor constant, is it measuring the same input signal as your other sensor, or is it measuring a calibrated input signal? If this is just another sensor (with its own temperature drift errors) measuring a different input signal, then I'd question whether your "calibrated" result would be meaningful. I'd want to be convinced that you're calibrating against a good standard; otherwise, it would seem like you're simply changing the data.

    If your reference sensor is in fact a type of "reference" input, then the next question is, "how good of a reference is it?" If this reference sensor drifts, if it is not perfectly matched to your other sensor, if the wiring is mismatched, etc., then calibrating against this sensor will provide an imperfect calibration. Let's say your "reference sensor" has a 20% error; then using it for calibration cannot give you a result with better than 20% error. Additionally, if your uncalibrated result only had a 10% error to begin with, then calibrating against a reference source with a 20% error actually makes your "calibrated" accuracy worse. Therefore, it is important to make sure your "reference" is indeed accurate.

    One type of calibration you could do with the ADC is self-offset calibration. You could short the ADC inputs at the device (providing a known 0V input) and then measure the offset against temperature. This would only allow you to remove the ADC's offset drift error. However, since the ADC experiences the biggest change in ambient temperature, this type of calibration alone could provide a significant improvement in accuracy.

    ...Using the ADC's input chopping function would provide a similar result, except that it would occur in "real time" and would not require you to measure the offset and recall any calibration corrections. Are you currently using this mode?

    ...If you wanted to measure the sensor's offset error, you would need a way to provide a 0V input from the sensor (but it doesn't sound like that is something you have much control over). Likewise, for gain error you would need to provide an accurate, non-zero input (ideally a full-scale voltage, though any large non-zero voltage could suffice). If you apply this accurate voltage source directly at the ADC and measure across temperature, then you'd be able to calibrate for the ADC's gain error drift. And again, to calibrate for the sensor's gain error drift, you would need a way to provide an accurate (known) input to the sensor itself.

    Best regards,
    Chris

  • Hi Chris, 

    Well, it's just another sensor, with its own temperature drift errors, just like you said... It's from a commercial, accurate magnetometer that has its own guaranteed drift compensation system, but unfortunately its specs are closed, so I don't have access to its implementation details. But it is well known to have an accuracy of better than 0.25% and a resolution of 0.1 nanoTesla (1 nT = 0.00002 V), so, comparing the charts, I'm assuming that a difference of 5 to 7 nT is caused, in great part, by temperature drift on my sensor's side. I have seen similar charts from other commercial sensors in that same region with much smaller differences... So, based on all this, I'm treating that remote sensor as a reference. But, as you said, it's not a perfect reference, so I will try to do my best...

    About the chopping function: do you mean the ANALOG INPUT BUFFER of the ADS1256?

    And about the ADC self-offset calibration: by shorting, do you mean wiring the inputs to GND, measuring the remaining voltage over time (and across temperature changes), and then subtracting it in software so the reading reaches zero volts while the inputs are still shorted?

    About the gain drift calibration: I forgot to mention that I'm using two channels of the ADC to monitor the 5 V VCC of the magnetic sensor and the ADC's own VCC. Looking at these charts, the VCC variation tracks the temperature variation very closely, and I have all of those values logged over several days. So maybe it's possible to use them for the gain drift calibration of the ADS1256? (I'm assuming the other ADC channels, used for the magnetic sensors, will have a thermal drift similar to the two channels used for VCC monitoring.)

     

  • Hi Fábio,

    Fabio Oliveira said:
    So, based on all this, I'm treating that remote sensor as a reference. But, as you said, it's not a perfect reference, so I will try to do my best...

    Apart from being an accurate sensor, is it measuring the same signal or what other similarity with your other sensor does it have that would make it a reference?

     

    Fabio Oliveira said:
    About the chopping function: do you mean the ANALOG INPUT BUFFER of the ADS1256?

    I'm sorry, I was thinking about the ADS1262 when I wrote this... The ADS1256 does not have this function built in, but there's no reason you couldn't implement it manually in software. Essentially, you measure the input twice, but swap the input polarities between measurements. For example, you would measure AIN0/AIN1, and then measure AIN1/AIN0. Then you take measurement #1, subtract measurement #2, and divide by two. The result is an average of the two measurements, except that any ADC offset that did not change polarity between measurements gets removed.
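    A minimal sketch of that software chopping, assuming some `read_channel_pair(positive, negative)` function provided by your own ADS1256 driver (the function name is hypothetical):

```python
def chopped_read(read_channel_pair, pos, neg):
    """Measure a channel pair twice with swapped polarity and combine."""
    m1 = read_channel_pair(pos, neg)   # e.g. AIN0 (+) / AIN1 (-)
    m2 = read_channel_pair(neg, pos)   # swapped: AIN1 (+) / AIN0 (-)
    # m1 = +signal + offset and m2 = -signal + offset, so
    # (m1 - m2) / 2 recovers the signal with the static offset removed.
    return (m1 - m2) / 2.0
```

    Any offset that stays constant between the two conversions cancels out; the cost is halving your effective sample rate for that channel pair.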

     

    Fabio Oliveira said:
    And about the ADC self-offset calibration: by shorting, do you mean wiring the inputs to GND, measuring the remaining voltage over time (and across temperature changes), and then subtracting it in software so the reading reaches zero volts while the inputs are still shorted?

    Correct! It wouldn't necessarily have to be GND, but both inputs need to be connected to the same potential (i.e. shorted). You could even do this internally by configuring the ADS1256 to use the same input pin for both the positive and negative input channels.

     

    Fabio Oliveira said:
    About the gain drift calibration: I forgot to mention that I'm using two channels of the ADC to monitor the 5 V VCC of the magnetic sensor and the ADC's own VCC. Looking at these charts, the VCC variation tracks the temperature variation very closely, and I have all of those values logged over several days. So maybe it's possible to use them for the gain drift calibration of the ADS1256? (I'm assuming the other ADC channels, used for the magnetic sensors, will have a thermal drift similar to the two channels used for VCC monitoring.)

    Yes, if your sensor's output is directly proportional to its supply (or excitation) voltage, then you can directly scale the result with the supply voltage. In some cases this is done simply by using the sensor's excitation voltage as the ADC's reference voltage (in the case of the ADS1256, you can't have a 5 V reference, but you can certainly measure this voltage separately, as you are doing). This is called a "ratiometric" measurement, and it helps remove variations of the excitation voltage from the measurement result. I would definitely recommend performing that kind of measurement compensation!

     

    Best regards,
    Chris

  • Christopher Hall said:

    Hi Fábio,

    Fabio Oliveira
    So, based on all this, I'm treating that remote sensor as a reference. But, as you said, it's not a perfect reference, so I will try to do my best...

    Apart from being an accurate sensor, is it measuring the same signal or what other similarity with your other sensor does it have that would make it a reference?

    Yes... it's the same signal, from the Earth's geomagnetism. There are differences in amplitude, in the absolute values. But the daily percentage variations of the geomagnetic field are almost identical between two points 1500 km apart in our country, Brazil. Because of that, speaking of calibration, I think what matters most for us is not exactly the absolute values from the magnetic sensor, but that the variations measured by our equipment are at least close to the variations on the reference's chart, because in geology we study the cases where there are big differences between the variations at two relatively close locations, which could mean geomagnetic storms, for example, or another influence from the geological site.

    Fabio Oliveira
    About the chopping function: do you mean the ANALOG INPUT BUFFER of the ADS1256?

    I'm sorry, I was thinking about the ADS1262 when I wrote this... The ADS1256 does not have this function built in, but there's no reason you couldn't implement it manually in software. Essentially, you measure the input twice, but swap the input polarities between measurements. For example, you would measure AIN0/AIN1, and then measure AIN1/AIN0. Then you take measurement #1, subtract measurement #2, and divide by two. The result is an average of the two measurements, except that any ADC offset that did not change polarity between measurements gets removed.

    This method you described is interesting. Currently, I measure a sensor's OUT+ with AIN0 and its reference voltage with AIN1 (which is internally divided by 2 by the sensor itself). There are 3 sensors, so 6 channels are used. The last 2 channels are used to monitor the VCCs.
    So, to do this calibration, I guess I need to hook up a fixed, stable 5 V signal and measure it first on AIN0, then on AIN1, subtract the two values, and then divide by 2. Then I should add this result to each of the next measurements of these two channels to get the offset calibrated... is that right?

     

    Fabio Oliveira
    And about the ADC self-offset calibration: by shorting, do you mean wiring the inputs to GND, measuring the remaining voltage over time (and across temperature changes), and then subtracting it in software so the reading reaches zero volts while the inputs are still shorted?

    Correct! It wouldn't necessarily have to be GND, but both inputs need to be connected to the same potential (i.e. shorted). You could even do this internally by configuring the ADS1256 to use the same input pin for both the positive and negative input channels.

    The idea of configuring the ADS1256 to use the same input pin looks perfect for me, because maybe I can do it by connecting remotely over SSH to the equipment (it's a Raspberry Pi with the ADC), since it's currently installed on a kind of hill, with difficult access through nature. Can I find how to make this configuration in the datasheet? I took a look but still did not find it.

     

    Fabio Oliveira
    About the gain drift calibration: I forgot to mention that I'm using two channels of the ADC to monitor the 5 V VCC of the magnetic sensor and the ADC's own VCC. Looking at these charts, the VCC variation tracks the temperature variation very closely, and I have all of those values logged over several days. So maybe it's possible to use them for the gain drift calibration of the ADS1256? (I'm assuming the other ADC channels, used for the magnetic sensors, will have a thermal drift similar to the two channels used for VCC monitoring.)

    Yes, if your sensor's output is directly proportional to its supply (or excitation) voltage, then you can directly scale the result with the supply voltage. In some cases this is done simply by using the sensor's excitation voltage as the ADC's reference voltage (in the case of the ADS1256, you can't have a 5 V reference, but you can certainly measure this voltage separately, as you are doing). This is called a "ratiometric" measurement, and it helps remove variations of the excitation voltage from the measurement result. I would definitely recommend performing that kind of measurement compensation!

    Yes... the sensor output is perfectly linearly proportional to its supply voltage, as I already tested in the lab before installing the equipment on the hill. Chris, I really should thank you for all this info and the patience... I will search the net for how to do this ratiometric measurement/calibration...

     

    Best regards,
    Fabio

  • Hi Fábio,

    Fabio Oliveira said:
    Yes... it's the same signal, from the Earth's geomagnetism. There are differences in amplitude, in the absolute values. But the daily percentage variations of the geomagnetic field are almost identical between two points 1500 km apart in our country, Brazil. Because of that, speaking of calibration, I think what matters most for us is not exactly the absolute values from the magnetic sensor, but that the variations measured by our equipment are at least close to the variations on the reference's chart, because in geology we study the cases where there are big differences between the variations at two relatively close locations, which could mean geomagnetic storms, for example, or another influence from the geological site.

    Interesting! Since you're doing a relative comparison between two sensors, the (DC) offset is probably not as critical, but I would still be concerned about offset drift and gain errors (either static or over temperature), as those would tend to affect the relative measurements between the sensors. However, without being able to go to each of your sensors and apply a known magnetic field, you probably won't be able to calibrate each sensor independently. The ADC may be the only thing you are really able to calibrate, since you have access to it and can apply calibration signals locally.

    The problem I see with doing a relative calibration is that your sensors are in two different locations (possibly measuring two different signals), so unless you can say with some certainty that the magnetic field at both locations is precisely the same (at some point in time), then you can't use one sensor as a reference for the other....

    NOTE: You certainly wouldn't want to do a real-time correction of one sensor (based off the other) as this would have the effect of making your sensor always output the same result as your reference sensor, which defeats the purpose of trying to detect differences in sensor readings.

    ...If somehow you are able to make the determination that the two sensor readings ought to be the same at some point in time, then you would still have the problem of figuring out how much error is offset (independent of the input signal) and how much is gain error (proportional to the input signal). To solve that, you would probably need at least two different reference points in time at which the readings were significantly different in amplitude, yet you knew with certainty that both sensors were measuring the exact same signal.
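    If you did find two such points in time, extracting the two unknowns is a small algebra exercise. Modeling your sensor as reading = gain * reference + offset, a sketch could look like this (function names and numbers below are purely illustrative):

```python
def solve_offset_gain(r1, s1, r2, s2):
    """Solve reading = gain * reference + offset from two pairs.

    (r1, s1) and (r2, s2) are (your reading, reference reading) at two
    points in time where both sensors saw the same field but at
    significantly different amplitudes.
    """
    gain = (r1 - r2) / (s1 - s2)   # slope of the two-point line
    offset = r1 - gain * s1        # intercept
    return offset, gain

def correct(reading, offset, gain):
    """Map a raw reading back onto the reference scale."""
    return (reading - offset) / gain

# Example: a sensor with 2% gain error and 10 mV of offset.
offset, gain = solve_offset_gain(1.03, 1.0, 2.05, 2.0)
```

    The caveat in the NOTE above still applies: this is a one-time (or occasional) calibration, not a real-time correction against the reference.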

    Those are just my thoughts, but I hope they are of some value to your project.

     

    Fabio Oliveira said:
    The idea of configuring the ADS1256 to use the same input pin looks perfect for me, because maybe I can do it by connecting remotely over SSH to the equipment (it's a Raspberry Pi with the ADC), since it's currently installed on a kind of hill, with difficult access through nature. Can I find how to make this configuration in the datasheet? I took a look but still did not find it.

    All you would need to do is set the PSEL[3:0] and NSEL[3:0] bits (in the MUX register) to be the same value. For example, setting MUX = 0x88 would select AINCOM for both the positive and negative input (effectively shorting the inputs to AINCOM). It doesn't have to be AINCOM, but wherever you make this connection, make sure that the voltage on this pin is within the ADC's allowable input range.
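    For reference, the SPI byte sequence for that register write could be assembled like this (the WREG framing follows the ADS1256 command format: opcode plus register address, byte count minus one, then the data; actually clocking the bytes out depends on your Raspberry Pi SPI setup):

```python
CMD_WREG = 0x50   # WREG opcode: 0101rrrr, where rrrr is the register address
REG_MUX  = 0x01   # MUX register address

def wreg_bytes(register, values):
    """Build a WREG frame: opcode|address, count - 1, then data bytes."""
    return [CMD_WREG | register, len(values) - 1] + list(values)

def short_inputs_frame(pin_code=0x8):
    """Select the same pin (0x8 = AINCOM) as both the + and - input."""
    mux = (pin_code << 4) | pin_code   # PSEL[3:0] in the high nibble, NSEL[3:0] low
    return wreg_bytes(REG_MUX, [mux])

short_inputs_frame()   # -> [0x51, 0x00, 0x88]
```

    Passing a different `pin_code` selects another input pin for both sides; either way, keep the chosen pin's voltage within the ADC's allowable input range, as noted above.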

     

    Fabio Oliveira said:
    Yes... the sensor output is perfectly linearly proportional to its supply voltage, as I already tested in the lab before installing the equipment on the hill. Chris, I really should thank you for all this info and the patience... I will search the net for how to do this ratiometric measurement/calibration...

    Just a quick example: you said you use a 5 V supply for the sensor... You could first measure the sensor, then measure the sensor's supply (ideally you would measure both at the same time, but since you're multiplexing, you'd probably want to do them in close succession). If you measure the supply to be 5.1 V, then you want to scale the sensor's measurement by a factor of 5/5.1 to account for the supply variation in the sensor's result.
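    That scaling is essentially a one-liner; a sketch (the 5 V nominal value and the function name are just illustrative):

```python
NOMINAL_SUPPLY = 5.0   # volts; the sensor's nominal excitation voltage

def ratiometric_correct(sensor_volts, measured_supply_volts):
    # If the supply rose by 2%, the sensor output rose by 2% as well;
    # multiplying by nominal / measured undoes that variation.
    return sensor_volts * (NOMINAL_SUPPLY / measured_supply_volts)

ratiometric_correct(2.55, 5.1)   # -> 2.5 (supply was 2% high)
```

    The correction is only as good as the time alignment of the two measurements, which is why measuring the sensor and its supply in close succession matters.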

     

    Best regards,
    Chris