
ADS1278: Gain and offset calibration strategy for ADS1278 with THS4521 front-end

Part Number: ADS1278
Other Parts Discussed in Thread: THS4521

Hello,

I am using several ADS1278 converters with THS4521 front-end differential amplifiers for precision differential voltage measurements. I acquire the signals with a microcontroller and send them over Ethernet to a PC running a LabVIEW application. I would like to define a gain and offset calibration strategy.

I've found some guidance on page 5 of the following TI training document:

https://training.ti.com/sites/default/files/docs/adcs-understanding-and-calibrating-the-offset-and-gain-for-adc-systems-presentation-quiz.pdf

Would you suggest the same strategy for my setup?

The full-scale input at the THS4521 is 20Vpp (+/-10V), which translates to 4.63Vpp at the differential inputs of the ADS1278. I am wondering how to translate the method on slide 5 of the training to my application. Should I choose the -10V point as the first calibration point and the +10V point as the second, or should I use 0V and +10V, for instance?
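As background for the discussion below, the signal-chain scaling can be sketched in a few lines. This is only an illustration: it assumes VREF = 2.5V on the ADS1278 (so the differential full-scale range is +/-2.5V) and uses the stated 20Vpp-to-4.63Vpp attenuation (front-end gain of 4.63/20 = 0.2315); adjust both constants to your actual hardware.

```python
def code_to_volts(code, vref=2.5, fe_gain=4.63 / 20.0):
    """Convert a raw 24-bit two's-complement ADS1278 code to the
    voltage at the THS4521 input (hypothetical scaling constants)."""
    # Sign-extend the 24-bit two's-complement code
    if code & 0x800000:
        code -= 1 << 24
    # Differential voltage seen at the ADC inputs
    v_adc = code / (1 << 23) * vref
    # Refer it back through the front-end attenuation
    return v_adc / fe_gain


# Example: mid-scale positive code
print(code_to_volts(1 << 22))  # about +5.4 V at the THS4521 input
```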

Thanks for your support,

Javier

  • Hello Javier,

    Yes, the procedure you linked to will work well to calibrate gain and offset.

    For bipolar inputs, you have a few options on how to calibrate.

    1. Calibrate at +/- full scale, or in your case, at -10V and +10V.  This will cover the entire range, but you will likely still have some error at 0V due to nonlinearity and to the accuracy of the reference source at +10V and -10V.

    2.  Calibrate at 0V and +10V.  You will get a very accurate measurement at 0V and +10V, but you may see increased error at -10V due to nonlinearity.  The 0V calibration point can be obtained by shorting the inputs, which provides better accuracy than is possible with a reference voltage source.

    3.  Calibrate at -10V, 0V, and +10V.  This will provide the best accuracy over the entire input range, as well as a very accurate level at 0V.  However, this is more complex, and requires keeping two sets of calibration coefficients: one set of slope/offset coefficients for the negative range and another set for the positive range.  This approach will also reduce linearity errors since you have 3 points of calibration.
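    For the two-point options (1 and 2), the correction is a single slope and offset computed from the two calibration points. A minimal sketch, with hypothetical measured values (a real system would average many readings at each point):

```python
def two_point_cal(meas_lo, meas_hi, ideal_lo, ideal_hi):
    """Compute gain (slope) and offset from two calibration points:
    (meas_lo -> ideal_lo) and (meas_hi -> ideal_hi)."""
    gain = (ideal_hi - ideal_lo) / (meas_hi - meas_lo)
    offset = ideal_lo - gain * meas_lo
    return gain, offset


def apply_cal(x, gain, offset):
    """Apply the linear correction to a raw measurement."""
    return gain * x + offset


# Hypothetical example: system reads -9.98 V and +10.02 V when
# -10 V and +10 V references are applied (option 1).
gain, offset = two_point_cal(-9.98, 10.02, -10.0, 10.0)
print(apply_cal(-9.98, gain, offset))  # corrected back to -10.0
```

    For option 2, the same function is used with 0V (shorted inputs) as the low point: `two_point_cal(meas_zero, meas_hi, 0.0, 10.0)`.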

    In general, if you do not need the best possible accuracy at 0V, then I would typically choose the first option: calibrate at -10V and +10V.  If accuracy at 0V is more important than full-scale errors, then option 2 is the better choice.  However, if you want the best possible accuracy after calibration over the entire full-scale range, then option 3 is probably the best approach.
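    The three-point option (3) can be sketched as two linear segments that share the 0V calibration point, selecting the segment at run time by the sign of the reading. The measured values below are hypothetical placeholders:

```python
def segment_coeffs(m1, i1, m2, i2):
    """Slope/offset for one segment mapping measured -> ideal."""
    gain = (i2 - i1) / (m2 - m1)
    return gain, i1 - gain * m1


def make_piecewise_cal(meas_neg, meas_zero, meas_pos,
                       ideal_neg=-10.0, ideal_pos=10.0):
    """Build a two-segment correction from -FS, 0 V, and +FS points."""
    neg = segment_coeffs(meas_neg, ideal_neg, meas_zero, 0.0)
    pos = segment_coeffs(meas_zero, 0.0, meas_pos, ideal_pos)

    def correct(x):
        # Pick the coefficient set for the half of the range x falls in
        gain, offset = neg if x < meas_zero else pos
        return gain * x + offset

    return correct


# Hypothetical raw readings at the three calibration points
correct = make_piecewise_cal(meas_neg=-10.05, meas_zero=0.01, meas_pos=9.97)
print(correct(9.97))  # corrected back to +10.0
```

    In a LabVIEW host application the equivalent would be a case structure selecting between the two slope/offset pairs, with the coefficients stored per channel.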

    Regards,
    Keith Nicholas
    Precision ADC Applications