This thread has been locked.

High precision signal path with digital post correction

Hi to all,
I read the article on true continuous background calibration. My system is built entirely from discrete components: amplifiers, discrete-digital anti-aliasing filters, and an ADC at the heart of the analog-to-digital conversion. It requires outstanding linearity, offset, and gain performance, since it is intended for biomedical applications where the useful signal ranges from a couple of mV up to 100 mV. All of these components have some fabrication mismatch, which will definitely require an advanced calibration technique to achieve the necessary performance. The cutoff frequency is about 50 Hz (0-50 Hz).
Can you advise me on how to perform calibration? Do I need a very precise signal source? Should it be only static (DC), or will there be a need for dynamic training signals (in terms of frequency)? Should I use a look-up table or a mathematical approach (for linearity, gain, and offset) to post-correct the sampled data from the ADC? Can you help? Also, I can't make any changes to the hardware, so everything needs to be done with digital calibration, LUTs, or similar. I would really appreciate help, since I have never done this before.
  • Hi Max,

    Welcome to the E2E Community!

    Could you share the article you're referencing?

    Calibration takes many forms but is most often performed at room temperature with dc (static) inputs at two or three points. This allows the gain and offset errors to be determined; then a common y = mx + b (m = gain, b = offset) adjustment is made to remove the calculated gain and offset errors. More advanced systems perform this calibration at multiple temperature points and then use a local temperature sensor to apply different gain and offset factors based on the temperature. Calibrations that remove linearity errors with a full look-up table are rare because they require capturing an entire input sweep and storing that information somewhere for the calibration, which is memory intensive.
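    For illustration, the two-point dc calibration could be sketched like this (the input and reading values below are made up for the example, not from any particular device):

```python
# Two-point dc calibration sketch: apply two known dc inputs, read the
# ADC, solve y = m*x + b for the error model, then invert the fit to
# correct subsequent readings. All numbers here are hypothetical.

def fit_gain_offset(x1, y1, x2, y2):
    """Fit measured readings y to known inputs x as y = m*x + b."""
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

def correct(y, m, b):
    """Remove the fitted gain and offset errors from a raw reading."""
    return (y - b) / m

# Example: known inputs of 10 mV and 90 mV measured as 10.42 mV and 92.1 mV.
m, b = fit_gain_offset(0.010, 0.01042, 0.090, 0.0921)
corrected = correct(0.01042, m, b)   # recovers approximately 10 mV
```

    The same `correct()` function is then applied to every sample at run time, which is cheap enough for any processor.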

    Take a look at this TI Precision Labs training video for more information on calibrating gain and offset errors:

  • Hi Collin,

    Thanks.

    The article is linked at the bottom. It states at the beginning that linearity errors can be compensated using math. What concerns me most are the signal dynamics, i.e., errors introduced into the signal at different frequencies (in the biomedical range of 0-50 Hz) by the analog signal chain and the ADC. DC offset and gain, i.e., static errors, are not hard to compensate. I even have enough fast memory to store every single LSB in the form of a look-up table. On the other hand, for dynamic calibration with expensive equipment one can apply sine-wave histogram methods, phase-plane, state-space, or other approaches (see the other attachment) that I find hard, but maybe that is the only way to achieve the performance. They are all based on heavy math concepts and use a look-up-table approach to compensate for spurs and nonlinearity.

    Regards

    www.sensorsmag.com/.../true-continuous-background-calibration

    3583.adc post correction.pdf

  • Spectral purity, window sampling, nonlinear distortion? Anything else? What are the important variables to take care of in a biomedical data acquisition system, assuming static offset and gain are removed?
  • Marjan,

    Linearity errors that can be fit with a polynomial are often calibrated out, but random nonlinearities require a look-up table, which most people aren't willing to implement. If you have the processing power and the ability to individually calibrate each unit, then I agree this would provide a very accurate solution.

    I think you'll find that the gain/offset and linearity errors are relatively constant over the frequency range you're interested in, but a multi-frequency calibration would be even more robust if you're willing to implement it.
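    As a sketch, a polynomial-fit linearity calibration might look like this (the sweep values below are invented for illustration, not measurements from a real device):

```python
import numpy as np

# Sweep several known dc inputs, record the ADC readings, then fit the
# *inverse* mapping (reading -> true input) so it can be applied at run
# time. All values here are hypothetical.
known_inputs = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10])       # volts
adc_readings = np.array([0.001, 0.0208, 0.0409, 0.0611, 0.0815, 0.1021])

# Third-order fit of the true input as a function of the raw reading.
coeffs = np.polyfit(adc_readings, known_inputs, deg=3)

def linearize(raw):
    """Map a raw ADC reading back to the estimated true input."""
    return np.polyval(coeffs, raw)
```

    Four coefficients replace an entire per-code look-up table, which is why the polynomial approach is usually preferred when the nonlinearity is smooth.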
  • Hi Collin,

    Yes, I agree that the static parameters will stay constant in the frequency range of interest (0-50 Hz). However, it bothers me how to quantify the spectral purity of the system, the distortion level, and the noise, and how to correct for them. I am really lost, since I am quite sure that static calibration alone won't solve the problem, and I have no real experience or idea of how to proceed in this direction. There is also the window sampling method to reduce spectral leakage in the signal.
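    To make the windowing part concrete, here is roughly what I have in mind (a minimal sketch with a synthetic 17 Hz tone standing in for a real capture; all numbers are made up):

```python
import numpy as np

fs = 250.0                                  # sample rate, S/s
t = np.arange(4096) / fs
x = 0.015 * np.sin(2 * np.pi * 17.0 * t)    # stand-in for a captured record

# Apply a Hann window before the FFT to suppress spectral leakage,
# then normalize the magnitude spectrum to the fundamental in dB.
win = np.hanning(len(x))
spec = np.abs(np.fft.rfft(x * win))
spec_db = 20 * np.log10(spec / spec.max() + 1e-12)

peak_bin = int(np.argmax(spec))
print(f"fundamental near {peak_bin * fs / len(x):.2f} Hz")
```

    Spurs and distortion products would then show up as secondary peaks in `spec_db`, which is one way to put a number on spectral purity.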

    Collin, can we continue in private? I could use further expertise on my system.

    Best Regards

  • Max,

    These advanced topics go beyond what the majority of our customers implement and are therefore outside of my area of expertise. We can continue in private but I'm not sure how much help I'll be able to provide.
  • sbaa220.pdf

    Hi Guys,

    I resolved the problem regarding calibration. Apparently, since my frequency range is very low (cutoff at 60Hz), there is no need to worry about frequency-dependent nonidealities (dynamic INL, for example). I did, however, calibrate for the static nonidealities.

    I now have a new problem that deserves some explanation and correction. I would like to digitally compensate for signal degradation in the low-frequency range. You can see in the attachment that the square waves recorded in the time domain show certain overshoots that don't exist in the original square-wave signal (it is a 15 mV signal, sampled at 250 S/s). My opinion is that my analog signal chain introduces this because of nonlinear behaviour at low frequencies. I can't change the hardware, so a software intervention is needed. I also attached an application note from TI where a similar problem with square waves is discussed (TI app. note). I would need some help to improve on this.

    I wish you all Happy New 2019.

  • Hello,

    We're going to need a little more information about what you're doing to continue effectively supporting this thread. What ADC are you using, and can you share the schematic of the analog front-end? I agree with your assessment that the overshoot is likely coming from the AFE as a result of an under-damped system. Since you are not able to modify your AFE, other than the techniques described in the attached app note, I'm not sure what you'll be able to do about the overshoot other than trying to "blank" it out if you know it's coming.
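    For what it's worth, a crude blanking scheme could be sketched like this (the threshold and window length are placeholders you would tune to your system's settling behavior):

```python
# "Blanking" sketch: after each detected large step, replace the next
# blank_len samples (where the AFE overshoot rings) with the settled
# post-edge value. Threshold and window length are hypothetical.

def blank_overshoot(samples, edge_threshold, blank_len):
    """Suppress overshoot by holding the settled value for blank_len
    samples after each detected edge."""
    out = list(samples)
    i = 1
    while i < len(out):
        if abs(out[i] - out[i - 1]) > edge_threshold:
            # Use the first sample after the blanking window as the
            # settled value (fall back to the last sample at the end).
            hold = out[i + blank_len] if i + blank_len < len(out) else out[-1]
            for j in range(i, min(i + blank_len, len(out))):
                out[j] = hold
            i += blank_len
        else:
            i += 1
    return out

# Example: a 0 -> 15 mV step that overshoots to 18 mV before settling.
cleaned = blank_overshoot([0, 0, 0, 15, 18, 16, 15, 15, 15],
                          edge_threshold=5, blank_len=3)
```

    This only hides the artifact rather than removing its cause, so it is best reserved for cases like yours where the transition timing is known or easily detected.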