Hi Max,
Welcome to the E2E Community!
Could you share the article you're referencing?
Calibration takes many forms, but it is most often performed at room temperature with DC (static) inputs at two or three points. This allows the gain and offset errors to be determined, and a common y = mx + b (m = gain, b = offset) adjustment is then applied to remove them. More advanced systems perform this calibration at multiple temperature points and then use a local temperature sensor to apply different gain and offset factors depending on the temperature. Calibrations that remove linearity errors with a full look-up table are rare, because they require capturing an entire input sweep and storing that information somewhere for the calibration, which is memory-intensive.
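If it helps, here is a minimal sketch of that y = mx + b correction; the voltages, raw codes, and the 16-bit / 2.5 V reference are made-up example values, not from any specific device:

```python
import numpy as np

# Two-point (zero-scale / full-scale) calibration sketch.
# Hypothetical numbers: code_low/code_high are averaged raw ADC codes
# captured while applying known DC inputs v_low and v_high.
v_low, v_high = 0.0, 2.5                 # known reference voltages applied
code_low, code_high = 132.0, 65210.0     # averaged raw codes (example values)

# Ideal transfer for an assumed 16-bit ADC with a 2.5 V reference
ideal_low = v_low / 2.5 * 65535.0
ideal_high = v_high / 2.5 * 65535.0

# Solve y = m*x + b so the two measured points land on the ideal ones,
# which removes gain and offset error together.
m = (ideal_high - ideal_low) / (code_high - code_low)
b = ideal_low - m * code_low

def correct(raw_code):
    """Apply the stored gain/offset correction to a raw ADC code."""
    return m * raw_code + b

print(correct(code_low), correct(code_high))   # ~0 and ~65535
```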
Take a look at this TI Precision Labs training video for more information on calibrating gain and offset errors:
Hi Collin,
Thanks.
The article is provided at the bottom. In that article it is stated at the beginning that linearity errors can be compensated using math. What concerns me most are the signal dynamics and the errors introduced into the signal at different frequencies (in the biomedical range, 0-50 Hz) by the analog signal chain and the ADC. DC offset and gain (static) errors are not hard to compensate; I even have enough fast memory to store a correction for every single LSB in the form of a look-up table. On the other hand, for dynamic calibration with expensive equipment one can apply sine-wave histogram methods, phase-plane, state-space or other approaches (see the other attachment), which I find hard, but maybe that is the only way to achieve the performance. They are all based on some heavy math concepts, and they use a look-up-table approach to compensate for spurs and nonlinearity.
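Just to show what I mean by the sine-wave histogram approach, here is a rough sketch of my understanding of the standard test (the function name is mine, it is untested on my setup, and it assumes a full-scale, slightly over-ranged sine test tone and unsigned integer output codes):

```python
import numpy as np

def sine_histogram_inl(codes, n_bits=16):
    """Estimate INL (in LSB) from a code histogram of a full-scale sine input.

    Sine-wave histogram idea: the cumulative histogram of a long capture of a
    sine tone is mapped through -cos(pi * fraction) to recover the code
    transition levels, then gain/offset are removed with a best-fit line.
    In practice the record needs many samples per code to keep the noise down.
    """
    n = 1 << n_bits
    hist = np.bincount(np.asarray(codes), minlength=n).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)[:-1] / total        # fraction of samples below each transition
    transitions = -np.cos(np.pi * cum)        # recovered transition levels (arbitrary scale)
    idx = np.arange(n - 1)
    fit = np.polyfit(idx, transitions, 1)     # best-fit line = ideal transfer
    ideal = np.polyval(fit, idx)
    lsb = fit[0]                              # slope = 1 LSB in the recovered scale
    inl = (transitions - ideal) / lsb
    return inl                                # one entry per code transition, in LSB
```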
Regards
www.sensorsmag.com/.../true-continuous-background-calibration
Hi Collin,
Yes, I can agree that the static parameters will stay constant in the frequency range of interest (0-50 Hz). However, it bothers me how to quantify the spectral purity of the system, the distortion level and the noise, and how to correct for them. I am really lost, since I am quite sure that static calibration won't solve the problem, but on the other hand I have no real experience or idea of how to actually go in this direction. There is also the windowed-sampling method to reduce spectral leakage in the signal, as in the sketch below.
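For the quantification part, this is roughly what I had in mind: a windowed-FFT estimate of SNR and THD from a recorded test tone. The function name, window choice and bin widths are my own assumptions, so please correct me if this is the wrong direction:

```python
import numpy as np

def spectral_metrics(samples, fs, f_test, n_harmonics=5):
    """Rough SNR/THD estimate from a windowed FFT of a single-tone capture.

    Assumes `samples` is a record of one sine test tone at f_test Hz,
    sampled at fs. A Hann window is applied to reduce spectral leakage.
    """
    n = len(samples)
    win = np.hanning(n)
    spec = np.abs(np.fft.rfft(samples * win)) ** 2   # power spectrum

    def bin_power(f, width=3):
        # Sum the power in a few bins around frequency f (window spreads the tone)
        k = int(round(f / fs * n))
        lo, hi = max(k - width, 0), min(k + width + 1, len(spec))
        return spec[lo:hi].sum()

    p_signal = bin_power(f_test)
    p_harm = sum(bin_power(h * f_test) for h in range(2, n_harmonics + 1))
    p_total = spec[1:].sum()                          # crude: excludes only the DC bin
    p_noise = p_total - p_signal - p_harm

    snr_db = 10 * np.log10(p_signal / p_noise)
    thd_db = 10 * np.log10(p_harm / p_signal)
    return snr_db, thd_db
```

The idea would be to run this on a capture of a single test tone and compare the numbers before and after any correction is applied.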
Collin, can we continue in private? I could use some further expertise on my system.
Best Regards
sbaa220.pdf

Hi Guys,
I resolved the problem regarding calibration. Apparently, in my case, since the frequency is very low (cutoff at 60 Hz), there is no need to take care of frequency-dependent nonidealities (INL, for example). I did, however, calibrate for the static nonidealities.
I now have a new problem that deserves some explanation and correction. I would like to digitally compensate for signal degradation in the low-frequency range. You can see in the attachment that square waves were recorded in the time domain, and there are certain overshoots that don't exist in the original square-wave signal (it is a 15 mV signal, sampled at 250 S/s). My opinion is that my analog signal chain introduced this because of its nonlinear behaviour at low frequencies. I can't change the hardware, so a software intervention should be performed. I also attached an application note from TI where a similar problem with square waves was discussed (TI app. note). I would need some help to improve on this.
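One direction I was considering, just as a rough sketch and not a working solution: design a short FIR equalizer from a recorded step response by least-squares deconvolution and run the samples through it. The function name, filter length, delay and the variable names (step_rec, raw_samples) are placeholders of mine:

```python
import numpy as np

def design_equalizer(step_response, fir_len=31, delay=15):
    """Design a short FIR g so that the measured impulse response convolved
    with g approximates a pure delay (least-squares deconvolution)."""
    # Impulse response estimate from one clean step edge; in practice this
    # should be averaged over many edges, since diff() amplifies noise.
    h = np.diff(np.asarray(step_response, dtype=float))
    n_out = len(h) + fir_len - 1
    # Convolution matrix: column j is h delayed by j samples, so H @ g = h * g
    H = np.zeros((n_out, fir_len))
    for j in range(fir_len):
        H[j:j + len(h), j] = h
    target = np.zeros(n_out)
    target[delay] = 1.0                      # desired overall response: pure delay
    g, *_ = np.linalg.lstsq(H, target, rcond=None)
    return g

# Hypothetical usage: step_rec is one rising edge cut out of the recorded
# square wave, raw_samples is the full record.
# g = design_equalizer(step_rec)
# corrected = np.convolve(raw_samples, g, mode="same")
```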
I wish you all a Happy New Year 2019.