ADC14155QML-SP: Offset & calibration

Part Number: ADC14155QML-SP

For DC input, I see that the offset (when Vin+ and Vin- are shorted) is specified at a typical -0.1%FS (-60dBFS), but it could be as much as -0.9%FS (-41dBFS, ~6.5 bits ENOB).

Is this correct (effectively 6.5 ENOB worst case at DC)?

Do we have methods to calibrate this out other than calibration in the processor that monitors the ADC?

  • I don't see in the datasheet where offset calibration is available on this device, so any means of mitigating DC offset would have to come from post-processing.

    For your calculations, I believe it should be 20*log(1-0.009) = -0.07853dBFS, which is a much smaller change in ENOB.  The 0.9%FS spec equals 18mV, so the digital code will be wrong by 18mV from the actual analog input signal, worst case.  The electrical characteristics table shows the worst-case SNR and ENOB, which include offset impairments and temperature effects.

    Thanks

    Christian

  • Thanks, Christian - would you mind walking me through the 20*log(1-0.009) calculation (and how it relates to my low-ENOB estimate)?

  • The max offset voltage is 0.9% of full scale, or 18mV.  If a 2V full-scale signal is input to the device, the input signal is backed off from full scale by 0dBFS.  If an 18mV offset is present, then even though the input signal is 2V, the digital code reflects a signal that is smaller by 18mV, or backed off from full scale by 20*log[(2-0.018)/2] = -0.078dBFS.  The offset error is taken relative to a full-scale signal; it is not itself treated as an absolute signal level (i.e., not 20*log(0.018/2) = -41dBFS).

    Thanks

    Christian
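
    To make the dB arithmetic above concrete, here is a minimal Python check (the 2V full-scale range and 18mV worst-case offset are the figures from this thread; everything else is just the dB math):

        import math

        fs = 2.0             # full-scale input, volts (figure used in this thread)
        offset = 0.009 * fs  # worst-case offset, 0.9% of full scale = 18 mV

        # Correct view: the offset shifts a full-scale signal by 18 mV,
        # so the error relative to full scale is small.
        error_dbfs = 20 * math.log10((fs - offset) / fs)      # ~ -0.078 dBFS

        # Incorrect view: treating the 18 mV offset as if it were a signal level.
        offset_as_level_dbfs = 20 * math.log10(offset / fs)   # ~ -41 dBFS

        print(f"offset = {offset * 1e3:.1f} mV")
        print(f"error relative to full scale = {error_dbfs:.3f} dBFS")
        print(f"offset expressed as a level  = {offset_as_level_dbfs:.1f} dBFS")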

  • We are looking for relative step changes of around 0.4%, for a DC signal sitting at about midway between 0V and the maximum differential input range. Will we be able to detect this change, and how many bits will it correspond to?

    Thank you.

  • A step change of 0.4% of FSR is 8mV.  Using the worst-case ENOB numbers in the datasheet, which are spec'd across the full temperature range, ENOB is 10.7 bits (SNR = 66.7dBFS) with a 70MHz input signal.  Since SNR and ENOB appear to degrade somewhat as the input signal frequency of interest gets lower, I will use a worst-case SNR of 65.6dBFS, which equates to an ENOB of 10.6 bits.  With 10.6 bits of resolution the ADC has an effective LSB of 1.28mV instead of the 122uV it would have as a true 14-bit converter.  So the ADC should be able to detect an 8mV shift with 1.28mV resolution, worst case.

    Thanks

    Christian
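
    The resolution argument can be checked the same way (a minimal Python sketch; the 65.6dBFS worst-case SNR, 2V full-scale range, and 0.4% step are the figures used above, and the standard ENOB = (SNR - 1.76)/6.02 relation is assumed):

        fs = 2.0                          # full-scale input, volts
        snr_dbfs = 65.6                   # worst-case SNR assumed above
        enob = (snr_dbfs - 1.76) / 6.02   # ~10.6 effective bits

        lsb_ideal = fs / 2**14            # ~122 uV for an ideal 14-bit converter
        lsb_eff = fs / 2**enob            # ~1.28 mV effective LSB at 10.6 ENOB

        step = 0.004 * fs                 # 0.4% of FSR = 8 mV step to detect
        print(f"ENOB = {enob:.1f} bits")
        print(f"ideal LSB = {lsb_ideal * 1e6:.0f} uV, effective LSB = {lsb_eff * 1e3:.2f} mV")
        print(f"8 mV step spans about {step / lsb_eff:.1f} effective LSBs")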