Clock line optimization and ENOBs (ADC16DV160)

Other Parts Discussed in Thread: ADC16DV160, ADC16DV160HFEB

Dear sirs,

We understand that the ENOB [dBFS] decreases with increasing Vpp of the input sine wave.

We tested different clocking devices (with different jitter specifications), different clock waveforms (square wave vs. sine wave), and different clock-line filtering methods (inspired by page 15 of TI’s SLAA510 document).

We see ENOB improvements when the different scenarios are applied.

But here comes the question: with decreasing Vpp the differences vanish and all curves converge towards 12.6 ENOB (as Vpp approaches 0). Please have a look at [1]. We had expected an improvement over the complete Vpp range.

I assume the clock-line optimization is worth the few cents, but does this mean that the ENOB of the ADC will never exceed 12.6? That is the number we derive from the datasheet (Fig. 10 of the ADC16DV160 datasheet) and which we see in our tests, using WaveVision and extrapolation.

BTW: we use the ADC16DV160HFEB evaluation platform.

Also, do you recommend a specific oscillator for the ADC16DV160?

Thank you for your answer in advance,

Best Regards, Florian

[1] hll.mpg.de/.../enob_vs_vpp.png

  • Florian,

    When the input signal is reduced to low amplitudes, the SNR at the output of the ADC is limited by the inherent noise of the ADC itself. Phase noise in the spectrum (due to clock jitter) scales with signal amplitude, so at low amplitudes clock jitter does not contribute significantly to the total noise and the choice of clocking scheme barely matters (hence all schemes converge, as you observed). At low amplitude, the noise performance of this ADC is limited entirely by its thermal noise, which limits the SNR and SINAD to ~78 dBFS and translates to 12.7 bits.

    Regards, Josh

  • Josh,

    thank you again for your very helpful answer.

    In our application (counting the number of electrons from a solar CCD device), we have a signal that changes at 20 MHz and remains “stable” within each period. By oversampling at 160 MHz and using FPGA logic, we select 4 suitable samples [1]. We were hoping to gain another ENOB by this method of oversampling.
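    As a quick sanity check of the idea (with a purely hypothetical noise level, not our measured one), averaging 4 uncorrelated samples of a constant level should reduce the RMS noise by a factor of sqrt(4) = 2, i.e. about 6 dB of SNR:

    ```python
    import random
    import statistics

    # Hypothetical numbers, purely illustrative -- not measured values.
    random.seed(0)
    SIGNAL = 0.5        # constant CCD level during one 20 MHz period
    NOISE_RMS = 0.01    # assumed ADC input-referred noise (RMS)
    N_PERIODS = 20000

    single = []    # one raw sample per period
    averaged = []  # mean of 4 samples per period
    for _ in range(N_PERIODS):
        samples = [SIGNAL + random.gauss(0, NOISE_RMS) for _ in range(4)]
        single.append(samples[0])
        averaged.append(sum(samples) / 4)

    # Averaging 4 uncorrelated samples halves the RMS noise,
    # i.e. 10*log10(4) ~= 6 dB of SNR improvement.
    print(statistics.stdev(single))    # ~0.010
    print(statistics.stdev(averaged))  # ~0.005
    ```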

    Is there anything we can do to be better than 12.7 ENOB? Do you recommend switching to a 25 MHz ADC with 16 ENOB in our case?

    If we average 4 samples of a constant signal, can we still call it “ADC oversampling”, or is it “ADC interleaving” (as nicely outlined in TI’s SLAA510)? I assume we cannot use the “ADC oversampling” formula, because it is only valid for a dynamic signal, not a constant one. In our case, we would unknowingly be using “ADC interleaving”, although it is the same device. So we would only add a theoretical 3 dB improvement to the average noise floor, and the ENOB improvement remains questionable. Please correct me if I’m wrong.

    Thank you for your answer in advance,

    Best Regards, Florian

    [1] hll.mpg.de/.../constant_signal_scenario.png
  • Florian,

    Oversampling should work in your application to improve the SNR of your desired signal, because it allows you to average more samples. The SNR of your measurement after sample averaging should be roughly (6.02*12.7 + 1.76) + 10*log10(N) dB, where N is the number of averaged points. Interleaving amounts to the same thing as adding more oversampling. You can also have multiple converters sample coherently at the same time and average the data in the digital domain to improve the overall noise performance.
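    Plugging numbers into that formula (a minimal sketch; the 12.7-bit baseline is the thermal-noise limit discussed above, and N = 4 is the averaging factor from your scheme):

    ```python
    import math

    def snr_after_averaging_db(enob, n):
        """Ideal SNR (dBFS) after averaging n uncorrelated samples."""
        return 6.02 * enob + 1.76 + 10 * math.log10(n)

    def enob_from_snr(snr_db):
        """Invert the standard SNR/ENOB relation."""
        return (snr_db - 1.76) / 6.02

    snr = snr_after_averaging_db(12.7, 4)
    print(f"SNR  = {snr:.1f} dBFS")            # ~84.2 dBFS
    print(f"ENOB = {enob_from_snr(snr):.1f}")  # ~13.7 bits
    ```

    So averaging 4 samples buys roughly 6 dB of SNR, i.e. about one extra effective bit over the 12.7-bit baseline.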

    Regards, Josh

  • Josh,

    sorry for the late reply, I've been on vacation.

    Thank you so much for your thoughtful and very helpful answer again. You saved us a lot of wasted time.

    Best Regards, Florian