
ADC3660: Latency and Propagation Delays

Part Number: ADC3660

We are using the ADC3660 in one of our new designs, and we are having difficulty calculating and specifying the latency and propagation delays through the device.

We are driving DCLKIN and CLKP with synchronized 40.96 MHz clock signals. We are configuring the device for 16-bit data output, complex x16 decimation, and SDR output clocking. If I understand the datasheet correctly, the ADC3660 should sample the input signal on 16 consecutive clock edges and decimate those samples into a single output value; with SDR output clocking, a single 16-bit output value requires 16 DCLK cycles (aligned with FCLK). Therefore, since DCLKIN and CLKP are synchronous and running at the same 40.96 MHz frequency, we should be sampling and decimating at 2.56 Msamples/second and outputting complete 16-bit data values at the same rate. Oscilloscope measurements seem to confirm this throughput.
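The clock arithmetic above can be sketched as follows (illustrative Python, not a TI-provided calculation; the names CLKP/DCLKIN come from this post):

```python
# Clock/throughput arithmetic for the configuration described above.
CLKP_HZ = 40.96e6       # sampling clock frequency
DCLKIN_HZ = 40.96e6     # data clock input, synchronous with CLKP
DECIMATION = 16         # complex x16 decimation
BITS_PER_WORD = 16      # 16-bit output; SDR clocking moves one bit per DCLK cycle

# Rate at which decimated samples are produced by the converter core.
decimated_rate = CLKP_HZ / DECIMATION

# Rate at which complete 16-bit output words can be clocked out.
output_word_rate = DCLKIN_HZ / BITS_PER_WORD

print(decimated_rate)     # 2560000.0 -> 2.56 Msps
print(output_word_rate)   # 2560000.0 -> the two rates match, so no backlog builds up
```

Because the two rates are equal, the output interface keeps pace with the decimator, consistent with the oscilloscope measurements.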

Unfortunately, we are unable to accurately determine the delay between the time when the input signal is sampled and the time when the corresponding decimated output value is clocked out of the ADC. If Samples N through N+15 are being captured and decimated into the output value DA[15:0], how do we determine the consistent delay between the CLKP edge that captures Sample N and the DCLK edge that propagates DA[15]?

  • Hi Sean,

    Your calculations for the clock rates are correct, and I would expect that the data/clock outputs are in a good state.

    In regard to your question about clock-cycle latency, I would expect the latency from the rising edge of CLKP to the corresponding sample on DA[15] to be 24 clock cycles. In terms of time, we can multiply the sampling period (40.96 MHz / 16 = 2.56 MHz, i.e. a period of 390.625 ns) by the latency in cycles: 390.625 ns * 24 = 9.375 us.

    Let's follow up over email to discuss options for determining your system's latency.

    Best Regards,


  • Based on the latest datasheet, I was expecting a latency of approximately 24 clock cycles. But for a clock running at 40.96 MHz, wouldn't the latency then be only 586 ns? (1/40.96 MHz) * 24 = 586 ns. We have tried to measure the latency using an oscilloscope, and can't correlate the input signal with the output data after a 586 ns delay.

    The 9.375 us delay that you calculated seems more in line with what we have seen experimentally. Is it possible that the term "clock cycles" used in the latency calculation does not actually refer to the SAMPLING CLOCK period, but rather the SAMPLING period?

  • One more thing....I double-checked our ADC3660 configuration settings and we are using REAL decimation, not COMPLEX decimation. Does this difference impact the latency through the device?

  • Dan,

    Thanks for your quick response. I outlined my remaining concerns in 2 other replies.


  • Hi Sean,

    I am verifying with our design team regarding which sampling period (clock-cycle unit) to use when calculating the latency with decimation.

    The latency should be the same for real or complex decimation at a 16x decimation factor.



  • Thank you, Dan.

    Something else just occurred to me...if the correct latency is 24 sampling CLOCK cycles (40.96 MHz), and it takes only 16 data clocks (also running at 40.96 MHz) to output the 16-bit data for each decimated sample, then one of two things would have to happen:

    1) The data buffer/FIFO within the ADC will eventually overflow, or...

    2) There must be a "gap" equal to 8 data clock cycles between valid 16-bit output values

    Does this assessment sound correct?

  • Hi Sean,

    The clock-cycle unit for the latency must be based on the sampling rate (40.96 MHz) divided by the decimation factor (16): 40.96 MHz / 16 = 2.56 MHz, which corresponds to a period of 390.625 ns. Otherwise, there would be an overflow, as you mentioned. There should not be any gaps in the sample data being output by the ADC (yes to the latency, though).

    If you probe the frame clock (FCLK) signal in your configuration, you should see that its frequency is 2.56 MHz. Since we need to move 16 bits per frame, we know that the DCLK rate is 2.56 MHz * 16 = 40.96 MHz. The data, frame clock, and DCLK are output continuously.

    The latency is due to the time required to digitally decimate and low-pass filter the sample data.

    Best Regards,


  • Your analysis of the FCLK and DCLK frequency is correct. But I'm still confused as to where this leaves us regarding the data latency. We originally agreed that the latency should be 24 clock cycles, but we were unsure as to the "definition" of "clock cycle". Now, it sounds like you are confirming that 24.4 ns is the correct "clock cycle" value (1/40.96 MHz), but we should only see a latency of 16 clock cycles, not 24?

  • Hi Sean,

    The correct clock cycle is that of the effective sample clock, not the DCLK. Its rate is 40.96 MHz divided by the decimation factor (16), which is 2.56 MHz; its period, 1/2.56 MHz = 390.625 ns, should be used to calculate the latency from when the analog input is sampled (clock edge) to when that sample is present on the digital outputs.

    The latency is fixed by the number of clock cycles (24, per the datasheet, when using 16x real decimation), so the latency is 390.625 ns times the number of sample clock cycles (24): 390.625 ns * 24 = 9.375 us.

    I'm not exactly sure where the 16 clock cycles are coming from. During one sample clock cycle (2.56 MHz), 16 data bits are transmitted, but this transmission does not impose any additional latency, since it utilizes the higher-frequency DCLK (40.96 MHz). In any case, all of the data bits MUST be transmitted within one period of the sample clock.
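    As a quick sanity check, the latency arithmetic can be written out as follows (illustrative Python; the 24-cycle figure is the datasheet value for 16x decimation discussed in this thread):

```python
# Latency arithmetic for the ADC3660 at 16x decimation.
CLKP_HZ = 40.96e6        # converter sampling clock
DECIMATION = 16          # 16x (real) decimation
LATENCY_CYCLES = 24      # datasheet latency, in decimated-clock cycles

decimated_hz = CLKP_HZ / DECIMATION      # 2.56 MHz effective sample clock
period_s = 1.0 / decimated_hz            # ~390.625 ns per decimated sample
latency_s = LATENCY_CYCLES * period_s    # ~9.375 us total latency

print(period_s * 1e9)    # ~390.625 (ns)
print(latency_s * 1e6)   # ~9.375 (us)
```

    Note that the DCLK rate never enters the calculation; only the decimated (effective) sample clock determines the latency.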

    Hope that helps.

    Best Regards,


  • OK, so it sounds like we are in agreement, we may just be using slightly different terminology lol....

    In our particular design, we are running both CLKP (what I've been calling the sample clock) and DCLKIN at 40.96 MHz, which is why I sometimes use the two terms interchangeably...the x16 oversampling (decimation) is "countered" by the 16 DCLK cycles it takes to pump out the 16 bits per sample.

    The important takeaway from this conversation is that the "sampling clock cycle" referred to in the datasheet for calculating latency is actually the period required to capture the 16 raw samples that are decimated into one output value, which is 16 * (1/40.96 MHz) = 390.625 ns. The latency is equal to 24 of these cycles, or 24 * 390.625 ns = 9.375 us.

    If we are in agreement on this summary, then I think we can call this issue resolved.