
ADC32RF45: DDC accuracy

Part Number: ADC32RF45

In analyzing the DDC down-converted data from the ADC32RF45, we found that the I and Q of each sample do not seem to satisfy I = A·cos(θ) and Q = A·sin(θ) to high accuracy; not to 16 bits, but more like 12 to 13 bits. The measurement used a phase-locked input signal, and I was correlated against Q to remove the effect of A. Our application aims to measure the amplitude A from I/Q to the highest possible accuracy, so the lost bits concern us. Our guess is that the computation of cos() and sin(), and perhaps the multiplication, introduces truncation errors that are visible in the delivered 16-bit I/Q data. Is it possible to learn more about how this part of the digital algorithm is implemented, so that we can model it and understand our observations?

Thanks.
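To frame the question, here is a minimal sketch of the kind of fixed-point DDC model we have in mind: an NCO cos/sin pair quantized to a finite table width, with the mixer products truncated back to 16 bits. All bit widths and frequencies here are guesses for illustration, not the actual ADC32RF45 implementation.

```python
import numpy as np

# Illustrative model only: the real ADC32RF45 DDC internals are not
# public. Assumed widths: 14-bit ADC samples, 16-bit NCO cos/sin
# values, mixer products floor-truncated to 16-bit I/Q.
ADC_BITS, NCO_BITS, OUT_BITS = 14, 16, 16
fs, f_in, f_nco, n = 3.0e9, 1.21e9, 1.20e9, 1 << 14

t = np.arange(n) / fs
x = np.round((2**(ADC_BITS - 1) - 1) * np.cos(2 * np.pi * f_in * t))

scale = 2**(NCO_BITS - 1) - 1
ph = 2 * np.pi * f_nco * t
lo_c = np.round(scale * np.cos(ph))          # quantized NCO cosine
lo_s = np.round(scale * np.sin(ph))          # quantized NCO sine

shift = ADC_BITS + NCO_BITS - 1 - OUT_BITS   # LSBs dropped after multiply
i_q = np.floor(x * lo_c / 2**shift)          # quantized-path I
q_q = np.floor(-x * lo_s / 2**shift)         # quantized-path Q
i_f = x * scale * np.cos(ph) / 2**shift      # full-precision reference
q_f = -x * scale * np.sin(ph) / 2**shift

err = np.hypot(i_q - i_f, q_q - q_f)         # combined error in output LSBs
print(f"worst-case I/Q error ~ {err.max():.2f} LSB "
      f"(~{OUT_BITS - np.log2(err.max()):.1f} effective bits)")
```

With these assumed widths, the NCO quantization and product truncation alone cost only a couple of LSBs, i.e. they would not by themselves explain a 12- to 13-bit floor; that is part of why we would like to know the real widths and rounding modes.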

  • Yuan:

    The actual converter is a 14-bit ADC. During the complex mixing and decimation, the data is transformed into a 16-bit word, generally carried as two octets in the JESD data stream. As such, I would not expect better than 14-bit resolution. There is also some degradation due to jitter and noise; from an SNR perspective, these parameters limit performance such that the ENOB hovers around 10 bits. The decimation filter also limits the decimation bandwidth, and depending on the signal's frequency offset and bandwidth, that filter may have an impact on quadrature integrity.

    --RJH
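    As a quick sanity check on the ENOB figure above, the standard rule of thumb SNR ≈ 6.02·N + 1.76 dB for a full-scale sine relates the two, and decimation by D adds roughly 10·log10(D) of processing gain to in-band SNR (assuming white noise). The decimation factor below is an arbitrary example, not a device setting.

    ```python
    import math

    # Generic ENOB/SNR rule of thumb, not device-specific.
    def snr_from_enob(enob):
        return 6.02 * enob + 1.76

    def enob_from_snr(snr_db):
        return (snr_db - 1.76) / 6.02

    print(snr_from_enob(10))  # ~10-bit ENOB implies ~62 dB SNR
    d = 8                     # example decimation factor (assumption)
    gain = 10 * math.log10(d) # ~9 dB of processing gain
    print(enob_from_snr(snr_from_enob(10) + gain))  # ~11.5 bits in-band
    ```

    So even with decimation gain, the in-band effective resolution stays well short of the 16-bit output word width, consistent with the 12- to 13-bit observation.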