In analyzing the DDC down-converted data from the ADC32RF45, we found that the I and Q of each sample do not satisfy I = A·cos(φ) and Q = A·sin(φ) to high accuracy: rather than the full 16 bits, we see more like 12-13 bits. The measurement was done with a phase-locked signal input, correlating I against Q to remove the effect of A. Our application aims to measure the amplitude A from I/Q with the highest possible accuracy, so the lost bits concern us. Our guess is that the computation of cos() and sin(), and perhaps the mixer multiplication, carry truncation errors that are visible in the delivered 16-bit I/Q data. Is it possible to learn more about how this part of the digital algorithm is implemented, so that we can model it and understand our observations?
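To make our guess concrete, here is the kind of model we have in mind: the ADC sample is multiplied by quantized NCO cos/sin values and the product is truncated to the delivered word width. All bit widths below (NCO word width, product truncation) are our assumptions for illustration only, not the actual ADC32RF45 design; that is exactly the detail we are hoping to learn.

```python
import numpy as np

def quantize(x, bits):
    """Round onto a signed fixed-point grid: `bits` total bits over [-1, 1)."""
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

def truncate(x, bits):
    """Drop LSBs by flooring onto a `bits`-wide grid (models a chopped product)."""
    scale = 2.0 ** (bits - 1)
    return np.floor(x * scale) / scale

n = np.arange(1 << 15)
f_in = 0.12345            # tone frequency, cycles/sample (arbitrary)
a = 0.9                   # tone amplitude relative to full scale

adc_bits = 14             # ADC32RF45 core resolution
nco_bits = 16             # assumed NCO sin/cos word width (guess)
out_bits = 16             # delivered I/Q word width

# ADC samples of a phase-locked input tone
x = quantize(a * np.cos(2 * np.pi * f_in * n), adc_bits)

lo_c_ideal = np.cos(2 * np.pi * f_in * n)
lo_s_ideal = -np.sin(2 * np.pi * f_in * n)

# Mixer with quantized LO and truncated product (the hypothesized error sources)
i_out = truncate(x * quantize(lo_c_ideal, nco_bits), out_bits)
q_out = truncate(x * quantize(lo_s_ideal, nco_bits), out_bits)

# Reference: the same ADC samples mixed with an ideal (floating-point) LO
i_ideal = x * lo_c_ideal
q_ideal = x * lo_s_ideal

resid = np.concatenate([i_out - i_ideal, q_out - q_ideal])
err_rms = np.sqrt(np.mean(resid ** 2))
eff_bits = -np.log2(err_rms)   # effective bits relative to full scale
print(f"residual RMS = {err_rms:.3e}  ->  ~{eff_bits:.1f} effective bits")
```

With the guessed widths above, the residual is dominated by the product truncation step, so this particular model predicts a smaller loss than the 12-13 effective bits we observe; knowing the real internal widths and rounding modes would let us match the model to the measurement.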
Thanks.