Other Parts Discussed in Thread: AFE5816
Hello,
we've designed a custom RFE board with 4 cascaded AWR2243s. Everything up to this point has worked well, and we are able to control the chips via the mmWave DFP API. We are using a Zynq UltraScale+ as the external host controller for the platform. We're now at the point where we're trying to stream LVDS data from the AWR2243s, and we've encountered a pretty serious problem.
The LVDS data input architecture looks as follows:
We stream 4 LVDS data channels, 1 LVDS clock and 1 LVDS valid per MMIC into the Zynq UltraScale+. These go through an internal utility buffer that converts the differential LVDS to single-ended signals for internal FPGA use. The LVDS clock is 300 MHz and the data rate is set to 600 Mbps DDR. The clock feeds a PLL that generates a divided-by-4 clock (75 MHz), so that both the 300 MHz and 75 MHz clocks can drive the 1:8 SERDES, which converts the serial data stream into 8-bit parallel words. We set the data format to 16 bits.
We use the API function rlDeviceSetTestPatternConfig() to set an LVDS synchronization pattern so that we can synchronize every channel with our 1:8 SERDES inside the Zynq UltraScale+.
The sync pattern is 0xFF00 (0b11111111 00000000). A random amount of bit slippage is to be expected, and that is fine: we synchronize on the incoming bits and are able to recover the reconstructed 0xFF00 sync pattern.
Now onto the problem. Every time we call rlDeviceSetTestPatternConfig(), for instance to start a ramp to test the data path, or simply to rewrite the sync pattern, the data is shifted by another 2-4 bits! For example, here is the output of the 4 SERDES modules for the data:
rlDeviceSetTestPatternConfig() /* with 0xFF00 */
rlDeviceSetTestPatternConfig() /* with 0xFF00 */
rlDeviceSetTestPatternConfig() /* with 0xFF00 */
And so on and so on.
This means we lose our bitslip alignment every time we update the test pattern configuration. If we sync on the 0xFF00 pattern in one configuration and then switch over to a ramp to test the data, we're at a new bit offset. A 2-bit shift is equivalent to 1 lost cycle of the 300 MHz clock (since the data is DDR).
We do not believe this is a timing issue on our part, because we can let the LVDS run for an hour and the alignment stays exactly the same. As soon as we update the value with rlDeviceSetTestPatternConfig(), we lose synchronization again. This is catastrophic, because we can't move on to capturing real ADC data until this is verified.
Can anyone explain why this is happening? With every other TI product we've used (for instance the AFE5816), we could use this process to sync to a specific pattern and then switch to real data. Why does the AWR2243 "hiccup" a clock cycle when the test pattern is updated? How can we get around this?
As far as we are aware, we cannot use the frame clock, because we have 2 MMICs per I/O bank that are asynchronous to each other, and we can't use a PLL to lock onto the frame clock because it is always removed when there's no data!
Thank you in advance for your help; if we can't get this to work, it will seriously impact our product!
Best Regards,
WBL