
ADC08DJ3200: JESD204C IP - RX clock and sysref issues

Part Number: ADC08DJ3200

Hi everyone!

We have a project where we want to connect an ADC08DJ3200 to a Kintex UltraScale FPGA (Xilinx KCU105). I've started using the JESD204C IP from TI in simulation and with a loopback board to learn how it works before connecting the ADC08DJ3200EVM, and I've run into a problem. I start with the example design that comes with the IP, and everything works fine both in simulation and on the board, but when I change rx_sys_clock from 156.25 MHz to 300 MHz (the clock frequency I want to use), several things happen and it stops working.

For information, I'm using a line rate of 12.5 Gb/s for the moment (the same as the example design), but I would like to decrease it to 8 Gb/s once all the kinks are ironed out. My encoding is 8B/10B and my user data width is 64 bits. My JESD204 parameters are a resolution of 8 bits, 8 lanes, GTH transceivers (since this is a Kintex UltraScale), a reference clock of 156.25 MHz, F = 1, and K = 32, per the ADC08DJ3200 datasheet for a JMODE5 configuration. As I said before, the example design works on hardware, so I don't think that is the problem. I don't need deterministic latency.
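For reference, the LMFC rate (and hence the set of valid SYSREF frequencies) follows directly from these parameters. A minimal sketch of the arithmetic, assuming the standard JESD204B relation LMFC = line rate / (10 × F × K) for 8B/10B encoding (the function and parameter names here are illustrative, not TI IP port names):

```python
def lmfc_frequency_hz(line_rate_bps: float, f_octets: int, k_frames: int) -> float:
    """LMFC rate for 8B/10B: each octet costs 10 line bits, a multiframe is F*K octets."""
    return line_rate_bps / (10 * f_octets * k_frames)

# Values quoted above: 12.5 Gb/s line rate, F = 1, K = 32 (JMODE5-style setup)
lmfc = lmfc_frequency_hz(12.5e9, f_octets=1, k_frames=32)
print(f"LMFC = {lmfc / 1e6:.4f} MHz")  # SYSREF should be LMFC divided by an integer
```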

The first thing that happens is that rx_sys_clock seems to interact with SYSREF, but I don't know why. The datasheet says that SYSREF is a function of the line rate and the JESD204 parameters, so why do I get desynchronization of the LMFC if my SYSREF is not a multiple of my rx_sys_clk? Is there something I'm missing here?

The other thing that happens is that when my rx_sys_clk is at 300 MHz and I feed data in simulation, the right data comes out for a few tens of microseconds, but after that the last 4 bytes seem to become random. If I set rx_sys_clk to 200 MHz instead of 300 MHz, it seems to take longer before it starts reading rubbish. The datasheet says that for a 64-bit interface rx_sys_clk should be higher than the line rate divided by 80, which it is. When I set rx_sys_clk to exactly that frequency it works without issue, but when I increase the frequency the corruption starts. The same phenomenon happens in both hardware and simulation; I've attached a screenshot of what it looks like in simulation. Again, is there something I'm missing here? I don't understand why it doesn't work.
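To put numbers on the line-rate/80 constraint mentioned above, here is a quick sketch. The /80 divisor is my reading of the datasheet quote: a 64-bit user datapath at 8B/10B consumes 64 × 10/8 = 80 line bits per clock (the function name is mine, for illustration only):

```python
def min_rx_sys_clk_hz(line_rate_bps: float) -> float:
    """Minimum rx_sys_clk for a 64-bit 8B/10B datapath: 64 bits * 10/8 = 80 line bits per clock."""
    return line_rate_bps / 80

for rate in (12.5e9, 8e9):
    print(f"{rate / 1e9:g} Gb/s -> rx_sys_clk >= {min_rx_sys_clk_hz(rate) / 1e6:g} MHz")
```

So at 12.5 Gb/s the floor is exactly 156.25 MHz, which matches the frequency at which the link runs cleanly for me.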

If you have any other insight, I would really appreciate it.

Regards,

Étienne

  • Hi Etienne,

    You have mentioned that you change rx_sys_clock. Kindly let me know where you are making this change. In the original reference design, the rx_sys_clock and tx_sys_clock are both derived from a PLL (with the sys_clk of 156.25MHz as an input). Are you forcing the clock somewhere in the path?

    The SYSREF is sampled by the Rx part of the JESD IP using rx_sys_clock, so if deterministic latency is needed, SYSREF must be synchronous to that clock. If it is not needed, you can tie SYSREF to '0' and the link will still operate correctly.

    Please let me know if you see any of the error count outputs of the IP incrementing before the data starts to get corrupted.

    Regards,

    Ameet