Other Parts Discussed in Thread: TMDX654IDKEVM
Hello Interface team,
My team is developing an industrial communication application on the TMDX654IDKEVM board, and we are required to work with timestamps at 8-nanosecond precision for both transmitting and receiving. (The requirement comes from time synchronization.) Unfortunately, the board does not make use of the PHY's Start of Frame Detection feature, so we need to apply corrections to the timestamps we get from the MAC implemented in the PRU-ICSSG. To make things even more complicated, we need this for all three link speeds in full duplex, i.e. 1000Base-T Full Duplex, 100Base-TX Full Duplex, and 10Base-T Full Duplex.
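To illustrate the kind of correction we have in mind, here is a minimal sketch, assuming the MDI-to-MAC delay really is static per link speed and direction. All delay values below are placeholders, not measured numbers, and the function names are our own:

```python
# Hypothetical sketch: applying a static per-speed correction to MAC
# timestamps to refer them back to the MDI. The delay constants are
# placeholders only; real values would come from measurement.

# Assumed static MDI<->MAC delays in nanoseconds, (rx, tx) per speed.
STATIC_DELAY_NS = {
    "1000base-t": {"rx": 300, "tx": 250},  # placeholder values
    "100base-tx": {"rx": 500, "tx": 400},  # placeholder values
    "10base-t":   {"rx": 900, "tx": 800},  # placeholder values
}

def correct_rx_timestamp(mac_ts_ns: int, speed: str) -> int:
    """Shift a MAC RX timestamp back to the MDI by removing the PHY+MII delay."""
    return mac_ts_ns - STATIC_DELAY_NS[speed]["rx"]

def correct_tx_timestamp(mac_ts_ns: int, speed: str) -> int:
    """Shift a MAC TX timestamp forward to the MDI by adding the PHY+MII delay."""
    return mac_ts_ns + STATIC_DELAY_NS[speed]["tx"]
```

This only works if the residual jitter (the non-static part of the delay) stays within our 8 ns budget, which is what the questions below are about.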
My ultimate aim is to minimize the timestamp jitter and to verify how large it can get. This translates to minimizing the processing-delay variance between the MDI and the MAC timestamping unit in both directions (receive and transmit).
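For completeness, this is how we plan to quantify the jitter from a series of measured delays; a simple sketch with made-up sample values:

```python
# Summarizing a series of measured MDI-to-MAC delays.
# The sample values are made up for illustration.
from statistics import mean, pstdev

def jitter_stats(delays_ns):
    """Mean, population std dev, and peak-to-peak jitter of a delay series."""
    return {
        "mean_ns": mean(delays_ns),
        "stdev_ns": pstdev(delays_ns),
        "peak_to_peak_ns": max(delays_ns) - min(delays_ns),
    }

samples = [104, 96, 100, 108, 92]  # fictional measured delays (ns)
stats = jitter_stats(samples)
# Peak-to-peak jitter here is 16 ns; for us the peak-to-peak figure
# is what must stay within the 8 ns budget, not just the std dev.
```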
So much for the background. Now to the questions:
1) I assume the RX_CLK signal on RGMII/MII is synchronous to the incoming line clock (with a fixed delay) whenever valid RX data are present. More precisely, I expect no jitter caused by the clock-domain crossing from the link to the RGMII/MII interface. (Any static delay is perfectly fine; jitter is the issue.) Is this assumption valid?
2) During transmission driven by RGMII (clocked by GTX_CLK from the MAC), can the PHY use this clock source directly for transmission on the medium in 10Base-T Full Duplex and 100Base-TX Full Duplex operation, respectively?
(This seems theoretically quite feasible for 10Base-T, since the outgoing line clock is not continuous there and the receiver clock is resynchronized at the start of every frame, during the preamble.)
The purpose would be to eliminate the jitter of the clock-domain crossing from GTX_CLK to the transmit line clock.
3) When the PHY's XI clock input is provided with a 25 MHz clock signal from the MAC (the SoC in this case) reference output, I expect no transmit jitter from the clock-domain transition between RGMII GTX_CLK and the outgoing line clock. Simply put: when both run from the same source of ticks, the delay caused by GTX_CLK differing from the line clock is fixed (no variation). Is that correct?
4) I would like to use MII loopback and external loopback for the jitter measurements. Should I expect any difference in jitter compared to regular sending and receiving combined (aside from the obvious fact that MII loopback bypasses most of the processing)?
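To make the intended comparison concrete, a small sketch of how we would contrast the two loopback modes; all numbers are invented, and the assumption is that MII loopback exercises only the digital path while external loopback adds the analog PHY TX+RX path:

```python
# Hypothetical comparison of round-trip jitter in the two loopback modes.
# All sample values are made up for illustration.

def peak_to_peak_ns(samples_ns):
    """Peak-to-peak jitter of one measurement series."""
    return max(samples_ns) - min(samples_ns)

mii_loopback_ns = [50, 52, 51, 50, 53]       # digital path only (fictional)
ext_loopback_ns = [250, 262, 255, 249, 260]  # adds PHY TX+RX path (fictional)

mii_jitter = peak_to_peak_ns(mii_loopback_ns)
ext_jitter = peak_to_peak_ns(ext_loopback_ns)
# The excess of ext_jitter over mii_jitter bounds the analog-path
# contribution; note that independent jitter sources do not subtract
# linearly, so this is a bound rather than an exact decomposition.
```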
Thank you!
Best Regards,
Jan Smrčina