
AM6442: PTP synchronization with long daisy chains

Part Number: AM6442

Hello team,

The customer is experiencing PTP sync deviation on the Ethernet switch when long daisy chains of devices are used (20+ nodes). The offset of the PTP hardware clock at the end of the chain is in the microsecond range, while the target is hundreds of nanoseconds.

Thanks,

Aida

Other information:

* using Linux kernel version 6.1.93
* drift at the very end of the daisy chain is acceptable with 10 nodes in the chain; they start seeing a drift of several microseconds at the end of the chain once they reach 20+ nodes
* every device is a boundary clock
* the drift gets bigger over a period of 3-4 seconds

  • Hello Aida,

    Is each device in the daisy chain a custom board designed with the AM6442 SoC?

    * drift at the very end of the daisy chain is acceptable with 10 nodes in the chain; they start seeing a drift of several microseconds at the end of the chain once they reach 20+ nodes

    Is there a jump in the offset/drift between the 19th-to-20th device versus the 20th-to-21st device? In other words, I'm wondering if the change is immediate, from an ns-scale offset at the 19th-to-20th device to the µs-scale drift you are describing for the 20th-to-21st device.

    How many total devices is the customer testing in the daisy chain? You mention 20+; do they have an exact target or threshold?

    Are they using linuxptp (ptp4l) for the synchronization?

    Was this µs-scale drift seen immediately upon synchronization, or does it take some time before the µs drift occurs?

    * the drift gets bigger over a period of 3-4 seconds

    If they are using ptp4l, what ptp4l console messages show up when the drift increases?
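
    For reference, when started with -m (or when logging to syslog), ptp4l periodically prints servo status lines of the following form; the numbers here are made up, but the fields are standard:

    ptp4l[1205.791]: master offset        -48 s2 freq   +1024 path delay       782

    "master offset" is in nanoseconds, s0/s1/s2 is the servo state, and "freq" is the frequency correction in parts per billion, so these are the lines worth capturing while the drift grows.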

    -Daolin

  • Hello Daolin,

    The test setup consists of custom devices based on the AM6442 processor. Each device runs Linux version 6.1.83 and the ptp4l service. Each device is configured as a PTP boundary clock. The devices are connected in a daisy chain, with the first device functioning as the Grandmaster (GM).

    Below are the test results for a setup consisting of the GM device and 14 custom AM6442 devices:
    The pmc tool was run twice, a few seconds apart, on the Grandmaster device to observe the offsetFromMaster values of the nodes in the chain. Here are the results:

    [screenshots: pmc offsetFromMaster output for the 14-node chain]
    For a larger chain of 20 devices, the offsetFromMaster grows nearly exponentially, with no specific pair of devices inducing a major offset. The offsetFromMaster values at the 20th node are around 1–2 microseconds.
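
    For reproducibility: this kind of per-node query can be issued from the GM over ptp4l's local UDS socket using pmc's CURRENT_DATA_SET management request. A sketch only (the exact invocation used above is not shown, and the boundary-hops value is an assumption; it must be at least the chain length):

    pmc -u -b 25 'GET CURRENT_DATA_SET'

    Each responding clock reports stepsRemoved (its hop distance from the GM) along with offsetFromMaster and meanPathDelay, which makes it easy to see where along the chain the offset starts to grow.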

    In the ptp4l log, the offsetFromMaster jitter level on each node remains stable after the initial clock adjustment phase.

    BR, Viktar

  • Hello Viktar,

    For a larger chain of 20 devices, the offsetFromMaster grows nearly exponentially, with no specific pair of devices inducing a major offset. The offsetFromMaster values at the 20th node are around 1–2 microseconds.

    Just to be clear, in a network of 20 devices, is the offsetFromMaster for the first 14 devices also larger than the offsetFromMaster in a network of just 14 devices? In other words, can you also show the offsetFromMaster measurements for the 20-device test case (assuming the screenshots you already shared are for the 14-device test case)?

    Is there a particular reason for configuring each device in the chain as a PTP boundary clock? Why not as a transparent clock instead? From my understanding, each boundary clock in the chain syncs with an upstream clock and distributes time based on its local clock. For this reason, each boundary clock depends on how well the previous clock is synchronized with the grandmaster. The more boundary clocks you add, the less accurate the clocks toward the end of the chain are likely to become.
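
    As a rough back-of-the-envelope illustration (the numbers are assumptions, not measurements): if each boundary clock hop contributed an independent random servo error of standard deviation σ, the error after N hops would grow roughly as σ·√N, while any systematic per-hop bias b (for example, uncompensated timestamping asymmetry) accumulates linearly as N·b. A bias of only b ≈ 50 ns per hop would already put hop 20 at about 1 µs, which is the scale being reported.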

    In the ptp4l log, the offsetFromMaster jitter level on each node remains stable after the initial clock adjustment phase.

    Can you share specifically what ptp4l command (including the options you selected) was running on the grandmaster device and the clock follower devices?

    -Daolin

  • Hi Viktar,

    An additional question: is the customer using the CPSW Ethernet interfaces or the PRU_ICSSG Ethernet interfaces of the AM6442 to run these PTP tests?

    -Daolin

  • Hello Daolin,


    The offsetFromMaster for the first 14 nodes is the same in both the shorter and the longer daisy chain. We will collect pmc and ptp4l logs for the 20-node chain once we have the necessary number of devices available.

    Boundary clock mode is used because a PPS signal synchronized to the PTP master is required on every node.
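
    Side note: on Linux, one generic way to produce such a PPS from the PTP hardware clock is the kernel's testptp utility. This is only a sketch; the pin index, the function number, and whether the CPSW CPTS routes a periodic output to a board pin all depend on the hardware design:

    testptp -d /dev/ptp0 -L 0,2          # pin 0 -> function 2 (periodic output), if available
    testptp -d /dev/ptp0 -p 1000000000   # period of 1 s, i.e. a 1 Hz PPS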

    CPSW Ethernet interfaces are utilized on the AM6442 device.

    Boundary clock configuration:

    [global]
    slaveOnly               0
    delay_mechanism         Auto
    network_transport       L2
    time_stamping           hardware
    step_threshold          0.0004
    dscp_event              0x2e
    dscp_general            0x2e
    tx_timestamp_timeout    300
    sanity_freq_limit       0
    priority1               247

    [eth0]
    egressLatency           100
    ingressLatency          300

    [eth1]
    egressLatency           100
    ingressLatency          300

    Grandmaster configuration:

    [global]
    slaveOnly               0
    delay_mechanism         Auto
    network_transport       L2
    time_stamping           hardware
    step_threshold          1.0
    dscp_event              0x2e
    dscp_general            0x2e
    tx_timestamp_timeout    300

    [eth0]

    BR, Viktar

  • Hello Viktar,

    Is it possible for you to share the customer's end use case for testing PTP synchronization on a long daisy-chained network of devices? In other words, what is the customer trying to build?

    Can you share the specific ptp4l command, including the command options, that was run on the grandmaster device and the clock follower devices to set up PTP boundary clock mode on each device?

    Boundary clock configuration:

    Was this content part of a ptp configuration file that was referenced by the ptp4l command?

    -Daolin

  • Hello Daolin,

    PTP is used in a distributed measurement system with a daisy chain topology, where all ADCs sample synchronously.

    The ptp4l service is started on the device using the following command:
    /usr/sbin/ptp4l -f /etc/ptp4l.cfg

    The ptp4l.cfg files were provided in the previous message.

    BR, Viktar

  • Hi Viktar,

    Can you check if the following linuxptp configuration also shows the us-range offset at the end of the long chain of devices?

    Run "ptp4l -P -2 -H -i eth0 -i eth1 -f gPTP.cfg --step_threshold=1 -m -q -p /dev/ptp0" on all the devices that have both eth0 and eth1 ports connected via Ethernet cable.

    Run "ptp4l -P -2 -H -i ethX -f gPTP.cfg --step_threshold=1 -m -q -p /dev/ptp0" on the grandmaster device and the last follower device in the chain that only have on ethernet port connected (ethX can be eth0 or eth1, depending on what you connected).

    gPTP.cfg is the following, where priority1 is 100 only for the grandmaster device and should be changed to 240 on all the follower devices.

    # 802.1AS example configuration containing those attributes which
    # differ from the defaults. See the file, default.cfg, for the
    # complete list of available options.
    #
    [global]
    gmCapable 1
    priority1 100
    priority2 248
    logAnnounceInterval 0
    logSyncInterval -3
    syncReceiptTimeout 3
    neighborPropDelayThresh 800
    min_neighbor_prop_delay -20000000
    assume_two_step 1
    path_trace_enabled 1
    follow_up_info 1
    transportSpecific 0x1
    ptp_dst_mac 01:80:C2:00:00:0E
    network_transport L2
    delay_mechanism P2P
    ingressLatency 88
    egressLatency 288
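
    Side note: ptp4l also accepts configuration-file options as long command-line options (the --step_threshold=1 above uses exactly that mechanism), so the per-role priority1 change can be applied without maintaining two copies of gPTP.cfg. For example, on a follower:

    ptp4l -P -2 -H -i eth0 -i eth1 -f gPTP.cfg --priority1=240 --step_threshold=1 -m -q -p /dev/ptp0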

    -Daolin

  • Hello Daolin,

    With the gPTP.cfg configuration, the offsetFromMaster values have become smaller.

    However, I now see "selected best master clock" messages appearing every few minutes.

    BR, Viktar

  • Hello Viktar,

    Thanks for testing this out.

    With the gPTP.cfg configuration, the offsetFromMaster values have become smaller.

    Do you still see the drift behavior at the end of the daisy chain?

    However, I now see "selected best master clock" messages appearing every few minutes.

    Is this message from the log of the follower devices or the grandmaster device? I'm assuming it's from one of the follower devices. 

    If it's from the follower devices, I believe this should be expected: it simply indicates that the port in question (port2) is connected to a device that is acting as a master clock relative to the device port2 is on.
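
    To double-check, the state of each port can be read back locally with pmc; a minimal sketch:

    pmc -u -b 0 'GET PORT_DATA_SET'

    The portState field in the response shows which port is currently the follower (SLAVE) and which ones are distributing time (MASTER).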

    -Daolin