AM6442: Delay added by the RT/NRT tunneling mechanism proposed by Nitika Verma and team for heterogeneous SoCs

Part Number: AM6442

Hello,

At the 2025 embedded world Exhibition and Conference, Nitika Verma, Teja Rowthu, Pradeep HN, Archit Dev and Manuel Philippin presented a paper describing a method to split RT and NRT data onto different processors of an MPU using a software bridge.

The paper was titled 'Network Traffic Tunneling on Heterogeneous SoCs'.

I was wondering how much delay this software bridge adds. I would appreciate packet handling delay figures with and without the bridge, for the Linux cores as well as for the RTOS cores.

I am also interested in the methodology used to obtain these numbers, if available.

Please add, or ask for, any information relevant to this thread that I forgot to include.

Kind regards,

Christian

  • Hi Christian,

    I regret to inform you that we did not measure the additional latency added by the bridge and shared memory. We have a solution that handles RT and NRT traffic separately for time-critical applications using our PRU firmware; this ensures RT traffic always stays within its bounds. We have observed a maximum RTT of around 4 ms from an external host PC to Linux and back. If you would like these numbers, we can plan to conduct some tests and share the results.
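
    For reference, a typical way to obtain such a number is a timestamped UDP echo between the host PC and the target. The sketch below shows only the host side; the port, payload size and sample count are illustrative assumptions, not our exact test setup:

    ```c
    /* Host-PC side of a simple UDP round-trip-time measurement: send a
     * datagram to an echo service on the target and time the reply.
     * Port, sample count and payload size are illustrative only.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define ECHO_PORT 5001               /* assumed echo port on the target */
    #define N_SAMPLES 1000

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <target-ip>\n", argv[0]);
            return 1;
        }

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port   = htons(ECHO_PORT) };
        inet_pton(AF_INET, argv[1], &dst.sin_addr);

        char   buf[64] = { 0 };
        double max_rtt = 0.0;

        for (int i = 0; i < N_SAMPLES; i++) {
            double t0 = now_us();
            sendto(sock, buf, sizeof(buf), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
            recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL); /* blocks for echo */
            double rtt = now_us() - t0;
            if (rtt > max_rtt)
                max_rtt = rtt;
        }

        printf("max RTT over %d samples: %.1f us\n", N_SAMPLES, max_rtt);
        close(sock);
        return 0;
    }
    ```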

    I would like to understand the type of end solution you are planning, so we can design test methods that better reflect your use case.

    Thanks and regards,
    Teja.

  • Hi Teja,

    Thank you for your answer. The planned solution is a bridged TSN endpoint with high-frequency cyclic traffic. The TSN cycle times would need to match the current state of the art for industrial Ethernet protocols.

    EtherCAT apparently supports 12.5 µs cycle times (see https://www.ethercat.org/download/press/etg_201202_e.pdf) and PROFINET over TSN reaches 31.25 µs (see https://us.profinet.com/digital/tsn/).

    If the delay added by forwarding within the RT path exceeds the allowable tolerances, this solution is not suitable for my application.

    The NRT path is not as critical, but still interesting from my perspective.

    Best regards,

    Christian

  • Hi Christian,

    Are you trying to run a TSN node on a core that is not directly connected to the network peripheral? Only in that situation will additional latency be incurred, due to the extra software switching.

    For port-to-port forwarding, there won't be any additional latency for any traffic, so it would still meet the cycle time and maximum-hop-count parameters supported by our SDKs. TI provides EtherCAT master and slave solutions as part of the Industrial Communications SDK, with protocol-specific firmware that runs on our PRU-ICSS cores and handles both RT and NRT traffic.

    Currently, we have only tested our shared-memory-based solution for NRT traffic. That was also our intended use case: both RT and NRT traffic on the network can be handled, and the solution provides a way to offload NRT traffic handling, which has a higher processing demand.
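
    To make the hand-off concrete, below is a minimal sketch of the kind of single-producer/single-consumer ring in shared memory that such a tunnel can be built on. All names (shm_ring, nrt_tunnel_enqueue) and the layout are illustrative assumptions, not our actual implementation, and a real inter-core ring would additionally need cache maintenance or a coherent memory region:

    ```c
    /* Minimal single-producer/single-consumer ring in shared memory,
     * illustrating how NRT frames could be handed from the R5 (RTOS)
     * side to the A53 (Linux) side.  Names and layout are illustrative;
     * cache maintenance for non-coherent shared memory is omitted.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define SHM_RING_SLOTS  64           /* must be a power of two */
    #define SHM_SLOT_BYTES  1536         /* one full Ethernet frame */

    struct shm_slot {
        uint16_t len;
        uint8_t  data[SHM_SLOT_BYTES];
    };

    struct shm_ring {
        _Atomic uint32_t head;           /* written by producer (R5)  */
        _Atomic uint32_t tail;           /* written by consumer (A53) */
        struct shm_slot  slot[SHM_RING_SLOTS];
    };

    /* Producer side: copy one NRT frame into the ring.  Returns false
     * when the ring is full (the frame is then dropped or retried). */
    static bool nrt_tunnel_enqueue(struct shm_ring *r,
                                   const uint8_t *frame, uint16_t len)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (head - tail == SHM_RING_SLOTS || len > SHM_SLOT_BYTES)
            return false;                /* full, or frame too large */

        struct shm_slot *s = &r->slot[head & (SHM_RING_SLOTS - 1)];
        s->len = len;
        memcpy(s->data, frame, len);

        /* Release: the consumer must see the frame before the index. */
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }
    ```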

    If we can provide any more information or clarifications, please let us know.

    Thanks and regards,
    Teja.

  • Hi Teja,

    As far as I understand your paper, any incoming Ethernet packet is routed through the ICSSG and into the lwIP software bridge. From there, the NRT packets are sent towards the A cores, while the RT packets are handled directly in the RTOS. My application would use a similar layout, where only NRT traffic is routed to a different processor.

    In any case, the RT packets of the chosen industrial Ethernet protocol are handled by the software bridge before being processed by the target process in the R5 core complex. This would add delay in the receive path as well as the transmit path.

    Please correct me if I got some detail wrong.

    Best regards,

    Christian

  • Hi Christian,

    The software bridge is placed only in the NRT path for the ICSSG examples; the RT packets take an entirely different route from the firmware. In the paper, we used the example of CPSW, which doesn't discriminate between RT and NRT and will therefore add delay for all packets.
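
    To illustrate the difference: in a CPSW-style setup every frame reaches software, so the RT/NRT split comes down to a per-frame classification like the sketch below. The function names are assumptions for illustration; only the EtherType values are standard:

    ```c
    /* RT/NRT classification as it might look at the input of a software
     * bridge in a CPSW-style setup, where every frame reaches software.
     * With ICSSG this check is not needed: the firmware steers RT frames
     * before they ever reach the bridge.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define ETHERTYPE_ETHERCAT 0x88A4    /* EtherCAT */
    #define ETHERTYPE_PROFINET 0x8892    /* PROFINET RT */

    /* Hypothetical handlers, stubbed so the sketch is self-contained. */
    static void rt_stack_input(const uint8_t *f, uint16_t n)    { (void)f; (void)n; }
    static void nrt_tunnel_forward(const uint8_t *f, uint16_t n){ (void)f; (void)n; }

    /* Extract the EtherType from a raw Ethernet frame (no VLAN tag
     * handling shown; a real bridge would also parse 0x8100 tags). */
    static uint16_t frame_ethertype(const uint8_t *frame)
    {
        return (uint16_t)((frame[12] << 8) | frame[13]);
    }

    /* Bridge input hook: RT frames stay on the local RTOS core with
     * bounded latency; everything else is tunneled to the remote core. */
    static void bridge_input(const uint8_t *frame, uint16_t len)
    {
        uint16_t type = frame_ethertype(frame);

        if (type == ETHERTYPE_ETHERCAT || type == ETHERTYPE_PROFINET)
            rt_stack_input(frame, len);
        else
            nrt_tunnel_forward(frame, len);
    }
    ```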

    If your solution requires handling RT and NRT traffic separately, we suggest the ICSSG implementation, with support for RT firmware. For general-purpose use cases, we suggest using CPSW to offload processing-heavy traffic to the A cores, alongside deterministic traffic with more flexible time bounds.

    Regards,
    Teja.

  • Hi Teja,

    If I understood you correctly: your paper was not meant to demonstrate fast handling of RT packets, which would have been handled within the ICSSG itself. Instead, you forwarded them to one of the R5 processors to provide some form of RT traffic for your paper.

    In my application, the switching and handling of RT packets would happen within the ICSSG itself. This would be very fast, as this special hardware is designed to handle Ethernet packets in real time. Only NRT packets would be forwarded to one of the A/R processors of the AM64xx.

    Thank you for your help.

    Best regards,

    Christian

  • Hi Christian,

    In the case of ICSSG, as you correctly mentioned, RT traffic is handled separately in firmware. If you follow the method shown in the paper with ICSSG, all NRT traffic, and only NRT traffic, will be routed to the software bridge for forwarding to the A53/R5 cores. In our paper, we didn't discuss fast handling of RT packets because we don't alter any RT packet handling in the ICSSG. The scope of the paper is tunneling network traffic to remote cores for process offloading, without consuming much CPU load in the process.

    I hope this clarifies your questions. If there are any further queries, please let us know.

    Thanks and regards,
    Teja.