
AM2434: LP-AM243: EtherNet/IP performance parameters

Part Number: AM2434


Hello Team,

My company is developing an industrial communication product based on an EtherNet/IP slave using the AM2434, and we have some questions that need support:

1. What is the maximum number of nodes that can be connected on one line (daisy-chain topology)? Does the protocol stack support 100 nodes?

2. Does EtherNet/IP work in GMII mode only up to 100 Mbps, not 1 Gbps?

3. What is the maximum latency between two nodes in a setup with 100 nodes?

My team has spent several days discussing the details. A scan time of 2 ms per cycle across 100 nodes is critical for our product, so we must decide soon whether EtherNet/IP can work or whether we need to switch to EtherCAT.

Expecting your reply soon, thank you very much!

  • Hi Liang,

    What Industrial Comms SDK version are you using/planning to use?

    Regards
    Archit Dev

  • With the latest version (ind_comms_sdk_am243x_09_02_00_15).

  • Hi Liang,

    Are you planning to run any other protocols? For example, Layer 2 protocols like DLR/PTP?

    Regards
    Archit Dev 

  • Not as of now. The other option is EtherCAT.

  • Hi Liang,

    2. You can use EtherNet/IP with RGMII up to 100 Mbps. Gigabit speed is currently not supported with EtherNet/IP.

    3. The cut-through latency between two nodes is around 4 microseconds on the higher side.

    Regarding your first question, I have notified our EtherNet/IP stack expert. He will get back to you with an answer shortly.

    Thank you for your patience.

    Regards
    Archit Dev 

  • Okay, thank you.

  • Hi Liang,

    Regarding the first question, the limiting factor is the available RAM.
    The number of LLDP neighbor devices in SDK 9.2.0.15 is fixed to 16, but in the next SDK it will be configurable, so you could for example set it to 100. However, if your device relies only on on-chip RAM, it will run out of memory and you will face errors.
    Other than that, I don't see any theoretical limitation.

    Best regards
    Pourya

  • Hi Pourya,

    Got it. But when will the next SDK be released?

    Br.

    Liang

  • Hi Liang,

    It is expected to be released for AM243x by the end of November.

    Best regards
    Pourya

  • Hi Pourya,

    We just use the AM243x as a slave node. Does that require a large RAM size to support 100 nodes? How much is needed?

    By the way, could we set up a meeting or discuss this over email? My team members could then discuss it directly, which I think would be more efficient.

    Br.

    Liang

  • Hi Archit Dev,

    May I come back to question 3? We have calculated the overall latency as the sum of store-and-forward latency, wireline latency, switch fabric latency, and queuing latency, multiplied by the number of nodes (100). To be honest, though, we are not sure which latencies we should include (store-and-forward vs. cut-through latency, which queuing latency to assume, whether switch fabric latency should be counted, etc.).

    Our results put the overall latency at around 8 ms. That is too high for our application; we need to stay within 2-3 ms for the overall latency plus the PLC cycle time. With a cut-through latency between two nodes of around 4 microseconds on the higher side, we would have an overall latency of 0.4 ms, but I guess that is not the only latency we need to account for.

    Do you have any more numbers (which latencies should be considered), calculators, or other insights? Thanks a lot, Petr.

  • Hi Liang,

    The amount of RAM you would need of course depends on your application. The AM243x-EVM board, for example, has DDR RAM which you could use to extend your memory. I cannot give you an exact byte count for 100 nodes right now, because some memory is allocated dynamically at run time and varies between scenarios. But assuming the external RAM meets your requirement, the limiting factor would be the overall latency. If latency is critical in your application, I would suggest investigating our EtherCAT solution.

    Regarding a meeting, please contact your respective TI sales & support to set one up.

    Best regards
    Pourya

  • Hi Petr,

    To understand the scenario better, can you please explain the procedure you used to arrive at the overall latency of 8 ms?

    In the meantime, you can also go through the latency measurements in the EtherCAT offering document here: https://software-dl.ti.com/processor-industrial-sw/esd/ind_comms_sdk/am243x/latest/docs/am243x/ethercat_slave/ethercat_datasheet.html#:%7E:text=Key%20Performance%20Parameters

    Regards
    Archit Dev

    Hi Archit Dev,

    We have considered the following:

    A data size of approx. 85 bytes per DCM, including the header (we would have 100 DCMs in a daisy chain). With redundancy considered, we multiplied the data size by 3, giving a total transmission size of 255 bytes per DCM. The maximum data rate from the MCU is taken as 100 Mbit/s in GMII mode. From this we calculated the time required to send one byte and the time required to send the packets for 100 DCMs (2.04 ms). In addition, we calculated the overall latency as a sum of the individual latencies:

    1) Store-and-forward latency: 255 bytes divided by 100 Mbit/s, i.e. 20 us, multiplied by 100 DCMs
    2) Wireline latency: 0.03 us, multiplied by 100 DCMs
    3) Queuing latency: 30 us, multiplied by 100 DCMs
    4) EIP switch latency: 20 us

    That gives a total latency of 50.03 us per DCM, multiplied by 100 DCMs, plus the EIP switch latency, i.e. about 5.003 ms. Added to the previously calculated 2.04 ms, this comes to around 7 ms. With a PLC cycle time of 1 ms on top, the overall figure is 8 ms.
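    For reference, here is the same calculation as a short Python sketch (all figures are the assumptions listed above, not measured values):

        # Store-and-forward latency estimate for a 100-DCM daisy chain.
        # All inputs are the assumptions from this post, not measurements.
        NODES = 100                    # DCMs in the daisy chain
        LINK_BPS = 100e6               # 100 Mbit/s (GMII mode)
        FRAME_BYTES = 85 * 3           # 85 B per DCM, tripled for redundancy

        frame_time_s = FRAME_BYTES * 8 / LINK_BPS    # ~20.4 us per frame
        transmission_s = frame_time_s * NODES        # ~2.04 ms for 100 DCMs

        # Per-hop latencies (items 1-3) plus the one-off EIP switch latency
        per_hop_s = 20e-6 + 0.03e-6 + 30e-6          # ~50.03 us per DCM
        total_latency_s = per_hop_s * NODES + 20e-6  # ~5.003 ms

        plc_cycle_s = 1e-3
        overall_s = transmission_s + total_latency_s + plc_cycle_s
        print(f"overall: {overall_s * 1e3:.2f} ms")  # ~8 ms, as above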

    Do you have a calculator or guidance on which latencies should be considered, or any other insights? Thanks a lot, Petr.

  • Hi Petr,

    Thanks for sharing the update. We are discussing this internally and working on a recommendation. 
    I'll get back with an update here soon.

    Thank you for your patience.

    Regards
    Archit Dev

  • Hi Petr,

    Thank you for your patience.

    The EtherNet/IP firmware running on the AM243x behaves as a cut-through switch. This means that as long as there is no active packet transmission, the firmware forwards a packet as soon as the destination address has been processed, without waiting for the entire packet to be received.

    This is much faster than the store-and-forward method. The cut-through latency for EtherNet/IP between two nodes is around 4 microseconds.
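    As a rough illustration (a toy model only, not the actual firmware behavior; the 4 us figure includes processing overhead on top of the wire time):

        # Toy comparison of per-hop delay at 100 Mbit/s: store-and-forward
        # must buffer the whole frame, while cut-through only needs the
        # bytes up to the destination MAC before forwarding can start.
        LINK_BPS = 100e6
        PREAMBLE_SFD_BYTES = 8
        DST_MAC_BYTES = 6

        def store_and_forward_s(frame_bytes: int) -> float:
            return frame_bytes * 8 / LINK_BPS

        def cut_through_floor_s() -> float:
            # Lower bound: wire time until the destination address is seen
            return (PREAMBLE_SFD_BYTES + DST_MAC_BYTES) * 8 / LINK_BPS

        print(f"store-and-forward, 1518 B frame: {store_and_forward_s(1518) * 1e6:.0f} us")
        print(f"cut-through wire-time floor:     {cut_through_floor_s() * 1e6:.2f} us")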

    Can you also elaborate on what the queuing latency is and how you arrived at 30 us for it?

    Regards
    Archit Dev

  • Hi Archit Dev,

    Thanks for the explanation. This gives us better results: a total latency of 2.939 ms, which is on the edge of usability. (We did not consider queuing latency in this calculation.) Previously we did include queuing latency, based on openly available calculations, described below:

    "Queuing introduces a non-deterministic factor to latency since it can often be very difficult to predict exact traffic patterns on a
    network. Estimating the average latency for an Ethernet frame for a network with no traffic load, the queuing latency for a frame will be nil. For a loaded network, one can assume that the likelihood of a frame already in the queue is proportional to the network load. The average queuing latency can then be estimated as: 

    Lq = Network load * Store and forward latency of a full-size frame.

    For example, a network with 25% load would have an average queuing latency of :

    Lq = 0.25 * (12000 bits / 100Mbps) = 30us"
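    In code, the quoted rule of thumb looks like this (12,000 bits corresponding to a full-size frame of roughly 1500 bytes):

        # Average queuing latency rule of thumb from the quote above:
        # Lq = network load * store-and-forward time of a full-size frame
        def queuing_latency_s(load: float,
                              frame_bits: int = 12_000,
                              link_bps: float = 100e6) -> float:
            return load * frame_bits / link_bps

        print(f"{queuing_latency_s(0.25) * 1e6:.0f} us")  # 0.25 * 120 us = 30 us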

    I guess that if the EtherNet/IP firmware running on the AM243x behaves as a cut-through switch, we don't need to consider queuing latency, correct?

    Thanks, Petr.

  • Hi Petr,

    Thank you for your patience.

    As mentioned previously, the EtherNet/IP firmware running on the AM243x behaves as a cut-through switch. However, if there is an ongoing transmission on the node, the incoming packet is not cut through; in that case, the firmware falls back to store-and-forward mode.

    In the worst-case scenario, where a very large packet (over 1500 bytes) is being transmitted, the latency can be quite high (close to 125 microseconds).

    In the ideal case where all frames are cut through, we can take the average cut-through latency as 4 microseconds. However, unless the traffic on the network is controlled very strictly, predicting the exact latency for a node is difficult.
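    To put rough bounds on it (a back-of-the-envelope sketch using the per-hop figures above, not guaranteed numbers):

        # End-to-end bounds for a 100-node daisy chain using the per-hop
        # figures from this thread. Real traffic falls somewhere between.
        NODES = 100
        CUT_THROUGH_S = 4e-6     # average cut-through hop latency
        WORST_CASE_S = 125e-6    # hop blocked by a large frame in flight

        print(f"best case:  {NODES * CUT_THROUGH_S * 1e3:.1f} ms")  # 0.4 ms
        print(f"worst case: {NODES * WORST_CASE_S * 1e3:.1f} ms")   # 12.5 ms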

    Our recommendation would be to analyze the use of EtherCAT for your product: it offers high speed with low latency and jitter, with on-the-fly processing of data.

    You can go through the latency numbers and other KPIs for our EtherCAT solution based on the AM243x here: https://software-dl.ti.com/processor-industrial-sw/esd/ind_comms_sdk/am243x/latest/docs/am243x/ethercat_slave/ethercat_datasheet.html#:~:text=Key%20Performance%20Parameters

    Regards
    Archit Dev