AM5728: Ethernet/DDR questions

Part Number: AM5728

Hello,

One of our customers is designing with the AM5728 and needs some help with the following questions:

  1. For the GigE RGMII interface, they get the following results when they use a loopback cable:
     • AM572x EVM: Bandwidth: 794 Mbps, Jitter: 0.008 ms, Packet Loss: 0.89%
     • Customer implementation: Bandwidth: 714 Mbps, Jitter: 0.001 ms, Packet Loss: 0.46%

Is this what you would expect? Is there a limiting factor, or can you advise on how they can improve the bandwidth?

  2. They implemented the same DDR as the AM572x EVM to de-risk the project, but they are wondering whether there is a tool for setting up the DDR that would recommend register settings, to help them qualify different DDR devices. Other suppliers they have worked with provided spreadsheets to aid with this, as there are a significant number of DDR registers.

Regards,
Ryan B.

  • Hi,

    Q1: Please clarify how this was measured. Is the custom board identical to the EVM? Please provide full details.
    Q2: www.ti.com/.../sprac36a.pdf
  • Hi Biser,

    Thank you for the very speedy response. I have asked the customer to elaborate on their measurement setup for you.

    Regards,
    Ryan B.
  • Hi,

    Taken from processors.wiki.ti.com/.../Processor_SDK_Linux_Kernel_Performance_Guide

    • iperf version 2.0.5
    • For receive performance, on DUT, invoke iperf in server mode.
    iperf -s -u
    • For transmit performance, on DUT, invoke iperf in client mode.
    iperf -c <server ip> -b <bandwidth limit> -f M -t 60
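
    As a concrete illustration of the above (a sketch only; the 192.168.1.10 address and the 1000M bandwidth cap are assumptions for the example, not values from our test, and -u is added explicitly on the client since the jitter/loss figures imply a UDP test):

    # On the DUT, start the UDP server:
    iperf -s -u
    # On the link partner, run a 60-second UDP test, reporting in MBytes/sec:
    iperf -c 192.168.1.10 -u -b 1000M -f M -t 60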

    Our implementation uses the same Microchip KSZ9031RNX PHY as the EVM, but we use the RGMII1 interface rather than RGMII0.

    Regards,

    Robert
  • Thanks. I have asked the Ethernet experts to comment on this. They will respond here.
  • Hi,

    Could you please post the results of ethtool -S eth1? If the Rx DMA Overruns counter is non-zero, it might indicate RX descriptor exhaustion.
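
    To pull out just the relevant counters (a one-line sketch; substitute your interface name):

    ethtool -S eth1 | grep -i overrun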

    Next, could you try to reproduce the traffic numbers on a TI EVM? Do you have a TI GP EVM?

    Which TI SDK are you using?

    Best Regards,
    Schuyler
  • Hi Schuyler,

    We are using Processor SDK 4.01, built from Yocto.

    The first set of numbers is from TI's AM572x GP EVM; let me know what other numbers you would like to see.

    1. TI AM572x GP EVM: Bandwidth: 794 Mbps, Jitter: 0.008 ms, Packet Loss: 0.89%
    2. Our board: Bandwidth: 714 Mbps, Jitter: 0.001 ms, Packet Loss: 0.46%

    ethtool -S eth0

    NIC statistics:
    Good Rx Frames: 1406140
    Broadcast Rx Frames: 105
    Multicast Rx Frames: 209
    Pause Rx Frames: 0
    Rx CRC Errors: 1
    Rx Align/Code Errors: 0
    Oversize Rx Frames: 0
    Rx Jabbers: 0
    Undersize (Short) Rx Frames: 0
    Rx Fragments: 0
    Rx Octets: 2131260635
    Good Tx Frames: 3646619
    Broadcast Tx Frames: 5
    Multicast Tx Frames: 65
    Pause Tx Frames: 0
    Deferred Tx Frames: 0
    Collisions: 0
    Single Collision Tx Frames: 0
    Multiple Collision Tx Frames: 0
    Excessive Collisions: 0
    Late Collisions: 0
    Tx Underrun: 0
    Carrier Sense Errors: 0
    Tx Octets: 1233183386
    Rx + Tx 64 Octet Frames: 98
    Rx + Tx 65-127 Octet Frames: 199
    Rx + Tx 128-255 Octet Frames: 77
    Rx + Tx 256-511 Octet Frames: 36
    Rx + Tx 512-1023 Octet Frames: 2
    Rx + Tx 1024-Up Octet Frames: 5052348
    Net Octets: 3364445537
    Rx Start of Frame Overruns: 40
    Rx Middle of Frame Overruns: 0
    Rx DMA Overruns: 40
    Rx DMA chan 0: head_enqueue: 1
    Rx DMA chan 0: tail_enqueue: 1406161
    Rx DMA chan 0: pad_enqueue: 0
    Rx DMA chan 0: misqueued: 1
    Rx DMA chan 0: desc_alloc_fail: 0
    Rx DMA chan 0: pad_alloc_fail: 0
    Rx DMA chan 0: runt_receive_buf: 0
    Rx DMA chan 0: runt_transmit_bu: 0
    Rx DMA chan 0: empty_dequeue: 0
    Rx DMA chan 0: busy_dequeue: 506453
    Rx DMA chan 0: good_dequeue: 1406034
    Rx DMA chan 0: requeue: 0
    Rx DMA chan 0: teardown_dequeue: 0
    Tx DMA chan 0: head_enqueue: 113269
    Tx DMA chan 0: tail_enqueue: 3533350
    Tx DMA chan 0: pad_enqueue: 0
    Tx DMA chan 0: misqueued: 95689
    Tx DMA chan 0: desc_alloc_fail: 0
    Tx DMA chan 0: pad_alloc_fail: 0
    Tx DMA chan 0: runt_receive_buf: 0
    Tx DMA chan 0: runt_transmit_bu: 25
    Tx DMA chan 0: empty_dequeue: 113211
    Tx DMA chan 0: busy_dequeue: 3515104
    Tx DMA chan 0: good_dequeue: 3646619
    Tx DMA chan 0: requeue: 2069345
    Tx DMA chan 0: teardown_dequeue: 0

  • Hi Dallas,

    I want to make sure I am following the testing setup: were these statistics from the TI EVM or from the custom board? And does the custom board have one or two Ethernet ports?

    A couple of statistics to note are the overrun counters I mentioned above, which come from RX descriptor exhaustion:

    Rx Start of Frame Overruns: 40
    Rx Middle of Frame Overruns: 0
    Rx DMA Overruns: 40

    To reduce the possibility of this type of overrun error, please refer to the link below, which shows how to increase the default RX descriptor count. The default count is 128; increasing it to something significantly larger, as shown in the link, should help (see the sketch after the link).

    processors.wiki.ti.com/.../Linux_Core_CPSW_User's_Guide
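
    As a rough sketch (assuming the kernel in your SDK exposes ring sizes through the standard ethtool ring parameters; if it does not, use the driver-specific method described in the wiki page above):

    # Query the current ring sizes (works only if the driver implements get_ringparam):
    ethtool -g eth0
    # Raise the RX descriptor count from the default 128 to, e.g., 512:
    ethtool -G eth0 rx 512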

    This statistic is concerning as it points to a problem somewhere in the hardware chain:
    Rx CRC Errors: 1

    Best Regards,
    Schuyler