MCU-PLUS-SDK-AM243X: Unable to reach gigabit throughput using iperf tool

Part Number: MCU-PLUS-SDK-AM243X
Other Parts Discussed in Thread: SYSCONFIG

Hi,

SDK: AM243x MCU+ SDK 08.05.00, AM243x GP EVM

Example: Enet Lwip ICSSG example


I am testing the above example to verify gigabit Ethernet functionality. The EVM is connected to the PC via its Gigabit Ethernet port.

Gigabit support is enabled in SysConfig.

iperf results:

Bandwidth: 56.9 Mbits/sec

How to achieve gigabit speeds using the above example?

  • Hi Praveen,

    The behavior you are seeing is due to the following:

    1. This example uses the lwIP software network stack, which limits the achievable throughput.

    2. Gigabit speed testing can be done using the Enet Layer 2 ICSSG example or the Enet Layer 2 CPSW example.

    Regarding how to achieve gigabit speeds using this example: let me check with the software team about this.

  • Hi Praveen,

    Let me clarify here: the test you are running is a throughput test.

    1. Throughput highly depends on packet size. For smaller packets, throughput will be lower because of the per-packet preamble and inter-packet gap (IPG) overhead, so to achieve best-case throughput use MTU-sized packets.

    2. You are running a TCP test with iperf. Due to TCP ACKs and retransmissions, achieving Gbps throughput over TCP will not be possible. Also keep in mind that we are using a software stack, and running a bidirectional test (TX and RX together) adds CPU overhead.

    3. To observe maximum throughput, UDP is ideal. You can run a UDP TX test; see the example sketched below.
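
    To put the packet-size point in numbers: a frame carrying a 1460-byte TCP payload occupies about 1538 bytes on the wire once the IP/TCP headers, Ethernet header, FCS, preamble and inter-packet gap are added, so the best case is roughly 95% of line rate, while small frames spend proportionally far more time on overhead.

    For reference, a UDP run driven from the PC could look like the sketch below (assuming iperf 2.x; the IP address, offered load and duration are placeholders, and the direction depends on how the iperf application on the EVM is started):

    # UDP test towards the EVM: MTU-sized datagrams (1460-byte payload),
    # 900 Mbits/sec offered load, 30-second run
    iperf -c <EVM_IP> -u -b 900M -l 1460 -t 30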

    Can you please let me know what your use case is here?

    BR

    Nilabh A.

  • Hi Nilabh,

    1) The packet size is close to the MTU size, ~1460 bytes.

    2) Below are the results for the UDP iperf test.

    The UART log from the EVM shows a packet loss of 28% at 50 Mbits/sec.

    To achieve 0% packet loss, I had to set the iperf bandwidth to 10 Mbits/sec.

    The use case is to demonstrate gigabit throughput capabilities to the client.

  • Hi Praveen,
    Let me check this with the internal team and get back to you by Friday.

    BR

    Nilabh A.

  • Hi Praveen,

    Can you please tell me why you want to use ICSSG for the demonstration? For standard Ethernet-based use cases we recommend CPSW, while ICSSG is recommended for industrial use cases. I would suggest using CPSW if your use case is not industrial; you will get better performance, as CPSW is a hardware-based Ethernet peripheral, whereas ICSSG is a software-based Ethernet peripheral.

  • Hi Nilabh,

    We are simulating an industrial edge gateway. It will be running multiple applications, which creates load on the network stack. In order to achieve high throughput, we need to send the data packets as soon as possible. Hence, we chose ICSSG over CPSW.

    Can we increase the MTU to a larger value?
    If increasing the MTU is not possible, can you suggest how to achieve higher throughput, closer to 1 Gbps?

  • Thanks, Praveen.

    Let me try to get back to you after discussing with the internal team.

  • Hi Praveen,

    Apologies for the delay in response.

    Please try the following changes in the application and library:

    In SysConfig:

    DMA channel config:
    tx dma ch: num of pkts: 32
    rx dma ch0: num of pkts: 64
    rx dma ch1: num of pkts: 64

    Pkt pool config:
    Large Pool Pkt Count: 128 (this is the total number of RX packets)
    PktInfoMem Only Count: 32 (this is the total number of TX packets)

    lwippools.h:
    Double all the numbers (a rough sketch of this and the lwipopts.h change is included after the rebuild step below).

    lwipopts.h:
    #define TCP_SND_BUF (16 * TCP_MSS) -> only needed for TCP performance

    Then rebuild the lwIP library, the lwIP contrib library, and the Lwipif library, and rebuild the application for the changes to take effect.
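
    To make the header changes concrete, a rough sketch is below. Only the TCP_SND_BUF line is the change called out above; the pool entries and the TCP_WND value are illustrative assumptions (the lwippools.h shipped with the SDK may declare its pools differently), so treat them as a pattern rather than exact values.

    /* lwippools.h - hypothetical custom-pool entries using lwIP's standard macros;
     * "doubling the numbers" would then mean doubling the element counts, e.g.: */
    LWIP_MALLOC_MEMPOOL_START
    LWIP_MALLOC_MEMPOOL(40, 256)    /* e.g. previously LWIP_MALLOC_MEMPOOL(20, 256) */
    LWIP_MALLOC_MEMPOOL(20, 512)    /* e.g. previously LWIP_MALLOC_MEMPOOL(10, 512) */
    LWIP_MALLOC_MEMPOOL(10, 1580)   /* e.g. previously LWIP_MALLOC_MEMPOOL(5, 1580) */
    LWIP_MALLOC_MEMPOOL_END

    /* lwipopts.h - the larger TCP send buffer suggested above; TCP_WND is a related
     * standard lwIP option worth reviewing, and the value shown is only an assumption */
    #define TCP_SND_BUF   (16 * TCP_MSS)
    #define TCP_WND       (12 * TCP_MSS)

    These are compile-time options, which is why the lwIP libraries and the application have to be rebuilt for the changes to take effect.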

    Let me know if you face any difficulty.

    Please also note that with the above configuration you are bound to see an increase in memory footprint, since we are increasing the buffer sizes. Also, in the UDP iperf command you set the bandwidth to 50, which limits the offered load to 50 Mbits/sec; try a higher number and see what performance you get. We expect this to give better performance (not Gbps). We are still working on ways to increase performance for the ICSSG application; the expected timeline for this work is Q2 2023.

  • Hi Nilabh,

    Thanks for your response.

    We have made the following changes as suggested by you:

    In SysCfg:

    DMA channel Cfg:
    tx dma ch: num of pkts: 32
    rx dma ch0: num of pkts: 64
    rx dma ch1: num of pkts: 64
    Pkt pool config:
    Large Pool Pkt Count: 128
    PktInfoMem Only Count: 32 

    In lwipopts.h :

    #define TCP_SND_BUF (16 * TCP_MSS) 

    In lwippools.h :

    Default settings, as doubling the values as suggested results in errors during compilation.

    We have observed a TCP throughput of ~125 Mbits/sec, which is a ~50% increase from the previous test results.

    We also observe that the MSRAM utilization is around 99% with these settings. Please confirm that this is expected.

    As we are currently in the design phase, please clarify the expected RAM utilization for the ICSSG application with the performance improvements you are planning. Also, please clarify whether we will need external RAM, as we currently observe very high RAM utilization.

  • Hi Praveen,

    We are working on improving the performance, which will naturally increase the memory footprint because of the larger buffers. But we are also looking at optimizing it. Let me check internally on this.

    BR

    Nilabh A.