
AM3352 UDP Performance Issue

Other Parts Discussed in Thread: AM3352

Hi Sir,

We connected the AM3352 EVM to a PC (or another EVM board) and ran a UDP transmission performance test with the link set to 100 Mbps.

We found that the bandwidth drops to only about 369 Kbits/sec during long transmission tests (about 1800 or 3600 seconds).

Below are the steps to reproduce.

On the PC / other EVM side:

1. ifconfig eth0 192.168.1.50

2. iperf -s -u

On the AM3352 board:

1. ethtool -s eth0 autoneg off ; ethtool -s eth0 speed 100

2. ifconfig eth0 192.168.1.51

3. iperf -c 192.168.1.50 -t 1800 -b 100M -f k -i 5

The resulting log is shown below:

------------------------------------------------------------
Client connecting to 192.168.1.50, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.51 port 59311 connected with 192.168.1.50 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 5.0 sec 6377 KBytes 10448 Kbits/sec
[ 3] 5.0-10.0 sec 227 KBytes 372 Kbits/sec
[ 3] 10.0-15.0 sec 546 KBytes 894 Kbits/sec
[ 3] 15.0-20.0 sec 225 KBytes 369 Kbits/sec
[ 3] 20.0-25.0 sec 225 KBytes 369 Kbits/sec
[ 3] 25.0-30.0 sec 224 KBytes 367 Kbits/sec
[ 3] 30.0-35.0 sec 225 KBytes 369 Kbits/sec
[ 3] 35.0-40.0 sec 225 KBytes 369 Kbits/sec
[ 3] 40.0-45.0 sec 225 KBytes 369 Kbits/sec

BTW, we also followed the link below to try to improve the performance, but saw no improvement:

http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement#Socket_Buffer_Queue
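For reference, the tuning on that page is of this general form: raising the kernel's socket-buffer and RX-backlog limits via sysctl. This is a sketch only; the exact keys are standard Linux net.core parameters, but the values here are illustrative, not taken from the wiki page.

```shell
# Illustrative socket-buffer tuning (values are examples, not from the wiki page)
sysctl -w net.core.rmem_max=1048576         # max receive socket buffer, bytes
sysctl -w net.core.wmem_max=1048576         # max send socket buffer, bytes
sysctl -w net.core.netdev_max_backlog=2000  # RX packets queued per CPU before drop
```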

Do you have any suggestions to improve UDP transmission performance during long transmission tests?

Thanks for your help

BR

Yimin

  • Hi Yimin,

    What Linux version are you using? For your reference here are the SDK 7.0 UDP performance figures: http://processors.wiki.ti.com/index.php/Processor_SDK_Linux_Kernel_Performance_Guide#AM335XX_UDP_Performance

  • Hi Biser

    1. We use SDK 6.0 with kernel 3.2.
    2. Regarding the link you highlighted, it uses the following command:
    On the DUT, iperf is invoked in client mode (bi-directional traffic for 60 seconds).
    iperf -c <server ip> -w <window size> -m -f M -d -t 60

    With t = 60, we get good performance.
    If we set t = 1800 or 3600, the bandwidth drops to only 369 Kbits/sec.

    Please advise, and thanks in advance.

    BR
    Yimin.
  • I have asked the Ethernet experts to help on this.

  • Hi Biser

    We have observed the following:
    1. We use an RMII PHY (10/100M) on our customized board and see this throughput issue.

    2. If we use the TI GP-EVM with a gigabit PHY, we need to add the parameter "-b 100M" to the iperf command.
    The throughput is then only about 367 Kbits/sec.

    3. If "-b" is set to anything above 95M on the GP-EVM, the throughput is lower than we expect.
    At 95M or below, it works normally.
    root@am335x-evm:~# iperf -c 192.168.1.50 -t 1800 -b 95M -f k -i 5
    WARNING: option -b implies udp testing
    ------------------------------------------------------------
    Client connecting to 192.168.1.50, UDP port 5001
    Sending 1470 byte datagrams
    UDP buffer size: 160 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.1.51 port 40051 connected with 192.168.1.50 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0- 5.0 sec 58355 KBytes 95609 Kbits/sec
    [ 3] 5.0-10.0 sec 58356 KBytes 95611 Kbits/sec
    [ 3] 10.0-15.0 sec 58355 KBytes 95609 Kbits/sec
    [ 3] 15.0-20.0 sec 58356 KBytes 95611 Kbits/sec

    For your information

    BR
    Yimin
  • Hi Sir

    Do you have any update or suggestion?

    BR

    Yimin
  • Feedback will be posted here when available.

  • From the results of the second test on the 10/100 PHY, this looks normal. What bandwidth are you expecting? 95M out of 100M is good.

    In the first test, with the low performance after autonegotiation was turned off, was it turned off on both ends?
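As a back-of-the-envelope check (assuming standard Ethernet framing), ~95.6 Mbit/s is roughly the theoretical payload ceiling for 1470-byte UDP datagrams on a 100M link once per-frame overhead is counted, which matches the ~95609 Kbits/sec seen in the logs above:

```shell
# Per-datagram wire cost for a 1470-byte UDP payload on Ethernet:
# 8 (UDP hdr) + 20 (IP hdr) + 14 (Eth hdr) + 4 (FCS) + 8 (preamble) + 12 (IFG)
payload=1470
wire=$((payload + 8 + 20 + 14 + 4 + 8 + 12))
# Achievable payload rate on a 100,000 kbit/s link:
kbps=$((100000 * payload / wire))
echo "${wire} bytes on the wire per datagram, max ~${kbps} kbit/s"
# → 1536 bytes on the wire per datagram, max ~95703 kbit/s
```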
  • Hi Sir

    1. The issue is that our customized board uses a 10/100 Ethernet PHY.

        And it cannot achieve better performance in the lab when running iperf without the "-b 100M" parameter.

      Please advise why the bandwidth is only about 369 Kbits/sec during long transmission tests (about 1800 or 3600 seconds).

    2. Yes, autonegotiation was turned off on both ends, and we get the same result either way.

    BR

    Yimin

  • I just performed the iperf test using this command line on a BeagleBone Black board that uses a 10/100 PHY. I ran the test for t=1800 and did not see a performance drop-off; the throughput was consistently around 95 Mbps.

    iperf -c 128.247.125.152 -u -b 100M -t 1800 -i 2

    I left the port in the auto-negotiated state. I also ran iperf without setting the bandwidth, and it still performed at about 95 Mbps.

    When your board comes up what does ethtool report for this port?
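A generic diagnostic sequence along these lines (device names may differ on your board) shows the negotiated link state and error counters worth checking:

```shell
# Negotiated speed/duplex and link partner advertisement
ethtool eth0
# Driver-level counters: look for tx_errors, rx_errors, dropped frames
ethtool -S eth0
# Kernel-level per-interface statistics
cat /proc/net/dev
```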
  • Hi Sir

    I repeated this experiment on a BeagleBone Black with SDK 6.0.

    On the AM437x board:

    1. ifconfig eth0 192.168.1.50

    2. iperf -s -u

    On the AM335x BeagleBone Black with SDK 6.0:

    1. ifconfig eth0 192.168.1.51

    2. iperf -c 192.168.1.50 -t 1800 -i 5 -f k -b 100M

    The log message is shown below:

    WARNING: option -b implies udp testing
    ------------------------------------------------------------
    Client connecting to 192.168.1.50, UDP port 5001
    Sending 1470 byte datagrams
    UDP buffer size: 160 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.1.51 port 47145 connected with 192.168.1.50 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0- 5.0 sec 6370 KBytes 10436 Kbits/sec
    [ 3] 5.0-10.0 sec 441 KBytes 722 Kbits/sec
    [ 3] 10.0-15.0 sec 221 KBytes 362 Kbits/sec
    [ 3] 15.0-20.0 sec 221 KBytes 362 Kbits/sec
    [ 3] 20.0-25.0 sec 224 KBytes 367 Kbits/sec
    [ 3] 25.0-30.0 sec 221 KBytes 362 Kbits/sec
    [ 3] 30.0-35.0 sec 221 KBytes 362 Kbits/sec

    The bandwidth is lower than we expected: it reaches only about 362 Kbits/sec.

    We then ran another experiment:

    root@am335x-evm:~# iperf -c 192.168.1.50 -t 1800 -i 5 -f k -b 95M
    WARNING: option -b implies udp testing
    ------------------------------------------------------------
    Client connecting to 192.168.1.50, UDP port 5001
    Sending 1470 byte datagrams
    UDP buffer size: 160 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.1.51 port 55000 connected with 192.168.1.50 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0- 5.0 sec 58355 KBytes 95609 Kbits/sec
    [ 3] 5.0-10.0 sec 58356 KBytes 95611 Kbits/sec
    [ 3] 10.0-15.0 sec 58355 KBytes 95609 Kbits/sec
    [ 3] 15.0-20.0 sec 58355 KBytes 95609 Kbits/sec
    ^C[ 3] 0.0-21.9 sec 256057 KBytes 95609 Kbits/sec

    The bandwidth reaches 95609 Kbits/sec.

    Please advise, and thanks in advance.

    BR 

    Yimin

  • When I ran the test I was using the latest kernel. I then performed the test with the kernel that you are using and was able to reproduce the results you are seeing. On later kernels I can set the bandwidth to 100M, and it still performs no better than the 95M setting. Is there a concern about setting the bandwidth to 95M vs. 100M? In my testing I have not seen the BBB perform better than 95-96 Mbps, whether TCP or UDP.

  • Hi Sir

    Yes, we are concerned about this issue because the lab is running the performance test and the result is a failure.

    As far as we know, the lab engineers will use "-b 100M" for the performance test instead of "-b 95M".

    Please advise how to fix this issue on kernel 3.2.

    Thanks in advance.

    BR

    Yimin