OMAPL138B-EP: NDK Throughput

Part Number: OMAPL138B-EP
Other Parts Discussed in Thread: OMAPL138, MATHLIB

Hi,

The customer configured 100 Mbps Ethernet and tested dtask_tcp_echo in ti\ndk\examples\ndk_omapl138_arm9_examples\ndk_evmOMAPL138_arm9_client, but the maximum throughput is 12 KB/s and it gradually declines.

How can the throughput be improved? Is it possible to reach 100 Mbps?

  • Hi, 

    Can you please provide details on which Processor SDK release you are using? I am not able to locate this example under the PDK for OMAPL138.

    Also, please provide details on how you are measuring the throughput.

    Thanks

  • Hi,

    CCS 7.3.0.00019
    XDCtools version 3.50.8.24_core
    NDK 3.40.1.01
    SYS/BIOS 6.73.1.01
    omapl138 PDK 1.0.7

    The customer tests with a TCP/UDP tool such as iperf.

  • Hi Nancy,

    I installed the specified PDK and do not see this example out of the box in the PDK.

    uda0756924alocal@UDA0756924A:~/ti/PDK_OMAPL138_5.2/pdk_omapl138_1_0_7/packages/ti/transport$ find . -name "*.txt" | grep client

    uda0756924alocal@UDA0756924A:~/ti/PDK_OMAPL138_5.2/pdk_omapl138_1_0_7/packages/ti/transport$ find . -name "*.txt" | grep OMAP
    ./ndk/nimu/example/helloWorld/omapl138/c674/bios/NIMU_emacExample_lcdkOMAPL138C674xBiosExampleProject.txt
    ./ndk/nimu/example/helloWorld/omapl138/armv5/bios/NIMU_emacExample_lcdkOMAPL138ARMBiosExampleProject.txt
    ./ndk/nimu/example/helloWorld/omapl137/c674/bios/NIMU_emacExample_evmOMAPL137C674xBiosExampleProject.txt
    ./ndk/nimu/example/helloWorld/omapl137/armv5/bios/NIMU_emacExample_evmOMAPL137ARMBiosExampleProject.txt
    ./ndk/nimu/example/client/omapl138/c674/bios/NIMU_emacExampleclient_lcdkOMAPL138C674xBiosExampleProject.txt
    ./ndk/nimu/example/client/omapl138/armv5/bios/NIMU_emacExampleClient_lcdkOMAPL138ARMBiosExampleProject.txt
    ./ndk/nimu/example/client/omapl137/c674/bios/NIMU_emacExampleClient_evmOMAPL137C674xBiosExampleProject.txt
    ./ndk/nimu/example/client/omapl137/armv5/bios/NIMU_emacExampleClient_evmOMAPL137ARMBiosExampleProject.txt


    Did the customer develop this application on their own?

    A few questions:

    1. Is the cache enabled? (A .cfg sketch of this setting follows below.)
    2. Where are the NDK buffers kept?
    3. Can the customer share the application along with the linker command file and map file?
    4. Is this with the CPSW?

      Please let me know. 
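
    Regarding question 1, here is a minimal sketch of what the cache setting typically looks like in the application .cfg file. This assumes the SYS/BIOS ARM9 Cache module; please verify the module path against the SYS/BIOS version in use.

    /* Sketch only: enable the ARM9 caches from the .cfg file.        */
    /* The module path is an assumption; confirm it for your release. */
    var Cache = xdc.useModule('ti.sysbios.family.arm.arm9.Cache');
    Cache.enableCache = true;
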
  • Hi,

    1. The customer wants an RTOS solution.

    2. The customer tested NIMU_emacExampleClient_lcdkOMAPL138ARMBiosExampleProject and the throughput is not ideal either. He did not modify the demo settings, so the cache is enabled by default and tx_buf and rx_buf are allocated in DDR. Are tx_buf and rx_buf the NDK buffers you are referring to?

  • Hi Nancy,

    Thanks for confirming and providing the details, and for having the customer report on the example that is packaged with the PDK.

    I assume you asked the customer to move to the latest Processor SDK (version 6.3). Please confirm.

    Also, can the customer modify the client.cfg file under the "pdk_omapl138_1_0_11\packages\ti\transport\ndk\nimu\example\client\omapl138\armv5\bios" folder to move the NDK and packet buffers to internal RAM instead of DDR?

    Can they share the linker and map files used for the test, or confirm that this is the default one from the PDK example? Please let me know.

    Also, can you please confirm whether the throughput they see with this example is also about 12 KB/s?

    Also, can you please provide the detailed steps the customer followed to measure the NDK throughput using the client example?

    Thanks.

  • 1. I found two buffers defined in the .cfg file (shown below).

    Are these the NDK and packet buffers? If so, I will suggest that the customer try placing them in internal RAM (a sketch of the change follows after this list). But I do not know why the section is named NDK_OBJMEM in the .cfg file, which does not match the picture above.

    Program.sectMap[".far:NDK_OBJMEM"] = {loadSegment: "DDR", loadAlign: 8};
    Program.sectMap[".far:NDK_PACKETMEM"] = {loadSegment: "DDR", loadAlign: 128};

    2. The customer uses the default PDK example under pdk_omapl138_1_0_11\packages\ti\transport\ndk\nimu\example\client\omapl138\armv5\bios.

    3. I will post the detailed steps later.
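
    Regarding item 1, the change I plan to suggest looks something like the following. This is only a sketch: "SHARED_RAM" is a placeholder for whatever internal-RAM segment the platform/linker command file actually defines, and the section sizes must be checked in the map file to confirm they fit in the OMAP-L138 on-chip RAM.

    /* Sketch only: "SHARED_RAM" is a placeholder; substitute the */
    /* on-chip RAM segment name from your platform definition.    */
    Program.sectMap[".far:NDK_OBJMEM"] = {loadSegment: "SHARED_RAM", loadAlign: 8};
    Program.sectMap[".far:NDK_PACKETMEM"] = {loadSegment: "SHARED_RAM", loadAlign: 128};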

  • 1. Import and build the demo.

    2. Use the echo callback function to send packets between the board and the PC tool; the echo daemon is created as follows:

    hEcho = DaemonNew( SOCK_STREAMNC, 0, 7, dtask_tcp_echo,
                       OS_TASKPRINORM, OS_TASKSTKNORM, 0, 3 );
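
    For reference, the tool side of this measurement can be approximated with a small POSIX TCP client like the sketch below. It is only an illustration of the method, not the customer's actual tool: the board IP address is a placeholder, and it assumes the dtask_tcp_echo daemon above is listening on TCP port 7. It sends fixed-size blocks, drains each echoed copy, and divides the echoed bytes by the elapsed time.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *board_ip = "192.168.1.100";   /* placeholder address */
        char buf[1024];
        struct sockaddr_in addr;
        long total = 0;
        time_t start;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0) { perror("socket"); return 1; }
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(7);               /* TCP echo service */
        inet_pton(AF_INET, board_ip, &addr.sin_addr);
        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        memset(buf, 0xA5, sizeof(buf));
        start = time(NULL);
        while (time(NULL) - start < 10) {         /* run for ~10 seconds */
            ssize_t n = 0, got;
            if (send(s, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf))
                break;
            /* Drain the echoed copy of this block before sending more. */
            while (n < (ssize_t)sizeof(buf) &&
                   (got = recv(s, buf, sizeof(buf) - (size_t)n, 0)) > 0)
                n += got;
            total += n;
        }
        close(s);
        printf("Echoed %ld bytes in ~10 s (~%ld KB/s)\n", total, total / 10240);
        return 0;
    }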

  • If the send zone (about 32 KB) is filled, the throughput is below 100 KB/s.
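
    If the limiting factor is the TCP send buffer filling up, one variable worth experimenting with is the socket buffer size on the board side. Here is a sketch against the NDK's BSD-style socket API, applied where the echo task has its accepted socket s (SO_SNDBUF/SO_RCVBUF support in this NDK version is an assumption, and the sizes are illustrative only):

    /* Sketch only: enlarge the per-socket buffers in the echo task. */
    /* Option support and the sizes are assumptions to be verified.  */
    int size = 32768;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));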

  • Hi Nancy,

    Thanks for the details. I will check whether I can reproduce the issue locally and suggest next steps. I do not have any comments as of now.

    I may ask you a few questions while I recreate this on my side. Please note that I am working on other commitments as well, so please expect some delay in my responses.

    Thanks

  • Hi Nancy,

    I just measured the throughput using iperf and see about 64 Mbits/sec.

    root@omapl138-lcdk:~# iperf -c 10.0.0.11 port 5001 -t60
    iperf: ignoring extra argument -- port
    iperf: ignoring extra argument -- 5001
    ------------------------------------------------------------
    Client connecting to 10.0.0.11, TCP port 5001
    TCP window size: 20.7 KByte (default)
    ------------------------------------------------------------
    [ 3] local 10.0.0.151 port 40752 connected with 10.0.0.11 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0-60.0 sec 462 MBytes 64.5 Mbits/sec
    root@omapl138-lcdk:~#

    Let me check the NDK client example with the approach the customer is following, and I will get back to you if I have any questions.

  • Hi Nancy,

    I just checked the throughput in Linux on this board using iperf. I see about 63.7 Mbits/sec:

    root@omapl138-lcdk:~# iperf -c 10.0.0.11 port 5001
    iperf: ignoring extra argument -- port
    iperf: ignoring extra argument -- 5001
    ------------------------------------------------------------
    Client connecting to 10.0.0.11, TCP port 5001
    TCP window size: 20.7 KByte (default)
    ------------------------------------------------------------
    [ 3] local 10.0.0.151 port 40751 connected with 10.0.0.11 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0-10.0 sec 76.0 MBytes 63.7 Mbits/sec
    root@omapl138-lcdk:~#

  • Hi Nancy,

    I ran the RTOS NDK client example and see the performance below with iperf. I do not understand the approach the customer is using to compute the throughput; it is not 12 KB/s in my setup.

    Please note that I ran this on the latest RTOS release (6.3) to pick up the latest fixes/updates to the NDK and the EMAC driver.

    Here are the components I used:


    02/03/2021 05:11 PM <DIR> bios_6_76_03_01
    04/20/2020 03:11 PM 37,560,400 cgt_arm_installer
    02/03/2021 05:13 PM <DIR> cg_xml_2.61.00
    02/03/2021 05:01 PM <DIR> dsplib_c64xP_3_4_0_4
    02/03/2021 05:01 PM <DIR> dsplib_c674x_3_4_0_4
    02/03/2021 05:02 PM <DIR> edma3_lld_2_12_05_30E
    02/03/2021 05:03 PM <DIR> ipc_3_50_04_08
    02/03/2021 05:03 PM <DIR> mathlib_c674x_3_1_2_4
    02/03/2021 05:04 PM <DIR> ndk_3_61_01_01
    02/03/2021 05:04 PM <DIR> ns_2_60_01_06
    02/03/2021 05:06 PM <DIR> pdk_omapl138_1_0_11
    02/03/2021 05:09 PM <DIR> processor_sdk_rtos_omapl138_6_03_00_106
    02/03/2021 05:14 PM <DIR> ti-cgt-arm_18.12.5.LTS
    02/03/2021 05:14 PM <DIR> ti-cgt-c6000_8.3.2
    02/03/2021 05:11 PM <DIR> uia_2_30_01_02
    02/03/2021 05:12 PM <DIR> xdais_7_24_00_04
    02/03/2021 05:13 PM <DIR> xdctools_3_55_02_22_core

    However, the throughput seems significantly low compared to the Linux side. It is nowhere near the expected rate.

    I may need to spend some more time understanding the reason. I will take a look when I get a chance.

    Please expect some delay on this, as it may involve a deep dive into the driver code and the NDK.

    On the EVM:


    TCP/IP Stack 'Client!' Application

    Service Status: DHCPC : Enabled : : 000
    Service Status: DHCPC : Enabled : Running : 000
    Service Status: Telnet : Enabled : : 000
    Service Status: HTTP : Enabled : : 000
    Network Added: If-1:10.0.0.100
    Service Status: DHCPC : Enabled : Running : 017

    On the Windows PC:


    c:\work\iperf-2.0.9-win64\iperf-2.0.9-win64>iperf -c 10.0.0.100 --port 5001 --udp
    ------------------------------------------------------------
    Client connecting to 10.0.0.100, UDP port 5001
    Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
    UDP buffer size: 208 KByte (default)
    ------------------------------------------------------------
    [ 3] local 10.0.0.11 port 59166 connected with 10.0.0.100 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
    [ 3] Sent 893 datagrams
    [ 3] WARNING: did not receive ack of last datagram after 10 tries.

    c:\work\iperf-2.0.9-win64\iperf-2.0.9-win64>
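
    One caveat I still need to rule out in my own run: in UDP mode, iperf offers load at a default target of about 1 Mbit/s unless a bandwidth is requested with -b, so the 1.05 Mbits/sec above may simply reflect the default offered rate rather than a ceiling. A higher offered load would be requested like this (the rate is illustrative):

    iperf -c 10.0.0.100 --port 5001 --udp -b 50M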

  • Hi Nancy,

    Just wanted to set the right expectation: this may take some time to investigate. Please keep me posted on any further updates on this.

    Thanks

  • Hi,

    Thanks for following up. I will suggest that the customer test according to your method.

  • Hi Nancy,

    Sorry, this is taking more time than expected. Do you have any updates on this issue? Is the customer still waiting on it?

    Thanks