10 Mbps / Full duplex packet loss on 3PSW configured as switch



Hello,

I'm using a DM814x-based custom board, with the 3PSW configured as a switch. I'm experiencing UDP packet loss (tested with the iperf tool) when the ports (either one or both) are configured as 10 Mbps/full duplex. I have no problems with the 100 Mbps and 10 Mbps/half duplex configurations. The network topology is very simple, with just two PCs connected to the DM814x switch ports.

Has anybody experienced something similar? Are there known issues with the 10 Mbps/full duplex configuration?

Thanks for your help.

Best Regards,

Piero

  • Hi Piero,

    The DM814x datasheet and TRM state that 10 Mbps full duplex mode is supported, and the DM814x silicon errata does not report any bug related to the EMAC 10 Mbps full duplex mode. Thus I assume 10 Mbps full duplex mode should be available for use.

    What is the software you are using? Is it EZSDK 5.05.02.00?

    Can you reproduce this issue on the DM8148 EVM?

    Best regards,
    Pavel

  • Hello Pavel,

    yes, I'm using EZSDK 5.05.02.00. I don't have a DM8148 EVM here, but I can test on another custom board, where the Ethernet PHYs are different, and see what happens.

    I'll do some tests and post feedback.

    Thanks and best regards,

    Piero

  • Piero,

    Please also apply the u-boot patches below on top of EZSDK 5.05.02.00 / PSP 04.04.00.01 (ti-ezsdk_dm814x-evm_5_05_02_00/board-support/u-boot-2010.06-psp04.04.00.01):

    ti81xx: cpsw: get cpdma sram offset from platform data
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=450c72f637f0b45c708f9548c477ebe7b666fc73

    ti81xx: cpsw: Enable CPSW in In-Band Mode to make 10Mbps to work
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=80220c96d3ea13af407abb39c610b50c514c20b6

    ti81xx: cpsw: Enable ifctl_a to control external gasket in 100Mbps
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=5ec2f004991f48c0705dcf9fab1aa486dffc60f9

    ti81xx: cpsw: cpdma bug fix where dma stops in bursty network
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=8c0be5fd0cc4f337bd2b45d71073843c0156d31a

    TI81XX: cpsw: move descriptors to bd ram
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=037d8b3752245f898a2a3e273c3e12079c1077f1

    TI81XX: cpsw: enable d-cache support for cpsw driver
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=3d69208be20cb5d19a94396ae25e0c05b97d37c5

    drivers: net: cpsw: halt cpsw properly to stop receiving properly
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=e6fcefa86a70460674fe38320cc3ad1cc97a2ea6

    drivers: net: cpsw: net_send dcache flush usage bug fix
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=16b2b5c00229ef8cc0c64b0eb522c718d0d209f2

    drivers: net: cpsw: fix compier warning with d-cache usage
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=11b72bceb408006e76be2130a8c1c1b8d97e53c1

    drivers: net: cpsw: optimize cpsw_send to increase network performance
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=50586d5ebdb90c9403688c54487fed53787506f5

    drivers: net: cpsw: remove compiler warning
    http://arago-project.org/git/projects/u-boot-omap3.git?p=projects/u-boot-omap3.git;a=commit;h=4e0e7f6ce451502b292a7ddae16eb6219821620c

    Regards,
    Pavel

  • Hello Pavel,

    the patches are applied (my u-boot and kernel are up to date with the PSP releases).

    I'm also experiencing packet loss on my other DM814x-based platform. I'm trying to get a DM814x EVM for additional tests.

    Thanks and best regards,

    Piero

  • Piero,

    Please also refer to the wiki page below:

    http://processors.wiki.ti.com/index.php/TI81XX_PSP_04.04.00.02_Feature_Performance_Guide#Ethernet_Switch_Driver

    Regards,
    Pavel

  • Hello Pavel,

    thank you for pointing out that link. I'll try adjusting the networking parameters to see if that reduces the packet loss.

    The tables report considerable packet loss in Rx performance at 200 Mbps bandwidth with the 1 Gbps switch configuration, which sounds similar to what I'm experiencing (packet loss at 1 Mbps bandwidth with the 10/Full switch configuration).

    I'll send feedback as soon as I get the DM814x EVM for further tests and comparisons.

    Best regards,

    Piero

  • Hello Pavel,

    I've done some tests on the DM814x EVM, and the behaviour is the same as on my custom platform: with exactly the same UDP test bench, at 10 Mbps/Full there is packet loss, while at 10 Mbps/Half there is none.

    Let me briefly explain the test bench: two AM335x boards are connected to the switch ports of the DM814x board, giving the simplest possible network topology (I'm connected to the three boards over serial consoles). I disable autonegotiation on the AM335x side with ethtool and set the speed and duplex with the following command:

    ethtool -s eth0 speed 10 duplex full

    On the switch side, I configure the ports with the ioctl described here: http://processors.wiki.ti.com/index.php/DM814x_AM387x_Ethernet_Switch_User_Guide#CONFIG_SWITCH_SET_PORT_CONFIG, so I'm sure there is no duplex mismatch.

    Then on one of the AM335x machines I run the iperf server (iperf -s -u -i 1), while on the other I launch two iperf sessions, a bidirectional 64 kbps stream and a unidirectional 1 Mbps stream, trying to simulate a real-use scenario:

    stream 1: iperf -c 192.168.0.122 -u -b 64000 -d -l 160 -t 100 &

    stream 2: iperf -c 192.168.0.122 -u -b 1M -l 1500 -t 100

    At the end of the test, iperf reports a packet loss percentage that can reach 30%. If I reduce the bandwidth of the second stream (for example from 1 Mbps to 300 Kbps), the test passes without packet loss.

    If I run the same tests with the 10/Half configuration there is no packet loss, and the same holds for the 100/Full configuration, where I only start seeing packet loss when I set the second stream's bandwidth to 50 Mbps.

    I've also tried tuning the networking parameters, in particular the HW interrupt pacing feature (as explained in http://processors.wiki.ti.com/index.php/TI81XX_PSP_04.04.00.02_Feature_Performance_Guide#Ethernet_Switch_Driver), but I see no difference.

    So it seems that the DM814x switch works well in the 10/Half and 100/Full configurations but has some limitation in the 10/Full configuration. I don't think it's a problem of switching capacity; rather, there could be a problem specific to the handling of the 10/Full case.

    Do you have any suggestions? Do you think something could be adjusted at the cpsw driver level?

    Please let me know your opinions.

    Thanks and best regards,

    Piero

  • Piero,

    Let me check with our EMAC driver expert. Maybe he can help here.

    Meanwhile, I suggest you check whether you meet the EMAC 10 Mbps mode timing requirements described in the DM814x datasheet, section 8.6.2, EMAC Electrical Data/Timing. You can also check whether you meet the EMAC 10 Mbps full duplex mode requirements described in the DM814x TRM, chapter 9, 3PSW Ethernet Subsystem (EMAC).

    Another thing you can try is to decrease the bandwidth passed to the iperf tool (iperf -b).
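
    For example, something along these lines (a sketch reusing the addresses and options from your commands, with illustrative bandwidth values) can be used to find the offered load at which the loss starts:

    # Sweep the offered UDP load to find the threshold where packet loss begins
    for bw in 300K 500K 700K 1M; do
        iperf -c 192.168.0.122 -u -b $bw -l 1500 -t 60
    done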

    You can also refer to the two wiki pages below; they might be of help:

    http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement

    http://processors.wiki.ti.com/index.php/Iperf

    Regards,
    Pavel

  • Hello Pavel,

    Pavel Botev said:

    Piero,

    Let me check with our EMAC driver expert. Maybe he can help here.

    This sounds great, thank you.

    Pavel Botev said:

    Meanwhile, I suggest you check whether you meet the EMAC 10 Mbps mode timing requirements described in the DM814x datasheet, section 8.6.2, EMAC Electrical Data/Timing. You can also check whether you meet the EMAC 10 Mbps full duplex mode requirements described in the DM814x TRM, chapter 9, 3PSW Ethernet Subsystem (EMAC).

    Ok, we'll check these points.

    Pavel Botev said:

    Another thing you can try is to decrease the bandwidth passed to the iperf tool (iperf -b).

    Unfortunately, this is not an option: the real-world scenario requires 1 Mbps of traffic (to resemble a video stream). However, I've already verified that decreasing the bandwidth to 300 kbps eliminates the packet loss.

    Pavel Botev said:

    You can also refer to the two wiki pages below; they might be of help:

    http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement

    I've read this page; we'll certainly check the DMA descriptors issue. I've tried different network stack queue parameters, but I haven't seen any improvement. One last question: do the results reported in the table at http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement#TI816X_Performance_Results refer to a 1 Gbps connection? Could you tell me which tests were performed to get those results?

    Thanks for your help and best regards,

    Piero

    P.S. For the sake of completeness: on the DM814x EVM, I'm running the u-boot, kernel and root file system provided by the latest EZSDK (http://software-dl.ti.com/dsps/dsps_public_sw/ezsdk/latest/index_FDS.html).

  • Piero,

    This is the feedback from our EMAC driver owner:

    There are no issues with the CPSW on TI814x in the 10/Full configuration. Can you check whether any drops are reported in the CPSW hardware statistics?

    Piero Pezzin said:
    One last question: do the results reported in the table at http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement#TI816X_Performance_Results refer to a 1 Gbps connection? Could you tell me which tests were performed to get those results?

    I will check this with him, as he is the owner of this wiki page.

    Regards,
    Pavel

  • Piero,

    Piero Pezzin said:
    One last question: do the results reported in the table at http://processors.wiki.ti.com/index.php/TI81XX_UDP_Performance_Improvement#TI816X_Performance_Results refer to a 1 Gbps connection? Could you tell me which tests were performed to get those results?

    The UDP performance numbers were taken with Gigabit Ethernet only.

    Do you have a hardware statistics dump from your failing setup?

    Best regards,
    Pavel

  • Hello Pavel,

    Pavel Botev said:

    The UDP performance numbers were taken with Gigabit Ethernet only.

    Could you tell me how those tests were performed? With which network topology and which 3PSW configuration (Dual EMAC or Switch)? Was the iperf tool used? With a bidirectional stream (-d option)?

    Pavel Botev said:

    Do you have a hardware statistics dump from your failing setup?

    Yes, I've run a lot of iterations of my test, but I don't see any errors in the /sys/class/net/eth0/hw_stats report:

    root@dm814x-evm:~# cat /sys/class/net/eth0/hw_stats
    CPSW Statistics:
    rxgoodframes ............................      53348
    rxbroadcastframes .......................          1
    rxoctets ................................   30473938
    txgoodframes ............................      53349
    txbroadcastframes .......................          1
    txmulticastframes .......................          1
    txoctets ................................   30474020
    octetframes64 ...........................         66
    octetframes65t127 .......................      33343
    octetframes128t255 ......................      40020
    octetframes512t1023 .....................          2
    octetframes1024tup ......................      33266
    netoctets ...............................   60947958

    RX DMA Statistics:
    head_enqueue ............................          1
    tail_enqueue ............................         64
    busy_dequeue ............................          6
    good_dequeue ............................          1

    TX DMA Statistics:
    head_enqueue ............................          5
    empty_dequeue ...........................          6
    good_dequeue ............................          5

    I have these figures after an iteration that reported about 33% packet loss:

    [  3] Sent 5001 datagrams
    [  4]  0.0-100.0 sec   526 KBytes  43.1 Kbits/sec   0.104 ms 1637/ 5001 (33%)
    [  3] Server Report:
    [  3]  0.0-100.0 sec   781 KBytes  64.0 Kbits/sec   0.063 ms    0/ 5001 (0%)
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-100.0 sec  11.9 MBytes  1000 Kbits/sec
    [  3] Sent 8335 datagrams
    [  3] Server Report:
    [  3]  0.0-101.8 sec  11.7 MBytes   963 Kbits/sec   0.006 ms    1/ 8335 (0.012%)

    Another strange thing I've noticed is that, immediately after a cold start of the switch board or after bringing the eth0 interface on the switch down and up, the test bench runs fine. But later iterations of the same test return packet loss...

    I'm running more tests to collect more data to share with you, and I'll send feedback soon. Meanwhile, any suggestion would be very much appreciated.

    Thanks for your help and best regards.

    Piero

  • Piero,

    This is the feedback:

    I am not seeing any packet loss in hardware, so the packets are lost somewhere in the network stack due to lack of queue memory.

    The performance measurement was done on the DM816x platform; this is mentioned in that wiki as well.

    Bringing the network down and up cleans up all the queues in the network stack, which also explains why it works fine initially and later starts dropping packets.
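
    A quick way to sanity-check this hypothesis (a sketch, assuming the standard UDP counters exposed by the kernel in /proc/net/snmp) is to watch the UDP drop counters on the receiving board while the test runs:

    # RcvbufErrors / InErrors incrementing here would point to drops in the
    # network stack (socket buffer exhaustion) rather than in the CPSW hardware
    grep Udp: /proc/net/snmp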

    Regards,
    Pavel

  • Hello Pavel,

    thanks for your reply.

    Since I've noticed that using the -d option in iperf leads to misaligned periodic (-i option) and final reports (e.g. the periodic reports show 0% packet loss during the test while the final report shows packet loss), I've modified my test: I removed the -d option, split the iperf sessions, and started three long (t = 600 seconds) iperf runs, as described below:

    Machine1 (ip 192.168.0.122)

    iperf -s -u -i 1 -p 5001 &
    iperf -s -u -i 1 -p 5002 &
    iperf -c 192.168.0.124 -u -b 64000 -p 5001 -l 160 -t 600 &

    Machine2 (ip 192.168.0.124)

    iperf -s -u -i 1 -p 5001 &
    iperf -c 192.168.0.122 -u -b 1M -p 5001 -l 1500 -t 600 &
    iperf -c 192.168.0.122 -u -b 64000 -p 5002 -l 160 -t 600 &

    Machine1 and Machine2 are connected to the DM814x switch (IP 192.168.0.123) ports, configured at 10 Mbps/Full.

    I see a strange behaviour: the test runs fine for a while, then it starts losing packets, getting worse until the system enters a state where one side works fine while the other loses about 30% of the packets on one of the 64 kbps connections:

    Machine1 iperf reports:

    [  4]  5.0- 6.0 sec   121 KBytes   988 Kbits/sec   0.041 ms    0/   84 (0%)
    [  3]  4.0- 5.0 sec  7.81 KBytes  64.0 Kbits/sec   0.017 ms    0/   50 (0%)
    [  4]  6.0- 7.0 sec   119 KBytes   976 Kbits/sec   0.038 ms    0/   83 (0%)
    [  3]  5.0- 6.0 sec  7.81 KBytes  64.0 Kbits/sec   0.008 ms    0/   50 (0%)
    [  4]  7.0- 8.0 sec   119 KBytes   976 Kbits/sec   0.041 ms    0/   83 (0%)
    [  3]  6.0- 7.0 sec  7.81 KBytes  64.0 Kbits/sec   0.007 ms    0/   50 (0%)
    [  4]  8.0- 9.0 sec   121 KBytes   988 Kbits/sec   0.037 ms    0/   84 (0%)
    [  3]  7.0- 8.0 sec  7.81 KBytes  64.0 Kbits/sec   0.010 ms    0/   50 (0%)
    [  4]  9.0-10.0 sec   119 KBytes   976 Kbits/sec   0.039 ms    0/   83 (0%)
    [  3]  8.0- 9.0 sec  7.81 KBytes  64.0 Kbits/sec   0.011 ms    0/   50 (0%)
    .....
    .....
    [  3] 470.0-471.0 sec  7.66 KBytes  62.7 Kbits/sec   0.021 ms    1/   50 (2%)
    [  4] 472.0-473.0 sec   119 KBytes   976 Kbits/sec   0.051 ms    0/   83 (0%)
    [  3] 471.0-472.0 sec  7.66 KBytes  62.7 Kbits/sec   0.022 ms    1/   50 (2%)
    [  4] 473.0-474.0 sec   121 KBytes   988 Kbits/sec   0.042 ms    0/   84 (0%)
    [  3] 472.0-473.0 sec  7.66 KBytes  62.7 Kbits/sec   0.021 ms    1/   50 (2%)
    [  4] 474.0-475.0 sec   119 KBytes   976 Kbits/sec   0.047 ms    0/   83 (0%)
    [  3] 473.0-474.0 sec  7.19 KBytes  58.9 Kbits/sec   0.014 ms    4/   50 (8%)
    ....
    ....                                                                                                                                 
    [  4] 489.0-490.0 sec   119 KBytes   976 Kbits/sec   0.030 ms    0/   83 (0%)
    [  4] 490.0-491.0 sec   119 KBytes   976 Kbits/sec   0.049 ms    0/   83 (0%)
    [  3] 488.0-489.0 sec   640 Bytes  5.12 Kbits/sec   0.020 ms   42/   46 (91%)
    [  3] 489.0-490.0 sec   480 Bytes  3.84 Kbits/sec   0.018 ms   57/   60 (95%)
    [  4] 491.0-492.0 sec   121 KBytes   988 Kbits/sec   0.031 ms    0/   84 (0%)
    [  3] 490.0-491.0 sec   320 Bytes  2.56 Kbits/sec   0.018 ms   29/   31 (94%)
    [  4] 492.0-493.0 sec   119 KBytes   976 Kbits/sec   0.032 ms    0/   83 (0%)
    [  4] 493.0-494.0 sec   119 KBytes   976 Kbits/sec   0.046 ms    0/   83 (0%)
    ....
    ....
    [  4] 522.0-523.0 sec   113 KBytes   929 Kbits/sec   0.034 ms    4/   83 (4.8%)
    [  3] 521.0-522.0 sec  4.22 KBytes  34.6 Kbits/sec   0.019 ms   23/   50 (46%)
    [  4] 523.0-524.0 sec   106 KBytes   870 Kbits/sec   0.032 ms    9/   83 (11%)
    [  3] 522.0-523.0 sec  4.53 KBytes  37.1 Kbits/sec   0.011 ms   21/   50 (42%)
    [  4] 524.0-525.0 sec   106 KBytes   870 Kbits/sec   0.024 ms    9/   83 (11%)
    [  4] 525.0-526.0 sec   103 KBytes   847 Kbits/sec   0.014 ms   12/   84 (14%)
    [  3] 523.0-524.0 sec  5.94 KBytes  48.6 Kbits/sec   0.005 ms   12/   50 (24%)
    ....
    ....
    [  3] 594.0-595.0 sec  7.81 KBytes  64.0 Kbits/sec   0.013 ms    0/   50 (0%)
    [  4] 596.0-597.0 sec   121 KBytes   988 Kbits/sec   0.040 ms    0/   84 (0%)
    [  3] 595.0-596.0 sec  7.81 KBytes  64.0 Kbits/sec   0.020 ms    0/   50 (0%)
    [  4] 597.0-598.0 sec   119 KBytes   976 Kbits/sec   0.045 ms    0/   83 (0%)
    [  3] 596.0-597.0 sec  7.81 KBytes  64.0 Kbits/sec   0.018 ms    0/   50 (0%)
    [  4] 598.0-599.0 sec   119 KBytes   976 Kbits/sec   0.048 ms    0/   83 (0%)
    [  3] 597.0-598.0 sec  7.81 KBytes  64.0 Kbits/sec   0.012 ms    0/   50 (0%)
    [  4] 599.0-600.0 sec   121 KBytes   988 Kbits/sec   0.046 ms    0/   84 (0%)

    Machine2 iperf reports:

    [  3]  6.0- 7.0 sec  7.81 KBytes  64.0 Kbits/sec   0.014 ms    0/   50 (0%)
    [  3]  7.0- 8.0 sec  7.81 KBytes  64.0 Kbits/sec   0.017 ms    0/   50 (0%)
    [  3]  8.0- 9.0 sec  7.81 KBytes  64.0 Kbits/sec   0.015 ms    0/   50 (0%)
    [  3]  9.0-10.0 sec  7.81 KBytes  64.0 Kbits/sec   0.012 ms    0/   50 (0%)
    [  3] 10.0-11.0 sec  7.81 KBytes  64.0 Kbits/sec   0.011 ms    0/   50 (0%)
    ....
    ....
    [  3] 470.0-471.0 sec  7.81 KBytes  64.0 Kbits/sec   0.021 ms    0/   50 (0%)
    [  3] 471.0-472.0 sec  7.81 KBytes  64.0 Kbits/sec   0.016 ms    0/   50 (0%)
    [  3] 472.0-473.0 sec  7.81 KBytes  64.0 Kbits/sec   0.025 ms    0/   50 (0%)
    [  3] 473.0-474.0 sec  7.81 KBytes  64.0 Kbits/sec   0.439 ms    0/   50 (0%)
    ....
    ....
    [  3] 489.0-490.0 sec  7.66 KBytes  62.7 Kbits/sec   0.019 ms    1/   50 (2%)
    [  3] 490.0-491.0 sec  7.81 KBytes  64.0 Kbits/sec   0.017 ms    0/   50 (0%)
    [  3] 491.0-492.0 sec  7.81 KBytes  64.0 Kbits/sec   0.012 ms    0/   50 (0%)
    ....
    ....
    [  3] 514.0-515.0 sec  4.38 KBytes  35.8 Kbits/sec   0.185 ms   18/   46 (39%)
    [  3] 515.0-516.0 sec  1.41 KBytes  11.5 Kbits/sec   0.114 ms   44/   53 (83%)
    [  3] 516.0-517.0 sec  1.72 KBytes  14.1 Kbits/sec   0.063 ms   39/   50 (78%)
    [  3] 517.0-518.0 sec  1.25 KBytes  10.2 Kbits/sec   0.037 ms   30/   38 (79%)
    [  3] 518.0-519.0 sec  1.25 KBytes  10.2 Kbits/sec   0.033 ms   52/   60 (87%)
    [  3] 519.0-520.0 sec   640 Bytes  5.12 Kbits/sec   0.031 ms   36/   40 (90%)
    [  3] 520.0-521.0 sec   800 Bytes  6.40 Kbits/sec   0.025 ms   57/   62 (92%)
    [  3] 521.0-522.0 sec   320 Bytes  2.56 Kbits/sec   0.022 ms   27/   29 (93%)
    ....
    ....
    [  3] 569.0-570.0 sec  5.16 KBytes  42.2 Kbits/sec   0.014 ms   16/   49 (33%)
    [  3] 570.0-571.0 sec  5.78 KBytes  47.4 Kbits/sec   0.011 ms   14/   51 (27%)
    [  3] 571.0-572.0 sec  5.16 KBytes  42.2 Kbits/sec   0.012 ms   17/   50 (34%)
    [  3] 572.0-573.0 sec  5.16 KBytes  42.2 Kbits/sec   0.007 ms   16/   49 (33%)
    [  3] 573.0-574.0 sec  5.31 KBytes  43.5 Kbits/sec   0.011 ms   17/   51 (33%)
    [  3] 574.0-575.0 sec  5.16 KBytes  42.2 Kbits/sec   0.190 ms   17/   50 (34%)
    [  3] 575.0-576.0 sec  5.16 KBytes  42.2 Kbits/sec   0.035 ms   16/   49 (33%)
    [  3] 576.0-577.0 sec  5.31 KBytes  43.5 Kbits/sec   0.017 ms   17/   51 (33%)
    [  3] 577.0-578.0 sec  5.16 KBytes  42.2 Kbits/sec   0.010 ms   17/   50 (34%)
    [  3] 578.0-579.0 sec  5.16 KBytes  42.2 Kbits/sec   0.013 ms   16/   49 (33%)
    [  3] 579.0-580.0 sec  5.31 KBytes  43.5 Kbits/sec   0.003 ms   17/   51 (33%)
    [  3] 580.0-581.0 sec  5.16 KBytes  42.2 Kbits/sec   0.011 ms   17/   50 (34%)
    [  3] 581.0-582.0 sec  5.16 KBytes  42.2 Kbits/sec   0.007 ms   16/   49 (33%)
    [  3] 582.0-583.0 sec  5.31 KBytes  43.5 Kbits/sec   0.009 ms   17/   51 (33%)
    [  3] 583.0-584.0 sec  5.16 KBytes  42.2 Kbits/sec   0.019 ms   17/   50 (34%)

    If I restart the same test, sometimes it starts off losing packets and then improves (up to no packet loss), or vice versa, it runs fine for a while and then starts losing packets (as described above).

    Unfortunately /sys/class/net/eth0/hw_stats is of no help, since it doesn't report any errors. Moreover, the typical kernel facilities for checking packet loss (ifconfig, /proc/net/dev, /proc/net/udp, ...) can't help here, since the switching activity is not visible to them.
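
    For reference, a simple loop along these lines is enough to snapshot the hardware counters during a run and correlate them with the iperf interval reports:

    # Snapshot the CPSW hardware statistics every 5 seconds during the iperf run
    while true; do
        date
        cat /sys/class/net/eth0/hw_stats
        sleep 5
    done > /tmp/hw_stats.log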

    I've tried accessing the registers of the 3PSW module (section 9.4 of the DM814x TRM) with the devmem2 tool, looking for an error condition, but so far I haven't found anything.

    I've also tried increasing the network stack's software queue sizes as described at http://processors.wiki.ti.com/index.php/TI81XX_PSP_04.04.00.02_Feature_Performance_Guide#Performance_and_Benchmarks_5 (in the UDP test section), but with no luck. Please note that I see this behaviour only with the 10/Full setting.
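
    The kind of tuning involved is roughly the following (the sysctl names are the standard Linux ones; the values are illustrative, not necessarily the wiki's exact numbers):

    # Enlarge the socket buffer limits and the input packet backlog queue
    sysctl -w net.core.rmem_max=1048576
    sysctl -w net.core.rmem_default=1048576
    sysctl -w net.core.wmem_max=1048576
    sysctl -w net.core.netdev_max_backlog=2000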

    So it seems there is some fault condition that triggers the malfunction; it lasts for some time, and then somehow the switch starts working correctly again.

    This sounds very tricky to me, but any suggestions on what and where to investigate would be very much appreciated.

    Thanks for your help and best regards.

    Piero

  • Dear Pavel,

    I would like to clarify that when I say

    Piero Pezzin said:

    .......

    Please note that I see this behaviour only with the 10/Full setting.

    it means that I have run the same test bench at 100/Full, but launched the iperf sessions with 10x the bandwidth used for the 10/Full test:

    iperf -c 192.168.0.122 -u -b 10M -p 5001 -l 1500 -t 600 &       // 10 Mbps instead of 1 Mbps
    iperf -c 192.168.0.122 -u -b 640000 -p 5002 -l 160 -t 600 &    // 640 kbps instead of 64 kbps

    With the 100/Full configuration, I don't experience any problem (packet loss of about 0.001%, which is definitely normal behaviour).

    Best Regards,

    Piero

  • Dear Pavel,

    is there any update on this issue? As a workaround, I've patched the cpsw driver to disable the 10/Full configuration, but this is certainly not the proper solution.

    Do you have any suggestions? How can we solve this problem?

    Thanks for your help and best regards.

    Piero

  • Piero,

    Do you see any DMA overflow errors? You can check this with the hardware statistics block in the CPSW.
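
    For example, a quick check on the failing setup could be the following (the exact counter names in hw_stats may differ slightly between PSP releases):

    # Any rx*overruns or *errors counters incrementing during the test would
    # indicate drops in the CPSW hardware rather than in the network stack
    grep -iE 'overrun|error' /sys/class/net/eth0/hw_stats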

    Regards,
    Pavel

  • Dear Pavel,

    unfortunately there is no evidence of DMA errors; I've monitored the CPSW statistics and couldn't see any errors.

    Any other suggestions?

    Thanks for your help and best regards,

    Piero

  • Pavel/Guido

    • Can the issue be reproduced on the TI EVM, or does it happen only on the custom design?
    • While running the test, do the CPSW hardware statistics show the same number of packets received and transmitted in the 100/Full and 10/Full configurations?

    Regards
    Mugunthan V N