AM5726: PRU Ethernet dropping packets

Part Number: AM5726
Other Parts Discussed in Thread: TMDXIDK5718

Hi, 

I am seeing packet drops when running multiple instances of iperf transmitting simultaneously over the PRU interface.

I am developing a product on the AM5726 (2 cores), but I am able to reproduce this on the single-core IDK (TMDXIDK5718) with both ti-processor-sdk-linux-am57xx-evm-03.03.00.05-Linux-x86 and ti-processor-sdk-linux-rt-am57xx-evm-06.00.00.07-Linux-x86, connected to a 100M Ethernet link.


On the IDK, I have 5 VLAN interfaces created on the PRU interface (eth2). Similarly, on the remote Linux box, I have 5 VLAN interfaces created with the same VLAN IDs. I am using iperf to transmit UDP packets from the IDK to the Linux box.
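
The VLAN interfaces are created with the usual ip commands; something like the following (the parent interface name, VLAN IDs and addresses here are illustrative - the actual IDs match on both ends):
ip link add link eth2 name eth2.100 type vlan id 100
ip addr add 1.0.100.1/24 dev eth2.100
ip link set eth2.100 up
(and similarly for the other four VLANs)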

On the Linux box, iperf is run in UDP server mode:
iperf -u -s -i 10

On the IDK, iperf is run in UDP client mode, configured to transmit faster than line rate in order to test flow control:
iperf -u -c 1.0.100.2 -l 1470 -b120M -t 600 &
iperf -u -c 1.0.101.2 -l 1470 -b120M -t 600 &
iperf -u -c 1.0.102.2 -l 1470 -b120M -t 600 &

With only 2 instances of iperf, I have no packet drops at all, but as soon as I launch the 3rd instance of iperf, I start seeing ~0.3% packet drops.

  • Hello David,

    Let me take a look at this. I will get back to you in a day or so.

    Regards,

    Nick

  • Hello David,

    What are the iperf outputs you see on the server side for one iperf instance, two iperf instances, three iperf instances? I am curious about what kind of throughput you are seeing, and whether flow control packets are actually getting generated.
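
    For example, on the server side something along these lines would show whether pause frames are being generated (the exact counter names depend on the NIC driver, and the interface name is a placeholder):
    ethtool -S <server_interface> | grep -i pause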

    Regards,

    Nick

  • Hi Nick, 
    With 1 iperf instance, I am getting 94-96 Mbps.
With 2 iperf instances, I am getting around 47-48 Mbps for each, so the total is always around 94-96 Mbps.

    Even with 3 instances, I still get roughly 95 Mbps total, but just a few packet drops:

    [ 3] Server Report:
    [ 3] 0.0-600.0 sec 2.24 GBytes 32.1 Mbits/sec 0.243 ms 5937/1643220 (0.36%)
    [ 3] 0.0-600.0 sec 1 datagrams received out-of-order

    [ 3] Server Report:
    [ 3] Server Report:
    [ 3] 0.0-600.0 sec 2.05 GBytes 29.4 Mbits/sec 0.857 ms 4984/1503751 (0.33%)
    [ 3] 0.0-600.0 sec 1 datagrams received out-of-order
    [ 3] 0.0-600.0 sec 2.16 GBytes 30.9 Mbits/sec 0.340 ms 5476/1583087 (0.35%)
    [ 3] 0.0-600.0 sec 1 datagrams received out-of-order

  • Hello David,

    We do not implement flow control in our PRU Ethernet firmware or drivers. What is your terminal output when you establish the link? I would expect to see a message like "Link is Up - flow control off". Note that this message does not actually indicate whether the MAC layer supports flow control: even if you see something like "Link is Up - ... - flow control rx/tx", PRU Ethernet does not implement flow control.

    Do you have any indication that the server is actually sending flow control packets out, and that the PRU Ethernet is respecting those flow control packets?
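
    For reference, the link message shows up in the kernel log on the IDK, and the pause settings the PHY reports can be queried with ethtool (the PRU Ethernet driver may not implement the pause query at all, in which case ethtool simply returns an error):
    dmesg | grep -i "link is up"
    ethtool -a eth2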

    Regards,

    Nick

  • Hello Nick, 

    I am seeing "Link is Up - ... - flow control rx/tx".

    But I don't think the problem is with flow control. If I use a single instance of iperf and specify a bandwidth > 100M, I do not get any packet drops. It looks like there is a problem when multiple processes try to enqueue transmit packets concurrently.
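
    For example, even a single client pushed well above line rate (the exact rate here is illustrative) shows 0% loss in the server report:
    iperf -u -c 1.0.100.2 -l 1470 -b200M -t 600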

    Regards,
    David

  • Hello David,

    Let's back up. What are you trying to test here?

    I thought you were trying to test flow control, but PRU Ethernet does not support flow control. So the throughput you see is probably limited by the PRU Ethernet, not by the server sending flow control packets back to the PRU port.

    Regards,

    Nick

  • Hi Nick,

    I noticed this issue when trying to test flow control, but it is not an issue with flow control. From your previous comment, I understand and agree that PRU Ethernet ignores flow control packets sent from the remote side, but the PRU Ethernet should still limit the throughput to 100M.

    It looks like there is an issue with the PRU Ethernet drivers. When there are more than 2 concurrent processes trying to transmit packets, the driver does not always return -EAGAIN or -ENOBUFS to indicate that a packet was not transmitted.
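
    One way to check this from the IDK side might be to watch the send-side counters while a test runs, e.g.:
    cat /proc/net/snmp | grep Udp    # SndbufErrors should increase if the stack is dropping sends with ENOBUFS
    ip -s link show eth2             # TX errors/dropped on the PRU interface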

    Are you able to reproduce the same behavior?

    Regards,

    David

  • Hi Nick, 

    Were you able to reproduce this issue?

    David

  • Hello David,

    I am sorry for the delayed response. We will get back to you next week.

    Regards,

    Nick

  • Hello David,

    We are able to observe that 2 iperf instances over VLAN have 0 loss, while 3 instances introduce some loss. However, this does not mean that the loss is happening on the transmit side - e.g., the iperf server on the receiver side might not handle 3 streams at once very well.

    Are you able to observe that the packets are getting dropped at the transmit side rather than the receiver side?
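
    For example (interface names and counter labels will differ on your setup), comparing the counters on both ends after a run can show where the drops are being counted:
    ip -s link show eth2              # on the IDK: TX errors/dropped
    ip -s link show <server_iface>    # on the Linux box: RX errors/dropped
    netstat -su                       # on the Linux box: UDP receive buffer errors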

    Regards,

    Nick

  • Hi Nick, 

    Yes, if I run this test between 2 Linux desktops, I can run at least 5 instances of iperf without errors.

    I.e.:

    Test 1:
    TI IDK transmits to the Linux desktop - packet loss with more than 2 streams.

    Test 2:
    Another Linux desktop transmits to the same Linux desktop as in Test 1 - no packet loss with 5 streams (did not test with more than that).


    >  the iperf server on the receiver side might not handle 3 streams at once very well.
    Each iperf instance is running on a different VLAN, and the iperf servers are listening on the VLAN interfaces, so each receiving iperf server only receives data from a single iperf client.
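
    (Each server instance is bound to its own VLAN address, i.e. something like "iperf -u -s -B 1.0.100.2 -i 10" per VLAN.)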

  • Hello David,

    I am also taking a look at this. We've reproduced the issue but need to take some more time to identify the root cause and see if there is a fix or workaround. While I unfortunately don't have an update right now, we'll try to get back to you next week with some more information!

    Best,

    Aaron

  • Hello Aaron, 

    Thank you for looking into this. Let me know if you need me to test anything.

    David

  • Hello David,

    I did some more testing on this. While I am able to reproduce the issue, it doesn't appear to be a PRU Ethernet driver issue. Network/driver stats indicate that all packets the driver receives from the application are transmitted, and the driver should always return -ENOBUFS if the queues are full. It looks like the packet loss is occurring elsewhere in the networking stack.

    In this case, it seems you can work around it by adjusting some iperf client parameters. For example, I was able to transmit over 3-5 concurrent VLAN interfaces attempting to send at 120M without loss by using the iperf '-w' flag on the client side to change the socket buffer size, e.g. "-w 10000" (you may need to adjust this higher or lower depending on the number of streams and the attempted bandwidth).

    I hope this works for you, but let us know if you see any further testing results that point to a driver issue.
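
    In other words, the client commands would look something like this (same addresses and rates as your original test, with the socket buffer size added):
    iperf -u -c 1.0.100.2 -l 1470 -b120M -w 10000 -t 600 &
    iperf -u -c 1.0.101.2 -l 1470 -b120M -w 10000 -t 600 &
    iperf -u -c 1.0.102.2 -l 1470 -b120M -w 10000 -t 600 &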

    Best,
    Aaron

  • Hi Aaron, 

    Thank you. I confirmed that I can run iperf with the '-w' flag to work around the issue. I will find out what the '-w' flag does and use it as a workaround in our systems.

    Thank you,

    David