
DM8168 TI EVM: troubles with UDP reception when M3 is heavily used

Hello.

We are using the DVR RDK on a DM8168 TI EVM with custom software built on top of it. The Ethernet interface is set up to run in 1 Gb/s mode.

Our software receives UDP multicast streams (12 SD channels, up to 60 Mbps in total), transcodes them via the M3 Link API, and sends the results back out as UDP streams.

In this scenario some incoming UDP datagrams are lost (missing from the stream). This happens irregularly, once every few seconds.

I noticed that every time this happens the "frame" counter reported by ifconfig is incremented. As far as I understand, "frame" counts Ethernet frames dropped due to link-level problems.
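To correlate the stream gaps with that counter, I poll it programmatically rather than eyeballing ifconfig. A minimal sketch, assuming the standard Linux /proc/net/dev layout (the interface name "eth0" and the sample text below are illustrative):

```python
# Sketch: extract the per-interface "frame" error counter that ifconfig
# reports. On Linux it is also exposed as
# /sys/class/net/<if>/statistics/rx_frame_errors; here we parse /proc/net/dev.

def rx_frame_errors(proc_net_dev_text, ifname="eth0"):
    """Return the rx frame-error count for ifname from /proc/net/dev text."""
    for line in proc_net_dev_text.splitlines():
        if ":" not in line:
            continue  # skip the two header lines
        name, stats = line.split(":", 1)
        if name.strip() == ifname:
            fields = stats.split()
            # receive columns: bytes packets errs drop fifo frame compressed multicast
            return int(fields[5])
    raise ValueError("interface not found: %s" % ifname)

# Made-up sample in /proc/net/dev format (frame counter = 17)
sample = """Inter-|   Receive                                          |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast
  eth0: 123456789 987654    0    0    0    17          0         0 0 0 0 0 0 0 0 0
"""
print(rx_frame_errors(sample))  # -> 17
```

Polling this once per second and logging a timestamp whenever it increments makes it easy to match increments against the gaps seen in the streams.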

Further observations / experiments:

1. A special test application receiving the very same streams on another machine (either a PC or another DM8168) at the same time doesn't detect any problems => the streams themselves are fine.

2. "ifconfing dropped" counter stays 0

3. "ifconfig overruns" counter stays 0 => kernels reads everything (packets are *not* discarder by network card due to high Linux Kernel/A8 load)

4. The "drops" counter in /proc/net/udp stays at 0 => the userspace app reads everything (packets are *not* discarded by the kernel).
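For reference, this is how I read that counter per socket. A minimal sketch assuming the usual /proc/net/udp layout, where addresses/ports are hex and "drops" is the last column (the sample line is made up for illustration):

```python
# Sketch: read the per-socket "drops" column from /proc/net/udp.

def udp_socket_drops(proc_net_udp_text):
    """Map local (addr_hex, port) -> drops, parsed from /proc/net/udp text."""
    drops = {}
    for line in proc_net_udp_text.splitlines()[1:]:  # skip the header line
        fields = line.split()
        if not fields:
            continue
        addr_hex, port_hex = fields[1].split(":")
        drops[(addr_hex, int(port_hex, 16))] = int(fields[-1])
    return drops

# Made-up sample: one socket on port 0x1389 (= 5001) with 0 drops
sample = (
    "  sl  local_address rem_address   st tx_queue rx_queue tr tm->when "
    "retrnsmt   uid  timeout inode ref pointer drops\n"
    "  53: 0101A8C0:1389 00000000:0000 07 00000000:00000000 00:00000000 "
    "00000000     0        0 12345 2 ffffffffffffffff 0\n"
)
print(udp_socket_drops(sample))  # -> {('0101A8C0', 5001): 0}
```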

5. A special test application that only reads UDP and parses headers is able to receive up to ~50 channels without any problems (~4x more traffic).

6. When I bind the test application to the same IP:ports the target software uses and run both in parallel, the test app detects the same gaps in the streams as the target software => it is *not* a bug in the target software.
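Sharing the same multicast IP:port between the target software and the diagnostic receiver is possible because both sockets set SO_REUSEADDR before binding. A minimal sketch of that setup (the group address and port are made-up examples; the IP_ADD_MEMBERSHIP join is omitted since it needs a real network interface):

```python
# Sketch: two processes/sockets sharing one multicast group:port.
# Both set SO_REUSEADDR and bind to the group address itself, which keeps
# other ports' traffic out and allows multiple binds to the same group:port.
import socket

GROUP, PORT = "224.1.1.1", 5004  # hypothetical stream address

def open_shared_receiver(group, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((group, port))
    # A real receiver would now join the group with IP_ADD_MEMBERSHIP
    # and loop on recvfrom(), checking sequence numbers for gaps.
    return s

a = open_shared_receiver(GROUP, PORT)
b = open_shared_receiver(GROUP, PORT)  # second bind succeeds thanks to SO_REUSEADDR
print(a.getsockname(), b.getsockname())
a.close(); b.close()
```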

7. When I modify the target software to bypass the traffic (excluding transcoding on the M3), everything becomes fine - no datagrams are lost.

8. When I force the Ethernet interface into 100 Mb mode (ethtool -s eth0 speed 100 duplex full), everything works fine even with transcoding.

9. If I keep transcoding enabled but stop sending the resulting streams out over Ethernet, datagram loss still occurs, but becomes sparser.

All in all, it seems that intensive use of the M3 (and the underlying HW accelerators) and/or its IPC interferes with the Ethernet physical link when it runs in 1 Gb mode.

Has anybody faced similar problems? Any clues?