
DM6467 network performance



I have 2 DM6467 EVM boards with the DVSDK 2.0 installed.

Recently I ran some tests to measure the network performance of the DM6467.

The results show poor performance and high packet loss, which is inadequate for applications that require more than 10 Mb/sec of network bandwidth.

The test consisted of installing the Linux Iperf utility on the 2 boards and running it on both targets. The Iperf utility can measure, among other things, packet loss.

I connected the network ports of the boards to each other using a crossover cable (CAT5).

I tested UDP streams at 8 Mb/sec, 20 Mb/sec, and 50 Mb/sec.

Using the EVMs and running the tests for 5 minutes each, the packet loss was as follows:

for 8Mb/sec - 0.5%

for 20Mb/sec - 2%

for 50Mb/sec - 20%

I also tried using a desktop as the sender and an EVM as the receiver, but again the results were similar to the above.

I also installed the previous DVSDK 1.4 and performed the above tests again, but the results were still poor.

When running the same tests on desktops there is 0% packet loss for the rates above.
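For reference, a test like the one above can be run with commands along these lines (a sketch assuming iperf 2.x command-line syntax; the receiver address is a placeholder):

```shell
# On the receiving board: start an iperf UDP server
iperf -s -u

# On the sending board: 5-minute UDP streams at each tested rate;
# the server reports packet loss at the end of each run
iperf -c <receiver-ip> -u -b 8M -t 300
iperf -c <receiver-ip> -u -b 20M -t 300
iperf -c <receiver-ip> -u -b 50M -t 300
```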

 

Is this a known problem of the DM6467?

  • There was a known issue with the DM6467 EVM boards where the bus driver between the DM6467 and the Gigabit PHY was not suitable for Gigabit operation and would cause significant packet loss. I understand there is a newer Rev H board with a new bus driver that is more suitable for Gigabit operation and should fix this problem. You may want to contact Spectrum Digital if you intend to order a new DM6467 EVM to ensure you get the proper revision, as this new revision does not appear to be posted on their support page yet.

  • Even with bitrates that are far below Gigabit Ethernet? I tested bitrates of 8 Mb/20 Mb/50 Mb, which are far below Gigabit, and there is still significant packet loss.

  • Have you compared this to the performance numbers described in the Driver Data Manual included in the PSP directory? I am guessing you are not going to do much better than the numbers in the data manual.

  • In the driver data manual (SPRS566) there aren't any performance numbers for the EMAC of the DM6467. The only references to Ethernet in that document are:

    In the features paragraph (1.1):

    Supports internal Gigabit Ethernet driver for transmitting and receiving network data (DM6467 only).

    In table 1.1:

    Supports 1000 Mbps Ethernet speeds, Half and Full duplex for DM6467. Transmit/receive network data. Supports Auto negotiation with 10/100/1000 Mbps link speed.

    In the Ethernet driver paragraph (2.6):

    The following features are supported by the driver:

    · 10/100 Mbps Speed
    · Auto Negotiation
    · Multicast and Broadcast
    · Promiscuous mode
    · Full and Half Duplex modes
    · DM6467 - 1000 Mbps speed

    In the Support and Constraint paragraph (2.6.1):

    Currently Gigabit Ethernet facility is not available on DM6467

    In this specific section there are performance benchmarks for the other processors, but not for the DM6467.

    So, from the above, there is no mention of packet loss or of a 10/100 network limitation for the DM6467.

  • On an older EVM you will get fewer dropped packets operating at 100baseT standard speed instead of Gigabit, and even with Gigabit enabled you will not see near full Gigabit performance as there are a number of bandwidth limiting factors in the whole system, from network stack overhead to raw memory bandwidth availability. I would expect some actual DM6467 Gigabit benchmarks to be available in a future PSP release, unfortunately as you have noted, they are not currently provided.

  • Thank you, but again my focus is not on Gigabit. I am talking about 8/20/50 Mb/sec, where those bandwidth-limiting factors are of no importance. If there is severe packet loss when working at those bitrates, I think it should be handled before progressing to Gigabit bitrates, where the problem will only get worse.

  • We've just discovered this same issue on our DM6467 boards.

    We would be happy with 100 Mbit/s performance.  We happen to be connected to a gigabit switch, and we can see that the hardware has negotiated a 1000 Mbit/s connection.

    On a large file transfer, we noticed very poor performance.  It was 1.1-1.2 MB/sec, which works out to just about 10Mbit/s.  This is unsuitable for our application - we need to get data over the network faster than that.

    We noticed this behavior on a board that we purchased in early '09, with the following markings:

    (1) Spectrum Digital sticker on box says: 702085-1001 REV G   05 DEC 08. Texas Instruments sticker on box says (2P) REV: G. The board, however, is clearly marked 'Rev H' on the underside, in marker. We are running DVSDK 2_00_00_22 with MontaVista Pro 5.0 (from the PSP 2.00 package) on this board.

    We reproduced this problem on a MUCH newer board, purchased in August of '09, for which we waited over 3 weeks for delivery. The manufacturing date on the Spectrum Digital board is after our order date, so I assume that this board was built in response to our order, and that our order was not shipped from existing older stock. This board is marked:

    (2) Spectrum Digital sticker says: 702085-1001 REV H   20 AUG 09. Texas Instruments sticker on box says (2P) REV H. The board is clearly marked 'Rev I' on the underside, in marker. We are running the PSP 1.4 that was shipped with this board, with the MontaVista Pro 4.01 kernel that shipped in the NAND device on this board.

    So, this Rev H still has the problem.  What can we do to improve the network performance to at least 100 Mbit/s levels?
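    As a workaround sketch (not from this thread): an earlier reply notes that older boards drop fewer packets at 100baseT, so one option is to check the negotiated speed and force the link down to 100 Mbit/s full duplex with ethtool; the interface name eth0 is an assumption:

    ```shell
    # Show the currently negotiated link speed (eth0 assumed)
    ethtool eth0 | grep -i speed

    # Force 100 Mbit/s full duplex instead of the auto-negotiated Gigabit link
    ethtool -s eth0 speed 100 duplex full autoneg off
    ```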

  • Is there any new information about this issue? We are facing the same problems. For us it is important to know the source of the performance problems. Is it the eval board, the Linux network driver, or the DaVinci?

     

  • The issue was solved for us by a TI person: increasing the receive and transmit buffers by setting larger values for rmem_default, rmem_max, wmem_default, and wmem_max in /proc/sys/net/core/.

    The issue of network buffers is also discussed here: http://www.speedguide.net/read_articles.php?id=121
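    A minimal sketch of that change (the 262144-byte value is illustrative, not taken from the thread; tune it for your application):

    ```shell
    # Enlarge the default and maximum socket receive/transmit buffers
    echo 262144 > /proc/sys/net/core/rmem_default
    echo 262144 > /proc/sys/net/core/rmem_max
    echo 262144 > /proc/sys/net/core/wmem_default
    echo 262144 > /proc/sys/net/core/wmem_max
    ```

    The same settings can be applied with sysctl (e.g. sysctl -w net.core.rmem_max=262144) or made persistent in /etc/sysctl.conf.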

     

     

  • Thanks for the hint about buffer sizes, but in our experiments we had already increased them. Even after increasing the buffer sizes by two orders of magnitude, we cannot achieve UDP transfer rates of more than 7 MByte/s. This is not sufficient for our planned application.

    We built a 2.6.32 kernel to try out the latest davinci_emac driver. Even with that kernel we couldn't achieve more than 7 MByte/s.
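    For scale, converting that 7 MByte/s ceiling to bits per second shows it is still well below even 100 Mbit/s line rate (a quick sanity check, not from the thread):

    ```shell
    # 7 MByte/s x 8 bits/byte = 56 Mbit/s
    awk 'BEGIN{printf "%d Mbit/s\n", 7*8}'   # prints: 56 Mbit/s
    ```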