TDA4VH-Q1: PCIe EP/RC transfer speed performance slow

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: TDA4VH

Hello, there is a description of LCPD-37899 in SDK 10, which has resolved the PCIe speed issue. I couldn't find the corresponding modification in the TI kernel. Can you help confirm which change it is?
We look forward to your reply!

  • Hi Quansheng,

    It should be a change to the generic endpoint driver for PCIe: drivers/pci/endpoint/functions/pci-epf-test.c. You may clone the kernel source code and run something like "git log --follow drivers/pci/endpoint/functions/pci-epf-test.c" to see the commits made and compare against your version of the SDK/kernel.
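
    For example, something like the sketch below (the clone URL is my best guess at the public tree and the branch is only an example; use whichever branch matches your SDK):

        git clone https://git.ti.com/git/ti-linux-kernel/ti-linux-kernel.git
        cd ti-linux-kernel
        git checkout ti-linux-6.1.y    # example branch; pick the one matching your SDK

        # List the commits that touched the endpoint test driver, following renames
        git log --oneline --follow -- drivers/pci/endpoint/functions/pci-epf-test.c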

    Regards,

    Takuma

  • Hi Takuma,

    We have compared drivers/pci/endpoint/functions/pci-epf-test.c in SDK10 against our version and applied those changes. But our test result shows that it does not affect the transmission speed of PCIe. Could you give us more specific information about how the speed issue of PCIe is solved in SDK10? 

    Regards,

    Mingjian Shang

  • Hi Mingjian,

    But our test result shows that it does not affect the transmission speed of PCIe.

    Could you give details on what transfer speed you are seeing? As a disclaimer, the PCIe EP/RC test will not show the maximum PCIe throughput, as this driver is a feature/functionality demonstration rather than a performance demonstration. If you are expecting maximum throughput, you will need to optimize your PCIe driver code.

    For reference, below are the numbers we were seeing with the changes from the upstream Linux kernel driver:

    S.No.  Type   Transfer size (bytes)  Throughput (KB/s)  Throughput (Gbit/s)
    1      READ   1                      4                  0.00003
    2      READ   1024                   4294               0.03435
    3      READ   1025                   4311               0.03449
    4      READ   1024000                1514445            12.11556
    5      READ   1024001                1504268            12.03414
    6      WRITE  1                      4                  0.00003
    7      WRITE  1024                   4360               0.03488
    8      WRITE  1025                   4447               0.03558
    9      WRITE  1024000                1326244            10.60995
    10     WRITE  1024001                1322281            10.57825
    11     COPY   1                      4                  0.000032
    12     COPY   1024                   4361               0.034888
    13     COPY   1025                   4360               0.034888
    14     COPY   1024000                1100749            8.805992
    15     COPY   1024001                1101597            8.812776

    This test was performed with an AM69-SK acting as RC and with a J784S4-EVM acting as EP.
    Negotiated Link-Width: x4 and Link-Speed: 8 GT/s (Gen 3).
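
    If you would like to reproduce a comparable measurement, the upstream pcitest utility (built from tools/pci in the kernel sources) can be run on the RC side roughly as shown below; the -d option requests DMA, and the sizes are only examples:

        # Run on the RC once the pci_endpoint_test driver has bound to the EP
        # (a /dev/pci-endpoint-test.0 node should be present)
        pcitest -d -r -s 1024000    # read test, 1024000 bytes, using DMA
        pcitest -d -w -s 1024000    # write test
        pcitest -d -c -s 1024000    # copy test

    The size and rate for each transfer are printed by the pci-epf-test function driver, so they should show up in the EP side's kernel log (dmesg).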

    Regards,

    Takuma

  • Hi Takuma,

    Here are our PCIe test results. This test was performed with one J784S4-EVM acting as RC and another J784S4-EVM acting as EP.

    Reference: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-jacinto7/latest/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point.html

    We found that for transfer sizes between 1 kB and 1 MB, the transfer speeds of SDK08_05 and SDK09_02 differ significantly. We want to know the reason behind that difference and whether it has been fixed in SDK 10.
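
    As a sketch, the endpoint test driver can be diffed between the two SDK kernels roughly like this (the branch names are assumptions; substitute whatever branches or tags correspond to SDK08_05 and SDK09_02):

        # Branch names below are placeholders for the SDK08_05 and SDK09_02 kernels
        git diff ti-linux-5.10.y..ti-linux-6.1.y -- \
            drivers/pci/endpoint/functions/pci-epf-test.c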

    Regards,

    Mingjian Shang

  • Hi Mingjian,

    I will check internally whether there is a specific patch. However, the issue did not come from the TI driver but from the upstream kernel EP/RC example. In the meantime, you may test with another PCIe driver, such as the NVMe driver with a PCIe SSD card, to evaluate performance.
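
    For a rough number, a direct read from the SSD with dd is usually enough (the device node below is a placeholder; check lsblk for yours):

        # Read 1 GiB straight from the NVMe SSD, bypassing the page cache,
        # to get a ballpark PCIe/NVMe read throughput figure
        dd if=/dev/nvme0n1 of=/dev/null bs=1M count=1024 iflag=direct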

    Regards,

    Takuma

  • Hi Takuma,

    Besides, when we configure the EP device with the following script in SDK09_02, the epf driver prints messages that did not appear in SDK08_05:

      

    We checked the epf driver and found the following difference:

    Did you encounter the same problem in SDK09_02? Will it influence the DMA transfer speed?

    Regards,

    Mingjian

  • Because of the holidays, responses will be delayed from Dec. 25 through Jan. 2. Thank you for your patience.

  • Hi Mingjian,

    Have you ever fixed this issue?

    Regards,

    Semon

  • Hi Semon and Mingjian,

    Apologies for the delay; I missed this forum thread. I took a look at the error message, and I do not think it is the cause of the issue.

    In the 9.2 SDK, the driver takes the fail_back_tx path:

    But in the 8.5 SDK, the driver already does the same thing that fail_back_tx does in the 9.2 SDK:

    This is because using eDMA is a new feature added by the community: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/commit/drivers/pci/endpoint/functions/pci-epf-test.c?h=ti-linux-6.1.y&id=8353813c88ef2186abf96715b5f29b191c9a7732

    And since the TDA4VH PCIe controller hardware does not have an embedded DMA engine and instead uses the SoC's DMA hardware, the informational message "Failed to get private DMA rx channel" is printed. However, DMA should still be in use.
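
    If you want to double-check on your setup, you can inspect the EP side's kernel log; the fallback notice is informational, and the transfers still go through the SoC DMA (the grep patterns below are only illustrative):

        # On the EP side: the fallback notice from pci-epf-test is informational
        dmesg | grep -i "private DMA"

        # The per-transfer lines printed by pci-epf-test should also report whether DMA was used
        dmesg | grep -i "Rate:"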

    Regards,

    Takuma