CCS/TDA4VM: The maximum PCI-e throughput of TDA4

Part Number: TDA4VM

Tool/software: Code Composer Studio

We are using an Aquantia Corp. AQC107 10Gb Ethernet card to transmit data.

The data throughput is 6088 Mb/sec when the card is in the PCIe x2 lane slot, and
the throughput is 5672 Mb/sec when it is in the PCIe x1 lane slot.
However, the throughput is 8892 Mb/sec when the card is used in a PC (PCIe x4 lane mode).

Do you have any test report of the TDA4's PCIe throughput?
What is the bottleneck of TDA4's PCIe?

  • Hi,

    The PCIe throughput on TDA4 has been validated using an NVMe SSD card. We measured a throughput of 13504 Mb/sec when using an x2 lane slot.

    For other use cases, factors such as DMA mastering might affect the throughput measurement.
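
    For reference, a minimal sketch of how a raw NVMe read-throughput number of that kind could be reproduced from Linux user space. The device node /dev/nvme0n1 is an assumption, and root privileges are typically required:

    ```python
    import mmap
    import os
    import time

    DEV = "/dev/nvme0n1"   # placeholder NVMe block device node
    BLOCK = 1 << 20        # 1 MiB per read
    TOTAL = 1 << 30        # read 1 GiB in total

    # O_DIRECT bypasses the page cache; it requires an aligned buffer,
    # which an anonymous mmap (page-aligned) provides.
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)

    done = 0
    start = time.monotonic()
    while done < TOTAL:
        n = os.readv(fd, [buf])  # direct read into the aligned buffer
        if n == 0:
            break
        done += n
    elapsed = time.monotonic() - start
    os.close(fd)

    print(f"{done * 8 / elapsed / 1e6:.0f} Mbit/s")
    ```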

    Thanks and Regards

    Dhaval Khandla

  • Hi,

    We are now using two 10Gb Ethernet cards for a data transmission test.

    Each card connects to one PC for transmission.

    The throughput is 6088 Mb/sec for the card in the PCIe x2 lane slot, and
    the throughput is 5672 Mb/sec for the card in the PCIe x1 lane slot.

    Then I used a Python multiprocessing test function so that both cards transmit data simultaneously (a minimal sketch is shown below).

    The total throughput is only 7248 Mb/sec.
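
    A minimal sketch of such a dual-stream sender. The peer addresses and port are placeholders, and each PC is assumed to run a TCP sink (e.g. `nc -l -k 5001 > /dev/null`):

    ```python
    import multiprocessing as mp
    import socket
    import time

    # Placeholder peers: one PC behind each 10Gb Ethernet card.
    PEERS = [("192.168.10.2", 5001), ("192.168.20.2", 5001)]
    DURATION = 10  # seconds per stream

    def sender(addr, results):
        """Blast TCP data at one peer and report the achieved rate."""
        payload = b"\x00" * (1 << 20)  # 1 MiB per send
        sent = 0
        with socket.create_connection(addr) as s:
            deadline = time.monotonic() + DURATION
            while time.monotonic() < deadline:
                s.sendall(payload)
                sent += len(payload)
        results.put((addr[0], sent * 8 / DURATION / 1e6))  # Mbit/s

    if __name__ == "__main__":
        results = mp.Queue()
        procs = [mp.Process(target=sender, args=(peer, results)) for peer in PEERS]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        rates = [results.get() for _ in PEERS]
        for host, mbps in rates:
            print(f"{host}: {mbps:.0f} Mbit/s")
        print(f"total: {sum(m for _, m in rates):.0f} Mbit/s")
    ```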

    What is the bottleneck in this case? Memory bandwidth, DMA, or PCIe?

  • Hi,

    The achievable throughput depends on a number of factors, including:

    •  The DMA mastering capability of the PCIe EP device
    •  Whether the traffic is a simple pass-through (forwarded through the SoC interconnect), as is the case with PCIe backplane/switch functionality
    •  Whether the traffic passes through TDA4 memory, in which case the overall DDR memory bandwidth and the concurrency involved matter
    •  Whether a CPU is used to handle the traffic (as is the case here for the FTP application), in which case the associated CPU load matters

    Lastly, peer-side application bottlenecks could also impact the achievable throughput, so please review the CPU load and bandwidth at the peer as well.

    I would suggest obtaining CPU load measurements alongside the FTP transfer (using the top or iostat commands).
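
    A minimal sketch of such a load measurement, sampling aggregate CPU utilization from /proc/stat once per second while the transfer runs:

    ```python
    import time

    def cpu_times():
        # First line of /proc/stat: aggregate jiffies per CPU state
        # (user, nice, system, idle, iowait, irq, softirq, ...).
        with open("/proc/stat") as f:
            vals = list(map(int, f.readline().split()[1:]))
        idle = vals[3] + vals[4]  # idle + iowait
        return idle, sum(vals)

    prev_idle, prev_total = cpu_times()
    for _ in range(30):  # sample for 30 seconds
        time.sleep(1)
        idle, total = cpu_times()
        busy = 100.0 * (1 - (idle - prev_idle) / (total - prev_total))
        print(f"CPU busy: {busy:.1f}%")
        prev_idle, prev_total = idle, total
    ```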