About concurrent usage of PCIe between Cores and EDMA

Hello,

We are having some trouble with PCIe traffic in our system, so I have a few questions.

Some basic information about the PCIe interface of our system: a 6678 DSP is deployed as the RC, with an FPGA at the other end as the EP. There are no other EPs in the system. We use Core 0 to manage the PCIe data flow and the EDMA for burst data transfers. All DSP transactions are outbound translations.

Here are the scenarios I would like to ask about:

  1. Scenario:
    1. PCIe is initialized by Core 0. Link is up.
    2. All 8 cores perform read and write transactions (not bursts) to the EP concurrently, without any synchronization mechanism. Each core reads and writes to a different address on the EP.
    3. I actually tested this scenario and saw no problem; the PCIe peripheral appears to handle concurrent use among the cores.
    4. Is this observation correct?
  2. Scenario:
    1. PCIe is initialized by Core 0. Link is up.
    2. EDMA is initialized by Core 0. An EDMA channel is configured to do burst data reads from the EP.
    3. The EDMA channel is triggered. During the burst transfer, Core 0 attempts a single data read (or write) from (or to) the EP at an address other than the burst read address.
    4. What happens in this scenario? Do we have to establish a synchronization mechanism, such as waiting for the EDMA transfer to finish?
  3. Scenario:
    1. PCIe is initialized by Core 0. 
    2. MSI_0 is also configured. EP will notify DSP through MSI.
    3. Link is up.
    4. EDMA is initialized by Core 0. An EDMA channel is configured to do burst data reads from the EP.
    5. The EDMA channel is triggered, and during the burst transfer the EP tries to send MSI_0.
    6. I actually observed this scenario. What I saw is that the EP can't get its interrupt-ready signal back, the DSP then can't receive its next MSI_0, and the PCIe data flow stops completely. Moreover, we lose the debug connection and Core 0 jumps to a meaningless address. I suspect something goes wrong at the PCIe protocol level and it enters an undefined state, which then causes the debug connection to drop.
    7. What may be the problem in this scenario?
  4. Scenario:
    1. PCIe is initialized by Core 0. Link is up.
    2. EDMA is initialized by Core 0. Two EDMA channels are configured: the first for burst data reads, the second for burst data writes.
    3. The first EDMA channel is triggered. During the burst read, the second EDMA channel is triggered.
    4. What happens in this scenario?
    5. I once observed a similar scenario, and it appeared that both EDMA channels attempted burst transfers concurrently and both channels' data were corrupted. This doesn't seem logical to me, because the EDMA works through queues: when I trigger both channels at the same time, they should transfer data in sequence, assuming they use the same queue. So what could the problem be?
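For Scenario 2, the synchronization I have in mind would look roughly like this. This is a sketch only: the CC1 base address (0x02720000) and the global-region IPR/ICR offsets (0x1068/0x1070) are my reading of the data manual and SPRUGS5, and the names are illustrative, not from a TI library.

```c
#include <stdint.h>

/* Illustrative only -- base and offsets assumed from the C6678 data manual
 * and SPRUGS5; verify before use. EDMA3 CC1 assumed at 0x02720000; IPR
 * (interrupt pending) at offset 0x1068, ICR (clear) at 0x1070. */
#define EDMA3CC1_BASE  0x02720000u
#define EDMA3_IPR_OFS  0x1068u
#define EDMA3_ICR_OFS  0x1070u

/* Bit mask for a transfer completion code (TCC) in IPR/ICR. */
static uint32_t edma_tcc_mask(uint32_t tcc)
{
    return 1u << (tcc & 31u);
}

/* Spin until the transfer tagged with `tcc` completes, then clear the
 * pending bit, before letting the CPU touch the PCIe data window. */
static void wait_edma_done(uint32_t tcc)
{
    volatile uint32_t *ipr =
        (volatile uint32_t *)(uintptr_t)(EDMA3CC1_BASE + EDMA3_IPR_OFS);
    volatile uint32_t *icr =
        (volatile uint32_t *)(uintptr_t)(EDMA3CC1_BASE + EDMA3_ICR_OFS);

    while ((*ipr & edma_tcc_mask(tcc)) == 0)
        ;                          /* busy-wait on the completion bit */
    *icr = edma_tcc_mask(tcc);     /* write-1-to-clear for next time */
}
```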
Thanks in advance,
Regards,
koray.
  • Koray,

    You can't have concurrency on the system bus, but the PCIe peripheral has its own FIFO, which can hold concurrent transactions.

    For Scenario 1: there shouldn't be a problem, since the CPU reads/writes are not bursts.

    For Scenario 2: it should work.

    For Scenario 3: do you mean the DSP can receive the first MSI but not the second? Without EDMA traffic, does the DSP continuously receive MSIs from the EP without problems? If MSI and EDMA each work individually, can you try adjusting the registers below and see whether that helps:

    4.2.1.7 Queue Priority Register (QUEPRI)

    4.3.5 Read Rate Register (RDRATE)

    Enhanced Direct Memory Access (EDMA3) Controller User Guide, Literature Number: SPRUGS5
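    For reference, a sketch of what the QUEPRI change could look like. The CC1 base (0x02720000), the QUEPRI offset (0x284), and the field layout (3-bit PRIQn fields on 4-bit boundaries, where a lower value means a higher priority) are my reading of SPRUGS5 and should be verified; the function names are illustrative.

```c
#include <stdint.h>

/* Illustrative sketch -- verify the addresses and field layout against
 * SPRUGS5 before use. CC1 assumed at 0x02720000; QUEPRI at offset 0x284.
 * Each PRIQn is a 3-bit field at bit 4*n; lower value = higher priority. */
#define EDMA3CC1_BASE  0x02720000u
#define QUEPRI_OFS     0x284u

/* Build the QUEPRI value for event queue priorities p0..p3. */
static uint32_t quepri_value(uint32_t p0, uint32_t p1, uint32_t p2, uint32_t p3)
{
    return (p0 & 7u) | ((p1 & 7u) << 4) | ((p2 & 7u) << 8) | ((p3 & 7u) << 12);
}

/* On target: write the computed value into QUEPRI. */
static void set_queue_priorities(uint32_t p0, uint32_t p1, uint32_t p2, uint32_t p3)
{
    volatile uint32_t *quepri =
        (volatile uint32_t *)(uintptr_t)(EDMA3CC1_BASE + QUEPRI_OFS);
    *quepri = quepri_value(p0, p1, p2, p3);
}
```

    Setting all four queues to 7 would give the value 0x00007777 at address 0x02720284.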

    For Scenario 4: how big is the burst, and did you use the same CC and the same TC but different DMA channels, or some other combination?
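    Note that two channels mapped to the same event queue are serviced in order by one TC, while channels on different queues/TCs can genuinely run in parallel. A hypothetical sketch of remapping a channel's event queue through DMAQNUMn; the offsets (0x240 + 4*(ch/8), one 3-bit field per channel on a 4-bit boundary) are my reading of SPRUGS5 and should be verified.

```c
#include <stdint.h>

/* Hypothetical sketch -- offsets assumed from SPRUGS5, verify before use.
 * DMAQNUMn registers start at CC offset 0x240; each DMA channel owns a
 * 3-bit field (on a 4-bit boundary) selecting its event queue. */
#define EDMA3CC1_BASE    0x02720000u
#define DMAQNUM_OFS(ch)  (0x240u + 4u * ((ch) / 8u))

/* Compute the new DMAQNUMn value that maps channel `ch` to queue `q`. */
static uint32_t dmaqnum_map(uint32_t cur, uint32_t ch, uint32_t q)
{
    uint32_t shift = 4u * (ch % 8u);
    return (cur & ~(7u << shift)) | ((q & 7u) << shift);
}

/* On target: read-modify-write the register (illustrative, not CSL code). */
static void map_channel_to_queue(uint32_t ch, uint32_t q)
{
    volatile uint32_t *reg =
        (volatile uint32_t *)(uintptr_t)(EDMA3CC1_BASE + DMAQNUM_OFS(ch));
    *reg = dmaqnum_map(*reg, ch, q);
}
```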

    Regards, Eric

  • Hi Eric,

    In our system, there is a continuous data flow from the EP to the RC, driven by MSIs.

    About your questions on Scenario 3:

    I mean that once the DSP receives the first MSI_0 that overlaps with an EDMA burst read, it can't get the next MSI, and the PCIe data flow stops.
    Without EDMA traffic, the system works; I can see the MSI packets coming in sequence.

    About your suggestions on Scenario 3:

    We are using CC1, so I changed the CC1 QUEPRI register to set the priority of TC0, TC1, TC2, and TC3 to 7. After setting these values, I read back the register at address 0x02720284; its value was 0x00007777, which is correct.
    But this modification didn't fix our problem: the PCIe data flow still stops after the first MSI_0 that overlaps with an EDMA burst read.
    Is it possible that the DSP cannot send the MSI acknowledge during a burst data transfer?
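    The MSI service sequence I have in mind is roughly the following. This is a sketch only: the application register base (0x21800000), MSI0_IRQ_STATUS at offset 0x104, and IRQ_EOI at offset 0x050 with the value 4 for MSI0 are my reading of the PCIe user guide and may be wrong.

```c
#include <stdint.h>

/* Addresses and offsets assumed from the C6678 PCIe user guide -- verify
 * before use. PCIe application registers assumed at 0x21800000;
 * MSI0_IRQ_STATUS at offset 0x104 (write-1-to-clear); IRQ_EOI at offset
 * 0x050, where the value 4 is assumed to signal end-of-interrupt for MSI0. */
#define PCIE_APP_BASE        0x21800000u
#define MSI0_IRQ_STATUS_OFS  0x104u
#define IRQ_EOI_OFS          0x050u
#define EOI_MSI0             4u

/* Absolute address of a PCIe application register. */
static uint32_t pcie_app_reg_addr(uint32_t ofs)
{
    return PCIE_APP_BASE + ofs;
}

/* Tail of the MSI0 ISR: if either write is skipped, no further MSI is
 * delivered -- which would look exactly like the stall we are seeing. */
static void msi0_isr_tail(void)
{
    volatile uint32_t *status =
        (volatile uint32_t *)(uintptr_t)pcie_app_reg_addr(MSI0_IRQ_STATUS_OFS);
    volatile uint32_t *eoi =
        (volatile uint32_t *)(uintptr_t)pcie_app_reg_addr(IRQ_EOI_OFS);

    *status = *status;   /* clear pending MSI0 bits (write-1-to-clear) */
    *eoi    = EOI_MSI0;  /* re-arm the MSI0 interrupt line */
}
```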

    It is not suitable for me to change the RDRATE register: the DSP has to receive the burst data within a very tight window and cannot afford any extra time on that transaction.

    Any extra ideas?

    PS: For now I am setting Scenario 4 aside, since its root cause may well be the same as Scenario 3's.

    Regards,
    Koray.