Hi,
I’m trying to run PCIe transactions between two Shannons and evaluate the throughput in x1/x2 mode and Gen1/Gen2 (based on the TI PCIe LLD), and I have a problem in x1 mode!
- Code:
/* Set the PL_GEN2 register: target link speed and lane count */
pcieRegisters_t setRegs;
pcieGen2Reg_t   gen2;

memset (&setRegs, 0, sizeof(setRegs));
memset (&gen2,    0, sizeof(gen2));

gen2.numFts = 0xF;   /* number of fast training sequences        */
gen2.dirSpd = 0x0;   /* directed speed change: 0 = Gen1, 1 = Gen2 */
gen2.lnEn   = 1;     /* lane enable: 1 = x1 mode, 2 = x2 mode     */
setRegs.gen2 = &gen2;

if ((retVal = Pcie_writeRegs (handle, pcie_LOCATION_LOCAL, &setRegs)) != pcie_RET_OK)
{
  System_printf ("SET GEN2 register failed!\n");
  return retVal;
}
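For completeness, this is roughly how I intend to read back the negotiated link width after training, just a sketch: I’m assuming the LLD exposes the Link Status/Control capability register through a linkStatCtrl pointer of type pcieLinkStatCtrlReg_t with negotiatedLinkWd / linkSpeed fields (please correct me if the names in pcie.h differ).
- Code:
/* Read back the negotiated link width/speed once the link is up.
 * NOTE: register/field names (linkStatCtrl, negotiatedLinkWd,
 * linkSpeed) are my assumption from memory of pcie.h, not verified. */
pcieRegisters_t       getRegs;
pcieLinkStatCtrlReg_t linkStatCtrl;

memset (&getRegs, 0, sizeof(getRegs));
memset (&linkStatCtrl, 0, sizeof(linkStatCtrl));
getRegs.linkStatCtrl = &linkStatCtrl;

if ((retVal = Pcie_readRegs (handle, pcie_LOCATION_LOCAL, &getRegs)) != pcie_RET_OK)
{
  System_printf ("GET linkStatCtrl register failed!\n");
  return retVal;
}
System_printf ("negotiated link width = x%d, link speed = Gen%d\n",
               linkStatCtrl.negotiatedLinkWd, linkStatCtrl.linkSpeed);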
- Using the CPU, I can’t see any difference. I suppose that is normal because CPU-driven throughput is limited by the data payload size (4 B per transaction), so the efficiency is only about 11%. Is that right? (My rough math is in the sketch below, after the EDMA results.)
- But using EDMA:
- throughput (Gen2) is about 2x throughput (Gen1) => OK
- throughput in x2 is always equal to throughput in x1 (about 5.53 Gbps in Gen2 and 2.7 Gbps in Gen1) => NOK!! (see the rough math below)
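For context, here is the back-of-the-envelope math I am comparing against. The ~24 bytes of per-TLP overhead is my own estimate, not a value from the LLD; the 8b/10b factor applies to both Gen1 and Gen2.
- Code:
#include <stdio.h>

int main(void)
{
    /* CPU case: each write TLP carries only 4 bytes of payload, while the
     * TLP header plus framing adds roughly 20-24 bytes (assumed), and
     * ACK/flow-control DLLPs add some more traffic on top of that. */
    double payload  = 4.0;   /* bytes of data per TLP                  */
    double overhead = 24.0;  /* assumed header + framing bytes per TLP */
    printf("CPU write efficiency ~ %.1f %% (before DLLP overhead)\n",
           100.0 * payload / (payload + overhead));

    /* EDMA case: theoretical link data rate = raw rate x lanes x 8/10
     * (8b/10b encoding), before any protocol overhead. */
    double genRate[2] = { 2.5, 5.0 };   /* GT/s per lane: Gen1, Gen2 */
    for (int gen = 0; gen < 2; gen++)
        for (int lanes = 1; lanes <= 2; lanes++)
            printf("Gen%d x%d: %.1f Gbps max\n",
                   gen + 1, lanes, genRate[gen] * lanes * 0.8);

    return 0;
}
Since a Gen2 x1 link tops out at 4 Gbps after 8b/10b encoding, measuring ~5.53 Gbps in the supposed x1 configuration is part of why I think the link never actually drops to x1.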
So, how can I force the PCIe link into x1 mode? (gen2.lnEn = 2 doesn't work!!)
PS: I’m using the BOC; may I ask whether the BOC can have an effect on the PCIe link mode (x1 or x2)?