PROCESSOR-SDK-AM64X: Where are the inbound setting registers for the PCIe endpoint documented?

Part Number: PROCESSOR-SDK-AM64X

Hi TI team,

I am trying a PCIe end-point test on an AM64x board with Processor SDK Linux for AM64X 08_02_00_14.

I would like to know how to map a local memory area to an address area on the PCIe host computer.

My example: local 0x81000000, size 0x100000 ---> PCI config BAR0 ---> host 0xe0500000

However, the U-Boot (or Linux kernel) behavior does not match the register descriptions in the manual "spruim2c.pdf".

U-Boot (or Linux kernel) behavior:
cdns_set_bar() or cdns_pcie_ep_set_bar() (in pcie-cadence-ep.c) writes:

0D400840: CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0 <-- addr0 (lower 32 bits: 0x81000000)
0D400844: CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1 <-- addr1 (upper 32 bits: 0x00000000)
0D100240: CDNS_PCIE_LM_EP_FUNC_BAR_CFG0 <-- cfg (0x0505058d)

In the manual "spruim2c.pdf" (Section 12.2.2.5, PCIe Subsystem Registers, Table 12-1665), these register addresses are not listed:

address 0D40_0840h is not found.
address 0D40_0844h is not found.
address 0D10_0240h is not found.

Where are the inbound setting registers for the PCIe endpoint documented? Specifically:

1. The addresses and setting values of these registers

2. How to use the CDNS_PCIE_LM_EP_FUNC_BAR_CFG0 register

Regards,

Hanaoka

  • Hi Hanaoka-san,

    I am looking into this and will get back to you once I have an update.

  • I found the new MCU+ SDK version 8.3.0.18 on the TI software-development site.
    I will soon try applying its drivers and examples to my case, because this version includes the PCIe and SERDES modules...

    drivers.am64x.r5f.ti-arm-clang.debug.lib
    pcie_legacy_irq_ep_am64x-evm_r5fss0-0_nortos_ti-arm-clang

    My Example Inbound Setting:
    0x81000000 <-- 0x6a000000 (Size:0x200000) <-- 0xe0600000 (BAR0 in RC)

    Register Setting:
    0D400840: CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0 <-- addr0 (lower 32 bits: 0x81000000)
    0D400844: CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1 <-- addr1 (upper 32 bits: 0x00000000)
    0D100240: CDNS_PCIE_LM_EP_FUNC_BAR_CFG0 <-- cfg (0x05058e8e)
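    This cfg value can be cross-checked against the BAR size. Assuming the aperture encoding used by the mainline cdns_pcie_ep_set_bar() (aperture = log2(size) - 7, so 128 B -> 0), a 2 MiB BAR gives aperture 0xe, and combined with ctrl 0x4 (32-bit memory) the BAR0 byte becomes 0x8e, matching the observed 0x05058e8e. A small sketch (the helper name is hypothetical):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical helper mirroring the aperture computation assumed from
     * the mainline cdns_pcie_ep_set_bar(): aperture = log2(size) - 7,
     * so 128 B -> 0, 256 B -> 1, ..., 1 MiB -> 0xd, 2 MiB -> 0xe. */
    static unsigned int cdns_bar_aperture(uint64_t size)
    {
        unsigned int log2 = 0;

        while ((1ull << (log2 + 1)) <= size)
            log2++;
        return log2 - 7;
    }

    int main(void)
    {
        /* 2 MiB BAR from this example -> aperture 0xe; with ctrl 0x4
         * (32-bit memory) the BAR0 byte is (0x4 << 5) | 0xe = 0x8e,
         * the low byte of the observed cfg value 0x05058e8e. */
        assert(cdns_bar_aperture(0x200000) == 0xe);
        assert(((0x4u << 5) | cdns_bar_aperture(0x200000)) == 0x8e);
        /* 1 MiB BAR from the first post -> BAR0 byte 0x8d */
        assert(((0x4u << 5) | cdns_bar_aperture(0x100000)) == 0x8d);
        return 0;
    }
    ```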

    While trying it, I ran into a new question, as below.


    From the R5F core, read/write accesses to 0x81000000 are reflected at 0xe0600000 (the PCI area mapped by BAR0) in the RC (host machine).
    But from the A53 core, read/write accesses to 0x81000000 are NOT reflected in the 0xe0600000 area.

    Why does it work from the R5F but not from the A53?
    Could the cause be in the CDNS_PCIE_LM_EP_FUNC_BAR_CFG0 setting?

    Regards,
    Hanaoka

  • Hi Hanaoka-san,

    Sorry for my late response.

    I would like to know how to map a local memory area to an address area on the PCIe host computer.

    My example: local 0x81000000, size 0x100000 ---> PCI config BAR0 ---> host 0xe0500000

    Can you please provide details of your example? Do you have a separate driver setting the local address 0x81000000, or do you modify existing kernel code for it?

    The kernel driver drivers/pci/controller/cadence/pcie-cadence-ep.c is the main driver implementing the PCIe endpoint function for AM64x. The kernel also provides a PCIe EP test function driver, drivers/pci/endpoint/functions/pci-epf-test.c, which dynamically allocates DMA memory and assigns it to the BARs; this is done in pci_epf_test_set_bar(), which is called from pci_epf_test_core_init() during driver initialization.
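    As a sketch, the pci-epf-test function can be bound and started from user space via configfs, following the kernel's Documentation/PCI/endpoint/pci-test-howto.rst. The controller name is platform-specific (check /sys/class/pci_epc), and the device ID below is an assumption for AM64x:

    ```shell
    # Load the endpoint test function driver
    modprobe pci-epf-test

    # Create a function instance in configfs
    EPF=/sys/kernel/config/pci_ep/functions/pci_epf_test/func1
    mkdir $EPF

    # 0x104c is the TI vendor ID; the device ID must match what the
    # host-side pci_endpoint_test driver expects (assumed 0xb010 for AM64x)
    echo 0x104c > $EPF/vendorid
    echo 0xb010 > $EPF/deviceid

    # Find the endpoint controller name, then bind the function and start
    ls /sys/class/pci_epc/
    EPC=/sys/kernel/config/pci_ep/controllers/<epc-name>
    ln -s $EPF $EPC/
    echo 1 > $EPC/start   # start the link before the host enumerates the bus
    ```

    Once the host is booted and enumerates the endpoint, the host-side pci_endpoint_test driver and the pcitest tool can exercise BAR reads/writes and DMA.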