
AM6442: Unexpected PCIe BAR 0 assignment due to large dma memory range definition in the k3-am64-main.dtsi

Part Number: AM6442


In our product based on the AM6442 we see an unexpected PCIe BAR assignment error while the Linux kernel is booting. The failure does not seem to be severe, as the attached PCIe endpoint works as expected. Nevertheless, since we are designing and developing robust products for industrial purposes, we want to find the exact root cause and understand this failure.

Our initial investigation indicates that this failure is a consequence of the dma-ranges definition of the PCIe node (pcie0_rc) in k3-am64-main.dtsi. The property is defined as "dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x00000010 0x0>", which describes a 64 GB address space, and of course on our system with 1 GB of RAM such a mapping cannot be satisfied. The reason everything still works is that the Linux PCIe stack decides to ignore and recycle such bogus allocations.
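
The seven cells break down as 3 PCI address cells, 2 parent address cells and 2 size cells, i.e. roughly:

    dma-ranges = <0x02000000 0x0 0x0   /* PCI (child) address: memory space, bus address 0x0 */
                  0x0 0x0              /* parent (CPU) address: 0x0 */
                  0x00000010 0x0>;     /* size: 0x10_0000_0000 = 64 GB */

So it describes a single inbound window mapping the first 64 GB of PCI bus address space 1:1 onto the SoC address space.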

Can you please explain to us the reason behind this dma-ranges entry, and where it is expected to be mapped on the parent bus? Also, when looking into the Cadence driver used here (drivers/pci/controller/cadence/pcie-cadence-host.c), one can see that the dma-ranges and cdns,no-bar-match-nbits properties are mutually exclusive, so one of them is not needed. Indeed, if we for example remove the dma-ranges property (see the overlay sketch below), PCIe still works as expected and the BAR 0 assignment error is gone.
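
For completeness, this is a minimal sketch of how we drop the property in our board dts (assuming the pcie0_rc label from k3-am64-main.dtsi):

    &pcie0_rc {
            /delete-property/ dma-ranges;
    };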

 

Log snippets:

Case when dma-ranges is present:

[ 1.553386] j721e-pcie f102000.pcie: host bridge /bus@f4000/pcie@f102000 ranges:
[ 1.560120] mmcblk0: p1
[ 1.565353] j721e-pcie f102000.pcie: IO 0x0068001000..0x0068010fff -> 0x0068001000
[ 1.568960] mmcblk0boot0: mmc0:0001 DG4008 4.00 MiB
[ 1.575944] j721e-pcie f102000.pcie: MEM 0x0068011000..0x006fffffff -> 0x0068011000
[ 1.582266] mmcblk0boot1: mmc0:0001 DG4008 4.00 MiB
[ 1.588928] j721e-pcie f102000.pcie: IB MEM 0x0000000000..0x00ffffffff -> 0x0000000000
[ 1.595361] mmcblk0rpmb: mmc0:0001 DG4008 4.00 MiB, chardev (243:0)
[ 1.811743] j721e-pcie f102000.pcie: Link up
[ 1.816568] j721e-pcie f102000.pcie: PCI host bridge to bus 0000:00
[ 1.822874] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 1.828366] pci_bus 0000:00: root bus resource [io 0x0000-0xffff] (bus address [0x68001000-0x68010fff])
[ 1.837858] pci_bus 0000:00: root bus resource [mem 0x68011000-0x6fffffff]
[ 1.844766] pci 0000:00:00.0: [104c:b010] type 01 class 0x060400
[ 1.850782] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0xffffffff 64bit pref]
[ 1.858078] pci 0000:00:00.0: supports D1
[ 1.862086] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[ 1.869889] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[ 1.878099] pci 0000:01:00.0: [17cb:1109] type 00 class 0x028000
[ 1.884153] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x001fffff 64bit]
[ 1.891193] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[ 1.897368] pci 0000:01:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:00.0 (capable of 15.752 Gb/s with 8.0 GT/s PCIe x2 link)
[ 1.923745] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[ 1.930413] pci 0000:00:00.0: BAR 0: no space for [mem size 0x100000000 64bit pref]
[ 1.938071] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x100000000 64bit pref]
[ 1.946074] pci 0000:00:00.0: BAR 8: assigned [mem 0x68200000-0x683fffff]
[ 1.952862] pci 0000:01:00.0: BAR 0: assigned [mem 0x68200000-0x683fffff 64bit]
[ 1.960186] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 1.965150] pci 0000:00:00.0: bridge window [mem 0x68200000-0x683fffff]

Case when dma-ranges is removed:

[ 1.580175] j721e-pcie f102000.pcie: host bridge /bus@f4000/pcie@f102000 ranges:
[ 1.587659] j721e-pcie f102000.pcie: IO 0x0068001000..0x0068010fff -> 0x0068001000
[ 1.595758] j721e-pcie f102000.pcie: MEM 0x0068011000..0x006fffffff -> 0x0068011000
[ 1.815096] j721e-pcie f102000.pcie: Link up
[ 1.819921] j721e-pcie f102000.pcie: PCI host bridge to bus 0000:00
[ 1.826230] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 1.831740] pci_bus 0000:00: root bus resource [io 0x0000-0xffff] (bus address [0x68001000-0x68010fff])
[ 1.841211] pci_bus 0000:00: root bus resource [mem 0x68011000-0x6fffffff]
[ 1.848117] pci 0000:00:00.0: [104c:b010] type 01 class 0x060400
[ 1.854209] pci 0000:00:00.0: supports D1
[ 1.858221] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[ 1.866014] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[ 1.874206] pci 0000:01:00.0: [17cb:1109] type 00 class 0x028000
[ 1.880259] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x001fffff 64bit]
[ 1.887298] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[ 1.893473] pci 0000:01:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:00.0 (capable of 15.752 Gb/s with 8.0 GT/s PCIe x2 link)
[ 1.919114] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[ 1.925772] pci 0000:00:00.0: BAR 8: assigned [mem 0x68200000-0x683fffff]
[ 1.932581] pci 0000:01:00.0: BAR 0: assigned [mem 0x68200000-0x683fffff 64bit]
[ 1.939905] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 1.944869] pci 0000:00:00.0: bridge window [mem 0x68200000-0x683fffff]

Thanks in advance for your valuable support!

Regards,

Aleksandar

  • Hi Aleksandar,

    I am looking into it and will get back to you in a few days.

  • Hi Aleksandar,

    I am still discussing this matter with our developer and haven't closed on the BAR0 failure message in the kernel log yet. My understanding at this moment is that the message is misleading and can be ignored. Removing "dma-ranges" from the devicetree is also okay; the PCIe driver would then use the "cdns,no-bar-match-nbits" property to configure the same thing.

  • Hi Bin Liu,

    Thanks for the confirmation, we will do so. However, we are still curious to understand the background of the dma-ranges property for this device and whether it is indeed needed, so we would appreciate it if you come back with an answer on this.

    Regards,

    Aleksandar

  • Hi Aleksandar,

    The following kernel commit might give you a better picture of this "dma-ranges" setting. It appears the PCIe driver originally used "cdns,no-bar-match-nbits" to configure the PCIe controller, but later moved to the standard "dma-ranges" property to pass in the address mapping information, hence this patch.

    https://github.com/torvalds/linux/commit/5d3d063abb276

  • Hi Bin Liu,

    Yes, I have already seen this patch, but thanks anyway for pointing it out.

    The patch also says: "However standard PCI dt-binding already defines "dma-ranges" to describe the address ranges accessible by PCIe controller."

    According to the devicetree specification:

    "The dma-ranges property is used to describe the direct memory access (DMA) structure of a memory-mapped bus whose devicetree parent can be accessed from DMA operations originating from the bus."

    And based on that, the PCIe framework is indeed trying to allocate the defined space (64 GB), so this entry is not only related to the Cadence controller configuration. From my view the entry should not exist for this device, as we are not DMA-ing over such a large range. That's why I'm keen for you to clarify this, as I might be overlooking something.

    Thanks for your support!

  • E.g. my doubt is: is the dma-ranges property indeed needed, but with the size adapted, or is it not needed at all in this case?

  • Hi Aleksandar,

    The "dma-ranges" property configures the PCIe controller to be DMA-ready. It doesn't hurt if your application doesn't use DMA. I believe the only problem here is the confusing kernel message caused by "dma-ranges".

  • "From my view the entry should not exist for this device, as we are not DMA-ing over such a large range."

    Sorry, I mis-read your message.

    The "dma-ranges" property basically maps the entire bus address space to the system address space; it seems to be what the PCIe controller register asks for. I haven't dug into the controller hardware spec.

  • "E.g. my doubt is: is the dma-ranges property indeed needed, but with the size adapted, or is it not needed at all in this case?"

    We are chatting live, so I didn't see this message when I was replying...

    From the application's perspective, I agree that only the available memory range makes sense, but I believe the Cadence PCIe controller register for the DMA mapping asks for the full address range.

  • I somehow think that the intention is to use the RP_NO_BAR configuration case (see cdns_pcie_host_bar_ib_config).

    To do so, the dma-ranges size has to be bigger than 128 GB (see cdns_pcie_host_find_max_bar); at least that's what other TI platforms are doing (e.g. k3-j721e-main.dtsi, see the snippet at the end of this post).

    Looking further, cdns,no-bar-match-nbits and dma-ranges in k3-am64-main.dtsi are conflicting, as cdns,no-bar-match-nbits selects the RP_NO_BAR case in cdns_pcie_host_bar_ib_config while the dma-ranges size selects the BAR0 case (< 128 GB).

    But maybe I'm speculating a bit, as I haven't really read the Cadence PCIe RC spec.
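
    For comparison, the equivalent entry in k3-j721e-main.dtsi looks (from memory, exact values to be double-checked) something like:

        dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x10000 0x0>;   /* 0x10000_0000_0000 = 2^48 bytes */

    which is well above the 128 GB threshold discussed above and would therefore take the RP_NO_BAR path, while the 64 GB entry on AM64 would not.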

  • I am not sure about the "128 GB"; I haven't read the PCIe drivers deeply enough to comment on this. But the reason AM64x uses a size of "0x10_0000_0000" while other J7xx devices use "0x10000_0000_0000" is that AM64x has 36-bit addresses while J7xx devices have 48-bit addresses.
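
    (For reference: 0x10_0000_0000 = 2^36 bytes, the full 36-bit address space, while 0x10000_0000_0000 = 2^48 bytes, the full 48-bit address space.)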

  • Thanks for the answers, we can close this case.

    Regards,