[FAQ] AM62X/AM62Ax/AM62Px: What to do if Linux fails to initialize remoteproc with DM R5F?

I am using a processor with a DM R5F (AM62x, AM62Ax, AM62Px) and SDK 9.2.1 or SDK 10.0. The Device Manager (DM) R5F core is listed in the Linux devicetree, so Linux should use the remoteproc driver to attach to the DM R5F during Linux boot and initialize RPMsg communication with it [1] [2]. However, the Linux remoteproc driver is failing with error -19. What is going on?

Here is an example of a "working" attach to the DM R5F core:

[    6.569096] platform 78000000.r5f: R5F core may have been powered on by a different host, programmed state (0) != actual state (1)
[    6.572660] platform 78000000.r5f: configured R5F for IPC-only mode
[    6.573291] platform 78000000.r5f: assigned reserved memory node r5f-dma-memory@9da00000
[    6.573941] remoteproc remoteproc3: 78000000.r5f is available
[    6.574088] remoteproc remoteproc3: attaching to 78000000.r5f
[    6.589108] platform 78000000.r5f: R5F core initialized in IPC-only mode
[    6.589191] rproc-virtio rproc-virtio.5.auto: assigned reserved memory node r5f-dma-memory@9da00000
[    6.589483] rproc-virtio rproc-virtio.5.auto: registered virtio1 (type 7)
[    6.589504] remoteproc remoteproc3: remote processor 78000000.r5f is now attached

And here is an example of a failing case:

[    6.670963] platform 78000000.r5f: ti-sci processor request failed: -19
[    6.671050] platform 78000000.r5f: ti_sci_proc_request failed, ret = -19
[    6.674274] k3_r5_rproc bus@f0000:bus@b00000:r5fss@78000000: k3_r5_core_of_init failed, ret = -19
[    6.674301] k3_r5_rproc bus@f0000:bus@b00000:r5fss@78000000: k3_r5_cluster_of_init failed, ret = -19
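As an aside, Linux kernel drivers report failures as negative errno values, and 19 is ENODEV, so error -19 means "No such device". You can confirm the mapping with Python's standard errno module:

```python
import errno
import os

# Kernel drivers return negative errno values on failure;
# errno 19 is ENODEV, so "-19" means "No such device".
print(errno.errorcode[19])  # ENODEV
print(os.strerror(19))      # "No such device" on Linux
```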

----------------------------------------------------

Additional notes:

[1] The DM R5F core starts running earlier in the boot process than Linux. For most remote cores (i.e., non-Linux cores), the Linux remoteproc driver both initializes the core, and sets up the infrastructure to enable RPMsg communication with the remote core. However, since the DM R5F core is already running, the Linux remoteproc driver simply "attaches" to the already-running DM R5F, and initializes the RPMsg infrastructure. For more information about booting & disabling processor cores, refer to the Multicore module of the associated processor academy:
AM62x: https://dev.ti.com/tirex/explore/node?node=A__AdnCtmyMLBJQE.U38-Dw4w__AM62-ACADEMY__uiYMDcq__LATEST
AM62Ax: https://dev.ti.com/tirex/explore/node?node=A__Ada-WjvmUg3JmB109NavGA__AM62A-ACADEMY__WeZ9SsL__LATEST
AM62Px: https://dev.ti.com/tirex/explore/node?node=A__AYnQdcxcEc8wt-WsqMumsA__AM62P-ACADEMY__fp5YxRM__LATEST

[2] If RPMsg communication is not used with the DM R5F, and Linux does not interact with the DM R5F at runtime, then the remoteproc driver does not need to attach to the DM R5F in the first place. However, keep in mind that the DM R5F's DDR memory allocations should ALWAYS be reserved in the Linux devicetree file, in order to prevent Linux from overwriting the DM R5F's data. For more information, refer to the Multicore module of the associated processor academy:
AM62x: https://dev.ti.com/tirex/explore/node?node=A__ASVmm1hNWx7CjUJCy91Aig__AM62-ACADEMY__uiYMDcq__LATEST 
AM62Ax: https://dev.ti.com/tirex/explore/node?node=A__Ab1lHTiDEw5GmFW034IFNw__AM62A-ACADEMY__WeZ9SsL__LATEST
AM62Px: https://dev.ti.com/tirex/explore/node?node=A__AUiUIYaptWENZHufiXw.FQ__AM62P-ACADEMY__fp5YxRM__LATEST 
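To make the reservation in [2] concrete, here is a sketch of the reserved-memory nodes involved. The node names, addresses, and sizes below are illustrative (the 0x9da00000 region matches the log above for an AM62x default); always take the actual values from your SDK's board devicetree files:

```dts
/* Sketch only: labels and addresses follow typical AM62x SDK defaults.
 * Check your board's k3-am62*-*.dts for the regions actually in use. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* vrings + vring buffers for RPMsg with the DM R5F */
	wkup_r5fss0_core0_dma_memory_region: r5f-dma-memory@9da00000 {
		compatible = "shared-dma-pool";
		reg = <0x00 0x9da00000 0x00 0x100000>;
		no-map;
	};

	/* resource table + external memory section of the DM R5F image */
	wkup_r5fss0_core0_memory_region: r5f-memory@9db00000 {
		compatible = "shared-dma-pool";
		reg = <0x00 0x9db00000 0x00 0xc00000>;
		no-map;
	};
};
```

The "no-map" property is what keeps Linux from handing these ranges to its own allocator, so the DM R5F's data cannot be overwritten even if the remoteproc driver never attaches.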

  • First, some background on what is going on here 

    Before a processor core can query the status of another core, it must request ownership of that core from TIFS. Ownership is required not just to send "set" commands through TISCI, but ALSO to send "get" commands that query for information. Before the remoteproc driver initializes a core, or even attaches to an already-running core, it needs to read that remote core's status. The remoteproc driver therefore requests ownership of the other core with a ti_sci_proc_request message.

    Between SDK 9.2.0 and SDK 9.2.1, we updated the communication code between the DM R5F and the TIFS software, which is when this behavior was first observed. The root cause is that TIFS is rejecting the remoteproc driver's ownership request.

    The DM R5F code requests ownership of itself while the DM R5F is initializing. The failure occurs when the communication code between the DM R5F and the TIFS software fails to release that ownership after initialization completes. In that case, TIFS thinks the DM R5F is still holding ownership of itself, so TIFS rejects the remoteproc driver's request for ownership of the DM R5F.
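    On a booted system you can check whether the attach succeeded without digging through dmesg, by reading each remoteproc instance's state from sysfs; on a healthy boot the DM R5F instance reports "attached" (IPC-only mode). A minimal sketch (the sysfs layout is the standard kernel remoteproc ABI, but instance numbering varies by board and SDK):

```python
from pathlib import Path

def remoteproc_states(sysfs=Path("/sys/class/remoteproc")):
    """Map each remoteproc instance's device name to its state string.

    Returns an empty dict when no remoteproc instances exist (e.g. when
    run off-target); state is "attached" for an IPC-only DM R5F.
    """
    states = {}
    for d in sorted(sysfs.glob("remoteproc*")):
        name = (d / "name").read_text().strip()
        state = (d / "state").read_text().strip()
        states[name] = state
    return states

if __name__ == "__main__":
    for name, state in remoteproc_states().items():
        print(f"{name}: {state}")
```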

  • How to fix the behavior

    This behavior will be fixed in SDK 10.1.

    Customers using SDK 9.2.1 or SDK 10.0 can download the latest version of ti-sysfw here:
    https://git.ti.com/cgit/processor-firmware/ti-linux-firmware/commit/?h=ti-linux-firmware&id=ddc544aaff0f289ad9cfc07d619f686c203a38e7

    The fix is entirely in the new SYSFW code. If you are programming your own DM R5F firmware, you can continue to use DM R5F code based on the SDK 9.2.1 or SDK 10.0 MCU+ SDK.