
[FAQ] How to check if Linux can communicate with a non-Linux core through RPMsg

Other Parts Discussed in Thread: AM67, AM67A

Before Linux can communicate with a non-Linux core over the RPMsg inter-processor communication (IPC) protocol, several things need to be true:

1) RPMsg between Linux and the non-Linux core must be supported by software drivers on both software instances

2) Application code to communicate over RPMsg must be written for both software instances

3) The Linux remoteproc driver must either initialize the core, or attach to it (if the core is already running)

4) The RPMsg infrastructure (including VIRTIO buffers) must be initialized by Linux
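Conditions 3) and 4) leave a visible trail in the Linux boot log, so they can be checked with a simple grep. A minimal sketch (the two sample lines below are copied from the DM R5F example later in this FAQ and stand in for a real dmesg capture; `boot.log` is just an assumed filename):

```shell
# On a real target, capture the boot log first:  dmesg > boot.log
# The two lines below are sample stand-ins copied from an AM62x boot.
cat > boot.log <<'EOF'
[   11.136027] virtio_rpmsg_bus virtio1: rpmsg host is online
[   11.136098] remoteproc remoteproc1: remote processor 78000000.r5f is now attached
EOF

# condition 3: the core was attached ("is now attached") or booted ("is now up")
grep -c -e "is now attached" -e "is now up" boot.log

# condition 4: the VIRTIO/RPMsg infrastructure came up
grep -c "rpmsg host is online" boot.log
```

Each grep prints a non-zero match count when the corresponding condition was met during boot.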

For more information:

Running the out-of-the-box RPMsg test code:
Refer to the processor academy > Linux > Evaluating Linux > IPC Example
AM62x  ||  AM62Ax  ||  AM62Px  ||  AM64x

Examples of what "pass" and "fail" tests look like with the rpmsg_echo example:
[FAQ] Linux: How to check what binary is running on the DM R5F 
Examples 1, 2, 3

  • Is RPMsg between Linux and the non-Linux core supported in software? 

    Refer to the processor academy > Multicore > IPC > IPC Basics
    AM62x  ||  AM62Ax  ||  AM62Px  ||  AM64x

  • Is there application code to communicate over RPMsg?

    Refer to the processor academy > Multicore > IPC
    AM62x  ||  AM62Ax  ||  AM62Px  ||  AM64x

  • DM R5F: Did remoteproc attach to the core? Was the RPMsg infrastructure initialized? 

    These processors have DM R5F cores:
    AM62x
    AM62Ax
    AM62Dx
    AM62Px
    AM67
    AM67A

    The DM R5F is loaded early in the boot process, before Linux has initialized. Since the DM R5F is already running during Linux boot, the remoteproc driver should "attach" to the DM R5F instead of "initializing" the core.

    Let's look at the terminal output of a boot where Linux successfully attached to the DM R5F and initialized the RPMsg infrastructure:

    // tested on AM62x RT Linux SDK 10.1
    root@am62xx-evm:~# uname -a
    Linux am62xx-evm 6.6.58-rt45-ti-rt-01780-gc79d7ef3a56f-dirty #1 SMP PREEMPT_RT Wed Nov 27 14:15:26 UTC 2024 aarch64 GNU/Linux
    
    // search for any terminal output that could be helpful
    // I will cut out anything unrelated to DM R5F to keep it simple
    
    root@am62xx-evm:~# dmesg | grep -e r5 -e remoteproc -e rproc -e rpmsg -e virtio
    [    0.000000] OF: reserved mem: initialized node r5f-dma-memory@9da00000, compatible id shared-dma-pool
    [    0.000000] OF: reserved mem: 0x000000009da00000..0x000000009dafffff (1024 KiB) nomap non-reusable r5f-dma-memory@9da00000
    [    0.000000] OF: reserved mem: initialized node r5f-memory@9db00000, compatible id shared-dma-pool
    [    0.000000] OF: reserved mem: 0x000000009db00000..0x000000009e6fffff (12288 KiB) nomap non-reusable r5f-memory@9db00000
    ...
    
    // DM R5F is already running
    [   11.099611] platform 78000000.r5f: R5F core may have been powered on by a different host, programmed state (0) != actual state (1)
    [   11.102767] platform 78000000.r5f: configured R5F for IPC-only mode
    [   11.103001] platform 78000000.r5f: assigned reserved memory node r5f-dma-memory@9da00000
    [   11.124169] remoteproc remoteproc1: 78000000.r5f is available
    [   11.124363] remoteproc remoteproc1: attaching to 78000000.r5f
    
    // now we are initializing the VIRTIO buffers that are used for RPMsg communication
    [   11.125123] rproc-virtio rproc-virtio.3.auto: assigned reserved memory node r5f-dma-memory@9da00000
    [   11.136027] virtio_rpmsg_bus virtio1: rpmsg host is online
    [   11.136083] rproc-virtio rproc-virtio.3.auto: registered virtio1 (type 7)
    
    // the remoteproc driver successfully attached to DM R5F
    [   11.136098] remoteproc remoteproc1: remote processor 78000000.r5f is now attached
    
    // the RPMsg code on the DM R5F defined 2 "endpoints" at 0xD and 0xE
    // now Linux creates 2 RPMsg channels to communicate with those 2 endpoints
    [   11.136462] virtio_rpmsg_bus virtio1: creating channel ti.ipc4.ping-pong addr 0xd
    [   11.136684] virtio_rpmsg_bus virtio1: creating channel rpmsg_chrdev addr 0xe
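    As a quick sanity check, the endpoint addresses can be pulled straight out of those two "creating channel" lines. A small sketch (`channels.log` is an assumed filename; on a real board you would fill it with `dmesg | grep "creating channel"`):

```shell
# Sample lines copied from the DM R5F boot log above
cat > channels.log <<'EOF'
[   11.136462] virtio_rpmsg_bus virtio1: creating channel ti.ipc4.ping-pong addr 0xd
[   11.136684] virtio_rpmsg_bus virtio1: creating channel rpmsg_chrdev addr 0xe
EOF

# Extract the endpoint addresses; expect one line per endpoint (0xd and 0xe)
grep -o "addr 0x[0-9a-f]*" channels.log
```

    If an expected endpoint is missing here, the RPMsg application code on the remote core never announced it.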
    

  • other non-Linux cores: Did remoteproc attach to the core? Was the RPMsg infrastructure initialized?

    We will use the M4F on AM62x as an example, but the same pattern applies to all other non-Linux cores.

    // tested on AM62x RT Linux SDK 10.1
    root@am62xx-evm:~# uname -a
    Linux am62xx-evm 6.6.58-rt45-ti-rt-01780-gc79d7ef3a56f-dirty #1 SMP PREEMPT_RT Wed Nov 27 14:15:26 UTC 2024 aarch64 GNU/Linux
    
    // search for any terminal output that could be helpful
    // I will cut out anything unrelated to M4F to keep it simple
    
    root@am62xx-evm:~# dmesg | grep -e m4 -e remoteproc -e rproc -e rpmsg -e virtio
    [    0.000000] OF: reserved mem: initialized node m4f-dma-memory@9cb00000, compatible id shared-dma-pool
    [    0.000000] OF: reserved mem: 0x000000009cb00000..0x000000009cbfffff (1024 KiB) nomap non-reusable m4f-dma-memory@9cb00000
    [    0.000000] OF: reserved mem: initialized node m4f-memory@9cc00000, compatible id shared-dma-pool
    [    0.000000] OF: reserved mem: 0x000000009cc00000..0x000000009d9fffff (14336 KiB) nomap non-reusable m4f-memory@9cc00000
    [   10.608475] k3-m4-rproc 5000000.m4fss: assigned reserved memory node m4f-dma-memory@9cb00000
    [   10.608593] k3-m4-rproc 5000000.m4fss: configured M4 for remoteproc mode
    [   10.608655] k3-m4-rproc 5000000.m4fss: local reset is deasserted for device
    [   10.621124] remoteproc remoteproc0: 5000000.m4fss is available
    [   10.658258] remoteproc remoteproc0: powering up 5000000.m4fss
    [   10.658328] remoteproc remoteproc0: Booting fw image am62-mcu-m4f0_0-fw, size 55100
    
    // now we are initializing the VIRTIO buffers that are used for RPMsg communication
    [   10.689140] rproc-virtio rproc-virtio.1.auto: assigned reserved memory node m4f-dma-memory@9cb00000
    [   10.772318] virtio_rpmsg_bus virtio0: rpmsg host is online
    [   10.772764] rproc-virtio rproc-virtio.1.auto: registered virtio0 (type 7)
    
    // M4F has been successfully initialized
    [   10.772789] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
    
    // the RPMsg code on the M4F defined 2 "endpoints" at 0xD and 0xE
    // now Linux creates 2 RPMsg channels to communicate with those 2 endpoints
    [   10.772865] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
    [   10.773133] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe
    
    // note that Linux only has visibility into whether it initialized the M4F subsystem and loaded the firmware
    // Linux does NOT have direct visibility into the current state of the M4F software
    // so Linux will still say "running" even if the M4F code has crashed
    
    // earlier in the boot log, we saw that remoteproc0 = M4F for this specific boot
    root@am62xx-evm:~# cat /sys/class/remoteproc/remoteproc0/state
    running
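
    The checks above can be rolled into one small pass/fail script. This is a hedged sketch, not TI tooling: the marker strings are the ones that appear in the M4F boot log above, and `boot.log` is an assumed filename for a saved dmesg capture:

```shell
# On a real target:  dmesg > boot.log
# The sample lines below are copied from the AM62x M4F boot log above.
cat > boot.log <<'EOF'
[   10.772318] virtio_rpmsg_bus virtio0: rpmsg host is online
[   10.772789] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
[   10.772865] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
EOF

# "is now up" = remoteproc booted the core; "is now attached" would mean
# the core was already running when Linux came up (e.g. the DM R5F case)
for marker in "rpmsg host is online" "is now up" "creating channel"; do
  if grep -q "$marker" boot.log; then
    echo "PASS: $marker"
  else
    echo "FAIL: $marker"
  fi
done
```

    A FAIL on any marker points you back at the matching numbered condition at the top of this FAQ.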