
RE: PROCESSOR-SDK-AM62X: How to run an M4F example that communicates with Linux?

[thread is a followup to https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1344067/processor-sdk-am62x-setup-the-development-environment-for-m4f-core]

Hi Nick,

Ok, I will propose this development board to management and see if they agree to get one for me. Thanks for the suggestion.

I have modified my Linux system to overlay the "/lib/firmware" folder so that it is now writable for the M4F application firmware.

I am also able to boot the M4F core from the Linux console:


root@p550:/lib/firmware# ls /sys/class/remoteproc/
remoteproc0
root@p550:/lib/firmware# ls /sys/class/remoteproc/remoteproc0
coredump  device  firmware  name  power  recovery  state  subsystem  uevent
root@p550:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
[44905.350246] remoteproc remoteproc0: powering up 5000000.m4fss
[44905.357308] remoteproc remoteproc0: Booting fw image am62-mcu-m4f0_0-fw, size 484732
[44905.373453] rproc-virtio rproc-virtio.4.auto: assigned reserved memory node m4f-dma-memory@9cb00000
[44905.388573] virtio_rpmsg_bus virtio0: rpmsg host is online
[44905.388983] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
[44905.395782] rproc-virtio rproc-virtio.4.auto: registered virtio0 (type 7)
[44905.413452] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
[44905.424965] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe
root@p550:/lib/firmware# echo stop > /sys/class/remoteproc/remoteproc0/state
[44909.449809] remoteproc remoteproc0: stopped remote processor 5000000.m4fss
root@p550:/lib/firmware#
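
For reference, the image that remoteproc loads can also be selected through the standard remoteproc sysfs interface before starting the core (the firmware name below is the one from the log above, and must exist under /lib/firmware):

```shell
# Select which image under /lib/firmware the M4F loads, then boot it.
# (Standard remoteproc sysfs interface; run as root on the target.)
echo am62-mcu-m4f0_0-fw > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state

# Stop the core again when done:
echo stop > /sys/class/remoteproc/remoteproc0/state
```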



The M4F core application is "/ti/mcu_plus_sdk_am62x_09_02_00_38/examples/drivers/ipc/ipc_rpmsg_echo_linux/am62x-sk/m4fss0-0_freertos/ti-arm-clang/ipc_rpmsg_echo_linux.release.out" from the SDK.


So, the next question is: which Linux example application can communicate with this M4F application?

rgds,

kc Wong

  • Hello kc Wong,

    I am glad to hear that you are able to move forward!

    Please start by running the out-of-the-box IPC Echo example. You can find information on how to run it in the AM62x academy, Linux module > evaluating Linux > IPC Example:
    https://dev.ti.com/tirex/explore/node?node=A__AXINfJJ0T8V7CR5pTK41ww__AM62-ACADEMY__uiYMDcq__LATEST

    Once you are ready to start building userspace code (or kernel space code), you can move on to the
    Multicore module > How to develop with RPMsg IPC
    https://dev.ti.com/tirex/explore/node?node=A__AVjm7chph.4Q-bCWodAr.w__AM62-ACADEMY__uiYMDcq__LATEST

    NOTE: if you decide to build the RPMsg kernel-space example, the CROSS_COMPILE used in the SDK 9.0 version of the academy no longer applies to SDKs 9.1 & 9.2. I would use the CROSS_COMPILE from the SDK docs here:
    SDK 9.1: https://software-dl.ti.com/processor-sdk-linux/esd/AM62X/09_01_00_08/exports/docs/linux/Foundational_Components_Kernel_Users_Guide.html#overview
    SDK 9.2: https://software-dl.ti.com/processor-sdk-linux/esd/AM62X/09_02_01_09/exports/docs/linux/Foundational_Components_Kernel_Users_Guide.html#overview
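
    For SDKs 9.1/9.2 the out-of-tree module build typically looks roughly like the sketch below. The install path and kernel directory are placeholders, and the CROSS_COMPILE prefix should be taken from the Kernel User's Guide linked above for your exact SDK version:

```shell
# Sketch: cross-compiling an out-of-tree kernel module for AM62x.
# All paths are placeholders; adjust to your SDK installation.
export ARCH=arm64
export CROSS_COMPILE=aarch64-oe-linux-   # verify against the Kernel User's Guide
export PATH=/path/to/sdk-toolchain/bin:$PATH

# Build the module sources in the current directory against the SDK kernel tree:
make -C /path/to/ti-linux-kernel M="$PWD" modules
```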

    Regards,

    Nick

  • Please note that my guidance above assumes you are building and running the IPC Echo example at
    mcu_plus_sdk_am62x_09_02_00_38/examples/drivers/ipc/ipc_rpmsg_echo_linux

    If you are just running a Hello World example, the most you can do is use Linux to inspect the memory log to see the trace of the Hello World output. For guidance on how to do that, please follow
    AM62x academy, multicore module, page "Application Development on Remote Cores"
    https://dev.ti.com/tirex/explore/node?node=A__AVn3JGT9fqm0PbS.pegO-g__AM62-ACADEMY__uiYMDcq__LATEST

    Regards,

    Nick

  • Hi Nick,

    The kernel module "rpmsg_client_sample" is already included in our Linux image. I am able to load and run the kernel module.

    root@p550:~# ls -al /lib/modules/6.1.46-g7d494fe58c/kernel/samples/rpmsg/
    total 8
    drwxr-xr-x 2 root root   45 Apr  5  2011 .
    drwxr-xr-x 3 root root   28 Apr  5  2011 ..
    -rw-r--r-- 1 root root 7784 Apr  5  2011 rpmsg_client_sample.ko
    root@p550:~#
    root@p550:~# echo start > /sys/class/remoteproc/remoteproc0/state
    [   57.647264] remoteproc remoteproc0: powering up 5000000.m4fss
    [   57.653954] remoteproc remoteproc0: Booting fw image am62-mcu-m4f0_0-fw, size 484732
    [   57.680538] rproc-virtio rproc-virtio.3.auto: assigned reserved memory node m4f-dma-memory@9cb00000
    [   57.693937] virtio_rpmsg_bus virtio0: rpmsg host is online
    [   57.695177] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
    [   57.704254] rproc-virtio rproc-virtio.3.auto: registered virtio0 (type 7)
    [   57.710871] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe
    [   57.723420] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
    root@p550:~# modprobe rpmsg_client_sample count=10
    [   97.896424] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: new channel: 0x401 -> 0xd!
    [   97.905187] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
    root@p550:~# [   97.930742] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
    [   97.945068] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
    [   97.955539] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
    [   97.964997] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
    [   97.975240] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
    [   97.986152] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
    [   97.997618] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
    [   98.008505] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
    [   98.019121] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
    [   98.030459] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: goodbye!
    root@p550:~# echo stop > /sys/class/remoteproc/remoteproc0/state
    [  112.593612] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: rpmsg sample client driver is removed
    [  112.628441] remoteproc remoteproc0: stopped remote processor 5000000.m4fss
    root@p550:~#


    I am also able to run the user-space example "rpmsg_char_simple", which I built from the git repository below.
    git://git.ti.com/rpmsg/ti-rpmsg-char.git
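
    For reference, the build was roughly the usual autotools cross-compile sequence (the toolchain triple below is a placeholder; use the cross toolchain from your SDK):

```shell
# Sketch: cross-compiling the ti-rpmsg-char library and examples.
git clone git://git.ti.com/rpmsg/ti-rpmsg-char.git
cd ti-rpmsg-char
autoreconf -i
./configure --host=aarch64-none-linux-gnu   # placeholder triple
make
# The library and examples/rpmsg_char_simple are then copied to the target.
```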

    root@p550:/lib/firmware# LD_LIBRARY_PATH="/lib/firmware/" ./rpmsg_char_simple -r 9 -n 3
    Created endpt device rpmsg-char-9-790, fd = 4 port = 1025
    Exchanging 3 messages with rpmsg device rpmsg-char-9-790 on rproc id 9 ...
    
    Sending message #0: hello there 0!
    Receiving message #0: hello there 0!
    Sending message #1: hello there 1!
    Receiving message #1: hello there 1!
    Sending message #2: hello there 2!
    Receiving message #2: hello there 2!
    
    Communicated 3 messages successfully on rpmsg-char-9-790
    
    TEST STATUS: PASSED
    root@p550:/lib/firmware#
    


    rgds,

    kc Wong

  • Hi Nick,

    I am thinking of baselining on both "ipc_rpmsg_echo_linux" and "rpmsg_char_simple", plus the "ti-rpmsg-char" library, and modifying them for our application.

    What is the license for this source code? Is there anything we need to pay attention to when using it?

    Also, is there an example of how to enable the interface below?

    /sys/class/remoteproc/remoteproc0/trace

    rgds,

    kc Wong 

  • Hello KC,

    Trace log

    For enabling the trace interface, please reference the document I pointed you to in the previous response:
    AM62x academy, multicore module, page "Application Development on Remote Cores"
    https://dev.ti.com/tirex/explore/node?node=A__AVn3JGT9fqm0PbS.pegO-g__AM62-ACADEMY__uiYMDcq__LATEST

    "It can also be helpful for debugging to go to “TI DRIVER PORTING LAYER” > Debug Log and enable “Enable Memory Log”."

    I've made a note to explicitly rewrite those steps in the section "Check the trace log".

    License for source code 

    License for the ti-rpmsg-char userspace library is in the manifest file here:
    https://git.ti.com/cgit/rpmsg/ti-rpmsg-char/tree/

    It looks like the MCU+ SDK's software manifest is under the docs/ folder of the SDK.

    Regards,

    Nick

    Thanks Nick. I can see the debug trace now; I was looking at the wrong interface, "/sys/class/remoteproc/remoteproc0/trace".


    root@p550:/lib/firmware#  cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
    [m4f0-0]     0.000740s : [IPC RPMSG ECHO] Version: REL.MCUSDK.09.02.00.38+ (Apr  3 2024 09:17:56):
    [m4f0-0]     0.018410s : [IPC RPMSG ECHO] Remote Core waiting for messages at end point 13 ... !!!
    [m4f0-0]     0.021990s : [IPC RPMSG ECHO] Remote Core waiting for messages at end point 14 ... !!!
    root@p550:/lib/firmware# 



    However, it seems that I do not have to enable “Enable Memory Log” to see the debug trace.




    rgds,

    kc Wong

  • Hello kc Wong,

    Glad to hear you are making progress!

    Hmm, for some reason I remember not seeing the Print statements show up in the Linux trace log until I enabled "Enable Memory Log"... perhaps there was something else that I changed, or that was needed for previous versions of the MCU+ SDK, but not current versions? I'll take a note to double-check that functionality when I have some spare time in the future.

    Regards,

    Nick

  • Hi Nick,

    The "ipc_rpmsg_echo_linux" example application creates 2 echo endpoints at 13 and 14.

    root@p550:/lib/firmware#  cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
    [m4f0-0]     0.000740s : [IPC RPMSG ECHO] Version: REL.MCUSDK.09.02.00.38+ (Apr  3 2024 09:17:56):
    [m4f0-0]     0.018410s : [IPC RPMSG ECHO] Remote Core waiting for messages at end point 13 ... !!!
    [m4f0-0]     0.021990s : [IPC RPMSG ECHO] Remote Core waiting for messages at end point 14 ... !!!
    root@p550:/lib/firmware# 

    In our application based on this example, shall we remove these two echo endpoints and create a single new endpoint at 15?

    Is there any documentation I can refer to when working on remote-core IPC development?

    I am aware of the "0001-Linux_RPMsg_Echo-add-additional-endpoints.patch" patch. I am just not sure whether I should remove the two echo endpoints and create a new endpoint with a different number, or whether I must keep the original two endpoints in our application.

    I ask because the build seems to run out of memory after adding just one additional endpoint.

    makefile:145: recipe for target 'ipc_rpmsg_linux.out' failed
    "../linker.cmd", line 30: error #10099-D: program will not fit into available memory, or the section contains a call site that requires a trampoline that can't be generated for this section. placement with alignment fails for section ".rodata" size 0x1d96.  Available memory ranges:
       M4F_DRAM     size: 0x10000      unused: 0x1308       max hole: 0x1308    
    error #10010: errors encountered during linking; "ipc_rpmsg_linux.out" not built
    tiarmclang: error: tiarmlnk command failed with exit code 1 (use -v to see invocation)
    gmake[1]: *** [ipc_rpmsg_linux.out] Error 1
    gmake: *** [all] Error 2
    makefile:141: recipe for target 'all' failed

    Also, I don't understand why I can still write to and read from /dev/rpmsg0 after shutting down the remote core.

    root@p550:/lib/firmware# echo stop > /sys/class/remoteproc/remoteproc0/state
    [1043657.487802] remoteproc remoteproc0: stopped remote processor 5000000.m4fss
    root@p550:/lib/firmware# cat /sys/class/remoteproc/remoteproc0/state
    offline
    root@p550:/lib/firmware# echo "hello" > /dev/rpmsg0
    root@p550:/lib/firmware# cat /dev/rpmsg0
    hello
    root@p550:/lib/firmware#
    

    rgds,

    kc Wong 

  • Hello KC Wong,

    Endpoints - do they matter for graceful shutdown?

    My understanding is that shutdown or suspend messages from the remoteproc driver do NOT depend on the exact endpoint number. If you want your code to support graceful shutdown, or if you want the AM62x to be able to enter low-power modes, you can find more information here:

    https://software-dl.ti.com/mcu-plus-sdk/esd/AM62X/09_02_00_38/exports/docs/api_guide_am62x/GRACEFUL_REMOTECORE_SHUTDOWN.html

    https://dev.ti.com/tirex/explore/node?node=A__AVjm7chph.4Q-bCWodAr.w__AM62-ACADEMY__uiYMDcq__LATEST
    section "Graceful shutdown"

    Ok, so how many endpoints do I need? 

    As far as I am aware, none of the specific endpoints are REQUIRED. So you would only use the number of endpoints that your application actually needs.

    Let's say you are writing your own userspace application, and you are using the ti-rpmsg-char example as your reference:
    https://git.ti.com/cgit/rpmsg/ti-rpmsg-char/tree/examples/rpmsg_char_simple.c

    You'll notice that example uses REMOTE_ENDPT = 14. So in that case, you just need a single endpoint in your MCU+ project, and that single endpoint needs to have the same number as REMOTE_ENDPT does in your userspace code.
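
    On the MCU+ side, that single endpoint would look roughly like the sketch below, based on the ipc_rpmsg_echo example. This is a sketch only: please verify names and signatures against the IPC RPMessage section of your MCU+ SDK version's API guide.

```c
/* Sketch only: based on the MCU+ SDK ipc_rpmsg_echo example.
 * Verify names/signatures against your SDK version's API guide. */
#include "ti_drivers_open_close.h"   /* generated by SysConfig */
#include <drivers/ipc_rpmsg.h>

#define MY_ENDPT   (14U)   /* must match REMOTE_ENDPT in the userspace code */

RPMessage_Object gMsgObject;

void echo_task(void *args)
{
    RPMessage_CreateParams createParams;
    char buf[96];
    uint16_t len, remoteCoreId, remoteEndPt;

    /* Create the local endpoint that Linux will send to. */
    RPMessage_CreateParams_init(&createParams);
    createParams.localEndPt = MY_ENDPT;
    RPMessage_construct(&gMsgObject, &createParams);

    while (1) {
        len = sizeof(buf);
        /* Block until Linux sends to endpoint 14, then echo it back. */
        RPMessage_recv(&gMsgObject, buf, &len,
                       &remoteCoreId, &remoteEndPt, SystemP_WAIT_FOREVER);
        RPMessage_send(buf, len, remoteCoreId, remoteEndPt,
                       MY_ENDPT, SystemP_WAIT_FOREVER);
    }
}
```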

    On the other hand, if you have two different userspace tasks, and you wanted each task to be able to trigger different code to run in your M4F, then you might create two endpoints in the MCU+ project. And so on.

    What about /dev/rpmsg0? 

    It is possible that /dev/rpmsg0 belongs to the firmware running on the DM R5F, and there is a different /dev/rpmsg1 that belongs to the M4F. This is what I see when I check the default filesystem:

    root@am62xx-evm:~# cat /dev/rpmsg
    rpmsg0       rpmsg1       rpmsg_ctrl0  rpmsg_ctrl1
    

    Regards,

    Nick