
AM625: IPC error

Part Number: AM625

1. I am adapting the latest M-core software development package, with A-core Linux version 09.0000.03 and M-core version 09_00_00_19.

2. I am testing the ipc_rpmsg_echo_linux example for m4fss0-0_freertos. I compiled the M-core program with CCS, placed am62-mcu-m4f0_0-fw under the /lib/firmware directory, and it is loaded automatically at boot. It works just fine.

3. I compiled the A-core program ti-rpmsg-char and put it under /home/root, but running it produces an error.

4. I am using the official TI development board, and I have not made any changes to the A core or M core. Why does this error occur?
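For context, the ti-rpmsg-char package is normally exercised through its rpmsg_char_simple test app. Below is a minimal sketch of running it on the target, assuming the binary was copied to /home/root; the flags and the default remote-core id are assumptions, and the correct id for the M4F must be taken from rproc_id.h in your ti-rpmsg-char version:

```shell
# Hedged sketch: invoking the ti-rpmsg-char test app on the target.
# The -r value is a placeholder; look up the M4F id in rproc_id.h of the
# ti-rpmsg-char version that matches your SDK.
app=/home/root/rpmsg_char_simple
if [ -x "$app" ]; then
    "$app" -r "${RPROC_ID:-9}" -n 10   # -r: remote core id, -n: message count
    result="ran $app"
else
    result="skipped ($app not found; not running on the target?)"
fi
echo "$result"
```

A version mismatch between ti-rpmsg-char and the SDK is a common source of errors here, which is why checking rproc_id.h against your installed library matters.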

  • Hello Yu,

    Interesting. Do you get any useful output during boot time when initializing the M4F? Like this:

    root@am62xx-evm:~# dmesg | grep m4
    [    0.000000] OF: reserved mem: initialized node m4f-dma-memory@9cb00000, compatible id shared-dma-pool
    [    0.000000] OF: reserved mem: initialized node m4f-memory@9cc00000, compatible id shared-dma-pool
    [   19.393301] k3-m4-rproc 5000000.m4fss: assigned reserved memory node m4f-dma-memory@9cb00000
    [   19.394124] k3-m4-rproc 5000000.m4fss: configured M4 for remoteproc mode
    [   19.394287] k3-m4-rproc 5000000.m4fss: local reset is deasserted for device
    [   19.394775] remoteproc remoteproc0: 5000000.m4fss is available
    [   19.507892] remoteproc remoteproc0: powering up 5000000.m4fss
    [   19.507937] remoteproc remoteproc0: Booting fw image am62-mcu-m4f0_0-fw, size 54860
    [   19.540073] rproc-virtio rproc-virtio.2.auto: assigned reserved memory node m4f-dma-memory@9cb00000
    [   19.541327] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
    

    You can also connect to the M4F core with CCS to see exactly what the M4F is doing. See the new AM62x Multicore Academy > Application Development on Remote Cores > How to debug a remote core while Linux is running:

    https://dev.ti.com/tirex/explore/node?node=A__AVn3JGT9fqm0PbS.pegO-g__AM62-ACADEMY__uiYMDcq__LATEST 
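    The same boot-time information can also be checked after boot through sysfs. A small sketch, assuming remoteproc0 is the M4F instance (the index can differ per board and boot order):

```shell
# Hedged sketch: querying remoteproc state from sysfs. remoteproc0 may map
# to a different core on your board; list /sys/class/remoteproc to see all.
rproc=/sys/class/remoteproc/remoteproc0
if [ -d "$rproc" ]; then
    state=$(cat "$rproc/state")      # "running" once the firmware is up
    fw=$(cat "$rproc/firmware")      # should name am62-mcu-m4f0_0-fw
    echo "remoteproc0: $fw is $state"
else
    state="absent"
    echo "no remoteproc entry found (not running on the target?)"
fi
```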

    Regards,

    Nick

  • 1. I found the reason: the ti-rpmsg-char I was using was not the latest version. After replacing it with the latest, the test passes.
    2. I found a new phenomenon: by default, the R5F core sends 100,000 messages to the M core, which takes more than ten minutes. I noticed that tispl.bin contains ipc_echo_testb_mcu1_0_release_strip.xer5f, which provides the IPC function.

    3. I don't know which source file ipc_echo_testb_mcu1_0_release_strip.xer5f is compiled from. I want to modify it to reduce the number of messages sent from 100,000 to 1. How do I do that?
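    One way to start looking for that constant, assuming the count lives in the MCU+ SDK example source tree: the directory name below is an assumption, and grepping for the literal 100000 avoids guessing the exact file or symbol name.

```shell
# Hedged sketch: locating the default message count in the MCU+ SDK sources.
# Run from the MCU+ SDK root; the example directory name is an assumption.
dir=examples/drivers/ipc/ipc_rpmsg_echo_linux
if [ -d "$dir" ]; then
    grep -rn "100000" "$dir"                # find where the count is defined
    # once located, e.g.: sed -i 's/100000/1/' "$dir/<file-from-grep>"
    status="searched $dir"
else
    status="skipped ($dir not found; run from the MCU+ SDK root)"
fi
echo "$status"
```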

  • 1. If I want to compile that .xer5f file for the R5F core, do I just follow the instructions below?

    e2e.ti.com/.../am623-building-xer5f-file

  • Hello Yu,

    It should not take 10 minutes to send 100,000 RPMsg messages. That's 6 milliseconds per message on average, which is waaaaay too slow, especially for MCU+ core to MCU+ core. Without running tests, I would expect an average latency closer to 60 us if the cores are not doing other tasks as well.

    I think it is probably this MCU+ SDK example that is running out-of-the-box on both cores (in addition to the DM firmware task running on the R5F core):
    examples/drivers/ipc/ipc_rpmsg_echo_linux/

    I am going to point you to documentation that is for AM62Ax instead of AM62x, since we have not finished writing the AM62x MCU Academy yet. Also, note that this specific page talks about building a slightly different example, ipc_rpmsg_echo. Hopefully this is enough to get you going on rebuilding your project after you modify it: https://dev.ti.com/tirex/explore/node?node=A__ASXF9zmxJYKHaKJd2C9kbA__AM62A-ACADEMY__WeZ9SsL__LATEST
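    As a rough sketch of the rebuild step: the SDK location and make invocation below are assumptions, and the academy page linked above describes the authoritative flow, including repacking the boot image afterwards.

```shell
# Hedged sketch: rebuilding an MCU+ SDK example out of tree. Both the SDK
# path and the make invocation are assumptions; see the academy page for
# the exact per-board/per-core makefile location.
SDK=${MCU_PLUS_SDK_PATH:-$HOME/ti/mcu_plus_sdk}
dir=$SDK/examples/drivers/ipc/ipc_rpmsg_echo_linux
if [ -d "$dir" ]; then
    make -s -C "$dir" all PROFILE=release
    status="built from $dir"
else
    status="skipped ($dir not found; set MCU_PLUS_SDK_PATH)"
fi
echo "$status"
```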

    Regards,

    Nick