
TDA4VM: Re: [FAQ] TDA4VM: IPC_Test on PSDK QNX 7.2 / PSDK QNX 7.3

Part Number: TDA4VM

Hi Team,

While "[FAQ] TDA4VM: IPC_Test on PSDK QNX 7.2 / PSDK QNX 7.3" i tried to build "ex02_bios_multicore_echo_test_mcu1_0_release.xer5f" and "ex02_bios_multicore_echo_test_mpu1_0_release.xa72fg".

I then followed the linking procedure to link "ex02_bios_multicore_echo_test_mcu1_0_release" to "j7-mcu-r5f0_0-fw", which was successful.

In the provided "Boot Log and IPC Test Log PSDK QNX 7.2", at line number 108, "ipc_test" is run. So do we have to create ipc_test from "ex02_bios_multicore_echo_test_mpu1_0_release.xa72fg"?

If so, can you please tell me how I should do that?

Note that I am using PSDKRA 7.3 with Linux on the A72, not QNX.

Thanks,

Tanvi

  • Hi Tanvi,

    Does the following reflect a correct understanding of your request:

    • Looking to establish IPC communication between MPU1_0 and MCU1_0 using the echo test, with PSDK RTOS 7.3?

    Thanks,

    kb

  • Hi KB,

    Thanks for the quick reply.

    Yes, that's what I want to do.

    In order to do that, I have been trying to follow the forums and other queries that have already been answered.

    1.) --> I have also tried to run the ./vx_app_arm_ipc.out application in vision_apps, but the following is the log -->

    /******************************************************************************

    root@j7-evm:/opt/vision_apps# ./vx_app_arm_ipc.out
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    rpmsg_chrdev driver is not enabled/installed
    IPC: Init ... Done !!!
    APP: ERROR: IPC init failed !!!
    REMOTE_SERVICE: Init ... !!!
    rpmsg_char_open cannot be invoked without initialization
    rpmsg_char_open cannot be invoked without initialization
    rpmsg_char_open cannot be invoked without initialization
    rpmsg_char_open cannot be invoked without initialization
    rpmsg_char_open cannot be invoked without initialization
    REMOTE_SERVICE: Init ... Done !!!
    0.000000 s: GTC Frequency = 0 MHz
    APP: Init ... Done !!!
    0.000000 s: VX_ZONE_INIT:Enabled
    0.000000 s: VX_ZONE_ERROR:Enabled
    0.000000 s: VX_ZONE_WARNING:Enabled
    0.000000 s: VX_ZONE_INIT:[tivxInit:71] Initialization Done !!!
    0.000000 s: VX_ZONE_INIT:[tivxHostInit:48] Initialization Done for HOST !
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_0] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_2] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c7x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_0] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_2] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c7x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_0] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_2] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c7x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_0] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [mcu2_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_1] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c6x_2] failed !!!
    APP IPC: ERROR: Send msg 1 to CPU [c7x_1] failed !!!
    APP IPC: Waiting for all messages to get echoed from remote core...
    APP IPC: Waiting for all messages to get echoed ... Done.
    APP IPC: Running remote service test ...
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU mcu2_0 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_0 (port 21) cmd = 0x00001234, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes !
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_0 (port 21) cmd = 0x00005678, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_0 (port 21) cmd = 0x00000002, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU mcu2_1 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_1 (port 21) cmd = 0x00001234, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes !
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_1 (port 21) cmd = 0x00005678, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu2_1 (port 21) cmd = 0x00000002, prm_ss
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU c6x_1 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_1 (port 21) cmd = 0x00001234, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes !
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_1 (port 21) cmd = 0x00005678, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_1 (port 21) cmd = 0x00000002, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU c6x_2 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_2 (port 21) cmd = 0x00001234, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes !
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_2 (port 21) cmd = 0x00005678, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c6x_2 (port 21) cmd = 0x00000002, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU c7x_1 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c7x_1 (port 21) cmd = 0x00001234, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes !
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c7x_1 (port 21) cmd = 0x00005678, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> c7x_1 (port 21) cmd = 0x00000002, prm_sis
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CP!
    APP IPC: Running remote service test ... Done.
    0.000000 s: VX_ZONE_INIT:[tivxHostDeInit:56] De-Initialization Done for !
    0.000000 s: VX_ZONE_INIT:[tivxDeInit:111] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    IPC: Deinit ... !!!
    Segmentation fault (core dumped)

    ******************************************************************************/

    I tried "ls -l /sys/bus/rpmsg/devices/" but i got the following output -->

    root@j7-evm:~# ls -l /sys/bus/rpmsg/devices/
    total 0
    root@j7-evm:~#
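
    (For reference, an empty listing there usually means the rpmsg/remoteproc kernel modules never came up. A quick check with standard Linux commands, nothing SDK-specific; the module names are the ones listed later in this thread:)

    # are the rpmsg / remoteproc modules loaded at all?
    lsmod | grep -E 'rpmsg|remoteproc'
    # which remote cores does the kernel currently know about?
    ls /sys/class/remoteproc/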

    2.) --> I tried to make use of the UBOOT_DM and UBOOT_DM_R5 variables to load different applications in the following manner -->

    /*************************************************************

    UBOOT_DM=<INSTALL_DIR>/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/mcusw/binary/ipc_spi_slave_demo_app/bin/j721e_evm/ipc_spi_slave_demo_app_mpu1_0_release.xa72fg


     UBOOT_DM_R5=<INSTALL_DIR>/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/mcusw/binary/ipc_spi_master_demo_app/bin/j721e_evm/ipc_spi_master_demo_app_mcu1_0_release_strip.xer5f

    *************************************************************/

    and even the cross-compilation step was successful:

    make ARCH=arm \
      CROSS_COMPILE=<INSTALL_DIR>/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu/bin/aarch64-none-linux-gnu- \
      O=j721e-arm64 \
      ATF=<INSTALL_DIR>/ti-processor-sdk-linux-j7-evm-07_03_00_05/board-support/prebuilt-images/bl31.bin \
      TEE=<INSTALL_DIR>/ti-processor-sdk-linux-j7-evm-07_03_00_05/board-support/prebuilt-images/bl32.bin \
      DM=<INSTALL_DIR>/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/mcusw/binary/cdd_ipc_profile_app_rc_linux/bin/j721e_evm/ipc_spi_slave_demo_app_mpu1_0_release.xa72fg \
      -j8

    But the application didn't run. It ended with the following log -->

    /***************************

    U-Boot SPL 2020.01-dirty (Feb 14 2022 - 13:41:09 +0530)
    SYSFW ABI: 3.1 (firmware rev 0x0014 '20.8.5--v2020.08b (Te)
    Trying to boot from MMC2
    Loading Environment from MMC... *** Warning - No MMC card t

    Starting ATF on ARM64 core...

    NOTICE: BL31: v2.4(release):07.03.00.005-dirty
    NOTICE: BL31: Built : 00:15:40, Apr 10 2021

    ***************************/

    After this it didn't progress.

    3.) --> I tried to load using this method -->

    /****************************************************************

    => rproc init
    => rproc stop 3
    => rproc stop 2
    =>
    =>
    => load mmc 1:2 0x94000000 /lib/firmware/j7-main-r5f0_0-fw
    5265752 bytes read in 227 ms (22.1 MiB/s)
    => rproc load 2 0x94000000 0x${filesize}
    Load Remote Processor 2 with data@addr=0x94000000 5265752 bytes: Success!
    => rproc start 2
    => load mmc 1:2 0x90000000 /lib/firmware/ipc_mcu2_1
    4111136 bytes read in 177 ms (22.2 MiB/s)
    => rproc load 3 0x90000000 0x${filesize}
    Load Remote Processor 3 with data@addr=0x90000000 4111136 bytes: Success!
    => rproc start 3
    => boot

    ****************************************************************/

    But image loading wasn't successful, so I tried to use the default "ex02_bios_multicore_echo_test_mpu1_0_release.xa72fg" application, but I was stuck there as well. The main problem is loading the applications using SPL.

    I couldn't get a working method to load the images.

    I hope you can help me with the integration steps.

    I couldn't find any suitable IPC application build steps for PSDKRA 7.3; I only found them for QNX.

    I am not using CCS, so steps to load images without the use of CCS would be highly appreciated.

    Thanks & Regards,

    Tanvi

  • Hi Tanvi,

    You have mixed up a few different things. Can you also please clarify whether you are trying to run the sample on QNX or Linux? Your original question started out with QNX, but you are also trying to list /sys/bus/rpmsg/devices, which is a Linux concept and not relevant for QNX.

    1. ex02_bios_multicore_echo_test_mpu1_0_release.xa72fg is a firmware application for running RTOS on the A72. It is not applicable when using QNX or Linux on the A72.

    2. ex02_bios_multicore_echo_test_mcu1_0_release is not the correct image to use for MCU1_0. You need to use the firmware that relies on the R5F BTCM to boot: ex02_bios_multicore_echo_testb_freertos.xer5f.

    Please see section 4.5.2 Example Application of the QNX SDK documentation.

    3. The MCU1_0 firmware is not picked up from the filesystem by U-Boot. It has to be built into U-Boot, since MCU1_0 is also a boot processor. Please see section 8.3.3.3 SPL/uboot Loading of the RTOS SDK documentation.

    4. ipc_test is the standalone QNX IPC sample application. vx_app_arm_ipc.out is the IPC test at the Vision Apps layer.

    5. The QNX IPC test and the Vision Apps IPC test use very different firmwares. The firmwares use completely different memory maps, so you need to load the appropriate firmwares for each test. The SDK is set up to run Vision Apps by default on QNX. You need to rebuild the SDK with the appropriate memory map using the VISION_APPS_BUILD_FLAGS_MAK variable, as pointed out in the FAQ.

    We have improved this to use runtime selection for the IPC Resource Manager in the latest QNX SDK release.

    regards

    Suman

  • Hi Suman,

    Yes, I agree that I am confused about how to boot the other cores alongside Linux using PSDKRA 7.3.

    You have mixed up a few different things. Can you also please clarify whether you are trying to run the sample on QNX or Linux? Your original question started out with QNX, but you are also trying to list /sys/bus/rpmsg/devices, which is a Linux concept and not relevant for QNX.

    I am not using QNX; I am using Linux on the A72.

    4. ipc_test is the standalone QNX IPC sample application. vx_app_arm_ipc.out is the IPC test at the Vision Apps layer.

    So, we can't run ipc_test with Linux?

    3. The MCU1_0 firmware is not picked up from the filesystem by U-Boot. It has to be built into U-Boot, since MCU1_0 is also a boot processor. Please see section 8.3.3.3 SPL/uboot Loading of the RTOS SDK documentation.

    So, I have to set "UBOOT_DM = <PATH_TO_XER5F>/ex02_bios_multicore_echo_testb_freertos.xer5f" for a successful MCU1_0 boot.

    But what if I want to link another image as well? How should I do that?

    How do UBOOT_DM and UBOOT_DM_R5 differ?

    I tried "ls -l /sys/bus/rpmsg/devices/" but i got the following output -->

    root@j7-evm:~# ls -l /sys/bus/rpmsg/devices/
    total 0

    Can you also please suggest a solution for this?

    I tried make sdk_clean; make sdk_scrub; make sdk -j4 once again, just to see if the build had failed to include the rpmsg devices, but the output remains the same.

    vision_apps_init is successful.

    IPC: Init ... !!!
    rpmsg_chrdev driver is not enabled/installed
    IPC: Init ... Done !!!
    APP: ERROR: IPC init failed !!!

    I can't understand what other steps I should follow to remove this error.

    3.) --> I tried to load using this method -->

    Can you advise on the use of this method?

    I hope that this time I was clear, and I am eagerly waiting for the above solutions.

    Note : I am not using CCS.

    Thanks & Regards,

    Tanvi

  • Hi,

    So, we were successful in the IPC echo test using ./vx_app_arm_ipc.out.

    The following is the procedure I followed (just writing it here since it worked for me) -->

    Search for the following kernel module (.ko) files on your system:

    1.) rpmsg_char.ko

    2.) pru_rproc.ko

    3.) ti_k3_r5_remoteproc.ko

    4.) virtio_rpmsg_bus.ko

    5.) ti_k3_dsp_remoteproc.ko

    Then, in the order in which I have listed those files, do "insmod" from "/lib/modules/5.4.74-g9574bba32a".

    Load all the modules. Then go to "/opt/vision_apps/" and run source ./vision_apps_init.sh.

    Then, upon successful init, run "./vx_app_arm_ipc.out" (the full command sequence is sketched below).
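
    (For reference, here is the whole sequence in one place. This is just a sketch of the steps above; the kernel version directory is from my board and the commands assume the .ko files sit directly in that directory, so adjust the paths as needed.)

    /****************************************************************

    cd /lib/modules/5.4.74-g9574bba32a
    # load the modules in the order listed above
    insmod rpmsg_char.ko
    insmod pru_rproc.ko
    insmod ti_k3_r5_remoteproc.ko
    insmod virtio_rpmsg_bus.ko
    insmod ti_k3_dsp_remoteproc.ko
    # then initialize vision apps and run the IPC test
    cd /opt/vision_apps/
    source ./vision_apps_init.sh
    ./vx_app_arm_ipc.out

    ****************************************************************/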

    If everything goes fine, you should get the following log -->

    vxApp_arm_ipcLog.txt:
    root@j7-evm:~# cd /opt/vision_apps/
    root@j7-evm:/opt/vision_apps# ./vx_app_arm_ipc.out
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
    177.394931 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
    177.395017 s: VX_ZONE_INIT:Enabled
    177.395023 s: VX_ZONE_ERROR:Enabled
    177.395028 s: VX_ZONE_WARNING:Enabled
    177.398851 s: VX_ZONE_INIT:[tivxInit:71] Initialization Done !!!
    177.398995 s: VX_ZONE_INIT:[tivxHostInit:48] Initialization Done for HOST !!!
    APP IPC: Waiting for all messages to get echoed from remote core...
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0000
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Now, if you check the remote procs, the following list should be available -->

    remoteprocLog.txt:
    root@j7-evm:/opt/vision_apps# cd
    root@j7-evm:~# ls -l /sys/bus/rpmsg/devices/
    total 0
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio0.rpmsg_chrdev.-1.13 -> ../../../devices/platform/bus@100000/bus@100000:bus@28380000/bus@100000:b3
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio0.rpmsg_chrdev.-1.21 -> ../../../devices/platform/bus@100000/bus@100000:bus@28380000/bus@100000:b1
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio0.ti.ipc4.ping-pong.-1.14 -> ../../../devices/platform/bus@100000/bus@100000:bus@28380000/bus@1004
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio1.rpmsg_chrdev.-1.13 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5d00000.r5f3
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio1.rpmsg_chrdev.-1.21 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5d00000.r5f1
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio1.ti.ipc4.ping-pong.-1.14 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5d00004
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio2.rpmsg-kdrv.-1.26 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5c00000.r5f/r6
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio2.rpmsg_chrdev.-1.13 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5c00000.r5f3
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio2.rpmsg_chrdev.-1.21 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5c00000.r5f1
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio2.ti.ethfw.notifyservice.-1.30 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/50
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio2.ti.ipc4.ping-pong.-1.14 -> ../../../devices/platform/bus@100000/bus@100000:r5fss@5c00000/5c00004
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio3.rpmsg_chrdev.-1.13 -> ../../../devices/platform/bus@100000/4d80800000.dsp/remoteproc/remoteproc3
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio3.rpmsg_chrdev.-1.21 -> ../../../devices/platform/bus@100000/4d80800000.dsp/remoteproc/remoteproc1
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio3.ti.ipc4.ping-pong.-1.14 -> ../../../devices/platform/bus@100000/4d80800000.dsp/remoteproc/remot4
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio4.rpmsg_chrdev.-1.13 -> ../../../devices/platform/bus@100000/4d81800000.dsp/remoteproc/remoteproc3
    lrwxrwxrwx 1 root root 0 Nov 19 18:27 virtio4.rpmsg_chrdev.-1.21 -> ../../../devices/platform/bus@100000/4d81800000.dsp/remoteproc/remoteproc1
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Also make sure that you have the following firmware files in /lib/firmware/pdk-ipc (see also the note after this listing) -->

    root@j7-evm:/lib/firmware/pdk-ipc# ls
    ipc_echo_test_c66xdsp_1_release_strip.xe66 ipc_echo_test_mcu1_1_release_strip.xer5f ipc_echo_test_mcu3_0_release_strip.xer5f
    ipc_echo_test_c66xdsp_2_release_strip.xe66 ipc_echo_test_mcu2_0_release_strip.xer5f ipc_echo_test_mcu3_1_release_strip.xer5f
    ipc_echo_test_c7x_1_release_strip.xe71 ipc_echo_test_mcu2_1_release_strip.xer5f ipc_echo_testb_mcu1_0_release_strip.xer5f
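
    (For reference, the kernel picks these up through the generic /lib/firmware/j7-*-fw names seen in the boot logs, so switching which image a core runs is a matter of re-pointing the corresponding symlink, assuming the stock filesystem where these names are symlinks. A sketch using the main-domain R5F core 0 (MCU2_0) as an example; the name-to-core mapping is an assumption here, so please confirm it for your setup:)

    # example: point the main R5F0_0 firmware name at the PDK IPC echo test image
    cd /lib/firmware
    ln -sf pdk-ipc/ipc_echo_test_mcu2_0_release_strip.xer5f j7-main-r5f0_0-fw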

    Thanks & Regards,

    Tanvi

  • Hi Suman,

    I tried to run vx_app_arm_ipc.out after making the following changes in this file (see the rebuild note after the snippet) -->

    /home/linux/Documents/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/vision_apps/apps/basic_demos/app_tirtos/common/app_cfg.h

    /**********************************************

    #define ENABLE_IPC_MPU1_0
    #define ENABLE_IPC_MCU1_0
    //#define ENABLE_IPC_MCU1_1
    //#define ENABLE_IPC_MCU2_0
    //#define ENABLE_IPC_MCU2_1
    //#define ENABLE_IPC_MCU3_0
    //#define ENABLE_IPC_MCU3_1
    //#define ENABLE_IPC_C6x_1
    //#define ENABLE_IPC_C6x_2
    //#define ENABLE_IPC_C7x_1

    **********************************************/
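
    (For reference, after changing app_cfg.h the vision_apps libraries and firmwares have to be rebuilt for the change to take effect. A sketch of the rebuild on the host PC, reusing the same commands mentioned earlier in this thread; the build directory is an assumption:)

    # assumed location: the vision_apps folder of the RTOS SDK on the host PC
    cd <INSTALL_DIR>/ti-processor-sdk-rtos-j721e-evm-07_03_00_07/vision_apps
    make sdk_clean; make sdk_scrub; make sdk -j4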

    After which, when I ran ./vx_app_arm_ipc.out, I got the following output -->

    /***************************************************************************************************************************

    root@j7-evm:/opt/vision_apps# ./vx_app_arm_ipc.out
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    _rpmsg_char_find_ctrldev: could not find the matching rpmsg_ctrl device for virtio0.rpmsg_chrdev.-1.21
    REMOTE_SERVICE: Init ... Done !!!
    0.000000 s: GTC Frequency = 0 MHz
    APP: Init ... Done !!!
    0.000000 s: VX_ZONE_INIT:Enabled
    0.000000 s: VX_ZONE_ERROR:Enabled
    0.000000 s: VX_ZONE_WARNING:Enabled
    0.000000 s: VX_ZONE_INIT:[tivxInit:71] Initialization Done !!!
    0.000000 s: VX_ZONE_INIT:[tivxHostInit:48] Initialization Done for HOST !!!
    ******************DEBUG 1**********************
    ******************DEBUG 1**********************
    ******************DEBUG 1**********************
    ******************DEBUG 1**********************
    APP IPC: Waiting for all messages to get echoed from remote core...
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0000
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0001
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0002
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0003
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0004
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0005
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0006
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0007
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0008
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0009
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000a
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000b
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000c
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000d
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000e
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000f
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0000
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0001
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0002
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0003
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0004
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0005
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0006
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0007
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0008
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0009
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000a
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000b
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000c
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000d
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000e
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000f
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0000
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0001
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0002
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0003
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0004
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0005
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0006
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0007
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0008
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0009
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000a
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000b
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000c
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000d
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000e
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000f
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0000
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0001
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0002
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0003
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0004
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0005
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0006
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0007
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0008
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead0009
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000a
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000b
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000c
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000d
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000e
    IPC: RX: mcu1_0 -> mpu1_0 (port 13) msg = 0xdead000f
    APP IPC: Waiting for all messages to get echoed ... Done.
    APP IPC: Running remote service test ...
    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU mcu1_0 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu1_0 (port 21) cmd = 0x00001234, prm_size = 4 bytes
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running test @ 0xb8000000 of 1024 bytes size for CPU mcu1_0 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu1_0 (port 21) cmd = 0x00005678, prm_size = 4 bytes
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CPU mcu1_0 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu1_0 (port 21) cmd = 0x00000002, prm_size = 4 bytes
    0.000000 s: REMOTE_SERVICE_TEST: ERROR: Timer test !!!
    0.000000 s: REMOTE_SERVICE_TEST: Running timer test of 10000 msecs for CPU mcu1_0 ... DONE !!!
    APP IPC: Running remote service test ... Done.
    0.000000 s: VX_ZONE_INIT:[tivxHostDeInit:56] De-Initialization Done for HOST !!!
    0.000000 s: VX_ZONE_INIT:[tivxDeInit:111] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    IPC: Deinit ... !!!
    IPC: DeInit ... Done !!!
    MEM: Deinit ... !!!
    MEM: Alloc's: 1 alloc's of 1024 bytes
    MEM: Free's : 1 free's of 1024 bytes
    MEM: Open's : 0 allocs of 0 bytes
    MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    APP IPC: Done !!!
    root@j7-evm:/opt/vision_apps#

    *************************************************************************************************************************/

    It gave -->

    0.000000 s: REMOTE_SERVICE_TEST: Running test for CPU mcu1_0 !!!
    REMOTE_SERVICE: TX: FAILED: mpu1_0 -> mcu1_0 (port 21) cmd = 0x00001234, prm_size = 4 bytes
    0.000000 s: REMOTE_SERVICE_TEST: Test failed @ iteration 0 !!!

    When I do this --> insmod ti_k3_r5_remoteproc.ko

    I get the following log -->

    /****************************************************************

    root@j7-evm:/lib/modules/5.4.74-g9574bba32a# insmod ti_k3_r5_remoteproc.ko
    [ 139.910898] platform 41000000.r5f: R5F core may have been powered on by a different host, programmed state (0) != actual state (1)
    [ 139.925517] platform 41000000.r5f: configured R5F for IPC-only mode
    [ 139.933468] platform 41000000.r5f: assigned reserved memory node vision-apps-r5f-dma-memory@a0000000
    [ 139.944565] remoteproc remoteproc0: 41000000.r5f is available
    [ 139.952777] platform 5c00000.r5f: configured R5F for IPC-only mode
    [ 139.959469] platform 5c00000.r5f: assigned reserved memory node vision-apps-r5f-dma-memory@a2000000
    [ 139.970307] remoteproc remoteproc0: powering up 41000000.r5f
    [ 139.970635] remoteproc remoteproc1: 5c00000.r5f is available
    [ 139.975994] remoteproc remoteproc0: Booting fw image j7-mcu-r5f0_0-fw, size 257144
    [ 139.989437] platform 5d00000.r5f: configured R5F for IPC-only mode
    [ 139.989667] platform 41000000.r5f: R5F core initialized in IPC-only mode
    [ 139.995703] platform 5d00000.r5f: assigned reserved memory node vision-apps-r5f-dma-memory@a4000000
    [ 140.003612] remoteproc0#vdev0buffer: assigned reserved memory node vision-apps-r5f-dma-memory@a0000000
    [ 140.021109] remoteproc remoteproc2: 5d00000.r5f is available
    [ 140.023217] virtio_rpmsg_bus virtio0: rpmsg host is online
    [ 140.036036] platform 5e00000.r5f: configured R5F for remoteproc mode
    [ 140.036310] remoteproc0#vdev0buffer: registered virtio0 (type 7)
    [ 140.049080] platform 5e00000.r5f: assigned reserved memory node vision-apps-r5f-dma-memory@a6000000
    [ 140.051388] remoteproc remoteproc0: remote processor 41000000.r5f is now up
    [ 140.061715] remoteproc remoteproc3: 5e00000.r5f is available
    [ 140.071032] remoteproc remoteproc3: Direct firmware load for j7-main-r5f1_0-fw failed with error -2
    [ 140.074836] platform 5f00000.r5f: configured R5F for remoteproc mode
    [ 140.080121] remoteproc remoteproc3: powering up 5e00000.r5f
    [ 140.090599] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xd
    [ 140.092609] remoteproc remoteproc3: Direct firmware load for j7-main-r5f1_0-fw failed with error -2
    [ 140.106809] platform 5f00000.r5f: assigned reserved memory node vision-apps-r5f-dma-memory@a7000000
    [ 140.108145] remoteproc remoteproc3: request_firmware failed: -2
    [ 140.122719] remoteproc remoteproc4: 5f00000.r5f is available
    [ 140.130845] remoteproc remoteproc4: Direct firmware load for j7-main-r5f1_1-fw failed with error -2
    root@j7-evm:/lib/modules/5.4.74-g9574bba32a# [ 140.145755] remoteproc remoteproc4: powering up 5f00000.r5f
    [ 140.154029] remoteproc remoteproc4: Direct firmware load for j7-main-r5f1_1-fw failed with error -2
    [ 140.163138] remoteproc remoteproc4: request_firmware failed: -2
    [ 140.281089] remoteproc remoteproc2: powering up 5d00000.r5f
    [ 140.286689] remoteproc remoteproc2: Booting fw image j7-main-r5f0_1-fw, size 1968128
    [ 140.294725] remoteproc remoteproc1: powering up 5c00000.r5f
    [ 140.300301] remoteproc remoteproc1: Booting fw image j7-main-r5f0_0-fw, size 3975576
    [ 140.308227] platform 5d00000.r5f: R5F core initialized in IPC-only mode
    [ 140.314857] remoteproc2#vdev0buffer: assigned reserved memory node vision-apps-r5f-dma-memory@a4000000
    [ 140.324320] platform 5c00000.r5f: R5F core initialized in IPC-only mode
    [ 140.330937] remoteproc1#vdev0buffer: assigned reserved memory node vision-apps-r5f-dma-memory@a2000000
    [ 140.341046] virtio_rpmsg_bus virtio1: rpmsg host is online
    [ 140.346674] remoteproc2#vdev0buffer: registered virtio1 (type 7)
    [ 140.352805] virtio_rpmsg_bus virtio1: creating channel rpmsg_chrdev addr 0xd
    [ 140.353368] virtio_rpmsg_bus virtio2: rpmsg host is online
    [ 140.365321] remoteproc remoteproc2: remote processor 5d00000.r5f is now up
    [ 140.369284] virtio_rpmsg_bus virtio2: creating channel rpmsg_chrdev addr 0xd
    [ 140.376557] remoteproc1#vdev0buffer: registered virtio2 (type 7)
    [ 140.389626] remoteproc remoteproc1: remote processor 5c00000.r5f is now up

    [MCU2_1] 307.521485 s: IPC: Init ... Done !!!
    [MCU2_1] 307.521564 s: APP: Syncing with 5 CPUs ... !!!
    [MCU2_0] 307.526476 s: IPC: HLOS is ready !!!
    [MCU2_0] 307.532562 s: IPC: Init ... Done !!!
    [MCU2_0] 307.532633 s: APP: Syncing with 5 CPUs ... !!!

    ****************************************************************/

    Here I don't see MCU1_0 at the end. Is that what's causing the error?
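
    (For reference, which cores Linux has actually brought up, and their current state, can be listed through the standard remoteproc sysfs entries. A quick sketch, assuming the usual sysfs layout:)

    # list the remote cores the kernel knows about and whether they are running
    head /sys/class/remoteproc/remoteproc*/name
    cat /sys/class/remoteproc/remoteproc*/state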

    What should I do to resolve the above error?

    Please suggest.

    Thanks & Regards,

    Tanvi

  • Hi Suman,

    Is this issue resolved? --> https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1043221/tda4vm-remote_service_test-test-failed-iteration-0

    I wanted an update on it for PDK 7.3.

    I am facing a similar issue to the one in the last update of that query.

    Please provide an update on the issue.

    Thanks & Regards,

    Tanvi

  • Hi Suman,

    Any updates regarding the issue?

    Regards,

    Tanvi

  • Hi Suman,

    Any updates?

    Regards,

    Tanvi

  • Hi Tanvi,

    No, this is not resolved. You can refer to the JIRA bug referenced in that ticket.

    I see that you have already been able to run the Vision Apps IPC test using the Vision Apps firmwares. Is there any other question here?

    regards

    Suman

  • Hi Suman,

    No, this is not resolved. You can refer to the JIRA bug referenced in that ticket.

    Can you please confirm whether work is underway to resolve this for PDK 7.3?

    Thanks & Regards,

    Tanvi

  • Hi Tanvi,

    Unfortunately, the bug won't be fixed anytime soon. It is slated to be fixed in the 8.6 SDK (please follow the external JIRA; it will be updated soon with the correct fix version).

    The SDK 7.3 release is done, so there won't be any additional 7.3 releases.

    The bug only affects the MCU1_0 core; the workaround for now is to use non-cached buffers on any affected release until the bug is fixed.

    regards

    Suman

  • Hi Suman,

    Please specify which demo I can use to communicate between the Main domain and the MCU domain with Linux using IPC.

    Regards,

    Tanvi

  • Hi Tanvi,

    The default SDK PDK-IPC firmware images (ipc_echo_test) allow you to talk from A72 Linux to each of the remote cores, and also between the remote processors themselves. Main domain to MCU domain is one subset of that.
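
    (For reference, once a given core's ipc_echo_test firmware is up, the rpmsg channels it announces, for example the virtioN.ti.ipc4.ping-pong.* entries shown earlier in this thread, can be checked with:)

    # each booted remote core announces its rpmsg channels here
    ls -l /sys/bus/rpmsg/devices/ | grep ping-pong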

    I am closing this thread, since you were able to run the multicore Vision Apps IPC test. Please open a new thread for new questions unrelated to the original thread title.   

    regards

    Suman