Hello,
May I know how to debug IPC and check whether it is working fine?
Regards
Tarun Mukesh
PDK IPC:
Debug steps with the existing example, without Linux:
1) There is a pair of VRINGs between every two cores, mapped as per the table. It is strictly advised not to change the VRING mapping details between cores.
2) Between cores, VRING address space is reserved in DDR. The starting address and size are calculated as per this FAQ. The size assumes 256 buffers, each holding 512 bytes of data; this buffer count and buffer size are hard-coded in many parts of the code (see the sizing sketch after this list).
3) There is no requirement for a resource table without Linux.
4) If you are using MCU1_0, run the IPC application as one more task alongside the SCISERVER tasks. On cores other than MCU1_0, you can also run the application standalone.
5) If the announce macro is enabled, the core announces its existence on the control endpoint to all the cores, or to the one you opted for.
6) IPC LLD in PDK uses dynamic endpoint creation for communication between cores (a minimal endpoint sketch follows after this list).
7) If you want parallel communication between any two cores, run one more IPC task (IPC TASK 2) alongside the first IPC task (IPC TASK 1); a different pair of endpoints will be used for this communication.
8) There is no built-in mechanism for validating the data after it is received on another core.
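As a worked example for step 2, here is a small sketch of the sizing arithmetic. It is illustrative only: the 256-buffer/512-byte figures are the defaults described above, the core count is a placeholder, and the real reserved size also includes virtio descriptor/ring overhead and alignment padding per the FAQ.

#include <stdio.h>
#include <stdint.h>

/* Defaults described in step 2; hard-coded in many places in the LLD. */
#define IPC_NUM_BUFS  256u   /* buffers per VRING     */
#define IPC_BUF_SIZE  512u   /* data bytes per buffer */

int main(void)
{
    /* Payload carried by one unidirectional VRING (128 KB). The actual
       reserved size adds descriptor/avail/used rings and padding. */
    uint32_t vringData = IPC_NUM_BUFS * IPC_BUF_SIZE;

    /* Every ordered pair of cores gets its own VRING (TX on one side is
       RX on the other), i.e. N*(N-1) VRINGs for N participating cores. */
    uint32_t numCores  = 8u;   /* placeholder core count */
    uint32_t numVrings = numCores * (numCores - 1u);

    printf("data per VRING : %u bytes\n", (unsigned)vringData);
    printf("VRINGs needed  : %u\n", (unsigned)numVrings);
    printf("data total     : %u bytes\n", (unsigned)(vringData * numVrings));
    return 0;
}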
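For steps 5-7, the sketch below shows how an IPC task creates its own dynamic endpoint, announces it, and echoes messages back, following the pattern of the PDK ipc_echo_test example. The endpoint number, buffer sizes, and service name are placeholders, and the exact signatures and macros should be verified against ti/drv/ipc/ipc.h in your PDK version.

#include <stdint.h>
#include <ti/drv/ipc/ipc.h>

#define MY_ENDPT  13u                 /* placeholder endpoint number       */
static uint8_t gRecvBuf[512];         /* matches the 512-byte buffer size  */
static uint8_t gEndptBuf[2048];       /* endpoint object memory (size TBD) */

void IpcEchoTask(void *a0, void *a1)
{
    RPMessage_Params prms;
    RPMessage_Handle handle;
    uint32_t myEndPt, remoteEndPt, remoteProcId;
    uint16_t len;

    (void)a0; (void)a1;

    /* Step 6: request a dynamically created endpoint. */
    RPMessageParams_init(&prms);
    prms.requestedEndpt = MY_ENDPT;
    prms.buf            = gEndptBuf;
    prms.bufSize        = sizeof(gEndptBuf);
    handle = RPMessage_create(&prms, &myEndPt);

    /* Step 5: announce this endpoint on the control endpoint so other
       cores (all of them here) can discover it by name. */
    RPMessage_announce(RPMESSAGE_ALL, myEndPt, "my-echo-service");

    for (;;)
    {
        len = sizeof(gRecvBuf);
        RPMessage_recv(handle, gRecvBuf, &len, &remoteEndPt,
                       &remoteProcId, IPC_RPMESSAGE_TIMEOUT_FOREVER);
        /* Echo the payload back to the sender's endpoint. */
        RPMessage_send(handle, remoteProcId, remoteEndPt, myEndPt,
                       gRecvBuf, len);
    }
}

For step 7, a second task simply repeats the same pattern with a different requestedEndpt, so the two conversations never share an endpoint.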
Debug steps with the existing example, with Linux:
1) A resource table is a must for Linux. The resource table must have at least one entry, the VDEV entry, to define the vrings used for IPC communication with Linux (see the resource-table sketch after this list). Optionally, the resource table can also have a TRACE entry, which defines the location of the remote core trace buffer.
2) The DDR VRING address space for communication with Linux is different from the DDR space described earlier. Note that this address is specific to the Linux<->remote core VRINGs. For TDA4VM, it starts at 0xA0000000, and the first 1 MB is what Linux uses for IPC, so you should not link any text or code into that area; it should also be marked as non-cached in the R5 MPU. The resource table entity needs to be at the beginning of the 15 MB external memory section:
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa0000000         | 1MB     | IPC (Virtio/Vring buffers) |
+------------------+--------------------+---------+----------------------------+
| R5F(mcu) Pool    | 0xa0100000         | 15MB    | R5F external code/data mem |
+------------------+--------------------+---------+----------------------------+
3) Additionally, the ring buffer memory used when communicating with the MPU running Linux must be reserved system-wide. The base address and size of the ring buffer are different from what is used between cores not running Linux. They are provided to IPC LLD when Linux updates the core's resource table with the allocated addresses. Linux allocates the base address from the first memory region.
4) Run the command below and ensure your executable is already linked as the firmware for the core:
root@j721e-evm:~# ls -l /lib/firmware/
....
lrwxrwxrwx 1 root root 65 Mar 9 2018 j7-main-r5f0_0-fw -> /lib/firmware/ti-eth/j721e/app_remoteswitchcfg_server_strip.xer5f
lrwxrwxrwx 1 root root 72 Mar 9 2018 j7-main-r5f0_0-fw-sec -> /lib/firmware/ti-eth/j721e/app_remoteswitchcfg_server_strip.xer5f.signed
lrwxrwxrwx 1 root root 67 Mar 9 2018 j7-main-r5f0_1-fw -> /lib/firmware/ti-ipc/j721e/ipc_echo_test_mcu2_1_release_strip.xer5f
lrwxrwxrwx 1 root root 74 Mar 9 2018 j7-main-r5f0_1-fw-sec -> /lib/firmware/ti-ipc/j721e/ipc_echo_test_mcu2_1_release_strip.xer5f.signed
....
5) If you are using secondary cores such as MCU2_1 and MCU3_1, ensure the primary cores MCU2_0 and MCU3_0 are not running the vision apps executable, as it impacts the secondary cores.
6) If you are using the IPC ECHO TEST firmware on the R5F cores, then you can run rpmsg_char_simple (a C sketch of what it does under the hood follows after this list):
# MCU R5F<->A72_0 IPC
root@j721e-evm:~# rpmsg_char_simple -r0 -n10
7) The endpoint on which the communication happens is "14". If you intend to communicate on multiple endpoints, please follow the FAQ.
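For steps 1 and 2 above, here is a minimal sketch of a resource table with the single mandatory VDEV entry, placed in its own linker section so it can land at the start of the 15 MB region. The layout follows the Linux remoteproc firmware ABI and the usual PDK rsctable.h convention, but the section name, alignment, and vring parameters are assumptions to be checked against your SDK.

#include <stdint.h>
#include <stddef.h>

/* Per the remoteproc ABI, each entry starts with its type word, and the
   vring descriptors immediately follow the vdev entry. */
struct fw_rsc_vdev_vring {
    uint32_t da, align, num, notifyid, reserved;
};
struct fw_rsc_vdev {
    uint32_t type, id, notifyid;
    uint32_t dfeatures, gfeatures, config_len;
    uint8_t  status, num_of_vrings, reserved[2];
};
struct my_resource_table {
    uint32_t ver, num, reserved[2];      /* table header        */
    uint32_t offset[1];                  /* one entry: the VDEV */
    struct fw_rsc_vdev       rpmsg_vdev;
    struct fw_rsc_vdev_vring vring0, vring1;
};

#define RSC_VDEV         3u          /* resource type: virtio device   */
#define VIRTIO_ID_RPMSG  7u          /* virtio rpmsg device id         */
#define VRING_ADDR_ANY   0xFFFFFFFFu /* let Linux allocate the address */

/* The linker must place this section at the start of the 15 MB region
   (0xA0100000 on TDA4VM); ".resource_table" is the PDK convention. */
const struct my_resource_table gRscTable
__attribute__((section(".resource_table"), aligned(4096))) =
{
    .ver        = 1u,
    .num        = 1u,
    .offset     = { offsetof(struct my_resource_table, rpmsg_vdev) },
    /* dfeatures = 1 advertises the rpmsg name-service feature. */
    .rpmsg_vdev = { RSC_VDEV, VIRTIO_ID_RPMSG, 0u, 1u, 0u, 0u, 0u, 2u, {0u, 0u} },
    .vring0     = { VRING_ADDR_ANY, 4096u, 256u, 1u, 0u },
    .vring1     = { VRING_ADDR_ANY, 4096u, 256u, 2u, 0u },
};

Per step 3, Linux fills in the vring addresses (and the ring buffer base) when it loads the firmware and updates this table.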
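For steps 6 and 7, rpmsg_char_simple essentially does the following through the standard Linux rpmsg char UAPI. This is a hedged sketch: the control device index and the /dev/rpmsgN node it creates vary per system (rpmsg_char_simple resolves them via sysfs), and the service name used here is the one announced by the TI echo-test firmware.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rpmsg.h>   /* struct rpmsg_endpoint_info, RPMSG_CREATE_EPT_IOCTL */

int main(void)
{
    /* Control device for the rpmsg bus; index 0 is an assumption. */
    int ctrl = open("/dev/rpmsg_ctrl0", O_RDWR);
    if (ctrl < 0) { perror("open ctrl"); return 1; }

    /* Step 7: the remote side listens on endpoint 14 (dst = 14);
       src = 0xFFFFFFFF (RPMSG_ADDR_ANY) lets the kernel pick ours. */
    struct rpmsg_endpoint_info ept = { .src = 0xFFFFFFFF, .dst = 14 };
    strncpy(ept.name, "ti.ipc4.ping-pong", sizeof(ept.name) - 1);
    if (ioctl(ctrl, RPMSG_CREATE_EPT_IOCTL, &ept) < 0) {
        perror("create ept"); return 1;
    }

    /* The kernel now exposes /dev/rpmsgN; look the index up under
       /sys/class/rpmsg -- hard-coded to 0 here for brevity. */
    int ept_fd = open("/dev/rpmsg0", O_RDWR);
    if (ept_fd < 0) { perror("open ept"); return 1; }

    char msg[] = "hello from A72";
    char rsp[512];
    write(ept_fd, msg, sizeof(msg));
    ssize_t n = read(ept_fd, rsp, sizeof(rsp));  /* blocks for the echo */
    if (n > 0) printf("got %zd bytes back\n", n);

    close(ept_fd);
    close(ctrl);
    return 0;
}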
Regards
Tarun Mukesh