Part Number: AM6548
I was testing the latency of IPC messages between the R5F cores on the AM6548 with very short ping/pong messages, and I noticed that the round-trip latency (for a message to go back and forth between the cores) is roughly 1 ms. That seems quite high. What causes this, and can it somehow be fixed/improved?
Yes, we don't expect 1 ms latency; we have measured latencies on the order of a few microseconds.
Please let me know the setup you are using:
1. Which SDK version?
2. I assume you are measuring in an RTOS application context; please confirm.
3. Is it your custom application, or are you using the TI-provided IPC examples? If it is a custom application, please share pseudo code for your app logic.
Thanks & Regards,
In reply to Sunita Nadampalli:
1) It is "PROCESSOR-SDK-RTOS-AM65X" and inside it, there is "pdk_jacinto_07_00_00".
2) Yes, I am measuring it inside TI-RTOS application context.
3) It's a custom application, but it is based on the TI-RTOS IPC ping/pong example. Basically, there is an instance of TI-RTOS on each core, and communication runs between two TI-RTOS threads implementing an "IPC server" and an "IPC client". The client sends a message to the server; when the message is received, it is processed and a response is sent back. As soon as the client receives the response to the previous request, it sends a new one, and so on until a certain number of messages has been sent.
In reply to Mirsad Ostrakovic:
Hi Mirsad Ostrakovic,
Where are the data buffers located? Is it part of the message payload you are sending to rpmsg?
Or are you carrying the data in a separate buffer and passing only a pointer to it in the message payload?
std::string message = "ping";
int32_t status = RPMessage_send(ipcHandle, ipcDstProc, ipcRmoteEndpoint, ipcMyEndPt, (Ptr)message.c_str(), message.size());
The code above is basically the one used for sending ping messages from the client to the server. As you can see, the whole message is copied instead of sending just a pointer to it. But I don't see how that can matter in this concrete example, because sizeof(uintptr_t) is 4 bytes and the size of this message is also 4 bytes.
Nevertheless, for our real application, we are considering sending a message which contains a pointer to the real payload buffer, the size of that payload, and some metadata. So in the end, each message exchanged via RPMessage will be roughly 16 bytes at most. This is mainly because RPMessage has a limited payload capacity of 256 bytes on AM65x.
In the meantime, I ran the TI IPC ping/pong example (TI-RTOS) without any modification, and with it I also get the same latency problem. On average, only 2000 messages are exchanged per second (1000 in one direction and 1000 in the other).