AM2612: AM261x Zephyr IPC setup

Part Number: AM2612

zephyr_ipc.zip 

I'm having issues setting up IPC communication on the AM261x processor. I am running Zephyr on core R5FSS0-0 and FreeRTOS on core R5FSS0-1. I was following the rpmsg_static_vrings sample from the latest Zephyr release, and for the FreeRTOS side I adapted one of the existing rpmsg_echo examples from the MCU_PLUS_SDK. I verified that the vring RX/TX addresses (shown in the attached *.png), the vring ring count, and the vring buffer size all match. However, after loading each program onto its respective core, the system is stuck waiting for messages to arrive.

I removed the IpcNotify calls from the FreeRTOS side and replaced them with a call to RPMessage_announce. I wasn't sure what the endpoint value should be here, as there is no corresponding parameter in the Zephyr API or devicetree. I assumed the service name should match the value in the Zephyr ipc_ept_cfg struct. It was also unclear which version of the SDK I should be using, since the release notes just say "latest". From what I can tell, there was a release of the MCU_PLUS_SDK the same day the tag was created; that happens to be the version I am using.

I have included both example projects in the attached *.zip file; any help would be appreciated.

  • Hi Michael,

    The initial setup you have looks OK. Let me explain the implementation for clarity.

    1. The SDK core acts only as the remote core:
      1. It announces on endpoint 53 (the announcement endpoint) with a service name and its local endpoint ID (say 10).
      2. It then awaits a response on that local endpoint.
    2. The Zephyr core acts as the host:
      1. It registers a service name on its local endpoint (say X) and awaits announcements from remote cores.
      2. Once an announcement (on endpoint 53) is received, it reads the service name; if it matches, it notes the remote endpoint ID (10) from the announcement (this was the local endpoint ID on the SDK core).
      3. It sends an ACK from its local endpoint X (this endpoint is abstracted from the application and is not returned by any Zephyr API) to the remote endpoint (10).

    At this point the SDK core has received an ACK on its local endpoint (10) from remote endpoint X. From here on, the SDK core exchanges messages using the remote endpoint ID (X), whereas the Zephyr (host) core communicates using the service name only.

    The ACK responses in RPMsg need to be handled separately, as these are "NULL" (zero-length) messages. Please find a modified example from the SDK below for reference.

    /*
     *  Copyright (C) 2021 Texas Instruments Incorporated
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions
     *  are met:
     *
     *    Redistributions of source code must retain the above copyright
     *    notice, this list of conditions and the following disclaimer.
     *
     *    Redistributions in binary form must reproduce the above copyright
     *    notice, this list of conditions and the following disclaimer in the
     *    documentation and/or other materials provided with the
     *    distribution.
     *
     *    Neither the name of Texas Instruments Incorporated nor the names of
     *    its contributors may be used to endorse or promote products derived
     *    from this software without specific prior written permission.
     *
     *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
     *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
     *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
     *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
     *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
     *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
     *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
     *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
     *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
     *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <kernel/dpl/ClockP.h>
    #include <kernel/dpl/DebugP.h>
    #include <drivers/ipc_notify.h>
    #include <drivers/ipc_rpmsg.h>
    #include "ti_drivers_open_close.h"
    #include "ti_board_open_close.h"
    
    /* This example shows message exchange between multiple cores.
     *
     * One of the cores is designated as the 'main' core
     * and the other cores are designated as 'remote' cores.
     *
     * The main core initiates IPC with the remote cores by sending each a message.
     * The remote cores echo the same message back to the main core.
     *
     * The main core repeats this for gMsgEchoCount iterations.
     *
     * In each iteration of message exchange, the message value is incremented.
     *
     * When the iteration count reaches gMsgEchoCount, the example is complete.
     */
    
    #define ZEPHYR_HOST
    
    #define CALLBACK_HANDLER_ENABLED
    
    /* number of iterations of message exchange to do */
    uint32_t gMsgEchoCount = 100000u;
    
    /*
     * Remote core service endpoint
     *
     * Pick any unique value on that core between 0..RPMESSAGE_MAX_LOCAL_ENDPT-1;
     * the value need not be unique across cores.
     */
    #define IPC_RPMESSAGE_SERVICE_PING        "my_ping_service"
    #define IPC_RPMESSAGE_ENDPT_PING_CORE0          (13U)
    #define IPC_RPMESSAGE_ENDPT_PING_CORE1          (14U)
    
    /* This example runs between 2 cores. */
    
    /* maximum size that message can have in this example */
    #define MAX_MSG_SIZE        (64u)
    
    /* RPMessage_Object MUST be global or static */
    RPMessage_Object core0_rpmsg_obj;
    RPMessage_Object core1_rpmsg_obj;
    
    volatile uint32_t dataLen;
    
    char recvMsg[MAX_MSG_SIZE];
    
    /* An announcement from the other core triggers a callback on this core,
    so we keep a table of endpoints, indexed by endpoint ID, with the service name as the value */
    #define RPMSG_NAME_SERVICE_LEN  (32u)
    
    void app_rpmsg_recv_callback(RPMessage_Object *obj, void *arg, void *data, uint16_t dataLen, int32_t crcStatus, uint16_t remoteCoreId, uint16_t remoteEndPt)
    {
        /* callback function for handling messages on the registered endpoint */
    
        int32_t status;
        uint16_t local_ept_id = obj->localEndPt;
        uint16_t remote_ept_id = remoteEndPt;
        uint32_t remote_core_id = remoteCoreId;
    
        uint32_t self_core_id = IpcNotify_getSelfCoreId();
    
        char reply[MAX_MSG_SIZE];
        volatile uint32_t len;
    
        if(dataLen == 0u)
        {
            /* zero-length "NULL" message: this is the ACK, nothing to echo */
            return;
        }
    
        /* remote core, so reply with the received message;
         * clamp the copy and NUL terminate before calling strlen */
        size_t copyLen = (dataLen < MAX_MSG_SIZE) ? dataLen : (MAX_MSG_SIZE - 1u);
        memcpy((void *) reply, (const void *) data, copyLen);
        reply[copyLen] = '\0';
    
        /* assumes the incoming message is short enough for the suffix to fit */
        sprintf(&reply[strlen(reply)], " : reply from remote core %u\n", (unsigned int) self_core_id);
    
        len = strlen(reply) + 1;
    
        status = RPMessage_send(
            reply, 
            len,
            remote_core_id, 
            remote_ept_id,
            local_ept_id,
            SystemP_NO_WAIT);
        DebugP_assert(status==SystemP_SUCCESS);
    }
    
    void ipc_rpmsg_echo_core0_core_start(void)
    {
        /* core 0 runs Zephyr in this setup, so there is nothing to do here */
    }
    
    void ipc_rpmsg_echo_core1_core_start(void)
    {
        int32_t status;
        RPMessage_CreateParams createParams;
        char recvMsg[MAX_MSG_SIZE];
        uint16_t recvMsgSize, remote_core_id, remote_ept_id;
    
        RPMessage_CreateParams_init(&createParams);
        createParams.localEndPt = IPC_RPMESSAGE_ENDPT_PING_CORE1;
    #ifdef CALLBACK_HANDLER_ENABLED
        createParams.recvCallback = app_rpmsg_recv_callback;
    #endif /* CALLBACK_HANDLER_ENABLED */
        status = RPMessage_construct(&core1_rpmsg_obj, &createParams);
        DebugP_assert(status==SystemP_SUCCESS);
    
    #ifndef ZEPHYR_HOST
        IpcNotify_syncAll(SystemP_WAIT_FOREVER);
    #else
        /* simple shared-memory handshake with the Zephyr host:
         * wait until the host writes hs1_value, then write back hs2_value.
         * The address and values must match those used on the Zephyr side. */
        uint32_t hs_addr = 0x72000000;
    
        uint32_t hs1_value = 0x05050505;
        uint32_t hs2_value = 0x4;
    
        while(*((volatile uint32_t *)(hs_addr)) != hs1_value) {
        }
        *((volatile uint32_t *)(hs_addr)) = hs2_value;
    
    #endif /* ZEPHYR_HOST */
    
    
        status = RPMessage_announce(CSL_CORE_ID_R5FSS0_0, IPC_RPMESSAGE_ENDPT_PING_CORE1, IPC_RPMESSAGE_SERVICE_PING);
        DebugP_assert(status==SystemP_SUCCESS);
    
        DebugP_log("[IPC RPMSG ECHO] Remote Core waiting for messages from main core ... !!!\r\n");
    
        /* wait for messages forever in a loop */
        while(1)
        {   
    
            if(core1_rpmsg_obj.recvCallback != NULL){
                /* if callback is registered, then no need to call recv */
                ClockP_usleep(100);
                continue;
            }
    
            /* no receive callback is registered, so use RPMessage_recv
               as a blocking call to receive the message */
            
            /* set 'recvMsgSize' to size of recv buffer,
            * after return `recvMsgSize` contains actual size of valid data in recv buffer
            */
            recvMsgSize = sizeof(recvMsg);
            status = RPMessage_recv(
                &core1_rpmsg_obj,
                recvMsg, &recvMsgSize,
                &remote_core_id, 
                &remote_ept_id,
                SystemP_WAIT_FOREVER);
            DebugP_assert(status==SystemP_SUCCESS);
    
            /* alternatively, the callback can be invoked directly:
                app_rpmsg_recv_callback(&core1_rpmsg_obj, NULL, recvMsg, recvMsgSize, 0, remote_core_id, remote_ept_id);
            */
            if(recvMsgSize > 0)
            {
                /* handling the message */
                app_rpmsg_recv_callback(&core1_rpmsg_obj, NULL, recvMsg, recvMsgSize, 0, remote_core_id, remote_ept_id);        
            }
        }
        /* This loop will never exit */
    }
    
    void ipc_rpmsg_echo_main(void *args)
    {
        Drivers_open();
        Board_driversOpen();
    
        if(IpcNotify_getSelfCoreId()==CSL_CORE_ID_R5FSS0_0)
        {
            ipc_rpmsg_echo_core0_core_start();
        }
        else
        {
            ipc_rpmsg_echo_core1_core_start();
        }
    
        Board_driversClose();
        /* We don't call Drivers_close() so that the UART driver remains open and can flush any pending messages to the console */
        /* Drivers_close(); */
    }
    

    Thanks and regards,

    Madhava

  • Thanks for the explanation; I figured out my issue. I can see messages pinging back and forth.