
TDA4VH-Q1: How to Use A72 to Start & Stop MCAL IPC

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: TDA4VH


Hi TI Experts,

The customer is taking TDA4VH to SOP on SDK 9.2 and has implemented two safety channels, on MCU1-1 and MCU4-0. As there is not much physical dynamic (all safety channels run on R5F cores), the certification body requires adding more SW dynamic to enhance the safety score for certification.

One feature the customer is trying to implement is using the A72 to start and stop MCAL IPC. For background: the customer has already used the A72 to start and stop PDK IPC successfully. They handle the mailbox message IPC_RP_MBOX_SHUTDOWN in a callback function (rpMboxMsgFxn), and when they terminate the safety channel they acknowledge the shutdown via IPC_RP_MBOX_SHUTDOWN_ACK.

However, the customer does not know how to apply a similar method to start and stop the MCAL IPC (Cdd IPC) from the A72. Could you suggest a method for the customer to try?

Many Thanks,

Kevin

  • Hi Kevin,

The expert assigned to this ticket is not available until the middle of next week. Kindly expect a delay of about a week in the response.

    One feature the customer is trying to implement is using the A72 to start and stop MCAL IPC. For background: the customer has already used the A72 to start and stop PDK IPC successfully. They handle the mailbox message IPC_RP_MBOX_SHUTDOWN in a callback function (rpMboxMsgFxn), and when they terminate the safety channel they acknowledge the shutdown via IPC_RP_MBOX_SHUTDOWN_ACK.

    Is this about stopping the MCU R5F core altogether? What is the overall motivation for this?

    This in general goes against the safety concept, in which the MCU R5F is the safety master core. The MCU R5F core also runs the SciServer, so bringing it down is equivalent to bringing down the entire system.

    There is no support for this in the SDK, nor will there be any future requirement for this due to the above reasons.

    However, the customer does not know how to apply a similar method to start and stop the MCAL IPC (Cdd IPC) from the A72. Could you suggest a method for the customer to try?

    If the customer has implemented this feature in PDK IPC, they can scale it to Cdd IPC as well; the lower-level IPC layers are very similar between PDK IPC and CDD IPC. The customer has to own this feature, as it is out of scope for TI support.

    Regards,

    Suman

  • Is this about stopping the MCU R5F core altogether?

    -- Yes.

    What is the overall motivation for this?

    -- Mainly for system boot phases such as reconfiguration.

    If the customer has implemented this feature in PDK IPC, they can scale it to Cdd IPC as well; the lower-level IPC layers are very similar between PDK IPC and CDD IPC. The customer has to own this feature, as it is out of scope for TI support.

    -- (1) The related code is different, and the CDD IPC has certification. Below is the diff between the two versions of Ipc_mailboxInternalCallback:

    < void Ipc_mailboxInternalCallback(uintptr_t arg)
    ---
    > static void Ipc_mailboxInternalCallback(uintptr_t arg)
    359,362c440,447
    <     uint32              n;
    <     Ipc_MailboxData      *mbox;
    <     uint32              msg[4];
    <     Ipc_MailboxFifo      *fifo;
    ---
    >     uint32_t          n, i;
    >     Ipc_MailboxData  *mbox;
    >     volatile uint32_t msg[4] = {0, 0, 0, 0};
    >     volatile uint32_t parsedMsg[4] = {0, 0, 0, 0};
    >     volatile uint32_t rpMboxMsgRecv = 0, rpMboxMsg = 0;
    >     volatile uint32_t numMessages;
    >     Ipc_MailboxFifo  *fifo;
    >     uint32_t shutdownMsg = IPC_RP_MBOX_SHUTDOWN;
    373c458
    <             if(0U != MailboxGetRawNewMsgStatus(mbox->baseAddr, mbox->userId, fifo->queueId))
    ---
    >             if(0U != Mailbox_getRawNewMsgStatus(mbox->baseAddr, mbox->userId, fifo->queueId))
    375c460,461
    <                 if( MailboxGetMessageCount(mbox->baseAddr, fifo->queueId) > 0U)
    ---
    >                 numMessages = Mailbox_getMessageCount(mbox->baseAddr, fifo->queueId);
    >                 if(numMessages > 0U)
    378c464
    <                     MailboxGetMessage(mbox->baseAddr, fifo->queueId, msg);
    ---
    >                     Mailbox_getMessage(mbox->baseAddr, fifo->queueId, (uint32_t *)msg);
    381,382c467,468
    <                     MailboxClrNewMsgStatus(mbox->baseAddr, mbox->userId,
    <                             fifo->queueId);
    ---
    >                     Mailbox_clrNewMsgStatus(mbox->baseAddr, mbox->userId,
    >                                                fifo->queueId);
    384,385c470,494
    <                     /* Call the function with arg */
    <                     (mbox->fifoTable[n].func)(msg, fifo->arg);
    ---
    >                     /* Process till we get the special RP Mbox message */
    >                     for(i=0; i<numMessages; i++)
    >                     {
    >                         if(msg[i] != shutdownMsg)
    >                         {
    >                             parsedMsg[i] = msg[i];
    >                         }
    >                         else
    >                         {
    >                             rpMboxMsgRecv = 1;
    >                             rpMboxMsg = msg[i];
    >                             break;
    >                         }
    >                     }
    > 
    >                     if((0U == rpMboxMsgRecv) || ((1U == rpMboxMsgRecv) && (numMessages > 1U)))
    >                     {
    >                         /* Call the function with arg */
    >                         (mbox->fifoTable[n].func)((uint32_t *)parsedMsg, fifo->arg);
    >                     }
    > 
    >                     if((1U == rpMboxMsgRecv) && (NULL != gIpcObject.initPrms.rpMboxMsgFxn))
    >                     {
    >                         gIpcObject.initPrms.rpMboxMsgFxn(fifo->arg, rpMboxMsg);
    >                     }
    389d497
    <                 /*CDD_IPC_CoverageGap_34: There is only one msg in the queue always so it does not excute the else part. */
    391c499
    <                     MailboxClrNewMsgStatus(mbox->baseAddr, mbox->userId, fifo->queueId);
    ---
    >                     Mailbox_clrNewMsgStatus(mbox->baseAddr, mbox->userId, fifo->queueId);


    -- (2) The way CDD IPC calls Ipc_mailboxInternalCallback is not like PDK IPC: CDD IPC must first match the remote processor ID, and if it does not match, Ipc_mailboxInternalCallback cannot be called.

    /** \brief Low Level Mailbox ISR for a given remote processor */
    void Ipc_mailboxIsr(uint32 remoteProcId)
    {
        uintptr_t mBoxData = 0U;
    
        if (IPC_MAX_PROCS > remoteProcId)
        {
            mBoxData = gIpcRProcIdToMBoxDataMap[remoteProcId];
            if (0U != mBoxData)
            {
                Ipc_mailboxInternalCallback(mBoxData);
            }
            else
            {
              /* Do Nothing */
            }
        }
        else
        {
          /* Do Nothing */
        }
    
        return;
    }

  • And this feature was done by TI.

  • Hi Suman, Tarun,

    The customer has described the reasons why they need your support to start and stop Cdd IPC, such as the remote ID matching. Could you guide them on how to match the remote ID so that Ipc_mailboxInternalCallback can be called?

    Thanks for your help!

    Kevin

  • Hello,

    As Suman mentioned, this is against the safety standard: you cannot reset the MCU R5F core alone without resetting the entire system. There can be SciServer calls which would then fail to be addressed.

    You need to reset the entire system for this.

    Regards

    Tarun Mukesh