
TMS320C6670: c6670

Part Number: TMS320C6670

Champs,

We have tracked down a stability issue in our C6670 DSP application to ti/sdo/ipc/ListMP.c in ipc_1_24_03_32.  We are using IPC to pass messages between C6670 cores.  The main symptoms are that after 8 to 72 hours of runtime, either an IPC assert fires or MessageQ_alloc fails to return a buffer because the free list has become corrupted.
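
For context, here is a stripped-down sketch of the kind of producer loop that hits the failure (the task structure, names, and sizes are invented for illustration; our real code differs):

    #include <xdc/std.h>
    #include <xdc/runtime/System.h>
    #include <ti/ipc/MessageQ.h>

    #define HEAP_ID   0u   /* heap registered earlier via MessageQ_registerHeap() */
    #define MSG_SIZE  sizeof(MessageQ_MsgHeader)

    Void producerTask(UArg arg0, UArg arg1)
    {
        MessageQ_QueueId remoteQueueId = (MessageQ_QueueId)arg0;
        MessageQ_Msg     msg;

        for (;;) {
            /* After 8 to 72 hours this starts returning NULL once the
             * ListMP-backed free list is corrupted. */
            msg = MessageQ_alloc(HEAP_ID, MSG_SIZE);
            if (msg == NULL) {
                System_abort("MessageQ_alloc returned NULL\n");
            }
            MessageQ_put(remoteQueueId, msg);
        }
    }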


Since our IPC message buffers are in non-cached DDR, ListMP operations skip the Cache_wbInv() calls, and no _mfence() is issued to synchronize memory accesses between cores.  If we modify ListMP.c to add double _mfence() operations at the locations where the cached-DDR path would have synchronized memory accesses, we no longer see the failure.  (Note: the double _mfence() is required due to silicon Advisory 32.)


Example:

    if (SharedRegion_isCacheEnabled(id)) {
        /* Write-back because elem->next & elem->prev changed */
        Cache_wbInv(elem, sizeof(ListMP_Elem), Cache_Type_ALL, TRUE);
    }
    else {
        /* Non-cached region: no cache op is issued here, so force the
         * pointer updates to complete before another core can traverse
         * the list.  _mfence() is issued twice per silicon Advisory 32. */
        _mfence();
        _mfence();
    }
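
For reference, this sketch shows how a SharedRegion ends up non-cached (the base address, length, and region id are made up for illustration); cacheEnable = FALSE is what makes SharedRegion_isCacheEnabled(id) return FALSE and take the else path above:

    #include <xdc/std.h>
    #include <ti/ipc/SharedRegion.h>

    Void configNonCachedRegion(Void)
    {
        SharedRegion_Entry entry;

        SharedRegion_entryInit(&entry);
        entry.base        = (Ptr)0x90000000;  /* hypothetical DDR window */
        entry.len         = 0x00100000;       /* hypothetical 1 MB       */
        entry.ownerProcId = 0;
        entry.isValid     = TRUE;
        entry.createHeap  = TRUE;
        entry.cacheEnable = FALSE;  /* SharedRegion_isCacheEnabled(id) == FALSE */
        entry.name        = "MSG_DDR_NOCACHE";

        /* NB: cacheEnable only tells IPC whether to perform cache
         * maintenance; actual cacheability of the address range is
         * governed by the core's MAR settings. */
        SharedRegion_setEntry(1, &entry);     /* hypothetical region id 1 */
    }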


Questions:

1) Does TI agree that IPC ListMP lists may become corrupted when non-cached DDR is used?

2) Has this issue been identified or fixed by TI?  I have searched published TI IPC releases, but didn't find anything.

3) Can TI recommend a robust IPC library fix for this issue?

4) For now I must patch ipc_1_24_03_32 myself.  I can build ipc_1_24_03_32 successfully against bios_6_33_06_50, but NOT against bios_6_35_04_50.  BIOS 6.33 ships with the latest MCSDK, but we need BIOS >= 6.35 to pick up the fixes for Advisory 32 and possibly other advisories.

  1. Does the IPC build just use BIOS headers, or does it also link in the BIOS implementation?
  2. Which IPC and BIOS versions should I use with our ListMP.c patch?