
lock/semaphore for C6678<-->PC and inter-DSP communication via PCIe using Lightning DSPC-8681E card

TI implements mailboxes for DSP-host and inter-DSP data exchanges, see http://processors.wiki.ti.com/index.php/MCSDK_VIDEO_2.1_PCIe_Demo_Development_Guide#Control_Message_Exchange_via_Pipes_and_Mailboxes

While inspecting the TI code, I came across something I am not sure about and would appreciate help with.

A mailbox is accessed by both sender and receiver (these can be the PC and a DSP core respectively, or two DSP cores on the same or different chips); dsp_memory_read() and dsp_memory_write() are used to process mailbox content (memory allocated in the DSP address space), and both are essentially implemented via memcpy. Since both sender and receiver need to read and write the mailbox header (as well as the mail slot header), I'd expect some kind of lock/semaphore to be needed to ensure the header updates are protected. However, I'm unable to find any such thing in the code.
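For concreteness, a mailbox header in this style of design might look like the sketch below. This is a hypothetical layout, not TI's actual structure; the point is which side writes which field. If, in the SDK code, the write index is only ever advanced by the sender and the read index only by the receiver, the header can be shared without a lock even though both sides read it.

```c
#include <stdint.h>

/* Hypothetical mailbox header layout (illustrative only; not TI's
 * actual structure). The property to check in the SDK code is the
 * ownership of each field: one writer per field means plain aligned
 * 32-bit stores are enough, and no lock is required. */
typedef struct {
    volatile uint32_t write_idx;   /* advanced only by the sender   */
    volatile uint32_t read_idx;    /* advanced only by the receiver */
    uint32_t slot_size;            /* fixed at init time, read-only */
    uint32_t slot_count;           /* fixed at init time, read-only */
} mailbox_hdr_t;
```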

Is there actually a locking/unlocking mechanism implemented in the "mailbox" model to ensure mutual exclusion when sender and receiver update/access the shared memory?

  1. If there is one, what is it, how is it implemented, and which part of the TI code implements it?
  2. If not, is it unnecessary (and for what reason), or has the TI SDK code simply overlooked it?
  3. If the TI SDK has indeed overlooked it, then one needs to implement a locking/unlocking mechanism. How can one implement it?
    1. If both sender and receiver (from different cores) are on the same chip, I'd assume a hardware semaphore can be used, right?
    2. If sender and receiver are on PC host and DSP respectively via PCIe, what can be used as a lock/semaphore?
    3. If sender and receiver are on separate DSP chips, what's available? Perhaps the same thing as the host-DSP case, or something similar over PCIe?
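On question 2, one common reason such code carries no lock is a single-producer/single-consumer (SPSC) design: if each direction of a mailbox has exactly one writer for each header field, plain aligned word loads and stores suffice and no mutual exclusion is needed. The following is a minimal hypothetical sketch of that pattern (not TI's code; `mbx_send`/`mbx_recv` and the struct are my own names for illustration):

```c
#include <stdint.h>
#include <string.h>

#define SLOTS     8
#define SLOT_SIZE 64

/* Single-producer/single-consumer mailbox: 'wr' has exactly one
 * writer (the sender), 'rd' has exactly one writer (the receiver),
 * so no lock is needed for the header. */
typedef struct {
    volatile uint32_t wr;               /* written only by sender   */
    volatile uint32_t rd;               /* written only by receiver */
    uint8_t slot[SLOTS][SLOT_SIZE];
} spsc_mailbox_t;

/* Sender side: returns 0 on success, -1 if the mailbox is full. */
static int mbx_send(spsc_mailbox_t *m, const void *msg, uint32_t len)
{
    uint32_t wr = m->wr, rd = m->rd;
    if (wr - rd == SLOTS)               /* full */
        return -1;
    memcpy(m->slot[wr % SLOTS], msg, len);
    /* Publish only AFTER the payload is written. On real hardware a
     * memory barrier (or a PCIe write-ordering step) belongs here. */
    m->wr = wr + 1;
    return 0;
}

/* Receiver side: returns 0 on success, -1 if the mailbox is empty. */
static int mbx_recv(spsc_mailbox_t *m, void *msg, uint32_t len)
{
    uint32_t wr = m->wr, rd = m->rd;
    if (wr == rd)                       /* empty */
        return -1;
    memcpy(msg, m->slot[rd % SLOTS], len);
    m->rd = rd + 1;                     /* free the slot */
    return 0;
}
```

If the SDK's mailboxes follow this pattern (one unidirectional mailbox per host-core pair, as Paula describes below in the thread), the apparent missing lock may be by design rather than an oversight.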

Any comment/helpful info is very much appreciated!

Yuangao

  • Hi Yuangao, I am not a mailbox expert, but as far as I can see from the Linux SDK filetest demo, mailbox communication is designed to be independent between cores and between host and cores. In other words, there is a mailbox per core, and each mailbox is bidirectional (host2dsp and dsp2host). The host only writes headers and messages on one (host2dsp) and reads from the other (dsp2host), and vice versa from the cores' point of view.

    Hope this helps; please let us know if you have further questions.

    Thank you,

    Paula


  • Thank you, Paula, for your response. My question is not really about how the mailbox works in general, but more specifically about how synchronization is achieved among multiple cores, multiple chips, and the host.

    In the TI mailbox implementation, both sender and receiver of a specific mailbox check and update the mailbox header and the mail slot header, but there is no atomic arbitration to ensure the header check/update is mutually exclusive. I'm wondering why such a race condition is not considered in TI's mailbox implementation.

    I've also noticed a "lock" is implemented in mcsdk_video_2_1_0_8 IVIDMC (dsp\siu\ividmc\siuVidMcMultiChip.c), where siu_ipc_osal_multichip_enter_share and siu_ipc_osal_multichip_leave_share don't seem atomic either when multiple chips and the host access shared memory (presumably in PCIe memory space). I'm not sure whether this is an issue in the DSP/PCIe environment, but it will definitely cause race conditions on the Linux host CPU. I'm also wondering why the "lock" isn't implemented atomically.

    In short, my question really is as follows:

    • Is it possible at all to implement an atomic lock acquirable by multiple parties, including the host and cores on multiple chips, over PCIe? If so, how? If not, how can race conditions be prevented in such an environment?
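    One classical option worth noting: when atomic read-modify-write operations are unavailable across the interconnect (the host generally cannot perform an atomic RMW on DSP memory over PCIe), mutual exclusion between two parties can in principle be built from plain aligned loads and stores using Peterson's algorithm. The sketch below is illustrative only, and the caveat in the comments is essential: Peterson's algorithm assumes sequentially consistent memory, so over PCIe each store must actually be visible to the other side before the subsequent load, which typically requires an uncached/strongly ordered mapping and a read-back to flush posted writes. Whether that is achievable depends on the specific host chipset and DSP configuration.

    ```c
    #include <stdint.h>

    /* Peterson's 2-party mutual exclusion using only aligned word
     * loads/stores -- no atomic RMW instruction is needed.
     *
     * CAVEAT (sketch, not a drop-in solution): correctness requires
     * that each store below becomes visible to the other party before
     * the following load executes. Across PCIe this means mapping the
     * region uncached/strongly ordered and flushing posted writes
     * (e.g. via a read-back) at the marked point. */
    typedef struct {
        volatile uint32_t flag[2];   /* flag[i]: party i wants the lock */
        volatile uint32_t turn;      /* whose turn it is to wait        */
    } peterson_lock_t;

    static void pl_acquire(peterson_lock_t *l, int me)
    {
        int other = 1 - me;
        l->flag[me] = 1;
        l->turn = (uint32_t)other;
        /* A full memory barrier / PCIe read-back is required here. */
        while (l->flag[other] && l->turn == (uint32_t)other)
            ;                        /* spin until it is our turn */
    }

    static void pl_release(peterson_lock_t *l, int me)
    {
        l->flag[me] = 0;
    }
    ```

    For more than two parties, Lamport's bakery algorithm extends the same loads/stores-only idea, at the cost of one flag/ticket pair per party. In practice, though, restructuring the communication so that each shared field has a single writer (as in an SPSC mailbox) avoids the cross-PCIe locking problem entirely.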

    Thanks,

    Yuangao