66AK2E05: Hyperlink and concurrency


I was able to make the Hyperlink memory-mapped example work between two 66AK2E05 devices on the A15 side. I am now concerned about concurrency, and I have seen almost nothing about it on the forum.

How can I safely lock access to the databuffer when reading/writing through Hyperlink?

My use case is the following: one 66AK2E05 writes a small set of integers (fewer than 10) to the other 66AK2E05 through Hyperlink. On the other side, the 66AK2E05 just reads the incoming data.

As a basis, I am using the CPU block transfer from the memory-mapped example of the MCSDK.

As far as I know, I can use the 2 following solutions:

- hyperlink interrupt packet

- hardware semaphore

To be exhaustive, I want to know whether there are other possibilities, and which would be the most efficient. For example:

- Since it is a CPU block transfer, can I use the data buffer as an atomic structure? Can I assume that there won't be any concurrent read and write?

- Is it possible to poll a Hyperlink register to know if there is a pending transfer (any flow control register)?

  • Hi DPA,

    Are you working on Linux or SYS/BIOS on A15?

  • Hi Rajasekaran,

    I am working on Linux on A15.
  • You can use a synchronization mechanism to handle the concurrent transfers:
    a mutex, spin lock, etc.

  • Hi Titusrathinaraj,

    Thanks for your answer.

    I do not really understand how I can use the usual synchronisation mechanisms in this context. If I have, for example, the following code:

    #include <cstdint>
    #include <mutex>
    
    const int hyplnk_BLOCK_BUF_SIZE = 8;
    
    /* Memory block transfer buffer */
    typedef struct {
      uint32_t buffer[hyplnk_BLOCK_BUF_SIZE];
    } hyplnkBlockBuffer_t;
    
    hyplnkBlockBuffer_t *dataBufPtr;
    
    std::mutex mutex;
    
    int main(int argc, char *argv[])
    {
      dataBufPtr = (hyplnkBlockBuffer_t *) hyplnk_mmap(CSL_MSMC_SRAM_REGS, 0x1000);
    
      mutex.lock();
      uint32_t testRead = dataBufPtr->buffer[0];
      mutex.unlock();
    
      return 0;
    }

    I do not really understand how the mutex will lock access to dataBufPtr for the remote side (the second 66AK2E05). I tend to think that the remote write would be outside the scope of the mutex.

    Would the mutex stop the VBUSM master write operation initiated through hyperlink?

  • Any news or advice on this topic?

    I am trying to figure out a standard way to handle concurrency between two SoCs with Hyperlink.
  • Can you please refer to the following e2e post?
    Here, the customer is using a semaphore between the cores.
    e2e.ti.com/.../426834
  • Dpa,

    Can you explain what your concurrency requirement is? From your usage, one side just writes a small amount of data to the remote side, and the remote side is purely reading. Do you need a way to make sure all the data is available on the remote side before starting to read?

    Regards, Eric

  • Hi Eric,

    I just want to be sure that there is no concurrent memory access to the same data buffer. There does not seem to be any locking mechanism for Hyperlink by default. If one SoC is reading its local data buffer while the remote side is accessing it through Hyperlink, how can I be sure that there is no concurrent access?

    One side accesses the buffer via memory mapping from user space, and the other via the Hyperlink peripheral, which triggers a VBUSM master write operation.
    I would say that they can conflict if I do not use any locking or a lock-free algorithm.

    Regards, dpa

  • dpa,

    Assuming there is a local buffer, the remote side writes into this buffer via Hyperlink. On the local side, we use polling to determine whether the data pattern has arrived. It is possible that a remote write and a local read happen at the same time for a given address, but the next poll will give you the right data. Why is this usage a problem?

    Regards, Eric

    Thanks Eric for your answer,

    How can you be sure that all the data has arrived? And how do you know when it has?

    In my example code above, the data buffer is placed in the SRAM handled by the MSMC. I suppose that this ensures the integrity of my data when reading.

    But is it blockwise?

    In other words, can it happen that the reader is reading a memory block at the same time a block transfer is initiated by Hyperlink due to a write operation? That would mean I lose data consistency: when reading, part of the data would already be new while the rest would still not be updated.

    My understanding of this topic is limited; it may be a simple question with a simple answer.

    I want to have 100% assurance that I have data integrity but also data consistency.

    Regards,

    Damien

  • Hi,

    I summarized below what I understood of the way it is working.

    Is it correct?

    If yes, how is the arbitration done when reading and writing at the same time?

  • dpa,

    Hyperlink write is strictly ordered. What you can do:

    • Write a block of data to the remote side via Hyperlink
    • Write an additional word to the next address with a fixed pattern via Hyperlink
    • On the remote side, poll this address. When you see the pattern, you know all the block data prior to this address has arrived
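
    The steps above can be sketched as follows. This is a minimal illustration that uses a plain local array in place of the Hyperlink-mapped buffer; the buffer layout, function names, and sentinel value are assumptions for the sketch, not part of the MCSDK example:

```cpp
#include <cstdint>

// Illustrative layout: data words followed by one sentinel word.
// On real hardware this region would be the Hyperlink-mapped buffer.
constexpr int kDataWords = 8;
constexpr uint32_t kSentinel = 0xCAFEF00Du;  // assumed fixed pattern

volatile uint32_t region[kDataWords + 1] = {0};

// Writer side: Hyperlink writes are strictly ordered, so the sentinel
// word written last is also observed last on the remote side.
void remote_write(const uint32_t *data) {
    for (int i = 0; i < kDataWords; ++i)
        region[i] = data[i];
    region[kDataWords] = kSentinel;  // written after all data words
}

// Reader side: poll the sentinel; once seen, the data words are valid.
bool poll_and_read(uint32_t *out) {
    if (region[kDataWords] != kSentinel)
        return false;               // data not complete yet
    for (int i = 0; i < kDataWords; ++i)
        out[i] = region[i];
    region[kDataWords] = 0;         // clear for the next transfer
    return true;
}
```

    On the real link, `region` would point into the area returned by `hyplnk_mmap()`, and the "sentinel lands last" property comes from Hyperlink's strictly ordered writes rather than from anything the compiler guarantees.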

    Regards, Eric 

  • Thanks for the suggestion Eric,

    I did not mention that I want to send this small amount of data periodically (in the ms range), so I cannot use this method, since I cannot tell whether the pattern is from a previous write or not.

    My question was more about understanding concurrency generally in the context of Hyperlink: how a local read and a remote write (through Hyperlink) are handled when using the memory-mapped example from the MCSDK.

    To be more specific, let me just ask one question then:

    Using the memory-mapped example from the MCSDK with a data packet of less than 256 bits, can I be sure that a local read on the ARM and a remote write through Hyperlink are performed atomically?

    I know that I can use the Hyperlink interrupt packet or a hardware semaphore, for example, but I want to understand how it works, so that I do not overengineer a solution if the hardware design already ensures that such a small packet is read/written atomically.

  • Do I need to open a new thread for this specific question?
  • dpa,

    It is possible to make a local read on the ARM and a remote write through Hyperlink atomic. We may need to give the Hyperlink master priority on TeraNet, so the ARM cannot read the buffer in the middle of the write access. This needs more exploration.

    The methods discussed above are well understood and more manageable to implement:
    - Hyperlink interrupt
    - HW semaphore
    - After the transfer, write a pattern to the next address (the pattern can be a counter, so when you see it change via polling, you know new data has come in)
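
    The counter variant addresses the periodic case: since the sequence word changes on every transfer, the reader can distinguish a new block from a stale one. A minimal sketch, again using a plain array to stand in for the mapped region (layout and names are assumptions for illustration):

```cpp
#include <cstdint>

constexpr int kDataWords = 8;

// Data words followed by a sequence counter word (illustrative layout).
volatile uint32_t region[kDataWords + 1] = {0};

static uint32_t next_seq = 1;   // writer-side counter; 0 means "nothing yet"
static uint32_t last_seen = 0;  // reader-side last consumed sequence

// Writer: data first, counter last (Hyperlink writes are strictly ordered).
void remote_write(const uint32_t *data) {
    for (int i = 0; i < kDataWords; ++i)
        region[i] = data[i];
    region[kDataWords] = next_seq++;
}

// Reader: a changed counter means a complete new block is present.
bool poll_and_read(uint32_t *out) {
    uint32_t seq = region[kDataWords];
    if (seq == last_seen)
        return false;           // no new transfer since the last read
    for (int i = 0; i < kDataWords; ++i)
        out[i] = region[i];
    last_seen = seq;
    return true;
}
```

    If a new transfer can land while the reader is mid-copy, re-reading the counter after the copy (seqlock style) and retrying on a mismatch would detect the overlap.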

    Also, in the Hyperlink status register at offset 0x8, you can check bit 1 (mpend) and bit 11 (rpend) to make sure they are cleared.
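
    As a sketch, that bit check can be wrapped in a small helper. On target, the register value would be read through the mmap'd Hyperlink register base; only the 0x8 offset and the mpend/rpend bit positions come from the post above, the rest is illustrative:

```cpp
#include <cstdint>

// Hyperlink status register offset and pending bits, per the discussion above.
constexpr uint32_t HYPLNK_STATUS_OFFSET = 0x8;
constexpr uint32_t HYPLNK_STATUS_MPEND  = 1u << 1;   // bit 1: master pending
constexpr uint32_t HYPLNK_STATUS_RPEND  = 1u << 11;  // bit 11: remote pending

// Returns true when neither master nor remote transactions are pending.
bool hyplnk_idle(uint32_t status_reg_value) {
    return (status_reg_value & (HYPLNK_STATUS_MPEND | HYPLNK_STATUS_RPEND)) == 0;
}
```

    On target, `status_reg_value` would come from something like `*(volatile uint32_t *)(hyplnk_regs_base + HYPLNK_STATUS_OFFSET)`, with `hyplnk_regs_base` obtained via `hyplnk_mmap()` as in the MCSDK example.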

    Regards, Eric

  • Hi Eric,

    Thank you for this highly valuable answer.

    lding said:

    It is possible to make a local read on the ARM and a remote write through Hyperlink atomic. We may need to give the Hyperlink master priority on TeraNet, so the ARM cannot read the buffer in the middle of the write access. This needs more exploration.

    Is there any chance that we can get extra support on how to do this in detail (registers to set, C code)?

    lding said:

    Also, in the Hyperlink status register at offset 0x8, you can check bit 1 (mpend) and bit 11 (rpend) to make sure they are cleared.

    I do not know how I missed these. Combined with the serial_stop control bit, this would do the job with a polling solution. Thank you.

    Regards, dpa