In the first installment of this blog series, I compared Multicore Navigator to a Swiss Army knife of functionality and tools, and discussed using the Queue Manager for synchronization tasks. In this second installment, I will discuss the next tool: using the Queue Manager for notification jobs. Notification is used when a task or processor has a buffer of data that another task or processor needs. In this case, we don’t need to involve a DMA engine to move the data, because both the producer and consumer can access it where it is. This is sometimes called a “zero copy” message, and it can be very efficient. It can also be thought of as “synchronization with data” (the technical term is a mailbox).
Using Navigator for Notification
Notification works exactly like synchronization, except that the consumer needs to know where the data buffer is located. If the buffer location never changes, there is no need to send its address (and a simple synchronization can be used instead). If the buffer location can change, the producer needs to send it along with the sync.
In the Multicore Navigator world, what is pushed to a queue is usually the address of a descriptor. A descriptor is a small header of data that is normally used by one of the Navigator Packet DMAs to move data from one location to another; it describes the data, hence the name. For a notification job, we don’t need a Packet DMA, but we can still use the descriptor as a place to store the address of our buffer.
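To make the idea concrete, here is a minimal sketch of a descriptor used purely as a notification "envelope". The field names are illustrative only, not the actual KeyStone descriptor layout (see the Multicore Navigator User Guide for the real formats):

```c
#include <stdint.h>

/* Hypothetical descriptor used only to carry a buffer address and
 * length from producer to consumer. Real Navigator descriptors have
 * a defined hardware layout; this struct is just an illustration. */
typedef struct {
    uint32_t desc_info;   /* type/length info (unused in this sketch) */
    uint32_t buffer_len;  /* number of valid bytes in the buffer */
    void    *buffer_addr; /* pointer to the shared data buffer */
} notify_desc_t;

/* The producer fills in the fields it cares about before the push. */
static inline void notify_desc_set(notify_desc_t *d, void *buf, uint32_t len)
{
    d->buffer_addr = buf;
    d->buffer_len  = len;
}
```

Because both sides can dereference `buffer_addr` directly, the data itself never moves.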
Like the synchronization job described previously, the producer pops a descriptor from a free queue and pushes it to a sync queue (this time we call it a tx/rx queue, though that is just a name; these are all general-purpose queues). The difference now is that the producer must also write the buffer address into the descriptor before pushing it to the tx/rx queue. When the consumer pops the tx/rx queue, it reads the descriptor and gets the buffer address from it. The consumer now has immediate access to the buffer of data, which never had to be moved from where the producer created it. Here’s a picture:
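The pop/write/push flow above can be sketched in software. This is a model only: the hardware queues are stood in for by simple ring buffers of descriptor pointers, and all names are hypothetical (on a real device, push and pop are accesses to the QMSS queue registers):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative descriptor: buffer address plus length. */
typedef struct {
    uint32_t buffer_len;
    void    *buffer_addr;
} desc_t;

/* Software stand-in for a hardware queue of descriptor pointers. */
#define QUEUE_DEPTH 8
typedef struct {
    desc_t  *slot[QUEUE_DEPTH];
    unsigned head, tail;
} queue_t;

static void queue_push(queue_t *q, desc_t *d)
{
    q->slot[q->tail++ % QUEUE_DEPTH] = d;
}

static desc_t *queue_pop(queue_t *q)
{
    if (q->head == q->tail)
        return NULL;              /* queue empty: nothing to consume */
    return q->slot[q->head++ % QUEUE_DEPTH];
}

/* Producer: pop a free descriptor, record the buffer, push to tx/rx. */
static void producer_notify(queue_t *free_q, queue_t *txrx_q,
                            void *buf, uint32_t len)
{
    desc_t *d = queue_pop(free_q);
    if (d == NULL)
        return;                   /* no free descriptors available */
    d->buffer_addr = buf;
    d->buffer_len  = len;
    queue_push(txrx_q, d);
}

/* Consumer: pop the tx/rx queue and get direct access to the buffer. */
static void *consumer_wait(queue_t *txrx_q, uint32_t *len_out)
{
    desc_t *d = queue_pop(txrx_q);
    if (d == NULL)
        return NULL;
    *len_out = d->buffer_len;
    return d->buffer_addr;        /* zero copy: data stays in place */
}
```

Note that the only thing that moves between cores is the descriptor pointer; the buffer stays wherever the producer created it.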
Let’s take the above notification example and imagine that the producer needs to know when the consumer has finished using the buffer. As is the case with most things as flexible as Multicore Navigator, there are many ways to do it. The most important thing is that the producer and consumer abide by the same set of rules.
So how can it be done? The simplest way would be to add a third queue; we could call it a “completion” queue. The consumer pops from the tx/rx queue, processes the buffer, then pushes the descriptor to the completion queue. If the consumer is performing an operation such as a checksum on the buffer, the result can be returned in the descriptor. If it is performing a transform of the data (and the buffer itself is the output), the address can be left in the descriptor. Either way, this accomplishes a second notification, and we are operating on the same data without moving it, saving valuable processing time.
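The completion-queue handshake can be sketched the same way, again modeling the hardware queues in software. The `result` field is an illustrative use of spare descriptor space, not a fixed layout, and the checksum is a placeholder for whatever processing the consumer does:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative descriptor with room for a returned result. */
typedef struct {
    uint32_t buffer_len;
    void    *buffer_addr;
    uint32_t result;      /* e.g. a checksum computed by the consumer */
} desc_t;

/* Software stand-in for a hardware queue (same model as before). */
#define QUEUE_DEPTH 8
typedef struct { desc_t *slot[QUEUE_DEPTH]; unsigned head, tail; } queue_t;

static void queue_push(queue_t *q, desc_t *d)
{
    q->slot[q->tail++ % QUEUE_DEPTH] = d;
}

static desc_t *queue_pop(queue_t *q)
{
    return (q->head == q->tail) ? NULL : q->slot[q->head++ % QUEUE_DEPTH];
}

/* Consumer side: process the buffer, store the result in the
 * descriptor, then push it to the completion queue. */
static void consumer_process(queue_t *txrx_q, queue_t *comp_q)
{
    desc_t *d = queue_pop(txrx_q);
    if (d == NULL)
        return;
    uint32_t sum = 0;
    const uint8_t *p = d->buffer_addr;
    for (uint32_t i = 0; i < d->buffer_len; i++)
        sum += p[i];              /* simple additive checksum */
    d->result = sum;
    queue_push(comp_q, d);        /* second notification, back to producer */
}

/* Producer side: poll the completion queue for the finished descriptor. */
static desc_t *producer_poll_completion(queue_t *comp_q)
{
    return queue_pop(comp_q);
}
```

Pushing the same descriptor back closes the loop: the producer learns both that the buffer is free and, via `result`, what the consumer computed.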
In the next installment, we will introduce another key tool in the Swiss Army knife, the Packet DMA, and discuss how it is used for messaging. Do you have any Navigator questions so far? Stay tuned!
I have had trouble determining the details of the Packet DMA infrastructure. Specifically, data burst size (DBS), which components are viable sources & destinations, and number of transfer controllers.
This is probably better suited for the forums, but that's okay. The Packet DMA is not your usual DMA. There are no transfer controllers per se, and data flow is determined not so much by a burst size as by the PSI (packet streaming interface) that connects the pktDMA. The PSI interface is 128 bits wide, and there is one each for Rx and Tx. Each clock, the pktDMA can send and receive 128 bits (16 bytes) of data. The Infrastructure pktDMA is the only pktDMA in the device that is meant not for peripheral I/O but for memory-to-memory transfers. It can, however, be used in a chain so that its output queue triggers the input channel of a peripheral's pktDMA. The Infrastructure pktDMA has 32 channels, so it can process 32 transfers simultaneously. You can refer to the Multicore Navigator User Guide on TI.com (search for SPRUGR9E) for more details.
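The 16-bytes-per-clock figure makes peak PSI bandwidth a simple multiplication. A back-of-the-envelope helper (the clock rate passed in is a placeholder, not a specified value for any particular device):

```c
#include <stdint.h>

/* Peak PSI throughput per direction: 128 bits (16 bytes) per clock.
 * clock_hz is whatever clock drives the PSI on a given device; the
 * caller supplies it, since it varies by part and configuration. */
static uint64_t psi_peak_bytes_per_sec(uint64_t clock_hz)
{
    const uint64_t bytes_per_clock = 128u / 8u;  /* 16 bytes */
    return clock_hz * bytes_per_clock;
}
```

Since Rx and Tx have separate 128-bit interfaces, send and receive can each sustain this rate at the same time.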