OMAP-L137/C674x EDMA3 timing limitations

Other Parts Discussed in Thread: OMAP-L137

Hi all!

I have a parallel 12-bit ADC interfaced to my OMAP-L137 via GPIO pins. This ADC is clocked to sample at 5 MHz; in other words, every 0.2 usec my data are ready and available in the GPIO registers. Once the data are ready, I need to transfer one array of 4 bytes (the data and control signal values) from the GPIO registers to a destination buffer in memory (I have already validated that a DMA transfer from the GPIO registers is feasible).

I configured my EDMA paramSet with A-synchronization (ACNT=4, BCNT=1024, CCNT=1), so that each 4-byte element (each transfer triggered by a periodic event) is copied into a buffer of 1024 * 4 bytes.
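
For reference, here is a minimal sketch of the kind of paramSet configuration I mean, written with direct register writes in the CSL field layout; the EDMA channel/TCC number and the GPIO bank register are placeholders only:

    /* Sketch of the A-synchronized PaRAM set (ACNT=4, BCNT=1024, CCNT=1).
     * The channel/TCC number and the GPIO input register are placeholders. */
    #include <stdint.h>

    typedef struct {
        uint32_t OPT;
        uint32_t SRC;
        uint32_t A_B_CNT;       /* BCNT[31:16] | ACNT[15:0] */
        uint32_t DST;
        uint32_t SRC_DST_BIDX;  /* DSTBIDX[31:16] | SRCBIDX[15:0] */
        uint32_t LINK_BCNTRLD;
        uint32_t SRC_DST_CIDX;
        uint32_t CCNT;
    } EDMA3_ParamSet;

    #define GPIO_IN_DATA01     (0x01E26020u)  /* placeholder: IN_DATA register of the bank wired to the ADC */
    #define EDMA3_PARAM_BASE   (0x01C04000u)  /* EDMA3 CC PaRAM base on OMAP-L137 */
    #define CHANNEL            6u             /* placeholder event/channel number */

    static uint32_t dst_buffer[1024];

    void configure_param_set(void)
    {
        volatile EDMA3_ParamSet *prm =
            (volatile EDMA3_ParamSet *)(EDMA3_PARAM_BASE + CHANNEL * 0x20u);

        prm->OPT          = (CHANNEL << 12) | (1u << 20);  /* TCC = channel, TCINTEN set, SYNCDIM = 0 (A-sync) */
        prm->SRC          = GPIO_IN_DATA01;                /* fixed source: GPIO input register */
        prm->A_B_CNT      = (1024u << 16) | 4u;            /* BCNT = 1024, ACNT = 4 bytes */
        prm->DST          = (uint32_t)dst_buffer;
        prm->SRC_DST_BIDX = (4u << 16) | 0u;               /* DSTBIDX = 4, SRCBIDX = 0 (source stays fixed) */
        prm->LINK_BCNTRLD = 0xFFFFu;                       /* NULL link */
        prm->SRC_DST_CIDX = 0u;
        prm->CCNT         = 1u;
    }

The idea is that SRCBIDX = 0 keeps the source pinned on the GPIO register while DSTBIDX = 4 walks through the 1024 * 4 byte buffer, one 4-byte element per event.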

Thus, I need to trigger a DMA transfer (of 4 bytes) at a frequency of 5 MHz, i.e. every 0.2 usec.

My question is thus: is there any minimum time required between two successive DMA transfers triggered by two successive events? Will the EDMA be able to complete each 4-byte transfer within the real 0.2 usec, knowing that after 204.8 usec (1024 * 0.2 usec) I must have 1024 * 4 bytes successfully copied into memory?

I could not find this information in the EDMA3 documentation... Could anyone help me clarify this feasibility?

Thanks in advance for your support,

Mai.

  • Hi,

    Thanks for your post.

    You are right that there is some minimum offset required between two successive DMA transfers, but it depends largely on your application code. If your application code is BIOS based, the software overhead will affect the delay between two consecutive DMA transfers: how much time BIOS takes and how much time it takes to configure the EDMA depend on how efficiently the code is written from a software perspective.

    Depending on the successive peripheral events triggered, there will also be some variation in the delay between CS hold going low and the start of data. Between the end of data and CS going high, some time is spent in the EDMA ISR and then inside the EDMA callback before the CS hold is reset.

    So, in my opinion, the minimum time required between two successive DMA transfers will vary; it depends on the software overhead, the performance of the peripheral event type, and so on. In addition, please check whether the slave device (if any) is causing a delay.

    If your application code is BIOS based, the following sequence contributes to the software overhead, so the delay between two successive DMA transfers will vary whenever there is a request to the driver for a read/write [i.e. GIO_write()]:

    1. The driver configures the EDMA parameters.

    2. Just before the EDMA is enabled, the CS hold is enabled.

    3. The EDMA is enabled.

    4. Wait for the EDMA callback.

    Note: The EDMA callback is generated whenever the EDMA completes its transaction. On occurrence of the EDMA completion interrupt, the completion handler parses the IPR register to find the appropriate "tcc" for which the EDMA has completed. Once the appropriate "tcc" is found, the registered callback of the device is called, and in the callback the device-specific configuration is made to complete the I/O (a rough sketch of this path follows the list below).

    5. In the (Tx/Rx) callback, the CS hold is disabled; then the device-specific configurations are made to complete the I/O.
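
    As a rough illustration of that completion path, here is a minimal sketch of an EDMA3 completion handler that parses IPR, clears each pending bit, and invokes a registered callback. The register offsets are those of the EDMA3 CC global region on OMAP-L137; the callback table is a hypothetical construct for illustration, not the actual driver code.

        /* Sketch: scan IPR for completed TCCs, clear them via ICR,
         * and call the callback registered for each TCC. */
        #include <stdint.h>

        #define EDMA3CC_BASE  0x01C00000u
        #define EDMA3CC_IPR   (*(volatile uint32_t *)(EDMA3CC_BASE + 0x1068u))
        #define EDMA3CC_ICR   (*(volatile uint32_t *)(EDMA3CC_BASE + 0x1070u))

        typedef void (*edma_callback_t)(uint32_t tcc);
        static edma_callback_t callback_table[32];   /* one entry per TCC 0..31 (hypothetical) */

        void edma3_completion_isr(void)
        {
            uint32_t pending = EDMA3CC_IPR;          /* which TCCs have completed */
            uint32_t tcc;

            for (tcc = 0u; tcc < 32u; tcc++) {
                if (pending & (1u << tcc)) {
                    EDMA3CC_ICR = (1u << tcc);       /* clear the pending bit */
                    if (callback_table[tcc] != 0) {
                        callback_table[tcc](tcc);    /* device-specific completion work */
                    }
                }
            }
        }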

    Thanks & regards,

    Sivaraj K


  • Hi Sivaraj,

    First of all, many thanks for your reply.

    I forgot to mention that my application is not BIOS based. Everything is managed by interrupts, and I configured the EDMA registers with the CSL libraries.

    I first tried to generate the DMA events from the timer TM64O_OUT. At 1 Hz the transfers worked well, but when tested at 3 MHz the transfers were not completed in the expected time. This is normal and inevitable when the timer triggers the transfers, because a CPU interrupt is also needed every timeout period to clear the timer flag in an ISR (too fast at 3 MHz) for the transfers to succeed.

    Thus, I now plan to trigger the transfer from an external event on a GPIO pin instead of the timer (no need to go through an interrupt to clear a flag). But I wonder whether, even in that case, where the CPU is no longer interrupted at all, it will be possible to complete the DMA transfers in the expected time, and if so, up to what frequency?
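
    To be concrete, here is a minimal sketch of the GPIO-triggered setup I have in mind, using direct register writes. The pin, bank, and channel numbers are placeholders only; the actual EDMA channel is fixed by the event mapping table in the OMAP-L137 data manual.

        /* Sketch: use the GPIO bank interrupt purely as an EDMA sync event.
         * The CPU-side interrupt for this bank is left unmapped in the INTC,
         * so no CPU interrupt occurs per sample. */
        #include <stdint.h>

        #define GPIO_BASE            0x01E26000u
        #define GPIO_BINTEN          (*(volatile uint32_t *)(GPIO_BASE + 0x08u))  /* bank interrupt enable */
        #define GPIO_SET_RIS_TRIG01  (*(volatile uint32_t *)(GPIO_BASE + 0x24u))  /* rising-edge trigger, banks 0/1 */

        #define EDMA3CC_BASE         0x01C00000u
        #define EDMA3CC_ECR          (*(volatile uint32_t *)(EDMA3CC_BASE + 0x1008u)) /* event clear */
        #define EDMA3CC_EESR         (*(volatile uint32_t *)(EDMA3CC_BASE + 0x1030u)) /* event enable set */

        #define ADC_READY_PIN        2u   /* placeholder: GPIO pin carrying the ADC "data ready" edge */
        #define GPIO_EDMA_CHANNEL    6u   /* placeholder: channel hard-mapped to this GPIO bank event */

        void arm_gpio_triggered_edma(void)
        {
            /* Generate a bank event on each rising edge of the data-ready pin. */
            GPIO_SET_RIS_TRIG01 = (1u << ADC_READY_PIN);

            /* Enable the bank interrupt so the event reaches the EDMA. */
            GPIO_BINTEN |= (1u << 0);     /* bank 0 in this example */

            /* Clear any stale event, then enable event-triggered transfers. */
            EDMA3CC_ECR  = (1u << GPIO_EDMA_CHANNEL);
            EDMA3CC_EESR = (1u << GPIO_EDMA_CHANNEL);
        }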

    Thanks again for your help.

    Mai.