DMA setup time

The enclosed figure describes one cycle of our task.
We are triggered by a signal (CH4) going high (event 1 in the figure). Upon arrival of this signal, we have to acquire two channels, CH1 and CH2, at 1 MSample/s and 16 bits per sample via 2 SPIs. We do not know for how long CH4 will stay high, but we do know the maximum duration for which it can be high (1 ms). We therefore want to allocate two DMA channels that transfer the data into two memory sectors, say S1 and S2, that are large enough. When CH4 goes low (event 3), we would like to deallocate those DMA channels and evaluate some statistics on the data in S1 and S2. At the same time we would like to allocate another two DMA channels that transfer the data from CH1 and CH2 into two different memory sectors, S3 and S4, again via the 2 SPIs. Then we have to read a single value from a register (event 4) and put it in memory sector S5. CH1 and CH2 have to be read until event 6, when CH4 goes high again, and some statistics have to be calculated from S3 and S4. The process is then repeated.
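
For clarity, one cycle could be sketched roughly as follows in C; the function and buffer names (wait_for_ch4_high, start_dma, compute_stats, and so on) are only placeholders for whatever driver calls we will actually use, not a specific API.

#include <stdint.h>

/* All of these are hypothetical placeholders for the real driver calls. */
extern void     wait_for_ch4_high(void);
extern void     wait_for_ch4_low(void);
extern void     start_dma(int spi_channel, uint16_t *dest);
extern void     stop_dma(int spi_channel);
extern void     compute_stats(const uint16_t *sector);
extern uint32_t read_register(void);

enum { SPI1_CH = 0, SPI2_CH = 1 };         /* placeholder DMA/SPI channel ids      */
extern uint16_t S1[], S2[], S3[], S4[];    /* memory sectors, each sized for 1 ms  */
static uint32_t S5;                        /* single register value                */

void acquisition_loop(void)
{
    for (;;) {
        wait_for_ch4_high();                         /* event 1: CH4 goes high     */
        start_dma(SPI1_CH, S1);                      /* CH1 -> S1, 1 MS/s, 16-bit  */
        start_dma(SPI2_CH, S2);                      /* CH2 -> S2                  */

        wait_for_ch4_low();                          /* event 3: CH4 goes low      */
        stop_dma(SPI1_CH); start_dma(SPI1_CH, S3);   /* switch CH1 to S3           */
        stop_dma(SPI2_CH); start_dma(SPI2_CH, S4);   /* switch CH2 to S4           */
        compute_stats(S1); compute_stats(S2);

        S5 = read_register();                        /* event 4: one value into S5 */

        wait_for_ch4_high();                         /* event 6: CH4 high again    */
        compute_stats(S3); compute_stats(S4);        /* then the cycle repeats     */
    }
}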

My question is the following:
How long does it take to stop the DMA transfers to S1 and S2 after event 3, and to allocate the new DMA channels that transfer to S3 and S4? Should we expect to lose any data around event 3? If so, how many samples?

Hope to hear from you soon.
Best

Daniele

TI_question.pdf
  • Hello,

    Here is some information that should be helpful:

    The trouble may not be the delay so much as the asynchronous nature of the entire process.

    On receipt of the events you need to reprogram the DMA. If any SPI transfer event arrives while the DMA is being reprogrammed, that event could perhaps be lost, and the lost DMA event may actually leave the SPI itself in a confused state. This may be avoidable if you can stop a DMA channel, reprogram it and restart it in such a way that no event is lost.
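
    As an illustration only, a straightforward reprogramming handler might look like the sketch below; dma_stop, dma_set_dest, dma_start and the channel identifiers are placeholder names, not a specific TI driver API. Any SPI request that fires inside the marked window is the one at risk of being dropped.

    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder driver calls; substitute the real DMA API of the device. */
    extern void dma_stop(int dma_channel);
    extern void dma_set_dest(int dma_channel, uint16_t *dest, size_t len);
    extern void dma_start(int dma_channel);

    enum { DMA_SPI1 = 0, DMA_SPI2 = 1 };   /* placeholder channel ids                 */
    extern uint16_t S3[], S4[];            /* destination sectors for the next burst  */
    #define SECTOR_LEN 4096u               /* assumed size, > 1 ms at 1 MSample/s     */

    /* Called on the CH4 falling edge (event 3). */
    void ch4_falling_edge_isr(void)
    {
        dma_stop(DMA_SPI1);                    /* window opens: SPI1 requests unserviced */
        dma_set_dest(DMA_SPI1, S3, SECTOR_LEN);
        dma_start(DMA_SPI1);                   /* window closes                          */

        dma_stop(DMA_SPI2);                    /* the same race applies to SPI2          */
        dma_set_dest(DMA_SPI2, S4, SECTOR_LEN);
        dma_start(DMA_SPI2);
    }

    At 1 MSample/s a new sample arrives every microsecond, so even a few microseconds spent between stopping and restarting a channel can coincide with a transfer request.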

    The other option is simply to let the DMA fill longer virtual buffers, say B1 and B2. On each event on channel 4 you simply note how far the DMA has progressed at that moment. So if at the beginning (event 1, E1) the DMA positions were B1.E1 and B2.E1, then at event 3 note the positions B1.E3 and B2.E3, and similarly at event 6, B1.E6 and B2.E6. Now S1 is the portion of B1 from B1.E1 to B1.E3; S2 is B2 from B2.E1 to B2.E3; S3 is B1 from B1.E3 to B1.E6; and S4 is B2 from B2.E3 to B2.E6, and so on.
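
    A minimal sketch of that bookkeeping, assuming the DMA controller lets you read back its current destination pointer (dma_get_dest_address and the channel ids below are placeholders, not a specific TI API):

    #include <stdint.h>
    #include <stddef.h>

    #define BUF_SAMPLES 8192u              /* long "virtual" buffers, sized generously  */

    static uint16_t B1[BUF_SAMPLES];       /* filled continuously by DMA from SPI1/CH1  */
    static uint16_t B2[BUF_SAMPLES];       /* filled continuously by DMA from SPI2/CH2  */

    /* Positions captured at the CH4 events. */
    static volatile size_t b1_e1, b1_e3, b1_e6;
    static volatile size_t b2_e1, b2_e3, b2_e6;

    enum { DMA_SPI1 = 0, DMA_SPI2 = 1 };   /* placeholder channel ids */

    /* Placeholder: read back the channel's current destination address. */
    extern uintptr_t dma_get_dest_address(int dma_channel);

    static size_t dma_index(int dma_channel, const uint16_t *buf)
    {
        return (dma_get_dest_address(dma_channel) - (uintptr_t)buf) / sizeof(uint16_t);
    }

    /* Called from the CH4 edge interrupt; 'event' is 1, 3 or 6. */
    void ch4_event_isr(int event)
    {
        size_t i1 = dma_index(DMA_SPI1, B1);
        size_t i2 = dma_index(DMA_SPI2, B2);

        switch (event) {
        case 1: b1_e1 = i1; b2_e1 = i2; break;   /* S1 = B1[b1_e1..b1_e3), S2 = B2[b2_e1..b2_e3) */
        case 3: b1_e3 = i1; b2_e3 = i2; break;   /* S3 = B1[b1_e3..b1_e6), S4 = B2[b2_e3..b2_e6) */
        case 6: b1_e6 = i1; b2_e6 = i2; break;
        }
    }

    The DMA never has to stop in this scheme, which is what removes the risk of dropping samples; the only uncertainty is exactly which index gets recorded at each event.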

    This should guarantee that no data is lost. Depending on the timing, it is possible that a data element gets binned into S1 instead of S3, or into S3 instead of S1. The algorithm can manage this discrepancy. The actual likelihood of such a shift depends on many conditions beyond the hardware delay: how channel 4 is sampled, how much delay there is between an event on channel 4 and the interrupt reaching the processor, and how long it takes before that interrupt is serviced. In my understanding, especially if a high-level OS (HLOS) is running, such delays would be much longer than the hardware delays.

    Regards,
    Marc