I have some code which presently triggers a DMA transfer off GPBNKINT5: the DMA controller reads a packet from an external FPGA via EMIF into L1Data, and when it finishes I get another IRQ that lets me swap buffers and process the data.
It is important in this application that the data is read once and only once, and with constant latency (it is a CBR data channel without sequence number protection). The FPGA therefore generates exactly one IRQ for every packet it has ready to be picked up, and has a timer to ensure the line is always held low for 120 ns before the rising edge of the next IRQ.
The IRQ is automatically cleared when the DMA controller accesses the last byte of the fixed size packet.
This all works well with single packets, but when things are loaded and the FPGA is putting out IRQs separated by only the enforced 120 ns low, the DMA controller misses IRQs: I can see a difference between the number of DMA completion IRQs and the number of trigger IRQs from the FPGA (these are also connected to 2 GPIOs so I can have another IRQ handler count them).
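For reference, the counting is nothing more elaborate than the sketch below; the handler names are just placeholders (how they get hooked to the GPIO edges depends on the interrupt dispatcher), but it shows what I am comparing.

#include <stdint.h>

/* Each GPIO edge ISR only bumps a counter; the main loop compares them. */
volatile uint32_t fpga_trigger_count = 0;   /* rising edges on the FPGA IRQ line */
volatile uint32_t dma_complete_count = 0;   /* DMA completion interrupts         */

/* ISR on the GPIO mirroring the FPGA trigger line */
void gpio_fpga_trigger_isr(void)
{
    fpga_trigger_count++;
}

/* ISR on the GPIO mirroring the DMA completion interrupt */
void gpio_dma_complete_isr(void)
{
    dma_complete_count++;
}

/* Any growing gap here means the DMA controller is missing trigger IRQs. */
uint32_t missed_triggers(void)
{
    return fpga_trigger_count - dma_complete_count;
}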
I can see 2 possible problems.
Most obvious: I have not set up the DMA controller in time, so it is not ready for the IRQ. I am looking into this...
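At the moment the completion ISR does roughly what is sketched below (the two dma_* functions are stand-ins for the actual register writes, which are device specific, and the buffer size is made up). My worry is the window between the completion IRQ firing and the channel being re-armed at the end.

#include <stdint.h>

#define PACKET_WORDS 256u   /* placeholder size, not the real packet length */

static uint32_t buffer_a[PACKET_WORDS];
static uint32_t buffer_b[PACKET_WORDS];
static uint32_t *fill_buf  = buffer_a;   /* buffer the DMA writes into next        */
static uint32_t *ready_buf = buffer_b;   /* buffer handed to the processing code   */
static volatile int packet_ready = 0;

/* Stubs standing in for the real controller accesses */
static void dma_set_destination(uint32_t *dst)
{
    (void)dst;   /* real code writes the channel's destination register here */
}
static void dma_rearm_channel(void)
{
    /* real code re-enables the GPBNKINT5-triggered channel here */
}

/* DMA completion ISR: swap buffers and re-arm. If the FPGA raises the next
 * trigger before dma_rearm_channel() has finished, that trigger could be
 * lost - this is the window I am worried about. */
void dma_completion_isr(void)
{
    uint32_t *t = ready_buf;
    ready_buf = fill_buf;
    fill_buf  = t;

    dma_set_destination(fill_buf);   /* point the channel at the other buffer */
    dma_rearm_channel();             /* channel not armed again until this completes */

    packet_ready = 1;                /* main loop processes ready_buf */
}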
Another problem might be that, since the auto clear happens during the DMA controller's read phase, there is some period during which it will ignore a new IRQ while it is writing the data, doing an int_ack (?) and re-enabling the IRQ. How much time do I need to allow for this? Is 120 ns ample?
Ta
Chris