Hello,
I have a design that pairs a C6748 DSP with an FPGA, where the FPGA shifts data to the DSP, which is then retrieved from the AFIFO via DMA. To ensure that the FPGA interrupts the DSP only after it has processed all received data, I need to come up with a deterministic calculation for that processing time. One part that is not clear is the time it will take to transfer X words from the FIFO to internal memory via DMA. Is there a way to determine that?
Regards,
Robert
Hello Robert,
Could you please see if the following helps answer your question?
Regards,
Sahin
Robert Wolfe said:I need to come up with a deterministic calculation for that processing time. One part that is not clear is the time it will take to transfer X words from the FIFO to internal memory via DMA. Is there a way to determine that?
This will be influenced by several items:
- Other EDMA traffic: In particular, the EDMA contains event queues. The latency will be impacted by the way in which you assign a given event to a queue as well as by the size of the associated transfers. In other words, the more events queued ahead of your McASP event, and the more data those events need to transfer, the more latency you'll observe.
- Interconnect latency: There are shared bridges within the Switched Central Resource, so contention for those bridges could further delay the data movement.
- Interrupt management: If you are disabling interrupts anywhere in your system, that can further extend the time needed to service the McASP interrupt.
Of these items, the first is the easiest to control. You can program the DMAQNUMn registers such that your McASP transfer maps to Queue 0 while all other transfers map to Queue 1, assuming this particular event is the most time-sensitive thing in your system. In this manner, Queue 0 is always ready to immediately handle your DMA request when it arrives, and you avoid any associated queue latency. I've been involved in many issues over the years where customers were trying to solve real-time problems associated with DMA/McASP, and this one change solved pretty much all of them. In other words, you should pay very close attention to the DMAQNUMn configuration, as it is THE critical knob with respect to controlling DMA/McASP latency.
Getting back to your original question, I expect you will need to do some benchmarking in order to ascertain the performance of these scenarios. As indicated, this is a very multi-faceted problem, so a simple calculation isn't really possible. My suggestion is to benchmark a "best case" (i.e. no competing traffic) and to synthetically create a "worst case" (e.g. where you deliberately perform tons of accesses to the same bus). That will give you a range of what to expect.
Best regards,
Brad
Sahin Okur said:Hello Robert,
Could you please see if the following helps answer your question?
Regards,
Sahin
Hi,
Thanks for the reply. There is some useful information there that I'll comb through. At first glance, it doesn't appear to show the way to an exact deterministic calculation of FIFO-to-memory transfer time, but it does have a lot of benchmarks that give a sense of what it might be.
Robert
Brad Griffis said:This will be influenced by several items:
- Other EDMA Traffic: In particular, the EDMA contains event queues. The latency will be impacted by the way in which you assign a given event to a queue as well as the size of the associated transfers. In other words, the more events queued ahead of your McASP event, and the more data those events need to transfer, the more latency you'll observe.
- Interconnect latency: There are shared bridges within the Switched Central Resource, so contention for those bridges could further delay the data movement.
- Interrupt management: If you are disabling interrupts anywhere in your system, that can further extend the time needed to service the McASP interrupt.
Of these items, the first is the easiest to control. You can program the DMAQNUMn registers such that your McASP transfer maps to Queue 0 while all other transfers map to Queue 1, assuming this particular event is the most time-sensitive thing in your system. In this manner, Queue 0 is always ready to immediately handle your DMA request when it arrives, and you avoid any associated queue latency. I've been involved in many issues over the years where customers were trying to solve real-time problems associated with DMA/McASP, and this one change solved pretty much all of them. In other words, you should pay very close attention to the DMAQNUMn configuration, as it is THE critical knob with respect to controlling DMA/McASP latency.
Great pointer, thanks. I've done that queue mapping before, so I'll go back through the memory banks and make sure we have it done here (this is the most time-sensitive thing in our system).
Brad Griffis said:Getting back to your original question, I expect you will need to do some benchmarking in order to ascertain the performance of these scenarios. As indicated, this is a very multi-faceted problem, so a simple calculation isn't really possible. My suggestion is to benchmark a "best case" (i.e. no competing traffic) and to synthetically create a "worst case" (e.g. where you deliberately perform tons of accesses to the same bus). That will give you a range of what to expect.
Ok, yeah, it's a pretty complicated system, as most processors are, particularly where DMA is involved. We'll cook up some benchmarking scenarios, as advised, to try to get average versus maximum latency numbers (the maximum being used for the FPGA's interrupt-timing calculations).
Thanks,
Robert