Part Number: *****
A DMA transfer takes a finite time; I haven't found a specification for how long it takes in the TRM/datasheet.
The entire second group (1-0-0) cycle seems to be stretched out. Are you using the DMA for anything else? What sort of bus activity is the CPU generating? (You seem to be running very close to the limit, so these things could matter.)
I am not using the DMA for anything else. All other DMA channels are inactive (unconfigured) in the syscfg.
If you mean external bus activity: there is only one external bus connected, the CAN bus; it is expected to receive around 15 frames every 50 ms.
For internal bus activity on the PD0 and PD1 peripheral buses, see below.
For the pixel clock generation I use TIMA1.
Other configured peripherals are:
See image with the configuration overview:
I did a simple experiment for a different thread (here) and estimated a DMA transfer at 10 clocks. (I'm pretty sure the DMA runs from MCLK, but I can't seem to find the citation now.)
If this is close to accurate, that would imply a maximum of 8M DMA transfers/sec with MCLK=SYSPLL=80MHz, so you're operating very close to the limit. So any disturbance on the buses could push things over.
1) The CPU (competing for SRAM) is perhaps the easiest thing to control. If you can, arrange for the CPU to sit in WFI/WFE while this is going on.
2) Is it the case that it's always the second triplet that is stretched out, and all the other bits are "normal"? When you lower the PCLK to 5MHz, do you still see the stretching in the second triplet?
3) In my experiment I also noticed that the timings "wiggled" for the first two transfers. (I haven't looked further, but at the time I blamed it on the ADC.) It looks as though your PCLK runs all the time, i.e. the timer always runs; how do you "put it in gear" to start the DMA? (Event trigger enable? DMA channel enable?)
Hello Bruce,
thank you for checking it out!
I am also assuming that we are very close to the limit.
See answers to your points:
1. --> WFI/WFE would mean that during the DMA transfer we cannot do any computation. Since we do not have a frame buffer, we use that time to compute the next row for the LCD.
2. --> It is not always the second triplet. It depends on how I configure the PWM for the pixel clock and on which event I trigger the DMA transfer. I saw that it also happens in the middle of the "1 0 0" data.
With 5 MHz it seems to be working well. The pulse (from the 1 0 0 triplet) has a constant high level; it is not stretched. Also, from my analysis there is no extra pixel clock.
3. Yes, the PCLK runs all the time, to respect also the porch timings. The DMA is started by:
DL_DMA_setTransferSize(DMA, DMA_LCD_CHAN_ID, transferLength); /* length of the next row */
DL_DMA_enableChannel(DMA, DMA_LCD_CHAN_ID);                   /* arm the channel */
Best regards,
Mihai
[I haven't been ignoring you, but I also don't have any good answers.]
Two notions ["suggestions" would be too strong a term]:
1) If we suppose that (a) what we're seeing is some kind of startup artifact with the DMA and (b) the DE signal is part of the data (the bit-vector being written to DOUT), there may be some value in prepending "dummy" bits with DE=0 to the actual data, to give whatever-this-is time to settle.
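A minimal sketch of (1), assuming 16-bit DMA words where one bit in each word carries DE; `ROW_PIXELS`, `DUMMY_WORDS`, `lcd_buf`, and `build_row` are hypothetical names, and the right number of dummy words would have to be tuned on the scope:

```c
#include <stdint.h>
#include <string.h>

#define ROW_PIXELS  320 /* hypothetical row length */
#define DUMMY_WORDS 8   /* DE=0 settle words before the real data */

/* DMA source buffer: dummy words (DE=0) followed by the real row data. */
static uint16_t lcd_buf[DUMMY_WORDS + ROW_PIXELS];

/* Fill the buffer so DE stays low for the first DUMMY_WORDS transfers;
 * a startup artifact in those transfers then lands on don't-care pixels. */
static void build_row(const uint16_t *row)
{
    memset(lcd_buf, 0, DUMMY_WORDS * sizeof(lcd_buf[0])); /* DE = 0 */
    memcpy(&lcd_buf[DUMMY_WORDS], row, ROW_PIXELS * sizeof(row[0]));
}
```

The DMA transfer size then becomes `DUMMY_WORDS + ROW_PIXELS` instead of `ROW_PIXELS`.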
2) Starting the DMA by Enabling the channel (with PCLK active), vs clearing/enabling the Event trigger (ICLR/IMASK), would mean that there is almost certainly a request already in the Event channel (MIS), with the result that the first DMA transfer would trigger immediately -- not synchronized with PCLK -- in which case the first bit would be short. But I don't see that in the scope trace, so I suppose I'm missing something.
I set up a test case which I think (at least superficially) resembles what you're doing. I haven't been able to generate the exact waveform you posted, but I have been able to generate similar anomalies running very close to PCLK=MCLK/10.
One experiment I did was to (similar to the other thread) read the trigger timer CTR rather than write DOUT. Something I noticed was that the first DMA transfer of a "batch" (EN=1) took 2 clocks longer than the rest. (I'm pretty sure those are MCLKs, since the number is 2 even if I slow down ULPCLK to MCLK/2.) This shows up early in the cycle, some time before the source is read, though I can't see further. Using Repeat-Single DMA mode, this extra delay only appears on the first transfer, not on each repeat (reload).
Since you're running so close to the limit, those 2 extra clocks are capable of pushing the trigger sequence into overflow (TOV), cascading to make one or more transfers happen late. If operating right on the cusp, the effects appear to even out over time.
It would be tempting to say that using Repeat-Single DMA mode could be a workaround, but given the start-stop nature of your data stream, you'd have to stop the trigger (timer) to pause the data output, which with such a short PCLK period would be a significant race. (Also you said you need to keep PCLK running all the time.)
Given the constraints, it may be that (1) above is your best bet. While I suggested (1) as a "fix the symptom" mechanism, this extra evidence may provide a plausible reason for using it.
Hello Bruce,
I am answering on behalf of Mihai; he cannot post here anymore (he gets an error).
Thank you for the clarification!
We already tried Repeat-Single DMA mode but, as you said, we cannot handle the other timings.
We have now also implemented point (1). The DE is part of the data, and we send a few DE = 0 dummies. But there is a significant risk that the PCLK "loss" does not happen at the beginning of the transfer, especially since we also saw it happen in the middle of the transfer.
For us it is clear that we are at the limit of the MCU and cannot find a robust solution for the control. What matters is that we now understand what is happening and why this issue occurs.
Therefore, we are following two paths at this moment:
- clarifying with the LCD manufacturer to find ways to lower the PCLK, and control the display as stated here.
- a hardware design change, in order to control the display with more GPIOs from the same DOUT port, in parallel mode. This would then let us lower the PCLK to 3-5 MHz. First tests show that in this way the DMA transfer works without any problems.
I think that at this point, we can close this issue here.
Best regards