I have a few questions about the DMA working set updates.
- How often are they updated? The documentation says 'upon arbitration', but that seems a bit hand-wavy.
- Is there a way to force arbitration so that the working set is always up to date when I check it? Currently I trigger a SW request on the highest-priority DMA channel to copy 1 byte. Would this force arbitration? What happens if the channel isn't 'active' because it has finished the HW request of copying 1 byte and is waiting for another HW request for the next byte?
- Is it possible for the SCI to drop bytes if the DMA channel was unable to service the request in time? I currently have this RX setup enabled for 2 separate SCI ports.
- I'll post my method of attempting to get DMA working by polling below. Does this make sense, or is it a limitation of DMA that I wouldn't be able to use it in this context?
End goal: use DMA to copy SCI received bytes to a circular buffer and process them without interrupts by polling the DMA registers.
Because DMA doesn't work well in situations where you don't know the full packet size, I have the following setup:
- A 512-byte circular buffer that the DMA copies bytes into from SCI->RD, using DMA HW requests with AUTO_INIT enabled so I don't have to restart the channel.
- 1-byte elements, with 1 element per frame. The SCI is in single-byte mode (as opposed to multibuffer mode).
- PortA/PortB bypass set so that each byte is immediately copied out, for faster channel arbitration.
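For reference, the setup above could be sketched roughly as follows, assuming a HALCoGen-style DMA API (`g_dmaCTRL`, `dmaSetCtrlPacket`, `dmaSetChEnable`); the buffer name, channel number, and SCI register instance are placeholders, not my actual code:

```c
#include "sys_dma.h"   /* HALCoGen DMA driver header (assumed) */

#define RX_BUF_SIZE 512U
static uint8 rxBuffer[RX_BUF_SIZE];

void sciDmaRxSetup(void)
{
    g_dmaCTRL ctrl = {0};

    ctrl.SADD      = (uint32)(&(sciREG->RD)); /* source: SCI receive data register */
    ctrl.DADD      = (uint32)rxBuffer;        /* destination: circular buffer      */
    ctrl.FRCNT     = RX_BUF_SIZE;             /* 512 frames...                     */
    ctrl.ELCNT     = 1U;                      /* ...of 1 element each              */
    ctrl.RDSIZE    = ACCESS_8_BIT;            /* 1-byte elements                   */
    ctrl.WRSIZE    = ACCESS_8_BIT;
    ctrl.TTYPE     = FRAME_TRANSFER;          /* one frame per HW request          */
    ctrl.ADDMODERD = ADDR_FIXED;              /* SCI RD register does not move     */
    ctrl.ADDMODEWR = ADDR_INC1;               /* walk forward through the buffer   */
    ctrl.AUTOINIT  = AUTOINIT_ON;             /* wrap without a software restart   */

    dmaSetCtrlPacket(DMA_CH0, ctrl);          /* DMA_CH0 is a placeholder channel  */
    dmaSetChEnable(DMA_CH0, DMA_HW);          /* hardware-triggered requests       */
    dmaEnable();
}
```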
The polling structure:
Every polling interval I check the following:
FTCFLAG for the channel to see if the DMA channel has serviced a frame transfer request.
If yes,
I clear the FTCFLAG.
I initiate a high-priority 1-byte DMA copy that should kick any active channel off and force arbitration to update the working set.
I disable all interrupts (except errors)
I read the working set and increment the count of bytes currently in the buffer by the change in frames remaining in the CTCOUNT register for that channel.
I handle buffer wraparound by checking whether the current remaining frames > previous remaining frames (meaning the DMA finished the block and auto-init restarted it; I suppose I could check the BTCFLAG register for this as well).
I re-enable all interrupts
If no, I exit until next polling interval.
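The byte-counting and wraparound logic above can be reduced to a small helper; this is a minimal sketch (the function name is mine, and it assumes at most one auto-init wrap occurs between polls):

```c
#include <stdint.h>

#define RX_BUF_SIZE 512U

/* Given the frames remaining read from CTCOUNT on the previous and
 * current polls, return how many new bytes the DMA wrote in between.
 * CTCOUNT counts down from RX_BUF_SIZE; auto-init reloads it when
 * the block completes. Assumes at most one wrap per polling interval. */
uint32_t newBytes(uint32_t prevRemaining, uint32_t currRemaining)
{
    if (currRemaining <= prevRemaining) {
        return prevRemaining - currRemaining;        /* no wrap occurred */
    }
    /* currRemaining > prevRemaining: the block completed (consuming the
     * remaining prevRemaining frames) and auto-init reloaded the count,
     * after which (RX_BUF_SIZE - currRemaining) more frames completed. */
    return prevRemaining + (RX_BUF_SIZE - currRemaining);
}
```

The overrun check then becomes: accumulate `newBytes(...)` into the pending count each poll, and flag an error if it ever exceeds `RX_BUF_SIZE` before extraction drains it.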
I also have a check to see whether the number of bytes in the buffer exceeds the size of the buffer (meaning the DMA has wrapped and overwritten data before I could extract it).
The current baud rate is 115200 and the polling rate is 333 Hz (3 ms between polls). I am definitely not saturating the link at this time, averaging 250-700 bytes per second.
Since my packets are of indeterminate size but have specific start/end characters, I have a separate process that scans the 'bytesReceived' buffer the DMA copies into, extracts whole messages, and processes them. The issue I am seeing is that there are times when the FTCFLAG has been set but the working set is unchanged. Is this due to clearing the FTCFLAG too soon? If I clear it too late, I could miss a byte for a polling interval even though it has been received (say a DMA request gets serviced after the high-priority channel runs but before I clear the flag).
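For context, the extraction process works roughly like this hypothetical sketch (`MSG_START`/`MSG_END` stand in for my actual framing characters, and the function name and signature are illustrative, not my real code):

```c
#include <stdint.h>
#include <stddef.h>

#define RX_BUF_SIZE 512U
#define MSG_START   0x02U   /* placeholder start-of-message character */
#define MSG_END     0x03U   /* placeholder end-of-message character   */

/* Scan the circular buffer from *tail up to head for one complete
 * message. On success, copy the payload (bytes between the start and
 * end characters) into 'out', advance *tail past the end character,
 * and return the payload length. Return 0 if no complete message is
 * available yet (tail is left unchanged so the next poll rescans). */
size_t extractMessage(const uint8_t *buf, size_t head, size_t *tail,
                      uint8_t *out, size_t outMax)
{
    size_t i = *tail;
    size_t len = 0U;
    int inMsg = 0;

    while (i != head) {
        uint8_t b = buf[i];
        i = (i + 1U) % RX_BUF_SIZE;
        if (!inMsg) {
            if (b == MSG_START) {   /* discard noise before the start byte */
                inMsg = 1;
                len = 0U;
            }
            continue;
        }
        if (b == MSG_END) {         /* complete frame found */
            *tail = i;              /* consume through the end byte */
            return len;
        }
        if (len < outMax) {
            out[len++] = b;         /* payload byte */
        }
    }
    return 0U;                      /* message still incomplete */
}
```

A real version would also run the CRC over the payload before accepting the message, which is where a single dropped byte would show up as a CRC failure.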
I also appear to be dropping packets, even though in theory this setup shouldn't drop any bytes. Every message sent requires a response. Sometimes my responses are quite late (e.g. more than 300 ms, so they trigger a 'timeout' error), but a bunch of packets simply never get responses, indicating one of two things: 1) the message didn't get processed, or 2) the response didn't get queued/sent. I am currently investigating. If the message wasn't processed, that could be due to a missing byte causing the CRC check to fail. The TX responses get queued into a similar 512-byte buffer that uses a separate DMA channel to transmit, so if too many messages queue up at once I could potentially be throwing responses away.
I suppose there is the possibility that the DMA is actually writing to, say, byte 525 in between polls, but that my extraction process reduces the count back under the maximum before the next polling interval. I doubt this is the case, but it's not impossible. I had a breakpoint set to trigger if bufferCount > BUFFER_MAX_SIZE during the polling step where I increment bufferCount by the change in frames in the CTCOUNT register, and it was never hit.