Hello,
I created a protocol to exchange messages with a slave. The problem is that the slave sends variable-size packets (from 3 bytes to 1 kB), which doesn't leave me much choice for using DMA with it. The only option I thought I had was to receive the protocol's header up to the packet's size field, and then reconfigure the DMA for a transfer of that size.
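In rough code, that first approach looks something like this. The helper names are placeholders, not the actual MIBSPI/DMA driver calls, and I'm assuming a big-endian 2-byte size field:

```c
#include <stdint.h>

/* Hypothetical helpers -- placeholders, not the TI driver API */
extern uint8_t  spi_rx_byte(void);                          /* blocking 1-byte read */
extern uint8_t *buffer_for_type(uint8_t type);              /* pick buffer by type  */
extern void     dma_configure_rx(uint8_t *dst, uint16_t n); /* program + start DMA  */
extern void     dma_wait_done(void);                        /* block until complete */

void receive_frame(void)
{
    uint8_t  type = spi_rx_byte();        /* header received byte by byte */
    uint8_t *dst  = buffer_for_type(type);

    /* assuming the 2-byte size field is big-endian; adjust if not */
    uint16_t size = (uint16_t)spi_rx_byte() << 8;
    size |= spi_rx_byte();

    dma_configure_rx(dst, size + 2u);     /* reconfigure DMA: data + 2-byte CRC */
    dma_wait_done();
}
```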
I've been thinking about it some more (I'm not at work right now, just turning it over), especially about chaining DMA requests. Here's my line of thought:
- Every packet has a type: the first byte indicates the kind of message being received, so I can select the correct buffer;
- I could configure the DMA for a 1-byte transfer and associate the correct buffer with the request line;
- If it's an error frame, change the DMA transfer size to receive 2 more bytes (the CRC);
- After that's done, set the DMA size back to 1 so it can receive the next message;
(all of this would have to be done in software)
- If it's not an error, I would manually change the destination buffer and the size to receive 2 more bytes (the size field);
- Once the size is received, it would trigger another DMA request, which would write that size into the first request line's transfer count so it can start receiving the data + CRC;
- After this happens, I would return everything to its original state.
It would look something like this (a rough code sketch follows the sequence):
MIBSPI -> DMA1: receive 1 byte (message type)
DMA1: change size to 2 bytes and point it at the correct buffer (size field)
DMA1 completion triggers DMA2, which rewrites DMA1's block size
MIBSPI -> DMA1: transfer the data + CRC
After the transfer finishes, restore everything to the initial state
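And here's roughly how I picture the chained part in code. This is only a sketch under assumptions: the control-packet layout, the channel names, and the idea that one channel may write another channel's transfer count are placeholders for how I imagine the Hercules control packet RAM works, not the real register map or driver API:

```c
#include <stdint.h>

/* Placeholder view of one DMA channel's control packet in RAM;
 * the real Hercules layout differs, this only illustrates the idea. */
typedef struct {
    volatile uint32_t src_addr; /* source address                   */
    volatile uint32_t dst_addr; /* destination address              */
    volatile uint32_t count;    /* element count for the next block */
} dma_ctrl_t;

extern dma_ctrl_t dma1;              /* MIBSPI RX -> memory               */
extern dma_ctrl_t dma2;              /* patches DMA1's control packet     */
extern volatile uint16_t size_field; /* where DMA1 lands the 2-byte size  */

static uint8_t *pending_dst;         /* buffer chosen from the type byte  */

void on_type_received(uint8_t *dst)
{
    /* step 1 done: DMA1 moved the 1-byte type, firmware picked dst */
    pending_dst = dst;

    /* step 2: re-arm DMA1 to land the 2-byte size field */
    dma1.dst_addr = (uint32_t)&size_field;
    dma1.count    = 2u;

    /* step 3: arm DMA2, chained off DMA1's completion, to copy
     * size_field into dma1.count so DMA1's next block moves the data */
    dma2.src_addr = (uint32_t)&size_field;
    dma2.dst_addr = (uint32_t)&dma1.count;
    dma2.count    = 1u;              /* one 16-bit element */

    /* caveats: dma1.dst_addr still has to be pointed at pending_dst
     * before the data block, and the raw size doesn't include the
     * 2 CRC bytes, so the protocol or the CPU must account for them */
}
```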
In my mind I can kind of see it working (maybe I'm just tired). What do you think? Maybe I'm overthinking it and the way I first implemented it is better? It's open for discussion; I'd really like to hear your opinions (members and TI employees alike).