Hi,
I have written my own SPI slave driver using DMA, as I was having some issues and wanted a simpler piece of code to debug. Instead of Ping-Pong mode I am using Basic mode, as I never need to transfer more than 1024 bytes in one go, so I have removed all the frame and queue code. I have some questions about the driver:
1) The driver flushes the Tx FIFO in what seems to be a mysterious way. I'm guessing this is an internal TI trick, as I can't find it documented anywhere.
/* SPI test control register */
#define SSI_O_TCR                 (0x00000080)
#define SSI_TCR_TESTFIFO_ENABLE   (0x2)
#define SSI_TCR_TESTFIFO_DISABLE  (0x0)
/* SPI test data register */
#define SSI_O_TDR                 (0x0000008C)
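For what it's worth, the flush appears to work by putting the SSI into an undocumented FIFO test mode, in which the Tx FIFO contents can be read back out through the test data register. A hedged sketch of that sequence (the register offsets are from the driver source; the RNE-based empty check is my assumption about how the driver decides it is done):

```c
#include <stdint.h>

#define SSI_O_SR                  (0x0000000C) /* status register */
#define SSI_SR_RNE                (0x00000004) /* FIFO not empty flag */
#define SSI_O_TCR                 (0x00000080) /* test control register */
#define SSI_TCR_TESTFIFO_ENABLE   (0x2)
#define SSI_TCR_TESTFIFO_DISABLE  (0x0)
#define SSI_O_TDR                 (0x0000008C) /* test data register */

#define HWREG(addr) (*(volatile uint32_t *)(uintptr_t)(addr))

/* Drain the Tx FIFO via the undocumented test registers: enable FIFO test
 * mode, read the FIFO contents out through TDR until it reports empty,
 * then switch test mode off again. */
static void ssiFlushTxFifo(uintptr_t base)
{
    HWREG(base + SSI_O_TCR) = SSI_TCR_TESTFIFO_ENABLE;
    while (HWREG(base + SSI_O_SR) & SSI_SR_RNE) {
        (void)HWREG(base + SSI_O_TDR); /* discard each stale Tx byte */
    }
    HWREG(base + SSI_O_TCR) = SSI_TCR_TESTFIFO_DISABLE;
}
```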
2) The driver registers a power notification for when the device wakes from standby. The notify function then flushes and disables the SSI, and then configures it again, all in initHw(). Why is this needed, and can I exclude this code? If I don't release the power constraint, thus preventing a transition into standby, I would think this never gets called anyway.
Power_registerNotify(&object->spiPostObj,
PowerCC26XX_AWAKE_STANDBY,
(Power_NotifyFxn) spiPostNotify,
(uint32_t) handle);
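As background, the reason this hook exists: in standby the SSI's power domain is powered off and its registers lose their contents, so the driver reprograms the peripheral on every wake-up. A minimal host-testable sketch of the shape of that callback (the names mirror the TI driver, but the body is illustrative, and initHw here is a stand-in counter rather than real hardware setup):

```c
#include <stdint.h>

#define Power_NOTIFYDONE (0) /* assumed value of the TI return code */

/* Stand-in for the driver's hardware (re)initialisation routine; in the
 * real driver this flushes, disables and reconfigures the SSI. */
static int initHwCalls = 0;
static void initHw(void *handle)
{
    (void)handle;
    initHwCalls++;
}

/* Post-standby notification: the SSI register contents were lost while
 * in standby, so simply run the normal hardware init again. */
static int spiPostNotify(unsigned int eventType, uintptr_t eventArg,
                         uintptr_t clientArg)
{
    (void)eventType;
    (void)eventArg;
    initHw((void *)clientArg);
    return Power_NOTIFYDONE;
}
```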
3) I first produced a very stripped-back non-DMA driver, loading the Tx and Rx FIFOs in ISRs. This worked okay, but would pause when the MAC became busy, as the lower-priority SPI ISRs got delayed. However, I noticed that I could take the chip select low from the master side, receive data, load Tx data and then transmit that data, all without toggling the chip select. To overcome the data pause I implemented DMA. This works, but I have to prime the DMA transfer in the slave by making the SPI master toggle the chip select in order for the master to receive the correct byte (I'm not using full-duplex comms, so there is only ever valid data on the Tx line or the Rx line, never both). This is in line with the datasheet. So my question is really: why was this not needed in the non-DMA version?
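For anyone comparing the two approaches, priming the slave's Tx side in Basic mode boils down to rewriting the uDMA control-table entry and re-enabling the channel before each transfer, because Basic mode stops for good once the transfer count reaches zero. A hedged sketch that builds the control word by hand (the field values match TI driverlib's udma.h; the function and buffer names are mine):

```c
#include <stdint.h>
#include <stddef.h>

/* uDMA channel control structure as laid out in the RAM control table */
typedef struct {
    volatile void *srcEndAddr;
    volatile void *dstEndAddr;
    volatile uint32_t control;
    uint32_t spare;
} dmaControlEntry;

/* Control-word fields (values as defined in TI driverlib udma.h) */
#define UDMA_MODE_BASIC   (0x00000001)
#define UDMA_ARB_4        (0x00008000)
#define UDMA_SRC_INC_8    (0x0C000000)
#define UDMA_DST_INC_NONE (0xC0000000)
/* 8-bit src/dst item-size fields are zero, so no explicit bits needed */

/* Prime a Basic-mode Tx transfer: the source walks through txBuf and the
 * destination is the fixed SSI data register. This (plus re-enabling the
 * channel) must be redone before every transfer in Basic mode. */
static void primeTxDma(dmaControlEntry *e, const uint8_t *txBuf, size_t len,
                       volatile uint32_t *ssiDr)
{
    e->srcEndAddr = (volatile void *)(txBuf + len - 1); /* inclusive end */
    e->dstEndAddr = (volatile void *)ssiDr;             /* fixed address */
    e->control = UDMA_DST_INC_NONE | UDMA_SRC_INC_8 | UDMA_ARB_4 |
                 ((uint32_t)(len - 1) << 4) |           /* XFERSIZE field */
                 UDMA_MODE_BASIC;
}
```

On the real hardware this would be followed by enabling the Tx channel again; that per-transfer re-arm is exactly the priming step the ISR version never needed, since there the CPU simply refilled the FIFO whenever it drained.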
Many thanks.