I am using GSPI in slave mode, with FIFO, serviced by DMA in ping-pong mode. I am operating the RX side only. Data buffers are stored OK, but the ISR is called way too often. I expect the ISR to be called once upon the completion of each of the DMA jobs (primary and alternate), thus signifying that a new buffer has been transferred. What I observe is that the interrupt is triggered on every FIFO request to the DMA. That is, when my DMA transfer length is 1024, and UDMA_ARB_16, I get 64 interrupts before the transfer is actually complete. Experimenting with different ARB size and corresponding AFL produces similar results: ISR is called every time the FIFO requests service. The interrupt status reported on all of these "false" interrupts is SPI_INT_DMARX.
Granted, I detect this condition in the ISR, like so:
```c
//
// Read the interrupt status of the GSPI.
//
ulStatus = MAP_SPIIntStatus(GSPI_BASE, true);   // ask for masked interrupts

if(ulStatus & (SPI_INT_RX_OVRFLOW))
{
    overFlowCounter++;
}

//
// Clear any pending status
//
MAP_SPIIntClear(GSPI_BASE, ulStatus);

//
// Check the DMA control table to see if the ping-pong "A" transfer is
// complete. The "A" transfer uses receive buffer "A", and the primary
// control structure.
//
ulModeP = MAP_uDMAChannelModeGet(MYDMACHANNEL | UDMA_PRI_SELECT);

//
// If the primary control structure indicates stop, that means the "A"
// receive buffer is done. The uDMA controller should still be receiving
// data into the "B" buffer.
//
if(ulModeP == UDMA_MODE_STOP)
{
    //
    // Increment a counter to indicate data was received into buffer A.
    //
    g_ulRxBufACount++;

    //
    // Set up the next transfer for the "A" buffer, using the primary
    // control structure. When the ongoing receive into the "B" buffer is
    // done, the uDMA controller will switch back to this one.
    //
    MAP_uDMAChannelTransferSet(MYDMACHANNEL | UDMA_PRI_SELECT,
                               UDMA_MODE_PINGPONG,
                               (void *)(GSPI_BASE + MCSPI_O_RX0),
                               g_usRxBufA,
                               sizeof(g_usRxBufA)/sizeof(uint16_t));

    if (spiAB == 1)
        syncErrorA++;
    spiAB = 1;        // buffer A is ready
    postEvent = 1;
}

//
// Check the DMA control table to see if the ping-pong "B" transfer is
// complete. The "B" transfer uses receive buffer "B", and the alternate
// control structure.
//
ulModeA = MAP_uDMAChannelModeGet(MYDMACHANNEL | UDMA_ALT_SELECT);

//
// If the alternate control structure indicates stop, that means the "B"
// receive buffer is done. The uDMA controller should still be receiving
// data into the "A" buffer.
//
if(ulModeA == UDMA_MODE_STOP)
{
    //
    // Increment a counter to indicate data was received into buffer B.
    //
    g_ulRxBufBCount++;

    //
    // Set up the next transfer for the "B" buffer, using the alternate
    // control structure. When the ongoing receive into the "A" buffer is
    // done, the uDMA controller will switch back to this one.
    //
    MAP_uDMAChannelTransferSet(MYDMACHANNEL | UDMA_ALT_SELECT,
                               UDMA_MODE_PINGPONG,
                               (void *)(GSPI_BASE + MCSPI_O_RX0),
                               g_usRxBufB,
                               sizeof(g_usRxBufB)/sizeof(uint16_t));

    if (spiAB == 0)
        syncErrorB++;
    spiAB = 0;        // buffer B is ready
    postEvent = 1;
}

if ((ulModeA != UDMA_MODE_STOP) && (ulModeP != UDMA_MODE_STOP))
    slaveIntFalse++;  // this was a false interrupt
```
I set up SPI and DMA in the standard fashion:
```c
#define UDMA_ARB UDMA_ARB_16 // transfers per DMA burst; transfers are 16 bits, so must be half of AFL
#define AFL      32          // almost-full level for RX FIFO. Must read this many bytes each INT

//
// Reset SPI
//
MAP_SPIReset(GSPI_BASE);

//
// Configure SPI interface
//
MAP_SPIConfigSetExpClk(GSPI_BASE, MAP_PRCMPeripheralClockGet(PRCM_GSPI),
                       SPI_IF_BIT_RATE, SPI_MODE_SLAVE, SPI_SUB_MODE_0,
                       (SPI_HW_CTRL_CS | SPI_4PIN_MODE | SPI_TURBO_OFF |
                        SPI_CS_ACTIVELOW | SPI_WL_16));

//
// Enable RX FIFO
//
MAP_SPIFIFOEnable(GSPI_BASE, SPI_RX_FIFO);
MAP_SPIFIFOLevelSet(GSPI_BASE, 1, AFL);

UDMAInit();

//
// Activate uDMA for GSPI
//
MAP_uDMAChannelAssign(MYDMACHANNEL);

//
// Make this channel high priority
//
MAP_uDMAChannelAttributeEnable(MYDMACHANNEL, UDMA_ATTR_HIGH_PRIORITY);

UDMASetupTransfer(MYDMACHANNEL | UDMA_PRI_SELECT, UDMA_MODE_PINGPONG,
                  sizeof(g_usRxBufA)/sizeof(uint16_t), UDMA_SIZE_16, UDMA_ARB,
                  (void *)(GSPI_BASE + MCSPI_O_RX0), UDMA_SRC_INC_NONE,
                  g_usRxBufA, UDMA_DST_INC_16);

UDMASetupTransfer(MYDMACHANNEL | UDMA_ALT_SELECT, UDMA_MODE_PINGPONG,
                  sizeof(g_usRxBufB)/sizeof(uint16_t), UDMA_SIZE_16, UDMA_ARB,
                  (void *)(GSPI_BASE + MCSPI_O_RX0), UDMA_SRC_INC_NONE,
                  g_usRxBufB, UDMA_DST_INC_16);

MAP_SPIDmaEnable(GSPI_BASE, SPI_RX_DMA);

//
// Enable the GSPI peripheral interrupts. uDMA controller will cause an
// interrupt on the GSPI interrupt signal when a uDMA transfer is complete.
//
if (0 > osi_InterruptRegister(INT_GSPI, SlaveIntHandler, 0x80))
    Message("SPI Slave setup failed\n\r");
else
{
    //
    // Enable Interrupts
    //
    MAP_SPIIntEnable(GSPI_BASE, SPI_INT_DMARX | SPI_INT_RX_OVRFLOW);

    //
    // Enable SPI for communication
    //
    MAP_SPIEnable(GSPI_BASE);
}
```
While I have a work-around, you can see how interrupts arriving on every UDMA_ARB_16-item burst seem to defeat the purpose of using the DMA in the first place.
Is there a way to configure the SPI/DMA to interrupt only when DMA jobs complete?