AM3352: SPI RX overflow

Part Number: AM3352


I have an application that needs to receive bursts of SPI data (>750 transfers of 16 bits each) at 20 Mbps, with the AM335x McSPI peripheral in slave mode. The problem is that at any data rate above approximately 1 Mbps, the RX overflow flag is set and I lose data from that point on. I'm using the peripheral in receive-only mode, with the receive FIFO enabled and a "full" trigger level of 16 bytes.

I have been using the McSPI drivers from the TI RTOS SDK for the AM335x, in both interrupt and polling mode. So far I have not tested this setup with DMA transfers, though that is likely an option for better throughput. Still, it seems likely that I'm doing something wrong to see failures at such low data rates. Are there any requirements or options for improving McSPI or FIFO data-movement performance?

Here's some example code I'm using for receiving the data:

bool receiveSpiData(void)
{
    bool ret_val;
    MCSPI_Handle hwHandle = NULL;
    SPI_Transaction transaction;           /* transfer descriptor */
    uint32_t terminateXfer = 1;

    appPrint("\n      Receiving SPI Data...");

    MCSPI_Params spiParams;                /* SPI params structure */
    MCSPI_Params_init(&spiParams);
    spiParams.frameFormat = SPI_POL0_PHA1;
    spiParams.mode = SPI_SLAVE;
    spiParams.dataSize = 16;

    hwHandle = MCSPI_open(BOARD_HVB_SPI_INSTANCE, 0, &spiParams);
    if (hwHandle == NULL) {
        appPrint("\n      MCSPI_open Failed.");
        return false;
    }

    /* Receive-only: no TX buffer; rxBuf receives 'count' 16-bit frames */
    transaction.txBuf = NULL;
    transaction.rxBuf = dataVec;
    transaction.count = (uint32_t)(NUM_FRAMES * (2 + NUM_INPUTS));
    transaction.arg   = (void *)&terminateXfer;

    ret_val = MCSPI_transfer_v1(hwHandle, &transaction);
    MCSPI_close(hwHandle);

    if (!ret_val) {
        appPrint("\n      SPI Transfer Failed.");
    }
    else {
        appPrint("\n      Done receiving SPI data.");
    }

    return ret_val;
}

I'm using CCS version 9.2.0.00013 with AM335x PDK version 1.0.16.

  • Hi Joe,

    What is the A8 clock frequency? Do you have any other software executing in the system? Are you compiling with optimizations enabled? Have you profiled the code (or instrumented w/ GPIOs) to see if the CPU is unable to keep up with the SPI FIFO read at 1 Mbps?

    Regards,
    Frank

  • Hi Frank,

    Thanks for the reply. I believe the MPU clock is set to 300 MHz by the GEL configuration script. There is no other software to speak of executing on the system. I toggle some LEDs through an I2C expander to indicate transmission status before starting the SPI peripheral and again after the SPI transmission completes (if it does complete). All of this is done on bare metal, so there shouldn't be any unexpected overhead.

    I have not compiled with optimization, and frankly I'm wholly unfamiliar with compiler options and configuring them in CCS. Please let me know if you can point me to some resources to help with that.

    I am currently working on some tests with using GPIOs to time the interrupt routine within the MCSPI driver. I'll update if I find anything of note.

    Thanks,

    Joe

  • Hi Joe,

    >> not compiled with optimization

    The compiler provides optimization levels which can be selected using the "-O<n>" switch.
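    For what it's worth, with the TI ARM compiler toolchain the switch looks like the following on the command line (in CCS the same setting lives under Project Properties → Build → ARM Compiler → Optimization):

    ```shell
    # TI ARM compiler: --opt_level=n selects the optimization level,
    # with -On as the shorthand Frank mentioned
    armcl --opt_level=2 main.c    # equivalent to: armcl -O2 main.c
    ```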

    >> I am currently working on some tests with using GPIOs to time the interrupt routine within the MCSPI driver

    I think this is a good approach. It might also be worthwhile to profile the McSPI driver API functions used during the Rx transfers. This would conclusively show whether these functions are using too many cycles to keep up with the 1 Mbps SPI Rx.
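    To make the GPIO instrumentation concrete, here is a rough sketch of the kind of probe I have in mind. The pin choice is hypothetical; on the AM335x each GPIO bank has SETDATAOUT and CLEARDATAOUT registers so a pin can be set or cleared with a single store, and those two registers are modeled here as plain variables so the sketch compiles and runs standalone:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for the memory-mapped GPIO bank registers
     * (on target these would be pointers to base + 0x194 / + 0x190). */
    static volatile uint32_t gpio_setdataout;    /* SETDATAOUT stand-in */
    static volatile uint32_t gpio_cleardataout;  /* CLEARDATAOUT stand-in */

    #define PROBE_PIN (1u << 16)  /* hypothetical instrumentation pin */

    static inline void probe_high(void) { gpio_setdataout   = PROBE_PIN; }
    static inline void probe_low(void)  { gpio_cleardataout = PROBE_PIN; }

    /* Hypothetical ISR body: raise the probe on entry, drop it on exit,
     * then measure the high time with a scope or logic analyzer. */
    void spi_rx_isr_body(void)
    {
        probe_high();
        /* ... drain the McSPI RX FIFO here ... */
        probe_low();
    }

    int main(void)
    {
        spi_rx_isr_body();
        printf("probe toggled: set=0x%08x clear=0x%08x\n",
               (unsigned)gpio_setdataout, (unsigned)gpio_cleardataout);
        return 0;
    }
    ```

    Comparing that measured high time against the per-word arrival interval should show directly whether the ISR can keep up.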

  • Hi Joe,

    I haven't heard back from you in a while, so I'll close this thread. Please let me know if you have any further questions or concerns.

    Regards,
    Frank