MSPM0G1505: Rate of updating GPIOs using DMA

Part Number: MSPM0G1505

I have a setup which requires me to use DMA writes to DOUT31_0 to toggle GPIOs with a specific pattern, which I have as a working prototype.  However, I'd like to check my understanding of what could interfere with the timings.  From experimentation, I believe that it takes 5 clocks of MCLK (which I'm currently running at 80MHz) to read the value from flash and write it to the GPIO port, so I can change the state of the GPIOs every 62.5ns.  I can't see where this is stated in the datasheet/TRM - have I got that right?  (I'm not breaching the switching rates of any single GPIO listed in 7.10.2 of the datasheet).

If I have selected the highest priority DMA channel and am keeping the device in RUN mode, is there anything that could cause the timing to change such as by requiring extra clocks?  It looks like I will need to ensure the CPU isn't trying to access the same memory as the DMA, and also ensure it's not accessing peripherals on the same bus as the GPIO (PD1)?

  • 1. Can you tell me why you think it needs 5 clocks of MCLK? The flash can do zero-wait-state fetches below 24MHz.

    2. Sorry, I can't think of anything else.

  • I have set up an array of uint32_t values with the bits set to correspond to the GPIOs I wish to toggle.  I have configured a DMA channel to transfer from this array to DOUT31_0 when triggered.  The relevant snippet of my code is as follows (I've trimmed out some unnecessary additional functionality from my application):

    #include "ti_msp_dl_config.h"

    /* GPIO bit patterns written to DOUT31_0, one word per DMA transfer */
    const uint32_t pin_output[] = {
        DRIVE_DR3_PIN,
        DRIVE_DR2_PIN | DRIVE_DR1_PIN | DRIVE_DR3_PIN,
        DRIVE_DR2_PIN | DRIVE_DR3_PIN,
        DRIVE_DR2_PIN,
        DRIVE_DR2_PIN,
        DRIVE_DR2_PIN
    };
    
    int main(void)
    {
        SYSCFG_DL_init();
    
        DL_GPIO_enableDMAAccess(DRIVE_PORT, DRIVE_DR1_PIN | DRIVE_DR2_PIN | DRIVE_DR3_PIN);
    
        DL_DMA_setSrcAddr(DMA, DMA_GPIO_OUTPUT_CHAN_ID, (uint32_t)&pin_output);
        DL_DMA_setDestAddr(DMA, DMA_GPIO_OUTPUT_CHAN_ID, (uint32_t)&DRIVE_PORT->DOUT31_0);
        DL_DMA_setTransferSize(DMA, DMA_GPIO_OUTPUT_CHAN_ID, sizeof(pin_output) / sizeof(pin_output[0]));
        DL_DMA_enableChannel(DMA, DMA_GPIO_OUTPUT_CHAN_ID);
    
        DL_DMA_startTransfer(DMA, DMA_GPIO_OUTPUT_CHAN_ID);
    
        while (1) {
            __WFI();
        }
    }

    From adjusting the pin_output array and measuring with an oscilloscope, I can achieve updates every 62.5ns.  I'd be very interested to hear if I should be able to go faster than this.  However, consistency is also important to me, hence wanting to understand what could impact the DMA throughput.

  • I would suggest putting pin_output into RAM. It will be faster than reading it from flash.
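
    For example (a minimal sketch, not part of the original reply; it assumes the standard TI startup code that copies the .data section into SRAM, and it reuses the DRIVE_* and DMA_GPIO_OUTPUT_CHAN_ID names from the code above), you can either drop the const qualifier so the pattern lives in .data, or copy the flash table into a RAM buffer before pointing the DMA at it:

    #include <string.h>
    #include "ti_msp_dl_config.h"

    /* Without const, the initialised pattern is placed in .data and copied
     * into SRAM by the startup code, so the DMA reads it from SRAM. */
    uint32_t pin_output_ram[] = {
        DRIVE_DR3_PIN,
        DRIVE_DR2_PIN | DRIVE_DR1_PIN | DRIVE_DR3_PIN,
        DRIVE_DR2_PIN | DRIVE_DR3_PIN,
        DRIVE_DR2_PIN
    };

    /* Alternatively, keep the const table in flash and copy it into a RAM
     * buffer once at start-up, then point the DMA source at the copy.
     * (Hypothetical helper for illustration only.) */
    static uint32_t pattern_buf[16];

    static void load_pattern_to_ram(const uint32_t *src, size_t count)
    {
        memcpy(pattern_buf, src, count * sizeof(pattern_buf[0]));
        DL_DMA_setSrcAddr(DMA, DMA_GPIO_OUTPUT_CHAN_ID, (uint32_t)pattern_buf);
        DL_DMA_setTransferSize(DMA, DMA_GPIO_OUTPUT_CHAN_ID, count);
    }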

  • Thank you for that suggestion.  Does the datasheet/reference manual say what the effective speed of GPIO updates should be for a given clock speed?  I'd assumed it should be two clocks (one to read a value from memory and another to write to the GPIO register) but it seems to be more than this in practice, even using your RAM suggestion.

  • Hi Alan,

    I will check with our design team and get back to you soon.

    Eason

  • Here is the reply from our design team. Please check whether you have any additional questions.

    | Source address | Clock cycles without CPU access | Clock cycles with simultaneous CPU access | DMA trigger type |
    | --- | --- | --- | --- |
    | SRAM without ECC | 6 cycles for the first DMA transfer, 3 cycles for each subsequent transfer | 6 cycles for the first DMA transfer, 3 cycles for each subsequent transfer | Software trigger |
    | SRAM with ECC | 6 cycles for the first DMA transfer, 3 cycles for each subsequent transfer | 6 cycles for the first DMA transfer, 3 cycles for each subsequent transfer | Software trigger |
    | Flash (2 wait states) | 8 cycles for the first DMA transfer, 5 cycles for each subsequent transfer | 8 extra cycles for the first DMA transfer, 6 cycles for each subsequent transfer (1 extra cycle) | Software trigger |
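
    As a rough cross-check (my own arithmetic, using the 80MHz MCLK from the original post), the steady-state figures above translate to:

    • SRAM source: 3 cycles / 80MHz = 37.5ns per GPIO update
    • Flash source, no CPU contention: 5 cycles / 80MHz = 62.5ns per GPIO update (matching the measurement reported above)
    • Flash source with simultaneous CPU access: 6 cycles / 80MHz = 75ns per GPIO update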

  • That's excellent, thanks, and appears to match my measurements.

    Can I just check what your table is showing: if my GPIO pattern is placed in SRAM, there is no difference in timing even when the CPU is also accessing SRAM or flash? So my CPU could still be executing code (including interrupts) and accessing data in flash or SRAM, and provided that no higher-priority DMA channel is enabled and the CPU isn't accessing the GPIO peripheral, there should be nothing to perturb the timing of the GPIO outputs? If so, that would be ideal for my application.

  • Yes, that's correct. In any case, the DMA always needs its base number of clock cycles (in repeat-block DMA transfer mode with the fastest 8-/16-/32-bit transfers), which is 3 cycles per transfer: read, wait, write.
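
    If you want the pattern to play out continuously rather than as the one-shot block transfer in the code earlier in the thread, the channel can be configured for a repeat-block mode. The sketch below is only illustrative: DL_DMA_configTransfer and the DL_DMA_FULL_CH_REPEAT_BLOCK_TRANSFER_MODE / DL_DMA_WIDTH_WORD / DL_DMA_ADDR_* names are recalled from the MSPM0 SDK DriverLib and should be verified against dl_dma.h or the code SysConfig generates for your project:

    /* Assumed DriverLib names - verify against dl_dma.h in your SDK version. */
    DL_DMA_configTransfer(DMA, DMA_GPIO_OUTPUT_CHAN_ID,
        DL_DMA_FULL_CH_REPEAT_BLOCK_TRANSFER_MODE, /* re-arm after each block  */
        DL_DMA_NORMAL_MODE,                        /* no fill/table extension  */
        DL_DMA_WIDTH_WORD, DL_DMA_WIDTH_WORD,      /* 32-bit reads and writes  */
        DL_DMA_ADDR_INCREMENT,                     /* step through the pattern */
        DL_DMA_ADDR_UNCHANGED);                    /* always write DOUT31_0    */

    DL_DMA_setTransferSize(DMA, DMA_GPIO_OUTPUT_CHAN_ID,
        sizeof(pin_output) / sizeof(pin_output[0]));
    DL_DMA_enableChannel(DMA, DMA_GPIO_OUTPUT_CHAN_ID);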