
EK-TM4C123GXL: Unexpected performance limit

Part Number: EK-TM4C123GXL

I have a breakpoint set on the last line of the program below: t = getTicks();

If SYSTEM_TICKS_PER_SECOND is set to 100,000 the breakpoint triggers.

If I set it to 1,000,000 the breakpoint does not trigger.

This sounds like all of the processor's time is spent in the ISR, or the interrupt is not being cleared. SysTick is said to be self-clearing, and I don't think that is the problem, since on the scope the blips are 1 microsecond apart. Maybe SysTick uses more processing time than I thought. I chose SysTick because of its high priority (-1), but maybe an external timer would put less burden on the ARM core.

The clock is set to 80 MHz, so I would think it would have time to do other things. Please set me straight.


#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/gpio.h"
#include "driverlib/rom_map.h"
#include "driverlib/sysctl.h"
#include "driverlib/systick.h"

#define SYSTEM_TICKS_PER_SECOND 1000000UL
static volatile uint64_t ticks;   // volatile: written in the ISR, read in getTicks()
static bool sytickConfigured = false;

static void systickISR(void);     // forward declaration, needed before SysTickIntRegister()

void systickSetup(void)
{
    SysTickPeriodSet(SysCtlClockGet() / SYSTEM_TICKS_PER_SECOND);
    SysTickIntRegister(systickISR);
    SysTickEnable();
    SysTickIntEnable();
    sytickConfigured = true;
}


uint64_t
getTicks()
{
    volatile uint64_t pete;
    volatile uint64_t repeat;

    //Deal with ISR updates during non-atomic read
    pete   = ticks;
    repeat = ticks;

    while ( pete != repeat )
    {
      pete   = repeat;  //This should be just a register move, fairly efficient.
      repeat = ticks;
    }

    return pete;
}


static void
systickISR(void)
{
    ticks++; // One tick per interrupt; microseconds since power-up at 1 MHz.

    //Create 1 MHz blips on scope //jvh
    GPIOPinWrite(GPIO_PORTF_BASE, GPIO_PIN_2, GPIO_PIN_2); //Blue
    GPIOPinWrite(GPIO_PORTF_BASE, GPIO_PIN_2, 0);

    //Tickle scheduler...
}

int
main(void)
{
    volatile uint64_t t = 0;

    // Set clock to 80 MHz - my eval board is 16 MHz instead of 20 MHz
    MAP_SysCtlClockSet(SYSCTL_SYSDIV_2_5 | SYSCTL_USE_PLL | SYSCTL_OSC_MAIN | SYSCTL_XTAL_16MHZ); //jvh - change freq

    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOF);
    GPIOPinTypeGPIOOutput(GPIO_PORTF_BASE, GPIO_PIN_2);

    systickSetup();


    while (1)
    {
       SysCtlDelay(1000); // 3 cycles per loop: ~3000 cycles, ~37.5 us at 80 MHz, not counting ISR time.
       t = getTicks();
    }
}

  • Hi John,

    This sounds like all of the processor's time is spent in the ISR or it is not clearing.

    If you set SYSTEM_TICKS_PER_SECOND to 100,000 then the breakpoint will trigger; I tried that as well. This already proves that the processor does come out of the ISR and execute the getTicks function. I also see that the variables 't' and 'ticks' are almost identical when SYSTEM_TICKS_PER_SECOND is set to 100,000.

    You wrote the line below, where SYSTEM_TICKS_PER_SECOND is defined as 1,000,000. You configure the system clock for 80 MHz, and therefore SysCtlClockGet() will return 80000000. With this you are setting the SysTick period to 80000000 / 1000000 = 80 system clock cycles, which is equal to 1 µs. Within that 1 µs, the processor needs to push the stack, execute the code to set and then clear the LED, and pop the stack before returning. This can take some cycles to complete. If I comment out the two lines that set and clear the LED, then I can see the breakpoint trigger, with SYSTEM_TICKS_PER_SECOND still set to 1,000,000. Please try it for yourself.

    SysTickPeriodSet(SysCtlClockGet() / SYSTEM_TICKS_PER_SECOND);

  • Thanks for posting, Charles.

    I had tried commenting out the GPIO calls and saw a difference; I should have said so, apologies.

    But my question remains. Am I still so close to the performance limit that there will be time for nothing else?

    I tried an external timer but had poor results, that is another post.

  • Hi John,

    But my question remains. Am I still so close to the performance limit that there will be time for nothing else?

    I tried an external timer but had poor results, that is another post.

    Yes, more or less, as this is similar to a producer-consumer model where the consumer cannot consume the data (e.g. set/clear the LED and the other operations inside the ISR) faster than the producer generates it (be it a sensor input or an extremely fast interrupt).

     In the post https://e2e.ti.com/support/microcontrollers/arm-based-microcontrollers-group/arm-based-microcontrollers/f/arm-based-microcontrollers-forum/1088947/ek-tm4c123gxl-timer-seems-wrong-at-low-load-counts, you measured 823 kHz as opposed to the 1 MHz expected using the timer module. Here, for SysTick, you are also using a 1 MHz interrupt interval, so the behavior is consistent.

     At an 80 MHz clock: 823 kHz measured vs. 1 MHz expected.

  • Thanks for all the detailed info. I like your avatar, it makes me long for the mountains of middle Virginia.

  • Sometimes time is the cure. I left this alone for a while, learned a little more, got better requirements, and decided the code I posted earlier is overkill. Interrupts will not be needed.

    So all I needed was an upcounting wide counter without interrupt enables. It looks like 64 bits is good for 7000+ years at 80M ticks per second. When the current microsecond count is needed I just call a function:

    uint64_t get_usec(void)
    {
        return TimerValueGet64(WTIMER0_BASE) / 80ULL;
    }

    The counter returns a clean 64-bit number, so I do not need to worry about atomic copies. The divide may take ~2 µs, but that is roughly constant for all calls to the function. Plus this greatly reduces the strain on the processor.
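
    For reference, a minimal sketch of how that free-running wide counter might be set up with TivaWare driverlib. The choice of WTIMER0 and the full-range load value are assumptions here, not confirmed from the poster's project; check the clocking against your own board:

```c
#include <stdint.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/timer.h"

void wtimer0_setup(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_WTIMER0);
    while (!SysCtlPeripheralReady(SYSCTL_PERIPH_WTIMER0))
    {
        /* wait for the wide timer to come out of reset */
    }

    /* Full-width (concatenated 64-bit) timer, counting up, no interrupts. */
    TimerConfigure(WTIMER0_BASE, TIMER_CFG_PERIODIC_UP);

    /* Count over the full 64-bit range before wrapping
       (about 7,300 years at 80 MHz). */
    TimerLoadSet64(WTIMER0_BASE, 0xFFFFFFFFFFFFFFFFULL);

    TimerEnable(WTIMER0_BASE, TIMER_A);
}
```

    The get_usec() function above then only has to divide the raw count by 80 to get microseconds, assuming the 80 MHz system clock.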