
Understanding how to set timer trigger periods

I'm setting my Stellaris LaunchPad clock speed to 40 MHz (at least I *think* I'm doing that :)):

SysCtlClockSet(SYSCTL_SYSDIV_5 | SYSCTL_USE_PLL | SYSCTL_OSC_MAIN | SYSCTL_XTAL_16MHZ);
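
(If I'm reading the datasheet right, the 400 MHz PLL output is pre-divided by 2, and SYSCTL_SYSDIV_5 then divides by 5, giving 200/5 = 40 MHz - so this quick check should confirm it at runtime:)

// Sanity check: should report the configured system clock, i.e. 40000000.
unsigned long ulClock = SysCtlClockGet();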

I need an interrupt to fire every 26.4 µs, so I'm doing the following:

// ...timer setup code...

TimerLoadSet(TIMER0_BASE, TIMER_A, SysCtlClockGet() / 1000 / 1000 * 26.4);

This is obviously not right (I'm getting a FaultISR).

What's the correct way of doing this?

Thanks!

  • Playing around a bit more:

    TimerLoadSet(TIMER0_BASE, TIMER_A, SysCtlClockGet() * 264 / 10000000);

At least it seems to behave. Will this give me my 26.4 µs? (Have I calculated it correctly?)

  • David Kaplan said:
    at least I *think* I'm doing that

    And that would make two of us...

    Perhaps simpler way to "skin this cat."

@ 40 MHz each clock is 25 ns.  Then 26.4 µs / 0.025 µs = 1056 clock ticks.  (check: 1056 × 0.025 µs = 26.4 µs)

    Thus:  TimerLoadSet(TIMER0_BASE, TIMER_A, 1056);   //  should yield your desired time period

    KISS - most always - rules...

    There is a potential "pitfall" in the simpler method - above described.  Change from/to the 40MHz System Clock will require a new calculation of the value (1056) I hard-coded.  That said - there always is value in producing a "specific number" rather than just a complex abstraction.  (i.e. SysCtlClockGet() altered by some unclear constant) 

    Suspect that a sound, safer way is to first employ the calculation method (illustrated here) - and then ensure that the "processed" SysCtlClockGet() result matches.  The discipline of the manual calculation will often save you from an "order of magnitude" miscue...  (As for your FaultISR - a guess only: that 26.4 float literal pulls in FPU instructions, and on these Cortex-M4F parts the FPU must be enabled first - FPUEnable() and FPULazyStackingEnable() in DriverLib - before any float math.  Integer-only arithmetic sidesteps the question entirely.)
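
    A minimal sketch of that discipline (the macro names are mine - purely illustrative).  Note also that the earlier SysCtlClockGet() * 264 overflows 32-bit arithmetic at 40 MHz (40,000,000 × 264 ≈ 1.06e10) - exactly the sort of miscue this check catches:

        #define SYSCLK_HZ     40000000UL  /* must match the SysCtlClockSet() call */
        #define PERIOD_TICKS  (SYSCLK_HZ / 1000000UL * 264UL / 10UL)  /* 26.4 us -> 1056 */

        /* Cross-check the hand calculation against the runtime clock... */
        if(SysCtlClockGet() != SYSCLK_HZ)
        {
            while(1)  /* ...and trap loudly if they disagree. */
            {
            }
        }

        /* The down-counter reloads after hitting zero - so load period-minus-one. */
        TimerLoadSet(TIMER0_BASE, TIMER_A, PERIOD_TICKS - 1);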

    With this ~40 kHz interrupt frequency - your interrupt service must be "short/sweet."  (if time spent in interrupt service is just ~2 µs - nearly 10% of your CPU time is "lost" to this continuous interrupt process)   Seems like a heavy demand...

  • Thanks. I've done as you suggest.

    Yes, the interrupt is heavy; it's for an 800x600 VGA controller :).

    It pretty much steals 80% of the cycles to do what it needs to do - but I'm happy with that (the other 20% is more than enough for dealing with the framebuffer and video RAM stuff). At 40 MHz that's 1056 cycles per line, of which roughly 845 go to pixel pushing, leaving about 210 for everything else.

    Counting cycles for Cortex-M4 instructions... fun... :)

  • Thank you - glad to have assisted.

    May I offer an alternative for your display implementation?  Have long been "captured" by the SSD1963 Graphic Controller - which supports up to 800x480 TFT/STN panels.  (I do note your 800x600 reference - but that is beyond the VGA (640x480) spec you also quoted - 800x600 being properly SVGA)

    As for, "20% being more than enough" - I raise a high flag.  Stellaris has - too long imho - remained @ 8 bit bus - and wider bus is much advantaged in your Ap...  Simple addition of "purpose-built" TFT Control IC provides your MCU the necessary freedom to "expand" - accommodate additional needs - which most certainly will visit...

  • I'll take a look at that controller. Does it support VGA?

    The 20% is more than enough for handling VRAM and framebuffer switching. You're right - if the load grows then I'll be pressed to find the cycles, but I will have to wait and see. I can always push the chip to 80 MHz and double up on the free cycles.

    I'm just doing this for fun, really, and for the challenge (getting the exact number of cycles for it to sync is hard). If I needed something custom and proper I'd just use an FPGA. :)

    Another question: Do you know what the overhead is, cycle-wise, for an interrupt to be triggered? I.e. what happens between the timer expiry and when the first line of code of the interrupt routine is executed (presumably there's a table lookup for the function and a call to it)? Is there any way to find this out?

  • David Kaplan said:

    @ 40 MHz each clock is 25 ns.  Then 26.4 µs / 0.025 µs = 1056 clock ticks.  (check: 1056 × 0.025 µs = 26.4 µs)

    Thus:  TimerLoadSet(TIMER0_BASE, TIMER_A, 1056);   //  should yield your desired time period

    I am facing the same problem while generating PWM using timer0 on my LM4F120H5QR controller at pin PB6.

    If we calculate the clock ticks, then to generate a frequency of 50 Hz I should put 800,000 there instead of 1056 (40,000,000 / 50 = 800,000).

    And as I want a 5% duty cycle (1 ms on time + 19 ms off time = 20 ms), I've used 760,000 as my MatchSet value (match = period − on time = 800,000 − 40,000 = 760,000).

    In short, I am using the following lines of code to do this (at 40 MHz):

        SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
        TimerConfigure(TIMER0_BASE, TIMER_CFG_32_BIT_PER);
        ulPeriod = (SysCtlClockGet() / 50) / 2;
        TimerLoadSet(TIMER0_BASE, TIMER_A, ulPeriod -1);
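
    (The /2 comes from toggling the pin on each timeout in the ISR - which would also explain the fixed 50% duty. A sketch of that presumed ISR:)

        // Presumed companion ISR (sketch): toggling PB6 on every timeout
        // turns the 100 Hz timeout rate into a 50 Hz, 50% square wave.
        // Assumes PB6 was configured with GPIOPinTypeGPIOOutput().
        void Timer0IntHandler(void)
        {
            TimerIntClear(TIMER0_BASE, TIMER_TIMA_TIMEOUT);
            GPIOPinWrite(GPIO_PORTB_BASE, GPIO_PIN_6,
                         ~GPIOPinRead(GPIO_PORTB_BASE, GPIO_PIN_6) & GPIO_PIN_6);
        }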

    I am getting 50 Hz with a 50% duty cycle on the digital oscilloscope, but I am not able to set the duty cycle...

    Then I tried enabling T0CCP0, based on a blog post related to this:

    // 40 MHz system clock
        SysCtlClockSet(SYSCTL_SYSDIV_5|SYSCTL_USE_PLL|SYSCTL_XTAL_16MHZ|SYSCTL_OSC_MAIN);

    // Configure PB6 as T0CCP0
        SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);
        GPIOPinConfigure(GPIO_PB6_T0CCP0);
        GPIOPinTypeTimer(GPIO_PORTB_BASE, GPIO_PIN_6);

    // Configure timer
        SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
        TimerConfigure(TIMER0_BASE, TIMER_CFG_SPLIT_PAIR | TIMER_CFG_A_PERIODIC);
        TimerLoadSet(TIMER0_BASE, TIMER_A, ulPeriod - 1);
        TimerMatchSet(TIMER0_BASE, TIMER_A, dutyCycle); // PWM
        TimerEnable(TIMER0_BASE, TIMER_A);


    Here the value of ulPeriod can be at most 131,070 (65,535 × 2) - maybe because I am splitting the timer.

    But I need to set 800,000 as ulPeriod (I think that is right?), and for that I thought I should not split the timer.

    But I haven't found any guidance/example on using the 32/64-bit timers to generate a PWM pulse. I've gone through the API function documentation...

    I have also tried some of them to start the 32/64-bit timers, but :( I was unable to make it work.

    Can someone please point out what I am missing?
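
    (Not a definitive answer - but a sketch of one approach that fits the numbers above. Two observations: the GPTM only drives the CCP pin when the half-timer is in PWM mode - TIMER_CFG_A_PWM, not TIMER_CFG_A_PERIODIC - and in PWM mode the 8-bit prescaler acts as a timer extension, stretching the 16-bit range to 24 bits - plenty for 800,000 ticks, no 32-bit timer needed. Untested, so verify against the datasheet:)

        // Sketch: 50 Hz, 5% duty on PB6 (T0CCP0) with a 40 MHz system clock.
        unsigned long ulPeriod = SysCtlClockGet() / 50;  // 800,000 ticks = 20 ms
        unsigned long ulOnTime = ulPeriod / 20;          // 40,000 ticks = 1 ms (5%)
        unsigned long ulMatch  = ulPeriod - 1 - ulOnTime;

        SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);
        GPIOPinConfigure(GPIO_PB6_T0CCP0);
        GPIOPinTypeTimer(GPIO_PORTB_BASE, GPIO_PIN_6);

        SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
        TimerConfigure(TIMER0_BASE, TIMER_CFG_SPLIT_PAIR | TIMER_CFG_A_PWM);

        // Prescaler carries the high 8 bits, the timer the low 16.
        TimerPrescaleSet(TIMER0_BASE, TIMER_A, (ulPeriod - 1) >> 16);
        TimerLoadSet(TIMER0_BASE, TIMER_A, (ulPeriod - 1) & 0xFFFF);

        // Output toggles at match: pulse width = load - match = ulOnTime.
        TimerPrescaleMatchSet(TIMER0_BASE, TIMER_A, ulMatch >> 16);
        TimerMatchSet(TIMER0_BASE, TIMER_A, ulMatch & 0xFFFF);

        TimerEnable(TIMER0_BASE, TIMER_A);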

  • David Kaplan said:
    overhead is cycle-wise for an interrupt to be triggered?

    We've found some variation - depending upon the specific interrupt.  (and we've measured across multiple ARM MCUs - multiple vendors) 

    IIRC - elapsed time from a GPIO- or timer-triggered signal to entry w/in the interrupt service routine is sub-1 µs.  (of course depends upon MCU, System Clock, priority settings, etc.)  For reference - ARM's architectural figure for Cortex-M3/M4 exception entry is 12 cycles from zero-wait-state memory - 300 ns @ 40 MHz - which squares with that.
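
    (If you want to measure it on your own board - a sketch that reads the down-counter at ISR entry; with the timer loaded with PERIOD_TICKS - 1 as in the earlier sketch, the ticks already consumed since the reload are your latency:)

        // Sketch: estimate timeout-to-ISR latency in timer ticks.
        void Timer0AIntHandler(void)
        {
            // The down-counter reloaded to (PERIOD_TICKS - 1) at timeout and
            // has been counting since - the difference is the entry latency.
            unsigned long ulLatencyTicks =
                (PERIOD_TICKS - 1) - TimerValueGet(TIMER0_BASE, TIMER_A);

            TimerIntClear(TIMER0_BASE, TIMER_TIMA_TIMEOUT);
            // Inspect ulLatencyTicks with a debugger/breakpoint - each tick
            // is 25 ns at 40 MHz.
            // ... rest of the service routine ...
        }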

    Have been in the display biz - Cortex-M3 and -M4 really not intended for such application.   (many have tried - frustration, delay, unacceptable performance - their "norm")

    Yes if time/effort/cost no object - FPGA + buffer ram can achieve - but never as efficiently or well conceived as "pro-designed" SSD1963.  (I receive no benefit from SSD maker)  

    Suggest you find "fun" from a wheel "less superbly implemented!"