Changing SysTick timer settings EK-TM4C123/BIOS6.41

Other Parts Discussed in Thread: SYSBIOS

Hi,

I created an empty RTSC project. It includes a heartbeatFXN task that blinks an LED using Task_sleep(). It appears that the tick interval is 1 ms, and that the core SysTick timer is being used by BIOS to generate it, since no other timer instances have been created. Where can I view the code for this? The config file does not seem to have any settings pertaining to this timer.

I want to change the tick resolution to 100 µs, since at present the minimum interval for which Task_sleep() can be called is 1 ms (which might have an uncertainty of +/-1 tick), so I need access to the interval settings of the SysTick timer. I don't want my task to sleep for more than 100 µs at a time.

Please advise.

Regards

  • A clipping from the Stellaris SYS/BIOS overview on how to make SysTick the RTOS clock source, available via the wiki, might help. Search "TM4C SYS/BIOS" for a direct link to the overview PDF.

    Timers and Clocks
    Timers and clocks can be configured within SYS/BIOS to maintain portability between TI microcontrollers.
    When the SYS/BIOS timer module is used, the kernel manages the hardware timer peripherals on the device.
    Timer threads are run within the context of a HWI thread.
    The SYS/BIOS clock module layers on top of the timer modules and manages the RTOS time base. The
    clock module allows functions to be fired as one-shot timers or periodically using the functionality and priority
    of a software interrupt (SWI). Clocks run at the same SWI priority so they cannot preempt each other.
    In the case of the Stellaris® Cortex™-M3, the SysTick timer in the core can be used as the RTOS time base
    by adding the following lines into the configuration file:
    var halTimer = xdc.useModule('ti.sysbios.hal.Timer');
    halTimer.TimerProxy = xdc.useModule('ti.sysbios.family.arm.m3.Timer');
    More information about using specific Cortex™-M3 timers in a SYS/BIOS application can be found on the
    SYS/BIOS for Stellaris Devices Wiki page.
    In addition to the timer and clock module, a timestamp module is provided as a component of the SYS/BIOS
    architecture. This module is useful for benchmarking applications that require precise timing measurements.
    More information about the SYS/BIOS timers and clocks can be found in the SYS/BIOS User’s Manual in the
    SYS/BIOS section of the Help Contents found in the Help > Help Contents menu of Code Composer Studio.
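    To tie the Clock module description above to code, here is a minimal sketch of creating a periodic Clock function; the function name, 500-tick period, and error handling are assumptions, not taken from the clipped overview:

    #include <xdc/std.h>
    #include <xdc/runtime/Error.h>
    #include <xdc/runtime/System.h>
    #include <ti/sysbios/knl/Clock.h>

    static Void blinkClockFxn(UArg arg)
    {
        /* runs in Swi context every 'period' system ticks; keep it short */
    }

    Void createBlinkClock(Void)
    {
        Clock_Params params;
        Error_Block eb;

        Error_init(&eb);
        Clock_Params_init(&params);
        params.period    = 500;     /* re-fire every 500 system ticks */
        params.startFlag = TRUE;    /* start running at BIOS_start() */
        if (Clock_create(blinkClockFxn, 500, &params, &eb) == NULL) {
            System_abort("Clock_create failed");
        }
    }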
  • Thanks. While the replies are helpful, they still did not address the basic issue: how do I change the resolution of the Task_sleep(nticks) function?
    At present nticks defaults to units of 1 ms. I want it to be in units of 100 µs.
    Regards
  • Are you sure you want to do that? It will increase the time spent processing timers by a factor of ten.

    Robert
  • It might mean around 10 µs every 100 µs. I may not want exactly 100 µs but some multiple thereof, and I will set it accordingly.
    Anyway, since my last post the problem is solved. Clock.tickPeriod = 100; did the trick (see the sketch below), after including
    var Clock = xdc.useModule('ti.sysbios.knl.Clock');
    Regards
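    For illustration, a minimal sketch of what the 100 µs tick means on the C side, assuming Clock.tickPeriod = 100 in the .cfg file; the task body and LED helper are placeholders, not code from this thread:

    #include <xdc/std.h>
    #include <ti/sysbios/knl/Clock.h>
    #include <ti/sysbios/knl/Task.h>

    extern void toggleLed(void);    /* placeholder LED helper */

    Void heartBeatFxn(UArg a0, UArg a1)
    {
        for (;;) {
            toggleLed();
            /* Clock_tickPeriod is the generated constant holding the configured
               tick period in µs, so 500 µs is 5 ticks here. */
            Task_sleep(500 / Clock_tickPeriod);
        }
    }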
  • That should make SysTick = SysClk 120 MHz / 100, a derived frequency of 1.2 MHz, period (8.3e-7 s) 833 ns/tick. How that turns into microseconds has me scratching my head. So SysClk / 200 gives a 600 kHz cycle and 1.6 µs/tick, yet runs the SysTick interrupt twice as fast.

    The notes in the newer TivaWare SW examples still reference SysTick in (ms), which was possibly true for slower MPU clock rates below 50 MHz. At 120 MHz, F = 1/p: the larger the denominator (p), the faster the tick rate becomes.
  • Besides the processing overhead, the reason I asked is that waits are seldom the right method for "every x time". Using a wait as a loop timer ensures a certain amount of jitter and an overhead that makes the time longer than requested. These side effects get worse the faster you make your tick.

    Remember you have a lot of timers on these devices (and the timers can start some actions w/o processor intervention; see the sketch after this post), and an interrupt that starts an action directly imposes less overhead than one that first has to check a number of sets of data to determine if something should be scheduled.

    The highest frequency tasks in my systems are often running faster than the kernel's system clock.

    Robert
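    As one concrete example of a timer starting an action without processor intervention, here is a minimal TivaWare sketch of a general-purpose timer hardware-triggering ADC0 sequencer 3; the 10 kHz rate and channel are assumptions, and real code would also configure the AIN pin and a completion interrupt:

    #include <stdint.h>
    #include <stdbool.h>
    #include "inc/hw_memmap.h"
    #include "driverlib/sysctl.h"
    #include "driverlib/timer.h"
    #include "driverlib/adc.h"

    /* Timer0A periodically triggers ADC0 sequencer 3 entirely in hardware;
       the CPU only gets involved when a conversion completes.
       Assumes an 80 MHz TM4C123 system clock. */
    void configureTimerTriggeredAdc(void)
    {
        SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
        SysCtlPeripheralEnable(SYSCTL_PERIPH_ADC0);

        /* 10 kHz trigger rate derived from the system clock */
        TimerConfigure(TIMER0_BASE, TIMER_CFG_PERIODIC);
        TimerLoadSet(TIMER0_BASE, TIMER_A, SysCtlClockGet() / 10000 - 1);
        TimerControlTrigger(TIMER0_BASE, TIMER_A, true);

        /* Sequencer 3: one sample of AIN0, flag an interrupt when done */
        ADCSequenceConfigure(ADC0_BASE, 3, ADC_TRIGGER_TIMER, 0);
        ADCSequenceStepConfigure(ADC0_BASE, 3, 0,
                                 ADC_CTL_CH0 | ADC_CTL_IE | ADC_CTL_END);
        ADCSequenceEnable(ADC0_BASE, 3);

        TimerEnable(TIMER0_BASE, TIMER_A);
    }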
  • Clock.tickPeriod = 100; is generated by XGCONF automatically after I changed the Clock tick period to 100 µs from 1000 µs. Also, it may not be the same as defining a divisor for the system clock, since the specs are in µs rather than ticks. I am currently using an 80 MHz CPU frequency on the EK-TM4C123.
    SYS/BIOS allows graphical configuration directly in µs for the timers and generates the divisors internally.
    Regards
  • Perhaps there is another value being divided by 100 prior to passing the value to the SysTick timer. Mathematically 80 MHz / 100 gets us 1.25 µs (0.00000125 s), or 1250 ns. Unless we scope a GPIO pin fired by the SysTick interrupt handler it would be hard to gauge whether the tick period actually hits 100 µs. Otherwise we have to watch things like the LED blink rate or LCD update rate to see the effects of the SysTick divisor. I don't trust the programmer's notes after recently dealing with this exact scenario: the values note (ms) yet the math formula F = 1/P proves otherwise. Not too long ago TI migrated StellarisWare into TivaWare and may have overlooked factoring for faster MPU clocks around SysTick.

    Oddly, delay times of SysCtlDelay() get longer by increasing the denominator (P), just the opposite of the SysTick period divisor.
  • We have tried to limit places of SysCtlDelay() yet find it necessary to strategically place them in various code routines. Reasoning assumes instruction pointer stops processing parts of modules while other module parts keep running as if nothing is wrong. Especially true around interrupt vectors nested routines that may load buffers from slower responding peripherals. We see EEROM may at times not appreciate having the pedal to the metal during reads so we (delay) momentarily prior to asserting into them. Seems a short delay provides cool off clocks so the peripheral runs more synchronously with Systick.
  • Where tasks have to run to guaranteed precision, a Hwi is indeed the proper way. It can and will do its job in any case, irrespective of the system clock (see the sketch after this post). Task_sleep is not used to generate just a delay. It also allows other tasks to run in the meanwhile. So whilst 1 ms may be too much of a wait, 100 µs allows other tasks to run yet ensures that the task does not sleep "too long."
    Regards
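    For illustration, a minimal sketch of a dedicated SYS/BIOS hal Timer whose function runs in Hwi context at a fixed 100 µs period, independent of the kernel Clock tick; the function names are placeholders, not code from this thread:

    #include <xdc/std.h>
    #include <xdc/runtime/Error.h>
    #include <xdc/runtime/System.h>
    #include <ti/sysbios/hal/Timer.h>

    static Void sampleIsr(UArg arg)
    {
        /* time-critical work goes here; keep it short, it preempts all tasks */
    }

    Void startSampleTimer(Void)
    {
        Timer_Params params;
        Error_Block eb;

        Error_init(&eb);
        Timer_Params_init(&params);
        params.period     = 100;                          /* 100 µs */
        params.periodType = Timer_PeriodType_MICROSECS;
        if (Timer_create(Timer_ANY, sampleIsr, &params, &eb) == NULL) {
            System_abort("Timer_create failed");
        }
    }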
  • BP101 said:
    We have tried to limit places of SysCtlDelay() yet find it necessary to strategically place them in various code routines.

    Yikes!

    BP101 said:
    . We see EEROM may at times not appreciate having the pedal to the metal during reads so we (delay) momentarily prior to asserting into them.

    I can't parse out what you mean by asserting into them, but this would worry me.  The EEPROM presents as normal memory for reading; I've not seen evidence otherwise.  I do do a read of EEPROM on startup and I've not seen any evidence of difficulty.  Anyone know of an errata on fast access to the EEPROM?

    Robert

  • "can't parse out what you mean by asserting into them"

    IE: Executing code inside a function nesting many other asserted functions also push the return address of the caller onto the heap and pop it off case no return is sent back to the caller. Indeed code may have varied burst execution rates depending on the events prior to an instructions assertion. Hardware is pushed harder during instruction decode bursting of execution times so timing of a peripheral control signals may become less rolled or have less roll off time.
  • Nikhil Kant said:
    Task_sleep is not used to generate just a delay. It also allows other tasks to run in the meanwhile.

    My reaction to that is that it sounds like you have your tasks configured incorrectly or are using the wrong tasking scheme.  It should never be necessary for a high priority task to sleep to give a lower priority task time to run.

    If you have several background tasks that run continuously then you need a kernel with time slicing, or one with co-operative tasking for tasks of the same priority (a sketch follows below).

    Robert
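    For illustration, a minimal sketch of co-operative sharing between two equal-priority SYS/BIOS tasks using Task_yield(); the worker helpers are placeholders, not code from this thread:

    #include <xdc/std.h>
    #include <ti/sysbios/knl/Task.h>

    extern void doChunkOfWorkA(void);   /* placeholder application helpers */
    extern void doChunkOfWorkB(void);

    /* Both functions are assumed to be registered as Tasks at the same priority. */
    Void workerAFxn(UArg a0, UArg a1)
    {
        for (;;) {
            doChunkOfWorkA();
            Task_yield();   /* move to the back of the ready queue so workerB runs */
        }
    }

    Void workerBFxn(UArg a0, UArg a1)
    {
        for (;;) {
            doChunkOfWorkB();
            Task_yield();
        }
    }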

  • I think I'm more confused now. The text seems to suggest skipping returns which doesn't make a lot of sense (at least without additional context).

    Robert
  • BP101 said:
    ...during instruction decode bursting of execution times...

    Robert reports confusion - as does my small group.

    Might you detail a bit how an "instruction decode" "bursts" execution time?    

    "Colorful" language does have its place - yet (some) basis to reality should remain.    

    How do we "burst" execution time?     And - why & how - has, "instruction decode" been singled out as, "pushing hardware, harder?"

  • TI RTOS does not provide time slicing by default.
  • Or, apparently, 100 µs ticks.  It does appear to support time slicing though.  If not, then FreeRTOS appears to support it.

    I don't think time slicing is often needed (RTKs usually don't emphasize it because it is a minority need) but I don't know your application.  It does, however, appear to be available from multiple sources.

    I haven't seen more sophisticated scheduling like Earliest Deadline First or Least Slack in a small system.  Probably because fixed priorities combined with Rate Monotonic Analysis is sufficient[1] until the systems get quite large and dynamic.

    Robert

    [1] And apparently provably optimal for a fairly large subset of cases. https://www.cs.rutgers.edu/~pxk/416/notes/08-rt_scheduling.html
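    For context, the classic Liu-Layland schedulability bound behind that claim (a summary, not part of the linked notes' wording): a set of n independent periodic tasks with computation times C_i and periods T_i is guaranteed schedulable under rate-monotonic fixed priorities if

        \sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right)

    which is about 0.78 for n = 3 and approaches ln 2 ≈ 0.69 as n grows.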

  • Robert Adsett said:
    Probably because fixed priorities combined with Rate Monotonic Analysis is sufficient[1] until the systems get quite large and dynamic.

    Applaud your use of qualification "probably" and especially your introduction (first I've noted, here) of, "footnotes" (w/full links).

    Others (especially landing - this thread) would do well to so, "qualify/calm/fact-check" prior to, "unfurling."

  • MIPS is not really a constant speed at all times; every instruction has a unique execution timing. Instruction decode can burst in areas of SW code that have short timings and slow in areas of more dense execution instruction timings. We can tweak the project's overall execution speed in the build optimization settings by making trade-offs in debuggability.
  • Your writing (may) make some sense if "burst" was replaced w/"varies."

    Instruction execution time may experience far more variance than the "instruction decode" - which you highlighted.
  • Was typically under the belief they were nearly one and the same. Instruction execution time is linked to the decoder's speed in executing each instruction based on a varied binary length. Otherwise channel bursting at (varied) speeds multiple instructions in parallel decoder pipes. Thankfully those days of single and linear FIFO instruction decoders are long past us in today's highly advanced MPU designs.

    You may argue individual machine cycles are relative to decode execution timings; the Z80 had M1 cycles to differentiate address bus times from data bus times in little of 1 meg address space. The 8085 likewise had ALE. How do we measure machine state against a parallel channel decoder's throughput other than in theory assigning an acronym (MIPS) to describe what is the true MPU speed? More true measures are in Whetstones, or lately the XS Bench score for overclockers keeping CPU core temp below peak disaster points.

    Do tend to think ARM would take advantage of parallel bursting channels in the instruction decode engine. Perhaps that remains for more advanced architectures such as Intel & AMD. Have always assumed the project build adjustable speed settings in CCS took advantage of the ARM's highly advanced Thumb instruction decoder engine. Keeping it real, just guessing ARM to have parallel decode execution pipes in its engine.
  • Thought there might be some restriction to EEPROM access and, behold, in reading the data sheet (CB1 inspired) I stumbled into a briar patch. Inasmuch, I added this code piece to the top of all EEPROM R/W. It may not be of much benefit to FR-IOTS folks, but hey, I'm a likeable code-sharing Friday night friend when you need one. BTW: it also reads that the ARM Cortex-M4 processor execution speed is 150 DMIPS.  WOW, 150, not too shabby, but watch out for sharp curves on those tracks!

    /* Software must ensure there are no Flash memory writes or erases pending before performing
       an EEPROM operation. When the FMC register reads as 0x0000.0000 and the WRBUF bit
       of the FMC2 register is clear, there are no Flash memory writes or erases pending.
       Requires "inc/hw_types.h" (HWREG) and "inc/hw_flash.h" (FLASH_FMC, FLASH_FMC2,
       FLASH_FMC2_WRBUF). */
    
     /* Spin while a write or erase is still pending OR the write buffer is still in use;
        both conditions must be clear before touching the EEPROM. */
     while((HWREG(FLASH_FMC) != 0x00000000) || (HWREG(FLASH_FMC2) & FLASH_FMC2_WRBUF))
     {
     }
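    A usage sketch of that guard in front of a TivaWare EEPROM read; EEPROMRead() and EEPROMInit() are the real driverlib calls, while the wrapper name and offset handling are placeholders:

    #include <stdint.h>
    #include <stdbool.h>
    #include "inc/hw_types.h"
    #include "inc/hw_flash.h"
    #include "driverlib/eeprom.h"

    /* Wrapper (placeholder name) around the busy-wait shown above */
    static void WaitForFlashIdle(void)
    {
        while ((HWREG(FLASH_FMC) != 0x00000000) ||
               (HWREG(FLASH_FMC2) & FLASH_FMC2_WRBUF))
        {
        }
    }

    /* Assumes EEPROMInit() was called once at startup and byteOffset is word aligned */
    uint32_t ReadConfigWord(uint32_t byteOffset)
    {
        uint32_t value;

        WaitForFlashIdle();
        EEPROMRead(&value, byteOffset, sizeof(value));
        return value;
    }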