RTOS/TM4C1294KCPDT: RTOS 2.16.1.14 update has a few issues.

Part Number: TM4C1294KCPDT
Other Parts Discussed in Thread: SYSBIOS

Tool/software: TI-RTOS

RTOS 2.16.1.14 has several issues:

1. A Boolean switch fails state-change detection inside a running task when toggled from a GUI Composer v1.0 button. The Boolean value must be set prior to entering the task for it to assert inside the task.

2. A Task_sleep(2000) call inside a task that is exiting produces no Sleep indication in the RTOS Analyzer details or Live events, yet shows 0000 for the time listing. Task_exit() or a plain return produces the same result.

3. Reverting from an MCU core with 1 MB of flash to a 512 KB flash MCU fails to place the TivaWare device driver library interrupt vector section (.vtable) in the correct position, and the application fails to compile with M3 Hwi module support.

4. The RTOS Analyzer graphs (Load/Execution) in a CCS7 debug import show no live data, yet switching to the details view shows values being posted.

  • Workaround for #1: fixed by using a (while) rather than an (if) statement with a 2000 ms sleep period, waiting for the Boolean switch to be activated from the GUI. The odd part is that the task was queued each time after the sleep period, but the Boolean switch was apparently not being LATCHED.

    #2: The task name used to be shown; without it, how is anyone to know which task is which among all the dynamically allocated Hwi task handlers? The 2000 ms timeout period shows up in the logging, but there are 2 static tasks with the same logger print syntax.

     RTOS Analyzer Format: 664000063025,,CORTEX_M4_0,"LM_sleep: tsk: 0x200147a8, func: 0x58f9, timeout: 2000",Task_LM_sleep,Unknown,200147a8,,11287,LoggerIdle,ti.sysbios.knl.Task,ti.sysbios.knl.Task,,,79680007563,0x200147A8,0x58F9,0x7D0,0x0,,,,,

    Static Task Function logger:

    /* Sleep for 2000 ms */
    Task_sleep(sleepDur);
    Log_print2(Diags_USER1, "Task %d awake, count = %d", 0, count);
    count++;

    #3: "program will not fit into available memory: run placement with alignment fails for section '.vtable', size 0x26c; overlaps with '.vecs', size 0x360 (page 0)" tm4c1294kcpdt.cmd /rtsc_bldc_qs_iot line 38, C/C++ Problem.

    The RTOS project (TM4C129x.cmd) file incorrectly assigns .vtable to a fixed hex address. Changing the placement to (.vtable : > SRAM) and (.intvecs : > FLASH) resolves it. That issue is a killer time eater when migrating existing apps to the RTOS.
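
    For anyone hitting the same overlap, a minimal sketch of the corrected SECTIONS placement in the device .cmd file, assuming the FLASH and SRAM regions are already defined in the MEMORY directive (as in the stock TM4C129x.cmd):

    SECTIONS
    {
        .intvecs : > FLASH   /* boot vector table stays at the flash reset location */
        .vtable  : > SRAM    /* runtime (relocatable) vector table goes to RAM,
                                not a fixed hex address */
    }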

  • Can you attach the project so we can have a better understanding of your comments?

    Todd
  • Hi Todd,

    The only problem left is that the RTOS Analyzer graphs are not producing any lines since the recent CCS7 update on Thursday, which replaced many IDE files. The only graph that kind of works is Execution, but it only produces a single line for the kernel Idle task. The CPU load graph was previously producing lines for several different dynamically loaded Hwi tasks, as I seem to recall from some time ago.

    Update to CCS 7.0 versions:

     IDE - Debug Server Integration Feature 6.0.1.201705101800
     RTSC/XDCtools (Target Runtime Support) 3.32.2.25
     XDCtools Core Update Feature 3.32.2.25
     Code Composer Studio IDE Main Feature 7.2.0.201705101800
     System Analyzer (UIA Target), (IDE Client) 2.0.5.50
     <<< DVT - Graph Visualization 4.1.0.201705251038 >>>

    https://e2e.ti.com/support/development_tools/code_composer_studio/f/81/p/648499/2382951#2382951

  • The question is: should Logging be producing CPU load graphs when (and if) it can produce the CPU load detail? If so, what might have changed in the updates to break the RTOS graphing functions? The original RTOS core was version 3.32.01.22.

    Yes it should:

    https://e2e.ti.com/support/development_tools/code_composer_studio/f/81/p/648499/2382951#2382951

  • Hi Todd,

    After hours of RTOS 2.16.1.14 evaluation, my conclusion is that a static task only executes one time, even though the debug Live events indicate it loading many times, with 0.00 CPU load logged evermore. A static task only executes an (if) statement on the very first load, and forever after the Task module fails to execute the loaded task function once it returns, yet still indicates loading with 0.00 CPU load. Therefore a static task's function only executes when a (while) loop exists in the task itself, thereby keeping it alive (persistently) in the Task module handler.

    That odd behavior seems contradictory to how any task module should behave in a hub-and-spoke kernel or otherwise. The point is that trying to throttle a task's kernel time slice using (Task_sleep) causes erratic, asynchronous timing of (high-speed) visual display data when the task function never returns and loops out of sync with other events. The simple fact that a while loop is required to keep the static task alive (spoke) seems to indicate something is horribly wrong in the Task module (hub). So the task is queued but never re-executes when an (if) statement with a Boolean switch qualifier is used to invoke the function.
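
    For reference, the conventional SYS/BIOS pattern for a task that must react to an externally toggled switch is to block on a Semaphore inside a while loop rather than test a Boolean once. A minimal sketch; printSem and the function name are illustrative, not from this project:

    #include <xdc/std.h>
    #include <ti/sysbios/BIOS.h>
    #include <ti/sysbios/knl/Semaphore.h>
    #include <ti/sysbios/knl/Task.h>

    extern Semaphore_Handle printSem;   /* posted from the GUI/Hwi side */

    Void printTask(UArg arg0, UArg arg1)
    {
        while (1) {
            /* Block here until the switch event is posted; the task stays
               alive instead of falling through the bottom of the function. */
            Semaphore_pend(printSem, BIOS_WAIT_FOREVER);
            /* ... print the data ... */
        }
    }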

  • Hi Todd,

    Regarding the static task execution behavior: I last reported the same bizarre (not fully understood) behavior over a year ago. As a test, I have moved our USB0 printing static task call into an existing, functional 1-second periodic GPTM timer. RTOS Live events show the Boolean-switched (if) static task persistently loading with CPU load 0.00 after the initial execution of the task. Yet the static task persistently executes there, where it failed miserably as a task managed by (Clock_tick) with the same Boolean switch (if) test and no (Task_sleep) in the function.

    Again, a static task requiring a (while) loop to keep it alive seems to indicate (Clock_tick), or something else, is failing to make the Task module execute ANY static task more than one time.

  • BP, Todd is actually out of the office this week. Can you wait until next week for a response?
  • Figured that was the case, thanks David!
  • BP101 said:
    A static task requiring a (while) loop to keep it alive seems to indicate (Clock_tick), or something else, is failing to make the Task module execute ANY static task more than one time

    Verified: the static task exits, then enters Blocked, yet Live events persistently show it maintaining LOAD, but the priority-15 task never actually re-loads. Yet calling the same static task function from an M3 Hwi-registered periodic GPTM handler executes the (xdc_Void) function persistently. Very puzzling why the kernel task model is not re-queuing the once-blocked task.

  • I'm confused; once a task exits, it's gone. You have to call Task_create (or Task_construct) to get it to come back.

    Why are you calling a task function from a timer? From the kernel's perspective, the Hwi function is simply calling another function. It has no idea that it is a task function.
  • Confused - Me too...

    ToddMullanix said:
    Why are you calling a task function from a timer?

    Because the task never executes again after it falls through the (if) statement when handled by the Task module's timer under kernel control, which I assume uses Clock_tick. Yet the static task, when moderated by the Task module (Clock_tick), persistently indicates LOAD after seemingly entering Blocked status; it never executes again, but seemingly re-queues into a LOAD status. I'm fairly certain it never exited; only when under timer control does it actually exit. Adding a while loop inside the task keeps the task alive in the task handler.

    The static task is not being instructed to exit (inside the function) by the kernel's Task module. Ideally, a task should never exit Task module control unless explicitly instructed to do so inside the task handler or by the kernel's arbitration, e.g. by asserting a (Task_exit) call in the function. A static task instance should remain dynamically under control of the Task module once it has been loaded, until the application instructs the Task module otherwise via (Task_exit), or the kernel preempts a task's execution order to assert application or kernel exceptions.

    Is not the kernel Clock_tick used to throttle the Task module for rolling tasks in and out of execution? If so, the Task_sleep call inside a task handler could be passed a count value (Void arg0) to throttle how often the Task module loads the task for execution. The kernel is not in priority control if the application has priority CPU execution over the Task module when issuing the Task_construct directive; such a directive should execute from the Task module under kernel control, not from the main application.
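
    As a sketch of that idea, with the period handed to the task through taskParams.arg0 at construction time (the function name and value here are illustrative):

    Void throttledTask(UArg arg0, UArg arg1)
    {
        UInt period = (UInt)arg0;   /* e.g. taskParams.arg0 = 2000 set in main() */

        while (1) {
            /* ... do the periodic work ... */
            Task_sleep(period);     /* yield to the scheduler for arg0 ticks */
        }
    }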

    ToddMullanix said:
    You have to call Task_create (or Task_construct) to get it to come back.

    The task has already been constructed by the Task module instance; why should it be necessary to keep recreating the same instance the Task module is already dynamically moderating? The Task module is under highest-priority kernel control, with second priority set to the task under the application's control. Seemingly the Task module is allowing a statically created instance (task) function to exit automatically, when it should not. Task handling is the job of the kernel's Task module, not so much the application's. Otherwise, why even bother having a Task module in the first place, other than to see live events being printed in debug?

  • Let's go back to basics since I think we are talking about different things.

    Here's a typical task that runs forever (albeit sleeping for 5 ticks after doing its work) and never exits.

    Void foo(UArg arg0, UArg arg1)
    {
        while (1) {
            // do something interesting that does not leave the while loop
            // (e.g. no "break;", "goto", etc.)
            Task_sleep(5);
        }
        // should never get here
    }

    Here is a task that can exit.

    Void bar(UArg arg0, UArg arg1)
    {
        int flag = 1;

        while (flag) {
            // do something interesting
            Task_sleep(5);
            if (some condition) {
                flag = 0;
            }
        }
        // can get here and the task will exit
    }

    Are we using the same language for when a task exits?

    Todd

  • The top (run-forever) task prints Task_LD_exit Live event stats and seems to LOAD in kernel time slots with various but expected stat info as the function executes perpetually. The bottom task function returns normally, and seemingly may pop the stack for the return address at the end of execution, but the Task module never seems to execute the Boolean-switched code inside the task on a LOAD event, or else is clearing the switch.

    Adding a Task_construct directive in (Void main) does not seem to bind with the task instance thread; there is no difference with or without it. What seems to be occurring is that after a task executes one time with an embedded Boolean-switched (if) statement, the true switch used to qualify the (if) may somehow be cleared by the Task module. Not sure about that happening; I suspect it is unlikely.

    It doesn't seem logical that we should have to add Task_construct inside the functions in order to keep binding to the instance thread address already in memory and known by the Task module's task handler. If that were true, the (while)-looped task would need the same Task_construct to keep it alive in the Task module's Live event stats, but it hasn't and doesn't.
  • The top one will never log a Task_LD_exit event.
  • Boolean switch declaration, from TivaWare usage, e.g.:

    #include <stdbool.h>

    bool g_bPrintingData = true;

    Task_Params_init(&taskParams);
    taskParams.stackSize = TASKSTACKSIZE;
    taskParams.stack = &task1Stack;

    Task_construct(&task1Struct, (Task_FuncPtr)PrintAllData_TaskFxn, &taskParams, NULL);

    Void PrintAllData_TaskFxn(UArg arg0, UArg arg1)
    {
        /* Print device statistics from tStats */
        if (g_bPrintingData)
        {
            USBprintf("\n> Print All Data tStats \n");
            /* Print out the MAC address for reference */
            PrintMac();
            /* Print every cycle the SRAM array buffer storing the CIK. */
            USBprintf("> CIK: %s\n", ExositeCIK);

            // Print some header text.
            USBprintf("\n << Collected Statistics >>\n");
            USBprintf("    --------------------\n");
            PrintStats(g_psDeviceStatistics);
            USBprintf("\n");
        }

        Log_print2(Diags_ALL_LOGGING, "Task %d awake, count = %d", id, count);
        count++;
    }
  • Sorry, what's your point with your last post?
  • Perhaps the top while task has numerous subroutines and does return, falling through the bottom; I have to check again whether it was Task_LM_switch rather than exit. The Task_LD_exit hover text says it is logged when a task "falls through the bottom" or when Task_exit is explicitly called.
  • That is the task that is falling through the bottom, minus the Task module's task instance code.
  • If I insert a Task_sleep(duration) in the print task right after the UART2 logging, it quickly faults the MCU. Oddly, adding the same sleep to a while loop with UART2 logging works OK.
  • I explicitly wrote the top task to NOT exit to make sure our terminology was the same.
    I wrote the second task to show a case where a task could exit...again to make sure the terminology of exit is the same.

    If a task does not exit, it will stay active. It may be blocked, preempted, running, or ready, but the kernel manages its execution based on its state and the state of the other threads.

    If a task exits (e.g. calls Task_exit or falls out of the task function), it is gone. You have to call Task_create (or Task_construct) if you want it back.
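
    A minimal sketch of that re-creation, assuming a dynamically created one-shot task function (all names here are illustrative):

    #include <xdc/std.h>
    #include <xdc/runtime/System.h>
    #include <ti/sysbios/knl/Task.h>

    Void oneShotFxn(UArg arg0, UArg arg1)
    {
        /* do the work once, then fall through the bottom: this instance exits */
    }

    Void respawn(Void)
    {
        Task_Params params;
        Task_Handle task;

        Task_Params_init(&params);

        /* The previous instance is gone once it exits; a new instance
           must be created for the function to run again. */
        task = Task_create(oneShotFxn, &params, NULL);
        if (task == NULL) {
            System_abort("Task_create failed");
        }
    }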
  • ToddMullanix said:
    If a task exits (e.g. calls Task_exit or falls out of the task function), it is gone. You have to call Task_create (or Task_construct) if you want it back.

    The task as written shouldn't be falling out under its own code posted above. Oddly, ROV shows the task has ambiguously Terminated (yellow box), a different status than the Live events keep posting: LS_taskLoad Load (below). That still doesn't answer why Task_sleep(duration) with the Boolean switch (if) test in the same posted code faults the kernel, yet a Task_sleep(n) inside the IOT task's while loop does not fault the kernel.

    ,"LS_taskLoad: 0x2001582c,0,-764920560,0x7c81",Load,TSK,PrintAllData_TaskFxn()@2001582c,0.00,1449,LoggerIdle,ti.sysbios.utils.Load,ti.sysbios.utils.Load,,,70070274604,0x2001582C,0x0,0xD2683D10,0x7C81,,,,,
    

  • Name: Task_LD_exit
    Prototype: extern const Log_Event Task_LD_exit;
    Description: Logged when Task functions fall through the bottom or when Task_exit() is explicitly called.
    Package: ti.sysbios.knl
    Product: TI-RTOS for TivaC 2.16.1.14

    **********************************************************************

    Name: Task_exit
    Prototype: Void Task_exit();
    Description: Terminate execution of the current task. Task_exit terminates execution of the current task, changing its mode from Mode_RUNNING to Mode_TERMINATED. If all tasks have been terminated, or if all remaining tasks have their vitalTaskFlag attribute set to FALSE, then SYS/BIOS terminates the program as a whole by calling the function System_exit with a status code of 0.
    Package: ti.sysbios.knl
    Product: TI-RTOS for TivaC 2.16.1.14

    *****************************************************

    LS_taskLoad, as posted in the RTOS Analyzer Live events, is missing from (Task.h). Also, the application never asserted a (Task_exit) command. It appears the kernel is not happy about something in the instance thread, but what?

  • Can you attach an exported sample project that demonstrates your issue? 

    Todd

  • Hi Todd,

    The project has confidential parts, but the task in question is posted in this thread above. Adding the (Task_construct) syntax below has no effect in making the (if) task reload after termination, whether it is added to the bottom of said task before it falls through or added into main. Adding the (Task_construct) syntax below to a 1-second timer function, or to another running task with a while loop, faults the MCU.

    The puzzling part is that when the (while) replaces the (if) in the (PrintAllData) task posted above, all seems to run OK for some time. Oddly, (Task_sleep) seems to slow the UART2 logging interval considerably more than the while loops they are part of, after the (Task_construct) syntax (below) was added into main for each task. The two application tasks added so far seem to be snatching the CPU away from the kernel when processing any while loops in the task, yet allow the (Task_sleep) to remain under kernel control.

    Void main()
    {
        /* Construct BIOS objects */
        Task_Params taskParams;

        Task_Params_init(&taskParams);
        taskParams.stackSize = TASKSTACKSIZE;
        taskParams.stack = &task0Stack;
        Task_construct(&task0Struct, (Task_FuncPtr)IOTAppLoop_TaskFxn, &taskParams, NULL);

        Task_Params_init(&taskParams);
        taskParams.stackSize = TASKSTACKSIZE;
        taskParams.stack = &task1Stack;
        Task_construct(&task1Struct, (Task_FuncPtr)PrintAllData_TaskFxn, &taskParams, NULL);

        /* ...some application init code... */

        /* SysMin will only print to the console when you call flush or exit */
        System_flush();

        /* Open UART for Log data */
        UARTUtils_loggerIdleInit(Board_UART0);

        /* Enable the IOT SW interrupt */
        g_bPrintingData = true;

        /* Start TI-RTOS BIOS */
        BIOS_start();

        /* SYS/BIOS should never reach here, but just in case... */
        while (1)
        {
        }
    }
  • It seems (Clock_tick) was not arbitrating each task's sleep period as expected. The UART2 logger was somehow executing the task's periodic printing contrary to the Task_sleep(n) intervals. Even though the RTOS GUI indicated Timer ID = 5 (verified in ROV), the kernel seemingly did not assert (Clock_tick) until (Clock.timerId = 5;) was explicitly added to the configuration script. At least that seems to coincide with both loggers slowing down their output of RTOS Analyzer live events, which now almost never occurs as it once did relative to the Task_sleep(n) command.
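
    For reference, the configuration change described above amounts to a couple of lines in the project's *.cfg script (a sketch; it assumes the Clock module is already pulled into the configuration):

    var Clock = xdc.useModule('ti.sysbios.knl.Clock');
    Clock.timerId = 5;    /* bind the kernel tick source to timer ID 5 */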

    After that change, UART2 logger prints to the RTOS Analyzer (Live tab) do not occur relative to the Task_sleep() duration, and the Task_sleep(n) command is not very responsive to the duration value being passed. The application crashes more often, with ROV posting a precise Hard/Bus Fault at memory address (0xbebebebc). Sadly, the application is far more stable without any RTOS kernel tasks; not to say it did not sometimes crash after thousands of hours online.

      

  • Sorry for the delay. Between holiday vacations and the fires near us (Santa Barbara), it has been rather busy. I think this will be easier to take off-line to get this resolved and then post the resolution. I'll send you a request.

    Todd