
RTOS/EK-TM4C1294XL: Task_sleep(duration) is disruptive to other constructed tasks being executed.

Part Number: EK-TM4C1294XL

Tool/software: TI-RTOS

Is the TI-RTOS kernel multi tasking or only multi threading?

The sleep duration of one constructed task seems to affect another constructed task's start-of-execution time. It may not be the entire task sleep duration, but the kernel's CPU load usage grows sharply relative to the total task sleep duration configured within all constructed tasks. Typically the kernel steadily loads the CPU around 15% with a task sleep duration of 400. Yet with the same task set to a 2000 sleep duration, the kernel CPU load momentarily peaks over 90% as the sleeping task wakes up, and that task also peaks over 90%.

The point is that this behavior does not look like an artifact of a preemptive multitasking kernel. The Windows desktop GUI leverages a preemptive multitasking kernel to seamlessly moderate background program threads without the user being aware it is occurring. We can't expect the very same from a 150 DMIPS CPU, but arguably the TI-RTOS program is much smaller in scale.

  • Perhaps the kernel idle logger, combined with a sleep duration inside the task used to throttle its execution intervals, is the show stopper. The CPU Load module window is set to 500 ms, and the sleep durations 800 and 2000 are derived from an assumed Clock tick of 1000 µs?

    Notice the task load increased from 0.23 to 0.34 when there should be very little added or perceived delay in PrintAllData_TaskFxn() printing into high-speed USB0 for either sleep duration. Without TI-RTOS in the application there is no slowdown in USB0 print data when the IoT application is connected to the cloud.

    The idea behind TaskFxn time logging is to show how a function's execution affects the CPU load as it executes instructions.

  • But wait, you not only get the super scrubber all-in-one Task_construct() method. If you act now, we'll send you the seamless universal switch method Task.create(), so you too can fool all your friends that your kernel's task host is more robust than theirs.

    Perhaps the (Task.h) function calls are not integrating into the kernel environment as expected? Notice how the two methods of constructing or creating a Task produce two very different outcomes in CPU loading with the same Task_sleep(800) duration as the graph above, which calls Task_construct() in (main.c).

  • bp101,

    BP101 said:
    Is the TI-RTOS kernel multi tasking or only multi threading?

    It depends on what you mean by "thread". Do you mean "thread" from a Linux perspective (e.g. multiple threads in a process, multiple processes in a system), or "thread" as a generic term for a thread of execution? We use the term "thread" in the generic sense, so each Task, Swi, or Hwi is a thread. Using this meaning, the TI-RTOS kernel is both multi-tasking and multi-threaded.

    Can you attach the projects that you used to generate the above Execution Graphs? The description does not seem to make sense, and I'd like to reproduce the graphs so I can better answer your comments.

    Todd

  • In the case above, a task block exists in SRAM and I assume a single thread handle executes it each time it is loaded. In preemptive multitasking, one task never affects another task's/thread's virtual memory space, though overall execution speed may be impacted globally.

    There is a large execution-time delay in the printing task while the other task is sleeping or otherwise executing. Some delay is expected, but not to the extent that is occurring. Notice the usage dropped after removing the Task_construct() directives and replacing them with Task.create() via the Task module. The printing task still has far too much delay, and adjusting the sleep time lower really makes no difference.

    Surely you don't need to test our application to believe this is occurring. The graphs above paint a very clear picture that something is wrong in the RTOS handling of Task_construct() tasks; among other issues, it clearly needs more work.
  • Based on my experience, perceived issues with the kernel are actually other issues...thus the request for an example project that exhibits the issue. For instance, you mention printing...are you using printf to the IDE console? Do you know that the target is stopped and CCS reads the contents of the CIO buffer when it is full or a '\n' is written...thus impacting real-time behavior? It's things like this I want to look for.

    The long-term execution of a task is the same independent of whether it was created via Task_create or Task_construct (assuming the parameters are the same). The short-term differences are very small (i.e. allocation of a Task object).
  • ToddMullanix said:
    For instance, you mention printing...are you using printf to the IDE console?

    No - print is via a USB0 high speed (480 Mbps) connection to a Windows host computer bulk device client.

    ToddMullanix said:
    The long term execution of a task is the same independent of whether it was created via Task_create or Task_construct (assuming the parameters are the same).

    The only major difference visibly noticeable is that no priority assignment is made with Task_construct; see below the two ways of configuration.

     Task_Params_init(&taskParams);
     taskParams.stackSize = 2048;
     taskParams.stack = &task1Stack;
     Task_construct(&task1Struct, (Task_FuncPtr)PrintAllData_TaskFxn, &taskParams, NULL);
    
    
    var task1Params = new Task.Params();
    task1Params.instance.name = "TaskPrintStats";
    task1Params.priority = 6;
    task1Params.vitalTaskFlag = false;
    task1Params.arg0 = 1;
    Program.global.TaskPrintStats = Task.create("&PrintAllData_TaskFxn", task1Params);

  • It may not be the kernel specifically responsible for said delay, since the red/blue lines are the idle logger respectively. The Task.create() method shaved 5% off CPU load usage. Debugging the load even lower is proving most difficult!
  • ToddMullanix said:
    Do you know the target is stopped and CCS reads the contents of the CIO buffer when it is full or a '\n' is writing

    And CCSv7.3 is running on a 2.8 GHz quad-core Intel processor and pulls 73 memory threads at 2% load during USB prints and IoT cloud writes. My understanding is that CCS7 is multi-threaded and multi-core aware; it's hard to imagine it impacting the RTOS in the way being graphed.

  • BP101 said:
    The only major difference visibly noticeable is no priority assignment with Task_construct, see below the two ways of configuration. Task_Params_init(&taskParams);

    And that priority assignment missing from (Task_construct) was enough of a difference to cause the kernel load to jump up and down.

    Yet it has not stopped the long Task_sleep() duration in one task function from affecting the processing loops in the other task printing to USB0. As long as the task with the very long sleep duration is not running, the USB0 printing task processes its loops very quickly.

    That seems to indicate a very long Task_sleep(duration) within a while(1) task loop can affect the overall kernel task load and the speed of other tasks with shorter Task_sleep() durations. This method of throttling the execution of any task within the kernel's task handler is archaic, to say the least.

    Perhaps a better task-throttling method would include a LOAD execution (wait states) parameter rather than using Task_sleep().

  • There seems to be a bottleneck in RTOS Analyzer used with UARTLogger to transfer data into CCS debug at high data rates, e.g. 115200 baud. More so with Hwi logging for Execution Graphs, when many (6) Hwi stop/start interrupts choke the RTOS debug interface. Numerous Hwi events are dropped in the Live Session view no matter the UART transfer buffer space (64 KB), even when excessive Clock_tick Diags and other kernel modules' Diags logging events are turned off at runtime.

    The Execution line graphs hardly show any plotting, and CCS 7.3 starts bogging down and gets testy responding to GUI control, likely from UART buffer choking somewhere in the Java code. Sadly, we use USB0/EMAC0 for other parts of the application, and SPI is not even a Logger transport selection.
  • When Hwi logging is on, there can be quite a few Log records. If you have the memory, you can use LoggerStopMode. It writes the records to an internal buffer in RAM. You can set it to wrap, or to stop logging when full, to catch either the last things or the first things. Of course, you need CCS connected via JTAG and the target stopped to get the records into System Analyzer.

    Additionally, you can enable/disable the logging at runtime, so you can enable logging via some trigger mechanism and then disable it if necessary. The API to do this is Diags_setMask.
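A sketch of the configuration Todd describes, assuming the standard UIA module paths (the project's actual .cfg is not shown in this thread; verify names against your installed UIA version):

```javascript
/* Hedged .cfg sketch: buffer Log records in RAM and read them over JTAG
   (LoggerStopMode) instead of streaming them over UART at runtime. */
var LoggingSetup = xdc.useModule('ti.uia.sysbios.LoggingSetup');

LoggingSetup.loggerType = LoggingSetup.LoggerType_STOPMODE;
LoggingSetup.loadLoggerSize = 0x10000;  /* e.g. a 64 KB buffer, if RAM permits */
```

At runtime, per Todd's note, Diags_setMask() takes a control string of the form module name plus '+' or '-' and flag letters (e.g. "ti.sysbios.knl.Clock-F" to turn a module's flow logging off), so logging can be gated from a trigger in the application.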

    Todd