
Incoherent measurements with System Analyzer

Other Parts Discussed in Thread: SYSBIOS

Hi,

 

We are evaluating TI’s measurement tools, specifically System Analyzer (UIA v1.01.01.14). We are using CCS v5.3.0.00090, SYS/BIOS v6.34.02.18 and XDCTOOLS v3.24.05.48 on a C6748 DSP with an STM XDS560v2 JTAG probe. We implemented an application that contains one HWI, one SWI and 3 tasks with different priorities.

To begin with, we instrumented the code with a switch hook function that toggles a GPIO each time a task switch occurs. From this we obtained an execution graph of our application. In this case, the idle task load is about 50%.
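
The hook is along the following lines (a simplified sketch rather than our exact code: gpioSetPin()/gpioClearPin() stand in for the board's GPIO accesses, and storing each task's pin number with Task_setEnv() is just one possible way to map tasks to pins):

    /* Switch hook registered in the .cfg with
     * Task.addHookSet({ switchFxn: '&myTaskSwitchHook' }); */
    #include <xdc/std.h>
    #include <ti/sysbios/knl/Task.h>

    extern Void gpioSetPin(UInt pin);     /* placeholder board-specific helpers */
    extern Void gpioClearPin(UInt pin);

    Void myTaskSwitchHook(Task_Handle prev, Task_Handle next)
    {
        /* prev is NULL on the very first switch after BIOS_start() */
        if (prev != NULL) {
            gpioClearPin((UInt)Task_getEnv(prev));  /* outgoing task: pin low  */
        }
        gpioSetPin((UInt)Task_getEnv(next));        /* incoming task: pin high */
    }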

Then, in order to validate our measurements, we used System Analyzer to instrument the code. We obtained totally different and incoherent values; for example, the idle task load is now 0%. Moreover, the task scheduling no longer matches what we observed with the GPIOs.

 

We assume that System Analyzer is intrusive, but we cannot quantify how intrusive it is. How can we improve our measurements with System Analyzer? How should we configure it so that the results stay as close as possible to reality?

 

Thanks !

Clement

  • Clement,

    There are two aspects to System Analyzer/UIA intrusiveness: 1) the overhead of the API call that logs an event, and 2) the overhead of transporting the events.

    You can minimize the logging overhead by reducing the amount of data you're logging, e.g. turn off any of the HWI, SWI, SEM, TaskSwitch, etc. logging that is not required.

    There is no transport overhead when using JTAG (run mode or stop mode); Ethernet has an overhead.

    Can you explain how you're calculating the idle task load with the GPIOs? System Analyzer uses the CPU load calculated by BIOS, which is not exactly 100% minus the idle task time; it takes into consideration other operations that are executed within the idle task.

    Regards,

    Imtaz.

     

  • Hi

    The amount of logged data doesn't seem to be the issue. Our application only consists of one HWI posting a SWI, which in turn posts a message to a mailbox. A task pends on this mailbox, and when the pend returns TRUE the task just runs through a for loop to kill some time; a switch hook function toggles a GPIO for each task.

    With the GPIOs we calculate the load this way: we take several periods (defined by the HWI, since it is triggered by the timer). Over this interval we measure the time a task is executing, then compute (time_of_task / total_period_time) * 100.
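
    To illustrate, the same arithmetic done in software would look roughly like this (purely illustrative, we actually read the times off the GPIO traces; the function names and the idea of accumulating time in the switch hook are made up for this sketch):

        #include <xdc/std.h>
        #include <xdc/runtime/Timestamp.h>

        static UInt32 idleStart = 0;   /* timestamp when the idle task was switched in */
        static UInt32 idleTime  = 0;   /* cycles accumulated in the idle task          */

        /* Called from the switch hook when the idle task is switched in / out. */
        Void idleEntered(Void) { idleStart = Timestamp_get32(); }
        Void idleExited(Void)  { idleTime += Timestamp_get32() - idleStart; }

        /* Called once per measurement window (several HWI periods):
         * load = (time_of_task / total_period_time) * 100 */
        UInt32 idleLoadPercent(UInt32 windowCycles)
        {
            UInt32 load = (UInt32)(((UInt64)idleTime * 100) / windowCycles);
            idleTime = 0;   /* reset for the next window */
            return load;
        }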

    But when we calculate the load of the idle task with this method, the results we obtain are larger than those obtained with System Analyzer. For example, System Analyzer gives us 50% idle while we calculate 65%.

    Maybe our method is not the proper one, but it seems logical to us. Do you think we missed something? We don't see any other explanation for the moment.

    Regards

    Clement

  • Clement,

    Can you comment on which view in System Analyzer you're using to determine CPU load? There are two places:

    1. CPU Load Graph
    2. Task Load Graph

    The Task Load Graph shows a more accurate representation of idle time, but requires Task, HWI and SWI logging to be turned on (more overhead). If HWI and SWI logging are not turned on, then idle time will include the time spent in HWIs and SWIs while the idle task is executing.

    The CPU Load Graph shows an estimation of the CPU load. This is calculated based on a formula and is minimally intrusive.
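
    For a cross-check, the numbers behind these views can also be read at runtime from the BIOS Load module. A minimal sketch (assuming the ti.sysbios.utils.Load module is enabled in your configuration; please check the SYS/BIOS 6.34 cdoc for the exact prototypes):

        #include <xdc/std.h>
        #include <xdc/runtime/System.h>
        #include <ti/sysbios/knl/Task.h>
        #include <ti/sysbios/utils/Load.h>

        /* Print the CPU load and one task's load for the last sample window. */
        Void printLoads(Task_Handle task)
        {
            Load_Stat stat;

            System_printf("CPU load: %d%%\n", (Int)Load_getCPULoad());

            if (Load_getTaskLoad(task, &stat)) {
                System_printf("Task load: %d%%\n", (Int)Load_calculateLoad(&stat));
            }
        }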

    Is your calculation taking into consideration the time spent in HWIs and SWIs while the idle task is running? If not, then this may account for the difference.

    Regards,

    Imtaz.

     

  • At the beginning we were using the Task Load view, but because the results were too far from those we obtained by measuring with the GPIOs, we used the Execution Graph instead. We measured the time of the idle task over several periods and then calculated the average time to deduce the load, taking into consideration the scheduling time and the preemptions due to the HWI and SWI.

    Our latest results show that the faster the tasks execute, the bigger the difference between the GPIO measurements and the System Analyzer results. I have attached our test code (main.c) because I think it will be easier for you to follow what I mean.

    The only parameters we changed were TMR->PRD, TMR->RLD and the cpt and cpt1 counters. We use the timer to set the period of the HWI and the counters to simulate a load on the tasks.

    We measured for 3 different periods: 20 ms, 500 µs and 200 µs. For each period, we measured 3 idle loads:

    1) GPIOs only.

    2) GPIO measurements with System Analyzer enabled.

    3) Execution Graph.

    We observed that for a 20 ms period the difference between the 3 measurements was insignificant. When the period was 500 µs, we noticed a 6% load difference between measurements 1) and 2), and 2% between measurements 2) and 3), which seems to be the scheduling load. Finally, we observed a 10% load difference between measurements 1) and 2) for a 200 µs period.

    In our opinion, System Analyzer is not an appropriate tool for systems with periods under 500 µs, and our system runs at 200 µs. We still don't know whether we missed a point or not, but these results speak for themselves. We stopped using the Task Load view and maybe we should go back to it, but to us the data used to build the Execution Graph and the Task Load Graph are basically the same.

    We are now trying to find a method to measure task loads precisely (because for the moment we don't know which measurements are the good ones: the GPIOs or System Analyzer) with an error that can be quantified and that matches our specification (no more than 5%). Our last resort will be to move our application to DSP/BIOS 5 and use the STS module to calculate the time spent in the idle task.

    Regards

    Clement.

  • Clement,

    I have asked the experts on the BIOS Load module to follow up on this.

    Regards,

    Imtaz.

  • Clement,

    I believe part of the problem may be that your test case is illegally calling Mailbox_post(), a blocking API, from within the interrupt handler.

    For larger interrupt periods, the application will probably behave correctly because the background thread is able to keep up with processing all the mail being posted. Consequently the Mailbox_post() call never needs to block to allow the reader to catch up.

    For smaller interrupt periods, I suspect the reader gets behind enough that the interrupt handler's Mailbox_post() call attempts to block. This will result in corruption of the Task scheduler's internal data structures and lead to total application misbehavior.

    Can you try using Semaphore_post() in the interrupt handler and Semaphore_pend() in the task and see if this improves the correlation between the load measurement schemes?
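
    Something along these lines (just a sketch to illustrate the change; the handle name, loop count and object creation are placeholders, not taken from your main.c):

        #include <xdc/std.h>
        #include <ti/sysbios/BIOS.h>
        #include <ti/sysbios/knl/Semaphore.h>

        extern Semaphore_Handle mySem;   /* created in main() or in the .cfg */

        /* Timer interrupt handler: Semaphore_post() never blocks, so it is
         * safe to call from HWI context (unlike a Mailbox_post() that may
         * block when the mailbox is full). */
        Void myTimerHwi(UArg arg)
        {
            Semaphore_post(mySem);
        }

        /* Task: block on the semaphore, then do the simulated processing. */
        Void myTask(UArg arg0, UArg arg1)
        {
            volatile UInt32 cpt;

            while (TRUE) {
                Semaphore_pend(mySem, BIOS_WAIT_FOREVER);
                for (cpt = 0; cpt < 10000; cpt++) {
                    /* busy loop standing in for the real processing */
                }
            }
        }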

    Alan

  • Hi Alan,

    I used the Semaphore APIs instead of the Mailbox APIs and the result is quite clear. Our ISR only lasts 4.60 µs when using Semaphore_post, but when calling Mailbox_post the execution time is 12 µs, so I think this was the cause of our problem. When our system is heavily loaded and running fast, the "blocked" interrupts must corrupt the entire execution and falsify our time measurements.

    Thank you for your help !

    Clement