This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

CC2538: Detecting CPU Utilization

Part Number: CC2538
Other Parts Discussed in Thread: Z-STACK

Hi,

I would like to make sure that the CC2538 is not being CPU saturated with an application running on top of Z-Stack. What methods can I use to ascertain how much idle time is available or time the CPU is utilized?

I thought about an OSAL timer with a short interval and measuring the jitter. Is there a better way? What does TI do here?

  • Hi,

    I did an analysis like this a while back for Z-Stack 1.2.2a; I will copy it below. Numbers for Z-Stack 3.0.1 should be very similar, since the main differences between these two stacks are in the NWK commissioning procedure, which is a one-time process for a node joining a new network.


    Here are the results of the CPU load tests I ran on the 1.2.2a stack. My setup was 2 SmartRF06 + CC2538EM.

    Test 1: (Measuring ZC SampleLight, ZED SampleSwitch sending continuous 100ms toggle)
    osal_systemClock: 136902
    cpuUsageAccumulator: 20255
    % utilized: 14.8%

    Test 2: (Measuring ZC SampleLight, ZED SampleSwitch sending continuous 30ms toggle)
    osal_systemClock: 94648
    cpuUsageAccumulator: 41031
    % utilized: 43.3%

    Test 3: (Measuring ZC SampleSwitch, ZC SampleSwitch sending continuous 30ms toggle to ZR SampleLight)
    osal_systemClock: 72448
    cpuUsageAccumulator: 2633
    % utilized: 3.6%

    From these results, we can see that a device that is receiving commands utilizes much more of the CPU than a device that is transmitting commands. Keep in mind that sending toggle commands every 100ms or 30ms is probably unrealistically fast for a real application, but the point of this test was to see how the CPU performs under a heavy load.

    Code modifications made for the test:

    I added the following code to OSAL.c in the osal_run_system() loop (lines marked with + are my additions):

    + timeStamp1 = osal_GetSystemClock();

      do {
        if (tasksEvents[idx])  // Task is highest priority that is ready.
        {
          break;
        }
      } while (++idx < tasksCnt);

      if (idx < tasksCnt)
      {
        uint16 events;
        halIntState_t intState;

        HAL_ENTER_CRITICAL_SECTION(intState);
        events = tasksEvents[idx];
        tasksEvents[idx] = 0;  // Clear the Events for this task.
        HAL_EXIT_CRITICAL_SECTION(intState);

        activeTaskID = idx;
        events = (tasksArr[idx])( idx, events );
        activeTaskID = TASK_NO_TASK;

        HAL_ENTER_CRITICAL_SECTION(intState);
        tasksEvents[idx] |= events;  // Add back unprocessed events to the current task.
        HAL_EXIT_CRITICAL_SECTION(intState);

    +   // CPU load testing
    +   timeStamp2 = osal_GetSystemClock();
    +   cpuUsageAccumulator += timeStamp2 - timeStamp1;
      }

    In Z-Stack, osal_run_system() is the function that is executed in the system's infinite loop, and each iteration through this function will execute the next pending task, if there is a pending task. Basically, if there is not a pending task, we do not enter the if() statement above, and the CPU will not be utilized by the stack during that iteration. If there is a pending task, we enter the if() statement and the next pending task is executed by (tasksArr[idx])( idx, events ).

    To measure the CPU usage by the stack, I take a "timestamp" (osal_GetSystemClock() returns the current system tick in ms) each time I enter the osal_run_system() loop before any pending task is executed, and if there is a pending task, I take another timestamp after it has finished. The difference between these two timestamps is the amount of system time that the CPU was used to execute the stack task during this iteration, and I keep track of all this time by adding it to the cpuUsageAccumulator variable.

    Let me know if you have any other questions.

  • Thanks a lot, Jason. That's exactly what I needed.

    I'm running 3.0 ZNP with a lot of serial communication back and forth (large network, heavy application) and security enabled.
  • JasonB,

    If I am correct this would exclude time spent in interrupts. Correct?

  • Currently getting a value of 1.23% utilization with your supplied code on Z-Stack 3.0 ZNP on a lightly loaded network (device joined, dumped, and being pinged with a ZCL Read).

    This seems extremely low. Thoughts?

  • 1. I guess if the OSAL clock is not being incremented inside an ISR, then yes, it excludes time spent in ISRs.

    2. That sounds right. Z-Stack won't be doing much unless you are continuously slamming your ZNP device with a bunch of incoming ZCL commands that it must parse very rapidly while sending notifications to the host application; otherwise the device will just sit in RX, waiting for an asynchronous message.