RTOS/TMS320C6678: TSCL and time.h

Part Number: TMS320C6678
Other Parts Discussed in Thread: SYSBIOS

Tool/software: TI-RTOS

Hi,

I was roughly measuring the execution time of a function on one core of my C6678, and I am getting different results from two ways of measuring it. To do so, I included the following in my .cfg file:


var BIOS = xdc.useModule('ti.sysbios.BIOS');
var Clock = xdc.useModule('ti.sysbios.knl.Clock');
var Hwi = xdc.useModule('ti.sysbios.hal.Hwi');
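
If needed, the Clock tick period could also be set explicitly in the same .cfg file; a sketch (Clock_tickPeriod is in microseconds and defaults to 1000):

var Clock = xdc.useModule('ti.sysbios.knl.Clock');
// One Clock tick every 1000 us (1 ms); this is also the SYS/BIOS default,
// and it is the value the Clock_tickPeriod constant reports in C code.
Clock.tickPeriod = 1000;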

In my C-file I included:


#include <ti/sysbios/knl/Clock.h>
#include <pthread.h>
#include <time.h>
#include <sys/time.h>
#include <c6x.h> /* for the TSCL/TSCH control registers */

The function is embedded in a task which is called after BIOS_start. I am measuring the time in the following two ways:

    struct timespec tic, toc;
    unsigned int t1, t2;
    double elapsed_time, elapsed_tscl_time;

    clock_gettime(CLOCK_MONOTONIC, &tic);
    t1 = TSCL;
    // long function call
    t2 = TSCL;
    clock_gettime(CLOCK_MONOTONIC, &toc);
    elapsed_time = calc_time(&tic, &toc);
    elapsed_tscl_time = ((double)t2 - (double)t1) / 1000000000.0; // TSCL counts CPU cycles; core running at 1 GHz

where calc_time() is the function pasted at the bottom of this post.

I read at software-dl.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/sysbios/6_21_01_16/exports/docs/docs/cdoc/ti/sysbios/knl/Clock.html#tick that I should use the constant Clock_tickPeriod to calculate the elapsed time. That page says that Clock_tickPeriod is defined in MICROseconds.

In another post on this forum, I read that TSCL returns the number of elapsed CPU cycles, so I have to divide by the processor frequency of 1 GHz to obtain the elapsed time.
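
In other words, the conversion amounts to the following (a sketch, assuming the core really does run at 1 GHz; the unsigned subtraction also stays correct across a single wrap of the 32-bit counter):

    unsigned int cycles = t2 - t1;           // cycle delta; unsigned wrap-around is well defined
    double seconds = (double)cycles / 1.0e9; // divide cycle count by 1 GHz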

The output of my code is the following:

TSCL elapsed time = 0.89283968 seconds
clock_gettime elapsed time: 0.000893 seconds

The results from TSCL and clock_gettime seem to differ by a factor of 1000. I measured 10 function runs and compared against my watch (~8 seconds); the TSCL measurement appears to be correct. From that I conclude that either Clock_tickPeriod is defined in MILLIseconds, or the function calc_time() below does something wrong. Can someone confirm my observation? Thank you very much.

calc_time:

float calc_time(struct timespec *tic, struct timespec *toc)
{
    struct timespec temp;
    if ((toc->tv_nsec - tic->tv_nsec) < 0) {
      temp.tv_sec  = toc->tv_sec - tic->tv_sec - 1;
      temp.tv_nsec = 1e9 + toc->tv_nsec - tic->tv_nsec;
    } else {
      temp.tv_sec  = toc->tv_sec - tic->tv_sec;
      temp.tv_nsec = toc->tv_nsec - tic->tv_nsec;
    }
    // Clock_tickPeriod in microseconds
    return ((float)temp.tv_sec + (float)temp.tv_nsec / 1e9)*((float)Clock_tickPeriod)*1.0e-6;
}

  • Please refer to the guidance regarding the Clock module provided by our TI-RTOS team here:
    e2e.ti.com/.../677361
    e2e.ti.com/.../413946

    Your understanding regarding TSCL is correct: the returned value is a DSP cycle count. However, you need to confirm that the GEL file or your initialization code is actually configuring the DSP to run at 1 GHz. Are you using the GEL file to configure the DSP clocks in CCS? When you connect to core 0, can you confirm that you see logs in the CCS console indicating that the clocks and DDR memory are configured?

    Clock tick generation depends on your configuration. Do you have BIOS.cpuFreq in the .cfg file, so that the CPU frequency is set up to match your DSP clock configuration? This is required for generating accurate clock ticks, as described here:
    processors.wiki.ti.com/.../Processor_SDK_RTOS:_TI_RTOS_Tips_And_Tricks
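
    As a minimal sketch (assuming the DSP really does run at 1 GHz), the .cfg entries would look like this:

        var BIOS = xdc.useModule('ti.sysbios.BIOS');
        // BIOS.cpuFreq is a 64-bit frequency in Hz, split into 32-bit halves.
        // 1 GHz = 1000000000 Hz fits entirely in the low word.
        BIOS.cpuFreq.hi = 0;
        BIOS.cpuFreq.lo = 1000000000;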

    Regards,
    Rahul
  • Hi Rahul,

    Thank you for your quick reply. I am not actively configuring any clocks, but the GEL file seems to initialize the DSP at 1 GHz:

    C66xx_0: GEL Output: DSP core #0
    C66xx_0: GEL Output: C6678L GEL file Ver is 2.00500011
    C66xx_0: GEL Output: Global Default Setup...
    C66xx_0: GEL Output: Setup Cache...
    C66xx_0: GEL Output: L1P = 32K   
    C66xx_0: GEL Output: L1D = 32K   
    C66xx_0: GEL Output: L2 = ALL SRAM   
    C66xx_0: GEL Output: Setup Cache... Done.
    C66xx_0: GEL Output: Main PLL (PLL1) Setup ...
    C66xx_0: GEL Output: PLL not in Bypass, Enable BYPASS in the PLL Controller...
    C66xx_0: GEL Output: PLL1 Setup for DSP @ 1000.0 MHz.
    C66xx_0: GEL Output:            SYSCLK2 = 333.333344 MHz, SYSCLK5 = 200.0 MHz.
    C66xx_0: GEL Output:            SYSCLK8 = 15.625 MHz.
    C66xx_0: GEL Output: PLL1 Setup... Done.

    (DDR3 successfully initialized afterwards)

    I guess that "PLL1 Setup for DSP @ 1000.0 MHz" is the relevant line and that I therefore have to use 1 GHz for the TSCL calculation? So cycle count / 1 GHz should give me approximately the elapsed time in seconds?

    For the other part, I added the line "BIOS.cpuFreq.lo = 1000000000;" to my .cfg file, as described in the link you sent me. The results still differ by a factor of 1000. I will just note that and continue, if you can confirm that the above is correct.

    Thanks and best wishes.

  • I created the attached project, which runs on a TMDSEVM6678 and measures the duration of a test using both clock_gettime(CLOCK_MONOTONIC) and reading TSCL/TSCH. The results show agreement between the two methods:

    [C66xx_0] [Sat Apr 27 17:13:39] Test starting (Clock_tickPeriod=1000)
    [Sat Apr 27 17:14:40] Measured time from CLOCK_MONOTONIC = 60.359997 seconds
    [Sat Apr 27 17:14:40] Measured time from CPU ticks = 60.359417 seconds

    The timestamps at the start of the console output are from the PC (with a resolution of one second), which provides a sanity check that the times measured on the target are correct.

    Since the GEL script reports that the DSP is set to run at 1000 MHz, BIOS.cpuFreq was set to 1000000000.

    The TSCL/TSCH measurements are taken as a 64-bit timestamp to allow measurements of more than 2^32 CPU cycles, using the READ_TIMER macro found in the thread "Order of reads for _itoll intrinsic".
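
    A minimal sketch of such a 64-bit read (the exact READ_TIMER macro is in the linked thread; this version assumes the TI compiler's c6x.h, which declares the TSCL/TSCH control registers and the _itoll() intrinsic):

        #include <c6x.h>    /* TSCL/TSCH control registers, _itoll() */
        #include <stdint.h>

        static inline uint64_t read_timer64(void)
        {
            uint32_t lo = TSCL; /* reading TSCL latches a coherent snapshot into TSCH */
            uint32_t hi = TSCH;
            return _itoll(hi, lo);
        }

    The two reads are sequenced explicitly rather than written as _itoll(TSCH, TSCL), because the order in which function-call arguments are evaluated is unspecified.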

    Idris Kempf said:
    The results from TSCL and clock_gettime seem to be different by a factor of 1000.

    I think the problem is the following from your calc_time function:

        return ((float)temp.tv_sec + (float)temp.tv_nsec / 1e9)*((float)Clock_tickPeriod)*1.0e-6;

    clock_gettime(CLOCK_MONOTONIC) already scales the returned timestamp into seconds and nanoseconds components, so no further scaling is needed. Since Clock_tickPeriod is 1000, the *((float)Clock_tickPeriod)*1.0e-6 part of the expression multiplies the correct result by 1000 * 1e-6 = 1/1000, which is exactly the factor by which the reported clock_gettime result is off. I.e. try:

    return ((float)temp.tv_sec + (float)temp.tv_nsec / 1e9);
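
    Put together, the corrected calc_time is your original function with only that final scaling removed:

        float calc_time(struct timespec *tic, struct timespec *toc)
        {
            struct timespec temp;
            /* borrow from the seconds field when the nanoseconds difference is negative */
            if ((toc->tv_nsec - tic->tv_nsec) < 0) {
              temp.tv_sec  = toc->tv_sec - tic->tv_sec - 1;
              temp.tv_nsec = 1000000000L + toc->tv_nsec - tic->tv_nsec;
            } else {
              temp.tv_sec  = toc->tv_sec - tic->tv_sec;
              temp.tv_nsec = toc->tv_nsec - tic->tv_nsec;
            }
            /* timespec values are already seconds/nanoseconds; no Clock_tickPeriod scaling */
            return (float)temp.tv_sec + (float)temp.tv_nsec / 1.0e9f;
        }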

    TMS320C6678_clock_gettime.zip

  • Amazing! Thank you very much for the example code and for pointing out the mistake in my code.

    Best wishes,

    Idris