Hello,
I have a question about the values returned by the Timestamp_get32() and Timestamp_get64() functions.
I'm currently using these functions to measure the performance of a multicore program running on the C6678 EVM. My code is as follows:
    static int i = 0;
    static unsigned int time = 0;
    unsigned int now;

    if (i == 0) {
        now = Timestamp_get32();
        unsigned int delta = (now - time) / 10;    /* average ticks per call over the last 10 calls */
        float fps = 1000000000.0f / (float)delta;  /* CPU runs at 1 GHz, so ticks == nanoseconds */
        System_printf("fps: %f\n", fps);
        time = Timestamp_get32();
    }
    i = (i + 1) % 10;
This code is placed in a function that is regularly called on the first core (CORE0).
According to the printfs, the measured rate is 24 fps in the Debug configuration. However, if I time the interval between two printfs with my watch, I get 4 seconds, which corresponds to 2.5 fps.
I replaced the Timestamp_get32() call with a call to Timestamp_get64() to confirm that it was not an overflow issue.
I also tried changing the number of iterations between two printfs from 10 to 100, but got the same discrepancy (i.e. 40 seconds on my watch versus 24 fps measured by Timestamp_get64()).
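For reference, the 64-bit read looks roughly like this (a minimal sketch, not my exact code; the measure64() wrapper and the merge of the Types_Timestamp64 hi/lo words into an unsigned long long are illustrative):

    #include <xdc/runtime/System.h>
    #include <xdc/runtime/Timestamp.h>
    #include <xdc/runtime/Types.h>

    static unsigned long long prev64 = 0;

    void measure64(void)
    {
        Types_Timestamp64 ts;
        unsigned long long now, delta;

        Timestamp_get64(&ts);                          /* fills the hi/lo 32-bit halves */
        now = ((unsigned long long)ts.hi << 32) | ts.lo;
        delta = now - prev64;                          /* ticks since the previous call */
        prev64 = now;

        /* keep the print to 32 bits to stay within System_printf's basic format support */
        System_printf("delta ticks (low 32 bits): %u\n", (unsigned int)delta);
    }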
My question is: why is there such a big difference between the two values?
Is the timer paused while the program is blocked in "waiting" functions such as Semaphore_pend()?
More information about my program and test environment:
My program involves multicore communication using the IPC Notify module.
I'm using CCS v5.2.1
This is the only call to printf in my whole program.
I'm using the default Timestamp module (var Timestamp = xdc.useModule('xdc.runtime.Timestamp');)
My EVM is running at 1000 MHz (I checked with the Timestamp_getFreq() function)
I also checked the CPU frequency by calling Task_sleep(1000) between two calls to Timestamp_get64() and got ~1,000,000,000 ticks.
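That check looked roughly like this (a sketch; checkFreq() is just a throwaway test function, and Task_sleep(1000) assumes the default 1 ms Clock tick):

    #include <xdc/runtime/System.h>
    #include <xdc/runtime/Timestamp.h>
    #include <xdc/runtime/Types.h>
    #include <ti/sysbios/knl/Task.h>

    void checkFreq(void)
    {
        Types_FreqHz freq;
        Types_Timestamp64 a, b;
        unsigned long long t0, t1;

        Timestamp_getFreq(&freq);                      /* reports 1 GHz on my EVM */
        System_printf("Timestamp freq: hi=%u lo=%u\n", freq.hi, freq.lo);

        Timestamp_get64(&a);
        Task_sleep(1000);                              /* ~1 second with a 1 ms tick */
        Timestamp_get64(&b);

        t0 = ((unsigned long long)a.hi << 32) | a.lo;
        t1 = ((unsigned long long)b.hi << 32) | b.lo;

        /* the difference comes out around 1,000,000,000 ticks */
        System_printf("ticks in ~1 s (low 32 bits): %u\n", (unsigned int)(t1 - t0));
    }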
Regards,
Karol