Part Number: XTCIEVMK2LX
Hello, experts.
I'm testing example code from FFTLIB, which contains the following lines to measure the time taken to execute an FFT:
...
clock_t t_start, t_stop, t_overhead, t_opt;
...
/* Measure the overhead of reading the time stamp counter itself */
t_start    = _itoll(TSCH, TSCL);
t_stop     = _itoll(TSCH, TSCL);
t_overhead = t_stop - t_start;

plan_fxns.ecpyRequest = NULL;
plan_fxns.ecpyRelease = NULL;
p = fft_sp_plan_1d_r2c (N, FFT_DIRECT, plan_fxns);

/* Time only the fft_execute() call */
t_start = _itoll(TSCH, TSCL);
fft_execute (p);
t_stop  = _itoll(TSCH, TSCL);
fft_destroy_plan (p);

t_opt = (t_stop - t_start) - t_overhead;
...
So t_opt is the elapsed time of fft_execute() expressed as a cycle count, and I want to convert that count to seconds.
As far as I know, the clock speed of the DSP cores on this SoC is 1 GHz or 1.2 GHz (which one is correct?). My understanding is that dividing the cycle count by the clock frequency gives the time in seconds. Is that right?
For example, if the cycle count is 10,000,000, then the time is 0.01 s (10 ms) at a 1 GHz clock.
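To make sure I understand, here is a minimal sketch of the conversion I have in mind. The 1 GHz value and the helper name cycles_to_seconds are placeholders of mine, not anything from FFTLIB; the actual core frequency would have to match how the DSP is really clocked on my board:

#include <stdint.h>
#include <stdio.h>

#define DSP_CORE_FREQ_HZ 1000000000ULL   /* assumption: 1 GHz DSP core clock */

/* TSCL/TSCH increment once per CPU cycle, so seconds = cycles / f_cpu */
static double cycles_to_seconds(uint64_t cycles)
{
    return (double)cycles / (double)DSP_CORE_FREQ_HZ;
}

...
printf("fft_execute: %llu cycles = %f s\n",
       (unsigned long long)t_opt, cycles_to_seconds((uint64_t)t_opt));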
I know this is very basic and easy math, but I want to be sure. I would also appreciate suggestions for another good way to measure the time consumed by a function or a code block running on the DSP cores.
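In case it helps clarify what I'm after, this is the kind of reusable wrapper I could write myself around the same TSCH/TSCL mechanism; the TIME_START/TIME_STOP names are mine and not from FFTLIB or any TI library:

#include <stdint.h>
#include <c6x.h>   /* TSCL, TSCH and _itoll on the C6000 compiler */

static uint64_t t0;

/* Record the current cycle count */
#define TIME_START()  (t0 = _itoll(TSCH, TSCL))
/* Return cycles elapsed since the matching TIME_START() */
#define TIME_STOP()   (_itoll(TSCH, TSCL) - t0)

/* Note: the time stamp counter only starts after a write to TSCL,
   e.g. "TSCL = 0;" once early in main(), before the first TIME_START(). */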
Thank you!