
Convert CLK_gethtime to CPU cycles

In the statistics box from DSP/BIOS v5.42, I see an average value for various HWI and SWI routines

(KNL_swi, PRD_swi, etc.). Is that the number of CPU cycles, or the number of times the high-resolution timer is called?

When I use STS_set() and STS_delta() to capture the CLK_gethtime() elapsed count and assign it to an STS object,

what's the proper way to convert that to actual CPU cycles? Inside the STS object, I see it can be configured for the A * x format used on the host. How do I set the A value used on the host? It looks greyed out. It looks like CLK_cpuCyclesPerHtime() could be used for A, such that CPU cycles = CLK_gethtime() * CLK_cpuCyclesPerHtime().

Finally, what is the best method in DSP/BIOS to measure elapsed time between ISR calls and associate it with an STS object?

Thanks,

Chris

  • I am not an expert on DSP/BIOS as it is a legacy product. While I dig in more, here is something regarding the units.

This is from the API documentation for DSP/BIOS. See section 2.28, the STS module, for more information.

    Vikram

  • But the API document referenced is for the C6000 series, whereas I'm only interested in what happens on a C5505 platform.

  • Hi Chris,

    Oops. I meant to send this link.

    I was reading up a bit on the question you asked. CLK_gethtime() returns the high-resolution timer value, so since that is the value you are setting, the average shown in the statistics data is in high-resolution timer ticks.

    Instead, if you would like to view CPU cycles in the statistics data view, you could do the following:

    STS_set(&sts, (CLK_gethtime() * CLK_cpuCyclesPerHtime()));
    /* ... processing being measured ... */
    STS_delta(&sts, (CLK_gethtime() * CLK_cpuCyclesPerHtime()));

    You can find more details in the application programming guide link above.
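    On your other question, timing between ISR calls: one pattern is to call STS_delta() with the current CLK_gethtime() value at ISR entry and then STS_set() with the same timestamp, so each delta measures the interval since the previous entry. Below is a sketch; the DSP/BIOS types and calls are stubbed so it builds standalone. On target you would instead include <std.h>, <sts.h>, and <clk.h>, and use an STS object created in the configuration tool.

    ```c
    #include <stdio.h>

    /* --- Stubs standing in for DSP/BIOS (sketch only). On target, these
     * come from <sts.h> and <clk.h>. --- */
    typedef struct { long previous; long total; unsigned count; } STS_Obj;
    static unsigned long fakeHtime = 0;                     /* stand-in timer */
    static unsigned long CLK_gethtime(void) { return fakeHtime; }
    static void STS_set(STS_Obj *sts, long value) { sts->previous = value; }
    static void STS_delta(STS_Obj *sts, long value)
    {
        sts->total += value - sts->previous;  /* accumulate this interval */
        sts->count++;
    }

    STS_Obj isrPeriod;  /* hypothetical STS object for ISR-to-ISR time */

    /* Pattern for measuring elapsed time BETWEEN ISR invocations: take the
     * delta against the timestamp saved on the PREVIOUS entry, then save
     * the current timestamp for the next time around. */
    void myIsr(void)
    {
        unsigned long now = CLK_gethtime();
        STS_delta(&isrPeriod, (long)now);  /* now - previous entry time */
        STS_set(&isrPeriod, (long)now);    /* remember this entry's time */
        /* ... actual ISR work ... */
    }

    int main(void)
    {
        STS_set(&isrPeriod, (long)CLK_gethtime()); /* prime the first delta */
        fakeHtime = 100;  myIsr();                 /* 100 ticks later */
        fakeHtime = 250;  myIsr();                 /* 150 ticks later */
        printf("intervals: %u, average: %ld ticks\n",
               isrPeriod.count, isrPeriod.total / (long)isrPeriod.count);
        return 0;
    }
    ```

    Note the first delta is only meaningful after the object has been primed once with STS_set(); the average reported is in high-resolution timer ticks unless you scale by CLK_cpuCyclesPerHtime() as above.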

    Hope that helps.

    Vikram

  • I had tried that earlier. But it is puzzling why, when using CLK_cpuCyclesPerHtime(), my CPU load and

    average cycle count both go up by about 10x. If I leave it out and just use STS_set(&sts, CLK_gethtime()),

    the CPU load and average cycle count go down significantly.

    It doesn't seem like a viable solution if it's that intrusive, does it?

  • That's weird. Is it possible to share a sample project with me so that I can reproduce the problem?

    Thanks,

    Vikram