Interrupt overhead and CPU clock speed

Other Parts Discussed in Thread: TMS320F28015

Hi, does anyone know the maximum clock speed of the CPU (not the peripherals) with a 20 MHz oscillator + PLL (OSCCLK x 10, i.e. PLLSTS[CLKINDIV] = 1)? The DSP is a TMS320F28015.

According to the datasheet, the maximum instruction rate for this processor is 60 MHz. But with the above settings I get a 200 MHz peripheral clock and 200 MHz CLKIN to the CPU block. I have already tested whether the peripherals work at 200 MHz, and they do. But if 60 MHz is the maximum CPU clock, it is strange that the program runs at all.

I also tested the program speed with CLKIN at 100 MHz (by changing the PLLCR value), and the execution speed does vary accordingly. I wanted to check whether the CPU was simply capped at its maximum speed of 60 MHz.

I measure an interrupt overhead (before the first line of the service routine executes) of 1.4 µs [about 130 clocks] with a 100 MHz CLKIN, and 740 ns with 200 MHz. Is such interrupt latency normal for a DSP?

Thanks in advance.

  • Alf,

    the hardware interrupt latency of this device is 14-16 clock cycles, e.g. 16 x 1/60 MHz ≈ 270 ns. How did you get your results? I used two GPIOs: one as an input, which triggers an XINT1 interrupt, and the second as an output, which is toggled at the beginning of the interrupt service routine. With an oscilloscope you can measure the time difference between the two signals. Of course, you will have to add a few cycles for the opening instructions of the C ISR's context save, which are executed before the toggle instruction is reached. My measurements show 18-22 clock cycles.
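
    For reference, here is a minimal sketch of that scope-based setup, written against the standard DSP280x header files. The pin choices (GPIO0 as input, GPIO1 as output) and the omitted PIE/vector setup are assumptions for illustration, not your exact code:

    __interrupt void xint1_isr(void)
    {
        GpioDataRegs.GPASET.bit.GPIO1 = 1;        // output high: stop mark for the scope
        // ... remaining ISR work ...
        PieCtrlRegs.PIEACK.all = PIEACK_GROUP1;   // acknowledge PIE group 1
    }

    void setup_latency_test(void)
    {
        EALLOW;
        GpioCtrlRegs.GPAMUX1.bit.GPIO0 = 0;       // GPIO0 = GPIO function, input
        GpioCtrlRegs.GPADIR.bit.GPIO0 = 0;        // input: requests XINT1
        GpioCtrlRegs.GPAMUX1.bit.GPIO1 = 0;       // GPIO1 = GPIO function, output
        GpioCtrlRegs.GPADIR.bit.GPIO1 = 1;        // output: probed by the scope
        GpioIntRegs.GPIOXINT1SEL.bit.GPIOSEL = 0; // route GPIO0 to XINT1
        EDIS;
        XIntruptRegs.XINT1CR.bit.POLARITY = 1;    // interrupt on rising edge
        XIntruptRegs.XINT1CR.bit.ENABLE = 1;      // enable XINT1
        // PIE vector mapping (PieVectTable.XINT1), PIEIER1, IER and EINT
        // are omitted here for brevity.
    }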

    Another measurement option is the XINT1CTR register. This counter is reset when XINT1 sees a valid interrupt request signal and is incremented with SYSCLKOUT, e.g. every 1/60 MHz. At the beginning of the ISR you can copy XINT1CTR into a global variable and halt execution after that instruction with a breakpoint. In my tests the global variable also shows values between 18 and 22.
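
    A minimal sketch of that variant (again assuming the standard DSP280x header file names):

    volatile Uint16 xint1_latency;                // inspect with a breakpoint

    __interrupt void xint1_isr(void)
    {
        // XINT1CTR restarted at the interrupt edge and counts SYSCLKOUT
        // cycles, so its value here is the latency up to this instruction.
        xint1_latency = XIntruptRegs.XINT1CTR;
        PieCtrlRegs.PIEACK.all = PIEACK_GROUP1;   // acknowledge PIE group 1
    }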

    Regarding your experiments with SYSCLKOUT: the datasheet is very strict; it defines a maximum internal frequency. If you go beyond that value, you are out of spec. For a good night's sleep, I personally wouldn't do that.
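
    For the in-spec setting (SYSCLKOUT = 60 MHz from your 20 MHz oscillator), the usual sequence looks roughly like the sketch below, following the pattern of TI's DSP280x_SysCtrl.c InitPll() (simplified; the missing-clock checks of the original are left out). With CLKINDIV = 0 the PLL output is divided by 2, so DIV = 6 gives 20 MHz x 6 / 2 = 60 MHz:

    EALLOW;
    SysCtrlRegs.PLLSTS.bit.CLKINDIV = 0;          // keep the mandatory /2 after the PLL
    SysCtrlRegs.PLLCR.bit.DIV = 6;                // 20 MHz * 6 / 2 = 60 MHz SYSCLKOUT
    while (SysCtrlRegs.PLLSTS.bit.PLLLOCKS != 1)
    {
        // wait until the PLL has locked before trusting SYSCLKOUT
    }
    EDIS;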

    Regards

  • Frank,

    Thank you.

    Yes, I did the same thing, using LEDs and a scope to find the latency. After your suggestions I changed the clock to 60 MHz and adjusted all the relevant parameters (I was wrong to use 100 MHz), then repeated the test. It now seems even longer: it takes 1.6 µs from the XINT1 pulse to the first line of the interrupt service routine's code (turning on the LED). That is not acceptable for a DSP, given that, as you said, 14-16 clocks is standard for these devices.

    In your test code, are you running from flash? I assume running the code from flash is faster?

    Kind regards.

  • My code is running from RAM. Flash would be slower because of wait states. The hardware latency, however, is independent of the memory type; only the software context save of the C interrupt service routine would be slightly slower from flash. There must be something else wrong with your measurement setup.

    Regards

  • Frank,

    Ah, my mistake, my question should have said RAM, not flash. OK, to do this, are you running this piece of code before the main loop and then initialising the flash (InitFlash())?

    MemCopy(&RamfuncsLoadStart, &RamfuncsLoadEnd, &RamfuncsRunStart);

    MemCopy(&intfuncsLoadStart, &intfuncsLoadEnd, &intfuncsRunStart);

    Regards,

  • Alf,

    I am not sure what your last question is about. The interrupt overhead is not related to the MemCopy functions. These function calls copy pieces of code from FLASH to RAM: (a) the function InitFlash(), which must be executed from RAM because it reduces the wait states of the FLASH, and (b) the interrupt service routines, which are copied from FLASH to RAM to get a higher execution speed.

    If your test project for measuring the interrupt latency is RAM-based only, you do not need the two MemCopy calls or the InitFlash() call at all. If you test the interrupt latency with a FLASH-based project, then yes, you have to copy the two code blocks to RAM. After that, your measurement results for the interrupt latency should be identical to the RAM-only project, because all latency-related parts (the hardware interrupt response and the beginning of the ISR code) are identical.
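
    For completeness, the typical startup order in a FLASH-based project looks roughly like this (mirroring TI's flash example projects; the intfuncs symbols are assumed to come from your own linker command file):

    void main(void)
    {
        InitSysCtrl();                            // PLL, watchdog, peripheral clocks

        // Copy time-critical code from its FLASH load address to its RAM
        // run address before it is ever executed.
        MemCopy(&RamfuncsLoadStart, &RamfuncsLoadEnd, &RamfuncsRunStart);
        MemCopy(&intfuncsLoadStart, &intfuncsLoadEnd, &intfuncsRunStart);

        InitFlash();                              // runs from RAM; sets FLASH wait states

        // ... PIE init, vector mapping, peripheral setup, enable interrupts,
        // then the main loop ...
    }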

    Regards