
Concerto/Master interrupt latency



Hello,

We have an application that requires minimal interrupt latency, and we are evaluating a Concerto kit (with a Cortex-M3 processor inside).

Based on the datasheets, we understand that the interrupt latency should be 12 cycles between the moment the interrupt is asserted and the execution of the first instruction of the ISR.

Unfortunately, when we run a simple test the interrupt latency is much longer (around 25 CPU cycles). These are the details:

1) We generate interrupts with a GPIO interrupt (we also tried a UART_INT_RX interrupt and the latency looks identical). No other interrupt is enabled.

2) The ISR toggles a GPIO output. With an oscilloscope we measure the time between the signal that triggers the external interrupt and the GPIO output that is toggled by the ISR.

4) To change the GPIO output we use the fastest way: HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_3 << 2))) = 0;

5) The whole project is very simple: it has only this interrupt and almost no other code. The main() function just toggles an LED, identical to the blinky example. The rest of the code is also the same as the blinky example (a sketch of the setup follows below).
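
For reference, this is roughly what the test looks like. It is only a sketch assuming StellarisWare-style DriverLib calls: the trigger pin (PD0), the SYSCTL_PERIPH_*/INT_GPIOD names and the assumption that GPIODIntHandler is entered directly from the startup vector table are placeholders and may differ slightly in the Concerto headers.

    /* Minimal sketch of the latency test (assumed trigger pin: PD0). */
    #include "inc/hw_types.h"
    #include "inc/hw_memmap.h"
    #include "inc/hw_gpio.h"
    #include "inc/hw_ints.h"
    #include "driverlib/gpio.h"
    #include "driverlib/interrupt.h"
    #include "driverlib/sysctl.h"

    void GPIODIntHandler(void)
    {
        /* Drive PE3 low as early as possible; the masked DATA write
         * (GPIO_PIN_3 << 2) touches only bit 3, so no read-modify-write. */
        HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_3 << 2))) = 0;

        /* Clear the source so the interrupt does not retrigger. */
        GPIOPinIntClear(GPIO_PORTD_BASE, GPIO_PIN_0);
    }

    int main(void)
    {
        SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOD);   /* trigger input  */
        SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOE);   /* toggled output */

        GPIOPinTypeGPIOInput(GPIO_PORTD_BASE, GPIO_PIN_0);
        GPIOPinTypeGPIOOutput(GPIO_PORTE_AHB_BASE, GPIO_PIN_3);

        GPIOIntTypeSet(GPIO_PORTD_BASE, GPIO_PIN_0, GPIO_RISING_EDGE);
        GPIOPinIntEnable(GPIO_PORTD_BASE, GPIO_PIN_0);
        IntEnable(INT_GPIOD);
        IntMasterEnable();

        while(1)
        {
            /* LED toggle as in the blinky example (omitted here). */
        }
    }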

Can somebody please help us understand why the interrupt latency is so long?

Thanks,

Max

  • Hi Max,

    Please let us know how many assembly instructions are used for the GPIO toggle code mentioned in step 4. You can check this in the disassembly window of CCS.

    Regards,

    Vivek Singh

  • Vivek,

    It generates 4 instructions:

    LDR  A3, $C$CON32 ; |276| load the GPIO Port E DATA address constant
    MOVS A2, #0       ; |276| value to write (drive PE3 low)
    ADDS A3, A3, #32  ; |276| add the masked-DATA offset (GPIO_PIN_3 << 2)
    STR  A2, [A3, #0] ; |276| store to the masked DATA register

    It is worth mentioning that if we take out the GPIO toggle, the interrupt takes almost as long (a couple of clocks less because of the missing GPIO instructions).

    Thanks,

    Max

  • Hello,

    Another way of answering my question would be for somebody to provide simple source code that demonstrates that the interrupt latency is only 12 cycles (any interrupt source is fine).

    Thanks,

    Max

  • Hi Max,

    I assume the interrupt latency number you are referring to (12 cycles) is from the Cortex-M3 datasheet. That number covers only the NVIC/core portion and does not include any other delay. In your code you toggle the GPIO by writing to a GPIO register, which itself takes a few CPU cycles (you can measure this by doing two back-to-back toggles of the same GPIO pin and measuring the pulse width on the scope). Considering the 4 instruction fetches and one GPIO write, a 25-cycle latency looks OK to me.
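
    Something along these lines (just a sketch, reusing the PE3 write from your code) produces a pulse whose width is roughly the cost of a single GPIO write over the bus:

        /* Sketch: two back-to-back writes to the same masked DATA register;
         * the width of the resulting high pulse on PE3 approximates the cost
         * of one GPIO write. */
        #include "inc/hw_types.h"
        #include "inc/hw_memmap.h"
        #include "inc/hw_gpio.h"
        #include "driverlib/gpio.h"

        static void MeasureGpioWriteCost(void)
        {
            HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_3 << 2))) = GPIO_PIN_3;
            HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_3 << 2))) = 0;
        }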

    You mentioned that even when you remove the GPIO toggle code, the interrupt latency remains the same. In that case, how do you measure the latency?

    Regards,

    Vivek Singh

  • Hi Vivek,

    I had actually measured the time the GPIO write takes and confirmed that it is not the reason for the long latency. I checked the assembly instructions from the beginning of the interrupt function and there are very few before the GPIO is toggled. When I computed the 25 cycles I did not count the time of the instructions that toggle the GPIO (assuming 1 clock per assembly instruction).

    I did the same test on the C2000 side of the Concerto and the latency is fine: it matches the number of cycles stated in the datasheet, which is about half of what the Cortex-M3 seems to be taking. I'm surprised that the Cortex-M3 interrupt latency is so much worse than the C2000's, given that interrupt latency is supposed to be one of the main improvements over previous ARM architectures. Am I missing something?

    I believe that there are around 13 cycles that the CPU is "wasting" on top of the interrupt latency and the GPIO toggle code.

    To answer your question: I verify the time the interrupt takes by continuously toggling another GPIO in the main() function, roughly as sketched below. When that GPIO stops toggling I know that the CPU is servicing the interrupt. The total processing time of the ISR changes very little when I take out the GPIO toggle code.
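
    The measurement loop inside main() looks roughly like this (a sketch; PE2 is a hypothetical "heartbeat" pin, the actual pin assignment doesn't matter):

        /* The toggling stops while the CPU services the interrupt, so the gap
         * seen on the scope is the total time spent away from main(). */
        while(1)
        {
            HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_2 << 2))) = GPIO_PIN_2;
            HWREG(GPIO_PORTE_AHB_BASE + (GPIO_O_DATA + (GPIO_PIN_2 << 2))) = 0;
        }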

    Can the CPU take 10 extra cycles to start the interrupt function? I assume that the 4 instructions I sent you each take 1 CPU clock, but I'm not sure. Surprisingly, the Instruction Set manual does not say how many cycles the instructions take.

    I would very much appreciate it if you could help me by creating a very simple project (maybe based on the blinky example or similar) that demonstrates the minimum possible interrupt latency, which I expect to be 12 cycles.

    Do you think that implementing the whole ISR in assembly could help? Is there any example code that has an ISR implemented in assembly?

    Thanks,

    Max

  • Hi Max,

    One thing you need to consider here is that the instruction that toggles the GPIO will not take just one cycle but about 5-6 cycles because of the bus architecture. Also, some instructions may take more than one cycle depending on what the instruction is doing.

    Regards,

    Vivek Singh

  • Hi Vivek,

    Thanks for the post. As I mentioned before, I have measured the time it takes to toggle the GPIO by toggling it multiple times, and the time I measured is around what you said.

    Would you be so kind as to ask one of TI's Stellaris "gurus" about the interrupt latency of this processor? The CPU is spending several cycles doing something I cannot account for between the moment the interrupt triggers and the first instruction of the ISR.

    Thanks,

    Max