
600 ns to jump into interrupt routine?

Hello everyone, I am currently measuring GPIO interrupt latency with a scope by raising the XF pin inside the interrupt routine.

So far I have measured around 600 ns from the rising edge of the input signal on the GPIO pin to the first line of code inside the routine.

I am aware of the automatic context saving, but I can't think of a way to lower this latency.

From what I know, jumping to the routine via the vector table shouldn't take more than a few cycles (around 50 ns, I guess), and saving context shouldn't take more than 20 cycles (there are only a few registers to save). So I was aiming for something around 100 ns to reach the first line of my routine.

Have you got any idea how I can lower this latency?

Many thanks

Silvere

  • Hi Silvere,

    What DSP is it? What is your system clock speed?

    What memory model is used? The huge and large memory models are slower than the small model. See section 6.1, Memory, of the C55x Compiler User's Guide.
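
    For example, with recent code generation tools the model is selected on the compiler command line (a sketch; the exact option spelling varies with tool version, so check section 6.1):

        cl55 --memory_model=small --opt_level=3 gpio_isr.c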

    Have you tried different optimization levels?

    You might also try fast return, which saves a minimal context in registers instead of pushing it onto the stack. See sections 4.2, Stack Configurations, through 4.4, Automatic Context Switching, in the C55x v3.x CPU Reference Guide.
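
    If you try it, note that the stack mode is selected by the reset vector entry in your vectors file. A sketch, assuming the assembler's .ivec directive (the label and section names here are placeholders):

        ; vectors.asm sketch -- see the .ivec directive in the C55x
        ; Assembly Language Tools User's Guide
                .sect   ".vectors"
                .ivec   reset_isr, USE_RETA   ; USE_RETA = 2x16-bit stack, fast return
                .ivec   nmi_isr               ; remaining vectors take no stack-mode operand
                .ivec   GPIO_isr              ; one .ivec entry per interrupt source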

    So you trigger the interrupt on GPIO and then raise XF in the ISR, correct? Do you use the asm("   bset XF") command to set XF, rather than a function call?

    Are there any other interrupts?

    Hope this helps,
    Mark

  • Hello Mark, and thanks for your time.

    I am running a 5515 DSP at a clock speed of 98 MHz.

    Memory is set to huge, and XF is raised via the asm("   bset XF") style command:

        interrupt void GPIO_isr(void)
        {
            asm(" BIT (ST1, #ST1_XF) = #1");   /* algebraic form of bset XF: drive XF high */
            /* ... rest of the handler ... */
        }
    I am using the fast return mode and no other interrupts are active. 
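
    For reference, at 98 MHz one cycle is about 10.2 ns, so:

        600 ns measured  = ~59 cycles
        100 ns hoped for = ~10 cycles

    which leaves roughly 49 cycles I cannot account for.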
    
    
    The only thing I haven't thought about is the small memory model. How does it affect the timing?
    I don't really see how to optimize this part of the program, as it's all done in hardware with the vectors.
    I have attached a screen capture from my mixed-signal scope.
    I look forward to hearing from you again.
    Many thanks

  • You might try writing your interrupt routines in C55x assembly instead of C.

    The C compiler is very smart about context saving, but if you call a C subroutine within your interrupt function then the context on the stack will necessarily be significantly larger. I learned a great deal by looking at the assembly output generated by the C compiler as I modified my code. I was able to use macros and other compile-time code generation to avoid subroutines, which significantly sped up my C language interrupts. For some interrupts, though, I decided to use assembly for minimum cycle counts.
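
    As a sketch of the macro idea (the helper name and data address below are made up to illustrate the pattern, not taken from your code):

        /* A real call forces the compiler to save every register the
         * callee might clobber before entering the body: */
        volatile unsigned int last_sample;

        void read_sample(void)
        {
            last_sample = *(volatile unsigned int *)0x2000;  /* placeholder address */
        }

        interrupt void GPIO_isr_with_call(void)
        {
            asm(" BIT (ST1, #ST1_XF) = #1");  /* raise XF */
            read_sample();                    /* call => large context save on the stack */
        }

        /* The same work written as a macro compiles inline, so only the
         * registers the ISR actually uses are saved and restored: */
        #define READ_SAMPLE()  (last_sample = *(volatile unsigned int *)0x2000)

        interrupt void GPIO_isr_with_macro(void)
        {
            asm(" BIT (ST1, #ST1_XF) = #1");  /* raise XF */
            READ_SAMPLE();                    /* no call => minimal context save */
        }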

    You should also look into the available #pragma hints that you can use to tell the compiler more about how your C language subroutines will be used.
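
    For instance, placement pragmas are an easy win if the ISR or its data currently end up in slower memory (the section names below are placeholders that you would also map in your linker command file):

        /* Keep the ISR and the data it touches in fast on-chip RAM.
         * ".isr_code" and ".isr_data" are made-up section names. */
        #pragma CODE_SECTION(GPIO_isr, ".isr_code")
        interrupt void GPIO_isr(void);

        #pragma DATA_SECTION(last_sample, ".isr_data")
        volatile unsigned int last_sample;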

    Note that I have over 6 million DMA operations per second on a C5506, which is more than one DMA transfer every 27 cycles. My interrupts are designed to occur much less frequently than the DMA, but I still found that I needed to minimize the cycles wasted on context saving that was really only needed because of the convenience of the C language and an initial lack of understanding of the consequences of simple design paradigms.

  • Thanks for your reply.

    Indeed, I got very different results with subroutines in the ISR. Even though they are only a couple of cycles each, they change everything for the registers and the stack.

    I will definitely look at how the compiler behaves in the generated asm files.

    Thanks a lot for your help

    PS: I found out that GPIO interrupts are slow by definition, so I use a timer instead. It is much better now.