CCS/EK-TM4C129EXL: Minimal time between Interrupts, interrupt service routines

Part Number: EK-TM4C129EXL
Other Parts Discussed in Thread: CODECOMPOSER

Tool/software: Code Composer Studio

I have a TM4C129EXL 120 MHz board and I am using Code Composer Studio.

Is there a minimal time needed between interrupts so that an interrupt is not skipped?
Or will interrupts simply be stacked one after another, no matter the time between them?

If interrupts are stacked, is there a limit to how many can be stacked?

I'm using the board to read out an encoder, and an interrupt is generated whenever there is a rising edge.
The encoder is high-end, however, so a great number of pulses will be coming in.

Thanks in advance!

  • Hello Arne,

    It sounds like this would involve just one GPIO, or a handful of them, is that correct?

    If so, and the encoder edges arrive faster than the interrupt for a given GPIO can finish processing, then you wouldn't be able to stack interrupts, because that interrupt would still be in the middle of being serviced.

    What kind of processing do you need to do? Do you need to count the number of edges? How fast are they coming in? Do you also need to know the timing between edges?

  • Dear Ralph,

    Thanks in advance for replying.

    Some explanation:
    I'm reading out 2 encoders, so I read at least 2 GPIOs, but I'd rather read out 4 (2 phases per encoder so I can check the direction).
    My test setup at maximum speed sends pulses every 50 microseconds (for 1 phase); if I read out 2 phases (which are 90 degrees apart), I will have a pulse every 25 µs.

    So every 50 µs interrupt A will trigger, and 25 µs after interrupt A is triggered, interrupt B will trigger. (A and B are on different pins.)

    If I add another encoder and read out 2 more phases, in the best case I can arrange for those interrupts to trigger exactly halfway between A and B.
    So with 2 encoders and 2 phases per encoder, the minimal time between different interrupts is at best 12.5 µs.

    But I would really like to know what the minimal time is for the same interrupt to be detectable again.

    Questions:
    So if I understand correctly, interrupts on different GPIOs have no influence on one another, they just get stacked?
    If interrupt A isn't cleared, another interrupt on A won't be detected and so doesn't stack?
    So only interrupts on different GPIOs stack?
    What's the minimal time an interrupt takes if I literally only do "pulses1+=1;"?

    Can the interrupt length be measured like this, or will the result be longer or shorter because "TimerValueGet" itself also takes some time?

    #include <stdint.h>
    #include <stdbool.h>
    #include "inc/hw_memmap.h"
    #include "driverlib/gpio.h"
    #include "driverlib/timer.h"

    volatile uint32_t Timervalue1, Timervalue2;  /* timer snapshots at ISR entry/exit */
    volatile uint32_t pulses1;                   /* rising-edge count */
    volatile uint32_t ElapsedTime;

    void Trigger1(void){
        Timervalue1 = TimerValueGet(TIMER0_BASE, TIMER_A);  /* snapshot at ISR entry */
        GPIOIntClear(GPIO_PORTE_BASE, GPIO_INT_PIN_2);      /* clear the pin interrupt */
        pulses1 += 1;
        Timervalue2 = TimerValueGet(TIMER0_BASE, TIMER_A);  /* snapshot before ISR exit */
    }

    int main(void)
    {
        //triggercode
        while(1){
            ElapsedTime = Timervalue2 - Timervalue1;
        }
    }
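
    (The //triggercode part above stands for the interrupt and timer setup. A minimal sketch of what that typically looks like with TivaWare driverlib is below; the port, pin, timer, and helper name are illustrative and may not match the real wiring.)

    #include "driverlib/sysctl.h"
    #include "driverlib/interrupt.h"

    void SetupTrigger(void)   /* hypothetical helper standing in for //triggercode */
    {
        SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOE);
        SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
        while(!SysCtlPeripheralReady(SYSCTL_PERIPH_GPIOE)){}

        GPIOPinTypeGPIOInput(GPIO_PORTE_BASE, GPIO_PIN_2);             /* encoder phase input */
        GPIOIntTypeSet(GPIO_PORTE_BASE, GPIO_PIN_2, GPIO_RISING_EDGE); /* interrupt on rising edge */
        GPIOIntRegister(GPIO_PORTE_BASE, Trigger1);                    /* attach the ISR */
        GPIOIntEnable(GPIO_PORTE_BASE, GPIO_INT_PIN_2);

        TimerConfigure(TIMER0_BASE, TIMER_CFG_PERIODIC_UP);            /* free-running timestamp timer */
        TimerEnable(TIMER0_BASE, TIMER_A);

        IntMasterEnable();
    }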

    Arne

  • Hello Arne,

    Arne Poelaert said:

    So if I understand correctly, interrupts on different GPIOs have no influence on one another, they just get stacked?

    Yes, they get stacked and handled based on priority (the datasheet has sections on priority, and on what happens if all interrupts have the same priority).

    Arne Poelaert said:

    If interrupt A isn't cleared, another interrupt on A won't be detected and so doesn't stack?

    Correct.

    Arne Poelaert said:

    So only interrupts on different GPIOs stack?

    Correct.

    Arne Poelaert said:

    What's the minimal time an interrupt takes if I literally only do "pulses1+=1;"?

    I am not sure. You can measure this in two ways, though:

    1) Imprecise, but it gives you a ballpark: toggle a GPIO high as you enter the ISR and low as you exit, and measure that pulse (see the sketch at the end of this reply). The toggling adds a little extra time of its own, though.

    2) Precise: put a breakpoint in your ISR, and when the breakpoint hits, open the Disassembly view and look at the instructions your ISR executes. From there you can count the CPU cycles of the ARM assembly instructions and convert them to time, knowing how fast your clock is running.
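
    As an illustrative calculation (the cycle count here is just an example, not a measured number): at a 120 MHz system clock one cycle is 1 / 120 MHz ≈ 8.3 ns, so an ISR body that disassembles to, say, 30 cycles would take roughly 30 × 8.3 ns ≈ 0.25 µs.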

    I have not tried your timer method, but I don't think it would work well; I would recommend method 1) as a better way to do that. If that still doesn't give you enough of a picture, method 2) will, because it adds no latency from additional processing. TimerValueGet has to both read the timer value and store it in a variable, which is more expensive than the GPIO toggle.
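
    As a minimal sketch of method 1), assuming TivaWare driverlib and a spare output pin (PN0 below is just an example, and its GPIO output setup is not shown), your ISR would become something like:

    void Trigger1(void){
        GPIOPinWrite(GPIO_PORTN_BASE, GPIO_PIN_0, GPIO_PIN_0);  /* pin high at ISR entry */
        GPIOIntClear(GPIO_PORTE_BASE, GPIO_INT_PIN_2);          /* clear the pin interrupt */
        pulses1 += 1;
        GPIOPinWrite(GPIO_PORTN_BASE, GPIO_PIN_0, 0);           /* pin low at ISR exit */
    }

    The pulse width you see on a scope or logic analyzer is then the ISR duration plus the time of the two GPIOPinWrite calls themselves.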

  • Thank you very much for all the information!
    I will count the CPU cycles; that will give me an accurate value.

    Arne