interrupt latency time



Hi!

I would like to use my LM4F232 at 80 MHz to measure the time between rising edges on one GPIO pin, using a timer and an interrupt connected to that GPIO pin. The measurement error will be the interrupt latency of the interrupt.

If I don't have any other interrupt source, how can I evaluate the interrupt latency? Can someone tell me where I can find documentation on how to evaluate it?

Thank you very much.

Lisa

 

  • Lisa56900 said:
    measure time between front edges of one GPIO

    If you truly seek to measure the "time between front (leading) edges" - is not any interrupt latency irrelevant?  Believe this is so as each leading edge will encounter this same latency - thus if each latency time is equal, your timer delta should equal the period of your input signal.  (i.e. the edge-to-edge time difference)  I'd not bet the farm/boat on this - but think I'm correct.

    To ensure highest accuracy - set this interrupt to highest priority - and possibly "mask off" other competing MCU operations - so that you always enter the interrupt from the same context.   FYI - we've consistently measured ARM interrupt response as sub-1 µs.

    A simple means to measure such latency is to scope both the "interrupting signal" and a toggled GPIO which resides as the top instruction w/in that interrupt service routine.  The difference between these 2 edges reveals the interrupt latency.

    Do note - TI's Dave Wilson has posted cautions about use of the LM4F232 when operated @ such high speed.  (recall a prohibition especially between 70-80 MHz)  The most current errata are always useful...

  • I have measured the latency on the LM4F120XL LaunchPad -- and as cb1 says, I can confirm the latency to be in the area of 1 µs -- my scope is 40 MHz so it gets a little dim with ns rise times.

    I found it very effective to toggle the GPIOs attached to the LEDs, measure dwell time in the interrupt, and see how it fit into the main program loop and where I was "dropping" information packets.

    If you are dealing with interrupts in a data-collection program, it is well worth setting up a measurement regimen and examining the behaviour of the interrupts and the ISRs.

    I have a post in the Sharing Forum, "ADC with Interrupts" -- it can be quite entertaining to put a measurement routine inside the interrupt and then move all but a flag out to the program's main loop -- a while(1) "forever" loop. The interrupt's behaviour is then much improved. What it reveals is that you can push a lot less information than you would believe out of a serial port back to a PC -- even at 256k baud.

    Bottom line -- a topic worth investigating if you are new to LM4F.

    Cheers!
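    To make the flag-out-of-the-ISR pattern above concrete, here is a minimal sketch (plain C, hardware setup omitted - the "ISR" is called directly so the control flow is visible without a board): the handler only stashes data and raises a flag, and the forever loop does the slow serial work.

    ```c
    /* Illustrative sketch of the "all but a flag out of the ISR" pattern.
     * In a real build, fake_isr() would be registered with the NVIC as
     * the ADC/GPIO handler; here it is invoked directly. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static volatile bool data_ready = false;
    static volatile uint32_t latest_sample = 0;

    /* Keep the interrupt handler minimal: stash the data, raise a flag. */
    static void fake_isr(uint32_t sample)
    {
        latest_sample = sample;
        data_ready = true;
    }

    int main(void)
    {
        fake_isr(1234);          /* simulate one interrupt firing */
        while (data_ready) {     /* stand-in for the while(1) forever loop */
            data_ready = false;
            /* Slow work (e.g. UART output at 256k baud) happens here,
             * outside the ISR, so latency for other interrupts stays low. */
            printf("sample = %lu\n", (unsigned long)latest_sample);
        }
        return 0;
    }
    ```

    The dwell time inside the handler then stays near-constant, which is exactly what makes the GPIO-toggle measurement above meaningful.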

  • A trick I've used to do this kind of thing before involved using a general purpose timer in Input Edge-Time mode. This takes a snapshot of the free-running timer value when an edge occurs on a pin and raises an interrupt. As soon as you enter the timer ISR, read both the snapshot and the current timer value. The difference between these two, with an adjustment for the number of cycles between the two reads and the number of instructions between the start of your ISR and the reads, represents the interrupt latency.
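    As a rough sketch of the arithmetic this method implies (illustrative C, not TI library code): `snapshot` is the value the hardware latched on the edge, `now` is the value read at the top of the ISR, and `READ_OVERHEAD` is a hypothetical correction for the cycles spent entering the ISR and performing the two reads, which you would calibrate for your own build. A 16-bit up-counting free-running timer is assumed.

    ```c
    /* Illustrative latency estimate from an Input Edge-Time capture.
     * Not TI library code; READ_OVERHEAD is an assumed calibration
     * constant - measure it on real hardware. */
    #include <stdint.h>
    #include <stdio.h>

    #define TIMER_BITS    16u
    #define TIMER_MASK    ((1u << TIMER_BITS) - 1u)
    #define READ_OVERHEAD 6u   /* assumed ISR-entry + read cost, in ticks */

    static uint32_t latency_ticks(uint32_t snapshot, uint32_t now)
    {
        /* Modular subtraction handles the free-running timer wrapping
         * between the hardware capture and the software read. */
        uint32_t delta = (now - snapshot) & TIMER_MASK;
        return (delta > READ_OVERHEAD) ? delta - READ_OVERHEAD : 0u;
    }

    int main(void)
    {
        /* Edge captured at tick 100, ISR reads the timer at tick 180:
         * raw delta 80, minus assumed overhead 6 -> 74 ticks. */
        printf("%u\n", (unsigned)latency_ticks(100u, 180u));
        /* Wraparound case: capture near the top of the 16-bit count. */
        printf("%u\n", (unsigned)latency_ticks(65530u, 20u));
        return 0;
    }
    ```

    At 80 MHz a tick is 12.5 ns, so 74 ticks is roughly 0.9 µs - in line with the sub-1 µs figures quoted above.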

  • Well yes - but does not this method "assume" that the latency of a Timer - config'ed in Edge-Time mode - is equal to the latency of a GPIO - also config'ed for edge detect?

    Our group had run such "latency tests" (perhaps upon another's M3) and measured variations based upon the interrupt source and/or mode.  IIRC - the more involved the interrupt (i.e. PWM Generator) the longer was such latency.  GPIO edge triggers - again from memory - appeared the fastest responding.  (which may suggest that "hi-speed" GPIOs may be "best in class.")

    At any rate - none have broached the poster's issue (and my concern) about "back to back" (same-edge-triggered) latencies "cancelling" each other - thus providing a true measure of the input signal's period.

    Your use of a Timer for such measurement does have advantages over the GPIO-and-scope method verified by the original poster.  However - programming (set-up + config of multiple peripherals) effort/resources are required - some combination of the two methods may be the best trade-off...

  • Indeed - I'll give you that point. I used this as a way to measure the spread of interrupt latencies in an application, as far as I can remember, setting the timer interrupt to a lower priority than everything else. In this case, the measurement gave me a good idea of the length of the other ISRs (that would hold off the timer ISR). If you are worried about a very small number of cycles and the difference in latency between a GPIO and a timer makes a significant difference, this may not be the best way to do it. That said, I have no idea what the latency of the timer interrupt is compared to a GPIO edge interrupt. I would expect it to be a couple of cycles slower but that's purely a guess on my part.

  • Suspect both posts - and methods - have good value.  For the poster's issue - extreme accuracy may not be paramount - either means should satisfy.

    Would welcome your comment re: back to back (identical) GPIO Edge triggered latencies cancelling each other - resulting in a true, "edge to edge" measurement.  

    Thanks...

  • I agree that, if the interrupt used for the measurement is the highest priority interrupt in the system, the latency on entry should be deterministic and should indeed cancel out if you are measuring pulse widths based on interrupts on two edges.  If the interrupt could itself be interrupted, though, you will end up with jitter in the measurement caused by this. Using the timer capture method, if we assume the latency between the edge occurring and the timer firing the interrupt is fixed, should offer a far more accurate way to make the measurement since the hardware captures a timestamp on the edge and that is held regardless of the time it takes between then and the ISR running. As a result, you filter out all the software-induced interrupt latency from the measurement (as long as that latency is never longer than half a period of the pulse you are trying to measure, of course). Does that sound right to you?
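    The cancellation argument can be shown with trivial arithmetic (all numbers made up): if both edges are timestamped the same number of ticks late, the measured delta is unchanged; any difference in latency appears directly as measurement error.

    ```c
    /* Sketch of the cancellation argument. Software timestamps are taken
     * 'latency' ticks after each true edge; equal latencies cancel. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t measured_period(uint32_t edge1, uint32_t edge2,
                                    uint32_t lat1, uint32_t lat2)
    {
        return (edge2 + lat2) - (edge1 + lat1);
    }

    int main(void)
    {
        /* True period: 1000 ticks. Equal latencies cancel exactly. */
        printf("%u\n", (unsigned)measured_period(0u, 1000u, 74u, 74u));
        /* Unequal latencies (e.g. the second ISR was held off by a
         * higher-priority interrupt) show up as jitter in the result. */
        printf("%u\n", (unsigned)measured_period(0u, 1000u, 74u, 94u));
        return 0;
    }
    ```

    The timer-capture method removes the `lat1`/`lat2` terms entirely, since the hardware latches the timestamp on the edge itself.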

  • Great - well thought out and presented - as per the "Dave Wilson" normal - deep thanks.

    Pardon - have appt. right now - will respond later tonight - we are being deluged w/snow in US midwest at this moment.  Want to give your valued writing the thought/effort it requires - again - thank you.

  • Dave Wilson said:
    the latency on entry should be deterministic and should indeed cancel out if you are measuring pulse widths based on interrupts on two edges.

    Good - and thank you - this confirms my earlier response to poster.

    Dave Wilson said:
    as long as that latency is never longer than half a period of the pulse you are trying to measure

    Could not absorb this earlier - now believe that you're warning us that there is a critical input frequency: if the sum of 2 latencies exceeds the input signal's period, we are likely to "miss" that 2nd input edge's arrival - or respond to it with the error caused by the 2 latency times combined.  (that being the case when 2*latency_time > input signal period)

    Your point about the MCU "timestamping on the edge" implies to me that we must then quickly copy/log each such timestamp - and be ready/waiting for subsequent "edge timestamps" - this requirement due to the likely over-write of earlier timestamps.

    Is this a correct understanding?

    Again much thanks for your ongoing - care & guidance...

  • I'm posting all this without actually going off and coding anything but..... :-)

    My point about the latency never exceeding half the period of the measured signal was to ensure that it is never possible to miss an edge. Because the hardware timestamp capture isn't FIFOed, you only get one captured timestamp at a time. If another edge occurs before you've read the previous measurement, therefore, you lose one of them and get out of sync. If the signal frequency (and duty cycle, of course) is such that no two edges will ever occur within the maximum interrupt latency of the system, you should be fine.
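    A quick sketch of the resulting bound (the 1 µs worst-case latency is the rough figure measured earlier in this thread, not a datasheet value): with a single non-FIFOed capture register, captured edges must arrive further apart than the worst-case latency; if both edges of the signal are captured, that spacing is half the period, giving f_max = 1 / (2 × worst_latency).

    ```c
    /* Sketch of the maximum-frequency bound implied by a single
     * (non-FIFO) capture register. Illustrative numbers only. */
    #include <stdio.h>

    static double max_input_hz(double worst_latency_s)
    {
        /* Both edges captured -> half-period must exceed worst latency. */
        return 1.0 / (2.0 * worst_latency_s);
    }

    int main(void)
    {
        /* ~1 us worst-case latency -> roughly 500 kHz ceiling. */
        printf("max measurable input ~ %.0f Hz\n", max_input_hz(1e-6));
        return 0;
    }
    ```

    A 50% duty cycle is assumed in that half-period figure; a narrow pulse tightens the bound to the shortest edge-to-edge spacing, as Dave's duty-cycle caveat notes.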

  • Dave Wilson said:
    ensure that it is never possible to miss an edge ... the hardware timestamp capture isn't FIFOed

    Aha - this confirms my interpretation of your earlier (1 up) writing.  The timestamp must be copy/saved to prevent an over-write.

    Dave Wilson said:
    without actually going off and coding anything

    And that would make 2 of us - mon ami.  

    Glad that the original poster has stayed interested/active (rewarded my earlier post & now yours) - think that we've done well on this one...