
Interrupt triggered on critical moments - is this code safe?

Fellows,

I am studying two situations where I am unsure how safe the interrupt behaviour is. Although they are two questions, I'll leave them together, as they are somewhat related and quite specific...

First, inside the typical interrupt service routines:

void UARTInt_U5Handler(void)
{
	uint32_t	ui32Status;
	ui32Status = MAP_UARTIntStatus(UART5_BASE, true);		// Get interrupt status
	/* What if the original IRQ was RX, and right now a TX happens? */
	MAP_UARTIntClear(UART5_BASE, ui32Status);				// Clear interrupts
	if ((ui32Status & UART_INT_RX) || (ui32Status & UART_INT_RT))	// Rx interrupt flag was set
	{
		while(MAP_UARTCharsAvail(UART5_BASE))
		{
			pc_UARTInt_RxBuf[5][ui32_UARTInt_RxRec[5]] = MAP_UARTCharGet(UART5_BASE);
			ui32_UARTInt_RxRec[5]++;
			ui32_UARTInt_RxRec[5]%=UARTINT_RXBUFSIZE;
			ui32_UARTInt_RxFlag[5] = true;
		}
	}
	if (ui32Status & UART_INT_TX)							// Tx interrupt flag was set
	{
		TransmitMoreBytesFromTxBuffer();
	}
}

What if the next interrupt event happens exactly between the interrupt status read and the clearing? I would clear the unserviced bit and ignore it forever, correct? It's usually good to write a question out: I already realized that adding a FIFO level check to the second if() would make sure that the second situation is still serviced. But I'd appreciate it if someone could confirm that the TX event would actually be missed with the code above.

The second one is: if I briefly enable the interrupt for a certain peripheral and right away disable it again, will those few clock cycles in the middle be enough to let execution detour to the ISR? Is there a risk that some sort of optimization prevents that?

HWREG(UART0_BASE + UART_O_IM) &= ~(UART_INT_TX | UART_INT_RX | UART_INT_RT);		// Disable UARTInt
ExecuteCriticalTask1();
HWREG(UART0_BASE + UART_O_IM) |= (UART_INT_TX | UART_INT_RX | UART_INT_RT);		// Re-enable UARTInt
/* Is this interval enough to detour code in case the UART interrupt flags were set? */
HWREG(UART0_BASE + UART_O_IM) &= ~(UART_INT_TX | UART_INT_RX | UART_INT_RT);		// Disable UARTInt
ExecuteCriticalTask2();
HWREG(UART0_BASE + UART_O_IM) |= (UART_INT_TX | UART_INT_RX | UART_INT_RT);		// Re-enable UARTInt
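A side note on the re-enable lines above: the blind |= writes all three bits back even if one of them had deliberately been left disabled elsewhere. A save-and-restore variant avoids that. Below is a minimal, host-testable sketch of the idea, with the IM register modeled as a plain variable (the real code would use the volatile HWREG access and the TivaWare bit definitions):

```c
#include <stdint.h>

/* Sketch only: these bit values and fake_uart_im stand in for the real
 * UART_INT_* defines and the volatile IM register. */
#define UART_INT_RX 0x010u
#define UART_INT_TX 0x020u
#define UART_INT_RT 0x040u

static uint32_t fake_uart_im;   /* stand-in for HWREG(UART0_BASE + UART_O_IM) */

/* Save the current mask, then disable the given interrupt sources. */
static uint32_t uart_int_save_and_disable(uint32_t sources)
{
    uint32_t saved = fake_uart_im;
    fake_uart_im &= ~sources;
    return saved;
}

/* Restore exactly the mask that was in effect before, rather than
 * blindly OR-ing bits back in (which could enable a source that was
 * deliberately off at the time). */
static void uart_int_restore(uint32_t saved)
{
    fake_uart_im = saved;
}
```

ExecuteCriticalTask1() would then run between uart_int_save_and_disable() and uart_int_restore().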

Thanks for the attention!

Bruno

  • Sorry to answer my own question, but I further noticed that:
    UARTIntClear(UART5_BASE, ui32Status);
    will only clear the bits that were set when the variable was read. So if a different interrupt flag gets set in the meantime, it will not be cleared and will still be pending at the exit of the ISR.
    So that code is safe, and part 1 is answered...
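    To convince myself, the write-1-to-clear behaviour can be mimicked in plain C on a host machine (fake_ris and the helper names are invented for this demo; they are not TivaWare APIs):

```c
#include <stdint.h>

#define UART_INT_RX 0x010u
#define UART_INT_TX 0x020u

/* Host-side model of the UART's raw interrupt status register and its
 * write-1-to-clear (ICR) behaviour, for illustration only. */
static uint32_t fake_ris;

static uint32_t fake_int_status(void)      { return fake_ris; }
static void     fake_int_clear(uint32_t m) { fake_ris &= ~m; }  /* W1C */
```

    If TX fires between the status read and the clear, clearing only the snapshot leaves the TX bit pending for the next ISR entry.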
  • Hello Bruno,

    In the 2nd part of your question, the important thing is to note when the interrupt occurs. If the interrupt occurs before the IM bits are cleared, then YES, it will go to the ISR. If the interrupt occurs after the IM bits are cleared, it will stay pending and be serviced once the IM bits are set again.
  • Hi Amit, thanks for the reply.

    The example in the 2nd question is not necessarily related to the first (although yes, they were pasted from the same project and peripheral!); it was more of a generic question.

    Let's say the interrupt event occurred sometime during the execution of CriticalTask1, and the flag was certainly set when that task ended.

    I simply re-enable the interrupts on that peripheral in one line and, immediately on the next, disable them again. My only concern is that the enable/disable could be faster than the time the interrupt logic needs to divert program execution. Or maybe some "smart optimization" could cancel out the two register writes...

    There are still a few mysteries going on here with our communication when lots of serial ports receive and send at 921600 baud, and I'm trying to cover all the possibilities.

    For now, I'll take your YES as the answer (as I would initially expect, anyway). Thanks again!
  • You seem to be recreating interrupt priorities in SW. The only time I've seen code like this is when working around hardwired interrupt priorities that were not aligned with the application needs. It was, even then, the second or third choice alternative.

    Can you give a little more information about the application where you are using this? Frequency of the associated interrupts, latency requirements etc...

    Robert
  • Hello Bruno,

    I don't think the compiler will optimize that away. When enabled, the interrupt shall fire, based on my understanding of the peripheral's interrupt generation.
  • Robert,

    Not using some sort of RTOS, and indeed we need to somehow set priorities on the code. But no, I guess we are not trying to recreate interrupt priorities... Let me try to give some further input:

    - The board receives a good deal of UART bytes at 921600 baud from 4 different ports. One of the ports reaches up to 48,000 bytes per second, a couple receive around 6,000 bytes per second, and the last one receives occasional short event-based messages. Two of the ports also send control messages, and sometimes relay bytes from one port to the other depending on the message content. There's a lot of buffering going on, and I want to triple-check that the system is able to service everything. I've done some flooding/crash tests with counters and learned that we are safely within the limit of processing capacity.

    - The CriticalTask1 mentioned above is not that critical, actually; it is reading some registers from a gyro sensor. That's main-thread code (as opposed to inside any interrupt) and it is short enough (37us) that I can disable other interrupts and wait for it to be finished. Nothing fails if that code is interrupted, but leaving that thread to service a very likely UART RX interrupt 3 or 4 times during the task would just be wasted detours, so I figured I'd better disable the UART interrupts during such a task. As a matter of fact, I could just as well disable interrupts globally; there is nothing in this system that can't wait 37us.

    The rest of the time, the processor is doing some inertial measurement calculations, and when everything is done, it idles in the while loop - I did not even bother to implement any sort of sleep for now.
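    (A rough sanity check of that headroom, with an assumed - not measured - per-byte ISR cost of 1us, just to show the shape of the estimate:)

```c
/* Rough interrupt-load estimate from the quoted byte rates.  The
 * 1 us/byte service cost used below is an assumed figure for
 * illustration, not a measurement from this system. */
static double uart_rx_cpu_load(double bytes_per_sec, double us_per_byte)
{
    return bytes_per_sec * us_per_byte / 1e6;   /* fraction of one CPU */
}
```

    At 48,000 + 6,000 + 6,000 bytes per second, that comes out around 6% of the CPU, before FIFO batching reduces it further.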

    This morning's question was me thinking ahead and looking for unlikely causes of a system problem: a few messages were getting lost, or not properly processed or forwarded. In the end, it was a hardware issue in one of the boards of the prototype installation - but not in the ones I have here in my lab, which made debugging even trickier: things were working fine on the bench!

    I myself had never seen code like this either, switching one specific interrupt on and quickly off again... but for the reasons in the long and tiring text above, it seems to make sense. Probably, never having had formal education in embedded programming (or any sort of programming, for what it's worth) makes me come up with some weird - creative? - solutions...

    And thank you for the comment and making me think a bit further into it!

    Bruno
  • Bruno Saraiva said:
    Not using some sort of RTOS

    If I suggested you were that was a mistake on my part.

    Bruno Saraiva said:

    - The board receives a good deal of uart bytes at 921600 from 4 different ports. One of the ports reach up to 48,000 bytes per second, a couple will hear 6,000 bytes in a second, the last one receives eventual short event based messages. Two of the ports also send control messages, and sometimes relay bytes from one port to the other depending on the message content. There's a lot of buffering going on, and I want to triple check that the system is able to service everything. I've done some flooding/crash test with counters and learned that we are safely within the limit of processing capacity.

    OK, that's fairly easy to prioritize.

    1. The 48K port has the highest priority
    2. The 6K ports are next, they could share a priority
    3. The remaining port has the lowest priority.

    Assign your interrupt priorities accordingly.
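    In Cortex-M terms, that ordering could look like the sketch below (the enum names are invented; lower numbers preempt higher ones, and on TM4C parts only the top 3 bits of each priority byte are implemented, so useful values step by 0x20). Each value would be handed to the hardware via IntPrioritySet():

```c
#include <stdint.h>

/* Sketch of the suggested priority assignment.  Names are illustrative;
 * on Cortex-M a LOWER number means a HIGHER priority. */
enum uart_priorities {
    PRIO_UART_48K   = 0x00,   /* highest: the 48 Kbyte/s port        */
    PRIO_UART_6K_A  = 0x20,   /* the two 6 Kbyte/s ports can share   */
    PRIO_UART_6K_B  = 0x20,
    PRIO_UART_EVENT = 0x40    /* lowest: the occasional-message port */
};
```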

    Bruno Saraiva said:
    - The CriticalTask1 mentioned above is not that critical, actually; it is reading some registers from a gyro sensor. That's main-thread code (as opposed to inside any interrupt) and it is short enough (37us) that I can disable other interrupts and wait for it to be finished. Nothing fails if that code is interrupted, but leaving that thread to service a very likely UART RX interrupt 3 or 4 times during the task would just be wasted detours, so I figured I'd better disable the UART interrupts during such a task.

    That is at best premature optimization and I strongly suspect that it's provably wrong¹. There is no advantage to delaying the response of the higher priority interrupts, it simply raises the probability that they will fail.

    Robert

    ¹ I may be able to find a reference on that.

  • Robert,

    No, you did not suggest I would be using RTOS. It was just a hook from my own thinking.

    Sure, I did not consider defining the UART priorities according to the expected data flow... Thanks, will do!

    As for the premature optimization, here's what I considered:

    - The said "main thread function" takes 37us to run... at my UART speed, that's somewhere between 3 and 4 bytes. If a UART interrupt kicks in during that window, it will be "go there, copy the bytes to the buffer, come back"... whereas I'll surely have enough time to start servicing it right after the sensor function. So it SEEMS to me that, overall, ignoring the interrupt during that period is the more efficient policy.
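    (The 3-to-4 byte figure is easy to verify: with 8N1 framing each byte occupies 10 bit times on the wire, so at 921600 baud a byte takes about 10.85us and a 37us window spans roughly 3.4 bytes. A sketch, assuming 8N1:)

```c
/* Wire time per byte at a given baud rate, assuming 8N1 framing
 * (start bit + 8 data bits + stop bit = 10 bit times per byte). */
static double byte_time_us(double baud) { return 10.0e6 / baud; }

/* How many byte times fit in a window of the given length. */
static double bytes_in_window(double window_us, double baud)
{
    return window_us / byte_time_us(baud);
}
```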

    But as you can infer from all the above, I have not drawn timelines to study the different possibilities - it's an "implement, run, and rework it if it fails" approach (sounds worse than it really is...)

    Should you come across the mentioned reference, it will surely be welcome.

    BY THE WAY, how do I quote others' pieces of messages?

    Cheers,

  • Bruno Saraiva said:
    Should you come across the mentioned reference, it will surely be welcome.

    I will look.

    Bruno Saraiva said:
    BY THE WAY, how do I quote others' pieces of messages?

    1. Click on 'Use rich formatting'
    2. Set cursor in the edit box
    3. Use mouse to highlight what you want to quote
    4. Click the quote link at the bottom of the original message

    Robert

    Advanced technique, use the HTML button and <sup></sup> to insert footnote references.

  • Robert Adsett72 said:
    Use mouse to highlight what you want to quote

    Thanks! (using basic technique)

    And this is the "advanced part"

  • As (yet another) semi-advanced technique - the following is presented:

    Use of "ALT" and the keypad (together) enables:

    Ω  ...234

    µ  ...230

    °  ....248

    ≈  ...247

    ±  ...241

    Σ ...228

    These are all w/in the "Upper Half" of the standard ASCII code table.

  • Bruno Saraiva said:
    Shall you come accross the mentioned reference, it will be surely welcome.

    See

    e2e.ti.com/.../1510723

    for the first place I referenced it.

    From the article (near the bottom)

    https://e2e.ti.com/support/microcontrollers/tiva_arm/f/908/p/422113/1510723#1510723 said:
    Rate monotonic priority assignment is guaranteed to be optimal. If processes cannot be scheduled using rate monotonic assignment, the processes cannot be properly scheduled with any other static priority assignment.

    I take optimal to mean that tweaking will make it less optimal rather than more. I think efficiency searching is a red herring in this case, any gains are likely to be more than offset by complexity, bugs and sub-optimal performance elsewhere. If you are close enough to maximizing the cpu usage you likely have bigger problems than can be solved by this minor adjustment.

    Robert

    BTW, I have seen the claim elsewhere just not as easily referenced. I think I may even have a copy of the proof somewhere in my library.

  • Robert Adsett72 said:
    I think I may even have a copy of the proof somewhere in my library.

    Speaking of your "library" - has it not been reported that (several) - known to have entered that pristine facility - have (yet) to emerge?

    If memory serves - one was seeking "deadband" & (more recently) "alias" references...   (Or...I dreamed that?...)

  • Robert Adsett72 said:
    Rate monotonic priority assignment is guaranteed to be optimal.

    Robert, I read the post and the PK article, thanks. It makes perfect sense.

    Robert Adsett72 said:
    gains are likely to be more than offset by complexity, bugs and sub-optimal performance elsewhere

    True and accepted! One may think that "all the possible combinations have been considered and this tweaking is a performance gain!" - but that might be the case only until you blink one more LED in your process... Or not at all...

    Further, I'll surrender: should the required processing be so tight that 4 CPU cycles make a difference, there are two ways to solve it: try to find an unconventional tweak that may or may not work (and will cost a lot of time to test and prove), or spend some extra cents on the next CPU option... Isn't that what the Redmond folks do? "Our OS ain't bad, it's you who must replace your i7 core with an i70!"

  • Glad I could be of help Bruno.

    Bruno Saraiva said:
    should the required processing be so tight that 4 cpu cycles make a difference, there are two ways to solve it: try to find an unconventional tweek that can work or not (and will cost a lot of time to test and prove), or spend some extra cents on the next CPU option

    Or

    • Solve a different problem, i.e. maybe measure the beat frequency rather than measuring two different frequencies and subtracting, or use a quicksort rather than a bubblesort, or maintain the data sorted to begin with, or... This is obviously preferred.
    • Relax your timing. Maybe you can reduce the frequency of some tasks; maybe a diagnostic can be run every second rather than every 1/4 second, or...
    • Move some processing into hardware, e.g. analog filters may be less flexible and more subject to tolerance, but they don't load the CPU.
    • Solve a smaller problem. Do you REALLY need to cram an IoT stack into the same package?
    • And don't forget: MEASURE! MEASURE! MEASURE! Don't optimize until you know where the CPU time goes. Saving 4 cycles in a loop that runs 10,000 times in the fastest process gains you a lot; saving 4 cycles in an event that happens twice a day is pretty much irrelevant. It's easy to think you know where the CPU is spending its time; we're quite often wrong in those assumptions.
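    The measuring itself can be as simple as counting ISR entries and tracking the worst-case duration. Below is a host-testable sketch of that bookkeeping; read_cycles() is a stub here, and on a Cortex-M4 target it could be backed by the DWT cycle counter (an assumption about the setup, not shown):

```c
#include <stdint.h>

/* Minimal ISR profiling bookkeeping.  read_cycles() is stubbed with a
 * settable variable so the logic can be exercised on a host machine. */
static uint32_t fake_cycles;
static uint32_t read_cycles(void) { return fake_cycles; }

static uint32_t isr_count;        /* how many times the ISR ran   */
static uint32_t isr_max_cycles;   /* worst-case observed duration */

static uint32_t isr_profile_begin(void) { return read_cycles(); }

static void isr_profile_end(uint32_t start)
{
    uint32_t took = read_cycles() - start;
    isr_count++;
    if (took > isr_max_cycles)
        isr_max_cycles = took;
}
```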

    Bruno Saraiva said:
    ... Isn't that what the Redmond folks do? "Our op system ain't bad, it's you who must replace your i7 core for an i70!"

    Or, upgrade to the latest because .. Umm .. fins! That's it, it's got fins!

    Robert

    General rule of thumb, once you approach 80% of CPU usage you will have problems meeting your deadlines and the problems will get worse as CPU loading increases. It's not inevitable to run into problems at that loading but it's certainly time to be concerned that you are approaching the limits.