According to the manuals, the TI-sanctioned way of disabling and re-enabling interrupts around a critical section is:
void CriticalFn(void)
{
    __istate_t s = __get_interrupt_state();
    __disable_interrupt();
    /* Do something here. */
    __set_interrupt_state(s);
}
In in430.h, these intrinsics are implemented as inline assembly:
#define _get_interrupt_state() \
({ \
    unsigned int __x; \
    __asm__ __volatile__( \
        "mov SR, %0" \
        : "=r" ((unsigned int) __x) \
        :); \
    __x; \
})

#define _set_interrupt_state(x) \
({ \
    __asm__ __volatile__ ("nop { mov %0, SR { nop" \
        : : "ri"((unsigned int) x) \
        ); \
})
As one can see, "s" receives the value of the whole status register, not just the GIE flag. A subsequent _set_interrupt_state() therefore clobbers any condition flags that changed inside the critical section. This is bad, because the assumptions the optimiser makes about the condition flags are no longer valid. Even worse, the low-power-mode flags (SCG0, SCG1) are overwritten as well if they were changed inside the critical section.
Proposed fix:
Add a
"and #GIE, %0"
instruction to _get_interrupt_state() and change _set_interrupt_state(x) to
"nop { bis.w %0, SR { nop"
or just be honest about what _set_interrupt_state() does and add "cc" to the clobber list.