This issue appears to be related to the linked issue, but that thread was closed. I am running a modified version of the Simple Serial Socket Server application and am encountering a hang. I have gathered some additional information by adding breadcrumbs to the program to track what is going on when the hang occurs.
The symptom is sometimes a hang in cryptoTransactionPoll, but it looks like the OS is attempting a context switch during an I2C interrupt.
The sequence of events appears to be:
I2C slave IRQ -> application call to add the received I2C data to a message -> message placed on a queue using the List_put API -> Event_post to notify the SerialSocketServer task to check the message queue.
The SP is pointing to a value inside the ICall_taskEntry task stack, rather than CSTACK. The BIOS "Scan for errors" shows an "SP outside stack!" message, but I believe this is because the SP was set to an area inside the ICall_taskEntry stack during the I2C interrupt. CSTACK and all of the hardware stacks appear to have plenty of margin (CSTACK still has 0x480 bytes set to the fill pattern at the base).
When the debugger halts the application after the hang, the IPSR is 0x11, indicating the I2C interrupt.
The SP register is set to a value inside the ICall_taskEntry stack.
The breadcrumbs indicate the Event_post function was called but never returned to the IRQ context.
The call stack from the debugger shows LL_ENC_EncryptMessage -> cryptoTransactionExecute -> cryptoTransactionPoll.
The SyncEvent for the SSSS task (returned from ICall_registerApp and passed to Event_post) does not appear to be corrupted; it is still at the same value as when it was registered.
Code added to ICALL_Malloc and ICALL_Free did not find any allocations or frees outside of the heap range.
This thread https://e2e.ti.com/support/legacy_forums/embedded/tirtos/f/355/p/195796/698945 indicates that Event_post can be called from an interrupt as long as that interrupt is registered (it is registered via I2CIntRegister). Additionally, the registered IRQ handler is not marked as __interrupt.
Does something else need to be done with the interrupt priority to prevent a context switch during execution of the I2C interrupt? Is there anything else that could explain this behavior?
I2C code snippets:

static void setupI2CSlave(void)
{
    I2CSlaveInit(I2C0_BASE, I2C_SLAVE_ADDRESS);
    I2CSlaveEnable(I2C0_BASE);
    I2CIntRegister(I2C0_BASE, I2CIntHandler);
    I2CSlaveIntEnable(I2C0_BASE, I2C_ALL_IRQ_MASK);
}

void I2CIntHandler(void)
{
    uint32_t irqStatus = I2CSlaveIntStatus(I2C0_BASE, true);

    if (irqStatus & I2C_SLAVE_INT_START) {
        i2c_startIrqHandler();
    }
    if (irqStatus & I2C_SLAVE_INT_DATA) {
        i2c_dataIrqHandler();
    }
    if (irqStatus & I2C_SLAVE_INT_STOP) {
        i2c_stopIrqHandler();
    }

    // Check master IRQ
    if (I2CMasterIntStatus(I2C0_BASE, true)) {
        i2c_masterIrqHandler();
    }

    i2cIrqState = 0;
}
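For comparison, my understanding is that an ISR which calls kernel APIs such as Event_post is normally routed through the SYS/BIOS Hwi dispatcher rather than plugged directly into the vector table. A static-config sketch of registering the same handler that way is below; this is an assumption on my part, not a confirmed fix, and the module path and interrupt number (25, driverlib's INT_I2C_IRQ on CC26xx) would need to be checked against the actual device and kernel version:

```
/* app .cfg sketch -- registers I2CIntHandler through the SYS/BIOS Hwi
 * dispatcher instead of plugging the vector with I2CIntRegister().
 * Module path and interrupt number are assumptions; verify both. */
var Hwi = xdc.useModule('ti.sysbios.family.arm.m3.Hwi');
var hwiParams = new Hwi.Params();
Program.global.i2cHwi = Hwi.create(25 /* INT_I2C_IRQ */, '&I2CIntHandler', hwiParams);
```

If the handler were registered this way, the I2CIntRegister() call in setupI2CSlave() would presumably be dropped; whether that is actually what is missing here is part of my question.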