Hello
I have a program that is fully functional with I2C; the TI MSP430F5310 is the slave device. For the bootloader I remapped the interrupts to a proxy vector table. I noticed that when the main application runs its interrupt-driven I2C (slave) operation, the MSP430 clock-stretches because of the extra interrupt latency introduced by the vector re-routing.
Is there any way I can solve this latency problem? I know I can increase the clock speed, but what about a different interrupt-vectoring scheme, or polling the I2C instead of using interrupts (a rough sketch of what I mean is below)?
If I put the application's interrupt vectors in RAM, could the BSL continue to use the default interrupt table in flash? Would I still need a proxy vector?
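By polling I mean something roughly like the sketch below (F5xx USCI_B0 register names; the receive buffer and loop are just placeholders):

```c
#include <msp430.h>
#include <stdint.h>

static uint8_t rx_buf[32];                /* placeholder receive buffer */

/* Polled I2C slave receive: no ISR entry/exit latency, but the CPU is
 * tied up for the whole transaction. */
void i2c_slave_poll_receive(void)
{
    uint16_t i = 0;

    while (!(UCB0IFG & UCSTTIFG))         /* wait for our address + START */
        ;
    UCB0IFG &= ~UCSTTIFG;

    while (!(UCB0IFG & UCSTPIFG))         /* run until the master sends STOP */
    {
        if (UCB0IFG & UCRXIFG)            /* a data byte has arrived */
        {
            uint8_t b = UCB0RXBUF;        /* reading RXBUF clears UCRXIFG */
            if (i < sizeof rx_buf)
                rx_buf[i++] = b;
        }
    }
    UCB0IFG &= ~UCSTPIFG;
}
```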
Hi Vern,
Re-mapping the interrupts to RAM has the same latency as the flash vectors; the device fetches the ISR address directly from RAM.
But I suspect something else is causing the latency in your system. Is this the only interrupt enabled?
Even if you are using the proxy method, it should only add a few cycles (maybe ~5) to your latency. How much clock stretch do you see in your system?
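For reference, in an MSPBoot-style layout each proxy entry is just a single BR (MOV #addr, PC) instruction, which is where those few extra cycles come from. A rough sketch (the section name and ISR symbol names are assumptions; adapt them to your linker command file and toolchain, here CCS):

```c
#include <stdint.h>

extern void WDT_ISR(void);                /* application ISRs (assumed names) */
extern void USCI_B1_ISR(void);
extern void USCI_B0_ISR(void);

#define BRA_OPCODE  0x4030u               /* MOV #imm, PC  ==  BR #imm */

/* Fixed-location table in flash; the real interrupt vectors at 0xFF80-0xFFFF
 * point here, and each entry branches on to the application ISR. */
#pragma DATA_SECTION(ProxyVectorTable, ".APP_PROXY_VECTORS")
const uint16_t ProxyVectorTable[] =
{
    BRA_OPCODE, (uint16_t)&WDT_ISR,       /* proxy entry 0: watchdog      */
    BRA_OPCODE, (uint16_t)&USCI_B1_ISR,   /* proxy entry 1: USCI_B1       */
    BRA_OPCODE, (uint16_t)&USCI_B0_ISR,   /* proxy entry 2: USCI_B0 (I2C) */
};
```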
It's important to mention that the most critical latency is for the first byte when the MSP430 works as a slave transmitter, since you need to write TXBUF quickly in order to avoid clock stretching.
The situation is different for the rest of the bytes and for reception due to the double-buffered mechanism of USCI.
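In practice that means loading TXBUF as the very first thing in the slave-transmitter path and keeping the ISR short. A minimal sketch, assuming CCS/IAR interrupt syntax and a hypothetical tx_data buffer:

```c
#include <msp430.h>
#include <stdint.h>

static volatile uint8_t tx_data[16];      /* data returned to the master */
static volatile uint8_t tx_index = 0;

#pragma vector = USCI_B0_VECTOR
__interrupt void USCI_B0_ISR(void)
{
    switch (__even_in_range(UCB0IV, 12))
    {
    case USCI_I2C_UCSTPIFG:                       /* STOP: transaction finished */
        tx_index = 0;
        break;
    case USCI_I2C_UCTXIFG:                        /* master is reading from us  */
        UCB0TXBUF = tx_data[tx_index++ & 0x0F];   /* load next byte right away
                                                     (wrap kept simple for the sketch) */
        break;
    default:
        break;
    }
}
```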
Regarding your previous question, the default BSL doesn't use interrupts, but I'm not sure if you have a custom BSL or something.
In any case, the device always has the interrupt vectors in flash after a reset. If your application redirects them to RAM and you later need to use the flash vectors again (in your bootloader or somewhere else), you need to clear the SYSRIVECT bit.
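A minimal sketch of how the bit is typically handled (SYS module register and bit names from the F5xx headers; copying the application ISR addresses into the top of RAM is left to your application or linker setup):

```c
#include <msp430.h>

/* Application side: after the ISR addresses have been placed at the top
 * of RAM, switch the vector fetches from flash to RAM. */
void use_ram_vectors(void)
{
    SYSCTL |= SYSRIVECT;                  /* vectors are now read from RAM */
}

/* Before handing control to code that expects the flash vectors
 * (e.g. the bootloader), switch back. */
void use_flash_vectors(void)
{
    SYSCTL &= ~SYSRIVECT;                 /* default state after a reset   */
}
```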
Regards,
Luis R
It looks like small (1-byte / 2-byte) writes or reads work, but a block write/read fails immediately. The CPU jumps to address 0x4 (no symbols) and hangs there. The only three interrupts I have are:
watchdog, usci_b1_isr (pretty sure no interrupts are happening here), usci_b0_isr (this is the i2c slave)
All three have their addresses properly re-mapped to RAM.
Also, I am using Timer A0, but not with an ISR; I just poll a flag. Do I also need to re-route the vector for it? I would think not, since there is no ISR to call, right?
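For reference, the flag checking I mean looks roughly like this (a minimal sketch; the CCR0 period is just an example value):

```c
#include <msp430.h>

/* Poll the Timer_A0 CCR0 flag instead of taking an interrupt: CCIE is never
 * set, so no ISR runs and no vector entry should be needed. */
void wait_one_timer_period(void)
{
    TA0CCR0   = 32768 - 1;                         /* ~1 s at ACLK = 32768 Hz */
    TA0CCTL0 &= ~CCIFG;                            /* clear any stale flag    */
    TA0CTL    = TASSEL__ACLK | MC__UP | TACLR;     /* ACLK, up mode, clear    */

    while (!(TA0CCTL0 & CCIFG))                    /* poll, no interrupt used */
        ;
    TA0CCTL0 &= ~CCIFG;
}
```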
Oh, I'm not using a bootloader yet; just making sure the main application works first.
Question: is it best to have the bootloader use the default interrupt vectors in flash and the main application use interrupt vectors in RAM, or vice versa (bootloader uses the vectors in RAM, application uses the vectors in flash)?
Hi Vern,
It depends on your application, but I would usually protect the bootloader area and the interrupt vectors, leaving the interrupt vectors fixed to be used by the bootloader.
Regards,
Luis R