This thread has been locked.


Bits and variables only getting set during debug, not during execution?

Other Parts Discussed in Thread: MSP430G2553

I am having some very odd problems that have only recently started appearing while running code in an MSP430G2553.

Basically, what I am seeing are instances where variables and bits are not set while the device is running freely, but when I am debugging, if I break execution prior to the assignment statement(s) and then continue, either by single-stepping or running, the bits are assigned correctly.

For instance, I have a routine that re-enables an interrupt by setting P2IE bit 7, then sets a flag variable to indicate that the interrupt is enabled. I noticed during free-running execution that it would never enter the interrupt. The routine was executing, because I could see the flag variable getting set, but looking at the registers in CCS after a halt, P2IE bit 7 was still 0. I've checked that no other function clears that bit.

The interesting thing is, if I break the program right before it executes that routine, and then continue, or even single-step, it WILL set the bit.

Another example from the same program: in a SPI routine using the USCI_B interface, the MSP430 gathers data from a sensor and writes it to an external EEPROM. This previously worked fine. I later noticed that the firmware was getting stuck checking a status bit in the EEPROM. The procedure, per the data sheet, is to enable writes via the status register, perform the write, disable writes, and then poll the status register's write-in-progress (WIP) bit before continuing. When I ran the program from debug and paused it, it was clearly stuck in that routine, constantly checking the status register. I put a logic analyzer on the SPI bus, and the bus was working correctly: a RDSR followed by the status register contents on MISO. The SR value on the bus was 0x02, and WIP is bit 0, indicating that the write was complete. Yet when I paused the debugger, the temp variable holding the status register read was 0xFF.

I know the SPI config is correct because I can see it directly on the logic analyzer. Here's the kicker - if I break the debugger just before it reads the status register, it will correctly read a value of 0x02 whether I continue or single-step, and the program will proceed.

The way I worked around it was by putting an extra dummy 0x00 write on the SPI bus before reading UCB0RXBUF. I assure you, I am checking TXIFG, RXIFG, and UCBUSY; this does not appear to be a SPI timing issue.
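At the register level, the workaround looks roughly like this. This is a hedged sketch against the G2553's USCI_B0 registers; the function name, structure, and CS handling (omitted) are illustrative, not the original firmware.

```c
#include <msp430.h>

/* Workaround sketch: flush any stale byte from UCB0RXBUF before the
   real exchange, so the returned status byte is not one frame behind. */
static unsigned char spi_read_status(void)
{
    unsigned char sr;

    while (UCB0STAT & UCBUSY);   /* wait out any prior frame */
    (void)UCB0RXBUF;             /* flush stale RX data (clears UCB0RXIFG) */

    while (!(IFG2 & UCB0TXIFG));
    UCB0TXBUF = 0x05;            /* RDSR opcode */
    while (!(IFG2 & UCB0RXIFG));
    (void)UCB0RXBUF;             /* discard byte clocked in with the opcode */

    while (!(IFG2 & UCB0TXIFG));
    UCB0TXBUF = 0x00;            /* dummy byte clocks the SR out */
    while (!(IFG2 & UCB0RXIFG));
    sr = UCB0RXBUF;

    return sr;
}
```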

I suspect that something is wrong with the chip or the compiled code. Neither of these things should be happening, and the fact that they don't happen while I am debugging suggests to me that it may not be directly related to the firmware.

What could cause these issues? Any help would be appreciated.

  • I have seen people "debug" object code that they do not "release", and "release" the object code they did not "debug". Yes, the "debug" and "release" object codes are both compiled by the same compiler from the same source code, but the optimization levels are different.

    Are you doing that?

  • No, I have always been using the "debug" configuration on this project. 

  • Is Vcc high enough for the MCLK when you are running without the debugger?

  • I'm using the launchpad debugger interface. Vcc = 3.55V.

  • When you say things are different when you are not debugging: do you disconnect the FET debugger, or leave it connected? The first thing we need to know is whether actual debugging causes the difference, or just the FET's connection to your circuit.

    Apart from debug/release compilation differences, which lead to different optimization levels that can affect not only execution timing but even program flow, there's more. When you debug, the FET debugger performs the chip reset. When your circuit runs alone, it is responsible for the chip reset itself. Perhaps this is a reset problem. Please describe (in short): the VCC voltage and ramp-up time, the reset circuit, the uC clock frequency, and how and when you set it.

  • Actually, let me describe the conditions better:

    I have the target board wired to the debugging interface of the Launchpad module. When I say debugging, I mean just that: running the code from debug in CCS. I HAVE also disconnected the board to test it on its own (loading the same debug build, then disconnecting and powering up), and the behavior is the same. The only reason I know exactly what is happening is that I am running the code, pausing it, and checking bits and bytes, which are seemingly changing state outside of the firmware.

    The MSP is running in active mode at 16MHz; the clock system is initialized with the CALDCO and CALBCS constants at power-up. The reset pin is pulled high and brought to a test point, where the debugger connects. The board gets its power from the Launchpad. There are many power modes and states in this FW, but the scenario only occurs while active, i.e. I have narrowed the conditions under which it occurs.

    Let me also emphasize that these functions were working correctly before. They have not been changed, only additions to the remainder of the FW.

    I am in the process of setting watchpoints for those bits and bytes, but CCS is apparently picky about when you can enable these (I get messages about limited resources and having to disable SW breakpoints).

  • Scott Wohler said:
    The MSP is running in active mode at 16MHz, by initializing the clock system with CALDCO and CALBCS on power up.

    If you do it right after power-on, then you are doing it wrong. The CPU starts when VCC is around ~1.6 V, which is not enough for 16MHz. So you must run __delay_cycles() at the default DCO frequency to pause for some 100ms or so, in the hope that during that time VCC reaches its nominal 3.5 volts.

    [edit] First disable the watchdog, then pause 100ms, then set the DCO to 16MHz.
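    In code, that start-up order looks like this. A sketch for the G2553 using the factory calibration constants; the cycle count assumes the roughly 1.1 MHz default DCO frequency, so adjust to taste.

    ```c
    #include <msp430.h>

    void clock_init_16mhz(void)
    {
        WDTCTL = WDTPW | WDTHOLD;   /* 1. stop the watchdog first */

        __delay_cycles(110000);     /* 2. ~100 ms at the ~1.1 MHz default
                                       DCO, letting VCC ramp to nominal */

        BCSCTL1 = CALBC1_16MHZ;     /* 3. load the factory calibration */
        DCOCTL  = CALDCO_16MHZ;     /*    and switch the DCO to 16 MHz */
    }
    ```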

  • old_cow_yellow said:
    Yes, the "debug" and "release" object codes are both compiled by the same compiler from the same source code, but the optimization levels are different.

    Not necessarily. The two main differences (and even these are just default settings and can be changed) are that in debug mode, the compiler gets a predefined symbol "_DEBUG" which can be used for conditionally compiling debug code into the binary; in release mode, this code won't be compiled. Also, in release mode, most of the debug information is removed from the output file.

    Besides this, you may define different optimization levels, but then it isn't a good idea to debug and test with a different optimization setting than the release build, even though switching optimization off can be helpful for debugging some problems.

    However, a possible reason for a cleared IE bit when halting free-running code, while it stays set when single-stepping, is a bug in code that resets the processor. It may have to do with different optimization levels (which in turn usually indicates fragile code), or with a race condition that crashes when running freely but not when the debugger has control.
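    The "_DEBUG" mechanism mentioned above is ordinary conditional compilation; any symbol set in the project's predefined-symbols list behaves the same way. A minimal illustration:

    ```c
    #include <stdio.h>

    int main(void)
    {
    #ifdef _DEBUG
        puts("extra diagnostics: compiled only in the Debug configuration");
    #else
        puts("lean build: debug-only code stripped at compile time");
    #endif
        return 0;
    }
    ```

    Built without _DEBUG defined, the debug branch never reaches the binary at all, which is one way Debug and Release object code can legitimately differ.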
