This thread has been locked.


CC430F5137 stack overflow with CC430F6137 example project (IAR)

Other Parts Discussed in Thread: CC430F5137, CC430F6137

I have loaded the example RF project from the CC430F5137 page.  I can load it on the CC430F6137 eval board and on my own 5137 hardware, and it runs fine.  But if I choose the 5137 processor from the device list, I instantly get a stack overflow error message when I run it on either board.  Is there some other IAR option I need to set when changing from the 6137 to the 5137?

  • There's nothing you can do against a stack overflow except careful programming.
    If you call a function or have local variables, the stack WILL grow, and it shrinks again when the local variables go out of scope and the function returns.
    The compiler itself neither knows nor cares about any limit on how far the stack may grow.

    The linker just checks whether all variables fit into the RAM and places the stack at the top of RAM. So as long as the stack doesn't grow larger than the unused RAM, all is well. If it does, things go wrong.

    There is a linker setting that issues a warning if the total RAM minus the used data space is smaller than a given value (reserved for the stack), but this does not ensure that your stack won't grow beyond this amount. It is just a check that at least the given amount is available; if not, you get warned.
    The debugger can also use this setting to detect when the stack pointer has crossed this point, but only if such a check is supported by the debugging interface of the processor (the Embedded Emulation Module, EEM, if present).

    It's up to you to determine this amount by worst-case analysis (maximum call-nesting level of your code, etc.).

    Either one processor has less RAM than the other, or your code has always generated a stack overflow and you just didn't notice. Or the threshold setting is too high for one processor but not for the other, while your program never fully used the 'reserved' amount (so it worked); in that case you now get a stack overflow warning while there is still some reserve left.

    You can play with the stack size setting somewhere in the project (linker or debugger) settings, but you should rather calculate it thoroughly and use some safety margin.

  • Thank you for your great response.

    It seems like a different problem, as I can increase the stack to 2K and I still get the warning before the first line of code executes.  The project is identical; the only parameter I change between builds is the processor.  The two processors are identical except that one has the LCD module.  I looked at the linker files and they are identical too.

     

    It baffles me.

     

    M

  • I don't know these processors, nor the debugger (I work with mspgcc and usually without a debugger, as a debugger is useless when working with real-time external signals; my debugging tools are an oscilloscope and a signal analyzer).

    I checked the datasheets and indeed the two should be identical (at least related to memory organisation).

    So I'm just guessing...

    The warning seems to indicate that the current stack pointer has moved past the assumed threshold point.

    One case where this will happen is when the stack pointer is initialized to a different address, or is assumed (by the debugger) to be initialized at a different address.

    One example:

    Processor A has 2 KB of RAM, from 0x1000 to 0x1800.
    Processor B has 4 KB of RAM, from 0x1000 to 0x2000.

    If the program is compiled for processor A, the stack pointer will be initialized to 0x1800. This works perfectly on both processors.

    The stack size is set to 256 bytes. So if the debugger starts with the A setting, the threshold is at 0x1800 - 0x100 = 0x1700, and all is well. Now start the debugger with the B setting and the threshold will be assumed (by the debugger) to be at 0x2000 - 0x100 = 0x1f00. Note that this is just a debugger setting and NOT in any way part of the code. The stack pointer will still be initialized to 0x1800 (this usually happens at the end of the initialization code that copies the variable initializers from flash to RAM, right before jumping to main()). The very moment the stack pointer is initialized, the debugger will notice that it is well below the assumed threshold of 0x1f00 and give a stack overflow warning.
    This does not mean the stack has overflowed, just that the stack pointer was found below the set threshold level.

    In the above case, the stack size setting would need to be 2 KB + x to not issue a warning, since the RAM size differs by 2 KB. And of course on the system with less RAM it would then never issue a warning, even if the RAM were completely overwritten.

    My guess (if the problem indeed has to do with the process explained above) is that the linker file for the 5137 processor has a bug and places the end of RAM 2 KB lower than it really is. Maybe it is a copy/paste typo and uses the 5135 hardware settings, or something similar.

    You can check the linker files for the segment sizes, but I cannot help you with this. Or you can try to get a memory map printout from your binary, to check whether the full 4 KB of RAM is really used or only 2 KB. During operation you won't notice until you need more than 2 KB for data and stack, but the debugger will complain.

     
