Starterware/TM4C129ENCPDT: Why does TivaWare default its optimisation level to O2?

Part Number: TM4C129ENCPDT

Tool/software: Starterware

Hi there,

For my own project, I changed the optimisation level to OFF and the program ran into the FAULT ISR; recompiled at optimisation level 2, it runs fine. Any idea why that is?

Why does it default to O2, and what about the speed vs. size trade-off setting? What number should I set it to?

Regards!

Ping

  • Hello Ping,

    Let me be 100% sure I am understanding this correctly.

    When optimization is turned OFF, the code does NOT run correctly.

    When optimization is turned ON, the code runs as expected.

    Is that true?

    I ask because that is backwards from any other case I can think of... usually turning on optimizations is the cause of problems; I cannot recall switching them off ever being one.

    Given the issue is related to the FAULT ISR though, I think you should investigate whether you are running into ISSUE #2 from this posting: e2e.ti.com/.../374640
  • Ralph Jacobi said:
    I ask because that is backwards from any other case I can think of... usually turning on optimizations is the cause of problems; I cannot recall switching them off ever being one.

    Oh, I've run into cases where turning off optimizations causes apparent misbehaviour. Changing optimization can reveal bugs (especially heisenbugs) regardless of whether you "increase" or "decrease" the optimization level**.

    In most cases these are latent bugs in the application (and I have seen it in commercial, presumably highly tested, applications); only in a minority of cases has it been a compiler bug. (A classic example is sketched at the end of this post.)

    For this reason I say you should

    • never test* at any optimization level other than your release optimization
    • never modify your optimization on a module-by-module basis

    Robert

    * And only rarely debug. If you cannot debug at the release optimization level you need to either change the release optimization or learn better techniques for debugging.

    ** Really, increase and decrease are the wrong terms
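
    A minimal sketch (invented for illustration, not from this thread) of the classic latent bug described above: a flag shared with an ISR but not declared volatile. At one optimization level the compiler happens to re-read the flag from memory; at another it caches the value in a register, so behaviour changes even though the bug was always there. The ISR name is hypothetical.

        /* BUG: g_data_ready should be declared "volatile bool". */
        #include <stdbool.h>

        static bool g_data_ready;

        void UART_IRQHandler(void)      /* hypothetical ISR name */
        {
            g_data_ready = true;        /* set from interrupt context */
        }

        void wait_for_data(void)
        {
            /* Without volatile, the compiler may hoist the load out of
               the loop and spin on a stale register copy forever. */
            while (!g_data_ready)
            {
                /* spin */
            }
        }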

  • Hi Robert,

    Hmm, good to know! I've had many, many instances where 'cranking up' the optimizations caused bugs to be revealed, but had yet to observe the inverse! Thanks for the enlightening information. And yes, bugs are almost always application related... I can attest.

    I 100% agree that testing should be done based on intended release optimization level. Found that one out the hard way early in my career. Though such lessons were also useful for long term growth.

    Actually, that note dovetails into the topic of why TivaWare is released at that optimization level. Frankly, I don't know why that exact level was chosen, as I was not involved in any of its development. If this issue proves to be deeper rooted than the commonly encountered "ISSUE #2" from our Common Problems thread, I may need to find out more about the choice and whether there were any known consequences of changing it.

    I wonder if there was much, if any, testing done at other optimization levels, for the exact reason Robert listed (though I feel we could debate whether that practice is as valid for a driver library as for an end application, given the nature of the scope of delivery...).
  • *** LIKE ***
    Those pesky "heisenbugs" prove - most usually - highly uncertain... (word alone deserves the (improperly banned LIKE))

    Crack staff (surely) will (now) quickly claim, "heisenbugs" as the cause of their "heisenerrors!"     (see what you've started, Robert?)

    BTW - my findings/beliefs were (much) the same as vendor Ralph's...

    Poster must realize that the discovery of an issue (especially a fault) now - rather than later - is "much to be desired" and should lead to cleaner & more robust code...

  • Ralph Jacobi said:
    I wonder if there was much, if any, testing done at other optimization levels, for the exact reason Robert listed (though I feel we could debate whether that practice is as valid for a driver library as for an end application, given the nature of the scope of delivery...)

    I agree, Ralph. I think you can make a very good argument that libraries should be tested at multiple optimization settings (perhaps with indications of which levels the testing was performed at). I can understand why that is not common, though; the number of variations would be large.

    Robert

  • Thanks, Ralph

    and glad to see so many responses. Yes, your understanding is correct.

    The reason I turned the optimisation level to OFF is that at O2 the debug functions (stepping, breakpoints) do not behave well; see my other post here -

    And this is the first time I have noticed the optimisation level change code behaviour. Thanks for pointing out ISSUE #2, but I don't think it is related: the application passes through the initialisation stage and runs fine. I am still trying to catch which bit of code causes the FAULT ISR - which is hard; it seems to happen only when certain functions run.

    Any suggestions as to why that is? Thanks!

    Ping 

  • Ping Wang said:
    I am still trying to catch which bit of code causes entry to, "FAULT ISR" - which is hard, seems only (to occur) during certain functions.

    Good that you've made such an observation. This vendor has a detailed App Note which describes methods by which you can gain deep insights into the (real) cause of the MCU Fault. (Somehow the recent (claimed) forum "upgrade?" neglected any attempt to make the finding of such key documents comfortable, convenient, or (God forbid) "quick & easy" for "LIKEless" client users!)

    The red stripe (Style Guide) atop the forum page provides (highly sought) "Blogs, Groups, TI Training" - and (surprisingly/disappointingly) NOT a SINGLE LINK to such VITAL TECH DOCS! Can the (claimed/LIKEless) "upgrade?" prove anything other than a mirage?

    Absent vendor's critical App Note - you may systematically remove "suspect functions" (ones which you believe cause entry into the "Fault ISR") and try to discover the causative code event. Most common is a too-small MCU stack - your use of the Forum Search Box (again atop the forum page) should reveal past posts which model how the stack may be increased...
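
    Absent that App Note, one common technique for gaining the insight described above is a fault handler that recovers the stacked exception frame so the faulting PC can be read in the debugger. A minimal sketch follows, assuming arm-none-eabi-gcc (the naked attribute and inline assembly are GCC-specific); FaultISR matches the default handler name in the TivaWare startup files, but everything else here is illustrative.

        #include <stdint.h>

        volatile uint32_t fault_pc;    /* address of the faulting instruction */
        volatile uint32_t fault_lr;    /* stacked link register                */
        volatile uint32_t fault_cfsr;  /* Configurable Fault Status Register   */

        /* Decode the exception frame; parked here, the globals above can be
           inspected with the debugger (or dumped over a UART). */
        void fault_decoder(uint32_t *frame)
        {
            fault_lr   = frame[5];
            fault_pc   = frame[6];
            fault_cfsr = *(volatile uint32_t *)0xE000ED28u;  /* CFSR */
            while (1)
            {
            }
        }

        /* Naked so the compiler prologue cannot disturb the stack before we
           capture it. Bit 2 of EXC_RETURN (in LR) selects MSP vs PSP. */
        __attribute__((naked)) void FaultISR(void)
        {
            __asm volatile
            (
                "tst lr, #4      \n"
                "ite eq          \n"
                "mrseq r0, msp   \n"
                "mrsne r0, psp   \n"
                "b fault_decoder \n"
            );
        }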

  • Try compiling with both TI's ARM compiler and GCC. When compiling with GCC, use -Wall. You may see new or different warnings.
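
    A small invented example of the kind of defect -Wall surfaces that a default build may accept silently (the exact wording, and whether the uninitialized-use warning fires, varies with compiler version and optimization level):

        #include <stdio.h>

        static int scale(int x)
        {
            int factor;             /* -Wall: "factor" may be used
                                       uninitialized when x <= 0 */
            if (x > 0)
            {
                factor = 2;
            }
            return x * factor;
        }

        int main(void)
        {
            int y = 0;
            if (y = scale(3))       /* -Wall: suggest parentheses around
                                       assignment used as truth value */
            {
                printf("%d\n", y);
            }
            return 0;
        }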
  • twelve12pm said:
    Try compiling with both TI's ARM compiler and GCC. When compiling with GCC, use -Wall. You may see new or different warnings.

    Or better yet, use a proper static analyzer like PC-Lint.

    More complete, easier to understand and more easily tuned to proper coverage.

    Robert

  • PC-Lint is a great tool for uncovering difficult-to-find bugs. We use it. Note that it cannot detect errors like failure to turn on a peripheral before attempting to use it. That is one thing that will put you in Fault ISR.
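
    The enable-before-use pattern in TivaWare looks like the sketch below (SysCtlPeripheralEnable, SysCtlPeripheralReady and GPIOPinTypeGPIOOutput are real driverlib calls; the choice of port N and the function name are just examples). Touching a peripheral's registers before its clock is enabled is a classic route into the Fault ISR on these parts.

        #include <stdbool.h>
        #include <stdint.h>
        #include "inc/hw_memmap.h"
        #include "driverlib/sysctl.h"
        #include "driverlib/gpio.h"

        void init_led(void)
        {
            /* Clock the GPIO port, then wait until it is ready. */
            SysCtlPeripheralEnable(SYSCTL_PERIPH_GPION);
            while (!SysCtlPeripheralReady(SYSCTL_PERIPH_GPION))
            {
            }

            /* Only now is it safe to touch the port's registers. */
            GPIOPinTypeGPIOOutput(GPIO_PORTN_BASE, GPIO_PIN_0);
        }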

    The more testing you can do on your code, the better off you'll be. This includes static testing (PC-Lint, compiler warnings), dynamic testing (automated test suite, fuzzing), running on multiple compilers and hardware platforms, and on and on.

    But if that is not available, the easiest and quickest thing to do is compile with another compiler and see what warnings it outputs.

    Another thought on Fault ISR: It is possible that the optimization level is not related to the failure. Suppose you ran firmware that turned on a peripheral. Suppose you later did a core reset (not a system reset), which clears the core but leaves the rest of the chip in whatever unknown state. Suppose you later ran the -O2 version, which worked. Then, suppose you did a system reset or powered off and on. The peripherals are powered down until turned on explicitly. And finally, suppose you run the -O0 version of the software, and it goes to Fault ISR. This is just one example sequence that could make you believe you're losing your mind. Remember to always do a system reset, not a core reset. I would re-try the -O2 and -O0 versions.
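
    From firmware, the "system reset" recommended above can be forced with TivaWare's SysCtlReset(), which requests a full device reset (core and peripherals); a core-only reset is typically something the debugger performs. A minimal sketch:

        #include <stdbool.h>
        #include <stdint.h>
        #include "driverlib/sysctl.h"

        void reboot_clean(void)
        {
            /* Never returns: resets the core AND the peripherals, so no
               peripheral state leaks into the next run. */
            SysCtlReset();
        }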

    Yet another thought: It is possible you are compiling with different sets of predefined symbols, or linking with different sets of libraries, in your Release and Debug builds. I would check project configuration and compare to make sure everything (except optimization level and debug code output) is identical, or (more easily) create a new project and bring the existing code into it.
  • twelve12pm said:
    PC-Lint is a great tool for uncovering difficult-to-find bugs.

    It's even better for preventing them.

    It should not be an after-the-problem salve but rather an ongoing supplement to provide what's otherwise missed.

    twelve12pm said:
    Note that it cannot detect errors like failure to turn on a peripheral before attempting to use it. That is one thing that will put you in Fault ISR.

    I think it could be, with some forethought in library construction. Not sure it would be worth it, but I'd have to think about it. You may well end up with better code.

    In this case though I agree. TDD is the better tool for this.

    Lint first, test second, then compile for the target. If you still have legitimate compiler warnings by the time you get to the compiler, you're probably doing something wrong.

    twelve12pm said:
    But if that is not available,

    If it's not you really should make it available. It's one of the essential tools.

    twelve12pm said:
    It is possible you are compiling with different sets of predefined symbols, or linking with different sets of libraries, in your Release and Debug builds

    Never have a debug build. Always build to release.

    Robert

  • Robert Adsett72 said:
        twelve12pm said:
        It is possible you are compiling with different sets of predefined symbols, or linking with different sets of libraries, in your Release and Debug builds

    I wonder what is your technical rationale on this one. If you always build to release, then how are you supposed to switch quickly to a build that contains debug information and where source-level single stepping is possible? Or are you so good that you NEVER use a debugger?

  • twelve12pm said:
        Robert Adsett72 said:
            twelve12pm said:
            It is possible you are compiling with different sets of predefined symbols, or linking with different sets of libraries, in your Release and Debug builds

        I wonder what is your technical rationale on this one. If you always build to release, then how are you supposed to switch quickly to a build that contains debug information and where source-level single stepping is possible?

    That's the point: you do not switch builds; you debug the production release. You can have a release build with debug information. They are not mutually exclusive.

    You can also step through the code (single stepping is overrated) if necessary.

    twelve12pm said:
    Or are you so good that you NEVER use a debugger?

    If you lint first, then use unit testing, and then compile, much of the need for one disappears. For the running code there are many debug techniques that do not require a debugger; some with less intrusion, some with more.

    The items where you need to get in and observe the code behaviour directly do not benefit from single stepping.

    Yes, I have stepped through optimized code on occasion, but it's rare. Observation and inspection are generally more productive than poking at things. (One low-intrusion example is sketched below.)
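
    One such low-intrusion technique is an in-RAM event trace, written as the code runs and inspected afterwards with no breakpoints and no stopping. A minimal invented sketch:

        #include <stdint.h>

        #define TRACE_DEPTH 64u   /* must be a power of two for the mask */

        static volatile uint32_t g_trace[TRACE_DEPTH];
        static volatile uint32_t g_trace_idx;

        /* Drop an event ID into the ring buffer; cheap enough to leave in
           release code. Dump g_trace from the debugger or over a UART. */
        static inline void trace(uint32_t event_id)
        {
            g_trace[g_trace_idx & (TRACE_DEPTH - 1u)] = event_id;
            g_trace_idx++;
        }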

    When I started* an ICE was generally 5 figures or more (and inflation has eaten away at that). Even the adapters could be four figures and they would be soldered to the board, fragile and non-reusable. This was not a resource to use lightly. As a result I may have learned techniques that people starting out don't value. I've also worked with systems that you simply cannot stop**, although not as extreme as some have.

    And with some (at least) of the modern tools there is considerable information glean-able without needing to step through the code or even set a breakpoint.

    Robert

    * The last time I single stepped an embedded application was startup assembly code some years ago. For that you don't even need symbol information. I have single stepped PC applications, largely because the object models were not well or completely documented, although for some of that, logs were the most powerful investigative tool.

    ** Bad things happen if you stop the code

    And some of my recent work has been on platforms where it is simply not possible to step through the code. There is no method to allow that.

    Robert