
Difference in compilation results between the TI-ARM compiler and the Long Term Support compiler

Other Parts Discussed in Thread: RM46L852

Hi All,

I am developing RM46L852 software using CCS v6.1. I had been using the TI-ARM compiler with the -O0 optimization level.


I recently switched from my TI-ARM compiler (version 5.2.7) to a TI LTS compiler (15.12.2). The optimization settings were kept identical (-O0). When I flashed my software onto the target, I saw some strange behavior: the software no longer ran as expected.

I then disabled optimization entirely (off) while keeping the freshly installed LTS compiler; my software returned to its normal behavior.

--> It looks as though enabling -O0 optimization disturbs my software.

I performed further tests with older versions of both the TI-ARM and the LTS compilers. The behavior was identical:

--> the software runs properly when built with the TI-ARM compiler, whatever the optimization level (off or -O0);

--> as soon as -O0 optimization is enabled with the LTS compiler, the software stops working.

Do you have any clue that could explain what is happening? What kinds of optimizations does the optimizer perform (which registers are handled) that could lead to such different behavior? I cannot understand why the two compilers behave inconsistently.

Thanks for your help!

  • There are two possibilities to consider.  One, you are experiencing a compiler bug that first appears in version 15.12.x.LTS.  Two, you have a bug in your code that does not get exposed until you use optimization in the 15.12.x.LTS compiler.  Regardless of which possibility it is, the cause of the problem must be narrowed down.

    Consider using this method to narrow down the cause of the problem.  Use file specific build options to disable optimization for one source file at a time.  When everything starts to work, you know the problem is associated with that file.  Within that file, you can narrow the source of the problem further by disabling optimization on a function specific basis.  Use the #pragma FUNCTION_OPTIONS for that.  Read about it in the ARM compiler manual.  I realize this method could be a lot of work.  I'm sorry about that.  But I don't have any better ideas.
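    As a sketch of the per-function narrowing step, the pragma can be applied like this (`suspect_function` is a placeholder name to replace with your own; see the ARM compiler manual for the exact `FUNCTION_OPTIONS` syntax and supported option strings):

    ```c
    /* Sketch: disable optimization for one suspect function at a time
       with the TI ARM compiler's FUNCTION_OPTIONS pragma.
       "suspect_function" is a placeholder; other compilers simply
       ignore this TI-specific pragma. */
    #pragma FUNCTION_OPTIONS(suspect_function, "--opt_level=off")

    unsigned short suspect_function(unsigned short x)
    {
        /* The body is unchanged; only this function's build options differ. */
        return (unsigned short)((unsigned short)(x << 8) | (x >> 8));
    }
    ```

    Once the failing function is pinned down this way, you can diff its generated assembly with and without optimization to see exactly what changed.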

    Thanks and regards,

    -George

  • Hi George,

    I investigated my issue a bit further.

    I performed all of the investigations below with the LTS 15.12.2 compiler.
    I tried both the "off" and "-O0" optimization settings successively to identify the exact conditions under which the issue was reproducible.

    Step by step, I identified the file and the function where optimization was causing the issue I observed.
    The issue is linked to the function inet_chksum_pseudo in inet_chksum.c (Texas Instruments lwIP port).
    Looking at the .lst assembly file, I finally found that the macro SWAP_BYTES_IN_WORD is compiled "strangely" by the LTS compiler.
    --> When optimizations are enabled, the LTS compiler uses the REVSH instruction for this macro; I believe it should be REV16 instead.

    The issue can be reproduced easily with the following C function:

    unsigned long swapBytesInWord(unsigned long us)
    {
        return 0xfffflu & (((us & 0xfflu) << 8u) | ((us & 0xff00lu) >> 8u));
    }

    With -O0 optimization enabled, the LTS compiler produces the following assembly:

    REVSH A1, A1
    BX    LR

    Even when the masks are cast to 16-bit types, the compiler uses the REVSH instruction, which sign-extends the result to 32 bits.
    It should use the REV16 instruction instead.
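    To make the effect concrete, here is a small host-side sketch (my own illustration, not from the thread) comparing the intended zero-extended swap with what REVSH, which reverses the bytes of the low halfword and then sign-extends bit 15, effectively computes:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* The byte swap as written in the test case: the result is always
       zero-extended to 32 bits by the 0xFFFF mask. */
    static uint32_t swap_bytes_in_word(uint32_t us)
    {
        return 0xFFFFu & (((us & 0xFFu) << 8) | ((us & 0xFF00u) >> 8));
    }

    /* What the miscompiled REVSH output effectively computes: swap the
       bytes of the low halfword, then SIGN-extend bit 15 to 32 bits. */
    static uint32_t revsh_emulated(uint32_t us)
    {
        int16_t swapped = (int16_t)(((us & 0xFFu) << 8) | ((us & 0xFF00u) >> 8));
        return (uint32_t)(int32_t)swapped;  /* sign extension happens here */
    }

    int main(void)
    {
        /* The results diverge whenever bit 15 of the swapped value is set. */
        printf("correct: 0x%08lX\n", (unsigned long)swap_bytes_in_word(0x0080u));
        printf("revsh:   0x%08lX\n", (unsigned long)revsh_emulated(0x0080u));
        return 0;
    }
    ```

    For the input 0x0080 the correct swap yields 0x00008000, while the REVSH path yields 0xFFFF8000; those extra sign bits are exactly what would corrupt the ones'-complement sum in inet_chksum_pseudo.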

    What do you think?
    Is this an already known issue?

    Thanks,
  • Thanks for the nice short test case! Yes, the output of 15.12.2.LTS is clearly wrong in this case. I've submitted SDSCM00052901 for further analysis.