Optimization Levels

Hi all,

I'm trying to optimize my C code by invoking the compiler optimization options in the CCS environment.

When I change the optimization level, the performance of my code changes (in terms of execution time), but there is no visible improvement in code size.

I tried all the configurations (Optimize for code size, Speed vs. size trade-offs), but the size of the .out file does not change significantly.
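For reference, here is my reading of how these settings map to compiler flags, based on the option hints CCS shows next to each field (please correct me if this mapping is wrong):

Optimization level: --opt_level (-O)
Optimize for code size: --opt_for_space (-ms), values 0 to 3
Speed vs. size trade-offs: --opt_for_speed (-mf), values 0 to 5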

Can anyone tell me what I am doing wrong?

PS: some specifications are below:

DSP: TMS320C6678;

Evaluation board: TMSEVM6678LE;

CCS Version: 6.2.0.00050;

Compiler Version: v8.1.1.

  • Hi Geni,

    Moved this over to the TI Compiler forum for an appropriate and faster response. Thank you.
  • Hi Raja,
    ok, thank you.

    Regards

    Geni
  • Geni,

    The size of the .out file is not really a good way to measure the size of your program that gets loaded to the device as the .out is primarily symbols.

    If you go to the View menu and select "Memory Allocation", that tool will give you a better idea of the actual code size. There is also a .map file, typically generated during a build, that you can open to see code size information.
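    In case no .map file shows up in your project's output folder, make sure the linker is asked to produce one. From the command line that is the linker's -m (--map_file) option; a rough sketch (the file names here are just placeholders) would be:

        cl6x main.c -z lnk.cmd -m myapp.map -o myapp.out

    In a CCS project the same setting should be under the linker options in the project build settings.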

    Regards,
    John
  • Thank you very much JohnS.

    When I increase the "Optimize for code size" option, no significant improvements are visible.

    Furthermore, when I change the "Speed vs. size trade-offs" setting in the dropdown menu, no changes are observable.

    Do you have any idea what could explain this behavior?

    Thanks in advance.

    Geni
  • Exactly how do you measure changes in code size?

    As you experiment with builds for code size, please keep the optimization level high.  Use --opt_level=2 or --opt_level=3.  Then vary the setting for --opt_for_space.  You should see a difference.

    Please consider measuring code size with this method.  Read this wiki article about finding the functions which increase in code size the most.  That exact scenario is not what you are after, but it is worth knowing a method for collecting the code size of individual functions and loading it into a spreadsheet.  Even if the overall size does not change that much, you should see individual functions changing size.
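    As a concrete sketch of the comparison (the file names are placeholders, and in CCS you would set these same options in the project build settings rather than on a command line), the two builds would differ only in the --opt_for_space setting:

        cl6x --opt_level=3 --opt_for_space=0 file1.c file2.c -z lnk.cmd -m app_ms0.map -o app_ms0.out
        cl6x --opt_level=3 --opt_for_space=3 file1.c file2.c -z lnk.cmd -m app_ms3.map -o app_ms3.out

    Then compare the code sizes reported in the two .map files.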

    Thanks and regards,

    -George

  • Hi George Mock,

    thanks a lot for your answer.

    As suggested by JohnS, I used the Memory Allocation window, in particular the amount of L2SRAM used.

    Below are the results for Speed vs. size trade-offs = 4.

    When I change to Speed vs. size trade-offs = 0, no differences are visible.

    Do you know why?

    Thank you in advance.

    Best regards

    Geni 

  • Geni Butera said:
    Do you know why?

    I have a guess.  The size varies from 144 bytes to 150 bytes.  This must be one function.  Or maybe 2 or 3 really small functions.  Is that right?  It seems likely there is very little opportunity for code size improvements in these functions.
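    To illustrate what I mean (this is a made-up example, not your code), a tiny leaf function like the one below compiles to only a handful of instructions, so there is almost nothing for --opt_for_space to remove:

        /* Hypothetical example: a function this small leaves the optimizer
           very little room to trade speed for code size. */
        unsigned int clamped_add(unsigned int a, unsigned int b)
        {
            unsigned int sum = a + b;                /* single 32-bit add       */
            return (sum < a) ? 0xFFFFFFFFu : sum;    /* saturate on wrap-around */
        }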

    Instead of looking at how much L2SRAM you use ... What is the total size of the .text section?  That section contains all the code.  You can see the size of this section in the linker map file.  It usually has the same name as the executable .out file, but with the extension changed to .map.

    Thanks and regards,

    -George

  • Hi George,

    yes, that is right, it is just a few functions.

    The length of the .text section is 74848 bytes (0x12460), as shown below:

    ==================================================================

    SEGMENT ALLOCATION MAP

    run origin  load origin  length    init length  attrs  members
    ----------  -----------  --------  -----------  -----  -------
    00800000    00800000     00012460  00012460     r-x
     00800000   00800000     00012460  00012460     r-x    .text
    00812460    00812460     00008e7c  00000000     rw-
     00812460   00812460     00008e7c  00000000     rw-    .far
    0081b2e0    0081b2e0     00004152  00004152     r--
     0081b2e0   0081b2e0     00004152  00004152     r--    .const
    0081f438    0081f438     000035e4  00000000     rw-
     0081f438   0081f438     00002000  00000000     rw-    .stack
     00821438   00821438     0000146c  00000000     rw-    .fardata
     008228a8   008228a8     00000120  00000000     rw-    .cio
     008229c8   008229c8     00000024  00000000     rw-    .bss
     008229f0   008229f0     0000002c  00000000     rw-    .neardata
    00822c00    00822c00     00000a50  00000a50     r-x
     00822c00   00822c00     00000200  00000200     r-x    .vecs
     00822e00   00822e00     00000850  00000850     r--    .cinit

    ==================================================================

    Below, instead, is the memory configuration:

    ==================================================================

    MEMORY CONFIGURATION

    name        origin    length    used      unused    attr  fill
    ----------  --------  --------  --------  --------  ----  ----
    L2SRAM      00800000  00080000  0002345e  0005cba2  RW    X
    MSMCSRAM    0c000000  00400000  00200000  00200000  RW    X
    DDR3        80000000  20000000  00000000  20000000  RWI   X

    ==================================================================

    Maybe I should look at the .text section when evaluating my optimization?

    Thanks

    Regards

    Geni

  • Geni Butera said:
    Maybe I should look at the .text section when evaluating my optimization?

    Yes.  The option --opt_for_space affects the size of the .text section, and no other section.
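    For example (these numbers are hypothetical, just to show the kind of comparison to make): if a build with --opt_for_space=0 reports a .text length of 00012460 (74848 bytes) and a build with --opt_for_space=3 reports 00011f00 (73472 bytes), the option saved 74848 - 73472 = 1376 bytes, roughly 1.8%, while .far, .const, .stack, and the other sections keep the same size.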

    Thanks and regards,

    -George

  • Thanks a lot for your help George.

    Regards

    Geni