
CCS/TMS320F28023: How does the compiler decide which code to use?

Part Number: TMS320F28023

Tool/software: Code Composer Studio

Win7 32-bit, CCS v6, F28023.

Hello:

I have two code snippets that compile into different assembly code, and I'd like to know how the compiler decides which form to use so that I can take advantage of it in my own code.

Snippet 1:

memset((char *) RootName, 0, sizeof(RootName));

This statement assembles into the following INLINE code...

3f50aa:   8F000096    MOVL         XAR4, #0x000096
3f50ac:   F618        RPT          #24
3f50ad:   2B84     || MOV          *XAR4++, #0

Nice and tight. I understand the compiler knows ahead of time that sizeof() distills down to a number that fits the RPT instruction's 8-bit repeat count (at most 255 repeats).
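
For reference, a declaration along these lines would produce that result (the array name is from my code, but the size 25 here is only inferred from the RPT #24, i.e. 24 repeats plus the initial store; on C2000 a char is 16 bits, so sizeof() counts 16-bit words):

#include <string.h>

/* Assumed size: 25 words, inferred from RPT #24 (24 repeats + 1). */
char RootName[25];

void ClearRootName(void)
{
    /* sizeof(RootName) is the compile-time constant 25, small enough for RPT. */
    memset((char *) RootName, 0, sizeof(RootName));
}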

Snippet 2:

memset(buffer, 0, sizeof(buffer));

and its assembly code...

3f42a7:   D500        MOVB         XAR5, #0x0
3f42a8:   76800400    MOVL         XAR6, #0x000400
3f42aa:   8F000400    MOVL         XAR4, #0x000400
3f42ac:   06A6        MOVL         ACC, @XAR6
3f42ad:   767F5EE8    LCR          memset

and then the actual work routine...

memset():

3f5ee8:   FF58        TEST         ACC
396          register char *m = (char *)mem;
3f5ee9:   C5A4        MOVL         XAR7, @XAR4
398          while (length--) *m++ = ch;
3f5eea:   EC09        SBF          C$L2, EQ
3f5eeb:   1901        SUBB         ACC, #1
3f5eec:   1EA6        MOVL         @XAR6, ACC
        C$L1:
3f5eed:   0200        MOVB         ACC, #0
3f5eee:   DE81        SUBB         XAR6, #1
3f5eef:   7D87        MOV          *XAR7++, AR5
3f5ef0:   1901        SUBB         ACC, #1
3f5ef1:   0FA6        CMPL         ACC, @XAR6
3f5ef2:   EDFB        SBF          C$L1, NEQ
        C$L2:
3f5ef3:   0006        LRETR 

Again, I understand the compiler knows the sizeof() result is larger than 255, so the RPT form cannot be used.
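
For comparison, a declaration like this would match snippet 2 (the size 0x400 is taken from the MOVL XAR6, #0x000400 in the call sequence; the rest is assumed):

#include <string.h>

/* Assumed size: 0x400 (1024) words, taken from the length loaded into
   XAR6 before the LCR to memset. */
char buffer[0x400];

void ClearBuffer(void)
{
    /* The length is still a compile-time constant, but too large for the
       RPT repeat count, so the library call is emitted instead. */
    memset(buffer, 0, sizeof(buffer));
}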

Another example in my code gets the second (library call) form when the compiler does not know the size ahead of time, as expected.
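
That case looks something like this (the function and parameter names are made up; the point is only that the length is a run-time value, so there is no constant for the compiler to turn into an RPT count):

#include <string.h>

/* The length arrives at run time, so the compiler cannot expand the call
   inline and emits an LCR to the RTS memset. */
void ClearBlock(char *dst, size_t len)
{
    memset(dst, 0, len);
}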

My question...

How does the compiler know which code to use?

Here is the RTS source the debugger steps into (copied during a debugging session):

#if ((defined(_INLINE) || defined(_MEMSET)) && !defined(_TMS320C6X)) && !defined(__TMS470__) && !defined(__ARP32__)
_OPT_IDEFN void *memset(void *mem, register int ch, register size_t length)
{
     register char *m = (char *)mem;

     while (length--) *m++ = ch;
     return mem;
}
#endif /* _INLINE || _MEMSET */

As far as I can tell, the three processor-type macros are not defined (so those tests are true), but I do not know how the other two, _INLINE and _MEMSET, get set.
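
One thing I can try (plain standard C, nothing TI-specific) is to drop a preprocessor probe into the file and rebuild with my normal project options to see which guard macros are actually defined:

/* Compile-time probe: whichever #error fires tells me the corresponding
   macro is defined in this translation unit. */
#if defined(_INLINE)
#error "_INLINE is defined here"
#endif
#if defined(_MEMSET)
#error "_MEMSET is defined here"
#endif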

Is there more to this that I am not seeing?

Thanks, Mark.

PS: This is unrelated, but the snippet 2 work routine seems VERY inefficient; I wrote it in assembly and got it down to two instructions. I may start another thread on that later. (Of course, I have optimization set low.)