
Need to better understand size of variables in CCS 5.5

Other Parts Discussed in Thread: TMS320F2808

I'm having a great deal of difficulty making changes to an old design I inherited on a TMS320F2808. The DSP uses a serial EEProm to store operating parameters. We are making some enhancements, and I'm using what were Reserved bytes in the EEProm to store new information. I went from:

// as of Rev 5.6
//    int    Reserve[68];      // Step_Data[72];   /* Step check Data */
//    int    Reserve_H[68];    // Step_Data_H[72]; /* Step check Data */

My original byte calculation said:

2 arrays X 68 ints X 2 bytes/int = 272 bytes

So I implemented this:

   int           Reserve[72];            // not used
   unsigned char BoardRevision;          // A single Alpha character indicating PCA revision
   int           FirstOutPtr;            // Pointer to the most recent First Out
   unsigned char FirstOutEEProm[125];    // First Out message code - these are byte codes

the new byte calculation should have been:

(72 + 1) ints X 2 bytes/int + (1 + 125) chars X 1 byte/char = 272 bytes

This did not work though. Observing the variables in Debug, I noticed that both ints and chars were displayed as 2-byte 0xnnnn values. So I recalculated, this time counting chars as 2 bytes as well, and ended up with:

   int           Reserve[9];             // not used
   unsigned char BoardRevision;          // A single Alpha character indicating PCA revision
   int           FirstOutPtr;            // Pointer to the most recent First Out
   int           FirstOutEEProm[125];    // First Out message code - these are byte codes

1 unsigned char X 2 bytes/char + (9 + 1 + 125) ints X 2 bytes/int = 272 bytes

However, this introduced a bug in my program and I was getting unexpected errors. Assuming I was overwriting something, I just shortened the Reserve array to:

   int           Reserve[8];             // not used
   unsigned char BoardRevision;          // A single Alpha character indicating PCA revision
   int           FirstOutPtr;            // Pointer to the most recent First Out
   int           FirstOutEEProm[125];    // First Out message code - these are byte codes

1 unsigned char X 2 bytes/char + (8 + 1 + 125) ints X 2 bytes/int = 270 bytes

My updates work perfectly with this, but I'm concerned that I don't understand why.
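One way to see what the compiler itself thinks is to have it report the sizes directly. A quick sketch (the struct mirrors the fields above; printf output appears on the CCS CIO console when run under the emulator, and CHAR_BIT from limits.h is the number of bits the compiler calls a byte):

   #include <limits.h>
   #include <stdio.h>

   struct Params {
       int           Reserve[8];
       unsigned char BoardRevision;
       int           FirstOutPtr;
       int           FirstOutEEProm[125];
   };

   int main(void)
   {
       /* sizeof() reports sizes in units of char - whatever a char is here. */
       printf("CHAR_BIT              = %d\n", CHAR_BIT);
       printf("sizeof(char)          = %u\n", (unsigned)sizeof(char));
       printf("sizeof(int)           = %u\n", (unsigned)sizeof(int));
       printf("sizeof(struct Params) = %u\n", (unsigned)sizeof(struct Params));
       return 0;
   }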

Can someone explain how many bytes chars and ints use in CCS? Even though everything is working perfectly in the emulator, I have some old LabWindows 5.5 code that had this in the message structure:

short Reserve[68];

Clearly LabWindows did not treat a short as 2 bytes; it must be 4 bytes. When I tried to add the changes to the LabWindows code, I did this:

   short         Reserve[3];             // not used
   short         BoardRevision;          // A single Alpha character indicating PCA revision
   short         FirstOutPtr;            // Pointer to the most recent First Out
   unsigned char FirstOutEEProm[125];    // First Out message code - these are byte codes

This only squares with the CCS application if shorts are 4 bytes and chars are 2. Then the sum is

(3 + 1 + 1) shorts X 4 bytes/short + 125 chars X 2 bytes/char = 270 bytes

Note: the original LabWindows structure used:

short Reserved[68];
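A compile-time size check on the LabWindows side would also make any such mismatch visible at build time instead of as corrupted messages. A sketch, assuming C89-era tooling; EXPECTED_MSG_BYTES is a hypothetical figure to be set from the real wire protocol:

   typedef struct {
       short         Reserve[3];
       short         BoardRevision;
       short         FirstOutPtr;
       unsigned char FirstOutEEProm[125];
   } FirstOutMsg;

   /* A negative array size is a compile error, so this typedef refuses to
      build if the structure size drifts (a C89 stand-in for static_assert). */
   #define EXPECTED_MSG_BYTES 136   /* hypothetical - set to the real protocol size */
   typedef char assert_msg_size[(sizeof(FirstOutMsg) == EXPECTED_MSG_BYTES) ? 1 : -1];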

I'm confused first because I always thought

chars   = 1 byte

ints      = 2 bytes

shorts = 2 bytes

long    = 4 bytes

It's like finding out there's no Santa Claus.

Please don't laugh. I'm a hardware engineer who also writes code, not a software engineer.

  • Phil Dilmore said:

    I'm confused first because I always thought

    chars   = 1 byte

    ints      = 2 bytes

    shorts = 2 bytes

    long    = 4 bytes

    It's like finding out there's no Santa Claus.

    Santa Claus is still coming to town.

    chars can occupy either an even or odd address. Following a char, the next available address can be either odd or even (respectively). ints, shorts, and longs have to be aligned to even addresses, so one byte at an odd address may need to be wasted. Some compilers are clever enough to group all the chars together to minimize the waste.
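    A small illustration of that effect, assuming an ordinary byte-addressable compiler with 1-byte chars and 2-byte shorts (not the C28x, where everything is word-aligned anyway):

        #include <stdio.h>

        struct Scattered {   /* chars interleaved with shorts          */
            char  a;         /* 1 byte                                 */
            short b;         /* 1 pad byte, then 2 bytes               */
            char  c;         /* 1 byte                                 */
            short d;         /* 1 pad byte, then 2 bytes               */
        };                   /* typically sizeof == 8                  */

        struct Grouped {     /* same members, chars grouped at the end */
            short b;
            short d;
            char  a;
            char  c;
        };                   /* typically sizeof == 6                  */

        int main(void)
        {
            printf("%u %u\n", (unsigned)sizeof(struct Scattered),
                              (unsigned)sizeof(struct Grouped));
            return 0;
        }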

  • There is more.

    Phil Dilmore said:

    I'm confused first because I always thought

    chars   = 1 byte

    ints      = 2 bytes

    shorts = 2 bytes

    long    = 4 bytes

    It's like finding out there's no Santa Claus.

    That assumption is unsubstantiated. The size of the basic data types in C is implementation-dependent, i.e. determined by the compiler - the standard does not mandate fixed sizes. In most cases, int matches the register size, but I know of toolchains for 8-bit MCUs that let you define those sizes in the project settings.

    Since DSPs have their "special" architectures and toolchains, this can well be 24 bits. Rather, check with your "old" project.
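    Dropping a guard like this into the project makes the assumption explicit and stops the build if it ever changes (a sketch; the 16-bit figure is simply what this particular target turns out to use):

        #include <limits.h>

        /* Refuse to compile if a "byte" is not what the EEProm layout assumes. */
        #if CHAR_BIT != 16
        #error "This layout assumes the C28x 16-bit char - recheck all size math"
        #endif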

  • I usually use uint8_t instead of "unsigned char" and int16_t/int32_t instead of int. This is especially important when defining a structure.

    Btw, you also need to pay attention to the _packed_ compiler option.
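    For instance, on a byte-addressable compiler the message layout can be pinned down like this. A sketch only: __attribute__((packed)) is GCC syntax (other compilers use #pragma pack), and see the next reply for a caveat about these type names on DSPs:

        #include <stdint.h>

        /* Fixed-width fields plus packing: explicit sizes, no hidden padding. */
        struct __attribute__((packed)) EEPromMsg {
            int16_t Reserve[8];
            uint8_t BoardRevision;
            int16_t FirstOutPtr;
            uint8_t FirstOutEEProm[125];
        };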

  • I usually use uint8_t instead of "unsigned char" and int16_t/int32_t instead of int. This is especially important when defining a structure.

    One should note that things are "the other way around". Something like "uint8_t" does not mean it is always 8 bits - that only holds for a specific compiler / architecture. It is always the user's/developer's responsibility to make sure those suggestive names match reality. Porting code between 8-bit and 32-bit architectures can be highly educational in this regard.

    The C standard only guarantees the ordering: char <= short int <= int <= long int.
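    One way to take that responsibility in code is to assert the assumptions instead of trusting the names. A sketch using C11 _Static_assert (older compilers need the negative-array-size trick instead):

        #include <limits.h>
        #include <stdint.h>

        /* Make the porting assumptions explicit: the build breaks on any
           architecture where they stop holding. */
        _Static_assert(CHAR_BIT == 8, "this code assumes 8-bit chars");
        _Static_assert(sizeof(int16_t) == 2, "int16_t expected to be two 8-bit bytes");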

  • Phil Dilmore said:
    I'm confused first because I always thought

    chars   = 1 byte

    The TMS320C28x devices are "unusual" in that a char is 16 bits. This is shown by the data types table in the TMS320C28x Optimizing C/C++ Compiler User's Guide: char, short, and int are all 16 bits and long is 32 bits, because the smallest addressable unit on this CPU is a 16-bit word.
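    In code terms, a sketch (built with the C28x compiler, the comments show what it reports):

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            /* On TMS320C28x this prints: 16 1 1 2
               (bits per char, then sizeof char / int / long, counted in chars). */
            printf("%d %u %u %u\n", CHAR_BIT,
                   (unsigned)sizeof(char), (unsigned)sizeof(int),
                   (unsigned)sizeof(long));
            return 0;
        }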
