
Binary number representation in array initialization doesn't compile

This line of code does not compile.
char char_gen[100][5] = {0b0000000, 0b0000000, 0b0000000, 0b0000000, 0b0000000};

error "extra text after expected end of number"

These lines of code do work.
char char_gen[100][5] = {0x0000000, 0x0000000, 0x0000000, 0x0000000, 0x0000000};
int i = 0b01;

I am using Code Composer v4 Limited for MSP430 development, a recent download and install.

 

P.S. How about some code formatting boxes for the forum posts?

  • I don't believe binary notation is a standard feature of the C language. Hex notation, however, is a standard feature.

    Some compilers implement an extension of the language to allow certain features and increase usability. 

    So are you saying that Code Composer has binary notation as a C extension? That would be pretty cool. I'm not a user of Code Composer, but out of curiosity I downloaded the latest compiler manual MSP430 Optimizing C/C++ Compiler v 3.1 User's Guide (Rev. C) and looked at the compiler extensions. I don't see it in the list. Perhaps CCS doesn't support that feature.

     

  • The "0b" syntax for binary numbers is NOT part of standard C.

    One workaround is a "binary.h" containing 256 lines' worth of "#define B10000000 0x80" and so on :-(
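    A minimal sketch of what such a header could look like (only a few of the 256 entries are shown here; the names follow the B-prefix pattern mentioned above):

```c
/* binary.h (excerpt) -- maps binary-looking names to hex constants.
   A full version would enumerate all 256 8-bit patterns. */
#define B00000000 0x00
#define B00000001 0x01
#define B00000010 0x02
#define B00000011 0x03
/* ... */
#define B10000000 0x80
#define B11111111 0xFF
```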

     

  • Hi,

    since you did not mention what you intend to do with your array, I will give you my thoughts on how to deal with bit variables (since standard C has no 0b support for binary constants)!

    I'm defining my own data type BITFIELD, which looks like this (bit-fields have to be int according to standard C, but unsigned char (BIT0 to BIT7) will also work!):
    /* ============================================================================
    // structure which stores information used for program flow or debugging purposes
    // ==========================================================================*/
      struct BITFIELD
      {
        unsigned char FLAG0  :1;
        unsigned char FLAG1  :1;
        unsigned char FLAG2  :1;
        unsigned char FLAG3  :1;
        unsigned char FLAG4  :1;
        unsigned char FLAG5  :1;
        unsigned char FLAG6  :1;
        unsigned char FLAG7  :1;
        unsigned char FLAG8  :1;
        unsigned char FLAG9  :1;
        unsigned char FLAG10 :1;
        unsigned char FLAG11 :1;
        unsigned char FLAG12 :1;
        unsigned char FLAG13 :1;
        unsigned char FLAG14 :1;
        unsigned char FLAG15 :1;
      };

    Now you define and initialize your variable(s) like:

    volatile struct BITFIELD STATUS = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
                                              // defines/initializes a structure of
                                              // type BITFIELD, variable name -STATUS-


    // access them from your application like i.e.
    STATUS.FLAG0 = 1;  // set FLAG0 = BIT0 to 1

    I usually define symbolic names for the bit-field flags to ease my life. This could look like:
    /*-----------------------------------------------------------------------------
    // Symbolic names for STATUS flags
    //---------------------------------------------------------------------------*/
    #define bADC10FLAG  STATUS.FLAG0  // ADC10 conversion finished
    #define bWDTFLAG    STATUS.FLAG2  // status flag for WATCHDOG timer

    Now you can do things like, for example:
    while (!bADC10FLAG)
      __no_operation(); // wait for bADC10FLAG to be set

    You can group your variables into an array or union if you want to.
    Rgds
    aBUGSworstnightmare
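    Grouping into a union, as mentioned above, could look like the sketch below (STATUSREG and word are illustrative names, and the struct repeats a shortened three-flag BITFIELD so the snippet stands alone):

```c
/* Shortened BITFIELD from the post above (FLAG3..FLAG15 omitted) */
struct BITFIELD
{
    unsigned char FLAG0 :1;
    unsigned char FLAG1 :1;
    unsigned char FLAG2 :1;
};

/* Union giving two views of the same flags:
   per-flag access via .bits, whole-word access via .word */
union STATUSREG
{
    struct BITFIELD bits;
    unsigned int    word;
};
```

    With this, `status.bits.FLAG0 = 1;` sets one flag, while `status.word = 0;` clears them all in a single write. Note the bit layout inside the word is implementation-defined, so the word view is best used only for clearing everything or testing for all-zero.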

  •  

    Thanks for the help. This is all very good info. I searched the forum and found nothing, but suspected all that was said here. I have a large 100 x 5 array of const character data I want to store in FLASH. The data is all in binary number representation, from another compiler I am porting the code from. The easiest answer might be the #define-based binary.h, which makes it manageable.

    It is interesting that they support the binary number extension, just not in the array-initialization context. Perhaps they should add this feature; most microcontroller compilers seem to (in my limited experience).

     

  • Hold off on that, as that seems like a massive effort (unless someone's already done it, in which case speak up!).

    In the meantime, see this article by Michael Barr on Binary Literals in C.

    http://embeddedgurus.com/barr-code/2009/09/binary-literals-in-c/

     

  • It is a good excuse to learn the Python scripting language. It is great for stuff like this. I would do it in MATLAB, but I don't have a copy anymore. Attached is the binary.h file; the Python script that generated it is in comments at the beginning of the header file.

     

    Charles
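    The generation step itself is tiny in any language; an equivalent generator sketched in C (emit_binary_defines is an illustrative name, not taken from the attachment) could look like:

```c
#include <stdio.h>

/* Write "#define B<8 binary digits> 0x<hex>" for every 8-bit value
   to the given stream; returns the number of defines emitted. */
static int emit_binary_defines(FILE *out)
{
    fprintf(out, "#ifndef BINARY_H\n#define BINARY_H\n");
    for (unsigned v = 0; v < 256u; v++) {
        fputs("#define B", out);
        for (int b = 7; b >= 0; b--)          /* MSB first */
            fputc(((v >> b) & 1u) ? '1' : '0', out);
        fprintf(out, " 0x%02X\n", v);
    }
    fputs("#endif\n", out);
    return 256;
}
```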

  • I attached the binary.h in the above post.

  • We had a similar problem: when working with font data, binary representations are way easier to handle than hexadecimal or decimal numbers.

    After fiddling around with some macro implementations of our own, we ended up with Tom Torfs' public-domain binary constant generator macros:

     

    /*******************************************************************************
    * Binary constant generator macro
    * By Tom Torfs - donated to the public domain
    *
    * used to define binary values which expand to compile-time constants
    *******************************************************************************/

    /* *** helper macros *** */
    /* turn a numeric literal into a hex constant
       (avoids problems with leading zeroes)
       8-bit constants max value 0x11111111, always fits in unsigned long
    */
    #define HEX__(n) 0x##n##LU
    /* 8-bit conversion function */
    #define B8__(x) ((x&0x0000000FLU)?1:0)      \
                   +((x&0x000000F0LU)?2:0)      \
                   +((x&0x00000F00LU)?4:0)      \
                   +((x&0x0000F000LU)?8:0)      \
                   +((x&0x000F0000LU)?16:0)     \
                   +((x&0x00F00000LU)?32:0)     \
                   +((x&0x0F000000LU)?64:0)     \
                   +((x&0xF0000000LU)?128:0)
    /* *** user macros *** */
    /* for up to 8-bit binary constants */
    #define B8(d) ((unsigned char)B8__(HEX__(d)))
    /* for up to 16-bit binary constants, MSB first */
    #define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8)     \
                            + B8(dlsb))
    /* for up to 32-bit binary constants, MSB first */
    #define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24)      \
                                      + ((unsigned long)B8(db2)<<16) \
                                      + ((unsigned long)B8(db3)<<8)    \
                                      + B8(dlsb))
    /* Sample usage:
          B8(01010101) = 85
          B16(10101010,01010101) = 43605
          B32(10000000,11111111,10101010,01010101) = 2164238933
    */
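    Applied to the original question, a row of the font table can then be written directly in binary while initializing a const array. The glyph data below is made up for illustration, and the macros are repeated so the snippet stands alone:

```c
/* Tom Torfs' helper macros, repeated so this example is self-contained */
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0)  \
               +((x&0x000000F0LU)?2:0)  \
               +((x&0x00000F00LU)?4:0)  \
               +((x&0x0000F000LU)?8:0)  \
               +((x&0x000F0000LU)?16:0) \
               +((x&0x00F00000LU)?32:0) \
               +((x&0x0F000000LU)?64:0) \
               +((x&0xF0000000LU)?128:0)
#define B8(d) ((unsigned char)B8__(HEX__(d)))

/* One 5-byte column pattern, e.g. for a letter glyph (illustrative data) */
const unsigned char glyph_A[5] =
{
    B8(01111110), B8(00010001), B8(00010001), B8(00010001), B8(01111110)
};
```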

  • Ha! Looks exactly like the macros in the link I posted. I guess that's where they originated.
