Consecutive const array alignment

I am porting a project over to a TI Cortex-M4 from another very old 32-bit micro. In this old application there are HUGE sets of lookup tables defined something like this:

const unsigned char array1[] = {blah blah blah};
const unsigned char array2[] = {blah blah blah};
const unsigned char array3[] = {blah blah blah};
const unsigned char array4[] = {blah blah blah};
... and so on ...

My problem is that the original application requires these arrays to be compiled into consecutive memory locations (i.e. placed directly one after the other in the compiled image).

Is there any way I can get Code Composer to do the same? Looking at the memory browser, these arrays are not placed in consecutive memory at all.

  • Without putting them inside a structure or a larger array, there's no simple way from within the C language to guarantee that anything follows anything else.  On your old project you were lucky that, as an artifact, the compiler actually put them together.  In particular, if two of the arrays are initialized to the same values, then an optimizing compiler will feel free to use the same location for both of them.

    If you have a bunch of data that MUST be together, then I'd recommend creating a structure that holds all of that data.
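
    A minimal sketch of that idea (the names, sizes and values here are just placeholders):

    /* All of the tables gathered into one const object.  Members are laid out
       in declaration order, and since every member is an unsigned char array
       (alignment 1), typical compilers insert no padding between them. */
    struct lookup_tables {
        unsigned char array1[4];
        unsigned char array2[3];
    };

    const struct lookup_tables tables = {
        {1, 2, 3, 4},   /* was array1 */
        {5, 6, 7}       /* was array2 */
    };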

  • Thanks.

    I was afraid someone might say something like that! :( no shortcut for me

    I was also thinking of using a structure, but that won't be easy for this application. The other alternative is to rewrite the driver that uses pointers to these tables.

  • how about

    const unsigned char bf_array[] = {bla bla bla bla bla bla bla bla bla bla bla bla};

    const unsigned char * const array1 = &bf_array[0];
    const unsigned char * const array2 = &bf_array[n];

    etc.

    As long as you set up your bf_array and corresponding indexes into it correctly, this would guarantee all of your data would be in a contiguous immutable block of memory.  You would still be able to index into *separate* arrays from your existing code.
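
    A fuller sketch of that approach (the sizes, offsets and values below are invented purely for illustration):

    /* One contiguous, immutable block holding every table back to back. */
    const unsigned char bf_array[] = {
        10, 11, 12, 13,       /* former array1: 4 entries, offset 0 */
        20, 21, 22,           /* former array2: 3 entries, offset 4 */
        30, 31, 32, 33, 34    /* former array3: 5 entries, offset 7 */
    };

    /* Const pointers into the block let existing code keep using the old names. */
    const unsigned char * const array1 = &bf_array[0];
    const unsigned char * const array2 = &bf_array[4];
    const unsigned char * const array3 = &bf_array[7];

    One thing to watch: array1[i] still reads exactly as before, but sizeof(array1) now yields the size of a pointer rather than the size of the table, so any length calculations that relied on sizeof would need to be revisited.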

  • Simon Thome said:
    I was also thinking of using a structure, but that won't be easy for this application, other alternative is to rewrite the driver

    Could you explain why using a structure is "not easy", if you're considering re-writing the driver anyhow...?

    Do you still have access to the old tools, etc, for the old micro?

    If you do, I would suggest that you also make your changes in the old environment, and also test them there - that way, when you hit problems, you will have some clue as to whether it's just a porting issue, or if you've broken the logic of the code...

  • Thanks to everyone for their input.

    I am going to use the following solution:

    struct mystruct {
        unsigned char array1[x];
        unsigned char array2[y];
        unsigned char array3[z];
        ......
    };

    struct mystruct const data = {blah, blah, blah ...... };

    I think the person who wrote the code in the first place was lucky that their compiler placed all the arrays consecutively in memory, but in my situation, 12 years later with a different platform and compiler, the code is not very portable. Wish me luck - I have a few hundred of these array tables to move into structures. Hopefully I don't make a mistake while converting the initialization of all these arrays into the structure!

  • It should be relatively easy to write a filter program that takes apart your original source for the array initialization, and puts it together in the new way you want.

    You can even create #define statements so that your code can refer to the arrays in the old manner.

    For example, in the following, size1, size2, etc. are computed by counting the values in the initializers for array1, array2, etc.:

    const unsigned char array1[] = {value1, value2, value3, ... };
    const unsigned char array2[] = {valuex, valuey, valuez, ... };

    becomes:

    struct mystruct {
        unsigned char newarray1[size1];
        unsigned char newarray2[size2];
        ...
    };

    struct mystruct const mydata = {
        {value1, value2, value3, ... },
        {valuex, valuey, valuez, ... },
        ...
    };

    #define array1 mydata.newarray1
    #define array2 mydata.newarray2
    ...

    Even operators such as sizeof should work when referencing the old names (see the small sketch below), and you only have to debug the code that transforms the array setup.  Once you have that filter, you can trust that you won't make any transcription errors in the actual array data.

    One last thing: if the arrays are of lengths that don't add up to a multiple of 4, you may have to use a #pragma or some other compiler-specific directive to force the structure to be packed.
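
    To make the sizeof point concrete, here is a minimal compilable sketch (the sizes and values are made up):

    #include <stdio.h>

    /* All tables gathered into one const struct, in declaration order. */
    struct mystruct {
        unsigned char newarray1[3];
        unsigned char newarray2[5];
    };

    struct mystruct const mydata = {
        {1, 2, 3},
        {4, 5, 6, 7, 8}
    };

    /* The old names keep working, including with sizeof and indexing. */
    #define array1 mydata.newarray1
    #define array2 mydata.newarray2

    int main(void)
    {
        printf("%u %u %u\n",
               (unsigned)sizeof(array1),   /* 3, same as the original array1 */
               (unsigned)sizeof(array2),   /* 5 */
               (unsigned)array2[1]);       /* 5: indexing works as before */
        return 0;
    }

    Since every member is an unsigned char array (alignment 1), typical compilers also place the members back to back with no padding between them, so packing is normally only needed as a precaution.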

  • Thanks salndrum,

    That's a good idea, I didn't think of looking at it that way.

    Thanks for bouncing the idea!

  • Simon Thome said:
    I think the person who wrote the code in the first place was lucky...

    We all get lucky sometimes:  our stuff "works" - but for the wrong (or, at least, not the right) reasons!

    I call this the "Proven Product Syndrome" - where someone tells you, "this must be right because it's a proven (sic) product!", or, "I know this works (sic) - I've used it before", etc, etc,...

    e.g. http://www.8052.com/forumchat/read/183935