
C++ Object Placement in Memory



Say I have a global variable which is a C++ class object.  What's the easiest way to ensure all the object's members (data and methods) get assigned to a named memory section (so I can easily place it, e.g., on-chip or off-chip)?  I want to avoid littering the code with scores of #pragmas, as that is going to make the class definition very unreadable and hard to follow.

Doing it all with a set of elaborate rules in my linker command file is fine, and, I think, that will be easy for the class methods (since they are defined in a specific .obj file), but I am unsure how to handle the member variables, since they are only declared in the header file and are not allocated until runtime by whatever module creates the object.  (Actually, that would be true for methods, too, if the method is defined in the header file, right?)
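
(For reference, the pragma approach I'm trying to avoid would look something like the sketch below, repeated for every object and function I want placed.  I'm assuming TI's C++ form of the pragmas, where DATA_SECTION/CODE_SECTION apply to the next symbol declared; the section names are just placeholders, and the class/object names are from the example further down.)

/* In SomeFile.cpp: place the object's data */
#pragma DATA_SECTION(".my_obj_data")   /* C++ form: applies to the next symbol declared */
cMyClass AClassObj;

/* In cMyClass.cpp: place one method's code */
#pragma CODE_SECTION(".my_obj_code")   /* applies to the next function defined */
int cMyClass::Method1()
{
     return m_Var3;
}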

 

Small Example (skipping standard coding practices):

 

cMyClass.h


class cMyClass
{
     int m_Var1;
     int m_Var2;
public:
     int m_Var3;

public:
     cMyClass();
     ~cMyClass(){}

     int Method1();
     int Method2(){return(m_Var1 + m_Var2);}
};



cMyClass.cpp


#include "cMyClass.h"

cMyClass::cMyClass()
{
     m_Var1 = m_Var2 = m_Var3 = 0;
}

int cMyClass::Method1()
{
     return m_Var3;
}



SomeFile.cpp


#include <stdio.h>
#include "cMyClass.h"

cMyClass AClassObj;

int main()
{
     printf( "m_Var3 = %d\n", AClassObj.Method1() );

     return 0;
}


Now if I want to place the entire "AClassObj" object in my DDR2 memory section, how would I do that?

 

  • Unfortunately, there is no neat answer.  If you want to keep all of the odd syntax for this in the linker command file (which I understand), then the main tip you need to know is:

    out_name : {      /* appears inside SECTIONS directive */
       file1.obj(in_name)
       file2.obj(in_name)
       ...
    } > MR
    

    That says the input section in_name from file1.obj and file2.obj (and ...) is to become part of the output section out_name, and it is to be placed in the memory range MR.  If you know which section and file contains each part of the object, then you can use this syntax to place it.  You need to consider three cases: object data, non-inline object methods, inline object methods.
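
    Before getting into those cases, here is a sketch of where such a fragment sits in a complete linker command file.  The origin and length values shown are placeholders; use whatever your own MEMORY directive already defines:

    MEMORY
    {
       MR : origin = 0x80000000, length = 0x20000000   /* placeholder values */
    }

    SECTIONS
    {
       out_name : {
          file1.obj(in_name)
          file2.obj(in_name)
       } > MR
    }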

    Object data usually goes in the .bss section.  Depending on your target and/or memory model, it may go in the .far section, or some other target-specific section.  See the compiler manual for your target; the compiler manuals for all of the TI targets are available from TI.  Suppose the input section name is .bss.  For the case above, you would use syntax like this:

    object_data : {
       SomeFile.obj(.bss)
    } > MR
    

    Object methods which are not inline are placed in the .text section.  So, to place them, use syntax like this:

    object_methods : {
       cMyClass.obj(.text)
    } > MR
    

    Object methods which are inline (i.e. the implementation appears in the class definition, or the inline keyword is used) are trickier.  If you build with optimization (--opt_level=2 or higher), then such functions are often inlined, and thus no placement is needed.  If inlining fails for some reason, then you often end up with multiple copies of the inline function; there is a TI wiki article that covers the details.  I presume most such functions really do get inlined in your final production build, and so few functions fail to inline that it just isn't an issue.
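
    To make the inline case concrete, here is a small sketch (the class and method names are made up) of the two forms that count as inline methods:

    class cExample
    {
    public:
       int InClassBody() { return 1; }   /* inline: defined inside the class definition */
       inline int WithKeyword();         /* inline: declared with the inline keyword */
    };

    inline int cExample::WithKeyword() { return 2; }

    /* If the optimizer cannot inline a call to either method, an out-of-line
       copy is emitted in each .obj file that uses it, which is why these are
       awkward to place from the linker command file. */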

    If this all seems a bit daunting, then here is an alternative to consider.  Use profiling to figure out which code and data really need to be in faster memory.  Place those functions and data with the methods above.  Leave everything else to the catch-all lines in the SECTIONS directive of the link command file (a fuller sketch of this combined approach follows below):

          .text > SLOW_MEMORY
          .bss  > SLOW_MEMORY
          ...
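
    Putting the pieces together, a combined SECTIONS directive might look roughly like this.  The file name HotLoop.obj and the memory range names FAST_MEMORY and SLOW_MEMORY are placeholders for your own files and MEMORY directive entries; only the sections not explicitly claimed fall through to the catch-all lines:

    SECTIONS
    {
       hot_code : {
          HotLoop.obj(.text)    /* profiled hot functions stay in fast memory */
       } > FAST_MEMORY

       hot_data : {
          HotLoop.obj(.bss)     /* and their data */
       } > FAST_MEMORY

       .text > SLOW_MEMORY      /* everything else goes off-chip */
       .bss  > SLOW_MEMORY
    }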
    

    Thanks and regards,

    -George

  • First, thanks for your detailed reply!

    Yeah, that is exactly how I'm already doing it.  You're right that profiling will be the key, because putting all the methods (functions) I can off-chip is producing a bajillion linker-created trampoline functions to get to them, and the performance hit scares me.  (These involve not only my library [which is what I am trying to place off-chip], but the rts library as well.)

    My goal here is memory space, though, not performance, since this code and data will mainly only be accessed at boot time, so we'll see.  I'm on a C6474, so I have < 1 MB of SRAM (with L2 cache enabled), and I'm processing massive amounts of image data.  Between the buffers and the code I'm very tight on memory, and with 512 MB of DDR at my disposal, you can maybe understand why I am trying to put everything off-chip that I can. :-)

    Anyway, thanks, again, for confirming what I already figured to be true.