This thread has been locked.


Out of Memory Error from mmBulkAlloc - how to get heap information from handle number?

Other Parts Discussed in Thread: SYSBIOS

Hello,

I have been very careful with my memory allocation.  I have one heap that all my variables allocate from, and then one heap for each running task so that each has its own stack.

Then I run some code that uses a lot of memory.  I am expecting to get an out-of-memory error from a heap handle that I recognize.  However, the error is from a handle that I did not create.

How can I determine which heap overflowed?  None of the tasks I created or the heaps I set up have the handle that the error lists, and that handle does not appear in the ROV tool either.  Any suggestions?

Here is the error:

[C66xx_0] ti.sysbios.heaps.HeapMem: line 294: out of memory: handle=0x862670, size=8200

[C66xx_0] 00506.991 mmBulkAlloc(): could not allocate memory.

[C66xx_0] 00506.992 out of memory: handle=0x8620f0, size=9

[C66xx_0] 00506.993 SBNew: Buffer OOM

[C66xx_0] [ERROR] [0000000008:28.493] [CORE0] Error connecting to app server. Trying again... SockHandle: -1, ConnectErr: -1

[C66xx_0] 00507.000 LLIGenArpPacket: Illegal ARP Attempt - Check Configuration

[C66xx_0] [INFO ] [0000000008:28.498] [CORE0] Network shutting down.

[C66xx_0] [ERROR] [0000000008:28.498] [CORE0] Error connecting to app server. Trying again... SockHandle: 8730052, ConnectErr: -1

[C66xx_0] ti.sysbios.heaps.HeapMem: line 329: assertion failure: A_invalidFree: Invalid free

[C66xx_0] xdc.runtime.Error.raise: terminating execution

 

Here are the debug messages with the heaps I know about:

 

[C66xx_0] [DEBUG] [0000000000:00.000] [CORE0] Creating network task with heap 0x862904

[C66xx_0] [DEBUG] [0000000000:00.000] [CORE0] Starting Core 0 Task with heap 0x862684.

 [C66xx_0] [DEBUG] [0000000000:04.028] [CORE0] Trying to create Association Task with heap 0x862738.

 [C66xx_0] [DEBUG] [0000000000:04.033] [CORE0] Trying to create Solver Task with heap 0x8627d8

[C66xx_0] [DEBUG] [0000000000:04.038] [CORE0] Set Allocator Detections with heap 0x862724.

 [C66xx_0] [DEBUG] [0000000000:04.143] [CORE0] Create new EmbMHT instance with heap 0x862724

 [C66xx_0] [DEBUG] [0000000000:04.148] [CORE0] enterCEmbMHT Constructor: all allocations made with heap 0x862724

[C66xx_0] [DEBUG] [0000000000:04.148] [CORE0] Setting Allocator for mScanObservationCounts

[C66xx_0] [DEBUG] [0000000000:04.148] [CORE0] Setting Allocator for mTimeBuffer

[C66xx_0] [DEBUG] [0000000000:04.148] [CORE0] Setting Allocator for mSecondarySolverDimensions

[C66xx_0] [DEBUG] [0000000000:04.153] [CORE0] Setting Allocator for mPrioritySolverDimensions

[C66xx_0] [DEBUG] [0000000000:04.153] [CORE0] Setting Allocator for Scan Tree.

[C66xx_0] [DEBUG] [0000000000:04.153] [CORE0] Setting Allocator for ActiveTrackMap.

[C66xx_0] [DEBUG] [0000000000:04.153] [CORE0] Setting Allocator for Sectormap.

[C66xx_0] [DEBUG] [0000000000:04.153] [CORE0] End embMHT Constructor.

[C66xx_0] [DEBUG] [0000000000:04.158] [CORE0] Initializing Assoc Task.

[C66xx_0] [DEBUG] [0000000000:04.158] [CORE0] Initializing Solver Task.

Here is the ROV print screen of the basic tab:

 

 

Here’s the ROV print screen of the detail tab:

 

Please advise on how to figure out which heap is 0x862670 so I can make it larger.

 thanks so much,

Brandy

  • Hi Brandy,

    Can you attach the ROV views of HeapMem?

    Also, which versions of SYS/BIOS and NDK are you using?

    Todd

  • Hi Todd,

    I am using ndk_2_21_00_32 and bios_6_33_04_39.  I attached the HeapMem tab; very informative.  It at least allows me to equate a handle to a location.  It looks like I have a memory leak in L2 somewhere.  Is there anything else helpful about this tab that I should consider?

     Thanks,

    Brandy

  • That's quite a few heaps! I think we need to step back and look at how you are managing your memory. First of all, can you attach your .cfg file? I'd like to see if you are creating the tasks and heaps statically (e.g. in the .cfg file) or dynamically (e.g. calling Task_create).

    Note: If you create the Tasks statically, you do not need a heap. The stack is simply defined in the big generated .c file.

    If you are creating the tasks dynamically, you have a few options.

    If you want the stack to be dynamically allocated, set the params.stackHeap to a created heap. If you want a single heap for multiple tasks, you can set the Task.defaultStackHeap field in the .cfg and pass NULL for the params.stackHeap.

    If you simply want to supply the stack yourself, you can set the params.stack field and leave stackHeap as NULL. For example:

    Char taskFooStack[1024];

    main() {
        Task_Params taskParams;
        Error_Block eb;
    ...

        Error_init(&eb);
        Task_Params_init(&taskParams);
        taskParams.stack = taskFooStack;
        taskParams.stackSize = sizeof(taskFooStack);
        Task_create((Task_FuncPtr)fooFxn, &taskParams, &eb);
    ...
    }

    Todd

  • Hi Todd,

    Yes, it is quite a lot of heaps.  I am happy to have my design reviewed by an expert, though; perhaps you have a better idea.  I can only say so much about it in the posts, but my application is basically this:

    8 cores running, with one core as master running the NDK, SRIO, PA, etc.  This core also sets some global flags in MSMCSRAM to tell the other cores when to "stop/go".  The other cores each run duplicate copies of two tasks on a very large shared memory structure.  The code is C++, which presents other difficulties, and all the text is in DDR3 because the application is quite large.  Due to this, I seem to have memory corruption issues if I do not control the heaps/stacks explicitly.

    So my idea is that each task on each core should have its own heap for its stack.  That is 16 heaps.

    Then I have a very large "scratch" heap, as I call it, to create this shared memory structure.  This memory structure uses the C++ STL vector and map.  I have posted previously on this (http://e2e.ti.com/support/development_tools/compiler/f/343/p/200748/719930.aspx#719930), but the summary to this point is that these STL containers generally use new/delete, and new/delete allocate memory from sysmem.  I was afraid to move sysmem out of L2 because I was concerned about unknown repercussions with the TI libraries, multicore, etc. of moving it to DDR3, and L2 is not large enough for these structures.  Instead, I create custom allocators that allocate from a given heap as determined by a passed-in variable.
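    For reference, a minimal sketch of such a heap-routed STL allocator is below.  The names here are hypothetical: on the target, heap_alloc/heap_free would call into the SYS/BIOS heap (e.g. HeapMem_alloc/HeapMem_free through the handle), while in this sketch they wrap malloc/free so it compiles standalone:

    ```cpp
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Hypothetical stand-ins for the SYS/BIOS heap calls. On the DSP these
    // would dispatch to HeapMem_alloc/HeapMem_free using the heap handle;
    // here they wrap malloc/free so the sketch builds off-target.
    typedef void* HeapHandle;

    static void* heap_alloc(HeapHandle /*heap*/, std::size_t size) {
        return std::malloc(size);
    }
    static void heap_free(HeapHandle /*heap*/, void* p, std::size_t /*size*/) {
        std::free(p);
    }

    // C++11-style minimal STL allocator that routes every allocation to the
    // heap selected by the handle passed to its constructor.
    template <typename T>
    class HeapAllocator {
    public:
        typedef T value_type;

        explicit HeapAllocator(HeapHandle heap) : mHeap(heap) {}
        template <typename U>
        HeapAllocator(const HeapAllocator<U>& other) : mHeap(other.mHeap) {}

        T* allocate(std::size_t n) {
            return static_cast<T*>(heap_alloc(mHeap, n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t n) {
            heap_free(mHeap, p, n * sizeof(T));
        }

        HeapHandle mHeap;  // which heap this allocator draws from
    };

    // Two allocators compare equal when they draw from the same heap, so
    // memory from one can be freed through the other.
    template <typename T, typename U>
    bool operator==(const HeapAllocator<T>& a, const HeapAllocator<U>& b) {
        return a.mHeap == b.mHeap;
    }
    template <typename T, typename U>
    bool operator!=(const HeapAllocator<T>& a, const HeapAllocator<U>& b) {
        return !(a == b);
    }
    ```

    A vector on the scratch heap would then be declared as, e.g., std::vector<int, HeapAllocator<int> > v(HeapAllocator<int>(scratchHeapHandle)); every element allocation goes through that heap rather than sysmem.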

    Here is my config file: 4113.pxmTracking.cfg

    There are a few heaps that just haven't been cleaned up yet that I won't need; they are remnants from another project.  I also changed my platform memory map (again, the naming is more specific to the other project, but the divisions are approximately what I need for this project).  Here is a sample from my last build:

             name            origin    length      used     unused   attr    fill

    ----------------------  --------  ---------  --------  --------  ----  --------

      L2SRAM                00800000   00080000  0006997b  00016685  RW X

      L1PSRAM               00e00000   00008000  00000000  00008000  RW X

      L1DSRAM               00f00000   00008000  00000000  00008000  RW 

      MSMCSRAM              0c000000   00080000  00046e31  000391cf  RW X

      MSMCSRAM_NDK          0c080000   000ffa00  00053100  000ac900  RW X

      MSMCSRAM_IMG_HDR      0c17fa00   00000600  00000000  00000600  RW X

      DDR3                  80000000   0ce00000  0340617e  099f9e82  RW X

      DDR3_SCRATCH          8ce00000   03200000  03200000  00000000  RW X

      DDR3_IMAGERY          90000000   10000000  01900000  0e700000  RW X

    I still have some areas that aren't full, so I am not yet so worried about running out of memory.  And when I look at the stack usage of the tasks on the slave cores, they are not using nearly as much as is allotted at the moment.  I will probably be able to shrink that down quite a bit.

    Any criticism and advice is welcome.  Can you say more about what kind of stack is created when you create a task statically rather than dynamically?  Mine are all dynamic.

    Thanks,
    Brandy

  • Also, I found the memory leak and corrected it.  Thanks for the tip about the HeapMem page.

  • Glad you found the memory leak!

    Regarding the design... multi-core and shared memory can be a pain :)  I'm assuming you are using the same .out for all 8 cores. Instead of having two sets of 8 heaps, you could just use two big two-dimensional arrays:

    Char taskAHeaps[8][TASKA_SIZE];
    Char taskBHeaps[8][TASKB_SIZE];

    And use DNUM to index as needed to set the taskParams.stack field.
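    Fleshed out slightly, the per-core stack arrays Todd describes might look like the sketch below.  The sizes and names are hypothetical; on the C66x, DNUM (the core number, declared in c6x.h) does the indexing, while here it is stubbed to 0 so the sketch compiles standalone:

    ```cpp
    #include <cstddef>

    // On the C66x, DNUM (from c6x.h) gives the number of the core executing
    // the code. For an off-target sketch we stub it to core 0; on the DSP,
    // remove this stub and #include <c6x.h> instead.
    #ifndef DNUM
    #define DNUM 0
    #endif

    // Hypothetical per-task stack sizes -- tune these to your application.
    #define NUM_CORES        8
    #define TASKA_STACK_SIZE 4096
    #define TASKB_STACK_SIZE 8192

    // One statically defined stack per core per task, in a single .out.
    // Every core loads the same image but only touches its own row.
    char taskAStacks[NUM_CORES][TASKA_STACK_SIZE];
    char taskBStacks[NUM_CORES][TASKB_STACK_SIZE];

    // Each core picks its own stack by indexing with DNUM, e.g. when
    // filling in Task_Params before Task_create():
    //     taskParams.stack     = myTaskAStack();
    //     taskParams.stackSize = TASKA_STACK_SIZE;
    char* myTaskAStack() { return taskAStacks[DNUM]; }
    char* myTaskBStack() { return taskBStacks[DNUM]; }
    ```

    This replaces the 16 dynamically created stack heaps with two static arrays, at the cost of reserving the worst-case stack size on every core.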

    Of course, if it ain't broke, don't fix it.

    Todd

  • Yes, quite a pain :) I hope I'll survive though.

    Would you mind saying more about:

    "If you create the Tasks statically, you do not need a heap. The stack is simply defined in the big generated .c file." - where would this stack be declared?

    and

    "I'm assuming you are using the same .out for all 8 cores" - would it be different/easier if I had different .out files?

     

    Thanks,
    Brandy

  • If a Task is created in the .cfg file, the actual stack is defined in the big generated .c file (e.g. debug\configPkg\package\cfg\<nameOfProject>_<targetSuffix>.c). Here is a snippet from that generated file

    __T1_ti_sysbios_knl_Task_Instance_State__stack ti_sysbios_knl_Task_Instance_State_0_stack__A[2048] __attribute__ ((section(".far:taskStackSection")));

    Note: since the file is generated, it is hard to read.

    Single .out for all cores vs. multiple .outs... Personally, I think it is easier to manage a single .out that runs on multiple cores. Otherwise, build times are long and keeping all the versions in sync is a pain. Seeing the memory map in one place is also nice. The downside is on the static configuration front: you have to do tricks like you are doing (e.g. make multiple copies) or move creation to the startup code in the application.

    Todd

    That is what I was thinking too about .out file maintenance, but I wanted to make sure I didn't miss some brilliant idea!


    Thanks for all your help!

    Brandy