
IWR6843AOP: Extend DSP functionality -> Memory Issues

Part Number: IWR6843AOP
Other Parts Discussed in Thread: MMWAVEICBOOST, AWR6843AOPEVM

Hello, 

I am currently using the Overhead 3D People Counting lab from the Industrial Toolbox. For testing purposes, I wanted to cluster the point cloud data generated on the DSP before passing it via the shared L3 memory to the R4F.

I implemented the DBSCAN library that can be found in the mmWave SDK into main_dss and adapted the structure that is then passed to L3. So far so good; everything is clear up to this point.

When compiling, the compiler reports that the program won't fit into memory. Using the "Memory Allocation" view in CCS, I can see that all the available memories are already filled; only L3 has some space left, and I am not sure to what extent I can or should use it to store program code.

To test that, I edited the linker file so that the ".text" section overflows into L3. The project now compiles successfully, but when I build and flash it to the board, the IWR simply refuses to run properly. I tried debugging it, but I get unexpected behavior where the IWR just stops execution without any debug messages, as if it runs into unrelated memory regions.
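For reference, spilling ".text" into L3 can be expressed in the linker command file with the linker's ">>" splitting operator. This is a sketch only; the memory-range names below are assumptions modeled on typical mmWave demo linker files, not copied from the lab:

```
SECTIONS
{
    /* ">>" lets the linker split an output section across several
       memory ranges: fill L2 first, then spill the rest into L3 */
    .text: {} >> L2SRAM_UMAP1 | L2SRAM_UMAP0 | L3SRAM
}
```

One thing worth checking with this approach: code placed in L3SRAM executes more slowly, and if the demo's data path also claims L3 at runtime (for example for the radar cube or point cloud buffers), the spilled code can be silently overwritten, which would fit the "refuses to run, no debug output" behavior.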

I also tried reducing .dpc_l2Heap from the stock 131 kB to 126 kB instead of overflowing the .text region into L3. This also compiled successfully but did not produce a working outcome either.

Long story short, I have no clue how to fit DBSCAN into the DSP, but I refuse to believe that the whole memory is really almost 100% loaded.

Please give me a hint about which direction to look in. I have already gone through all the reference manuals I could find, such as the C674x Megamodule Reference Guide and the C6000 Programmer's Guide.

PS: I am deliberately trying to get DBSCAN running instead of using the gTrack library.

PPS: If there were an implementation of DBSCAN for the R4F instead of the DSP, I would also use that, since the R4F certainly has some memory left to work with.

Thanks in advance, 

best regards, 

Sebastian

  • Hi Sebastian,

    The Overhead 3D People Counting code is pretty full, as you're experiencing, so it may be smart to think about what can be removed. If you're not interested in using gTrack, then there will certainly be room on the R4F once we remove it. Let's start with just a few basic questions:

    Where are you getting the DSP implementation of DBSCAN from?

    What toolbox version are you using?

    How much memory do you need in order to fit your DBSCAN clustering algorithm into the DSP?

    Best,

    Nate

  • Hi Nathan!

    I use the DBSCAN implementation that can be found in the mmWave SDK, version 03_05_00_04.

    I use the Toolbox version 4_11_0.

    When adding the DBSCAN implementation, I am short around 6 kB, so I would estimate the whole implementation requires around 7 kB.

    A little update regarding that problem: I managed to fit everything more or less properly by doing the following:

    - Reducing L2HEAPSIZE in dss_main from 0x1FF00 to 0x1E848.

    - Increasing the systemHeap in pcount3D_dss.cfg from 11 * 1024 to 20 * 1024.

    - Increasing the stack size of the DPC task to 5 * 1024 (because I call DBSCAN right before transmitting the gathered point cloud to L3).
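    The systemHeap change above would look roughly like this in the SYS/BIOS configuration script. This is a sketch only; the exact variable names in the stock pcount3D_dss.cfg are assumptions (the task stack size, by contrast, is set where the task is created in the C code):

```
/* pcount3D_dss.cfg -- sketch of the system heap adjustment */
var HeapMem = xdc.useModule('ti.sysbios.heaps.HeapMem');

var heapMemParams = new HeapMem.Params();
heapMemParams.size = 20 * 1024;           /* was 11 * 1024 */
heapMemParams.sectionName = "systemHeap";
Program.global.heap0 = HeapMem.create(heapMemParams);
```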

    Everything seems to fit into memory, compiles, and runs as far as I can tell.

    The problem is that I cannot really verify whether it is working, because I am not receiving the UART output of the cluster information as intended, and when I hook up the MMWAVEICBOOST and debug on the AWR6843AOPEVM, I keep getting an assertion failure in objectdetection.c on both the DSS side and the MSS side (line 417 on DSS, line 663 on MSS).

    BUT when not hooked up to the debugger, the indicator LED on the EVM that signals a successful sensorStart lights up.

    So right now I'm a little clueless about how to set up the debugger correctly so that I can monitor the output of DBSCAN without triggering an assertion failure...

    Sorry for the jumping between problems; I hope you can follow my storyline...

    Best regards, 

    Sebastian

    ----------------

    EDIT:

    While scrolling through E2E I found the following question:

    https://e2e.ti.com/support/sensors-group/sensors/f/sensors-forum/1073883/awr1843-error-while-debugging

    It describes a similar problem with the assertion failure.

    I will try the recommendations in there as well.

  • Hi Sebastian,

    That thread recommends exactly what I would do. Let me know how it goes.

    Best,

    Nate

  • Hi Nathan,

    I tried it all, but it always comes down to memory issues on the DSP again. 

    Whatever problem came along, it always turned out to be caused by handshakes not being answered by the DSP and similar issues. I was only able to resolve them by taking DBSCAN out of the code and resetting all the memory settings to their defaults.

    I tried the following things: 

    - Changing the compiler's speed-vs-size optimization options to 0 and 1. This made more memory available, so I wasn't forced to decrease the heap size and could also use L1 as memory. Didn't work.

    - Reducing the L2 heap from 131 kB to 125 kB again. Everything fit into memory, but the MSS wouldn't accept the hard-coded configuration because some handshake was not answered by the DSS.

    - Moving memory sections in the linker file from L2 to L3 (e.g. .const); I also tried to let L2SRAMUMAP1 overflow into L3SRAM. Didn't work.

    I am also thinking about porting the DBSCAN library to the R4F, but since the functions, the data types used, and everything else seem quite optimized to run on the DSP, I would rather not mess with that...
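    On the porting question: the DBSCAN algorithm itself has no DSP-specific dependencies, so a plain-C, allocation-free version would in principle also run on the R4F. The sketch below is a generic illustration of the algorithm, not the mmWave SDK implementation; the point layout, fixed buffer size, and float distance metric are illustrative assumptions:

```c
/* Generic, allocation-free DBSCAN sketch in plain C (not the mmWave SDK
 * implementation).  Assumes n <= DBSCAN_MAX_POINTS. */

#define DBSCAN_MAX_POINTS 64
#define DBSCAN_NOISE      (-1)
#define DBSCAN_UNVISITED  (-2)

typedef struct { float x, y, z; } DbscanPoint;

static float distSq(const DbscanPoint *a, const DbscanPoint *b)
{
    float dx = a->x - b->x, dy = a->y - b->y, dz = a->z - b->z;
    return dx * dx + dy * dy + dz * dz;
}

/* Labels every point with a 0-based cluster id or DBSCAN_NOISE and
 * returns the number of clusters found. */
int dbscan(const DbscanPoint *pts, int n, float epsSq, int minPts, int *labels)
{
    int queue[DBSCAN_MAX_POINTS];
    int i, j, clusterId = 0;

    for (i = 0; i < n; i++)
        labels[i] = DBSCAN_UNVISITED;

    for (i = 0; i < n; i++) {
        int qHead = 0, qTail = 0, neigh = 0;

        if (labels[i] != DBSCAN_UNVISITED)
            continue;

        for (j = 0; j < n; j++)              /* count the seed's neighbours */
            if (distSq(&pts[i], &pts[j]) <= epsSq)
                neigh++;
        if (neigh < minPts) {
            labels[i] = DBSCAN_NOISE;        /* may become a border point later */
            continue;
        }

        labels[i] = clusterId;               /* grow a new cluster (BFS) */
        queue[qTail++] = i;
        while (qHead < qTail) {
            int p = queue[qHead++], pNeigh = 0;
            for (j = 0; j < n; j++)
                if (distSq(&pts[p], &pts[j]) <= epsSq)
                    pNeigh++;
            if (pNeigh < minPts)
                continue;                    /* border point: don't expand */
            for (j = 0; j < n; j++) {
                if (distSq(&pts[p], &pts[j]) > epsSq)
                    continue;
                if (labels[j] == DBSCAN_NOISE)
                    labels[j] = clusterId;   /* noise re-labelled as border */
                if (labels[j] != DBSCAN_UNVISITED)
                    continue;
                labels[j] = clusterId;
                if (qTail < DBSCAN_MAX_POINTS)
                    queue[qTail++] = j;
            }
        }
        clusterId++;
    }
    return clusterId;
}
```

    The static footprint here is just the label and queue arrays plus the code itself, which is broadly in line with the few kB discussed above.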

    I also seem to have too little understanding of how the system really works, so I don't get why the communication between the R4F and the DSP simply stops working when I move the memory partitions around. Is there any literature you can recommend?

    After all, it seems I either need to remove some parts of the code that are not necessary (though I suspect only the necessary parts are left), or move a memory partition to another location to make room for more code; that might help as well.

    I hope this outlines the state of my issue!

    Best regards, 

    Sebastian

  • Hi Sebastian,

    Let me confer with some colleagues on this question and get back to you tomorrow. Feel free to ping the thread if I do not respond by then.

    Best,

    Nate

  • Hi Sebastian

    Can you run the program in CCS debug and take note of how much of the L2 heap is being used? (It is printed in the CCS console right after you send the configuration file.)

    After this, reduce L2HEAPSIZE to the smallest possible size. If you aren't tracking, you should reduce the number of tracks you allocate in the tracking configuration and see what your minimum L2HEAPSIZE is.

    Best,

    Nate

  • Hi Nathan!

    I did the following things:

    - In dss_main.c, I reduced L2HEAPSIZE to 0x1E848.

    - In pcount3D_dss.cfg, I increased heapMemParams.size to 12 * 1024 to make room for the DBSCAN implementation.

    - I inserted the DBSCAN call in Pcount3DDemo_DPC_RadarProc_dpmTask, right before the point cloud is transferred to L3. This can of course be moved somewhere more suitable later, but for now it should do.

    - I am using the hard-coded configuration, so I edited the trackingCfg parameter to allocate a maximum of 1 track.

    I then entered debug mode but ran into the same issue I had before: the hard-coded configuration is not applied completely, so the setup process never finishes. I stepped through the code until I found the point where it gets stuck:

    This is the last executed command; after that, the program is stuck in Pcount3DDemo_sleep in mss_main.

    This also means, that I cannot read the used L2HEAP size. 

    But in the process I found something rather interesting:

    The hard-coded configuration does not seem to be loaded properly. I am printing every configuration line to the console, along with whether the command was handled properly:

    and it seems that the commands stored in the "mmwave extension table" in cli_mmwave.c of the hard-coded config are not loaded properly...

    But when I run the code on the EVM, not in debug mode, the LED that indicates a successful setup lights up and I get the expected output in Tera Term, which makes me think it's not a memory issue after all but something with the debugger. In any case, I need the debugger to work as well...

    Best regards, 

    Sebastian

    -------------

    EDIT:

    I upgraded to the latest Industrial Toolbox version (4.12.1), but that didn't change anything compared to the previous version I used (4.11.0).

  • Update: 

    I just managed to get the configuration working while debugging:

    I increased the stack size of Pcount3DDemo_DPC_RadarProc_dpmTask to 5 * 1024, but I already had that setting before, so I am not entirely sure why it's working now, to be honest...

    I will do some more testing to see if it works consistently now!

    Regards

    Sebastian

    ---------

    EDIT:

    I cannot reproduce the working behavior. As soon as I hit "Run" in the debugger, it refuses to complete the setup process.

    But when I simply power the board over USB, it just works.