CC1352R: Questions about using GPRAM (dynamically?)

Part Number: CC1352R

Hello,

The application we are developing is almost finished, but one part, used relatively rarely, can lead to RAM exhaustion.
We have already minimized RAM usage quite a bit (smaller static buffers, creating sporadically used tasks dynamically, trimming stack sizes, etc.), but not yet far enough.

Reclaiming the 8 kB of GPRAM by disabling the cache would solve our problem. However, the sacrificed speed might introduce other problems.
Since the use case that leads to the RAM problems is very rare (once or twice a year), it would be ideal if we could sacrifice the cache dynamically. This is described here.

However, it is not quite clear to us how to use this, and what problems it can cause.
We would like to implement it in the Simple Peripheral code of SimpleLink CC13x2/CC26x2 SDK v3.20.00.68.

Could someone give us some details?

  • Hi,

    The dynamic approach is a bit trickier, as you would not be able to "assign" RAM in the same way (the linker cannot determine what would end up in GPRAM and what would not). One way is to follow the steps for "enabling" the cache as RAM in software but leave out the linker part; this prevents anything from being placed in that area at link time.

    You could then perform the enable/disable steps in software as you need them and manually address the memory range that makes up the GPRAM (in other words, you would need to point into it manually).
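
    For reference, a minimal sketch of what those enable/disable steps might look like using the driverlib VIMS API. This is an assumption-laden sketch, not SDK-verified code: it assumes `VIMSModeSafeSet()` and the `VIMS_MODE_*` constants from driverlib `vims.h`, and that `GPRAM_BASE`/`VIMS_BASE` come from `hw_memmap.h`; please check the names against your SDK version (v3.20.00.68).

    ```c
    /* Sketch only: hand the 8 kB cache over as GPRAM at runtime, and back.
     * Assumes the CC13x2/CC26x2 driverlib VIMS API; verify against your SDK. */
    #include <stdint.h>
    #include <ti/devices/DeviceFamily.h>
    #include DeviceFamily_constructPath(driverlib/vims.h)  /* VIMSModeSafeSet() */
    #include DeviceFamily_constructPath(inc/hw_memmap.h)   /* GPRAM_BASE, VIMS_BASE */

    #define GPRAM_SIZE 0x2000u  /* 8 kB, assumed from the memory map */

    /* Disable the cache so the underlying SRAM appears at GPRAM_BASE.
     * The 'true' argument blocks until the mode change has completed. */
    static void gpram_enter(void)
    {
        VIMSModeSafeSet(VIMS_BASE, VIMS_MODE_DISABLED, true);
    }

    /* Give the SRAM back to the cache; anything stored in GPRAM is lost. */
    static void gpram_leave(void)
    {
        VIMSModeSafeSet(VIMS_BASE, VIMS_MODE_ENABLED, true);
    }

    /* Example use as a raw scratchpad, manually pointing into the range. */
    static void gpram_scratch_example(void)
    {
        gpram_enter();
        volatile uint8_t *scratch = (volatile uint8_t *)GPRAM_BASE;
        scratch[0] = 0xAA;  /* any use of GPRAM must happen between the calls */
        gpram_leave();
    }
    ```

    Note the caveat in the comment: GPRAM contents do not survive re-enabling the cache, so the region is only usable between the enter/leave calls.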

  • Hi M-W,

    Thank you for your response. If I understand correctly:
    we could use the GPRAM as a sort of scratchpad while the cache is disabled, but it is not possible to dynamically switch from cache to making it available as heap?

    Edit:
    Or perhaps we could make a sort of wrapper around the malloc() functions to work around this (i.e. normal malloc() operation while the cache is in use, but allocating from GPRAM when it is disabled)?
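
    Something along these lines is what I have in mind, as a sketch only. All names here (`app_malloc`, `gpram_heap_begin`, etc.) are made up for illustration, and the region base/size are injected rather than hard-coded so the idea can be tried off-target; on the real device the base would be the GPRAM address with the cache disabled.

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical wrapper: while the cache has been handed over as GPRAM,
     * serve allocations from a simple bump allocator over that region;
     * otherwise (or when the region is full) fall through to the normal heap. */

    static uint8_t *gpram_base;
    static size_t   gpram_size;
    static size_t   gpram_used;
    static bool     gpram_active;

    /* Call after disabling the cache, passing the GPRAM range. */
    void gpram_heap_begin(void *base, size_t size)
    {
        gpram_base   = (uint8_t *)base;
        gpram_size   = size;
        gpram_used   = 0;
        gpram_active = true;
    }

    /* Call before re-enabling the cache; reclaims all bump allocations at once. */
    void gpram_heap_end(void)
    {
        gpram_active = false;
    }

    void *app_malloc(size_t n)
    {
        if (gpram_active) {
            size_t aligned = (n + 7u) & ~(size_t)7u;  /* 8-byte alignment */
            if (gpram_used + aligned <= gpram_size) {
                void *p = gpram_base + gpram_used;
                gpram_used += aligned;
                return p;
            }
        }
        return malloc(n);  /* normal heap when the cache is in use */
    }

    void app_free(void *p)
    {
        /* Bump allocations are freed en masse by gpram_heap_end();
         * only pointers from the real heap go back to free(). */
        if ((uint8_t *)p >= gpram_base && (uint8_t *)p < gpram_base + gpram_size)
            return;
        free(p);
    }
    ```

    The obvious trade-off is the extra region check on every allocation and free, plus the constraint that GPRAM-backed allocations must not outlive re-enabling the cache.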

  • Hi,

    That would be my suggestion, yes. I think this might be more efficient and take less time to implement than making the GPRAM dynamically available as heap. If your application is fine without the extra RAM in all cases except this special one, then I would go for the more "direct" approach.

    You could possibly wrap the malloc function if you wish, but depending on how your application uses malloc, the extra "is the cache in use?" checks might add unwanted overhead, given that you rarely need the extra RAM.

  • Thank you for your nice and thorough answers. I will mark this thread as resolved.