
TDA4VM: Neural network model inference memory management

Part Number: TDA4VM

Hi Team, 

I am trying to understand how memory is managed during neural network model inference on the DSP.

Question 1: Will the two .bin files (net.bin and io.bin, which are required for model inference) be loaded entirely into RAM from the SD card?

Question 2: For example, if the combined size of the two .bin files is 5 MB, will the full 5 MB be loaded at application initialization and released only at deinitialization, or will it be loaded and released around each inference call?
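To make Question 2 concrete, below is a minimal sketch of what I am assuming happens: both files are read fully into heap buffers at application init and freed only at deinit. The SD card paths are just placeholders, and this is generic file I/O, not the actual TIDL loading code.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: read an entire file into a heap buffer, the way
 * I assume the model binaries are pulled from the SD card into RAM. */
static void *load_file(const char *path, long *size_out)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return NULL;

    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    void *buf = malloc((size_t)size);
    if (buf != NULL && fread(buf, 1, (size_t)size, fp) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(fp);

    if (buf != NULL && size_out != NULL)
        *size_out = size;
    return buf;
}

int main(void)
{
    long net_size = 0, io_size = 0;

    /* App init: are both .bin files fully resident in RAM from here on?
     * (paths are placeholders for the SD card mount point) */
    void *net = load_file("/media/sd/net.bin", &net_size);
    void *io  = load_file("/media/sd/io.bin",  &io_size);
    if (net == NULL || io == NULL) {
        free(net);
        free(io);
        return 1;
    }

    printf("net.bin: %ld bytes, io.bin: %ld bytes in RAM\n",
           net_size, io_size);

    /* ... inference would run here using the two buffers ... */

    /* App deinit: are the buffers released only at this point? */
    free(net);
    free(io);
    return 0;
}
```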

Could you please help me understand the answers to these two questions?

Thanks and Regards,

Aneesh