Hi Team,
I am trying to understand how memory is managed during neural network model inference on the DSP.
Question 1: Will the two .bin files (net.bin and io.bin, which are required for model inference) be loaded entirely into RAM from the SD card?
Question 2: For example, if the combined size of the two .bin files is 5 MB, will the full 5 MB be loaded at application initialization and released only at deinitialization, or will it be loaded and released around each inference call? A minimal sketch of the two cases I have in mind is below.
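To make the question concrete, here is a small C sketch of the two lifetimes I am asking about. This is only an illustration of my question, not real SDK code: the load_file() helper is hypothetical, and the file names and the 5 MB figure come from my example above.

```c
/* Sketch of the two load/release lifetimes in question.
 * load_file() is a hypothetical helper, not an SDK API. */
#include <stdio.h>
#include <stdlib.h>

/* Read an entire file from the SD card filesystem into a RAM buffer. */
static void *load_file(const char *path, size_t *size_out)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return NULL;
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    void *buf = malloc((size_t)size);
    if (buf != NULL && fread(buf, 1, (size_t)size, fp) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(fp);
    if (buf != NULL && size_out != NULL)
        *size_out = (size_t)size;
    return buf;
}

int main(void)
{
    size_t net_size = 0, io_size = 0;

    /* Case A: both .bin files are loaded once at application init ... */
    void *net = load_file("net.bin", &net_size);
    void *io  = load_file("io.bin", &io_size);

    /* ... all inference calls then run against the resident buffers ... */

    /* ... and the buffers are released only at deinitialization. */
    free(net);
    free(io);

    /* Case B (the alternative I am asking about): the runtime would
     * instead perform this load/free pair around each inference call,
     * so the 5 MB is resident only while an inference is running. */
    return 0;
}
```

Which of these two cases matches the runtime's actual behavior?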
Could you please help me understand the answers to these two questions?
Thanks and Regards,
Aneesh