Hello,
I am using ti-processor-sdk-rtos-j721e-evm-07_03_00_07 and its tidl_j7_02_00_00_07 to deploy our deep learning model. I compile and deploy the model with ONNX Runtime (onnxrt), and I wrote my application following the onnxrt samples. However, when I run the application, A72 memory usage increases significantly (mainly by roughly the amount of memory the model needs).
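For reference, here is roughly how my application creates the inference session (a minimal sketch based on the SDK's onnxrt samples; the "TIDLExecutionProvider" name and "artifacts_folder" option follow those samples, and the model path, artifacts path, and input shape are placeholders for my actual model):

```python
import numpy as np
import onnxruntime as rt

so = rt.SessionOptions()

# Provider options point at the TIDL artifacts generated during the
# model compilation step (option name as used in the onnxrt samples).
tidl_options = {"artifacts_folder": "./model-artifacts"}

# Create the session with the TIDL execution provider, falling back
# to the CPU provider for any unsupported layers.
sess = rt.InferenceSession(
    "./model.onnx",
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[tidl_options, {}],
    sess_options=so,
)

# Run one inference; the input name and shape are model-specific placeholders.
input_name = sess.get_inputs()[0].name
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = sess.run(None, {input_name: dummy_input})
```

It is while running a session like this that I observe the A72 memory growth.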
My understanding is that the model is inferenced on the C7x, and that the C7x has its own memory. Is that right? If so, how can I allocate the model's memory in C7x memory instead of A72 memory?