Hello,
I am using ti-processor-sdk-rtos-j721e-evm-07_03_00_07 and its tidl_j7_02_00_00_07 to deploy our deep learning model. I compile the model with onnxrt and deploy it, and I wrote my application based on the onnxrt samples. But when I run the application, I see A72 memory usage increase significantly (mainly by the amount of memory the model needs).
I think the model is inferenced on the C7x, and the C7x has its own memory; is that right? If so, how can I allocate the model's memory in C7x memory instead of A72 memory?
Hi,
Just a basic check: are you compiling the model for TIDL inference by adding the TIDL Execution Provider to the EP list when creating the ONNXRT inference session, as in the sketch below?
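
For reference, here is a minimal sketch (Python, patterned on TI's open-source runtime examples) of creating a session with the TIDL EP first in the provider list. The option keys ('tidl_tools_path', 'artifacts_folder') and the paths shown are illustrative and may differ between TIDL/SDK versions, so please check them against the onnxrt samples shipped with your SDK:

# Minimal sketch, assuming an onnxruntime build with the TIDL EP (as in the
# Processor SDK) and that the model was already compiled into TIDL artifacts
# stored under ./model-artifacts. Option keys follow TI's OSRT examples and
# may vary between TIDL versions.
import onnxruntime as rt

so = rt.SessionOptions()

# TIDL must come first in the EP list; any layers it cannot offload to the
# C7x/MMA fall back to the CPUExecutionProvider running on the A72.
ep_list = ['TIDLExecutionProvider', 'CPUExecutionProvider']
tidl_options = {
    'tidl_tools_path': '/path/to/tidl_tools',   # hypothetical path
    'artifacts_folder': './model-artifacts',    # compiled TIDL artifacts
}

sess = rt.InferenceSession(
    'model.onnx',
    providers=ep_list,
    provider_options=[tidl_options, {}],  # one options dict per EP
    sess_options=so,
)

# Verify that TIDLExecutionProvider is actually active; if it is missing
# here, the whole model runs on the A72 CPU path.
print(sess.get_providers())

If the TIDL EP is not in the list (or the artifacts folder is wrong), onnxruntime silently runs the entire model on the A72, which would explain the large A72 memory growth you are seeing.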
Regards,
Anand