A. TIDL-RT
I am following this particular instruction set:
The demo runs correctly, and the required output files are generated after compilation. I have a few questions:
1. How do I use these output files to run the same model on the EVM?
2. If I want to use one of the other models listed in "ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/test/testvecs/config/import/public/onnx/", where should I download that particular model from, since the "../../test/testvecs/models/public/onnx/" folder is empty?
3. If I want to use a custom model, how should I generate a .txt import configuration file similar to those present in "ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/test/testvecs/config/import/public/onnx", given that I already have the model and its generated artifacts? My current attempt at such a file is sketched below.
Also, does the above link need to be referenced every time a new custom model is imported?
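For question 3, this is the kind of import configuration I am trying to write for my custom model. It is only a rough sketch based on my reading of the sample configs in the import/public/onnx folder: the key names are the ones I could identify in those samples (so they may be incomplete or wrong), and the file names, mean/scale values, and input dimensions are placeholders from my own model, not values from the SDK.

    modelType          = 2
    numParamBits       = 8
    inputNetFile       = "../../test/testvecs/models/public/onnx/my_custom_model.onnx"
    outputNetFile      = "../../test/testvecs/config/tidl_models/onnx/tidl_net_my_custom_model.bin"
    outputParamsFile   = "../../test/testvecs/config/tidl_models/onnx/tidl_io_my_custom_model_"
    inDataNorm         = 1
    inMean             = 123.675 116.28 103.53
    inScale            = 0.017125 0.017507 0.017429
    inWidth            = 224
    inHeight           = 224
    inNumChannels      = 3
    numFrames          = 1
    inData             = "../../test/testvecs/config/my_calibration_list.txt"
    postProcType       = 1

In particular, I am not sure which of these keys are mandatory for a custom ONNX model and which can be left at their defaults, so any corrections would be appreciated.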
B. Edgeai TIDL Tools
The other method I am using to generate the inference artifacts is the Edge AI TIDL Tools repository: https://github.com/TexasInstruments/edgeai-tidl-tools.
Since this Git repository lives outside the PSDK, which files from the repository need to be moved to which folders in the PSDK after a model has been compiled on the PC? For reference, the compilation step I ran is sketched below.
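This is roughly the compilation script I ran on the PC, following the ONNX runtime examples in the edgeai-tidl-tools repository. It is a simplified sketch: the model path, artifacts folder, input shape, and number of calibration frames are placeholders from my setup, and the provider option names reflect my understanding of the repository's examples, so please correct me if any of them are wrong.

    import os
    import numpy as np
    import onnxruntime as rt

    # Paths below are placeholders from my local setup, not part of the repository.
    model_path = "models/my_custom_model.onnx"
    artifacts_dir = "model-artifacts/my_custom_model/"
    os.makedirs(artifacts_dir, exist_ok=True)

    # Compilation/delegate options as I understood them from the repo's ONNX examples.
    compile_options = {
        "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
        "artifacts_folder": artifacts_dir,
    }

    so = rt.SessionOptions()
    sess = rt.InferenceSession(
        model_path,
        sess_options=so,
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[compile_options, {}],
    )

    # As in the examples, a few frames are run through the session for calibration;
    # after this, the compiled artifacts appear in artifacts_dir.
    input_name = sess.get_inputs()[0].name
    dummy_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape
    for _ in range(4):
        sess.run(None, {input_name: dummy_frame})

After this runs, the artifacts folder contains the compiled network and I/O descriptor files, and my question above is about where exactly these (and any other generated files) should be copied inside the PSDK directory tree so the model can be run on the EVM.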