Hi, I'm using the Edge AI tools to run inference on compiled ONNX models.
The quantized model (.bin file) was imported with the TIDL importer.
However, the model structures are different: the model compiled by Edge AI contains some data_convert layers.
Is it possible to run inference with Edge AI using the .bin file imported by the TIDL importer? Thanks!
Here's a comparison of the visualized models (compiled by Edge AI vs. imported by the TIDL importer).
Hi,
As I understand this query, you have compiled the ONNX model for the TDA4VM SoC and you want to perform model inference using the same?
For the edgeai-linux SDK, you can refer to this FAQ: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1228053/faq-sk-tda4vm-how-to-do-inferencing-with-custom-compiled-model-on-target
Please note that the edgeai-linux SDK flow mentioned above needs a param.yaml file to be present as part of the artifacts. (This can be achieved by compiling your model with the edgeai-benchmark repo here: https://github.com/TexasInstruments/edgeai-benchmark)
If you have generated the model artifacts using the edgeai-tidl-tools repo (https://github.com/TexasInstruments/edgeai-tidl-tools),
you can refer to the doc here: https://github.com/TexasInstruments/edgeai-tidl-tools#benchmark-on-ti-soc
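For reference, a minimal sketch of what on-target inference with compiled artifacts looks like in the edgeai-tidl-tools flow, assuming the TIDL-enabled onnxruntime build that ships with that repo. The artifacts path and option values here are placeholders, not your actual setup:

```python
# Hedged sketch: run inference through the TIDL execution provider,
# assuming the TI-provided onnxruntime build from edgeai-tidl-tools.
import os

artifacts_dir = "model-artifacts/my_model"            # placeholder: your compiled artifacts folder
model_path = os.path.join(artifacts_dir, "model.onnx")  # placeholder model path

# Provider options follow the edgeai-tidl-tools examples; "artifacts_folder"
# must point at the directory produced during model compilation.
tidl_options = {"artifacts_folder": artifacts_dir, "debug_level": 0}

# TIDL executes the offloaded subgraphs; any unsupported layers
# (e.g. the data_convert layers) fall back to the CPU provider.
providers = ["TIDLExecutionProvider", "CPUExecutionProvider"]

def run_inference(input_dict):
    """Create a session and run one inference (needs the TI onnxruntime build)."""
    import onnxruntime as rt
    sess = rt.InferenceSession(
        model_path,
        providers=providers,
        provider_options=[tidl_options, {}],
    )
    return sess.run(None, input_dict)
```

Note that this requires the full set of artifacts generated at compile time; a .bin file imported by the standalone TIDL importer alone does not include the param.yaml and subgraph metadata this flow expects.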