Hi,
The TIDL model conversion examples use the open source runtime (OSRT) platforms, and the documentation suggests these are the more user-friendly route for model conversion. Indeed, they are very easy to use: ONNX (onnxrt) or TFLite models are normally converted to TIDL models through this flow.
However, to run these TIDL models in vision_apps, we provide just the compiled artifacts. There is a small preprocessing step in the TIDL RT library that happens when execution is launched through onnxrt_ep or tfl_delegate. Is there a TI example of, or is it otherwise possible, running onnxrt_ep or tfl_delegate as part of one of the OpenVX nodes?
Also, we sometimes run a small part of our model on the CPU, as an ONNX model in CPU execution mode. Would it be possible to package the ONNX runtime inside an OpenVX node that could be used in vision_apps?
We would like to know how easy or difficult the above two points are to implement. Also, if you could point us to any examples that do something similar, it would be really helpful.
Thank You
Niranjan
Hi Niranjan,
There is a small preprocessing step in the TIDL RT library that happens when execution is launched through onnxrt_ep or tfl_delegate.
Could you please elaborate on which preprocessing step you are referring to here?
Also, we sometimes run a small part of our model on the CPU, as an ONNX model in CPU execution mode. Would it be possible to package the ONNX runtime inside an OpenVX node that could be used in vision_apps?
Could you please elaborate on this as well?
Regards,
Nikhil