Tool/software:
Hello,
I am working on a TI AM62A board and would like to run my pre-trained custom deep learning model (converted to ONNX/TIDL format) on the accelerator using C++.
I am aware of the Python-based examples (EdgeAI-TIDL tools, SDK demos), but my requirement is an end-to-end C++ pipeline in which I can (a rough sketch of what I have in mind follows this list):
- Capture input frames from a USB/MIPI camera.
- Run inference on the accelerator.
- Perform minimal post-processing.
- Render the output on the HDMI display.
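To make the target concrete, here is the loop structure I have in mind, using OpenCV for capture/display and the ONNX Runtime C++ API for inference. This is only a sketch based on my assumptions: the model path, input tensor name/shape, and especially the TIDL execution-provider registration step are placeholders, since that registration is exactly the part I could not find documented for C++.

```cpp
// Rough sketch of the pipeline I am aiming for (not working code yet).
// Assumptions: ONNX Runtime C++ API, OpenCV for capture/display, and the
// TIDL execution provider registered on the session options (the exact
// registration call is the part I am unsure about).
#include <onnxruntime_cxx_api.h>
#include <opencv2/opencv.hpp>
#include <cstring>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "am62a-demo");
    Ort::SessionOptions opts;
    // TODO: register the TIDL execution provider here so inference runs on
    // the accelerator instead of the Arm cores -- this is the step I need
    // guidance on (header/function name assumed, not confirmed).

    Ort::Session session(env, "model.onnx", opts);      // path is a placeholder

    Ort::MemoryInfo mem =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    std::vector<int64_t> shape{1, 3, 224, 224};          // assumed input shape
    std::vector<float> input(1 * 3 * 224 * 224);
    const char* in_names[]  = {"input"};                  // assumed tensor names
    const char* out_names[] = {"output"};

    cv::VideoCapture cap(0);                              // USB camera via V4L2
    cv::Mat frame;
    while (cap.read(frame)) {
        // Minimal pre-processing: resize and repack HWC/uint8 -> CHW/float.
        // (Channel order and normalization would need to match my model.)
        cv::Mat resized;
        cv::resize(frame, resized, cv::Size(224, 224));
        resized.convertTo(resized, CV_32FC3, 1.0 / 255.0);
        std::vector<cv::Mat> ch(3);
        cv::split(resized, ch);
        for (int c = 0; c < 3; ++c)
            std::memcpy(input.data() + c * 224 * 224,
                        ch[c].ptr<float>(), 224 * 224 * sizeof(float));

        Ort::Value in = Ort::Value::CreateTensor<float>(
            mem, input.data(), input.size(), shape.data(), shape.size());
        auto out = session.Run(Ort::RunOptions{nullptr},
                               in_names, &in, 1, out_names, 1);
        // Minimal post-processing and overlay on the frame would go here ...

        cv::imshow("output", frame);   // stand-in for the HDMI/display path
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```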
Could you please guide me with:
- What is the recommended approach to integrate a custom ONNX model in C++?
- Are there sample applications or reference pipelines (beyond the Python demos) that I can build on?
- How do I set up the inference node in C++ with a custom model?
- Is there any step-by-step documentation or example for an end-to-end C++ pipeline (camera → inference → display)?
- Please also suggest which framework is better for inference performance and flexibility in this use case: edgeai_gst_app, OpTIFlow, or TIOVX?
This will help me accelerate development on my project where Python is not an option.
Thanks in advance!