
SK-AM62A-LP: How to run a pre-trained custom deep learning model on the accelerator in C++? (end-to-end pipeline)

Part Number: SK-AM62A-LP


Hello,

I am working on a TI AM62A board and would like to run my pre-trained custom deep learning model (converted to ONNX/TIDL format) on the accelerator using C++.

I am aware of the Python-based examples (EdgeAI-TIDL tools, SDK demos), but my requirement is a C++ end-to-end pipeline where I can:

  1. Capture input frames from a USB/MIPI camera.

  2. Run inference on the accelerator.

  3. Perform minimal post-processing.

  4. Render the output on HDMI display.

Could you please guide me with:

  • What is the recommended approach to integrate a custom ONNX model in C++?

  • Are there sample applications or reference pipelines (beyond Python demos) that I can build on?

  • How do I set up the inference node in C++ with a custom model?

  • Any step-by-step documentation or examples for an end-to-end C++ pipeline (camera → inference → display)?

  • Please also suggest which framework is better for inference performance and flexibility in this use case: edgeai_gst_app, OpTIFlow, or TIOVX?

This will help me accelerate development on my project where Python is not an option.

Thanks in advance!

  • Hi Aniket, 

    Please see the C++ side of edgeai-gst-apps; I think it covers most of your stated needs. OpTIFlow is a similar option, since all components of the application are effectively C code inside the GStreamer plugins, which in turn call the underlying TIOVX applications.
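
    To illustrate the shape of such a pipeline, here is a rough gst-launch-style sketch of the camera → inference → display flow. This is illustrative only: the element names come from TI's edgeai GStreamer plugins, the device node and model directory are placeholders, and the real pipelines in edgeai-gst-apps additionally tee the video branch for overlay, so please take the exact element set and properties from the pipelines that edgeai-gst-apps generates for your SDK version.

    ```
    gst-launch-1.0 v4l2src device=/dev/videoN ! videoconvert ! \
      tiovxmultiscaler ! tiovxdlpreproc ! \
      tidlinferer model=/path/to/model_artifacts ! tidlpostproc ! \
      kmssink
    ```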

    • What is the recommended approach to integrate a custom ONNX model in C++?

    See edgeai-tidl-tools/examples/osrt_cpp for this. It contains minimal examples that leverage ONNX Runtime. You will first need to compile the model; please use the osrt_python examples in the same repo for that purpose. The repo also includes documentation for custom model evaluation that describes the steps and the setup process.
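
    As a rough sketch of what the C++ side involves, the snippet below shows the preprocessing step a camera → inference pipeline typically needs: converting an interleaved HWC uint8 frame (as most capture APIs deliver it) into the planar, normalized float CHW buffer that ONNX models usually expect. The helper name and normalization parameters are my own illustration, not from the osrt_cpp examples; the ONNX Runtime session wiring is shown only as comments, since the TIDL execution-provider registration is SDK-specific and should be copied from osrt_cpp.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Convert an HWC uint8 image to CHW float, applying (pixel - mean[c]) / stddev[c].
    std::vector<float> nhwc_to_nchw(const std::vector<uint8_t>& hwc,
                                    int height, int width, int channels,
                                    const std::vector<float>& mean,
                                    const std::vector<float>& stddev) {
        std::vector<float> chw(hwc.size());
        for (int c = 0; c < channels; ++c)
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    const uint8_t px = hwc[(y * width + x) * channels + c];
                    chw[c * height * width + y * width + x] =
                        (static_cast<float>(px) - mean[c]) / stddev[c];
                }
        return chw;
    }

    int main() {
        // Tiny 2x2 RGB frame as a stand-in for a captured camera frame.
        std::vector<uint8_t> frame = {0, 255, 128,  64, 32, 16,
                                      200, 100, 50,  10, 20, 30};
        std::vector<float> input =
            nhwc_to_nchw(frame, 2, 2, 3, {0.f, 0.f, 0.f}, {255.f, 255.f, 255.f});
        assert(input.size() == 12);

        // ONNX Runtime wiring (sketch only; see osrt_cpp for the TIDL EP setup):
        //   Ort::Env env;
        //   Ort::SessionOptions opts;   // register the TIDL execution provider here
        //   Ort::Session session(env, "model.onnx", opts);
        //   ...wrap input.data() in an Ort::Value and call session.Run(...)
        return 0;
    }
    ```
    
    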

    • Please also suggest which framework is better for inference performance and flexibility in this use case: edgeai_gst_app, OpTIFlow, or TIOVX?

    TIOVX will be the most performant but probably the least flexible. GStreamer is more flexible but incurs a slight performance loss (a few percent).

    BR,
    Reese