Hello TI Team,
I have been trying to use my yolox-nano model, trained with edgeai-modelmaker using the default configuration and a custom dataset. I compile the model with tensor_bits set to 16 (a minimal sketch of my compile settings follows the list below). I have tried two options for performing inference with this custom model:
- the onnx_ep.py script from edgeai-tidl-tools/examples
- running my model with edgeai-gst-apps
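For context, this is roughly how I compile and run the model along the onnx_ep.py path. This is a minimal sketch, not my exact script: the option names follow the usual edgeai-tidl-tools compile options, and the paths are placeholders for my setup.

```python
# Minimal sketch of my compile/inference setup, following the pattern
# in edgeai-tidl-tools/examples; paths below are placeholders.
import onnxruntime as ort

compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",        # placeholder
    "artifacts_folder": "/path/to/model-artifacts",  # placeholder
    "tensor_bits": 16,                               # 16-bit quantization
}

so = ort.SessionOptions()
sess = ort.InferenceSession(
    "yolox_nano.onnx",  # my edgeai-modelmaker trained model
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=so,
)
```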
I have tried two SDK versions: r10.1 (with r11.0 TIDL tools patched on) and r11.1, and I trained the model with the same method on both. When performing inference with onnx_ep.py I see the following predictions:
Predictions with r10.1 (with r11.0 TIDL tools patched on):
[image: onnx_ep.py detections on r10.1]

Predictions with r11.1:
[image: onnx_ep.py detections on r11.1]
The issue arises when I perform inference using the GStreamer pipeline used in edgeai-gst-apps/apps_python/app_edgeai.py. Below are the output images generated by running app_edgeai.py on the same inputs:
Predictions with r11.1 app_edgeai.py:
[image: app_edgeai.py detections on r11.1]
I am inclined to think that the two methods apply different preprocessing to the input. I would like to get the same (or similar) results from the GStreamer pipeline in app_edgeai.py as from the ONNX-based inference in onnx_ep.py, since my application prioritizes real-time predictions and I want the pipeline running as fast as possible.
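To check this hypothesis, my plan is to dump the tensor actually fed to the network on both paths and diff them. Below is a minimal sketch of that check; the 416x416 input size, NCHW layout, and absence of mean/scale normalization are my assumptions for yolox-nano (please correct me if the model config differs), and gst_input_dump.npy is a hypothetical dump of the GStreamer pipeline's input tensor, not an existing file.

```python
# Sketch: compare the onnx_ep.py-style preprocessed tensor against a
# tensor dumped from the GStreamer pipeline. Shapes/normalization are
# assumptions for a 416x416 yolox-nano.
import cv2
import numpy as np

def preprocess_like_onnx_ep(image_path, size=(416, 416)):
    """My understanding of the onnx_ep.py path: BGR read, plain resize,
    NCHW float32, no mean/scale normalization (assumption)."""
    img = cv2.imread(image_path)
    img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
    return img.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)

ref = preprocess_like_onnx_ep("test.jpg")
gst = np.load("gst_input_dump.npy")  # hypothetical dump of pipeline input
print("max abs diff:", np.abs(ref - gst).max())
```

If the max difference is large, that would confirm a preprocessing mismatch (resize method, color order, or normalization) rather than a model or quantization problem.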
Best