SK-TDA4VM: The results obtained by inferring with the python version of the inference code on the PC side are completely different from those obtained by inferring on the board side.

Part Number: SK-TDA4VM


Hi Team,

I modified the official Python script onnxrt_ep.py on the PC side to run inference with my own object detection model and got correct results. I then uploaded the modified onnxrt_ep.py to the board, replacing the onnxrt_ep.py that comes with it, but the results produced by the modified script on the board are wrong. Could you please tell me the reason? Is there any difference between the initial onnxrt_ep.py officially provided for the PC side and the initial onnxrt_ep.py provided on the board?

The path of the initial script onnxrt_ep.py officially provided for the PC side: edgeai-tidl-tools-08_02_00_05/examples/osrt_python/ort/onnxrt_ep.py

The path of the initial script onnxrt_ep.py on the board: /opt/edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py

Kind regards,

Katherine

  • Hi Katherine,

    Can you do a git log within /opt/edgeai-tidl-tools on the board side to see which tagged version of edgeai-tidl-tools is being used? Other than that, can you try running the model with no offload to the hardware accelerators?

    It could be that the accuracy is dropping after quantization to allow the model to be deployed on the hardware accelerators. As an example, there is a significant difference in accuracy and speed when comparing the detection of dogs in our academy example: https://dev.ti.com/tirex/explore/node?node=A__ADfCsvk39hxg0V35TzXtTA__com.ti.Jacinto%20EdgeAI%20Academy__Y9QU2Ei__LATEST
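    The kind of error quantization introduces can be illustrated with a minimal NumPy sketch. This is a generic symmetric 8-bit quantize/dequantize round trip, not TIDL's actual calibration scheme (which is more involved); it only shows that every value moves by up to half a quantization step:

```python
import numpy as np

def quantize_dequantize(x: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Symmetric per-tensor quantize, then dequantize back to float."""
    qmax = 2 ** (n_bits - 1) - 1           # 127 for 8-bit
    scale = np.abs(x).max() / qmax         # per-tensor scale factor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return (q * scale).astype(np.float32)

# Synthetic stand-in for a float model output tensor
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 1000)).astype(np.float32)
deq = quantize_dequantize(logits)
print("max abs quantization error:", float(np.abs(logits - deq).max()))
```

    Small per-value errors like this normally shift scores slightly rather than break detections outright, so if the board output is *completely* different, something other than quantization noise (e.g. preprocessing or version mismatch) is usually also involved.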

    Regards,

    Takuma

  • Hi Takuma,

    I am using the 0802 version. Currently there is no problem with the results obtained when I use either of these two commands on the PC side to run inference with our model:

    python3 onnxrt_ep.py

    python3 onnxrt_ep.py -d

    onnxrt_ep.py has been modified to run inference with our model. The results are also correct when we run inference with our model on the PC side using the modified C++ code, onnx_main.cpp. But when we use the same C++ and Python code on the board side to run inference with our model, the results are completely wrong. However, there is no problem with the results when we use the same code to run the official model on the board. What is the reason? Could you please tell me how to solve it?

    Regards,

    Katherine

  • Hi Katherine,

    Would you be able to send the model and any code changes to us so that we can reproduce the issue on our side? If it is something that requires more privacy, you can send them over through email or other means.

    Regards,

    Takuma