Hi Team,
I have set up the environment for compiling and running inference, and my model compiles successfully with the Python code; the generated model-artifacts and models folders look fine. How should I modify the official C++ inference example, onnx_main.cpp, to run inference on my own model? At the moment, the results produced by the Python version of the inference code differ from those produced by the C++ version: the Python results are correct, but the C++ results are wrong.
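For reference, below is a stripped-down sketch of the kind of ONNX Runtime C++ session setup and input preprocessing I am trying to align with the Python pipeline. It runs on the CPU only and does not register the TIDL execution provider; the model path, tensor names, input size, and normalization constants are placeholders, not my actual values.

```cpp
// Minimal CPU-only sketch (not TI's onnx_main.cpp): the model path, tensor
// names, input size, and normalization constants below are placeholders.
#include <onnxruntime_cxx_api.h>

#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "tidl-debug");
  Ort::SessionOptions opts;
  // The TIDL execution provider would normally be appended to `opts` here,
  // as in the SDK's onnx_main.cpp; it is omitted in this CPU-only sketch.

  Ort::Session session(env, "model.onnx", opts);  // placeholder model path

  // Preprocessing must match the Python pipeline exactly (channel order,
  // layout, mean/scale, dtype); any mismatch here makes the outputs diverge.
  const int64_t C = 3, H = 224, W = 224;                           // placeholder size
  const std::array<float, 3> mean  = {123.675f, 116.28f, 103.53f}; // placeholder
  const std::array<float, 3> scale = {0.017125f, 0.017507f, 0.017429f}; // placeholder

  std::vector<uint8_t> hwc(H * W * C, 128);  // dummy RGB image in HWC layout
  std::vector<float> nchw(C * H * W);
  for (int64_t c = 0; c < C; ++c)
    for (int64_t y = 0; y < H; ++y)
      for (int64_t x = 0; x < W; ++x)
        nchw[c * H * W + y * W + x] =
            (static_cast<float>(hwc[(y * W + x) * C + c]) - mean[c]) * scale[c];

  std::array<int64_t, 4> shape = {1, C, H, W};
  Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input = Ort::Value::CreateTensor<float>(
      mem, nchw.data(), nchw.size(), shape.data(), shape.size());

  const char* input_names[]  = {"input"};   // placeholder tensor names
  const char* output_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names, &input, 1,
                             output_names, 1);

  // Print the first few output values to compare against the Python run.
  const float* out = outputs[0].GetTensorData<float>();
  for (int i = 0; i < 5; ++i) std::cout << out[i] << " ";
  std::cout << std::endl;
  return 0;
}
```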
Kind regards,
Katherine
Hi Katherine,
Is this question related to: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1185391/sk-tda4vm-the-results-obtained-by-inferring-with-the-python-version-of-the-inference-code-on-the-pc-side-are-completely-different-from-those-obtained-by-inferring-on-the-board-side?
And in this thread's case, is the inference for both Python and C++ being done on the PC?
Regards,
Takuma