Hello,
We have a new model for which we receive different results when running inference in TIDL host emulation versus on the device.
Alex.
Due to a regional holiday, the expert handling this thread is currently out of office. Please expect a 1-2 day delay in responses.
Apologies for the delay, and thank you for your patience.
Hi,
Which SDK version are you using?
Are you using OSRT or TIDL-RT?
Also, could you share the model compilation logs for initial analysis?
How are you comparing the results? Are the fixed-point outputs converted to float and diffed against the reference float model values? Is your model quantized to 8-bit or 16-bit?
May I know what percentage of offset you are experiencing?
You can refer to our doc on model accuracy here : https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_osr_debug.md#steps-to-debug-functional-mismatch-in-host-emulation
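To quantify the percentage offset mentioned above, one common approach is to dequantize both outputs to float and compute an element-wise relative error. The sketch below is a minimal, hypothetical example (the function name and synthetic values are illustrative, not part of TIDL tooling); in practice you would load the dumped output tensors from host emulation and from the device instead of the hard-coded arrays.

```python
import numpy as np

def percent_offset(emu, dev, eps=1e-6):
    """Element-wise percentage offset of the device output relative to the
    host-emulation output, guarding against division by zero with eps."""
    emu = np.asarray(emu, dtype=np.float32)
    dev = np.asarray(dev, dtype=np.float32)
    abs_diff = np.abs(emu - dev)
    return 100.0 * abs_diff / np.maximum(np.abs(emu), eps)

# Synthetic stand-ins for dequantized (float) outputs from emulation and device.
emu_out = np.array([0.50, 1.00, -2.00, 4.00], dtype=np.float32)
dev_out = np.array([0.51, 0.99, -2.02, 4.00], dtype=np.float32)

off = percent_offset(emu_out, dev_out)
print("max % offset :", off.max())
print("mean % offset:", off.mean())
```

Reporting the maximum and mean percentage offset (rather than a raw diff) makes it easier for the support team to judge whether the mismatch is within expected quantization tolerance or indicates a functional issue.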