Float mode execution is not supported on the EVM. Is this issue related to the two below? If yes, I would recommend tracking all of these issues in the same thread.
Hi Kumar,
No, the above issue is with the float model; the other two issues are with the int16 and int8 models.
The float model is giving NaN values; the int8 and int16 models run properly, only there is a deviation in the results.
Float mode execution on the EVM is not supported. Can we track the other threads and close this one?
Hi Kumar,
This issue is with the PC emulator only, and we are doing this activity only for debug purposes.
Please find the snapshot I took from the TI documentation.

Can you try the TIDL standalone application instead of "app_tidl_od"? The TIDL standalone application supports saving each output and intermediate tensor in float format.
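For reference, trace dumping in the standalone application is controlled from the inference configuration file. A minimal sketch of the relevant entries is below; the exact parameter names and accepted values (e.g. `writeTraceLevel`, `debugTraceLevel`) should be verified against your SDK's TIDL user guide, and all paths here are placeholders:

```
# Hypothetical TIDL inference config fragment; verify names/values
# against the SDK 7.1 TIDL user guide before use.
inFileFormat    = 2
netBinFile      = "testvecs/config/tidl_models/tidl_net_model.bin"
ioConfigFile    = "testvecs/config/tidl_models/tidl_io_model_1.bin"
inData          = "testvecs/input/input.bin"
outData         = "testvecs/output/output.bin"
debugTraceLevel = 1   # verbose console logging
writeTraceLevel = 3   # dump each layer's output tensor in float format
```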
You mean this one: TI_DEVICE_a72_test_dl_algo_host_rt.out?
I am looking into it, but I am not able to find how to configure the model in that application.
It looks like the "num_input_tensors" variable is being inferred incorrectly. This could happen because of a version mismatch between the tools used for model import on the PC and the inference software on the EVM.
Could you please check the versions?
Hi Kumar,
I converted the ONNX model using the same SDK 7.1, and it still shows invalid values for "num_input_tensors" and "num_output_tensors".
Can you please try the PC application available at "ti_dl/test/PC_dsp_test_dl_algo.out" for REF-only host-emulation validation of the float model? If you still face any issue, please make sure that you have not made any changes to the configuration file available in the SDK for this test case.
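For context, the PC test application picks up the inference configs to run from a list file under `ti_dl/test/testvecs/config/`. A hedged sketch of that list file (the model config file name is illustrative; by the convention in the SDK's shipped `config_list.txt`, a leading `1` selects a config to run and `0` terminates the list):

```
# ti_dl/test/testvecs/config/config_list.txt (illustrative entry)
1 testvecs/config/infer/public/onnx/tidl_infer_mymodel.txt
0
```

With that in place, the test is typically launched from the `ti_dl/test` directory as `./PC_dsp_test_dl_algo.out`, with no command-line arguments.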