TDA4VE-Q1: edgeai quantized model gives correct results when the .bin model is inferred on PC, but on the board the results are wrong and precision and recall drop sharply. How can we find the reason?

Part Number: TDA4VE-Q1

Tool/software:

We quantized the 3D-OD model SMOKE in edgeai, using the script 'onnxrt_ep.py' under the edgeai tools to quantize the model and run inference with the .bin model. The tensor_bits is 8. We visualized the result and it looks fine. But when we run it on the TDA4VE board, the result is wrong: in the visualization we can hardly find any 3D box in the picture; the result is terrible.
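For reference, this is roughly the shape of the TIDL delegate options we pass during compilation (a sketch only; option names follow the edgeai-tidl-tools examples, and the paths here are placeholders, not our real ones):

```python
# Sketch of TIDL compile options in the onnxrt_ep.py style.
# Option names (tensor_bits, accuracy_level, debug_level, ...) follow the
# edgeai-tidl-tools examples; the path values are placeholders.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",    # placeholder path
    "artifacts_folder": "/path/to/artifacts",    # placeholder path
    "tensor_bits": 8,        # 8 for the int8 run; we set 16 for the int16 run
    "accuracy_level": 1,     # calibration with advanced options
    "debug_level": 0,        # raising this dumps per-layer traces for debugging
}
```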

When I quantized the model again with tensor_bits set to 16, running it on PC also gives a fine result. But when we try to run it on the TDA4VE, an error occurs.

How can we find the reason and solve it ?

Our edgeai version is edgeai-tidl-tools-09_01_08_00, and the SDK running on the TDA4VE board is SDK 09_01. If you need any details, please ask me.
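To narrow down where the PC and board executions diverge, we could compare the per-layer trace dumps numerically instead of only visualizing the final boxes. A minimal sketch, assuming the traces from both sides are loaded as NumPy arrays (`compare_traces` is a hypothetical helper, not part of the tools):

```python
import numpy as np

def compare_traces(pc_out: np.ndarray, board_out: np.ndarray) -> float:
    """Cosine similarity between a PC-side and a board-side layer trace.

    A value well below 1.0 flags the first layer where the two
    executions start to diverge.
    """
    a = pc_out.astype(np.float32).ravel()
    b = board_out.astype(np.float32).ravel()
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    if denom == 0.0:
        # One side is all zeros: treat as fully divergent.
        return 0.0
    return float(np.dot(a, b) / denom)
```

Running this layer by layer, front to back, should point at the first operator whose board output no longer matches the PC simulation.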

Attachments:
- the int16 model inference error log
- our ONNX model
- our int16 and int8 .bin models