This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

TDA4VM: Why is the model output wrong when using 8-bit quantization in edgeai-tidl-tools, while 16-bit is correct?

Part Number: TDA4VM

Dear author,

When I tried to use OSRT to compile the SuperPoint model (please find this network at https://github.com/eric-yyjau/pytorch-superpoint), I always got wrong output when setting the number of bits for quantization to 8.

But when the number of bits for quantization is 16, the output is correct. What is the reason for this? How can I get correct results using 8-bit quantization?
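For reference, a minimal sketch of the OSRT compile options that control the quantization bit depth and calibration, assuming the option names used in the standard edgeai-tidl-tools examples; all paths and the calibration counts below are placeholders, not values from this post:

```python
# Sketch of edgeai-tidl-tools OSRT (ONNX Runtime) delegate options controlling
# quantization. "tensor_bits" selects 8- vs 16-bit quantization; raising
# "accuracy_level" and the calibration frame/iteration counts is a common way
# to improve 8-bit accuracy. Paths are placeholders.

compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",       # placeholder path
    "artifacts_folder": "./model-artifacts",        # placeholder path
    "tensor_bits": 8,                               # 8-bit quantization (16 gives correct output per the post)
    "accuracy_level": 1,                            # enable advanced calibration
    "advanced_options:calibration_frames": 20,      # placeholder count; more frames can help 8-bit accuracy
    "advanced_options:calibration_iterations": 20,  # placeholder count; iterative bias calibration
}

# These options are then passed as provider options when creating the
# ONNX Runtime session with the TIDL compilation provider, e.g.:
# sess = onnxruntime.InferenceSession(
#     "superpoint.onnx",
#     providers=["TIDLCompilationProvider"],
#     provider_options=[compile_options],
# )
```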