Part Number: TDA4VM
Dear author,
When I try to use OSRT to compile the SuperPoint model (please find this network at https://github.com/eric-yyjau/pytorch-superpoint), I always get wrong output when setting the number of bits for quantization to 8.
When the number of bits for quantization is 16, the output is correct. What is the reason for this? How can I get correct results using 8-bit quantization?
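For reference, this is roughly how I set the quantization bit-width in the OSRT compile options. The option names follow the public edgeai-tidl-tools examples (`tensor_bits`, `accuracy_level`, calibration settings); the paths and calibration values below are placeholders, not my exact configuration:

```python
# Sketch of OSRT (onnxruntime + TIDL delegate) compile options.
# Option names follow the edgeai-tidl-tools examples; paths are placeholders.

compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",      # placeholder path
    "artifacts_folder": "./superpoint_artifacts",  # placeholder path
    "tensor_bits": 8,  # 8 -> wrong outputs; 16 -> correct outputs
    "accuracy_level": 1,
    "advanced_options:calibration_frames": 20,      # placeholder value
    "advanced_options:calibration_iterations": 20,  # placeholder value
}

# Compilation session (requires onnxruntime built with the TIDL
# execution provider, so it is shown here only as a comment):
# import onnxruntime as rt
# sess = rt.InferenceSession(
#     "superpoint.onnx",
#     providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
#     provider_options=[compile_options, {}],
# )

print(compile_options["tensor_bits"])
```

Only `tensor_bits` is changed between the failing (8) and working (16) runs; everything else stays the same.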
