
PROCESSOR-SDK-TDAX: The int8 quantization scheme has serious loss of accuracy

Part Number: PROCESSOR-SDK-TDAX


I quantized a resnet50 network with the int8 scheme, and the model accuracy drops severely. When I switch to the int16 quantization scheme instead, the accuracy loss is acceptable. Is there a problem with the int8 quantization?

The model used can be downloaded from here: github.com/.../pytorch-image-models
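As background for the accuracy gap being asked about: with symmetric per-tensor quantization (a rough model of what `numParamBits` controls; TIDL's actual scheme may differ in detail), the rounding error shrinks roughly by a factor of 2 per extra bit, so int16 is expected to be far closer to float than int8. A minimal sketch with synthetic weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "weights" with a spread typical of a trained conv layer.
w = rng.normal(0.0, 0.05, size=10000).astype(np.float32)

def fake_quantize(x, num_bits):
    # Symmetric per-tensor quantization: scale chosen so the max
    # magnitude maps to the largest representable integer.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return (q * scale).astype(np.float32)

err8 = np.abs(w - fake_quantize(w, 8)).mean()
err16 = np.abs(w - fake_quantize(w, 16)).mean()
print(f"mean abs error  int8: {err8:.6f}   int16: {err16:.6f}")
```

The int8 error is orders of magnitude larger, which is normal; a severe top-1 drop usually points instead at calibration (range estimation from too few or unrepresentative images) or at outlier-heavy layers that may need higher precision.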

Below are my configuration parameters:

modelType = 2
numParamBits = 8
numFeatureBits = 8
quantizationStyle = 2
inputNetFile      = "./resnet50.onnx"
outputNetFile      = "./resnet50.bin"
outputParamsFile   = "./resnet50_io_"
inDataNorm = 1
inMean = 123.675 116.28 103.53
inScale =  0.017125 0.017507 0.017429
inWidth  = 224
inHeight = 224
inNumChannels = 3
inData = ./calibration.txt
postProcType = 0
#debugTraceLevel = 3
#writeTraceLevel = 3
calibrationOption = 0
flowCtrl = 0
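For reference, the `inMean`/`inScale` values above look like the standard torchvision ImageNet preprocessing expressed on a 0-255 input range (assuming `inDataNorm = 1` applies `y = (x - mean) * scale` per channel, which is how I read the config). A quick sanity check:

```python
import numpy as np

# Values copied from the import config above.
mean = np.array([123.675, 116.28, 103.53])
scale = np.array([0.017125, 0.017507, 0.017429])

# If these come from torchvision's ImageNet stats, then
# scale ≈ 1 / (255 * std) with std = [0.229, 0.224, 0.225].
std = 1.0 / (scale * 255.0)
print("recovered std:", np.round(std, 3))

# Example: normalize one mid-gray RGB pixel.
pixel = np.array([128.0, 128.0, 128.0])
print("normalized:", (pixel - mean) * scale)
```

If that matches the preprocessing used when the ONNX model was exported, input normalization is unlikely to be the cause of the int8 drop, and the calibration images listed in `calibration.txt` would be the next thing to check.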