TDA4VM: Query regarding different PTQ calibration methods

Part Number: TDA4VM

Hi,

We are trying to compare precision and recall values for CenterNet object detection across different quantization techniques. We used 50 calibration images for all four quantization methods.

Please find below the quantization methods we tried (a trimmed import-config sketch follows the list).

1. Simple calibration

2. Histogram-based activation range collection - calibrationOption = 1

3. Advanced bias calibration - calibrationOption = 7

4. Per-channel weight quantization for depthwise convolution layers - calibrationOption = 13
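
For context, below is a trimmed sketch of the TIDL import config we use; only calibrationOption is changed between the four runs (it is left at its default for the simple-calibration run). The file names are placeholders and the other keys are as we use them in our setup, so please point out if any of them are not the right ones to use with these calibration options.

    # Trimmed TIDL import config; only calibrationOption differs between runs
    modelType         = 2                                 # assuming an ONNX export of the model
    numParamBits      = 8                                 # INT8 quantization
    inputNetFile      = "centernet_mobilenetv2.onnx"      # placeholder file name
    outputNetFile     = "centernet_mobilenetv2_tidl.bin"  # placeholder file name
    inData            = "calib_list.txt"                  # list of the 50 calibration images
    numFrames         = 50
    calibrationOption = 7                                 # 1 / 7 / 13 for runs 2-4; default for run 1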

On checking, all four quantization techniques give the same precision (about 85%) and recall (about 50%), regardless of the calibrationOption set. Can you please tell us if there is any issue?

Model - CenterNet with MobileNetV2 backbone

Precision - INT8

RTOS SDK version - 7.01

Regards,

Gina