This thread has been locked.


TDA4VM: During TIDL quantization, can the quantization of one layer affect other layers?

Part Number: TDA4VM

We have a model with an RU detection head and a cone detection head that share the same backbone and neck. After quantizing checkpoint pth1, the RU head performs well in on-device testing. Starting from pth1, we then retrained only the cone head, with all other parameters unchanged, producing pth2. After quantizing pth2, RU detection shows a significant increase in false positives compared to the quantized pth1 model. The model export and quantization configurations are identical in both cases. Could you please explain why this might happen? In theory, the quantization results for all layers other than the cone head should be identical between the two models, right?
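To make the expectation concrete, here is a minimal sketch of per-tensor symmetric min/max quantization (an assumption for illustration; TIDL's actual calibration uses more elaborate range estimation). It shows why identical backbone weights yield identical weight-quantization parameters, while activation scales depend on the statistics observed during calibration and can therefore shift between two quantization runs. The helper names `symmetric_scale` and `quantize` are hypothetical, not TIDL APIs.

```python
import numpy as np

def symmetric_scale(t, n_bits=8):
    # Per-tensor symmetric quantization: scale derived from max |value|.
    return float(np.max(np.abs(t))) / (2 ** (n_bits - 1) - 1)

def quantize(t, scale):
    # Round to the nearest integer grid point and clamp to int8 range.
    return np.clip(np.round(t / scale), -128, 127).astype(np.int8)

# Identical backbone weights -> identical weight-quantization parameters.
backbone_w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
scale_pth1 = symmetric_scale(backbone_w)          # from pth1
scale_pth2 = symmetric_scale(backbone_w.copy())   # pth2, backbone unchanged
assert scale_pth1 == scale_pth2

# Activation ranges, by contrast, come from calibration-time statistics:
# two calibration runs that see (even slightly) different data can
# produce different scales for the very same shared layer.
rng = np.random.default_rng(0)
calib_run_a = rng.normal(0.0, 1.0, 1000)  # calibration pass for pth1
calib_run_b = rng.normal(0.0, 1.0, 1000)  # calibration pass for pth2
scale_a = symmetric_scale(calib_run_a)
scale_b = symmetric_scale(calib_run_b)
print(scale_a != scale_b)  # True: range depends on observed extremes
```

Under this model, the weight-quantization claim in the question holds, but shared-layer activation scales are only guaranteed to match if both calibration runs observe identical activation statistics.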