There is a model with both an RU detection head and a cone detection head, sharing the same backbone and neck. After quantizing checkpoint pth1, the RU head works fine in the on-board test. Starting from pth1, the cone head was retrained with all other parameters unchanged, producing pth2. After quantizing pth2, RU detection shows a significant increase in false positives compared to the quantized pth1 model. The model export and quantization configurations are identical for both. Could you explain why this might happen? In theory, the quantization results for all layers other than the cone head should be consistent between the two models, right?
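One quick sanity check is to diff the two checkpoints and confirm that the shared backbone/neck weights really are bit-identical after the cone-head retraining; if they drifted at all, different quantization scales for the shared layers would be expected. A minimal sketch, assuming the checkpoints are ordinary name-to-tensor mappings; the helper name and the layer names in the example are hypothetical:

```python
import numpy as np

def changed_params(sd1, sd2):
    """Return (names present in both checkpoints whose values differ,
    names present in only one of the two checkpoints)."""
    shared = sd1.keys() & sd2.keys()
    changed = sorted(n for n in shared
                     if not np.array_equal(np.asarray(sd1[n]), np.asarray(sd2[n])))
    only_one = sorted((sd1.keys() | sd2.keys()) - shared)
    return changed, only_one

# Hypothetical example: backbone weight unchanged, cone head retrained.
sd1 = {"backbone.conv1.weight": np.ones((2, 2)), "cone_head.weight": np.zeros(4)}
sd2 = {"backbone.conv1.weight": np.ones((2, 2)), "cone_head.weight": np.full(4, 0.5)}
print(changed_params(sd1, sd2))  # only the cone head should be listed
```

For real .pth files you would load each state dict with `torch.load(..., map_location="cpu")` and pass it in. Note that even if the shared weights are identical, the quantization calibration statistics (activation ranges) are recomputed per export, so differences in calibration data or settings can still shift the shared layers' scales.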
Can you share which SDK version you used?
Was the flow TIDL-RT or OSRT?
Hi,
The recent SDK has gone through a significant amount of changes and feature additions.
Can you try the experiment on SDK 9.1 and check the observation? Let me know if you still face the same issue there; we can go into deeper detail if required.