Problem description:
Quantizing a YOLOX model to 8-bit produces detection boxes that are noticeably too small for large objects.

Experiment log:
I have trained several YOLOX variants, and all of them show this shrunken large-object box problem to varying degrees.
1. 16-bit quantization improves the boxes, but the project cannot tolerate that much additional inference time.
2. I followed TI's Troubleshooting Guide for Accuracy/Functional Issues, but could not localize the root cause.
3. I tried the methods recommended in TIDL-RT: Quantization; the large-object boxes were still too small. (QAT was not tried.)
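A middle ground between full 8-bit and full 16-bit that may be worth trying before QAT is TIDL's mixed-precision support: keep the network in 8-bit overall but force only the box-regression output tensors to 16-bit through the OSRT compile options, which typically costs far less latency than switching the whole network to 16-bit. A minimal sketch, assuming the ONNX Runtime flow from edgeai-tidl-tools; the tensor names below are hypothetical placeholders, and the option names should be verified against the SDK 8.2 documentation:

```python
# Hypothetical TIDL delegate compile options for mixed precision.
# Option names follow the edgeai-tidl-tools OSRT documentation;
# verify them against the SDK 8.2 release you are using.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",   # assumption: your tools path
    "artifacts_folder": "./artifacts",
    "tensor_bits": 8,                           # network stays 8-bit overall
    "advanced_options:calibration_frames": 50,  # more frames -> better ranges
    "advanced_options:calibration_iterations": 5,
    # Force 16-bit activations on the box-regression outputs only.
    # "head_reg_0/Conv_output" etc. are hypothetical ONNX tensor names;
    # substitute the actual names from your exported YOLOX graph.
    "advanced_options:output_feature_16bit_names_list":
        "head_reg_0/Conv_output,head_reg_1/Conv_output,head_reg_2/Conv_output",
}
```

In the edgeai-tidl-tools flow these options are passed as `provider_options` when creating the `onnxruntime.InferenceSession` with the `TIDLCompilationProvider`; after compilation the same artifacts are loaded with the `TIDLExecutionProvider` for inference.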

I would appreciate better suggestions or a solution from TI.

SDK version: 8.2

