Other Parts Discussed in Thread: AM62A7, SK-AM62A-LP
Tool/software:
Hi, TI experts,
I’m encountering challenges with object detection using the YOLOX-S model on an AM62A7 board and would like to seek advice. Here’s the context:
When processing a 30fps input video, the system's output frame rate is limited to ~20fps, which causes noticeable bounding-box misalignment: the boxes lag behind moving vehicles instead of tracking them accurately. With a 10fps input, however, the output frame rate matches the input exactly and the bounding boxes align correctly.
Key question:
When the input frame rate exceeds the output frame rate (e.g., 30fps input vs. 20fps output), are there specific optimizations or configuration adjustments—such as frame dropping strategies, model quantization, inference pipeline tuning, or hardware resource allocation—that can improve detection accuracy, even if the output frame rate does not fully match the input?
I’m particularly interested in methods to ensure precise bounding box localization despite the frame rate mismatch. Any insights or technical suggestions would be highly valuable!
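For reference, here is a minimal Python sketch (not my actual pipeline) of the kind of frame-dropping strategy I have in mind: a one-slot "leaky" buffer between capture and inference, so that when inference falls behind, it always runs on the newest captured frame rather than a stale one. The function, timings, and frame indices are all illustrative assumptions, not TI SDK APIs:

```python
def run_pipeline(num_frames, input_fps, infer_fps):
    """Simulate capture at input_fps feeding an inference engine that
    can only sustain infer_fps. A 1-slot leaky buffer keeps only the
    newest frame, so each inference starts on the most recent capture
    (less bounding-box lag, at the cost of dropped frames).
    Returns the indices of the frames that were actually inferred."""
    capture_dt = 1.0 / input_fps   # time between captured frames
    infer_dt = 1.0 / infer_fps     # time one inference takes
    slot = None                    # leaky buffer: newest frame wins
    infer_free_at = 0.0            # when the engine is next idle
    processed = []
    for i in range(num_frames):
        t = i * capture_dt
        slot = (i, t)              # overwrite any unprocessed frame
        if t >= infer_free_at and slot is not None:
            idx, _ts = slot        # engine idle: take buffered frame
            slot = None
            processed.append(idx)
            infer_free_at = max(t, infer_free_at) + infer_dt
    return processed
```

With a 30fps input and a 20fps-capable engine, this drops every other frame but each processed frame is fresh, which should keep the boxes closer to the vehicles; with a 10fps input nothing is dropped. Is something along these lines (e.g., a leaky queue before the inference element) the recommended approach on the AM62A, or is there a better SDK-level mechanism?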
Thanks.


SDK: 10.01.00.05
Hardware Platform: SK-AM62A-LP