Hi TI expert,
After I modified the ONNX model, the quantized model's inference results are still correct, but inference became slower: the frame rate dropped from 34 FPS to 19 FPS. Is there any way to speed the model up?
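For context, the two frame rates translate into per-frame latency as follows (a minimal sketch; `fps_to_latency_ms` is just an illustrative helper, not a TI or ONNX Runtime API):

```python
def fps_to_latency_ms(fps: float) -> float:
    """Convert throughput in frames per second to per-frame latency in milliseconds."""
    return 1000.0 / fps

# The drop from 34 FPS to 19 FPS means per-frame time roughly doubled.
print(f"34 FPS = {fps_to_latency_ms(34):.1f} ms/frame")
print(f"19 FPS = {fps_to_latency_ms(19):.1f} ms/frame")
```

So the modified model costs about 23 ms of extra processing per frame compared to the original.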
As shown in the figure below, model_output_gray is the original ONNX model and model_output_nv12 is the modified ONNX model.