
TDA4VH-Q1: Different Inference Results Between Quantized Model and On-Board Execution for the Same Image

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: TDA4VH


Hello,

I'm currently running a quantized segmentation model on both my PC (via TIDL compilation) and the TDA4VH board, which runs SDK version vh0902.

On my PC, the quantized model produces reasonable and distinguishable results across three classes: background (0), target (1), and non-target (2). However, when I deploy the same model to the board, the inference output is clearly wrong: almost all pixels are predicted as class 1, and classes 0 and 2 are rarely (or never) predicted.
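
To make the mismatch concrete, I dump the argmax'd class-index map from each run and compare per-class pixel counts. This is only a sketch; seg_pc.npy and seg_board.npy are placeholder names for the two dumps:

```python
import numpy as np

def class_histogram(seg_map, num_classes=3):
    """Per-class pixel counts for an argmax'd class-index map."""
    counts = np.bincount(seg_map.astype(np.int64).ravel(), minlength=num_classes)
    return {c: int(counts[c]) for c in range(num_classes)}

# seg_pc.npy / seg_board.npy are placeholder file names for the
# class-index maps saved from the PC run and the board run.
pc = np.load("seg_pc.npy")
board = np.load("seg_board.npy")
print("PC:   ", class_histogram(pc))
print("Board:", class_histogram(board))
print(f"Pixel agreement: {(pc == board).mean():.3f}")
```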

Here’s what I’ve confirmed:

The same input image is used for both PC and board inference (see the tensor-fingerprint sketch after this list).

The model is compiled using the same TIDL compilation settings.

Other models (e.g., object detection) work fine on the board.

This issue seems specific to this segmentation model.
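
For the first point, this is roughly how I check that the two runs see identical input bytes, not just the same source image. Both tensor_fingerprint and preprocess are placeholder names I made up, not part of any TI API:

```python
import hashlib
import numpy as np

def tensor_fingerprint(arr):
    """SHA-256 over dtype, shape, and raw bytes, so the PC-side and
    board-side input tensors can be compared bit-for-bit."""
    meta = f"{arr.dtype}|{arr.shape}".encode()
    return hashlib.sha256(meta + arr.tobytes()).hexdigest()[:16]

# 'preprocess' stands in for whatever resize/normalize/layout pipeline
# feeds the model; run the identical call on both hosts and compare digests.
# tensor = preprocess("test_image.png")
# print(tensor_fingerprint(tensor))
```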

Are there any known issues with quantized model behavior on TDA4VH, or specific settings I should check (e.g., activation ranges, post-processing, inference engine setup)?
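
For context, my compilation setup follows the edgeai-tidl-tools ONNX Runtime examples, roughly as sketched below. The paths, model name, and values are placeholders, and the option names are taken from those examples, so please correct me if any are off for SDK vh0902:

```python
import onnxruntime as rt

# Placeholder paths/values; option names follow the edgeai-tidl-tools
# ONNX Runtime compilation examples.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",
    "artifacts_folder": "./model-artifacts",
    "tensor_bits": 8,    # would switching to 16 rule out 8-bit range issues?
    "debug_level": 3,    # dump per-layer traces for PC-vs-board comparison
    "advanced_options:calibration_frames": 20,
    "advanced_options:calibration_iterations": 20,
    # If the final logits saturate in 8 bit, keep that layer in 16 bit:
    # "advanced_options:output_feature_16bit_names_list": "<output_tensor_name>",
}

so = rt.SessionOptions()
sess = rt.InferenceSession(
    "segmentation_model.onnx",    # placeholder model name
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=so,
)
```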

Any help or suggestions would be appreciated.

Thanks,
Minho Park.