
TDA4VM: Errors between PyTorch 16-bit quantized model and TIDL tools

Part Number: TDA4VM

Hi,

We have an issue with large errors injected at some particular layers in the model. We are comparing the output of the 16-bit quantized (post-QAT) model against the output of the TIDL PC emulation.

Some details –

  • The model exhibiting the difference is the BEV model.
  • The model is fully quantized to 16-bit.
  • The difference between the TorchVision and TIDL outputs has a mean of 1e-1 and a maximum of 17 (very large differences).
  • We found that a single “Conv Transpose” layer and a single “Conv” layer cause the issue (reason unclear), each multiplying the error by a factor of 10.
    • For reference, removing these layers yields a mean difference of 1e-3 (an acceptable difference).
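For clarity, the per-layer error metrics quoted above (mean and maximum absolute difference between the two outputs) can be computed with a small helper along these lines. This is an illustrative sketch only; the function name and shapes are hypothetical, and the real comparison is done on dumped layer outputs from both toolchains:

```python
import numpy as np

def layer_diff_stats(ref, test):
    """Compare a reference layer output (e.g. the PyTorch QAT model)
    against the corresponding TIDL PC-emulation output.
    Returns (mean_abs_diff, max_abs_diff)."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    diff = np.abs(ref - test)
    return float(diff.mean()), float(diff.max())

# Synthetic example: a small feature map where one element
# differs by 17, mimicking the reported maximum difference.
ref = np.zeros((2, 4))
test = ref.copy()
test[0, 0] = 17.0
mean_d, max_d = layer_diff_stats(ref, test)
print(mean_d, max_d)
```

Running the same statistics layer by layer is how the offending "Conv Transpose" and "Conv" layers were isolated.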

Thank you,

Alex.