This thread has been locked.

TDA2E: Model inference accuracy loss on the board

Part Number: TDA2E


Hi TI,

After quantization, our object detection model loses accuracy during on-board inference: roughly a 4.5% drop compared to the PC-side results. Could you please advise on how to address this? A similar problem previously occurred with a segmentation model and was resolved by modifying the offsets of outAddrB, outAddrG, and outAddrR in the tidlpreproclink. However, applying the same method to the detection model made the results even worse.
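For reference, the outAddrB/G/R offsets mentioned above typically point at the start of each colour plane in a padded planar BGR input buffer. A minimal sketch of how such offsets are derived (the width, height, and pad values below are hypothetical examples, not taken from the original model):

```python
# Sketch: plane offsets for a planar BGR input buffer, as a TIDL-style
# preprocessing stage consumes it. Width/height/pad values are
# hypothetical; they are not taken from the model in this thread.
def plane_offsets(width, height, pad=0):
    """Return byte offsets of the B, G, and R planes in a padded
    planar buffer (one byte per pixel per plane)."""
    stride = width + 2 * pad              # padded line length in bytes
    plane_size = stride * (height + 2 * pad)
    off_b = pad * stride + pad            # skip top pad rows + left pad
    off_g = plane_size + pad * stride + pad
    off_r = 2 * plane_size + pad * stride + pad
    return off_b, off_g, off_r

print(plane_offsets(1024, 512))           # no padding: (0, 524288, 1048576)
print(plane_offsets(1024, 512, pad=4))
```

If the offsets do not line up with the buffer layout the preproc link actually produces, each plane is read from the wrong position, which corrupts the input and costs accuracy.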

  • Hi Q,

    Have you tried increasing the data bits to 16? First, increase the data bits to 16 for the entire model. If that improves accuracy, you can instead increase the bits on specific layers rather than the whole model. Finding the correct layer will be model dependent.

    Regards,

    Chris
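Chris's per-layer suggestion can be expressed in edgeai-tidl-tools-style compile options via the 16-bit name lists. A sketch only: the layer names below are hypothetical, and the exact option keys may differ on older TDA2x SDK versions.

```python
# Sketch: keep the model at 8 bit overall, but raise selected layers
# to 16 bit. Layer names are hypothetical placeholders; the option
# keys follow the edgeai-tidl-tools style and may differ on your SDK.
compile_options = {
    'tensor_bits': 8,  # whole-model default
    # outputs of these layers are kept in 16 bit
    'advanced_options:output_feature_16bit_names_list': 'conv2d_head,detection_output',
    # weights of these layers are kept in 16 bit
    'advanced_options:params_16bit_names_list': 'conv2d_head',
}
```

Starting from the layers closest to the detection outputs and working backwards is a common way to narrow down which layer is losing precision.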

Hello Q xn,

    Have you had a chance to try Chris' suggestion?

    In addition, you can set the accuracy level to 1 or higher. This will increase compile time, but it will improve accuracy.

    Here is an example of how to increase the tensor bits and the accuracy level:

    compile_options = {
        ......
        'tensor_bits' : 16,
        'accuracy_level' : 1,
        'advanced_options:calibration_iterations' : 3  # used if accuracy_level = 1
    }
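To decide which specific layer needs 16 bits, one common approach is to compare per-layer activations from the float (PC-side) run against the quantized (board-side) run and bump the worst layer first. A minimal NumPy sketch, with random stand-in tensors in place of real layer dumps:

```python
import numpy as np

def layer_error(float_out, quant_out):
    """Relative L2 error between a float reference output and the
    dequantized fixed-point output of the same layer."""
    f = np.asarray(float_out, dtype=np.float64).ravel()
    q = np.asarray(quant_out, dtype=np.float64).ravel()
    return np.linalg.norm(f - q) / (np.linalg.norm(f) + 1e-12)

# Stand-in data: pretend these are dumped activations for three layers.
# In practice, replace them with the float and quantized layer dumps.
rng = np.random.default_rng(0)
ref = {name: rng.standard_normal(256) for name in ('conv1', 'conv2', 'head')}
quant = {name: t + rng.standard_normal(256) * s
         for (name, t), s in zip(ref.items(), (0.01, 0.01, 0.2))}

# The layer with the largest relative error is the first candidate
# for 16-bit precision.
worst = max(ref, key=lambda n: layer_error(ref[n], quant[n]))
print(worst)
```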


    Since this thread is older than 30 days, I will close this ticket for now. If you still have questions after trying our suggestions, you can easily re-open this ticket or submit a new one.

    Thanks and regards,

    Wen Li