This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

TDA4VM: Importing tflite model

Part Number: TDA4VM

Hello,

We are trying to import a MobileNetV2 tflite model to the TDA4VM. We first tried a trained and quantized tflite model, but the results were very unsatisfactory. After following the Troubleshooting recommendations in this link, we narrowed the problem down to the quantization: the weights in the "*_paramDebug.csv" file showed large discrepancies. We tried both using the example configs unchanged and following the recommended steps in this link. The best accuracy we achieved was without any calibration, and even that was ~10% lower than the tflite model's. We also tried a floating-point tflite model, but the results were even worse.
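To make the discrepancy check concrete, here is a minimal numpy sketch of the kind of comparison we ran on the weights: dequantize the int8 values and measure how far they land from the float reference. The column layout of "*_paramDebug.csv" is not reproduced here; the quantization scheme below (per-tensor affine int8) is an assumption for illustration, not TIDL's exact implementation.

```python
import numpy as np

def max_weight_discrepancy(float_w, quant_w, scale, zero_point):
    """Dequantize int8 weights and report the largest absolute
    deviation from the float reference (a paramDebug-style check)."""
    dequant = (quant_w.astype(np.float32) - zero_point) * scale
    return float(np.max(np.abs(dequant - float_w)))

# Toy example: quantize a float tensor to int8 and check the round trip.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=16).astype(np.float32)
scale = (w.max() - w.min()) / 255.0
zero_point = np.round(-w.min() / scale) - 128
q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
err = max_weight_discrepancy(w, q, scale, zero_point)
# A healthy round trip stays within one quantization step;
# the discrepancies we observed were far larger than that.
assert err <= scale
```

A deviation much larger than one quantization step per weight is what we mean by "too big discrepancies" above.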

We also tried importing it with the Edge-AI Benchmark Tool; there the accuracy drop was ~40%.

We also tried importing EfficientNet-Lite models downloaded directly from TF Hub, using the configs present in ti-processor-sdk-rtos-j721e-evm-08_00_00_12/tidl_j721e_08_06_00_10/.., but the results were again unsatisfactory.

Could you please investigate the quantization in the Importer? We suspect there might be a problem with it.

Best regards,

Mitko Hadzhiev

  • Please use the quantization configuration from here for MobileNetV2 and the EfficientNets:

    https://github.com/TexasInstruments/edgeai-benchmark/blob/master/configs/classification.py

    Refer to the report below for the 8-bit quantization accuracy we achieved for these models:

    https://github.com/TexasInstruments/edgeai-modelzoo/blob/master/modelartifacts/report_TDA4VM.csv
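As background on why the calibration settings in those configs matter so much for 8-bit accuracy, the sketch below illustrates per-tensor affine int8 quantization with numpy: a single outlier in the calibrated activation range inflates the quantization step and the round-trip error for every other value. This is a generic illustration of the math, not TIDL's or edgeai-benchmark's actual implementation.

```python
import numpy as np

def quantize_dequantize(x, qmin=-128, qmax=127):
    """Affine int8 quantization round trip: the wider the calibrated
    range, the coarser the step and the larger the error."""
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

rng = np.random.default_rng(1)
acts = rng.normal(0.0, 1.0, 1000).astype(np.float32)
err_tight = np.abs(quantize_dequantize(acts) - acts).max()

# One outlier widens the calibration range and coarsens the step,
# degrading precision for all in-range activations.
acts_outlier = np.append(acts, 50.0).astype(np.float32)
err_wide = np.abs(quantize_dequantize(acts_outlier)[:-1] - acts).max()
assert err_wide > err_tight
```

This is why a calibration strategy that clips or averages ranges over many frames typically beats a naive min/max over one batch.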

  • Hi Kumar, 

    We tested MobileNetV2 (specifically mobilenet_v2_1.0_224.tflite, downloaded directly from http://software-dl.ti.com/jacinto7/esd/modelzoo/08_06_00_01/modelartifacts/TDA4VM/8bits/cl-0010_tflitert_imagenet1k_tf1-models_mobilenet_v2_1.0_224_tflite.tar.gz), which was already built using the correct configs. Although the accuracy shown in the report may be comparable to the official MobileNetV2 results, the actual predictions (i.e. the model's confidence in the predicted class) differ significantly from those of the .tflite model. For instance, we tested it on these four pictures taken from "ti-processor-sdk-rtos-j721e-evm-08_00_00_12":

    image                                 real label   tflite top-1 confidence   TIDL top-1 confidence   tflite top-1 label   TIDL top-1 label
    testvecs/input/airshow.jpg            895          0.61735                   0.490337                896                  896
    testvecs/input/ti_lindau_I00000.jpg   557          0.25838175                0.365260                868                  868
    testvecs/input/ti_lindau_000020.jpg   442          0.36412942                0.219044                830                  830
    testvecs/input/0000000271.png         498          0.20371029                0.076484                451                  512


    Although the label predictions for the first three images match, the confidences differ considerably. For the fourth image, both the labels and the confidences differ, which we fear will happen often given discrepancies of this size. There is no obvious pattern in the differences between the TIDL and tflite predictions, so we cannot correct for them.
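For reference, the top-1 columns above can be reproduced with a small helper like the following numpy sketch (assuming the model's raw output is a logits vector; if the model already emits softmax probabilities, the `argmax`/`max` alone suffice):

```python
import numpy as np

def top1(logits):
    """Return (class index, softmax confidence) of the top-1 prediction."""
    z = logits - logits.max()            # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    idx = int(np.argmax(probs))
    return idx, float(probs[idx])

# Toy logits: class 2 wins with moderate confidence.
idx, conf = top1(np.array([1.0, 0.5, 3.0, -1.0]))
assert idx == 2
assert 0.0 < conf < 1.0
```

Running the same helper on the tflite and TIDL outputs for one image is how a confidence gap like 0.61735 vs 0.490337 shows up.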

    For classification problems with fewer output classes, this discrepancy becomes critical for the accuracy metric.

    Best regards,

    Mitko Hadzhiev