PROCESSOR-SDK-J721S2: How to import pre-quantized TFLite CNN models using TIDL without calibration

Part Number: PROCESSOR-SDK-J721S2

Hello,

I am using the latest PROCESSOR-SDK-J721S2 version (8.6.1), and I am trying to import pre-quantized TFLite CNN models using TIDL OSRT.

With this version of the SDK, and on this chip family (J721S2), the TIDL roadmap states that importing pre-quantized TFLite models is supported (I quote: "TFLite pre quantized model support with asymmetric quantization").

Am I correct to assume that this means importing the model without prior dequantization and re-calibration, i.e. using the int8 weights "as is"?
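To make concrete what I mean by "as is" (with made-up numbers): a pre-quantized TFLite model already carries the scale and zero-point for each tensor, so I would expect TIDL to consume those values directly rather than re-estimating ranges through calibration. A toy illustration of the asymmetric scheme:

```python
import numpy as np

# TFLite asymmetric quantization: real_value = scale * (q - zero_point).
# The scale / zero_point below are made up for illustration; in a
# pre-quantized .tflite file they are already stored per tensor.
scale, zero_point = 0.05, 12
q = np.array([-128, 0, 12, 127], dtype=np.int8)
real = scale * (q.astype(np.int32) - zero_point)
print(real)  # [-7.   -0.6   0.    5.75]
```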

I don't see how to enable this feature with the current TIDL parameters described in this doc: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md#basic-options

Please note that I am using the TIDL OSRT API from Python.
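For reference, this is roughly how I compile a model today with the TFLite delegate (a minimal sketch based on the edgeai-tidl-tools OSRT Python examples; the delegate library name and the option keys are my understanding of those examples, and the commented line marks exactly what I am missing):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Compile-time (import) options passed to the TIDL delegate.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",
    "artifacts_folder": "./model-artifacts/my_model",
    "tensor_bits": 8,
    "advanced_options:calibration_frames": 1,
    "advanced_options:calibration_iterations": 1,
    # Which option goes here to tell TIDL the model is already quantized
    # (asymmetric) and that calibration should be skipped?
}

# Import/compilation delegate shipped with edgeai-tidl-tools
# (name taken from the OSRT Python examples).
tidl_delegate = tflite.load_delegate("tidl_model_import_tflite.so", compile_options)

interpreter = tflite.Interpreter(
    model_path="my_prequantized_int8_model.tflite",
    experimental_delegates=[tidl_delegate],
)
interpreter.allocate_tensors()

# One forward pass triggers TIDL import / artifact generation.
input_details = interpreter.get_input_details()
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
```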

Can you help me enable this feature and verify that it works?

Thank you.