Hi,
I have compiled a neural network with the following settings:
compile_options = {
    "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
    "artifacts_folder": "/home/TestCode/compiled_model",
    "tensor_bits": 16,
    "accuracy_level": 1,
    "advanced_options:calibration_frames": len(calib_images),
    "advanced_options:calibration_iterations": 3,
    "debug_level": 1,
    "deny_list": "Slice",
}
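For completeness, this is roughly how I trigger compilation with the options above (a sketch; the model path and input tensor name are placeholders, and onnxruntime is imported lazily inside the function):

```python
def compile_model(model_path, compile_options, calib_images, input_name="input"):
    """Sketch: compile via the TIDL compilation provider by running
    the calibration images through an onnxruntime session."""
    # Imported lazily so the sketch parses without onnxruntime installed.
    import onnxruntime as rt

    sess = rt.InferenceSession(
        model_path,
        sess_options=rt.SessionOptions(),
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[compile_options, {}],
    )
    # Each run feeds one preprocessed image; TIDL uses these runs
    # to calibrate quantization ranges and emit artifacts.
    for img in calib_images:
        sess.run(None, {input_name: img})
```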
Initially, I tried with tensor_bits = 8, but the accuracy was quite low. I then moved to 16 bits, but the accuracy still seems poor. Around 50 images were used for calibration. I also checked with tensor_bits = 32, and there the accuracy is reasonable, so the model itself appears fine. Is there anything in the settings above I should change to improve the 16-bit accuracy?
Lastly, is it possible to compile a statically quantized ONNX model (quantized offline with ONNX tooling) with Edge AI TIDL? If yes, how can this be done?
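To be concrete, by "statically quantized" I mean a QDQ model produced offline with onnxruntime's quantize_static, along these lines (a sketch; the paths, input name, and calibration reader are placeholders for my actual pipeline):

```python
def quantize_offline(fp32_path, int8_path, calib_images, input_name="input"):
    """Sketch: produce a statically quantized QDQ ONNX model."""
    # Imported lazily so the sketch parses without onnxruntime installed.
    from onnxruntime.quantization import (
        CalibrationDataReader,
        QuantFormat,
        QuantType,
        quantize_static,
    )

    class _Reader(CalibrationDataReader):
        # Feeds preprocessed numpy arrays one at a time; None ends calibration.
        def __init__(self, images):
            self._it = iter(images)

        def get_next(self):
            img = next(self._it, None)
            return None if img is None else {input_name: img}

    quantize_static(
        fp32_path,
        int8_path,
        _Reader(calib_images),
        quant_format=QuantFormat.QDQ,
        activation_type=QuantType.QInt8,
        weight_type=QuantType.QInt8,
    )
```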
PS: this is on host emulation, more specifically in a Docker container.
Thanks
Ashay