Tool/software:
Hi,
We are trying to run our model on EdgeAI-TIDL-Tools 09.02.07 https://github.com/TexasInstruments/edgeai-tidl-tools/tree/09_02_07_00?tab=readme-ov-file.
There is a problem with the int16/int8 quantization calibration when a multiply-broadcast with a constant is implemented as a Batch Normalization layer, as is done in SDK 9.2. The first quantized calibration iteration runs fine, but in the second iteration the multiply-broadcast layer returns all zeros, and in the third iteration it returns all -1s.
Please note that the above problem does not occur when the constant multiply-broadcast is implemented as an element-wise layer.
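For reference, here is a minimal sketch of the kind of operation we mean (a hypothetical example, not our actual model): a tensor multiplied by a per-channel constant that is broadcast over the spatial dimensions, exported to ONNX.

```python
# Hypothetical repro sketch, not our actual network: a constant multiply-broadcast
# that an import tool may fold into a Batch Normalization layer.
import torch
import torch.nn as nn

class ConstMulBroadcast(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Per-channel constant, broadcast over H and W during the multiply
        self.register_buffer("scale", torch.rand(1, channels, 1, 1))

    def forward(self, x):
        return x * self.scale  # constant multiply-broadcast

model = ConstMulBroadcast()
dummy = torch.rand(1, 16, 64, 64)
torch.onnx.export(model, dummy, "const_mul_broadcast.onnx", opset_version=11)
```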