Hello,
I have trained my model with QAT; however, to achieve acceptable accuracy I set a few layers to be quantized at 16-bit precision by setting their bitwidth_weights and bitwidth_activations to 16.
After finishing QAT, I export the ONNX model for TIDL. The documentation instructs me to set advanced_options:output_feature_16bit_names_list to the layer names of the original model.
However, the names are not exported into the ONNX file correctly; they are replaced by node numbers, as in the figure below. I would like to set Conv_9 to run at 16 bits, so the layer name should appear in the W field, but it is replaced by node number 525. If I set 525 in output_feature_16bit_names_list,
I receive an error about a non-existent layer name. How can I overcome this issue?
Thank you,
Alex.