
PROCESSOR-SDK-AM62A: Issues with AM62A 9.2 TIDL Toolchain

Part Number: PROCESSOR-SDK-AM62A

Tool/software:

We are encountering several issues with the AM62A 9.2 TIDL toolchain that require support:

1. Bin Inference:

When we run inference on the exported bin in the x86 container using PCxxx.out, we get a segmentation fault. We need this x86 simulation flow so that we can integrate TIDL evaluation into our toolchain.

2. Abnormal Accuracy with QDQ Model:

During the QDQ flow with the edgeai-tidl tool, we compared the output features of the QDQ ONNX model against those of the compiled bin and found discrepancies. Do you have any demos or guidance on debugging this?
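To quantify such per-tensor discrepancies, a small comparison helper along these lines can be used (a minimal sketch; `compare_features` is our own code, not part of the TIDL tools, and how the feature maps are dumped/loaded is left out):

```python
import numpy as np

def compare_features(ref: np.ndarray, test: np.ndarray, name: str = "layer") -> dict:
    """Compare a reference feature map (e.g. from the QDQ ONNX model run in
    onnxruntime) against the same tensor produced by the TIDL-compiled bin."""
    ref = ref.astype(np.float32).ravel()
    test = test.astype(np.float32).ravel()
    max_abs = float(np.max(np.abs(ref - test)))
    denom = float(np.linalg.norm(ref) * np.linalg.norm(test))
    cosine = float(np.dot(ref, test) / denom) if denom > 0 else 0.0
    return {"name": name, "max_abs_diff": max_abs, "cosine_sim": cosine}

# Example with the two top-1 scores reported later in this post:
print(compare_features(np.array([17.885122]), np.array([18.246786]), "top1"))
```

Running this over each dumped layer output makes it easy to see at which layer the ONNX and bin results start to diverge.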

Parameter Configuration:

platform: J7
version: 7.2
tensor_bits: 8
debug_level: 2
max_num_subgraphs: 16
deny_list: ''
deny_list:layer_type: ''
deny_list:layer_name: ''
model_type: ''
accuracy_level: 0
advanced_options:calibration_frames: 1
advanced_options:calibration_iterations: 1
advanced_options:output_feature_16bit_names_list: ''
advanced_options:params_16bit_names_list: ''
advanced_options:mixed_precision_factor: -1
advanced_options:quantization_scale_type: 4
advanced_options:high_resolution_optimization: 0
advanced_options:pre_batchnorm_fold: 1
ti_internal_nc_flag: 1601
advanced_options:activation_clipping: 1
advanced_options:weight_clipping: 1
advanced_options:bias_calibration: 1
advanced_options:add_data_convert_ops: 3
advanced_options:channel_wise_quantization: 0
advanced_options:inference_mode: 0
advanced_options:num_cores: 1
advanced_options:prequantized_model: 1
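For reference, the dump above expressed as a compile-options dictionary in the style of the edgeai-tidl-tools OSRT (onnxruntime) examples (a sketch showing only the key entries; the key names simply mirror the dump, and whether each one is user-settable depends on the SDK version):

```python
# Sketch: compile options mirroring the configuration dump in this post.
# Values are copied verbatim from the dump; treat this as illustrative.
compile_options = {
    "tensor_bits": 8,
    "debug_level": 2,
    "max_num_subgraphs": 16,
    "deny_list": "",
    "accuracy_level": 0,
    "advanced_options:calibration_frames": 1,
    "advanced_options:calibration_iterations": 1,
    "advanced_options:quantization_scale_type": 4,
    "advanced_options:add_data_convert_ops": 3,
    "advanced_options:prequantized_model": 1,
}
```

In the OSRT examples, a dict like this is supplied as provider options alongside the `TIDLCompilationProvider` when creating the onnxruntime session.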

Compilation Errors:

While compiling mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx, the following errors were reported:

In TIDL_runtimesOptimizeNet: LayerIndex = 315, dataIndex = 314
Unable to merge Dequantize upwards - DQ without initializer?
Error: Layer 14, /0_4/Conv:/0_4/Conv_output_0 is missing inputs in the network and cannot be topologically sorted
Input 0: /0_4/DequantizeLinear_output_0, dataId=229
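Our understanding of the topological-sort error: a node can only be scheduled once every one of its inputs is a graph input, an initializer, or another node's output. A minimal sketch reproducing the reported condition on a hypothetical one-node graph (the helper is ours, not TIDL code):

```python
def find_missing_inputs(nodes, graph_inputs, initializers):
    """Return, per node, the inputs that no graph input, initializer,
    or other node's output provides - i.e. why a topological sort fails."""
    known = set(graph_inputs) | set(initializers)
    for n in nodes:
        known.update(n["outputs"])
    missing = {}
    for n in nodes:
        gaps = [i for i in n["inputs"] if i not in known]
        if gaps:
            missing[n["name"]] = gaps
    return missing

# Tiny broken subgraph: the DequantizeLinear output has no producer here,
# matching the "/0_4/Conv is missing inputs" error above.
nodes = [
    {"name": "/0_4/Conv",
     "inputs": ["/0_4/DequantizeLinear_output_0"],
     "outputs": ["/0_4/Conv_output_0"]},
]
print(find_missing_inputs(nodes, graph_inputs=["images"], initializers=[]))
```

A check like this on the exported ONNX graph can confirm whether the model itself is inconsistent or whether the input went dangling during TIDL's DQ-merge optimization step.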

Inference Results:

After simplifying the ONNX model with onnxsim, compilation succeeded, but the scores are still misaligned:

Inference image: airshow.jpg

Inference QDQ ONNX: 0 17.885122 warplane, military plane

Inference bin: 0 18.246786 warplane, military plane
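For scale, the gap between the two top-1 scores above works out to roughly 2% relative error (the class prediction itself is unchanged):

```python
# Quick check on the two top-1 scores reported above.
qdq_onnx = 17.885122   # top-1 score from the QDQ ONNX model
tidl_bin = 18.246786   # top-1 score from the TIDL-compiled bin
abs_diff = tidl_bin - qdq_onnx
rel_diff = abs_diff / qdq_onnx
print(f"abs diff = {abs_diff:.6f}, rel diff = {rel_diff:.2%}")
```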

We would appreciate any assistance or guidance on these issues.