
AM68A: QAT v2 model fails to compile with edgeai-tidl-tools-11_00_06_00

Part Number: AM68A


Tool/software:

Hi TI,

We are trying to compile the QAT v2 model mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx downloaded from the model zoo link http://software-dl.ti.com/jacinto7/esd/modelzoo/11_01_00/modelartifacts/AM68A/8bits/cl-6508_onnxrt_imagenet1k_edgeai-tv2_mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model_onnx.tar.gz, and we changed the model config to set:

'advanced_options:prequantized_model': 1

data_convert = 0
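For reference, here is a minimal sketch of how the two settings above would look in the compile-options dict used by the osrt_python examples. The dict name and the `tensor_bits` entry are illustrative assumptions; only `'advanced_options:prequantized_model'` and `data_convert` are the settings from this post.

```python
# Sketch only, assuming the usual edgeai-tidl-tools osrt_python
# option-dict layout; values other than prequantized_model and
# data_convert are illustrative assumptions.
compile_options = {
    "tensor_bits": 8,
    # The ONNX model already carries QDQ quantization from QAT v2,
    # so the TIDL import stage is told it is prequantized:
    "advanced_options:prequantized_model": 1,
}

# Disable the input data-conversion step, as set in the model config:
data_convert = 0
```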

We then get the following log:




(edgeai-tidl-1100) szz@szz-Victus-by-HP-Gaming-Laptop-16-r0xxx:~/zx/code/edgeai-tidl-tools-11_00_06_00/examples/osrt_python/ort$ python3 onnxrt_ep.py -c -m mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model
Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

Running 1 Models - ['mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model']


Running_Model : mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model


Running shape inference on model ../../../models/public/mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx

========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:
1. Parsing
2. Graph Optimization
3. Quantization & Calibration
4. Memory Planning

============================== [Version Summary] ==============================

-------------------------------------------------------------------------------
| TIDL Tools Version   | 11_00_06_00 |
-------------------------------------------------------------------------------
| C7x Firmware Version | 11_00_00_00 |
-------------------------------------------------------------------------------
| Runtime Version      | 1.15.0      |
-------------------------------------------------------------------------------
| Model Opset Version  | 18          |
-------------------------------------------------------------------------------

============================== [Parsing Started] ==============================

[TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options

------------------------- Subgraph Information Summary -------------------------
-------------------------------------------------------------------------------
| Core | No. of Nodes | Number of Subgraphs |
-------------------------------------------------------------------------------
| C7x  | 316          | 1                   |
| CPU  | 0            | x                   |
-------------------------------------------------------------------------------
============================= [Parsing Completed] =============================

==================== [Optimization for subgraph_0 Started] ====================

[TIDL Import] [PARSER] ERROR: Unable to merge Dequantize - /DequantizeLinear_output_0 upwards - DQ without initializer? -- [tidl_import_common.cpp, 7973]
[TIDL Import] ERROR: - Failed in function: tidl_optimizeNet -- [tidl_import_core.cpp, 2678]
[TIDL Import] ERROR: Network Optimization failed - Failed in function: TIDL_runtimesOptimizeNet -- [tidl_runtimes_import_common.cpp, 1392]
[TIDL Import] [PARSER] ERROR: - Failed in function: TIDL_computeImportFunc -- [tidl_onnxRtImport_EP.cpp, 2672]
2025-10-10 13:44:10.415294458 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Import Failed.
Process Process-1:
Traceback (most recent call last):
  File "/home/szz/anaconda3/envs/edgeai-tidl-1100/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/szz/anaconda3/envs/edgeai-tidl-1100/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/szz/zx/code/edgeai-tidl-tools-11_00_06_00/examples/osrt_python/ort/onnxrt_ep.py", line 392, in run_model
    imgs, output, proc_time, sub_graph_time, height, width = infer_image(sess, input_images, config)
  File "/home/szz/zx/code/edgeai-tidl-tools-11_00_06_00/examples/osrt_python/ort/onnxrt_ep.py", line 208, in infer_image
    output = list(sess.run(None, {input_name: input_data}))
  File "/home/szz/anaconda3/envs/edgeai-tidl-1100/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 217, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: TIDL Compute Import Failed.