Unable to compile ONNX models - PointPillars

Part Number: PROCESSOR-SDK-AM62A

Tool/software:

Dear supporters, 

I am trying to run inference on a PointPillars network with TIDL acceleration on my AM62A.

Models used:

  1. A PointPillars model trained on a custom dataset and exported to ONNX using edgeai-mmdetection3d and its configs:
    1. First, edgeai-mmdetection3d/configs/pointpillars/tidl_hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py
    2. Then, edgeai-mmdetection3d/configs/pointpillars/tidl_hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_qat.py
  2. The pretrained, ONNX-exported PointPillars model provided by TI: edgeai-modelzoo/models/vision/detection_3d/kitti/mmdet3d/lidar_point_pillars_10k_496x432_3class_qat-p2.onnx.link, together with its prototxt.

When I compile the model artifacts, an error is raised and I cannot proceed to the next steps.

Below is the error output, along with some settings printed for debugging.

root@dfai:~/tda4vh/edgeai-tensorlab/edgeai-benchmark# python3 scripts/benchmark_custom.py 
{'3dod-7110': {'task_type': 'detection_3d', 'dataset_category': 'kitti_lidar_det_3class', 'calibration_dataset': <edgeai_benchmark.datasets.kitti_lidar_det.KittiLidar3D object at 0x77f0616bffa0>, 'input_dataset': <edgeai_benchmark.datasets.kitti_lidar_det.KittiLidar3D object at 0x77f0616bffd0>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x77efc9f16410>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x77efc9f140d0>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x77efc9f162f0>, 'metric': {'label_offset_pred': None}, 'model_info': {'metric_reference': {'accuracy_ap_3d_moderate%': 76.5}, 'model_shortlist': None}}}
work_dir = ./work_dirs/modelartifacts/AM62A/8bits
packaged_dir = ./work_dirs/modelpackage/AM62A/8bits
configs to run: ["['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']"]
number of configs: 1

INFO:20240722-062402: starting - ['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']
INFO:20240722-062402: model_path - /root/tda4vh/edgeai-mmdetection3d/work_dirs/3class_quant_train_dir_2/best_KITTI/combined_model.onnx
INFO:20240722-062402: model_file - /root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/combined_model.onnx
INFO:20240722-062402: quant_file - /root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/combined_model_qparams.prototxt
Downloading 1/1: /root/tda4vh/edgeai-mmdetection3d/work_dirs/3class_quant_train_dir_2/best_KITTI/combined_model.onnx
Download done for /root/tda4vh/edgeai-mmdetection3d/work_dirs/3class_quant_train_dir_2/best_KITTI/combined_model.onnx
Downloading 1/1: /root/tda4vh/edgeai-mmdetection3d/work_dirs/3class_quant_train_dir_2/best_KITTI/combined_model.onnx
Download done for /root/tda4vh/edgeai-mmdetection3d/work_dirs/3class_quant_train_dir_2/best_KITTI/combined_model.onnx

INFO:20240722-062402: running - ['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']
INFO:20240722-062402: pipeline_config - {'task_type': 'detection_3d', 'dataset_category': 'kitti_lidar_det_3class', 'calibration_dataset': <edgeai_benchmark.datasets.kitti_lidar_det.KittiLidar3D object at 0x77f0616bffa0>, 'input_dataset': <edgeai_benchmark.datasets.kitti_lidar_det.KittiLidar3D object at 0x77f0616bffd0>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x77efc9f16410>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x77efc9f140d0>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x77efc9f162f0>, 'metric': {'label_offset_pred': None}, 'model_info': {'metric_reference': {'accuracy_ap_3d_moderate%': 76.5}, 'model_shortlist': None}}
INFO:20240722-062402: import  - ['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars'] - this may take some time...providers:  ['TIDLCompilationProvider', 'CPUExecutionProvider']
providers_options:  [{'platform': 'J7', 'version': '9.2.0', 'tidl_tools_path': '/root/tda4vh/edgeai-tidl-tools_0902/tidl_tools', 'artifacts_folder': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/artifacts", 'tensor_bits': '8', 'import': 'yes', 'accuracy_level': '0', 'debug_level': 6, 'inference_mode': '0', 'advanced_options:high_resolution_optimization': '0', 'advanced_options:pre_batchnorm_fold': '1', 'advanced_options:calibration_frames': '12', 'advanced_options:calibration_iterations': '1', 'advanced_options:quantization_scale_type': '1', 'advanced_options:activation_clipping': '1', 'advanced_options:weight_clipping': '1', 'advanced_options:bias_calibration': '1', 'advanced_options:output_feature_16bit_names_list': '', 'advanced_options:params_16bit_names_list': '', 'advanced_options:add_data_convert_ops': '3', 'ti_internal_nc_flag': '83886080', 'info': "{'prequantized_model_type': 1}", 'object_detection:confidence_threshold': '0.3', 'object_detection:top_k': '200', 'object_detection:meta_arch_type': '7', 'object_detection:meta_layers_names_list': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/pointPillars.prototxt", 'advanced_options:quant_params_proto_path': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/combined_model_qparams.prototxt"}, {}]
disabled_optimizers:  set()
<class 'list'> <class 'list'> <class 'set'>
Traceback (most recent call last):
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 204, in _run_pipeline
    result = cls._run_pipeline_impl(basic_settings, pipeline_config, description)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 177, in _run_pipeline_impl
    accuracy_result = accuracy_pipeline(description)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 87, in __call__
    param_result = self._run(description=description)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 114, in _run
    self._import_model(description)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 180, in _import_model
    self._run_with_log(session.import_model, calib_data)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 284, in _run_with_log
    return func(*args, **kwargs)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 51, in import_model
    self.interpreter = self._create_interpreter(is_import=True)
  File "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 148, in _create_interpreter
    interpreter = onnxruntime.InferenceSession(self.kwargs['model_file'], providers=ep_list,
  File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 362, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 417, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
TypeError: initialize_session(): incompatible function arguments. The following argument types are supported:
    1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: List[Dict[str, str]], arg2: Set[str]) -> None

Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x77efc8206df0>, ['TIDLCompilationProvider', 'CPUExecutionProvider'], [{'platform': 'J7', 'version': '9.2.0', 'tidl_tools_path': '/root/tda4vh/edgeai-tidl-tools_0902/tidl_tools', 'artifacts_folder': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/artifacts", 'tensor_bits': '8', 'import': 'yes', 'accuracy_level': '0', 'debug_level': 6, 'inference_mode': '0', 'advanced_options:high_resolution_optimization': '0', 'advanced_options:pre_batchnorm_fold': '1', 'advanced_options:calibration_frames': '12', 'advanced_options:calibration_iterations': '1', 'advanced_options:quantization_scale_type': '1', 'advanced_options:activation_clipping': '1', 'advanced_options:weight_clipping': '1', 'advanced_options:bias_calibration': '1', 'advanced_options:output_feature_16bit_names_list': '', 'advanced_options:params_16bit_names_list': '', 'advanced_options:add_data_convert_ops': '3', 'ti_internal_nc_flag': '83886080', 'info': "{'prequantized_model_type': 1}", 'object_detection:confidence_threshold': '0.3', 'object_detection:top_k': '200', 'object_detection:meta_arch_type': '7', 'object_detection:meta_layers_names_list': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/pointPillars.prototxt", 'advanced_options:quant_params_proto_path': "/root/tda4vh/edgeai-tensorlab/edgeai-benchmark/work_dirs/['3dod-7110', 'onnxrt', '3class_quant_train_dir_2', 'best_KITTI', 'combined_model', 'onnx', 'pointpillars']/model/combined_model_qparams.prototxt"}, {}], set()
initialize_session(): incompatible function arguments. The following argument types are supported:
    1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: List[Dict[str, str]], arg2: Set[str]) -> None

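One thing I notice in the invocation above: the pybind signature requires the provider options to be a List[Dict[str, str]], but the options dict actually passed contains non-string values (e.g. 'debug_level': 6). Could that type mismatch be what triggers the TypeError? A minimal sketch of coercing every option value to a string before session creation (the variable names are mine, not from edgeai-benchmark, and this is only a guess at the cause):

```python
# Sketch: onnxruntime's initialize_session() expects provider_options as
# List[Dict[str, str]]. Any non-string value (like the int debug_level=6
# seen in the log above) would violate that signature, so coerce all
# values to str before passing them to InferenceSession.
raw_options = {
    'platform': 'J7',
    'version': '9.2.0',
    'debug_level': 6,      # int -- does not match Dict[str, str]
    'tensor_bits': '8',
}

# One options dict per execution provider; values stringified.
provider_options = [{k: str(v) for k, v in raw_options.items()}]

# Every value is now a string, matching the expected signature.
assert all(isinstance(v, str) for v in provider_options[0].values())
print(provider_options[0]['debug_level'])
```

If that guess is right, the fix would belong where edgeai-benchmark builds the provider options, but I am not sure whether the real cause is this or an onnxruntime version mismatch with the TIDL tools.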
How can I solve this problem and proceed to the next steps? Both models fail with the same error.