TDA4VM: Edgeai benchmark model compilation error

Part Number: TDA4VM


Hello champs!
I have been compiling several models using the edgeai-benchmark repository. I trained yolox_m_lite and tried to compile the model using the benchmark, but I ran into an issue during compilation.

[ONNXRuntimeError] : 1 : FAIL : Load model from /home/mugu/edgeai-benchmark/work_dirs/modelartifacts/TDA4VM/8bits/od-8230_onnxrt_edgeai-benchmark_model_yolox_m_onnx/model/yolox_m.onnx failed:/home/kumar/work/ort_1.14/onnxruntime/onnxruntime/core/graph/model.cc:145 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 9, max supported IR version: 8

  • INFO:20240224-193409: running - od-8230_onnxrt_edgeai-benchmark_model_yolox_m_onnx
    INFO:20240224-193409: pipeline_config - {'task_type': 'detection', 'dataset_category': 'coco', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7ff587ce1fc0>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7ff587ce20b0>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7ff586552620>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x7ff5865523b0>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7ff586553010>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31, 27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43, 39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56, 51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72, 63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85, 75: 86, 76: 87, 77: 88, 78: 89, 79: 90, 80: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 44.4}, 'model_shortlist': None}}
    INFO:20240224-193409: import  - od-8230_onnxrt_edgeai-benchmark_model_yolox_m_onnx - this may take some time...Traceback (most recent call last):
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 174, in _run_pipeline
        result = cls._run_pipeline_impl(basic_settings, pipeline_config, description)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 147, in _run_pipeline_impl
        accuracy_result = accuracy_pipeline(description)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 123, in __call__
        param_result = self._run(description=description)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 156, in _run
        self._import_model(description)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 220, in _import_model
        self._run_with_log(session.import_model, calib_data)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 324, in _run_with_log
        return func(*args, **kwargs)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 51, in import_model
        self.interpreter = self._create_interpreter(is_import=True)
      File "/home/mugu/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 135, in _create_interpreter
        interpreter = onnxruntime.InferenceSession(self.kwargs['model_file'], providers=ep_list,
      File "/home/mugu/.pyenv/versions/bench/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 362, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/home/mugu/.pyenv/versions/bench/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 399, in _create_inference_session
        sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /home/mugu/edgeai-benchmark/work_dirs/modelartifacts/TDA4VM/8bits/od-8230_onnxrt_edgeai-benchmark_model_yolox_m_onnx/model/yolox_m.onnx failed:/home/kumar/work/ort_1.14/onnxruntime/onnxruntime/core/graph/model.cc:145 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 9, max supported IR version: 8

    [ONNXRuntimeError] : 1 : FAIL : Load model from /home/mugu/edgeai-benchmark/work_dirs/modelartifacts/TDA4VM/8bits/od-8230_onnxrt_edgeai-benchmark_model_yolox_m_onnx/model/yolox_m.onnx failed:/home/kumar/work/ort_1.14/onnxruntime/onnxruntime/core/graph/model.cc:145 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 9, max supported IR version: 8

    TASKS                                                       | 100%|██████████||

  • >>>Unsupported model IR version: 9, max supported IR version: 8

    This is due to the version of the onnx Python package.

    You can install the correct version with:

    pip3 install --no-input protobuf==3.20.2 onnx==1.13.0

    The setup_pc.sh file had a bug that caused this version not to be installed. It has been fixed now.
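    As a side note, the error is about the model's IR version (a field in the ONNX file), not its opset. If re-exporting with the older onnx package is inconvenient, a common workaround is to rewrite the IR version field of the already-exported model so that onnxruntime 1.14 (max IR version 8) can load it. The sketch below is a hypothetical helper, not part of edgeai-benchmark, and is only safe when the model's opset imports are unchanged and supported by the runtime:

    ```python
    # Hypothetical workaround sketch: cap the ir_version field of an ONNX
    # model so an older onnxruntime can load it. The graph and opset
    # imports are left untouched; only the file-format version is lowered.
    import onnx

    def cap_ir_version(model: onnx.ModelProto, target_ir: int = 8) -> onnx.ModelProto:
        """Lower model.ir_version to target_ir if the model declares a newer one."""
        if model.ir_version > target_ir:
            model.ir_version = target_ir
        return model
    ```

    You could then apply it to the exported model with something like `onnx.save(cap_ir_version(onnx.load("yolox_m.onnx")), "yolox_m.onnx")` before re-running the benchmark. That said, the pip command above is the recommended fix, since it keeps the export and compile environments consistent.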