This thread has been locked.


TDA4VH-Q1: edgeai-benchmark r11.0 and latest rtmdet model compile

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: AM69A

Hi all,

I have recently updated to the latest branch of edgeai-benchmark r11.0, as the latest rtmdet lite models are now available. The focus here is on two models:

od-9206_onnxrt_coco_edgeai-mmdet_rtmdet_m_coco_lite_640x640_20250404_model_onnx

od-9208_onnxrt_coco_edgeai-mmdet_rtmdet_l_coco_orig_640x640_20250310_model_onnx

All dependencies were set up in a fresh Anaconda virtual environment by running the requirements scripts. Before getting to the rtmdet models, I ran some pipe-clean tests with od-8220 (yolox) and od-8850/od-8860 (yolov7), which seem to compile OK.

Overall, I still think there are issues with the rtmdet model compile. One observation is that run_benchmarks_pc.sh runs into a dependency issue on the first run, but on a second run it goes through normally. The dependency issue looks like this:

INFO:20250723-222804: number of configs - 1
TASKS TOTAL=1, NUM_RUNNING=1: 0%| | 0/1 [00:44<?, ?it/s, postfix={'RUNNING': ['od-9208:import'], 'COMPLETED': []}]
ERROR:20250723-222848: model_id:od-9208 run_import:True run_inference:False - No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'
Traceback (most recent call last):
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 291, in _run_pipeline
result = cls._run_pipeline_impl(settings, pipeline_config, description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 326, in _run_pipeline_impl
result = accuracy_pipeline(description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 76, in __call__
param_result = self._run(description=description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 109, in _run
self._import_model(description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 170, in _import_model
is_ok = session.start_import()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 47, in start_import
BaseRTSession.start_import(self)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 158, in start_import
self._prepare_model()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 153, in _prepare_model
self.get_model()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 445, in get_model
apply_input_optimization = self._optimize_model(model_file, is_new_file=is_new_file)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 504, in _optimize_model
from osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer.ops import get_optimizers
ModuleNotFoundError: No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'
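A quick way to narrow this down (a hypothetical diagnostic, not part of edgeai-benchmark) is to ask the same interpreter where it would load the failing module from, without crashing if it is missing:

```python
# Hypothetical diagnostic: report where Python would load a module from,
# returning None instead of raising when it (or its parent package) is missing.
import importlib.util

def module_location(name):
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:  # parent package itself is not importable
        return None
    return spec.origin if spec else None

# The exact import that edgeai-benchmark attempts in _optimize_model:
print(module_location("osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer"))
```

If this prints None in the same environment where the benchmark script fails, the module genuinely is not visible to that interpreter, regardless of what is on disk.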

Running run_benchmarks_pc.sh a second time gets past this error and runs to completion, generating model artifacts. However, the run.log file shows the following under "Optimization for subgraph_0 Started":

==================== [Optimization for subgraph_0 Started] ====================

Invalid Layer Name 455
Invalid Layer Name 472
Invalid Layer Name 489
Invalid Layer Name 448
Invalid Layer Name 465
Invalid Layer Name 482
Invalid Layer Name 455

...

I am attaching the run.log file for the od-9208 rtmdet large model:

4857.run.log

Can someone review it and let me know about potential issues in the setup?

Please let me know if you need more information.

Thanks!

--Gunter

  • Hi,

I should also mention that osrt_model_tools seems to be properly installed:

    @Linux-005:~/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/osrt_model_tools/onnx_tools/tidl_onnx_model_optimizer$ ls
    __init__.py ops.py optimize.py __pycache__ src test_optimize.py

The error appears only on the first compile attempt of rtmdet; I do not see it happening with other models.

    Regards,

    --Gunter

  • Hi Gunter,

I think the model optimizer is not installed. Please go to edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools and run "source ./setup.sh". This should install the model optimizer module.

    Please see:

    https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/osrt-model-tools/osrt_model_tools/onnx_tools/tidl_onnx_model_optimizer

    Regards,

    Chris

  • Hi Chris,

Sounds good, let me do that.

But first, let me share the current state:

    (edgeai-benchmark) gunter@Linux-005:~/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/osrt_model_tools/onnx_tools/tidl_onnx_model_optimizer$ ls
    __init__.py  ops.py  optimize.py  __pycache__  src  test_optimize.py
    
    (edgeai-benchmark) gunter@Linux-005:~/ti-edgeai/edgeai-tensorlab/edgeai-benchmark$ pip list |grep osrt
    osrt_model_tools        1.2
    
    

Also, when edgeai-benchmark's ./setup_pc.sh ran, it produced the following log (osrt portions):

    ...
    --------------------------------------------------------------------------------------------------------------
    Found local edgeai-tidl-tools, installing osrt_model_tools in develop mode
    --------------------------------------------------------------------------------------------------------------
    WARNING: Skipping osrt_model_tools as it is not installed.
    running develop
    /home/gunter/.local/lib/python3.10/site-packages/setuptools/command/develop.py:41: EasyInstallDeprecationWarning: easy_install command is deprecated.
    !!
    
            ********************************************************************************
            Please avoid running ``setup.py`` and ``easy_install``.
            Instead, use pypa/build, pypa/installer or other
            standards-based tools.
    
            See https://github.com/pypa/setuptools/issues/917 for details.
            ********************************************************************************
    
    !!
      easy_install.initialize_options(self)
    /home/gunter/.local/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
    !!
    
            ********************************************************************************
            Please avoid running ``setup.py`` directly.
            Instead, use pypa/build, pypa/installer or other
            standards-based tools.
    
            See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
            ********************************************************************************
    
    !!
      self.initialize_options()
    running egg_info
    writing osrt_model_tools.egg-info/PKG-INFO
    writing dependency_links to osrt_model_tools.egg-info/dependency_links.txt
    writing requirements to osrt_model_tools.egg-info/requires.txt
    writing top-level names to osrt_model_tools.egg-info/top_level.txt
    reading manifest file 'osrt_model_tools.egg-info/SOURCES.txt'
    writing manifest file 'osrt_model_tools.egg-info/SOURCES.txt'
    running build_ext
    Creating /home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/osrt-model-tools.egg-link (link to .)
    Adding osrt-model-tools 1.2 to easy-install.pth file
    
    Installed /home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools
    Processing dependencies for osrt-model-tools==1.2
    Searching for setuptools==78.1.1
    Best match: setuptools 78.1.1
    Adding setuptools 78.1.1 to easy-install.pth file
    
    Using /home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages
    Finished processing dependencies for osrt-model-tools==1.2
    --------------------------------------------------------------------------------------------------------------
    INFO: installing tidl-tools-package version: 11.0
    Collecting dlr==1.13.0 (from -r ./tools/requirements/requirements_11.0.txt (line 2))
      Using cached https://software-dl.ti.com/jacinto7/esd/tidl-tools/11_00_08_00/OSRT_TOOLS/X86_64_LINUX/UBUNTU_22_04/dlr-1.13.0-py3-none-any.whl (1.4 MB)
    Collecting tvm==0.12.0 (from -r ./tools/requirements/requirements_11.0.txt (line 3))
      Using cached https://software-dl.ti.com/jacinto7/esd/tidl-tools/11_00_08_00/OSRT_TOOLS/X86_64_LINUX/UBUNTU_22_04/tvm-0.12.0-cp310-cp310-linux_x86_64.whl (52.1 MB)
    Collecting onnxruntime-tidl==1.15.0 (from -r ./tools/requirements/requirements_11.0.txt (line 4))
      Using cached https://software-dl.ti.com/jacinto7/esd/tidl-tools/11_00_08_00/OSRT_TOOLS/X86_64_LINUX/UBUNTU_22_04/onnxruntime_tidl-1.15.0-cp310-cp310-linux_x86_64.whl (7.6 MB)
    ...
    Collecting git+https://github.com/TexasInstruments/edgeai-tidl-tools.git@11_00_08_00#subdirectory=osrt-model-tools (from -r ./requirements/requirements_pc.txt (line 18))
      Cloning https://github.com/TexasInstruments/edgeai-tidl-tools.git (to revision 11_00_08_00) to /tmp/pip-req-build-t6ku0okm
      Running command git clone --filter=blob:none --quiet https://github.com/TexasInstruments/edgeai-tidl-tools.git /tmp/pip-req-build-t6ku0okm
      Running command git checkout -q 23b72b5781569a261792d98f6c17503b30b4a283
      Resolved https://github.com/TexasInstruments/edgeai-tidl-tools.git to commit 23b72b5781569a261792d98f6c17503b30b4a283
      Preparing metadata (setup.py) ... done
    ...
    Requirement already satisfied: setuptools>=18.0 in /home/gunter/.local/lib/python3.10/site-packages (from osrt_model_tools==1.2->-r ./requirements/requirements_pc.txt (line 18)) (73.0.0)
    ...
    Building wheels for collected packages: osrt_model_tools, fire
      Building wheel for osrt_model_tools (setup.py) ... done
      Created wheel for osrt_model_tools: filename=osrt_model_tools-1.2-py3-none-any.whl size=271356 sha256=d5effc54133556a0a6574cb0005e39123ba4491c8f93e0a7b499f22111c84695
      Stored in directory: /tmp/pip-ephem-wheel-cache-zu0hr78d/wheels/95/77/e5/118410696c3982b079b2d47c1df68d82da615c4322a6775ce7
      Building wheel for fire (setup.py) ... done
      Created wheel for fire: filename=fire-0.7.0-py3-none-any.whl size=114248 sha256=b43932f5ed82455431c1202d9285e65e330a222b79d7c7468b49918fde5e7111
      Stored in directory: /home/gunter/.cache/pip/wheels/19/39/2f/2d3cadc408a8804103f1c34ddd4b9f6a93497b11fa96fe738e
    Successfully built osrt_model_tools fire
    Installing collected packages: termcolor, Shapely, pyparsing, pluggy, pillow, osrt_model_tools, kiwisolver, iniconfig, fonttools, exceptiongroup, cycler, colored, colorama, pytest, matplotlib, fire, descartes, nuscenes-devkit
      Attempting uninstall: osrt_model_tools
        Found existing installation: osrt_model_tools 1.2
        Uninstalling osrt_model_tools-1.2:
          Successfully uninstalled osrt_model_tools-1.2
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    cityscapesscripts 2.2.3 requires appdirs, which is not installed.
    interrogate 1.7.0 requires click>=7.1, which is not installed.
    interrogate 1.7.0 requires py, which is not installed.
    Successfully installed Shapely-1.8.5.post1 colorama-0.4.6 colored-2.3.0 cycler-0.12.1 descartes-1.1.0 exceptiongroup-1.3.0 fire-0.7.0 fonttools-4.59.0 iniconfig-2.1.0 kiwisolver-1.4.8 matplotlib-3.5.3 nuscenes-devkit-1.1.11 osrt_model_tools-1.2 pillow-11.3.0 pluggy-1.6.0 pyparsing-3.2.3 pytest-8.4.1 termcolor-3.1.0
    ...
    

Correct me if I am wrong, but based on the above, I think the model optimizer is installed.

    Regards,

    --Gunter
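One caveat worth adding (a sketch, with the distribution and module names taken from the logs above): "pip list" reports registered distribution metadata, which does not by itself guarantee that a particular submodule is importable in the active interpreter. The two can be checked independently:

```python
# Sketch: distinguish "pip knows the distribution" from "the import works".
import importlib.util
from importlib import metadata

def dist_version(dist_name):
    """Version string that `pip list` would report, or None if unknown."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

def importable(module_name):
    """True if `import module_name` would find the module."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:  # parent package missing
        return False

# Names from the thread: the dist that pip lists, and the failing submodule.
print(dist_version("osrt_model_tools"))
print(importable("osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer"))
```

If the first call returns "1.2" while the second returns False, the develop-mode install registered the metadata but the package path is not actually resolvable from this environment.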

  • Hi Chris,

I ran the following:

    (edgeai-benchmark) gunter@Linux-005:~/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools$ source ./setup.sh 
    Requirement already satisfied: pip in /home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages (24.2)
    Collecting pip
      Using cached pip-25.1.1-py3-none-any.whl.metadata (3.6 kB)
    Requirement already satisfied: setuptools in /home/gunter/.local/lib/python3.10/site-packages (73.0.0)
    Collecting setuptools
      Using cached setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
    Using cached pip-25.1.1-py3-none-any.whl (1.8 MB)
    Using cached setuptools-80.9.0-py3-none-any.whl (1.2 MB)
    Installing collected packages: setuptools, pip
      Attempting uninstall: setuptools
        Found existing installation: setuptools 73.0.0
        Uninstalling setuptools-73.0.0:
          Successfully uninstalled setuptools-73.0.0
      Attempting uninstall: pip
        Found existing installation: pip 24.2
        Uninstalling pip-24.2:
          Successfully uninstalled pip-24.2
    Successfully installed pip-25.1.1 setuptools-78.1.1
    Installing python packages...
    pip3 install --no-input wheel
    Requirement already satisfied: wheel in /home/gunter/.local/lib/python3.10/site-packages (0.44.0)
    pip3 install --no-input numpy==1.23.0
    Requirement already satisfied: numpy==1.23.0 in /home/gunter/.local/lib/python3.10/site-packages (1.23.0)
    pip3 install --no-input protobuf==3.20.3
    Requirement already satisfied: protobuf==3.20.3 in /home/gunter/.local/lib/python3.10/site-packages (3.20.3)
    pip3 install --no-input onnx==1.14.0
    Requirement already satisfied: onnx==1.14.0 in /home/gunter/.local/lib/python3.10/site-packages (1.14.0)
    Requirement already satisfied: numpy in /home/gunter/.local/lib/python3.10/site-packages (from onnx==1.14.0) (1.23.0)
    Requirement already satisfied: protobuf>=3.20.2 in /home/gunter/.local/lib/python3.10/site-packages (from onnx==1.14.0) (3.20.3)
    Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/gunter/.local/lib/python3.10/site-packages (from onnx==1.14.0) (4.12.2)
    pip3 install --no-input onnxsim==0.4.35
    Requirement already satisfied: onnxsim==0.4.35 in /home/gunter/.local/lib/python3.10/site-packages (0.4.35)
    Requirement already satisfied: onnx in /home/gunter/.local/lib/python3.10/site-packages (from onnxsim==0.4.35) (1.14.0)
    Requirement already satisfied: rich in /home/gunter/.local/lib/python3.10/site-packages (from onnxsim==0.4.35) (13.8.1)
    Requirement already satisfied: numpy in /home/gunter/.local/lib/python3.10/site-packages (from onnx->onnxsim==0.4.35) (1.23.0)
    Requirement already satisfied: protobuf>=3.20.2 in /home/gunter/.local/lib/python3.10/site-packages (from onnx->onnxsim==0.4.35) (3.20.3)
    Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/gunter/.local/lib/python3.10/site-packages (from onnx->onnxsim==0.4.35) (4.12.2)
    Requirement already satisfied: markdown-it-py>=2.2.0 in /home/gunter/.local/lib/python3.10/site-packages (from rich->onnxsim==0.4.35) (3.0.0)
    Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /home/gunter/.local/lib/python3.10/site-packages (from rich->onnxsim==0.4.35) (2.18.0)
    Requirement already satisfied: mdurl~=0.1 in /home/gunter/.local/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich->onnxsim==0.4.35) (0.1.2)
    pip3 install --no-input git+https://github.com/NVIDIA/TensorRT@release/8.5#subdirectory=tools/onnx-graphsurgeon
    Collecting git+https://github.com/NVIDIA/TensorRT@release/8.5#subdirectory=tools/onnx-graphsurgeon
      Cloning https://github.com/NVIDIA/TensorRT (to revision release/8.5) to /tmp/pip-req-build-xux7z7my
      Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA/TensorRT /tmp/pip-req-build-xux7z7my
      Running command git checkout -b release/8.5 --track origin/release/8.5
      Switched to a new branch 'release/8.5'
      Branch 'release/8.5' set up to track remote branch 'release/8.5' from 'origin'.
      Resolved https://github.com/NVIDIA/TensorRT to commit 68b5072fdb9df6b6edab1392b02a705394b2e906
      Running command git submodule update --init --recursive -q
      Preparing metadata (setup.py) ... done
    Requirement already satisfied: numpy in /home/gunter/.local/lib/python3.10/site-packages (from onnx_graphsurgeon==0.3.26) (1.23.0)
    Requirement already satisfied: onnx in /home/gunter/.local/lib/python3.10/site-packages (from onnx_graphsurgeon==0.3.26) (1.14.0)
    Requirement already satisfied: protobuf>=3.20.2 in /home/gunter/.local/lib/python3.10/site-packages (from onnx->onnx_graphsurgeon==0.3.26) (3.20.3)
    Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/gunter/.local/lib/python3.10/site-packages (from onnx->onnx_graphsurgeon==0.3.26) (4.12.2)
    installing the onnx graph optimization toolkit...
    version_file=/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools/version.py
    running develop
    /home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:90: DevelopDeprecationWarning: develop command is deprecated.
    !!
    
            ********************************************************************************
            Please avoid running ``setup.py`` and ``develop``.
            Instead, use standards-based tools like pip or uv.
    
            By 2025-Oct-31, you need to update your project and remove deprecated calls
            or your builds will no longer be supported.
    
            See https://github.com/pypa/setuptools/issues/917 for details.
            ********************************************************************************
    
    !!
      self.initialize_options()
    Obtaining file:///home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools
      Installing build dependencies ... done
      Checking if build backend supports build_editable ... done
      Getting requirements to build editable ... done
      Preparing editable metadata (pyproject.toml) ... error
      error: subprocess-exited-with-error
      
      × Preparing editable metadata (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> [35 lines of output]
          version_file=/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools/version.py
          running dist_info
          creating /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info
          writing /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/PKG-INFO
          writing dependency_links to /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/dependency_links.txt
          writing top-level names to /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/top_level.txt
          writing manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/SOURCES.txt'
          reading manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/SOURCES.txt'
          writing manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer.egg-info/SOURCES.txt'
          creating '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_optimizer-10.1.0.dist-info'
          running dist_info
          creating /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info
          writing /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/PKG-INFO
          writing dependency_links to /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/dependency_links.txt
          writing top-level names to /tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/top_level.txt
          writing manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/SOURCES.txt'
          reading manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/SOURCES.txt'
          writing manifest file '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils.egg-info/SOURCES.txt'
          creating '/tmp/pip-modern-metadata-aw5knkao/tidl_onnx_model_utils-10.1.0.dist-info'
          Traceback (most recent call last):
            File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
              main()
            File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
              json_out["return_val"] = hook(**hook_input["kwargs"])
            File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 209, in prepare_metadata_for_build_editable
              return hook(metadata_directory, config_settings)
            File "/tmp/pip-build-env-6pai0jum/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 478, in prepare_metadata_for_build_editable
              return self.prepare_metadata_for_build_wheel(
            File "/tmp/pip-build-env-6pai0jum/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 376, in prepare_metadata_for_build_wheel
              self._bubble_up_info_directory(metadata_directory, ".egg-info")
            File "/tmp/pip-build-env-6pai0jum/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 345, in _bubble_up_info_directory
              info_dir = self._find_info_directory(metadata_directory, suffix)
            File "/tmp/pip-build-env-6pai0jum/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 356, in _find_info_directory
              assert len(candidates) == 1, f"Multiple {suffix} directories found"
          AssertionError: Multiple .egg-info directories found
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    Traceback (most recent call last):
      File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools/./setup.py", line 139, in <module>
        main()
      File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-tidl-tools/osrt-model-tools/osrt_model_tools/onnx_tools/./setup.py", line 91, in main
        setup(
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/__init__.py", line 115, in setup
        return distutils.core.setup(**attrs)
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 186, in setup
        return run_commands(dist)
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 202, in run_commands
        dist.run_commands()
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 1002, in run_commands
        self.run_command(cmd)
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/dist.py", line 1102, in run_command
        super().run_command(command)
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 1021, in run_command
        cmd_obj.run()
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/site-packages/setuptools/command/develop.py", line 39, in run
        subprocess.check_call(cmd)
      File "/home/gunter/anaconda3/envs/edgeai-benchmark/lib/python3.10/subprocess.py", line 369, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['/home/gunter/anaconda3/envs/edgeai-benchmark/bin/python3', '-m', 'pip', 'install', '-e', '.', '--use-pep517']' returned non-zero exit status 1.
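The assertion indicates that setuptools found two .egg-info directories (tidl_onnx_model_optimizer and tidl_onnx_model_utils, per the log output) where it expects exactly one. One thing that may help, as a guess, is removing stale build metadata from the source tree before re-running setup.sh. A self-contained demo of that cleanup, using a scratch directory rather than the real onnx_tools path:

```shell
# Demo: create a scratch tree with two leftover *.egg-info dirs, list them,
# then remove them -- the same cleanup one might try in the real source tree.
workdir=$(mktemp -d)
mkdir -p "$workdir/tidl_onnx_model_optimizer.egg-info" \
         "$workdir/tidl_onnx_model_utils.egg-info"

echo "stale metadata before cleanup:"
find "$workdir" -maxdepth 1 -name '*.egg-info' -type d

# remove every leftover egg-info directory
find "$workdir" -maxdepth 1 -name '*.egg-info' -type d -exec rm -rf {} +

echo "remaining after cleanup:"
find "$workdir" -maxdepth 1 -name '*.egg-info' -type d
```

In the real tree, the equivalent would be running the find/remove against the onnx_tools source directory before invoking setup.sh again, so the editable build regenerates its metadata from scratch.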
    

Can you check the error above?

    Thanks!

    --Gunter

Unfortunately, the error is still there on the first run with the od-9206 rtmdet model:

    (edgeai-benchmark) gunter@Linux-005:~/ti-edgeai/edgeai-tensorlab/edgeai-benchmark$ ./run_benchmarks_pc.sh AM69A
    TARGET_SOC: AM69A
    TARGET_MACHINE: pc
    TIDL_TOOLS_PATH=/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/tools/tidl_tools_package/AM69A/tidl_tools
    LD_LIBRARY_PATH=/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/tools/tidl_tools_package/AM69A/tidl_tools:
    PYTHONPATH=:
    INFO: settings the correct symlinks in tvmdlr compiled artifacts
    ===================================================================
    argv: ['./scripts/benchmark_modelzoo.py', 'settings_import_on_pc.yaml', '--target_device', 'AM69A']
    settings: {'include_files': None, 'pipeline_type': 'accuracy', 'c7x_firmware_version': None, 'num_frames': 10, 'calibration_frames': 12, 'calibration_iterations': 12, 'configs_path': './configs', 'models_path': '../edgeai-modelzoo/models', 'modelartifacts_path': './work_dirs/modelartifacts/AM69A', 'modelpackage_path': './work_dirs/modelpackage/AM69A', 'datasets_path': './dependencies/datasets', 'target_device': 'AM69A', 'target_machine': 'pc', 'run_suffix': None, 'parallel_devices': None, 'parallel_processes': 12, 'tensor_bits': 8, 'runtime_options': {'advanced_options:quantization_scale_type': 4}, 'run_import': True, 'run_inference': True, 'run_incremental': True, 'detection_threshold': 0.3, 'detection_top_k': 200, 'detection_nms_threshold': None, 'detection_keep_top_k': None, 'save_output': False, 'num_output_frames': 50, 'model_selection': 'od-9206', 'model_shortlist': 100, 'model_exclusion': None, 'task_selection': None, 'runtime_selection': None, 'session_type_dict': {'onnx': 'onnxrt', 'tflite': 'tflitert', 'mxnet': 'tvmdlr'}, 'dataset_type_dict': None, 'dataset_selection': 'coco', 'dataset_loading': True, 'config_range': None, 'write_results': True, 'verbose': True, 'log_file': True, 'additional_models': True, 'experimental_models': False, 'rewrite_results': False, 'with_udp': True, 'flip_test': False, 'model_transformation_dict': None, 'report_perfsim': False, 'tidl_offload': True, 'input_optimization': None, 'run_dir_tree_depth': None, 'target_device_preset': True, 'calibration_iterations_factor': None, 'instance_timeout': None, 'overall_timeout': None, 'sort_pipeline_configs': True, 'check_errors': True, 'param_template_file': None, 'c7x_codegen': False, 'external_models_path': None, 'enable_logging': True, 'basic_keys': ['include_files', 'pipeline_type', 'c7x_firmware_version', 'num_frames', 'calibration_frames', 'calibration_iterations', 'configs_path', 'models_path', 'modelartifacts_path', 'modelpackage_path', 'datasets_path', 'target_device', 
'target_machine', 'run_suffix', 'parallel_devices', 'parallel_processes', 'tensor_bits', 'runtime_options', 'run_import', 'run_inference', 'run_incremental', 'detection_threshold', 'detection_top_k', 'detection_nms_threshold', 'detection_keep_top_k', 'save_output', 'num_output_frames', 'model_selection', 'model_shortlist', 'model_exclusion', 'task_selection', 'runtime_selection', 'session_type_dict', 'dataset_type_dict', 'dataset_selection', 'dataset_loading', 'config_range', 'write_results', 'verbose', 'log_file', 'additional_models', 'experimental_models', 'rewrite_results', 'with_udp', 'flip_test', 'model_transformation_dict', 'report_perfsim', 'tidl_offload', 'input_optimization', 'run_dir_tree_depth', 'target_device_preset', 'calibration_iterations_factor', 'instance_timeout', 'overall_timeout', 'sort_pipeline_configs', 'check_errors', 'param_template_file', 'c7x_codegen', 'external_models_path', 'enable_logging'], 'dataset_cache': {'imagenet': {'calibration_dataset': 'imagenet', 'input_dataset': 'imagenet', 'dataset_init': False}, 'coco': {'calibration_dataset': 'coco', 'input_dataset': 'coco', 'dataset_init': False}, 'widerface': {'calibration_dataset': 'widerface', 'input_dataset': 'widerface', 'dataset_init': False}, 'ade20k32': {'calibration_dataset': 'ade20k32', 'input_dataset': 'ade20k32', 'dataset_init': False}, 'ade20k': {'calibration_dataset': 'ade20k', 'input_dataset': 'ade20k', 'dataset_init': False}, 'voc2012': {'calibration_dataset': 'voc2012', 'input_dataset': 'voc2012', 'dataset_init': False}, 'cocoseg21': {'calibration_dataset': 'cocoseg21', 'input_dataset': 'cocoseg21', 'dataset_init': False}, 'ti-robokit_semseg_zed1hd': {'calibration_dataset': 'ti-robokit_semseg_zed1hd', 'input_dataset': 'ti-robokit_semseg_zed1hd', 'dataset_init': False}, 'ti-robokit_visloc_zed1hd': {'calibration_dataset': 'ti-robokit_visloc_zed1hd', 'input_dataset': 'ti-robokit_visloc_zed1hd', 'dataset_init': False}, 'cocokpts': {'calibration_dataset': 'cocokpts', 
'input_dataset': 'cocokpts', 'dataset_init': False}, 'nyudepthv2': {'calibration_dataset': 'nyudepthv2', 'input_dataset': 'nyudepthv2', 'dataset_init': False}, 'ycbv': {'calibration_dataset': 'ycbv', 'input_dataset': 'ycbv', 'dataset_init': False}, 'pandaset_frame': {'calibration_dataset': 'pandaset_frame', 'input_dataset': 'pandaset_frame', 'dataset_init': False}, 'pandaset_mv_image': {'calibration_dataset': 'pandaset_mv_image', 'input_dataset': 'pandaset_mv_image', 'dataset_init': False}}}

    INFO: model compilation in PC can use CUDA gpus (if it is available) - setup using setup_pc_gpu.sh
    /bin/sh: 1: nvidia-smi: not found
    INFO:20250724-181432: setting parallel_devices to the number of cuda gpus found - 0

    INFO:20250724-181432: model_shortlist has been set - it will cause only a subset of models to run:
    INFO:20250724-181432: model_shortlist - 100

    INFO: work_dir: ./work_dirs/modelartifacts/AM69A/8bits
    INFO: using model configs from Python module: ./configs

    INFO:20250724-181432: loading dataset - category:coco variant:coco

    INFO:20250724-181432: dataset exists - will reuse - ./dependencies/datasets/coco
    loading annotations into memory...
    Done (t=0.36s)
    creating index...
    index created!
    loading annotations into memory...
    Done (t=0.45s)
    creating index...
    index created!
    WARNING:20250724-181434: model_shortlist=100 - this will cause only a subset of models to be selected for run
    WARNING:20250724-181434: if the model that you wish is not being selected for run, then remove this model_shortlist -
    WARNING:20250724-181434: this model_shortlist could be being set in settings_base.yaml or passed inside run_benchmarks_pc.sh -

    INFO:20250724-181434: number of configs - 1
    TASKS TOTAL=1, NUM_RUNNING=1: 0%| | 0/1 [00:00<?, ?it/s, postfix={'RUNNING': ['od-9206:import'], 'COMPLETED': []}]
    ERROR:20250724-181435: model_id:od-9206 run_import:True run_inference:False - No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'
    Traceback (most recent call last):
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 291, in _run_pipeline
    result = cls._run_pipeline_impl(settings, pipeline_config, description)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 326, in _run_pipeline_impl
    result = accuracy_pipeline(description)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 76, in __call__
    param_result = self._run(description=description)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 109, in _run
    self._import_model(description)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 170, in _import_model
    is_ok = session.start_import()
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 47, in start_import
    BaseRTSession.start_import(self)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 158, in start_import
    self._prepare_model()
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 153, in _prepare_model
    self.get_model()
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 445, in get_model
    apply_input_optimization = self._optimize_model(model_file, is_new_file=is_new_file)
    File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 504, in _optimize_model
    from osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer.ops import get_optimizers
    ModuleNotFoundError: No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'

    ...

    Regards,

    --Gunter

  • Hi Gunter,

I got the same error, but it still worked. Please try the ONNX optimization and it should work for you too.

    Regards,

    Chris

  • Hi Chris,

The behavior is the same as before: after installing the optimizer as above, a *first* run of run_benchmarks_pc.sh still leads to the optimizer error

    ModuleNotFoundError: No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'

A subsequent *second* run of run_benchmarks_pc.sh runs to completion, but I am not 100% sure whether the run.log has issues.

    I will post the run.log again shortly for visibility.

    Regards,

    --Gunter