Other Parts Discussed in Thread: AM69A
Hi all,
I have recently updated to the latest branch of edgeai-benchmark, r11.0, since the latest RTMDet lite models are now available there. The focus here is on two models:
od-9206_onnxrt_coco_edgeai-mmdet_rtmdet_m_coco_lite_640x640_20250404_model_onnx
od-9208_onnxrt_coco_edgeai-mmdet_rtmdet_l_coco_orig_640x640_20250310_model_onnx
All dependencies were set up in a fresh Anaconda virtual environment by running the requirements scripts. Before moving to the RTMDet models, I ran some pipe-clean tests with od-8220 (YOLOX) and od-8850/od-8860 (YOLOv7), all of which seem to compile OK.
Overall, I still think there are issues with the RTMDet model compilation. One observation is that run_benchmarks_pc.sh runs into a dependency issue on the first run, but on the second run it goes through normally. The dependency issue looks like this:
INFO:20250723-222804: number of configs - 1
TASKS TOTAL=1, NUM_RUNNING=1: 0%| | 0/1 [00:44<?, ?it/s, postfix={'RUNNING': ['od-9208:import'], 'COMPLETED': []}]
ERROR:20250723-222848: model_id:od-9208 run_import:True run_inference:False - No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'
Traceback (most recent call last):
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 291, in _run_pipeline
result = cls._run_pipeline_impl(settings, pipeline_config, description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 326, in _run_pipeline_impl
result = accuracy_pipeline(description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 76, in __call__
param_result = self._run(description=description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 109, in _run
self._import_model(description)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 170, in _import_model
is_ok = session.start_import()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 47, in start_import
BaseRTSession.start_import(self)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 158, in start_import
self._prepare_model()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 153, in _prepare_model
self.get_model()
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 445, in get_model
apply_input_optimization = self._optimize_model(model_file, is_new_file=is_new_file)
File "/home/gunter/ti-edgeai/edgeai-tensorlab/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 504, in _optimize_model
from osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer.ops import get_optimizers
ModuleNotFoundError: No module named 'osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer'
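As a quick way to confirm whether the optimizer package from the traceback is actually importable in the active environment (independent of the benchmark scripts), a small diagnostic like this may help. This is my own sketch, not part of edgeai-benchmark; the module names are taken verbatim from the error message above:

```python
# Diagnostic sketch: check whether the TIDL ONNX optimizer module from the
# traceback can be imported in the current (conda) environment.
import importlib
import importlib.util

PKG = "osrt_model_tools"  # top-level package from the error message
SUBMODULE = "osrt_model_tools.onnx_tools.tidl_onnx_model_optimizer"

if importlib.util.find_spec(PKG) is None:
    # The package is not visible to this interpreter at all.
    print(f"{PKG} is not installed in this environment")
else:
    try:
        importlib.import_module(SUBMODULE)
        print(f"{SUBMODULE} imports OK")
    except ImportError as exc:
        # Package present, but the submodule path reported in the
        # traceback is missing or broken.
        print(f"{PKG} found, but submodule import failed: {exc}")
```

If the first run of the benchmark installs this package as a side effect, that would explain why the second run succeeds.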
Running run_benchmarks_pc.sh a second time gets past this error and it runs to completion, generating the model artifacts. However, the run.log file shows the following under "Optimization for subgraph Started":
==================== [Optimization for subgraph_0 Started] ====================
Invalid Layer Name 455
Invalid Layer Name 472
Invalid Layer Name 489
Invalid Layer Name 448
Invalid Layer Name 465
Invalid Layer Name 482
Invalid Layer Name 455
...
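The "Invalid Layer Name" messages reference bare numeric tensor names (455, 472, ...). To sanity-check whether those names actually exist in the ONNX graph being compiled, a snippet like the following could be used. This is a hypothetical check of my own, assuming the `onnx` Python package is installed; MODEL_PATH is a placeholder for the actual od-9208 model file:

```python
# Sketch: list the node output names of the ONNX graph and check whether the
# numeric names flagged as "Invalid Layer Name" in run.log are present.
import os

try:
    import onnx  # assumed available; install with `pip install onnx` if not
except ImportError:
    onnx = None
    print("onnx package not installed; skipping check")

# Names copied from the run.log excerpt above.
SUSPECT_NAMES = {"448", "455", "465", "472", "482", "489"}
MODEL_PATH = "model.onnx"  # placeholder: replace with the actual model path

if onnx is not None and os.path.exists(MODEL_PATH):
    model = onnx.load(MODEL_PATH)
    graph_names = {out for node in model.graph.node for out in node.output}
    missing = SUSPECT_NAMES - graph_names
    print("names missing from graph:", sorted(missing) if missing else "none")
elif onnx is not None:
    print(f"model file not found at {MODEL_PATH}; skipping check")
```

If the names are present in the original model but flagged during compilation, that might point to the model optimizer renaming tensors before TIDL sees them.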
I am attaching the run.log file for od-9208 (RTMDet large lite).
Could someone review it and let me know of any potential issues in the setup?
Please let me know if you need more information.
Thanks!
--Gunter