./run_python_examples.sh
X64 Architecture
/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.26.16) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
Available execution providers : ['CPUExecutionProvider']
Running 1 Models - ['yolov5m6_640_ti_lite_44p1_62p9']
Running_Model : yolov5m6_640_ti_lite_44p1_62p9
Running shape inference on model ../../../model/yolov5m6_640_ti_lite_44p1_62p9.onnx
Traceback (most recent call last):
  File "onnxrt_ep.py", line 281, in <module>
    run_model(model, mIdx)
  File "onnxrt_ep.py", line 185, in run_model
    sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 300, in _create_inference_session
    available_providers)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 80, in check_and_normalize_provider_args
    set_provider_options(name, options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 54, in set_provider_options
    name, ", ".join(available_provider_names)))
ValueError: Specified provider 'TIDLCompilationProvider' is unavailable. Available providers: 'CPUExecutionProvider'
/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.26.16) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
Available execution providers : ['CPUExecutionProvider']
Running 1 Models - ['yolov5m6_640_ti_lite_44p1_62p9']
Running_Model : yolov5m6_640_ti_lite_44p1_62p9
Traceback (most recent call last):
  File "onnxrt_ep.py", line 281, in <module>
    run_model(model, mIdx)
  File "onnxrt_ep.py", line 188, in run_model
    sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 300, in _create_inference_session
    available_providers)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 80, in check_and_normalize_provider_args
    set_provider_options(name, options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 54, in set_provider_options
    name, ", ".join(available_provider_names)))
ValueError: Specified provider 'TIDLExecutionProvider' is unavailable. Available providers: 'CPUExecutionProvider'
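(For reference, the provider list the script reports can be reproduced directly with onnxruntime's standard API; this is a minimal check, nothing TI-specific:)

    import onnxruntime as rt

    # Lists the execution providers compiled into the installed onnxruntime
    # wheel. TIDLCompilationProvider / TIDLExecutionProvider are only present
    # in TI's TIDL-enabled onnxruntime build, not in the stock PyPI wheel.
    print(rt.get_available_providers())
    # On this setup it prints ['CPUExecutionProvider'], matching the log above.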
Hi, I manually set the model path in model_config.py, but I am still getting these errors. How can I resolve this?
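For completeness, here is a sketch of how my edited entry in model_config.py is assumed to look. Only the model name and the 'model_path' value are taken from the log above; the dict name and any other keys are placeholders, not copied from the actual file:

    # Hypothetical sketch of the edited entry -- only the model name and
    # 'model_path' are confirmed by the traceback; the rest is a placeholder.
    models_configs = {
        'yolov5m6_640_ti_lite_44p1_62p9': {
            'model_path': '../../../model/yolov5m6_640_ti_lite_44p1_62p9.onnx',
            # ... remaining keys (mean, scale, model_type, ...) left as shipped ...
        },
    }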