python3 onnxrt_ep.py -c
/home/quest/t-i/edgeai-tidl-tools/examples/osrt_python/ort
Available execution providers : ['CPUExecutionProvider']
Running 1 Models - ['yolov5s6_640_ti_lite_37p4_56p0']
Running_Model : yolov5s6_640_ti_lite_37p4_56p0
Running shape inference on model yolov5s6_640_ti_lite_37p4_56p0/yolov5s6_640_ti_lite_37p4_56p0.onnx
Traceback (most recent call last):
  File "onnxrt_ep.py", line 281, in <module>
    run_model(model, mIdx)
  File "onnxrt_ep.py", line 185, in run_model
    sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 300, in _create_inference_session
    available_providers)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 80, in check_and_normalize_provider_args
    set_provider_options(name, options)
  File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 54, in set_provider_options
    name, ", ".join(available_provider_names)))
ValueError: Specified provider 'TIDLCompilationProvider' is unavailable. Available providers: 'CPUExecutionProvider'
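The traceback shows that the installed onnxruntime build registers only CPUExecutionProvider, so requesting 'TIDLCompilationProvider' fails before any inference runs. A minimal pre-flight sketch of how one could guard the session creation is below; the helper name `choose_providers` is hypothetical (not part of onnxrt_ep.py), while the provider names are taken from the log above:

```python
# Hypothetical guard: keep only the requested TIDL providers that the
# installed onnxruntime build actually registers, always falling back
# to CPUExecutionProvider so session creation cannot raise ValueError.
def choose_providers(available,
                     preferred=("TIDLCompilationProvider",
                                "TIDLExecutionProvider")):
    """Return the preferred providers present in `available`, with
    CPUExecutionProvider appended as the final fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# In the real script this list would come from onnxruntime itself:
#   import onnxruntime as rt
#   EP_list = choose_providers(rt.get_available_providers())
# With a stock PyPI onnxruntime wheel only the CPU provider exists:
print(choose_providers(["CPUExecutionProvider"]))
# → ['CPUExecutionProvider']
```

Note that the TIDL providers are only present in TI's own onnxruntime build set up by the edgeai-tidl-tools install scripts; if a stock onnxruntime wheel is installed instead, or the environment the setup script exports was not sourced in the current shell, only CPUExecutionProvider will be reported, which matches the "Available execution providers" line at the top of the log.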