
TDA4VM: graph resolve error while compiling onnx model using TIDL

Part Number: TDA4VM

Hello,

Background:
I have trained a model using this repository: https://github.com/TexasInstruments/edgeai-yolov5

No modifications were made to the network; only custom data was used to train for a few epochs.

The pretrained network at https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/yolov5s6_640_ti_lite was used as the starting point.

After exporting to ONNX and trying to compile on the TDA4VM, I get the following error:

RuntimeExceptionTraceback (most recent call last)
<ipython-input-4-bb2e73c005ce> in <module>
     30 so = rt.SessionOptions()
     31 EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
---> 32 sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
     33 
     34 input_details = sess.get_inputs()

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options)
    281 
    282         try:
--> 283             self._create_inference_session(providers, provider_options)
    284         except RuntimeError:
    285             if self._enable_fallback:

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options)
    313 
    314         # initialize the C++ InferenceSession
--> 315         sess.initialize_session(providers, provider_options)
    316 
    317         self._sess = sess

RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /home/a0133185/ti/GIT_C7x_MMA_TIDL/c7x-mma-tidl/ti_dl/release/build_cloud/test/onnxruntime/onnxruntime/core/providers/tidl/tidl_execution_provider.cc:170 virtual std::vector<std::unique_ptr<onnxruntime::ComputeCapability> > onnxruntime::TidlExecutionProvider::GetCapability(const onnxruntime::GraphViewer&, const std::vector<const onnxruntime::KernelRegistry*>&) const graph_build.Resolve().IsOK() was false. 

The compilation options used are as follows:

import os
import onnxruntime as rt

compile_options = {
    'tidl_tools_path': os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder': output_dir,
    'tensor_bits': 16,
    'accuracy_level': 0,
    'advanced_options:calibration_frames': len(calib_images),
    'advanced_options:calibration_iterations': 3  # used only if accuracy_level = 1
}
so = rt.SessionOptions()
EP_list = ['TIDLCompilationProvider', 'CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path, providers=EP_list,
                           provider_options=[compile_options, {}], sess_options=so)

I am not sure what is causing this error. The traceback only says the failure happens during session initialization (perhaps a memory issue?).

Can you please take a look?
Thanks in advance.