Hi all,
I want to import the YOLOv3 model from the ONNX Model Zoo for performance evaluation, using the Edge AI Cloud. (https://dev.ti.com/edgeaisession/index.html?welcome)
I'm using the pre-trained YOLOv3 model (yolov3-10.onnx) available at:
https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov3
Since the Edge AI Cloud service does not provide an example for YOLOv3 object detection in the "Custom" menu,
I'm trying to write code to import 'yolov3-10.onnx' using ONNX Runtime, referring to the notebook example (custom-model-onnx.ipynb).
Below is my code snippet for compiling yolov3-10.onnx:
custom_det_onnx.py: --------------------------------------------------------------------------------
import os
import cv2
import numpy as np
import ipywidgets as widgets
from scripts.utils import get_eval_configs
def preprocess(image_path, size, mean, scale, layout, reverse_channels):
    # Step 1: read the image (BGR, HWC)
    img = cv2.imread(image_path)
    # Step 2: BGR -> RGB
    img = img[:, :, ::-1]
    # Step 3: resize to the network input size (size = [height, width])
    img = cv2.resize(img, (size[1], size[0]), interpolation=cv2.INTER_CUBIC)
    # Step 4: per-channel mean subtraction and scaling
    img = img.astype('float32')
    for m, s, ch in zip(mean, scale, range(img.shape[2])):
        img[:, :, ch] = (img[:, :, ch] - m) * s
    # Step 5: optionally flip the channel order back (RGB -> BGR)
    if reverse_channels:
        img = img[:, :, ::-1]
    # Step 6: add batch dimension; transpose HWC -> CHW for NCHW layout
    if layout == 'NCHW':
        img = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0)
    else:
        img = np.expand_dims(img, axis=0)
    return img
## Create the model using the stored artifacts
import onnxruntime as rt
import tqdm
calib_images = [
'sample-images/elephant.bmp',
'sample-images/bus.bmp',
'sample-images/bicycle.bmp',
'sample-images/zebra.bmp',
]
output_dir = 'custom-artifacts/onnx/yolov3_darknet'
onnx_model_path = 'custom/yolov3.onnx'
compile_options = {
    'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder' : output_dir,
    'tensor_bits' : 8,
    'accuracy_level' : 1,
    'advanced_options:calibration_frames' : len(calib_images),
    'advanced_options:calibration_iterations' : 3  # used if accuracy_level = 1
}
size = [416, 416]
mean = [0, 0, 0]  # original example: mean=[123.675, 116.28, 103.53]
scale = [0.017125, 0.017507, 0.017429]
layout = 'NCHW'
reverse_channels = False
os.makedirs(output_dir, exist_ok=True)
for root, dirs, files in os.walk(output_dir, topdown=False):
    [os.remove(os.path.join(root, f)) for f in files]
    [os.rmdir(os.path.join(root, d)) for d in dirs]
so = rt.SessionOptions()
EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path, providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
input_details = sess.get_inputs()
for num in tqdm.trange(len(calib_images)):
    output = list(sess.run(None, {input_details[0].name : preprocess(calib_images[num], size, mean, scale, layout, reverse_channels)}))[0]
--------------------------------------------------------------------------------
I ran this code (custom_det_onnx.py) in the Jupyter terminal, but I got the following error:
# python3 custom_det_onnx.py
/home/root/custom/yolov3-10.onnx: 100%|#########################################| 236M/236M [02:58<00:00, 1.39MB/s]
0.0s: VX_ZONE_INIT:Enabled
0.25s: VX_ZONE_ERROR:Enabled
0.28s: VX_ZONE_WARNING:Enabled
Preliminary subgraphs created = 1
Final number of subgraphs created are : 1, - Offloaded Nodes - 3, Total Nodes - 3
Preliminary subgraphs created = 1
Final number of subgraphs created are : 1, - Offloaded Nodes - 3, Total Nodes - 3
Traceback (most recent call last):
  File "custom_det_onnx.py", line 85, in <module>
    sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
  File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options)
  File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 315, in _create_inference_session
    sess.initialize_session(providers, provider_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to add kernel for TIDL_0 com.microsoft TIDLExecutionProvider: Conflicting with a registered kernel with op versions.
#
It seems that the Edge AI Cloud does not support YOLOv3 from the ONNX Model Zoo (but I expected this to work, as in https://e2e.ti.com/support/processors/f/processors-forum/938780/compiler-processor-sdk-dra8x-tda4x-tidl-to-import-yolov3/3477388?tisearch=e2e-sitesearch&keymatch=yolov3%25252520onnx#3477388)
So... my questions are:
1) Is my code snippet correct? (I'm not sure, because I'm a real beginner in TIDL...)
2) Is this just a problem with the Edge AI Cloud (or the SDK on the evaluation boards connected to the cloud)? In other words, will the newest SDK not have this problem?
3) Could you provide example code or a pre-compiled model for YOLOv3, similar to the models provided in the Edge AI Cloud?
Any help would be appreciated.
Thanks in advance.
Jb Yim.