Hi Team,
I am using the ONNX model attached below (please rename the file to pfld.onnx). It is an open-source face landmarks model.
The TI SDK version I am using is 8.1.
When I try to import the model with the TIDL-RT import tool, I get errors. Please check the attached log file; the key error is:
In put of TIDL_InnerProductLayer layer needs to be Faltten. Please add Flatten layer to import this mdoels
paneesh@awsmblx404bs017:~/compute/middleware/ti-psdkra/tidl_j7/ti_dl/utils/tidlModelImport$ ./out/tidl_model_import.out /data/home/paneesh/compute/middleware/ti-psdkra/tidl_j7/ti_dl/test/testvecs/config/import/public/onnx/tidl_import_face_landmarks.txt
ONNX Model (Proto) File : ../../test/testvecs/models/public/onnx/facelandmarks/pfld.onnx
TIDL Network File : ../../test/testvecs/config/tidl_models/onnx/face_landmarks_net.bin
TIDL IO Info File : ../../test/testvecs/config/tidl_models/onnx/tidl_io_face_landmarks__
Current ONNX OpSet Version : 9
Could not find const or initializer of layer Reshape_87 !!!
Only float and INT64 tensor is suported
Could not find const or initializer of layer Reshape_90 !!!
Only float and INT64 tensor is suported
Running tidl_optimizeNet
Warning : Merging Pad layer with Average Pooling layer. This is expected to work but this flow is functionally not validated with ONNX model format.
Warning : Merging Pad layer with Average Pooling layer. This is expected to work but this flow is functionally not validated with ONNX model format.
printing Current net
0|TIDL_DataLayer | |input_1_original | 0| 0|
1|TIDL_BatchNormLayer |input_1_original |input_1 | 0| 1|
2|TIDL_ConvolutionLayer |input_1 |input.4 | 1| 2|
3|TIDL_ReLULayer |input.4 |onnx::Conv_264 | 2| 3|
4|TIDL_ConvolutionLayer |onnx::Conv_264 |input.12 | 3| 4|
5|TIDL_ReLULayer |input.12 |onnx::Conv_267 | 4| 5|
6|TIDL_ConvolutionLayer |onnx::Conv_267 |input.20 | 5| 6|
7|TIDL_ReLULayer |input.20 |onnx::Conv_270 | 6| 7|
8|TIDL_ConvolutionLayer |onnx::Conv_270 |input.28 | 7| 8|
9|TIDL_ReLULayer |input.28 |onnx::Conv_273 | 8| 9|
10|TIDL_ConvolutionLayer |onnx::Conv_273 |input.36 | 9| 10|
11|TIDL_ConvolutionLayer |input.36 |input.44 | 10| 11|
12|TIDL_ReLULayer |input.44 |onnx::Conv_278 | 11| 12|
13|TIDL_ConvolutionLayer |onnx::Conv_278 |input.52 | 12| 13|
14|TIDL_ReLULayer |input.52 |onnx::Conv_281 | 13| 14|
15|TIDL_ConvolutionLayer |onnx::Conv_281 |onnx::Add_431 | 14| 15|
16|TIDL_EltWiseLayer |input.36 |input.60 | 10| 16|
17|TIDL_ConvolutionLayer |input.60 |input.68 | 16| 17|
18|TIDL_ReLULayer |input.68 |onnx::Conv_287 | 17| 18|
19|TIDL_ConvolutionLayer |onnx::Conv_287 |input.76 | 18| 19|
20|TIDL_ReLULayer |input.76 |onnx::Conv_290 | 19| 20|
21|TIDL_ConvolutionLayer |onnx::Conv_290 |onnx::Add_440 | 20| 21|
22|TIDL_EltWiseLayer |input.60 |input.84 | 16| 22|
23|TIDL_ConvolutionLayer |input.84 |input.92 | 22| 23|
24|TIDL_ReLULayer |input.92 |onnx::Conv_296 | 23| 24|
25|TIDL_ConvolutionLayer |onnx::Conv_296 |input.100 | 24| 25|
26|TIDL_ReLULayer |input.100 |onnx::Conv_299 | 25| 26|
27|TIDL_ConvolutionLayer |onnx::Conv_299 |onnx::Add_449 | 26| 27|
28|TIDL_EltWiseLayer |input.84 |input.108 | 22| 28|
29|TIDL_ConvolutionLayer |input.108 |input.116 | 28| 29|
30|TIDL_ReLULayer |input.116 |onnx::Conv_305 | 29| 30|
31|TIDL_ConvolutionLayer |onnx::Conv_305 |input.124 | 30| 31|
32|TIDL_ReLULayer |input.124 |onnx::Conv_308 | 31| 32|
33|TIDL_ConvolutionLayer |onnx::Conv_308 |onnx::Add_458 | 32| 33|
34|TIDL_EltWiseLayer |input.108 |output_1 | 28| 34|
35|TIDL_ConvolutionLayer |output_1 |input.140 | 34| 35|
36|TIDL_ReLULayer |input.140 |onnx::Conv_314 | 35| 36|
37|TIDL_ConvolutionLayer |onnx::Conv_314 |input.148 | 36| 37|
38|TIDL_ReLULayer |input.148 |onnx::Conv_317 | 37| 38|
39|TIDL_ConvolutionLayer |onnx::Conv_317 |input.156 | 38| 39|
40|TIDL_ConvolutionLayer |input.156 |input.164 | 39| 40|
41|TIDL_ReLULayer |input.164 |onnx::Conv_322 | 40| 41|
42|TIDL_ConvolutionLayer |onnx::Conv_322 |input.172 | 41| 42|
43|TIDL_ReLULayer |input.172 |onnx::Conv_325 | 42| 43|
44|TIDL_ConvolutionLayer |onnx::Conv_325 |input.180 | 43| 44|
45|TIDL_ConvolutionLayer |input.180 |input.188 | 44| 45|
46|TIDL_ReLULayer |input.188 |onnx::Conv_330 | 45| 46|
47|TIDL_ConvolutionLayer |onnx::Conv_330 |input.196 | 46| 47|
48|TIDL_ReLULayer |input.196 |onnx::Conv_333 | 47| 48|
49|TIDL_ConvolutionLayer |onnx::Conv_333 |onnx::Add_485 | 48| 49|
50|TIDL_EltWiseLayer |input.180 |input.204 | 44| 50|
51|TIDL_ConvolutionLayer |input.204 |input.212 | 50| 51|
52|TIDL_ReLULayer |input.212 |onnx::Conv_339 | 51| 52|
53|TIDL_ConvolutionLayer |onnx::Conv_339 |input.220 | 52| 53|
54|TIDL_ReLULayer |input.220 |onnx::Conv_342 | 53| 54|
55|TIDL_ConvolutionLayer |onnx::Conv_342 |onnx::Add_494 | 54| 55|
56|TIDL_EltWiseLayer |input.204 |input.228 | 50| 56|
57|TIDL_ConvolutionLayer |input.228 |input.236 | 56| 57|
58|TIDL_ReLULayer |input.236 |onnx::Conv_348 | 57| 58|
59|TIDL_ConvolutionLayer |onnx::Conv_348 |input.244 | 58| 59|
60|TIDL_ReLULayer |input.244 |onnx::Conv_351 | 59| 60|
61|TIDL_ConvolutionLayer |onnx::Conv_351 |onnx::Add_503 | 60| 61|
62|TIDL_EltWiseLayer |input.228 |input.252 | 56| 62|
63|TIDL_ConvolutionLayer |input.252 |input.260 | 62| 63|
64|TIDL_ReLULayer |input.260 |onnx::Conv_357 | 63| 64|
65|TIDL_ConvolutionLayer |onnx::Conv_357 |input.268 | 64| 65|
66|TIDL_ReLULayer |input.268 |onnx::Conv_360 | 65| 66|
67|TIDL_ConvolutionLayer |onnx::Conv_360 |onnx::Add_512 | 66| 67|
68|TIDL_EltWiseLayer |input.252 |input.276 | 62| 68|
69|TIDL_ConvolutionLayer |input.276 |input.284 | 68| 69|
70|TIDL_ReLULayer |input.284 |onnx::Conv_366 | 69| 70|
71|TIDL_ConvolutionLayer |onnx::Conv_366 |input.292 | 70| 71|
72|TIDL_ReLULayer |input.292 |onnx::Conv_369 | 71| 72|
73|TIDL_ConvolutionLayer |onnx::Conv_369 |onnx::Add_521 | 72| 73|
74|TIDL_EltWiseLayer |input.276 |input.300 | 68| 74|
75|TIDL_ConvolutionLayer |input.300 |input.308 | 74| 75|
76|TIDL_ReLULayer |input.308 |onnx::Conv_375 | 75| 76|
77|TIDL_ConvolutionLayer |onnx::Conv_375 |input.316 | 76| 77|
78|TIDL_ReLULayer |input.316 |onnx::Conv_378 | 77| 78|
79|TIDL_ConvolutionLayer |onnx::Conv_378 |input.324 | 78| 79|
80|TIDL_ConvolutionLayer |input.324 |input.332 | 79| 80|
81|TIDL_ReLULayer |input.332 |onnx::Pad_391 | 80| 81|
82|TIDL_PoolingLayer |input.324 |onnx::Reshape_382 | 79| 82|
83|TIDL_ConvolutionLayer |onnx::Pad_391 |input.336 | 81| 83|
84|TIDL_ReshapeLayer |onnx::Reshape_382 |onnx::Concat_388 | 82| 84|
85|TIDL_PoolingLayer |onnx::Pad_391 |onnx::Reshape_393 | 81| 85|
86|TIDL_ReLULayer |input.336 |onnx::Reshape_401 | 83| 86|
87|TIDL_ReshapeLayer |onnx::Reshape_393 |onnx::Concat_399 | 85| 87|
88|TIDL_ReshapeLayer |onnx::Reshape_401 |onnx::Concat_407 | 86| 88|
89|TIDL_ConcatLayer |onnx::Concat_388 |onnx::Gemm_408 | 84| 89|
90|TIDL_InnerProductLayer |onnx::Gemm_408 |409 | 89| 90|
91|TIDL_DataLayer |409 |409 | 90| 0|
WARNING: Inner Product Layer Gemm_92's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
WARNING: Inner Product Layer Gemm_92's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
In put of TIDL_InnerProductLayer layer needs to be Faltten. Please add Flatten layer to import this mdoels
paneesh@awsmblx404bs017:~/compute/middleware/ti-psdkra/tidl_j7/ti_dl/utils/tidlModelImport$
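Based on the error above, my understanding is that a Flatten node has to be inserted in front of the final Gemm before importing. Below is a minimal, untested sketch of how I could do that with the onnx Python API; the tensor name onnx::Gemm_408 is taken from the import log above, so please correct me if this is not what the import tool expects:

import onnx
from onnx import helper

model = onnx.load("pfld.onnx")
graph = model.graph

# Find the Gemm node that consumes the concatenated features (tensor name taken from the log above).
gemm = next(n for n in graph.node if n.op_type == "Gemm" and n.input[0] == "onnx::Gemm_408")

# Create a Flatten node and rewire the Gemm to read its output instead.
flat_out = gemm.input[0] + "_flat"
flatten = helper.make_node("Flatten", inputs=[gemm.input[0]], outputs=[flat_out],
                           name="Flatten_before_" + gemm.name, axis=1)
gemm.input[0] = flat_out

# Keep topological order by inserting the Flatten node right before the Gemm.
idx = list(graph.node).index(gemm)
graph.node.insert(idx, flatten)

onnx.checker.check_model(model)
onnx.save(model, "pfld_flatten.onnx")

Is this the expected way to satisfy the Flatten requirement, or is there an import config option I should use instead?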
After this, I tried to compile the same ONNX model on the TI Edge AI Cloud platform, and I get the following error while creating the onnxruntime inference session.
---------------------------------------------------------------------------
RuntimeException                          Traceback (most recent call last)
<ipython-input-10-c8ebf5693570> in <module>
      1 so = rt.SessionOptions()
      2 EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
----> 3 sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
      4
      5 input_details = sess.get_inputs()

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options)
    281
    282         try:
--> 283             self._create_inference_session(providers, provider_options)
    284         except RuntimeError:
    285             if self._enable_fallback:

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options)
    313
    314         # initialize the C++ InferenceSession
--> 315         sess.initialize_session(providers, provider_options)
    316
    317         self._sess = sess

RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: basic_string::_M_create
After the above error, I tried adding some layers to the deny list based on the log:
'deny_list' : "MaxPool, Pad, Gemm"
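For context, this is roughly how the deny list is wired into the compile options in my notebook. The other option values and paths are taken from the default Edge AI Cloud example, so they are assumptions and may not match exactly:

import onnxruntime as rt

onnx_model_path = 'pfld.onnx'
output_dir = 'custom-artifacts/pfld'              # folder where the TIDL artifacts are written

compile_options = {
    'tidl_tools_path': '/path/to/tidl_tools',     # provided by the notebook environment
    'artifacts_folder': output_dir,
    'tensor_bits': 8,
    'accuracy_level': 1,
    'deny_list': "MaxPool, Pad, Gemm",            # force these ops onto the CPU execution provider
}

so = rt.SessionOptions()
EP_list = ['TIDLCompilationProvider', 'CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path, providers=EP_list,
                           provider_options=[compile_options, {}], sess_options=so)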
With this deny list I am able to generate the binary files, but inference fails. Please check the error message below.
2022-09-20 08:59:53.264647513 [E:onnxruntime:, sequential_executor.cc:339 Execute] Non-zero status code returned while running Gemm node. Name:'Gemm_94'
Status Message: /home/a0133185/ti/GIT_cloud_build_ta/cloud_build_ta/test/onnxruntime/onnxruntime/core/providers/cpu/math/gemm_helper.h:13
onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape&, bool, const onnxruntime::TensorShape&, bool, const onnxruntime::TensorShape&)
left.NumDimensions() == 2 || left.NumDimensions() == 1 was false.
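Since the runtime complains that the Gemm input is neither 1-D nor 2-D, I can inspect the rank of that tensor in the original model. A small sketch of the check I would run (the tensor name onnx::Gemm_408 comes from the import log above, and I am assuming shape inference can resolve the intermediate shapes):

import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("pfld.onnx"))
for vi in model.graph.value_info:
    if vi.name == "onnx::Gemm_408":
        dims = [d.dim_value for d in vi.type.tensor_type.shape.dim]
        print(vi.name, dims)   # a 4-D result would explain why the CPU Gemm rejects it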
Could you please help me resolve this issue and port the model to the TDA4?
Thanks and Regards,
Aneesh
