AM69A: Parsing error: false depthwise convolution layer

Part Number: AM69A

Tool/software:

I have created a simple model with a single convolution layer, one input channel, and one output channel:

import onnx
import onnx.helper as helper
import onnx.numpy_helper as numpy_helper
import numpy as np
 
# Create a tensor for the weights of the convolutional layer
weights = np.random.randn(1, 1, 9, 1).astype(np.float32)
weights_tensor = numpy_helper.from_array(weights, name='conv1_weight')

# Create the input tensor value info
input_tensor = helper.make_tensor_value_info('input', onnx.TensorProto.FLOAT, [1, 1, 5, 5])

# Create the output tensor value info
output_tensor = helper.make_tensor_value_info('output', onnx.TensorProto.FLOAT, [1, 1, 5, 5])

# Create the node (layer)
conv_node = helper.make_node(
    'Conv',
    inputs=['input', 'conv1_weight'],
    outputs=['output'],
    kernel_shape=[9, 1],
    pads=[4, 0, 4, 0],
    strides=[1, 1]
)

# Create the graph
graph = helper.make_graph(
    [conv_node],
    'conv_graph',
    [input_tensor],
    [output_tensor],
    [weights_tensor]
)
 
# Create the model
model = helper.make_model(graph, producer_name='onnx-example')
 
# Save the model to a file
onnx.save(model, 'single_conv_layer.onnx')
 
print("ONNX model for a single convolutional layer has been created and saved as 'single_conv_layer.onnx'")

During model compilation I received the following message from the parser, even though there is no depthwise convolution layer in the model:

[TIDL Import]  UNSUPPORTED: Allowlisting : Layer name - output : Depthwise convolution layer with Kernel 9x1 and Stride 1x1 is not supported -- [tidlAllowlistingConstraints/tidl_constraint.cpp, 85]
[TIDL Import] [PARSER] UNSUPPORTED: Layer is not supported by TIDL --- layer type - Conv, Node name -  -- [tidl_onnxRtImport_core.cpp, 519]

Thus, the parser falsely reports a depthwise convolution layer: the number of groups equals 1 (the ONNX default, since the group attribute is not set), so all input channels are convolved with all output channels (see the check after the compilation script below). Here is the Python code used for model compilation:

import numpy as np
import onnxruntime as rt

##########################################
###          Script parameters         ###
##########################################

model_path = 'single_conv_layer.onnx'
EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
options = {}
options['artifacts_folder'] = './model-artifacts-dir/'
options['tidl_tools_path'] = '/home/root/tidl_tools'
options['debug_level'] = 1

##########################################


so = rt.SessionOptions()

# Load the ONNX model
session = rt.InferenceSession(model_path, providers=EP_list, provider_options=[options, {}], sess_options=so)

# Create a random input tensor with the same shape as the input tensor defined in the model
input_data = np.random.rand(1, 1, 5, 5).astype(np.float32)

# Run the model
outputs = session.run(None, {'input': input_data})

# Print the output
print("Model output:", outputs[0])

You can find the single_conv_layer.onnx file in the attached archive:

 single_conv_layer.zip

Below is the complete console log for model compilation with debug_level=3:

tidl_tools_path                                 = /home/root/tidl_tools
artifacts_folder                                = ./model-artifacts-dir/
tidl_tensor_bits                                = 8
debug_level                                     = 3
num_tidl_subgraphs                              = 16
tidl_denylist                                   =
tidl_denylist_layer_name                        =
tidl_denylist_layer_type                        =
tidl_allowlist_layer_name                       =
model_type                                      =
tidl_calibration_accuracy_level                 = 7
tidl_calibration_options:num_frames_calibration = 20
tidl_calibration_options:bias_calibration_iterations = 50
mixed_precision_factor = -1.000000
model_group_id = 0
power_of_2_quantization                         = 2
ONNX QDQ Enabled                                = 0
enable_high_resolution_optimization             = 0
pre_batchnorm_fold                              = 1
add_data_convert_ops                            = 0
output_feature_16bit_names_list                 =
m_params_16bit_names_list                       =
m_single_core_layers_names_list                 =
Inference mode                                  = 0
Number of cores                                 = 1
reserved_compile_constraints_flag               = 1601
partial_init_during_compile                     = 0
ti_internal_reserved_1                          =

========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:
1. Parsing
2. Graph Optimization
3. Quantization & Calibration
4. Memory Planning

============================== [Version Summary] ==============================

-------------------------------------------------------------------------------
|          TIDL Tools Version          |              10_00_04_00             |
-------------------------------------------------------------------------------
|         C7x Firmware Version         |              10_00_02_00             |
-------------------------------------------------------------------------------
|            Runtime Version           |            1.14.0+10000005           |
-------------------------------------------------------------------------------
|          Model Opset Version         |                  18                  |
-------------------------------------------------------------------------------

NOTE: The runtime version here specifies ONNXRT_VERSION+TIDL_VERSION
Ex: 1.14.0+1000XXXX -> ONNXRT 1.14.0 and a TIDL_VERSION 10.00.XX.XX

============================== [Parsing Started] ==============================

[TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options
[TIDL Import]  UNSUPPORTED: Allowlisting : Layer name - output : Depthwise convolution layer with Kernel 9x1 and Stride 1x1 is not supported -- [tidlAllowlistingConstraints/tidl_constraint.cpp, 85]
[TIDL Import] [PARSER] UNSUPPORTED: Layer is not supported by TIDL --- layer type - Conv, Node name -  -- [tidl_onnxRtImport_core.cpp, 519]

------------------------- Subgraph Information Summary -------------------------
-------------------------------------------------------------------------------
|          Core           |      No. of Nodes       |   Number of Subgraphs   |
-------------------------------------------------------------------------------
| C7x                     |                       0 |                       0 |
| CPU                     |                       1 |                       x |
-------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------
| Node | Node Name |                                   Reason                                    |
--------------------------------------------------------------------------------------------------
| Conv | output    | Depthwise convolution layer with Kernel 9x1 and Stride 1x1 is not supported |
--------------------------------------------------------------------------------------------------
Running Runtimes GraphViz - /home/root/tidl_tools/tidl_graphVisualiser_runtimes.out ./model-artifacts-dir//allowedNode.txt ./model-artifacts-dir//tempDir/graphvizInfo.txt ./model-artifacts-dir//tempDir/runtimes_visualization.svg
============================= [Parsing Completed] =============================

Model output: [[[[ 0.24346259  0.7302017  -0.3997852  -0.09163873  0.1756232 ]
   [ 0.5447106   0.14750001  0.44192222  0.50972265  0.668015  ]
   [ 0.25353688  1.1781881   0.46983877  0.47023544  0.4859341 ]
   [-0.5246689  -0.97710145 -0.54835     0.01153768 -0.42153066]
   [-0.13848214 -0.18936723 -0.2901531  -0.37395775  0.07250781]]]]

I faced the same error with the TFLite runtime. Here is the TensorFlow model code:

import tensorflow as tf

# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 5, 1)),
    tf.keras.layers.Conv2D(
        filters=1, 
        kernel_size=(9, 1), 
        padding='same'
    )
])

# Print the model summary
model.summary()

# Convert the model to TFLite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TFLite model
with open('single_conv_layer.tflite', 'wb') as f:
    f.write(tflite_model)
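
As with the ONNX model, the converted TFLite model contains a regular CONV_2D operator rather than DEPTHWISE_CONV_2D. Assuming a full TensorFlow installation (2.9 or newer, where the experimental analyzer API is available), this can be verified with:

import tensorflow as tf

# Print the operators contained in the converted model; the single layer
# shows up as CONV_2D, not DEPTHWISE_CONV_2D
tf.lite.experimental.Analyzer.analyze(model_path='single_conv_layer.tflite')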

Here is the Python code used for model compilation:

import tflite_runtime.interpreter as tf
import numpy as np

##########################################
###          Script parameters         ###
##########################################

tflite_model_path = 'single_conv_layer.tflite'
options = {}
options['artifacts_folder'] = './model-artifacts-dir/'
options['tidl_tools_path'] = '/home/root/tidl_tools'
options['debug_level'] = 3

##########################################

# Load the TFLite model
# interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter = tf.Interpreter(model_path=tflite_model_path, experimental_delegates=[tf.load_delegate('tidl_model_import_tflite.so', options)])
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare a sample input (5x5 matrix with a single channel)
input_data = np.random.rand(1, 5, 5, 1).astype(np.float32)

# Set the tensor to point to the input data to be inferred
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run the inference
interpreter.invoke()

# Get the output tensor
output_data = interpreter.get_tensor(output_details[0]['index'])

print("Input:")
print(input_data)
print("Output:")
print(output_data)

You can find the single_conv_layer.tflite file in the attached archive:

single_conv_layer_tflite.zip

Below is the complete console log for model compilation with debug_level=3:

tidl_tools_path                                 = /home/root/tidl_tools
artifacts_folder                                = ./model-artifacts-dir/
tidl_tensor_bits                                = 8
debug_level                                     = 3
num_tidl_subgraphs                              = 16
tidl_denylist                                   =
tidl_denylist_layer_name                        =
tidl_denylist_layer_type                        =
tidl_allowlist_layer_name                       =
model_type                                      =
tidl_calibration_accuracy_level                 = 7
tidl_calibration_options:num_frames_calibration = 20
tidl_calibration_options:bias_calibration_iterations = 50
mixed_precision_factor = -1.000000
model_group_id = 0
power_of_2_quantization                         = 2
ONNX QDQ Enabled                                = 0
enable_high_resolution_optimization             = 0
pre_batchnorm_fold                              = 1
add_data_convert_ops                            = 0
output_feature_16bit_names_list                 =
m_params_16bit_names_list                       =
m_single_core_layers_names_list                 =
Inference mode                                  = 0
Number of cores                                 = 1
reserved_compile_constraints_flag               = 1601
partial_init_during_compile                     = 0
ti_internal_reserved_1                          =

========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:
1. Parsing
2. Graph Optimization
3. Quantization & Calibration
4. Memory Planning

============================== [Version Summary] ==============================

-------------------------------------------------------------------------------
|          TIDL Tools Version          |              10_00_04_00             |
-------------------------------------------------------------------------------
|         C7x Firmware Version         |              10_00_02_00             |
-------------------------------------------------------------------------------

============================== [Parsing Started] ==============================

[TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options
[TIDL Import]  UNSUPPORTED: Allowlisting : Layer name -  : Depthwise convolution layer with Kernel 9x1 and Stride 1x1 is not supported -- [tidlAllowlistingConstraints/tidl_constraint.cpp, 85]
[TIDL Import] [PARSER] UNSUPPORTED: Unsupported (TIDL check) TIDL layer type --- 1 Tflite layer type --- 3 layer output name--- StatefulPartitionedCall_1:0  -- [tidl_tfLiteRtImport_core.cpp, 3085]

Total Nodes = 1
-------------------------------------------------------------------------------
|          Core           |      No. of Nodes       |   Number of Subgraphs   |
-------------------------------------------------------------------------------
| C7x                     |                       0 |                       1 |
| CPU                     |                       1 |                       x |
-------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------
|  Node   |          Node Name          |                                   Reason                                    |
-----------------------------------------------------------------------------------------------------------------------
| CONV_2D | StatefulPartitionedCall_1:0 | Depthwise convolution layer with Kernel 9x1 and Stride 1x1 is not supported |
-----------------------------------------------------------------------------------------------------------------------
============================= [Parsing Completed] =============================

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Input:
[[[[0.23353179]
   [0.45543978]
   [0.20377085]
   [0.92193395]
   [0.9892934 ]]

  [[0.51404977]
   [0.11027818]
   [0.26475325]
   [0.7566942 ]
   [0.23980846]]

  [[0.9398893 ]
   [0.5406374 ]
   [0.8115559 ]
   [0.85103273]
   [0.6420306 ]]

  [[0.18294057]
   [0.55361813]
   [0.8881557 ]
   [0.46144766]
   [0.53694516]]

  [[0.23608164]
   [0.50198156]
   [0.20152731]
   [0.7523323 ]
   [0.8528347 ]]]]
Output:
[[[[-0.06449867]
   [-0.08553893]
   [-0.15111698]
   [ 0.18098627]
   [ 0.06116613]]

  [[ 0.3414664 ]
   [-0.2084126 ]
   [ 0.01171003]
   [-0.08144074]
   [-0.43445703]]

  [[ 0.16004087]
   [ 0.33077648]
   [ 0.49060634]
   [ 0.1142568 ]
   [ 0.3345111 ]]

  [[-0.24745493]
   [ 0.11357499]
   [ 0.04526915]
   [ 0.1404091 ]
   [ 0.1878376 ]]

  [[ 0.19007032]
   [-0.03635289]
   [-0.24612081]
   [ 0.11062435]
   [ 0.05937098]]]]