This thread has been locked.


AM69A: Not able to run custom ONNX model for object detection on Edge AI Studio

Part Number: AM69A

Hi,

I have been trying to run a custom ONNX model for object detection on Edge AI Studio. These are the steps I followed:

Model Analyzer -> selected AM69A - 32 TOPS -> selected custom model, ONNX runtime

Then I downloaded the pre-trained model from this link https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/mask-rcnn (Mask R-CNN R-50-FPN) and tried to run it.

Each time I try to compile the model, the kernel dies, and in some cases it throws a bad_alloc error. This happens only when I use a custom ONNX model. So far I have not been able to compile and run a single custom ONNX model. I have 3 models:

Mask R-CNN, YOLACT (ResNet-50) and YOLACT (ResNet-18).

I even tried this simple ONNX snippet to check the model:

import onnx
import os


# Preprocessing: load the ONNX model
#model_path = os.path.join("resources", "single_relu.onnx")
onnx_model = onnx.load('path to uploaded model')

print("The model is:\n{}".format(onnx_model))

# Check the model
try:
    onnx.checker.check_model(onnx_model)
except onnx.checker.ValidationError as e:
    print("The model is invalid: %s" % e)
else:
    print("The model is valid!")

Here also the kernel keeps getting disconnected and dying. I tried one more ONNX-based custom model but hit the same issue. I have to log out and log back in to make it work again. Restarting the EVM or the kernel does not fix this issue.

Please let me know why this is happening.

Thanks

  • Hi Akhilesh, 

    I tried to compile "mask_rcnn_R_50_FPN_1" and saw the error message "Could not find const or initializer of layer 2170 !!!" in notebooks/logs (log attached for your reference).

    I am not familiar with this model, but maybe this could give you a hint.

    One thing we recommend is to run models on ARM only first, to confirm they run OK. I tried that, using the vseg-onnx notebook as a baseline since GitHub describes this as a segmentation model (either notebook would do, as only the post-processing and visualization change). It didn't work for me either; the notebook I used is attached. Could you try to run it on ARM only, to rule out any possible mistake on my side? Let me know your results.

    tidl_tools_path                                 = /home/root/notebooks/tidl_tools 
    artifacts_folder                                = custom-artifacts/onnx/mask_rcnn_R_50_FPN_1x.onnx 
    tidl_tensor_bits                                = 8 
    debug_level                                     = 1 
    num_tidl_subgraphs                              = 16 
    tidl_denylist                                   = 
    tidl_denylist_layer_name                        = 
    tidl_denylist_layer_type                         = 
    tidl_allowlist_layer_name                        = 
    model_type                                      =  
    tidl_calibration_accuracy_level                 = 7 
    tidl_calibration_options:num_frames_calibration = 4 
    tidl_calibration_options:bias_calibration_iterations = 3 
    mixed_precision_factor = -1.000000 
    model_group_id = 0 
    power_of_2_quantization                         = 2 
    enable_high_resolution_optimization             = 0 
    pre_batchnorm_fold                              = 1 
    add_data_convert_ops                          = 0 
    output_feature_16bit_names_list                 =  
    m_params_16bit_names_list                       =  
    reserved_compile_constraints_flag               = 1601 
    ti_internal_reserved_1                          = 
    
    WARNING : 'meta_layers_names_list' is not provided - running OD post processing in ARM mode 
     
    Number of OD backbone nodes = 0 
    Size of odBackboneNodeIds = 0 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2170 !!!
    Only float and INT64 tensor is supported 
    Could not find const or initializer of layer 2178 !!!
    Only float and INT64 tensor is supported 
    Layer 0 -- layer name -- 0 
     Input dims size = 3     dims --- 3   0   0   
    Layer 0 --- op type -  Unsqueeze,   Number of input dims 3  !=  4 .. not supported by TIDL 
    Layer 1 -- layer name -- 2 
     Input dims size = 4     dims --- 1   3   0   0   
    Supported TIDL layer type ---            Conv -- 2 
    Layer 2 -- layer name -- 7 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 7 
    Layer 3 -- layer name -- 8 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---         MaxPool -- 8 
    Layer 4 -- layer name -- 30 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 30 
    Layer 5 -- layer name -- 10 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 10 
    Layer 6 -- layer name -- 15 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 15 
    Layer 7 -- layer name -- 17 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 17 
    Layer 8 -- layer name -- 22 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 22 
    Layer 9 -- layer name -- 24 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 24 
    Layer 10 -- layer name -- 35 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---             Add -- 35 
    Layer 11 -- layer name -- 36 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 36 
    Layer 12 -- layer name -- 38 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 38 
    Layer 13 -- layer name -- 43 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 43 
    Layer 14 -- layer name -- 45 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 45 
    Layer 15 -- layer name -- 50 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 50 
    Layer 16 -- layer name -- 52 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 52 
    Layer 17 -- layer name -- 57 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---             Add -- 57 
    Layer 18 -- layer name -- 58 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 58 
    Layer 19 -- layer name -- 60 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 60 
    Layer 20 -- layer name -- 65 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 65 
    Layer 21 -- layer name -- 67 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 67 
    Layer 22 -- layer name -- 72 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Relu -- 72 
    Layer 23 -- layer name -- 74 
     Input dims size = 4     dims --- 1   64   0   0   
    Supported TIDL layer type ---            Conv -- 74 
    Layer 24 -- layer name -- 79 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---             Add -- 79 
    Layer 25 -- layer name -- 80 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 80 
    Layer 26 -- layer name -- 102 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 102 
    Layer 27 -- layer name -- 82 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 82 
    Layer 28 -- layer name -- 87 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 87 
    Layer 29 -- layer name -- 89 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 89 
    Layer 30 -- layer name -- 94 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 94 
    Layer 31 -- layer name -- 96 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 96 
    Layer 32 -- layer name -- 107 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---             Add -- 107 
    Layer 33 -- layer name -- 108 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Relu -- 108 
    Layer 34 -- layer name -- 110 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Conv -- 110 
    Layer 35 -- layer name -- 115 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 115 
    Layer 36 -- layer name -- 117 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 117 
    Layer 37 -- layer name -- 122 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 122 
    Layer 38 -- layer name -- 124 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 124 
    Layer 39 -- layer name -- 129 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---             Add -- 129 
    Layer 40 -- layer name -- 130 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Relu -- 130 
    Layer 41 -- layer name -- 132 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Conv -- 132 
    Layer 42 -- layer name -- 137 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 137 
    Layer 43 -- layer name -- 139 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 139 
    Layer 44 -- layer name -- 144 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 144 
    Layer 45 -- layer name -- 146 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 146 
    Layer 46 -- layer name -- 151 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---             Add -- 151 
    Layer 47 -- layer name -- 152 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Relu -- 152 
    Layer 48 -- layer name -- 154 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Conv -- 154 
    Layer 49 -- layer name -- 159 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 159 
    Layer 50 -- layer name -- 161 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 161 
    Layer 51 -- layer name -- 166 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Relu -- 166 
    Layer 52 -- layer name -- 168 
     Input dims size = 4     dims --- 1   128   0   0   
    Supported TIDL layer type ---            Conv -- 168 
    Layer 53 -- layer name -- 173 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---             Add -- 173 
    Layer 54 -- layer name -- 174 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Relu -- 174 
    Layer 55 -- layer name -- 196 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Conv -- 196 
    Layer 56 -- layer name -- 176 
     Input dims size = 4     dims --- 1   512   0   0   
    Supported TIDL layer type ---            Conv -- 176 
    Layer 57 -- layer name -- 181 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 181 
    Layer 58 -- layer name -- 183 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 183 
    Layer 59 -- layer name -- 188 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 188 
    Layer 60 -- layer name -- 190 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 190 
    Layer 61 -- layer name -- 201 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---             Add -- 201 
    Layer 62 -- layer name -- 202 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Relu -- 202 
    Layer 63 -- layer name -- 204 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Conv -- 204 
    Layer 64 -- layer name -- 209 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 209 
    Layer 65 -- layer name -- 211 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 211 
    Layer 66 -- layer name -- 216 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 216 
    Layer 67 -- layer name -- 218 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 218 
    Layer 68 -- layer name -- 223 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---             Add -- 223 
    Layer 69 -- layer name -- 224 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Relu -- 224 
    Layer 70 -- layer name -- 226 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Conv -- 226 
    Layer 71 -- layer name -- 231 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 231 
    Layer 72 -- layer name -- 233 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 233 
    Layer 73 -- layer name -- 238 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 238 
    Layer 74 -- layer name -- 240 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 240 
    Layer 75 -- layer name -- 245 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---             Add -- 245 
    Layer 76 -- layer name -- 246 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Relu -- 246 
    Layer 77 -- layer name -- 248 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Conv -- 248 
    Layer 78 -- layer name -- 253 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 253 
    Layer 79 -- layer name -- 255 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 255 
    Layer 80 -- layer name -- 260 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 260 
    Layer 81 -- layer name -- 262 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 262 
    Layer 82 -- layer name -- 267 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---             Add -- 267 
    Layer 83 -- layer name -- 268 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Relu -- 268 
    Layer 84 -- layer name -- 270 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Conv -- 270 
    Layer 85 -- layer name -- 275 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 275 
    Layer 86 -- layer name -- 277 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 277 
    Layer 87 -- layer name -- 282 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 282 
    Layer 88 -- layer name -- 284 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 284 
    Layer 89 -- layer name -- 289 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---             Add -- 289 
    Layer 90 -- layer name -- 290 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Relu -- 290 
    Layer 91 -- layer name -- 292 
     Input dims size = 4     dims --- 1   1024   0   0   
    Supported TIDL layer type ---            Conv -- 292 
    Layer 92 -- layer name -- 297 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 297 
    Layer 93 -- layer name -- 299 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Conv -- 299 
    Layer 94 -- layer name -- 304 
     Input dims size = 4     dims --- 1   256   0   0   
    Supported TIDL layer type ---            Relu -- 304 
    L

    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/vseg_2D00_onnx_2D00_cat.ipynb

    Thank you,

    Paula

  • Hi,

    Thanks for the reply. I checked the same and found the same issue. Let's ignore this model.

    I tried to run one more model, a custom model with a ResNet-18 backbone. I am not able to compile this one either. It is an instance segmentation model, YOLACT: github.com/.../yolact

    I am attaching the logs. They report unsupported TIDL layers for some of the ops. Does this mean the TI framework does not support these operations yet, like Transpose and Pad?

    tidl_tools_path                                 = /home/root/notebooks/tidl_tools 
    artifacts_folder                                = custom-artifacts/yolact_resnet18_54_400000.onnx 
    tidl_tensor_bits                                = 8 
    debug_level                                     = 1 
    num_tidl_subgraphs                              = 16 
    tidl_denylist                                   = MaxPool   
    tidl_denylist_layer_name                        = 
    tidl_denylist_layer_type                         = 
    tidl_allowlist_layer_name                        = 
    model_type                                      =  
    tidl_calibration_accuracy_level                 = 7 
    tidl_calibration_options:num_frames_calibration = 4 
    tidl_calibration_options:bias_calibration_iterations = 3 
    mixed_precision_factor = -1.000000 
    model_group_id = 0 
    power_of_2_quantization                         = 2 
    enable_high_resolution_optimization             = 0 
    pre_batchnorm_fold                              = 1 
    add_data_convert_ops                          = 0 
    output_feature_16bit_names_list                 =  
    m_params_16bit_names_list                       =  
    reserved_compile_constraints_flag               = 1601 
    ti_internal_reserved_1                          = 
    
     ****** WARNING : Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options******
    
    Supported TIDL layer type ---            Conv -- Conv_16 
    Supported TIDL layer type ---            Relu -- Relu_17 
    Op type 'MaxPool'  added to unsupported nodes as specified in deny list 
    Supported TIDL layer type ---            Conv -- Conv_19 
    Supported TIDL layer type ---            Relu -- Relu_20 
    Supported TIDL layer type ---            Conv -- Conv_21 
    Supported TIDL layer type ---             Add -- Add_22 
    Supported TIDL layer type ---            Relu -- Relu_23 
    Supported TIDL layer type ---            Conv -- Conv_24 
    Supported TIDL layer type ---            Relu -- Relu_25 
    Supported TIDL layer type ---            Conv -- Conv_26 
    Supported TIDL layer type ---             Add -- Add_27 
    Supported TIDL layer type ---            Relu -- Relu_28 
    Supported TIDL layer type ---            Conv -- Conv_32 
    Supported TIDL layer type ---            Conv -- Conv_29 
    Supported TIDL layer type ---            Relu -- Relu_30 
    Supported TIDL layer type ---            Conv -- Conv_31 
    Supported TIDL layer type ---             Add -- Add_33 
    Supported TIDL layer type ---            Relu -- Relu_34 
    Supported TIDL layer type ---            Conv -- Conv_35 
    Supported TIDL layer type ---            Relu -- Relu_36 
    Supported TIDL layer type ---            Conv -- Conv_37 
    Supported TIDL layer type ---             Add -- Add_38 
    Supported TIDL layer type ---            Relu -- Relu_39 
    Supported TIDL layer type ---            Conv -- Conv_43 
    Supported TIDL layer type ---            Conv -- Conv_40 
    Supported TIDL layer type ---            Relu -- Relu_41 
    Supported TIDL layer type ---            Conv -- Conv_42 
    Supported TIDL layer type ---             Add -- Add_44 
    Supported TIDL layer type ---            Relu -- Relu_45 
    Supported TIDL layer type ---            Conv -- Conv_46 
    Supported TIDL layer type ---            Relu -- Relu_47 
    Supported TIDL layer type ---            Conv -- Conv_48 
    Supported TIDL layer type ---             Add -- Add_49 
    Supported TIDL layer type ---            Relu -- Relu_50 
    Supported TIDL layer type ---            Conv -- Conv_54 
    Supported TIDL layer type ---            Conv -- Conv_51 
    Supported TIDL layer type ---            Relu -- Relu_52 
    Supported TIDL layer type ---            Conv -- Conv_53 
    Supported TIDL layer type ---             Add -- Add_55 
    Supported TIDL layer type ---            Relu -- Relu_56 
    Supported TIDL layer type ---            Conv -- Conv_57 
    Supported TIDL layer type ---            Relu -- Relu_58 
    Supported TIDL layer type ---            Conv -- Conv_59 
    Supported TIDL layer type ---             Add -- Add_60 
    Supported TIDL layer type ---            Relu -- Relu_61 
    Supported TIDL layer type ---            Conv -- Conv_62 
    Supported TIDL layer type ---            Conv -- Conv_75 
    Supported TIDL layer type ---            Relu -- Relu_76 
    Supported TIDL layer type ---            Conv -- Conv_81 
    Supported TIDL layer type ---            Conv -- Conv_82 
    Supported TIDL layer type ---            Conv -- Conv_175 
    Supported TIDL layer type ---            Relu -- Relu_176 
    Supported TIDL layer type ---            Conv -- Conv_180 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_182 
    Supported TIDL layer type ---            Conv -- Conv_164 
    Supported TIDL layer type ---            Relu -- Relu_165 
    Supported TIDL layer type ---            Conv -- Conv_169 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_171 
    Supported TIDL layer type ---            Conv -- Conv_153 
    Supported TIDL layer type ---            Relu -- Relu_154 
    Supported TIDL layer type ---            Conv -- Conv_158 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_160 
    Supported TIDL layer type ---            Conv -- Conv_67 
    Error : Unsupported Opset for Resize OP 
    Resize layer delegated to ARM -- 'Resize_63' 
    Unsupported (TIDL check) TIDL layer type ---          Resize 
    ERROR : Opset 13 pad is not supported 
    Unsupported (TIDL check) TIDL layer type ---             Pad 
    Unsupported (TIDL check) TIDL layer type ---     AveragePool 
    Supported TIDL layer type ---             Add -- Add_68 
    Supported TIDL layer type ---            Conv -- Conv_77 
    Supported TIDL layer type ---            Relu -- Relu_78 
    Supported TIDL layer type ---            Conv -- Conv_124 
    Supported TIDL layer type ---            Relu -- Relu_125 
    Supported TIDL layer type ---            Conv -- Conv_135 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_143 
    Supported TIDL layer type ---            Conv -- Conv_73 
    Error : Unsupported Opset for Resize OP 
    Resize layer delegated to ARM -- 'Resize_69' 
    Unsupported (TIDL check) TIDL layer type ---          Resize 
    ERROR : Opset 13 pad is not supported 
    Unsupported (TIDL check) TIDL layer type ---             Pad 
    Unsupported (TIDL check) TIDL layer type ---     AveragePool 
    Supported TIDL layer type ---             Add -- Add_74 
    Supported TIDL layer type ---            Conv -- Conv_79 
    Supported TIDL layer type ---            Relu -- Relu_80 
    Supported TIDL layer type ---            Conv -- Conv_95 
    Supported TIDL layer type ---            Relu -- Relu_96 
    Supported TIDL layer type ---            Conv -- Conv_106 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_114 
    Supported TIDL layer type ---          Concat -- Concat_187 
    Supported TIDL layer type ---         Softmax -- Softmax_190 
    Supported TIDL layer type ---            Conv -- Conv_183 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_185 
    Supported TIDL layer type ---            Conv -- Conv_172 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_174 
    Supported TIDL layer type ---            Conv -- Conv_161 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_163 
    Supported TIDL layer type ---            Conv -- Conv_144 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_152 
    Supported TIDL layer type ---            Conv -- Conv_115 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_123 
    Supported TIDL layer type ---          Concat -- Concat_188 
    Supported TIDL layer type ---            T

    Please let me know about the unsupported TIDL layers.

    Any recommendations for an instance segmentation model? I checked the TI model zoo and it does not have any instance segmentation models.

    Thanks

    Akhilesh Gangwar

  • Also, I am not able to run yolov5s or yolov5l either; the kernel dies. This is the device log:

    Internal Error: do_recv() expected MSG_ID 5005, got 0!
    
    Stack trace:
      [bt] (0) /usr/lib/libti_inference_client.so(StackTrace[abi:cxx11](unsigned long, unsigned long)+0x1ed) [0x7f5d7fca505d]
      [bt] (1) /usr/lib/libti_inference_client.so(send_once(unsigned int, void*, int, progressbar*, progressbar*)+0x838) [0x7f5d7fca4ac8]
      [bt] (2) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0x8074c) [0x7f5d7ff3774c]
      [bt] (3) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0x80aa9) [0x7f5d7ff37aa9]
      [bt] (4) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0x880ef) [0x7f5d7ff3f0ef]
      [bt] (5) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0xa5a67) [0x7f5d7ff5ca67]
      [bt] (6) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0x74433) [0x7f5d7ff2b433]
      [bt] (7) /usr/bin/python3(_PyCFunction_FastCallDict+0x35c) [0x566b0c]
      [bt] (8) /usr/bin/python3() [0x594741]
      [bt] (9) /usr/bin/python3() [0x549ea5]
      [bt] (10) /usr/bin/python3() [0x5513f1]
      [bt] (11) /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so(+0x767a7) [0x7f5d7ff2d7a7]
      [bt] (12) /usr/bin/python3(_PyObject_FastCallKeywords+0x19c) [0x5a9b9c]
      [bt] (13) /usr/bin/python3() [0x50a2c3]
      [bt] (14) /usr/bin/python3(_PyEval_EvalFrameDefault+0x444) [0x50bcb4]
      [bt] (15) /usr/bin/python3() [0x509459]
    

    Thanks

    Akhilesh

  • Hi Akhilesh, 

    From the logs I see "MaxPool" is in the "deny_list". Could you please delete it from "compile_options"? It is there just as an example.
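    For reference, a sketch of what the compile options might look like with the example deny list entry removed (the paths and values below are illustrative, not taken from your notebook):

```python
# Illustrative TIDL OSRT compile options; adjust paths/values to your setup.
compile_options = {
    "tidl_tools_path": "/home/root/notebooks/tidl_tools",
    "artifacts_folder": "custom-artifacts/yolact_resnet18",
    "tensor_bits": 8,
    "accuracy_level": 1,
    "advanced_options:calibration_frames": 4,
    # "deny_list": "MaxPool",  # delete this line -- it was only an example
}
```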

    Also, one option to avoid creating a lot of subgraphs, which are probably causing the crash, is to use TIDL's meta-architecture support to replace the post-processing nodes with an OD layer supported by TI.

    edgeai-tidl-tools/docs/tidl_fsg_od_meta_arch.md at master · TexasInstruments/edgeai-tidl-tools · GitHub

    Training video with some additional details: Process This: Efficient human pose estimation using the YOLO-Pose model and TI processors | Video | TI.com. The video is for YOLO-Pose, but the same idea applies.

    The main idea is to replace the unsupported layers with an OD layer so that most (or all) of the model can be offloaded to TIDL.

    In Model Analyzer, under notebooks/prebuilt-models/8bits, you can untar and inspect the OD models to get prototxt examples.
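    As a rough sketch, the meta-architecture is enabled through extra compile options. The option names below follow the edgeai-tidl-tools examples; the meta_arch_type value and the prototxt path depend on your model and are assumptions here:

    ```python
    # Hedged sketch: extra options enabling TIDL's OD meta-architecture.
    # Names follow edgeai-tidl-tools examples; values/paths are assumptions.
    od_options = {
        "model_type": "OD",  # tells TIDL this is an Object Detection network
        "object_detection:meta_arch_type": 6,  # e.g. 6 for YOLO-style heads (model dependent)
        "object_detection:meta_layers_names_list": "my_model.prototxt",  # hypothetical path
    }
    ```

    These are merged into the same compile_options dictionary passed to the ONNX runtime session.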

    Thank you,

    Paula

  • Hi Paula,

    I am able to run the yolov5 model provided in the TI model zoo as a custom model. Now I am trying an instance segmentation model and getting some undefined behavior.

    Sometimes I am able to compile the model and can see the input and output details. Other times I get an error while compiling the model itself. 

    The error -

    ---------------------------------------------------------------------------
    RuntimeException                          Traceback (most recent call last)
    <ipython-input-16-5cc78ed11a7d> in <module>
    ----> 1 sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
    
    /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in __init__(self, path_or_bytes, sess_options, providers, provider_options)
        281 
        282         try:
    --> 283             self._create_inference_session(providers, provider_options)
        284         except RuntimeError:
        285             if self._enable_fallback:
    
    /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options)
        313 
        314         # initialize the C++ InferenceSession
    --> 315         sess.initialize_session(providers, provider_options)
        316 
        317         self._sess = sess
    
    RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: basic_string::_M_create

    I checked the logs. There are some unsupported operators as well. 

    5, 256
     87,          Concat, 4, 1, 250, 257
     88,            Conv, 3, 1, 257, 258
     89,            Relu, 1, 1, 258, 259
     90,            Conv, 3, 1, 259, 260
     91,            Relu, 1, 1, 260, 261
     92,            Conv, 3, 1, 261, 262
     93,            Relu, 1, 1, 262, 263
     94,            Conv, 3, 1, 263, 264
     95,            Relu, 1, 1, 264, 265
     96,            Conv, 3, 1, 259, 266
     97,            Relu, 1, 1, 266, 267
     98,          Concat, 2, 1, 265, 268
     99,            Conv, 3, 1, 268, 269
    100,            Relu, 1, 1, 269, 270
    101,            Conv, 3, 1, 270, 271
    102,            Relu, 1, 1, 271, 272
    103,          Resize, 3, 1, 272, 277
    104,          Concat, 2, 1, 277, 278
    105,            Conv, 3, 1, 278, 279
    106,            Relu, 1, 1, 279, 280
    107,            Conv, 3, 1, 280, 281
    108,            Relu, 1, 1, 281, 282
    109,            Conv, 3, 1, 282, 283
    110,            Relu, 1, 1, 283, 284
    111,            Conv, 3, 1, 278, 285
    112,            Relu, 1, 1, 285, 286
    113,          Concat, 2, 1, 284, 287
    114,            Conv, 3, 1, 287, 288
    115,            Relu, 1, 1, 288, 289
    116,            Conv, 3, 1, 289, 290
    117,            Relu, 1, 1, 290, 291
    118,          Resize, 3, 1, 291, 296
    119,          Concat, 2, 1, 296, 297
    120,            Conv, 3, 1, 297, 298
    121,            Relu, 1, 1, 298, 299
    122,            Conv, 3, 1, 299, 300
    123,            Relu, 1, 1, 300, 301
    124,            Conv, 3, 1, 301, 302
    125,            Relu, 1, 1, 302, 303
    126,            Conv, 3, 1, 297, 304
    127,            Relu, 1, 1, 304, 305
    128,          Concat, 2, 1, 303, 306
    129,            Conv, 3, 1, 306, 307
    130,            Relu, 1, 1, 307, 308
    131,            Conv, 3, 1, 308, 309
    132,            Relu, 1, 1, 309, 310
    133,          Resize, 3, 1, 310, 315
    134,          Concat, 2, 1, 315, 316
    135,            Conv, 3, 1, 316, 317
    136,            Relu, 1, 1, 317, 318
    137,            Conv, 3, 1, 318, 319
    138,            Relu, 1, 1, 319, 320
    139,            Conv, 3, 1, 320, 321
    140,            Relu, 1, 1, 321, 322
    141,            Conv, 3, 1, 316, 323
    142,            Relu, 1, 1, 323, 324
    143,          Concat, 2, 1, 322, 325
    144,            Conv, 3, 1, 325, 326
    145,            Relu, 1, 1, 326, 327
    146,            Conv, 3, 1, 327, 370
    147,            Conv, 3, 1, 327, 328
    148,            Relu, 1, 1, 328, 329
    149,          Concat, 2, 1, 329, 330
    150,            Conv, 3, 1, 330, 331
    151,            Relu, 1, 1, 331, 332
    152,            Conv, 3, 1, 332, 333
    153,            Relu, 1, 1, 333, 334
    154,            Conv, 3, 1, 334, 335
    155,            Relu, 1, 1, 335, 336
    156,            Conv, 3, 1, 330, 337
    157,            Relu, 1, 1, 337, 338
    158,          Concat, 2, 1, 336, 339
    159,            Conv, 3, 1, 339, 340
    160,            Relu, 1, 1, 340, 341
    161,            Conv, 3, 1, 341, 680
    162,            Conv, 3, 1, 341, 342
    163,            Relu, 1, 1, 342, 343
    164,          Concat, 2, 1, 343, 344
    165,            Conv, 3, 1, 344, 345
    166,            Relu, 1, 1, 345, 346
    167,            Conv, 3, 1, 346, 347
    168,            Relu, 1, 1, 347, 348
    169,            Conv, 3, 1, 348, 349
    170,            Relu, 1, 1, 349, 350
    171,            Conv, 3, 1, 344, 351
    172,            Relu, 1, 1, 351, 352
    173,          Concat, 2, 1, 350, 353
    174,            Conv, 3, 1, 353, 354
    175,            Relu, 1, 1, 354, 355
    176,            Conv, 3, 1, 355, 990
    177,            Conv, 3, 1, 355, 356
    178,            Relu, 1, 1, 356, 357
    179,          Concat, 2, 1, 357, 358
    180,            Conv, 3, 1, 358, 359
    181,            Relu, 1, 1, 359, 360
    182,            Conv, 3, 1, 360, 361
    183,            Relu, 1, 1, 361, 362
    184,            Conv, 3, 1, 362, 363
    185,            Relu, 1, 1, 363, 364
    186,            Conv, 3, 1, 358, 365
    187,            Relu, 1, 1, 365, 366
    188,          Concat, 2, 1, 364, 367
    189,            Conv, 3, 1, 367, 368
    190,            Relu, 1, 1, 368, 369
    191,            Conv, 3, 1, 369, 1300
    
    Input tensor name -  images 
    Output tensor name - 1300 
    Output tensor name - 990 
    Output tensor name - 680 
    Output tensor name - 370 
    tidl_tools_path                                 = /home/root/notebooks/tidl_tools 
    artifacts_folder                                = custom-artifacts/onnx/yolact_resnet18_54_400000.onnx 
    tidl_tensor_bits                                = 8 
    debug_level                                     = 1 
    num_tidl_subgraphs                              = 16 
    tidl_denylist                                   = 
    tidl_denylist_layer_name                        = 
    tidl_denylist_layer_type                         = 
    tidl_allowlist_layer_name                        = 
    model_type                                      =  
    tidl_calibration_accuracy_level                 = 7 
    tidl_calibration_options:num_frames_calibration = 5 
    tidl_calibration_options:bias_calibration_iterations = 3 
    mixed_precision_factor = -1.000000 
    model_group_id = 0 
    power_of_2_quantization                         = 2 
    enable_high_resolution_optimization             = 0 
    pre_batchnorm_fold                              = 1 
    add_data_convert_ops                          = 0 
    output_feature_16bit_names_list                 =  
    m_params_16bit_names_list                       =  
    reserved_compile_constraints_flag               = 1601 
    ti_internal_reserved_1                          = 
    
     ****** WARNING : Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options******
    
    Supported TIDL layer type ---            Conv -- Conv_16 
    Supported TIDL layer type ---            Relu -- Relu_17 
    Supported TIDL layer type ---         MaxPool -- MaxPool_18 
    Supported TIDL layer type ---            Conv -- Conv_19 
    Supported TIDL layer type ---            Relu -- Relu_20 
    Supported TIDL layer type ---            Conv -- Conv_21 
    Supported TIDL layer type ---             Add -- Add_22 
    Supported TIDL layer type ---            Relu -- Relu_23 
    Supported TIDL layer type ---            Conv -- Conv_24 
    Supported TIDL layer type ---            Relu -- Relu_25 
    Supported TIDL layer type ---            Conv -- Conv_26 
    Supported TIDL layer type ---             Add -- Add_27 
    Supported TIDL layer type ---            Relu -- Relu_28 
    Supported TIDL layer type ---            Conv -- Conv_32 
    Supported TIDL layer type ---            Conv -- Conv_29 
    Supported TIDL layer type ---            Relu -- Relu_30 
    Supported TIDL layer type ---            Conv -- Conv_31 
    Supported TIDL layer type ---             Add -- Add_33 
    Supported TIDL layer type ---            Relu -- Relu_34 
    Supported TIDL layer type ---            Conv -- Conv_35 
    Supported TIDL layer type ---            Relu -- Relu_36 
    Supported TIDL layer type ---            Conv -- Conv_37 
    Supported TIDL layer type ---             Add -- Add_38 
    Supported TIDL layer type ---            Relu -- Relu_39 
    Supported TIDL layer type ---            Conv -- Conv_43 
    Supported TIDL layer type ---            Conv -- Conv_40 
    Supported TIDL layer type ---            Relu -- Relu_41 
    Supported TIDL layer type ---            Conv -- Conv_42 
    Supported TIDL layer type ---             Add -- Add_44 
    Supported TIDL layer type ---            Relu -- Relu_45 
    Supported TIDL layer type ---            Conv -- Conv_46 
    Supported TIDL layer type ---            Relu -- Relu_47 
    Supported TIDL layer type ---            Conv -- Conv_48 
    Supported TIDL layer type ---             Add -- Add_49 
    Supported TIDL layer type ---            Relu -- Relu_50 
    Supported TIDL layer type ---            Conv -- Conv_54 
    Supported TIDL layer type ---            Conv -- Conv_51 
    Supported TIDL layer type ---            Relu -- Relu_52 
    Supported TIDL layer type ---            Conv -- Conv_53 
    Supported TIDL layer type ---             Add -- Add_55 
    Supported TIDL layer type ---            Relu -- Relu_56 
    Supported TIDL layer type ---            Conv -- Conv_57 
    Supported TIDL layer type ---            Relu -- Relu_58 
    Supported TIDL layer type ---            Conv -- Conv_59 
    Supported TIDL layer type ---             Add -- Add_60 
    Supported TIDL layer type ---            Relu -- Relu_61 
    Supported TIDL layer type ---            Conv -- Conv_62 
    Supported TIDL layer type ---            Conv -- Conv_75 
    Supported TIDL layer type ---            Relu -- Relu_76 
    Supported TIDL layer type ---            Conv -- Conv_81 
    Supported TIDL layer type ---            Conv -- Conv_82 
    Supported TIDL layer type ---            Conv -- Conv_175 
    Supported TIDL layer type ---            Relu -- Relu_176 
    Supported TIDL layer type ---            Conv -- Conv_180 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_182 
    Supported TIDL layer type ---            Conv -- Conv_164 
    Supported TIDL layer type ---            Relu -- Relu_165 
    Supported TIDL layer type ---            Conv -- Conv_169 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_171 
    Supported TIDL layer type ---            Conv -- Conv_153 
    Supported TIDL layer type ---            Relu -- Relu_154 
    Supported TIDL layer type ---            Conv -- Conv_158 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_160 
    Supported TIDL layer type ---            Conv -- Conv_67 
    Error : Unsupported Opset for Resize OP 
    Resize layer delegated to ARM -- 'Resize_63' 
    Unsupported (TIDL check) TIDL layer type ---          Resize 
    ERROR : Opset 13 pad is not supported 
    Unsupported (TIDL check) TIDL layer type ---             Pad 
    Unsupported (TIDL check) TIDL layer type ---     AveragePool 
    Supported TIDL layer type ---             Add -- Add_68 
    Supported TIDL layer type ---            Conv -- Conv_77 
    Supported TIDL layer type ---            Relu -- Relu_78 
    Supported TIDL layer type ---            Conv -- Conv_124 
    Supported TIDL layer type ---            Relu -- Relu_125 
    Supported TIDL layer type ---            Conv -- Conv_135 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_143 
    Supported TIDL layer type ---            Conv -- Conv_73 
    Error : Unsupported Opset for Resize OP 
    Resize layer delegated to ARM -- 'Resize_69' 
    Unsupported (TIDL check) TIDL layer type ---          Resize 
    ERROR : Opset 13 pad is not supported 
    Unsupported (TIDL check) TIDL layer type ---             Pad 
    Unsupported (TIDL check) TIDL layer type ---     AveragePool 
    Supported TIDL layer type ---             Add -- Add_74 
    Supported TIDL layer type ---            Conv -- Conv_79 
    Supported TIDL layer type ---            Relu -- Relu_80 
    Supported TIDL layer type ---            Conv -- Conv_95 
    Supported TIDL layer type ---            Relu -- Relu_96 
    Supported TIDL layer type ---            Conv -- Conv_106 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_114 
    Supported TIDL layer type ---          Concat -- Concat_187 
    Supported TIDL layer type ---         Softmax -- Softmax_190 
    Supported TIDL layer type ---            Conv -- Conv_183 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_185 
    Supported TIDL layer type ---            Conv -- Conv_172 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_174 
    Supported TIDL layer type ---            Conv -- Conv_161 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_163 
    Supported TIDL layer type ---            Conv -- Conv_144 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_152 
    Supported TIDL layer type ---            Conv -- Conv_115 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_123 
    Supported TIDL layer type ---          Concat -- Concat_188 
    Unsupported (TIDL check) TIDL layer type ---            Tanh 
    Supported TIDL layer type ---            Conv -- Conv_177 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_179 
    Supported TIDL layer type ---            Conv -- Conv_166 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_168 
    Supported TIDL layer type ---            Conv -- Conv_155 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_157 
    Supported TIDL layer type ---            Conv -- Conv_126 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_134 
    Supported TIDL layer type ---            Conv -- Conv_97 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---         Reshape -- Reshape_105 
    Supported TIDL layer type ---          Concat -- Concat_186 
    Supported TIDL layer type ---            Conv -- Conv_83 
    Supported TIDL layer type ---            Relu -- Relu_84 
    Supported TIDL layer type ---            Conv -- Conv_85 
    Supported TIDL layer type ---            Relu -- Relu_86 
    Supported TIDL layer type ---            Conv -- Conv_87 
    Supported TIDL layer type ---            Relu -- Relu_88 
    Error : Unsupported Opset for Resize OP 
    Resize layer delegated to ARM -- 'Resize_89' 
    Unsupported (TIDL check) TIDL layer type ---          Resize 
    Supported TIDL layer type ---            Conv -- Conv_90 
    Supported TIDL layer type ---            Relu -- Relu_91 
    Supported TIDL layer type ---            Conv -- Conv_92 
    Supported TIDL layer type ---            Relu -- Relu_93 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    
    Preliminary subgraphs created = 20 
    
    *** WARNING : Number of subgraphs generated > max_num_subgraphs provided in options - additional subgraphs are delegated to ARM *** 
    Final number of subgraphs created are : 16, - Offloaded Nodes - 99, Total Nodes - 137 
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- TIDL_PoolingLayer '': kernel size 2x2 with stride 1x1 not supported  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- TIDL_PoolingLayer '': kernel size 2x2 with stride 1x1 not supported  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TanhLayer]  should be removed in import process. This activation type is not supported now !!!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    TIDL ALLOWLISTING LAYER CHECK -- [TIDL_TransposeLayer]  should be removed in import process. If not, this model will not work!  
    Running runtimes graphviz - /home/root/notebooks/tidl_tools/tidl_graphVisualiser_runtimes.out custom-artifacts/onnx/yolact_resnet18_54_400000.onnx/allowedNode.txt custom-artifacts/onnx/yolact_resnet18_54_400000.onnx/tempDir/graphvizInfo.txt custom-artifacts/onnx/yolact_resnet18_54_400000.onnx/tempDir/runtimes_visualization.svg 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_0_0
      0,            Conv, 3, 1, input.1, input.4
      1,            Relu, 1, 1, input.4, onnx::MaxPool_159
      2,         MaxPool, 1, 1, onnx::MaxPool_159, input.8
      3,            Conv, 3, 1, input.8, input.16
      4,            Relu, 1, 1, input.16, onnx::Conv_163
      5,            Conv, 3, 1, onnx::Conv_163, onnx::Add_496
      6,             Add, 2, 1, onnx::Add_496, onnx::Relu_166
      7,            Relu, 1, 1, onnx::Relu_166, input.24
      8,            Conv, 3, 1, input.24, input.32
      9,            Relu, 1, 1, input.32, onnx::Conv_170
     10,            Conv, 3, 1, onnx::Conv_170, onnx::Add_502
     11,             Add, 2, 1, onnx::Add_502, onnx::Relu_173
     12,            Relu, 1, 1, onnx::Relu_173, input.40
     13,            Conv, 3, 1, input.40, input.48
     14,            Relu, 1, 1, input.48, onnx::Conv_177
     15,            Conv, 3, 1, onnx::Conv_177, onnx::Add_508
     16,            Conv, 3, 1, input.40, onnx::Add_511
     17,             Add, 2, 1, onnx::Add_508, onnx::Relu_182
     18,            Relu, 1, 1, onnx::Relu_182, input.60
     19,            Conv, 3, 1, input.60, input.68
     20,            Relu, 1, 1, input.68, onnx::Conv_186
     21,            Conv, 3, 1, onnx::Conv_186, onnx::Add_517
     22,             Add, 2, 1, onnx::Add_517, onnx::Relu_189
     23,            Relu, 1, 1, onnx::Relu_189, input.76
     24,            Conv, 3, 1, input.76, input.84
     25,            Relu, 1, 1, input.84, onnx::Conv_193
     26,            Conv, 3, 1, onnx::Conv_193, onnx::Add_523
     27,            Conv, 3, 1, input.76, onnx::Add_526
     28,             Add, 2, 1, onnx::Add_523, onnx::Relu_198
     29,            Relu, 1, 1, onnx::Relu_198, input.96
     30,            Conv, 3, 1, input.96, input.104
     31,            Relu, 1, 1, input.104, onnx::Conv_202
     32,            Conv, 3, 1, onnx::Conv_202, onnx::Add_532
     33,             Add, 2, 1, onnx::Add_532, onnx::Relu_205
     34,            Relu, 1, 1, onnx::Relu_205, input.112
     35,            Conv, 3, 1, input.112, input.120
     36,            Relu, 1, 1, input.120, onnx::Conv_209
     37,            Conv, 3, 1, onnx::Conv_209, onnx::Add_538
     38,            Conv, 3, 1, input.112, onnx::Add_541
     39,             Add, 2, 1, onnx::Add_538, onnx::Relu_214
     40,            Relu, 1, 1, onnx::Relu_214, input.132
     41,            Conv, 3, 1, input.132, input.140
     42,            Relu, 1, 1, input.140, onnx::Conv_218
     43,            Conv, 3, 1, onnx::Conv_218, onnx::Add_547
     44,             Add, 2, 1, onnx::Add_547, onnx::Relu_221
     45,            Relu, 1, 1, onnx::Relu_221, input.148
     46,            Conv, 3, 1, input.148, x
     47,            Conv, 3, 1, x, onnx::Relu_244
     48,            Relu, 1, 1, onnx::Relu_244, onnx::Conv_245
     49,            Conv, 3, 1, onnx::Conv_245, input.156
     50,            Conv, 3, 1, input.156, input.160
     51,            Conv, 3, 1, input.160, input.200
     52,            Relu, 1, 1, input.200, onnx::Conv_445
     53,            Conv, 3, 1, onnx::Conv_445, onnx::Transpose_446
     54,            Conv, 3, 1, onnx::Conv_445, onnx::Transpose_459
    
    Input tensor name -  input.1 
    Output tensor name - input.76 
    Output tensor name - input.112 
    Output tensor name - x 
    Output tensor name - onnx::Conv_245 
    Output tensor name - input.156 
    Output tensor name - onnx::Conv_445 
    Output tensor name - onnx::Transpose_459 
    Output tensor name - onnx::Transpose_446 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_1_1
      0,            Conv, 3, 1, input.156, input.196
      1,            Relu, 1, 1, input.196, onnx::Conv_404
      2,            Conv, 3, 1, onnx::Conv_404, onnx::Transpose_418
      3,         Reshape, 2, 1, onnx::Reshape_460, onnx::Concat_471
    
    Input tensor name -  onnx::Reshape_460 
    
    Input tensor name -  input.156 
    Output tensor name - onnx::Concat_471 
    Output tensor name - onnx::Conv_404 
    Output tensor name - onnx::Transpose_418 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_2_2
      0,            Conv, 3, 1, onnx::Conv_245, input.192
      1,            Relu, 1, 1, input.192, onnx::Conv_363
      2,            Conv, 3, 1, onnx::Conv_363, onnx::Transpose_377
      3,         Reshape, 2, 1, onnx::Reshape_419, onnx::Concat_430
    
    Input tensor name -  onnx::Reshape_419 
    
    Input tensor name -  onnx::Conv_245 
    Output tensor name - onnx::Concat_430 
    Output tensor name - onnx::Conv_363 
    Output tensor name - onnx::Transpose_377 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_3_3
      0,            Conv, 3, 1, input.112, onnx::Add_232
      1,         Reshape, 2, 1, onnx::Reshape_378, onnx::Concat_389
    
    Input tensor name -  onnx::Reshape_378 
    
    Input tensor name -  input.112 
    Output tensor name - onnx::Concat_389 
    Output tensor name - onnx::Add_232 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_4_4
      0,             Add, 2, 1, onnx::Add_231, x.3
      1,            Conv, 3, 1, x.3, onnx::Relu_246
      2,            Relu, 1, 1, onnx::Relu_246, onnx::Conv_247
      3,            Conv, 3, 1, onnx::Conv_247, input.188
      4,            Relu, 1, 1, input.188, onnx::Conv_316
      5,            Conv, 3, 1, onnx::Conv_316, onnx::Transpose_332
    
    Input tensor name -  onnx::Add_231 
    
    Input tensor name -  onnx::Add_232 
    Output tensor name - x.3 
    Output tensor name - onnx::Conv_316 
    Output tensor name - onnx::Transpose_332 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_5_5
      0,            Conv, 3, 1, input.76, onnx::Add_242
      1,         Reshape, 2, 1, onnx::Reshape_333, onnx::Concat_346
    
    Input tensor name -  onnx::Reshape_333 
    
    Input tensor name -  input.76 
    Output tensor name - onnx::Concat_346 
    Output tensor name - onnx::Add_242 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_6_6
      0,             Add, 2, 1, onnx::Add_241, input.152
      1,            Conv, 3, 1, input.152, onnx::Relu_248
      2,            Relu, 1, 1, onnx::Relu_248, onnx::Conv_249
      3,            Conv, 3, 1, onnx::Conv_249, input.184
      4,            Relu, 1, 1, input.184, onnx::Conv_269
      5,            Conv, 3, 1, onnx::Conv_269, onnx::Transpose_285
    
    Input tensor name -  onnx::Add_241 
    
    Input tensor name -  onnx::Add_242 
    Output tensor name - onnx::Conv_249 
    Output tensor name - onnx::Conv_269 
    Output tensor name - onnx::Transpose_285 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_7_7
      0,            Conv, 3, 1, onnx::Conv_445, onnx::Transpose_472
      1,         Reshape, 2, 1, onnx::Reshape_286, onnx::Concat_299
      2,          Concat, 5, 1, onnx::Concat_299, onnx::Softmax_486
      3,         Softmax, 1, 1, onnx::Softmax_486, 489
    
    Input tensor name -  onnx::Reshape_286 
    
    Input tensor name -  onnx::Concat_346 
    
    Input tensor name -  onnx::Concat_389 
    
    Input tensor name -  onnx::Concat_430 
    
    Input tensor name -  onnx::Concat_471 
    
    Input tensor name -  onnx::Conv_445 
    Output tensor name - 489 
    Output tensor name - onnx::Transpose_472 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_8_8
      0,            Conv, 3, 1, onnx::Conv_404, onnx::Transpose_431
      1,         Reshape, 2, 1, onnx::Reshape_473, onnx::Concat_484
    
    Input tensor name -  onnx::Reshape_473 
    
    Input tensor name -  onnx::Conv_404 
    Output tensor name - onnx::Concat_484 
    Output tensor name - onnx::Transpose_431 
    *** In TIDL_createStateImportFunc *** 
    Compute on node : TIDLExecutionProvider_TIDL_9_9
      0,            Conv, 3, 1, onnx::Conv_363, onnx::Transpose_390
      1,         Reshape, 2, 1, onnx::Reshape_432, onnx::Concat_443
    
    Input tensor name -  onnx::Reshape_432 
    
    Input tensor name -  onn

    The logs were actually interrupted at this point and ended with the error.

    I have also seen the kernel die sometimes. 

    Let me know what you think of the above logs and errors.
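    As an aside, for logs this long, a small script can summarize which layer types TIDL accepted or rejected. A pure-Python sketch matching the log format above:

    ```python
    import re
    from collections import Counter

    def summarize_tidl_log(log_text):
        """Count supported vs. unsupported layer types in a TIDL allowlisting log."""
        supported = Counter(re.findall(r"Supported TIDL layer type ---\s+(\S+)", log_text))
        unsupported = Counter(
            re.findall(r"Unsupported \(TIDL check\) TIDL layer type ---\s+(\S+)", log_text)
        )
        return supported, unsupported

    # Tiny sample in the same format as the compilation log above.
    sample = """\
    Supported TIDL layer type ---            Conv -- Conv_16 
    Unsupported (TIDL check) TIDL layer type ---       Transpose 
    Supported TIDL layer type ---            Relu -- Relu_17 
    """
    sup, unsup = summarize_tidl_log(sample)
    print(dict(sup))    # {'Conv': 1, 'Relu': 1}
    print(dict(unsup))  # {'Transpose': 1}
    ```

    Feeding it the full log quickly shows that Transpose, Resize, Pad, AveragePool, and Tanh are the operators being rejected here.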

    Thanks

    Akhilesh

    Also, when I try TVM for a custom model, the example you have provided is for resnet18. Here too I am seeing some issues.

    For TVM-

    I am getting this error 

    curl_easy_perform: Peer certificate cannot be authenticated with given CA certificates (from logs).

    For ONNX -

    the kernel is dying and I got the same error -

    curl_easy_perform: Peer certificate cannot be authenticated with given CA certificates

    Let me know how you are able to cope with these issues. 

    Thanks

    Hi Akhilesh, the curl error looks like a communication error; maybe it was a hiccup in the communication between the container's server and the EVM farm. Could you try again and let me know if the error persists? If so, can you share the TVM model or the other model so I can reproduce it? 

    thank you,

    Paula

  • Hi Paula,

    The curl error occurred every time I tried. It was always there.

    For TVM - 

    I was using an ONNX model for instance segmentation. I was able to compile the model using TVM, but at runtime I am getting this error - 

    DLRError                                  Traceback (most recent call last)
    <ipython-input-16-41f189c6abd0> in <module>
          2 #Running inference several times to get an stable performance output
          3 for i in range(2):
    ----> 4     res = model.run({input_name : preprocess_for_onnx_resnet18v2('sample-images/elephant.bmp')})
    
    /usr/local/lib/python3.6/dist-packages/dlr/api.py in run(self, input_values)
        117         except Exception as ex:
        118             self.neo_logger.exception("error in running inference {} {}".format(self._impl.__class__.__name__, ex))
    --> 119             raise ex
        120 
        121     def get_input_names(self):
    
    /usr/local/lib/python3.6/dist-packages/dlr/api.py in run(self, input_values)
        114         """
        115         try:
    --> 116             return self._impl.run(input_values)
        117         except Exception as ex:
        118             self.neo_logger.exception("error in running inference {} {}".format(self._impl.__class__.__name__, ex))
    
    /usr/local/lib/python3.6/dist-packages/dlr/dlr_model.py in run(self, input_values)
        449                              "or a np.ndarray/generic (representing treelite models)")
        450         # run model
    --> 451         self._run()
        452         # get output
        453         for i in range(self.num_outputs):
    
    /usr/local/lib/python3.6/dist-packages/dlr/dlr_model.py in _run(self)
        331     def _run(self):
        332         """A light wrapper to call run in the DLR backend."""
    --> 333         self._check_call(self._lib.RunDLRModel(byref(self.handle)))
        334         if self.backend == "relayvm":
        335             self._lazy_init_output_shape()
    
    /usr/local/lib/python3.6/dist-packages/dlr/dlr_model.py in _check_call(self, ret)
        158         """
        159         if ret != 0:
    --> 160             raise DLRError(self._lib.DLRGetLastError().decode('ascii'))
        161 
        162     def get_input_names(self):
    
    DLRError: 

    It took around 35-40 minutes to compile the model. I am attaching the compilation logs too.

    For ONNX - 

    When I try to compile the same instance segmentation model, it gives me this error - 

    RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : 
    Exception during initialization: std::bad_alloc

    Please find the model, the TVM compilation logs, and the notebook in this zip. 

    notebooks.zip

  • Hi Akhilesh, in the cloud I was able to reproduce the errors you posted for the attached "yolact_resnet18_54_400000.onnx". I tried ARM only and it didn't work for me either; I am wondering if you were able to run it ARM-only OK as a sanity check. Please let me know.

    Also, I would recommend starting to test with edgeai-tidl-tools (https://github.com/TexasInstruments/edgeai-tidl-tools) so you can compile and run inference in host emulation on your local setup instead of the cloud. This gives you more debugging freedom, since all the scripts run locally. Also, edgeai-tidl-tools uses the latest PSDK 9.0, while the cloud is one release behind for now (soon to be updated).

    thank you,

    Paula

  • Hi Akhilesh, I did a quick try using edgeai-tidl-tools. I am getting an unsupported opset error for Resize. We currently support opset versions 10 and 11, but those Resize layers are opset 13. Would it be possible for you to change them?

    Those unsupported layers are delegated to ARM, as shown in the log below, which creates more subgraphs and probably uncovers some other issue. If we can get those layers to run in TIDL we would have only one graph, which is more ideal.

    Running_Model :  yolact_resnet18_54_400000  


    Running shape inference on model ../../../models/public/yolact_resnet18_54_400000.onnx

    Error : Unsupported Opset for Resize OP
    Resize layer delegated to ARM -- 'Resize_63'
    ERROR : Opset 13 pad is not supported
    Error : Unsupported Opset for Resize OP
    Resize layer delegated to ARM -- 'Resize_69'
    ERROR : Opset 13 pad is not supported
    Error : Unsupported Opset for Resize OP
    Resize layer delegated to ARM -- 'Resize_89'

    thank you,

    Paula

  • " We currently support Opset visions 10 and 11 but those resize layer are 13 "

    Something interesting happened. I changed the opset from 13 to 10 and tried to compile the ONNX model for TVM. I was able to compile and run the model successfully. I did not visualize the outputs, but I checked the output shapes and they are as expected. Then I ran the benchmarking function and checked the FPS. 

    I thought just changing the opset to 10 would work, but the next time I tried to recompile and run, I could still compile, but I got an error while running. 

    While executing this line - 

    model = DLRModel(output_dir, 'cpu')

    I got this error - 

    DLRError                                  Traceback (most recent call last)
    <ipython-input-55-a370abb07017> in <module>
          1 # use deployed artifacts from the compiled model
    ----> 2 model = DLRModel(output_dir, 'cpu')
    
    /usr/local/lib/python3.6/dist-packages/dlr/api.py in __init__(self, model_path, dev_type, dev_id, error_log_file, use_default_dlr)
         90         except Exception as ex:
         91             self.neo_logger.exception("error in DLRModel instantiation {}".format(ex))
    ---> 92             raise ex
         93 
         94     def run(self, input_values):
    
    /usr/local/lib/python3.6/dist-packages/dlr/api.py in __init__(self, model_path, dev_type, dev_id, error_log_file, use_default_dlr)
         87             if dev_id is None:
         88                 dev_id = 0
    ---> 89             self._impl = DLRModelImpl(model_path, dev_type, dev_id, error_log_file, use_default_dlr)
         90         except Exception as ex:
         91             self.neo_logger.exception("error in DLRModel instantiation {}".format(ex))
    
    /usr/local/lib/python3.6/dist-packages/dlr/dlr_model.py in __init__(self, model_path, dev_type, dev_id, error_log_file, use_default_dlr)
         80                                         c_char_p(model_path.encode()),
         81                                         c_int(device_table[dev_type]),
    ---> 82                                         c_int(dev_id)))
         83 
         84         self.backend = self._parse_backend()
    
    /usr/local/lib/python3.6/dist-packages/dlr/dlr_model.py in _check_call(self, ret)
        158         """
        159         if ret != 0:
    --> 160             raise DLRError(self._lib.DLRGetLastError().decode('ascii'))
        161 
        162     def get_input_names(self):
    
    DLRError: 

    I was able to run it only once. This was happening on the cloud.

    I'll set up everything locally and update you again. 

    I don't understand this behavior of everything running sometimes and failing with errors at other times. 

    Thanks

    Akhilesh

  • Hi Akhilesh, not sure. If you use onnxrt instead of tvm, does it work OK?

    thank you,

    Paula

  • Hi Paula,

    Using onnxrt, I am not even able to compile the model; the kernel dies. Let me set up the local environment and see if the same error occurs there as well. 

    Thanks

    Akhilesh

  • Hi Paula,

    I am trying to setup things locally as you mentioned and following the below link - https://github.com/TexasInstruments/edgeai-tidl-tools

    I have set up Docker and built it. When I tried to run the ./setup.sh script, I got a "protobuf compiler not found" error, so I installed it separately and ran setup.sh again. Everything ran smoothly without any error. 

    Now I am trying to compile and run the examples by running - 

    cmake ../examples && make -j && cd ..

    But I am getting some linking errors. Attaching the error logs. Please let me know.
    Device - am69a

    Thanks
    Akhilesh

    root@b9da96a9c641:/home/root/build# cmake ../examples && make -j && cd ..
    -- The C compiler identification is GNU 11.4.0
    -- The CXX compiler identification is GNU 11.4.0
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working C compiler: /usr/bin/cc - skipped
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Check for working CXX compiler: /usr/bin/c++ - skipped
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Detected processor: x86_64
    -- TARGET_DEVICE setting to: am69a
    -- TARGET_CPU not specicfied using x86
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = edgeai_tidl_examples
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = edgeai_tidl_examples
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = edgeai_tidl_examples
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = tfl_main
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = tfl_priority_scheduling
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = ort_priority_scheduling
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = edgeai_tidl_examples
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = tidlrt_clasification
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = dlr_main
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- CMAKE_BUILD_TYPE = Release PROJECT_NAME = ort_main
    -- setting TENSORFLOW_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/tflite_2.8_x86_u22/
    -- setting DLR_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/dlr_1.10.0_x86_u22/
    -- setting OPENCV_INSTALL_DIR path:/home/root/tidl_tools/osrt_deps/opencv_4.2.0_x86_u22/
    -- Compiling for x86 with am69a config
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/root/build
    [  7%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/utility_functs.cpp.o
    [  7%] Building CXX object osrt_cpp/pre_process/CMakeFiles/pre_process.dir/pre_process.cpp.o
    [ 11%] Building CXX object osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/print_utils.cpp.o
    [ 14%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/model_info.cpp.o
    [ 18%] Building CXX object osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/arg_parsing.cpp.o
    [ 22%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/ti_logger.cpp.o
    [ 25%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/edgeai_classnames.cpp.o
    [ 29%] Building CXX object osrt_cpp/post_process/CMakeFiles/post_process.dir/post_process.cpp.o
    [ 33%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/arg_parsing.cpp.o
    [ 37%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/pbPlots.cpp.o
    [ 40%] Building CXX object osrt_cpp/utils/CMakeFiles/utils.dir/src/supportLib.cpp.o
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/build.make:76: osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/arg_parsing.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/arg_parsing.cpp.o'
    make[2]: *** Waiting for unfinished jobs....
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:160: osrt_cpp/utils/CMakeFiles/utils.dir/src/supportLib.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/supportLib.cpp.o'
    make[2]: *** Waiting for unfinished jobs....
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/build.make:90: osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/print_utils.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/src/print_utils.cpp.o'
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:76: osrt_cpp/utils/CMakeFiles/utils.dir/src/utility_functs.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/utility_functs.cpp.o'
    make[1]: *** [CMakeFiles/Makefile2:425: osrt_cpp/advanced_examples/utils/CMakeFiles/utils_adv.dir/all] Error 2
    make[1]: *** Waiting for unfinished jobs....
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:132: osrt_cpp/utils/CMakeFiles/utils.dir/src/arg_parsing.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/arg_parsing.cpp.o'
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:118: osrt_cpp/utils/CMakeFiles/utils.dir/src/ti_logger.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/ti_logger.cpp.o'
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:104: osrt_cpp/utils/CMakeFiles/utils.dir/src/edgeai_classnames.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/edgeai_classnames.cpp.o'
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/pre_process/CMakeFiles/pre_process.dir/build.make:76: osrt_cpp/pre_process/CMakeFiles/pre_process.dir/pre_process.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/pre_process/CMakeFiles/pre_process.dir/pre_process.cpp.o'
    make[1]: *** [CMakeFiles/Makefile2:286: osrt_cpp/pre_process/CMakeFiles/pre_process.dir/all] Error 2
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/post_process/CMakeFiles/post_process.dir/build.make:76: osrt_cpp/post_process/CMakeFiles/post_process.dir/post_process.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/post_process/CMakeFiles/post_process.dir/post_process.cpp.o'
    make[1]: *** [CMakeFiles/Makefile2:260: osrt_cpp/post_process/CMakeFiles/post_process.dir/all] Error 2
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:90: osrt_cpp/utils/CMakeFiles/utils.dir/src/model_info.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/model_info.cpp.o'
    g++: warning: /bin/sh:: linker input file unused because linking not done
    g++: error: /bin/sh:: linker input file not found: No such file or directory
    g++: warning: 1:: linker input file unused because linking not done
    g++: error: 1:: linker input file not found: No such file or directory
    g++: warning: pkg-config:: linker input file unused because linking not done
    g++: error: pkg-config:: linker input file not found: No such file or directory
    g++: warning: not: linker input file unused because linking not done
    g++: error: not: linker input file not found: No such file or directory
    g++: warning: found: linker input file unused because linking not done
    g++: error: found: linker input file not found: No such file or directory
    make[2]: *** [osrt_cpp/utils/CMakeFiles/utils.dir/build.make:146: osrt_cpp/utils/CMakeFiles/utils.dir/src/pbPlots.cpp.o] Error 1
    make[2]: *** Deleting file 'osrt_cpp/utils/CMakeFiles/utils.dir/src/pbPlots.cpp.o'
    make[1]: *** [CMakeFiles/Makefile2:312: osrt_cpp/utils/CMakeFiles/utils.dir/all] Error 2
    make: *** [Makefile:136: all] Error 2
    root@b9da96a9c641:/home/root/build#
    
  • Hi Akhilesh, I am not too familiar with the current C++ examples from edgeai-tidl-tools. Could you try the Python examples instead?

    Location: /edgeai-tidl-tools/examples/osrt_python/ort
    Run: python3 onnxrt_ep.py

    You are probably working in a Docker container. If so, and you exited it, you need to run "source ./setup.sh" again.

    If your container is still running, or you are using another setup in which you have already run "source ./setup.sh", then you only need to do:

    export SOC=am69a
    export TIDL_TOOLS_PATH=$(pwd)/tidl_tools
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TIDL_TOOLS_PATH
    export ARM64_GCC_PATH=$(pwd)/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu

    In order to add your model, please check https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/model_configs.py

    Something like this needs to be added (please correct it if wrong):

        'yolact_resnet18_54_400000' : {
            'model_path' : os.path.join(models_base_path, 'yolact_resnet18_54_400000.onnx'),
            'mean': [123.675, 116.28, 103.53],
            'scale' : [0.017125, 0.017507, 0.017429],
            'num_images' : numImages,
            'num_classes': 19,
            'session_name' : 'onnxrt' ,
            'model_type': 'seg'
        },

    Then you can use it inside onnxrt_ep.py by changing the models list to test, e.g. models = ['yolact_resnet18_54_400000']

    Also, could you share your modified model (opset 10)?

    thank you,

    Paula

  • Sure. Let me try the Python examples. Attaching the yolact ONNX model with opset 10: yolact_resnet18_54_400000opset10.zip

  • Hi Paula

    I set up the Docker-based system and was able to compile the same model as well. I ran the code with the compile option -d, which says "Disable offload to TIDL". I am not sure whether I have fully understood this. Is it running the model on the PC, or on the ARM core of the emulated hardware?

    When I tried to run without any compile option, it is supposed to run on the emulated device using TIDL, but I got this error -

    python3: src/workload_ref_exec.c:107: int32_t WorkloadRefExec_getIndexFromDataId(int32_t, int32_t*, int32_t): Assertion `arrIdx != WORKLOAD_REF_BUF_NOT_FOUND' failed.
    Aborted (core dumped)
    

    Am I not supposed to run the device-side part in this emulated environment? I thought that since we are emulating the hardware itself, I would be able to run it. 

    The next part I have a doubt about is operator support. I saw in the documentation that pooling has been validated for kernel sizes 3x3, 2x2, and 1x1, with a maximum stride of 2.

    Yet I am getting an unsupported-op error for average pooling with kernel 2x2 and stride 1. I am not sure why.

    Thanks

    Akhilesh

  • Hi Akhilesh, I was able to reproduce it. Let me check why the allowlist is denying the two layers below, and I will get back to you.

    69 AveragePool_65 AveragePool outputAdjNodes 1 70 inputAdjNodes 1 68 additionalOutput 0 diagInfo TIDL ALLOWLISTING LAYER CHECK -- TIDL_PoolingLayer '': kernel size 2x2 with stride 1x1 not supported

    81 AveragePool_70 AveragePool outputAdjNodes 1 82 inputAdjNodes 1 80 additionalOutput 0 diagInfo TIDL ALLOWLISTING LAYER CHECK -- TIDL_PoolingLayer '': kernel size 2x2 with stride 1x1 not supported

    thank you,

    Paula

  • Hi Paula, when I run the model using the TIDL artifacts, I am getting runtime issues.

    These are the logs I got using gdb.

    root@50a41c466951:/home/root/edgeai-tidl-tools/examples/osrt_python/ort# gdb --args python3 onnxrt_ep.py
    GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
    Copyright (C) 2022 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Type "show copying" and "show warranty" for details.
    This GDB was configured as "x86_64-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <https://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
        <http://www.gnu.org/software/gdb/documentation/>.
    
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from python3...
    (No debugging symbols found in python3)
    (gdb) run
    Starting program: /usr/bin/python3 onnxrt_ep.py
    warning: Error disabling address space randomization: Operation not permitted
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    [New Thread 0x7f318e3ff640 (LWP 1946)]
    [New Thread 0x7f318dbfe640 (LWP 1947)]
    [New Thread 0x7f318b3fd640 (LWP 1948)]
    [New Thread 0x7f3188bfc640 (LWP 1949)]
    [New Thread 0x7f31863fb640 (LWP 1950)]
    [New Thread 0x7f3181bfa640 (LWP 1951)]
    [New Thread 0x7f317f3f9640 (LWP 1952)]
    [New Thread 0x7f317cbf8640 (LWP 1953)]
    [New Thread 0x7f317a3f7640 (LWP 1954)]
    [New Thread 0x7f3177bf6640 (LWP 1955)]
    [New Thread 0x7f31753f5640 (LWP 1956)]
    [New Thread 0x7f3172bf4640 (LWP 1957)]
    [New Thread 0x7f31723f3640 (LWP 1958)]
    [New Thread 0x7f316dbf2640 (LWP 1959)]
    [New Thread 0x7f316d3f1640 (LWP 1960)]
    args,  Namespace(compile=False, disable_offload=False, run_model_zoo=False, models=[])
    Available execution providers :  ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']
    
    Running_Model :  yolact_resnet18_54_400000_opset10_noAvgPool
    
    platform.machine() :  x86_64
    model config :  {'model_path': '../../../models/akhilesh/yolact_resnet18_54_400000_opset10_noAvgPool.onnx', 'mean': [123.675, 116.28, 103.53], 'scale': [0.017125, 0.017507, 0.017429], 'num_images': 3, 'num_classes': 81, 'session_name': 'onnxrt', 'model_type': 'seg'}
    ***** executing model *****
    [New Thread 0x7f315c20a640 (LWP 1961)]
    [New Thread 0x7f315ba09640 (LWP 1962)]
    [New Thread 0x7f315b208640 (LWP 1963)]
    [New Thread 0x7f315aa07640 (LWP 1964)]
    [New Thread 0x7f315a206640 (LWP 1965)]
    [New Thread 0x7f3159a05640 (LWP 1966)]
    [New Thread 0x7f3159204640 (LWP 1967)]
    libtidl_onnxrt_EP loaded 0x564292e93d30
    Final number of subgraphs created are : 3, - Offloaded Nodes - 133, Total Nodes - 135
    The soft limit is 2048
    The hard limit is 2048
    MEM: Init ... !!!
    MEM: Init ... Done !!!
     0.0s:  VX_ZONE_INIT:Enabled
     0.5s:  VX_ZONE_ERROR:Enabled
     0.7s:  VX_ZONE_WARNING:Enabled
    [New Thread 0x7f31589f2640 (LWP 1968)]
    [New Thread 0x7f313a95e640 (LWP 1969)]
    [New Thread 0x7f313a15d640 (LWP 1970)]
    [New Thread 0x7f313995c640 (LWP 1971)]
    [New Thread 0x7f313915b640 (LWP 1972)]
    [New Thread 0x7f3122ff2640 (LWP 1973)]
    [New Thread 0x7f31227f1640 (LWP 1974)]
    [New Thread 0x7f3121ff0640 (LWP 1975)]
    [New Thread 0x7f31217ef640 (LWP 1976)]
    [New Thread 0x7f3120fee640 (LWP 1977)]
    [New Thread 0x7f31207ed640 (LWP 1978)]
    [New Thread 0x7f311ffec640 (LWP 1979)]
    [New Thread 0x7f311f7eb640 (LWP 1980)]
    [New Thread 0x7f311efea640 (LWP 1981)]
    [New Thread 0x7f311e7e9640 (LWP 1982)]
    [New Thread 0x7f311dfe8640 (LWP 1983)]
    [New Thread 0x7f311d7e7640 (LWP 1984)]
    [New Thread 0x7f311cfe6640 (LWP 1985)]
    [New Thread 0x7f311c7e5640 (LWP 1986)]
    [New Thread 0x7f311bfe4640 (LWP 1987)]
    [New Thread 0x7f311b7e3640 (LWP 1988)]
    [New Thread 0x7f311afe2640 (LWP 1989)]
    [New Thread 0x7f311a7e1640 (LWP 1990)]
    [New Thread 0x7f3119fe0640 (LWP 1991)]
    [New Thread 0x7f31197df640 (LWP 1992)]
    [New Thread 0x7f3118fde640 (LWP 1993)]
    [New Thread 0x7f31187dd640 (LWP 1994)]
    [New Thread 0x7f3117fdc640 (LWP 1995)]
    [New Thread 0x7f31177db640 (LWP 1996)]
    [New Thread 0x7f3116fda640 (LWP 1997)]
    [New Thread 0x7f31167d9640 (LWP 1998)]
    [New Thread 0x7f3115fd8640 (LWP 1999)]
    [New Thread 0x7f31157d7640 (LWP 2000)]
    [New Thread 0x7f3114fd6640 (LWP 2001)]
    [New Thread 0x7f31147d5640 (LWP 2002)]
    [New Thread 0x7f3113fd4640 (LWP 2003)]
    [New Thread 0x7f31137d3640 (LWP 2004)]
    [New Thread 0x7f3112fd2640 (LWP 2005)]
    [New Thread 0x7f31127d1640 (LWP 2006)]
    [New Thread 0x7f3111fd0640 (LWP 2007)]
    [New Thread 0x7f31117cf640 (LWP 2008)]
    [New Thread 0x7f3110fce640 (LWP 2009)]
    [New Thread 0x7f31107cd640 (LWP 2010)]
    [New Thread 0x7f310ffcc640 (LWP 2011)]
    [New Thread 0x7f310f7cb640 (LWP 2012)]
    [New Thread 0x7f310efca640 (LWP 2013)]
    [New Thread 0x7f310e7c9640 (LWP 2014)]
    [New Thread 0x7f310dfc8640 (LWP 2015)]
    [New Thread 0x7f310d7c7640 (LWP 2016)]
    [New Thread 0x7f310cfc6640 (LWP 2017)]
    [New Thread 0x7f310c7c5640 (LWP 2018)]
    [New Thread 0x7f310bfc4640 (LWP 2019)]
    [New Thread 0x7f310b7c3640 (LWP 2020)]
    [New Thread 0x7f310afc2640 (LWP 2021)]
    [New Thread 0x7f310a7c1640 (LWP 2022)]
    [New Thread 0x7f3109fc0640 (LWP 2023)]
    [New Thread 0x7f31097bf640 (LWP 2024)]
    [New Thread 0x7f3108fbe640 (LWP 2025)]
    [New Thread 0x7f31087bd640 (LWP 2026)]
    [New Thread 0x7f3107fbc640 (LWP 2027)]
    [New Thread 0x7f31077bb640 (LWP 2028)]
    [New Thread 0x7f3106fba640 (LWP 2029)]
    [New Thread 0x7f31067b9640 (LWP 2030)]
    [New Thread 0x7f3105fb8640 (LWP 2031)]
    [New Thread 0x7f31057b7640 (LWP 2032)]
    [New Thread 0x7f3104fb6640 (LWP 2033)]
    [New Thread 0x7f31047b5640 (LWP 2034)]
    [New Thread 0x7f3103fb4640 (LWP 2035)]
    [New Thread 0x7f31037b3640 (LWP 2036)]
    [New Thread 0x7f3102fb2640 (LWP 2037)]
    [New Thread 0x7f31027b1640 (LWP 2038)]
    [New Thread 0x7f3101fb0640 (LWP 2039)]
    [New Thread 0x7f31017af640 (LWP 2040)]
    [New Thread 0x7f3100fae640 (LWP 2041)]
    [New Thread 0x7f31007ad640 (LWP 2042)]
     0.5446s:  VX_ZONE_INIT:[tivxInit:185] Initialization Done !!!
    created session for model.
    height, width, channel, batch, floating_model:  550 550 3 1 True
    running session for the model
    input image shape :  (1, 3, 550, 550)
    python3: src/workload_ref_exec.c:107: int32_t WorkloadRefExec_getIndexFromDataId(int32_t, int32_t*, int32_t): Assertion `arrIdx != WORKLOAD_REF_BUF_NOT_FOUND' failed.
    
    Thread 24 "python3" received signal SIGABRT, Aborted.
    [Switching to Thread 0x7f31589f2640 (LWP 1968)]
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=139849916950080) at ./nptl/pthread_kill.c:44
    44      ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
    #0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=139849916950080) at ./nptl/pthread_kill.c:44
    #1  __pthread_kill_internal (signo=6, threadid=139849916950080) at ./nptl/pthread_kill.c:78
    #2  __GI___pthread_kill (threadid=139849916950080, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
    #3  0x00007f31929a8476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
    #4  0x00007f319298e7f3 in __GI_abort () at ./stdlib/abort.c:79
    #5  0x00007f319298e71b in __assert_fail_base (fmt=0x7f3192b43150 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
        assertion=0x7f312c909280 "arrIdx != WORKLOAD_REF_BUF_NOT_FOUND",
        file=0x7f312c9092fd "src/workload_ref_exec.c", line=107, function=<optimized out>) at ./assert/assert.c:92
    #6  0x00007f319299fe96 in __GI___assert_fail (assertion=0x7f312c909280 "arrIdx != WORKLOAD_REF_BUF_NOT_FOUND",
        file=0x7f312c9092fd "src/workload_ref_exec.c", line=107,
        function=0x7f312c909238 "int32_t WorkloadRefExec_getIndexFromDataId(int32_t, int32_t*, int32_t)")
        at ./assert/assert.c:101
    #7  0x00007f312bf530ef in WorkloadRefExec_getPtrsFromWorkload(TIDL_NetworkCommonParams*, sWorkloadUnit_t*, sTIDL_AlgLayer_t*, sTIDL_Layer_t*, void**, void**) () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #8  0x00007f312bf53c67 in WorkloadRefExec_Process(TIDL_Obj*, TIDL_NetworkCommonParams*, sWorkloadUnit_t*, sTIDL_AlgLayer_t*, sTIDL_Layer_t*, void**, void**, int, int, int*) ()
       from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #9  0x00007f312bef3046 in TIDL_process(IVISION_Obj*, IVISION_BufDescList*, IVISION_BufDescList*, IVISION_InArgs*, IVISION_OutArgs*) () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #10 0x00007f312bee2f0a in tivxKernelTIDLProcess () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #11 0x00007f312becfa71 in ownTargetKernelExecute () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #12 0x00007f312bece23f in ownTargetNodeDescNodeExecuteTargetKernel ()
       from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #13 0x00007f312bece7cf in ownTargetNodeDescNodeExecute ()
       from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #14 0x00007f312becec0c in ownTargetTaskMain () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #15 0x00007f312bedef5c in tivxTaskMain () from /home/root/edgeai-tidl-tools/tidl_tools/libvx_tidl_rt.so
    #16 0x00007f31929faac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
    #17 0x00007f3192a8bbf4 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:100
    (gdb)
    

    Did you look into the 3 subgraphs, and into why average pooling is not in the supported-operators list?

    I even tried changing average pooling to max pooling and got the same issue. I also cannot see any other operator that is not supported.

    Thanks

    Akhilesh

  • Hi Akhilesh, I got the same "WORKLOAD_REF_BUF_NOT_FOUND" error during runtime. I requested help from a colleague regarding average pooling not being supported; he will reply to you. Please wait a few days, as there are some holidays.

    thank you,

    Paula

  • Sure Paula.

    I also checked whether I was putting average pooling or max pooling in the deny list of the compile options. That's not the case; we don't have anything in the deny list. I am using the compile options given in common_utils.py.

    Thanks

    Akhilesh

  • Hi Paula, any update on this?

    Thanks

    Akhilesh