
TDA4VM: Segmentation Fault loading model with edge ai tidl

Part Number: TDA4VM

Hi all,

I am trying to load a TFLite model, but it fails. The only message received is "Segmentation fault", with no further details (tested with different debug levels).

This is the relevant piece of code:

import os
import tflite_runtime.interpreter as tflite

compile_options = {"tidl_tools_path": os.environ['TIDL_TOOLS_PATH'],
                   "tensor_bits": 8,
                   "accuracy": 1,
                   "debug_level": 1,  # also tested with debug_level 3
                   "artifacts_folder": "/home/tda4/output"}
tidl_delegate = [tflite.load_delegate(os.path.join(os.environ['TIDL_TOOLS_PATH'],
                                                   'tidl_model_import_tflite.so'),
                                      compile_options)]
interpreter = tflite.Interpreter(model_path="/home/tda4/abc/tfl/data/model.tflite",
                                 experimental_delegates=tidl_delegate)

And the output is:

****** WARNING : Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options******
Segmentation fault

More information about why the segmentation fault occurs would be helpful, but I am not sure how to obtain it.
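One general way to get at least the Python-side stack at the moment of the crash is the standard-library faulthandler module (a generic technique, not TIDL-specific): it installs a handler that prints the Python traceback on SIGSEGV before the process dies. It cannot show the native frames inside the TIDL import library; for those, running the script under gdb (gdb --args python3 script.py, then bt after the crash) is the usual approach.

```python
import faulthandler
import sys

# Print the Python traceback to stderr on SIGSEGV/SIGFPE/SIGABRT/SIGBUS/SIGILL,
# e.g. when the crash happens inside tidl_model_import_tflite.so. Enable this
# before loading the delegate so the last executed Python line is shown.
faulthandler.enable(file=sys.stderr)

print(faulthandler.is_enabled())  # True once the handler is installed
```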

I have tested other TFLite models and they seem to work.

Edit: I have now tested the same model with the ONNX EP, which writes more to the log. For example, the Slice and Sub layers were reported as not supported, so I added them to the deny_list. I still get the segmentation fault, with the following log:

tidl_tools_path = /home/root/tidl_tools
artifacts_folder = /home/root/output
tidl_tensor_bits = 8
debug_level = 1
num_tidl_subgraphs = 16
tidl_denylist = Slice Sub
tidl_denylist_layer_name =
tidl_denylist_layer_type =
tidl_allowlist_layer_name =
model_type =
tidl_calibration_accuracy_level = 7
tidl_calibration_options:num_frames_calibration = 20
tidl_calibration_options:bias_calibration_iterations = 50
mixed_precision_factor = -1.000000
model_group_id = 0
power_of_2_quantization = 2
enable_high_resolution_optimization = 0
pre_batchnorm_fold = 1
add_data_convert_ops = 0
output_feature_16bit_names_list =
m_params_16bit_names_list =

The log does not seem to be complete, though.
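For completeness, this is roughly how the deny_list was passed to the ONNX runtime EP. This is a sketch modelled on the edgeai-tidl-tools OSRT examples; the option keys and provider names are taken from those examples, and the session creation itself requires an onnxruntime build that ships the TIDL compilation provider.

```python
# Compilation options, modelled on the edgeai-tidl-tools ONNX examples.
# The paths match the log above; "deny_list" keeps the listed ONNX
# operator types on the ARM core instead of offloading them to TIDL.
delegate_options = {
    "tidl_tools_path": "/home/root/tidl_tools",
    "artifacts_folder": "/home/root/output",
    "tensor_bits": 8,
    "debug_level": 1,
    "deny_list": "Slice, Sub",
}

# Session creation (requires an onnxruntime build with the TIDL EP):
#   import onnxruntime as rt
#   so = rt.SessionOptions()
#   sess = rt.InferenceSession("model.onnx",
#                              providers=["TIDLCompilationProvider",
#                                         "CPUExecutionProvider"],
#                              provider_options=[delegate_options, {}],
#                              sess_options=so)
```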

Best

Ashay

  • Hi Ashay, can this model run correctly in ARM-only mode? That is, running the interpreter without tidl_delegates, as in the example lines below:

    Tflite:
    interpreter = tflite.Interpreter(model_path=tflite_model_path)

    Onnx: 
    EP_list = ['CPUExecutionProvider']
    sess = rt.InferenceSession(onnx_model_path, providers=EP_list, sess_options=so)

    If so, can you share your model?

    thank you,

    Paula

  • Hi Paula, yes, I can confirm that the model runs in ARM-only mode (tested for TFLite). Unfortunately, it is not possible to share the model. Can you suggest what I can try in order to at least generate some logs for better debugging?

    If it is helpful, I can provide the list of layers/operations we are using:

    StridedSlice
    Equal
    Sub
    Mul
    Cast
    Reshape
    FullyConnected
    Add
    Concatenate
    LeakyRelu
    Sigmoid

    Best
    Ashay
  • Hi Ashay, one option is to use "allow_list" and offload the layers to TIDL one by one to see which one is causing the issue.

    edgeai-tidl-tools/examples/osrt_python at master · TexasInstruments/edgeai-tidl-tools · GitHub

    "allow_list:layer_name": forcefully enables offload of a particular operator to the TIDL DSP by layer name (comma-separated string; applies at model compilation). Only the specified layer(s) are accelerated; the others are delegated to ARM. Experimental for the TFLite/ONNX runtimes and currently not applicable to TVM.

    Depending on the model size this could be a bit cumbersome, but it is an option.

    thank you,

    Paula
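The layer-by-layer approach above can be scripted. A hedged driver sketch follows: compile_one.py is a hypothetical helper that builds compile_options with "allow_list:layer_name" set to its first argument and loads the model; each attempt runs in a child process so a segfault there does not kill the driver itself.

```python
import subprocess
import sys

# TFLite operator types present in the model (from the list above).
LAYERS = ["StridedSlice", "Equal", "Sub", "Mul", "Cast", "Reshape",
          "FullyConnected", "Add", "Concatenate", "LeakyRelu", "Sigmoid"]

def crashed(returncode):
    # On Linux, a process killed by a signal has a negative return code;
    # a segfault (SIGSEGV) gives -11.
    return returncode < 0

def try_offload(layer):
    """Compile with only `layer` offloaded to TIDL, in a child process.

    compile_one.py is a hypothetical helper script that reads the layer
    name from sys.argv[1] and runs the compilation shown earlier."""
    proc = subprocess.run([sys.executable, "compile_one.py", layer])
    return not crashed(proc.returncode)

# for layer in LAYERS:
#     print(layer, "ok" if try_offload(layer) else "CRASHED")
```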

  • Hi Paula, thanks for your suggestion. To make sure I got it right, please find an example:

    import os
    import tflite_runtime.interpreter as tflite

    compile_options = {"tidl_tools_path": os.environ['TIDL_TOOLS_PATH'],
                       "tensor_bits": 8,
                       "debug_level": 3,
                       "allow_list:layer_name": "FullyConnected",
                       "artifacts_folder": "/home/tda4/output"}
    tidl_delegate = [tflite.load_delegate(os.path.join(os.environ['TIDL_TOOLS_PATH'],
                                                       'tidl_model_import_tflite.so'),
                                          compile_options)]
    interpreter = tflite.Interpreter(model_path=tflite_model_path,
                                     experimental_delegates=tidl_delegate)

    Best

    Ashay

  • Hi Ashay, that is correct.

    thank you,

    Paula

  • Hi Paula, 

    apologies for the late response. I tried this approach; however, it was not successful. For reference, I have created a dummy model that you can test. Please find the link here: https://drive.google.com/file/d/1xMwYQfIr1e9tTs2e1x5pD8lngbC29c86/view?usp=drive_link

    Maybe you can provide more ideas after looking at the model structure, in case something there is incompatible.

    Thank you in advance.

    Best

    Ashay