
TDA4VH-Q1: Compiled ONNX .bin artifact has poor accuracy compared to the original model.

Part Number: TDA4VH-Q1

Hi, 

This is a continuation of the discussion in this thread:

e2e.ti.com/.../4871255

I compiled an ONNX model to a .bin file and ran inference using the compiled artifacts.

I compared the TIDL result with the original ONNX model output; this is my comparison graph:

I have 21 outputs, and I would expect the plot to be a straight line from bottom-left to top-right.

In this experiment I used a single image as the calibration input, and I also tested the result with the exact same image.

I think this setup is the easiest possible case and the quantization should work well, but it turns out it does not.
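For reference, this is roughly how I run the two models and produce the comparison plot. It is a minimal sketch, not my exact script: the image path and output file names are placeholders, and I am assuming the usual TIDLExecutionProvider inference flow from edgeai-tidl-tools.

```python
import numpy as np
import onnxruntime as ort
import matplotlib.pyplot as plt

img = np.load("calib_image.npy")  # placeholder: the single preprocessed input image

# Float reference on CPU
ref_sess = ort.InferenceSession("resizeStrip.onnx",
                                providers=["CPUExecutionProvider"])
input_name = ref_sess.get_inputs()[0].name
ref_outs = ref_sess.run(None, {input_name: img})

# Inference through the compiled TIDL artifacts
tidl_opts = {"tidl_tools_path": "/home/root/tidl_tools",
             "artifacts_folder": "../../../model-artifacts/modelResize_onnx/"}
tidl_sess = ort.InferenceSession("resizeStrip.onnx",
                                 providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
                                 provider_options=[tidl_opts, {}])
tidl_outs = tidl_sess.run(None, {input_name: img})

# Scatter each of the 21 outputs: float result on x, TIDL result on y.
# A good quantization should put all points close to the y = x diagonal.
for ref, tidl in zip(ref_outs, tidl_outs):
    plt.scatter(ref.flatten(), tidl.flatten(), s=1)
plt.xlabel("original ONNX output (float)")
plt.ylabel("compiled TIDL output")
plt.savefig("comparison.png")
```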

This is the link to my model:

drive.google.com/.../1mDr9f7LKvgiXw9hf6fpAuHZYWCuCIw7U

This is my `delegate_options` config (github.com/.../onnxrt_ep.py) for compilation; this config compiles successfully:

{'tidl_tools_path': '/home/root/tidl_tools',
 'artifacts_folder': '../../../model-artifacts//modelResize_onnx/',
 'platform': 'J7',
 'version': '7.2',
 'tensor_bits': 8,
 'debug_level': 0,
 'max_num_subgraphs': 16,
 'deny_list': '',
 'deny_list:layer_type': '',
 'deny_list:layer_name': '',
 'model_type': '',
 'accuracy_level': 1,
 'advanced_options:calibration_frames': 1,
 'advanced_options:calibration_iterations': 1,
 'advanced_options:output_feature_16bit_names_list': '',
 'advanced_options:params_16bit_names_list': '',
 'advanced_options:mixed_precision_factor': -1,
 'advanced_options:quantization_scale_type': 0,
 'advanced_options:high_resolution_optimization': 0,
 'advanced_options:pre_batchnorm_fold': 1,
 'ti_internal_nc_flag': 1601,
 'advanced_options:activation_clipping': 1,
 'advanced_options:weight_clipping': 1,
 'advanced_options:bias_calibration': 1,
 'advanced_options:add_data_convert_ops': 3,
 'advanced_options:channel_wise_quantization': 0,
 'advanced_options:inference_mode': 0,
 'advanced_options:num_cores': 1}
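For completeness, this is roughly how that config is passed in for compilation. It is a minimal sketch following the onnxrt_ep.py pattern, not the full script: paths and the input file name are placeholders, and the dict below only repeats a few of the keys listed above.

```python
import numpy as np
import onnxruntime as ort

delegate_options = {
    "tidl_tools_path": "/home/root/tidl_tools",
    "artifacts_folder": "../../../model-artifacts/modelResize_onnx/",
    "platform": "J7",
    "version": "7.2",
    "tensor_bits": 8,
    "advanced_options:calibration_frames": 1,
    "advanced_options:calibration_iterations": 1,
    # ... remaining options exactly as listed above
}

# TIDLCompilationProvider writes the .bin artifacts into artifacts_folder
sess = ort.InferenceSession("../../../models/public/onnx/resizeStrip.onnx",
                            providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
                            provider_options=[delegate_options, {}])

# Calibration pass: one frame, one iteration, using the single calibration image
img = np.load("calib_image.npy")  # placeholder: preprocessed calibration input
sess.run(None, {sess.get_inputs()[0].name: img})
```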

By the way, if I change the following options just for testing:

"advanced_options:quantization_scale_type": 0,

"advanced_options:add_data_convert_ops": 3,

it fails to compile, with this error:

Running_Model :  modelResize_onnx  


Running shape inference on model ../../../models/public/../onnx/resizeStrip.onnx 


Preliminary subgraphs created = 1 
Final number of subgraphs created are : 1, - Offloaded Nodes - 143, Total Nodes - 143 

 ************** Frame index 1 : Running float import ************* 
INFORMATION: [TIDL_ResizeLayer] Resize_82 Any resize ratio which is power of 2 and greater than 4 will be placed by combination of 4x4 resize layer and 2x2 resize layer. For example a 8x8 resize will be replaced by 4x4 resize followed by 2x2 resize.
INFORMATION: [TIDL_ResizeLayer] Resize_85 Any resize ratio which is power of 2 and greater than 4 will be placed by combination of 4x4 resize layer and 2x2 resize layer. For example a 8x8 resize will be replaced by 4x4 resize followed by 2x2 resize.
****************************************************
**          2 WARNINGS          0 ERRORS          **
****************************************************
The soft limit is 2048
The hard limit is 2048
MEM: Init ... !!!
MEM: Init ... Done !!!
 0.0s:  VX_ZONE_INIT:Enabled
 0.5s:  VX_ZONE_ERROR:Enabled
 0.6s:  VX_ZONE_WARNING:Enabled
 0.1818s:  VX_ZONE_INIT:[tivxInit:185] Initialization Done !!!
 0.157478s:  VX_ZONE_ERROR:[tivxAlgiVisionCreate:335] Calling ialg.algInit failed with status = -1121
Segmentation fault (core dumped)

Could you help me compile the model?

Thanks!