Hi
I am trying a custom instance segmentation model. I exported it to ONNX and was able to compile it successfully with the edgeai-tidl tools, version 09_02_07.
After compiling, I ran it on the host machine inside the TIDL Docker container, both with and without TIDL offload, and it ran fine every time; I can visualize the results too.
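For reference, this is roughly how I create the two sessions, following the edgeai-tidl-tools examples (the paths and model name below are placeholders, not my exact setup):

import onnxruntime as rt

so = rt.SessionOptions()

# With TIDL offload: point the delegate at the compiled model artifacts
tidl_options = {
    'tidl_tools_path': '/path/to/tidl_tools',        # placeholder
    'artifacts_folder': '/path/to/model-artifacts',  # placeholder
}
sess_offload = rt.InferenceSession(
    'yolact_resnet18.onnx',  # placeholder model name
    providers=['TIDLExecutionProvider', 'CPUExecutionProvider'],
    provider_options=[tidl_options, {}],
    sess_options=so)

# Without offload: plain CPU execution for comparison
sess_cpu = rt.InferenceSession(
    'yolact_resnet18.onnx',
    providers=['CPUExecutionProvider'])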
Then I tried the same model on the AM69A board using the compiled model artifacts and found I am not getting any detections. While debugging the preprocessing, I found this:
def infer_image(sess, image_files, config):
    input_details = sess.get_inputs()
    input_name = input_details[0].name
    floating_model = (input_details[0].type == 'tensor(float)')
    height = input_details[0].shape[2]
    width = input_details[0].shape[3]
    channel = input_details[0].shape[1]
    batch = input_details[0].shape[0]
    print("height, width, channel, batch, floating_model: ", height, width, channel, batch, floating_model)
    # Output: height, width, channel, batch, floating_model:  550 550 3 1 True
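For context, the input is then preprocessed and fed to the session along these lines (a simplified sketch, not my exact pipeline; the 550x550 size comes from the printout above, and the image file name is a placeholder):

import cv2
import numpy as np

def preprocess(image_path, height, width):
    # Read and resize to the model's expected input size (550x550 here)
    img = cv2.imread(image_path)
    img = cv2.resize(img, (width, height))
    # BGR -> RGB, HWC -> NCHW, cast to float32
    img = img[:, :, ::-1].transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
    return img

input_data = preprocess('sample.jpg', 550, 550)  # placeholder image
outputs = sess.run(None, {input_name: input_data})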
The model input is reported as floating point, not int8, even though I set tensor_bits = 8 while compiling. Not sure if I am missing anything else. I am attaching the model here too; it is trained on the MS COCO dataset.
My settings in common_utils.py:
tensor_bits = 8
debug_level = 0
max_num_subgraphs = 16
accuracy_level = 1
calibration_frames = 2
calibration_iterations = 5
output_feature_16bit_names_list = ""  # "conv1_2, fire9/concat_1"
params_16bit_names_list = ""  # "fire3/squeeze1x1_2"
mixed_precision_factor = -1
quantization_scale_type = 0
high_resolution_optimization = 0
pre_batchnorm_fold = 1
inference_mode = 0
num_cores = 1
ti_internal_nc_flag = 1601
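My understanding is that these values end up in the delegate options passed to the compilation session, roughly like this (following the edgeai-tidl-tools examples; the key names reflect my reading of common_utils.py, and the paths and model name are placeholders):

import os
import onnxruntime as rt

compile_options = {
    'tidl_tools_path': os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder': '/path/to/model-artifacts',  # placeholder
    'tensor_bits': tensor_bits,                      # 8 -> int8 quantization inside the TIDL subgraph
    'accuracy_level': accuracy_level,
    'debug_level': debug_level,
    'advanced_options:calibration_frames': calibration_frames,
    'advanced_options:calibration_iterations': calibration_iterations,
    'advanced_options:quantization_scale_type': quantization_scale_type,
}
so = rt.SessionOptions()
sess = rt.InferenceSession(
    'yolact_resnet18.onnx',  # placeholder model name
    providers=['TIDLCompilationProvider', 'CPUExecutionProvider'],
    provider_options=[compile_options, {}],
    sess_options=so)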
Model - 8182.yolact_resnet18_54_400000_v2.zip
Thanks
Akhilesh