This thread has been locked.


TDA4VM: Edgeai-benchmark yolov5 model compilation error.

Part Number: TDA4VM

Hi,

I have the Jacinto J7 EVM kit.

Using the https://github.com/TexasInstruments/edgeai-yolov5 repo, I trained yolov5s6 on my custom data set. After the ONNX conversion I have two files: "best.onnx" and "best.prototxt".

The contents of the "best.prototxt" file look like this:

name: "yolo_v3"
tidl_yolo {
  yolo_param {
    input: "370"
    anchor_width: 19.0
    anchor_width: 44.0
    anchor_width: 38.0
    anchor_height: 27.0
    anchor_height: 40.0
    anchor_height: 94.0
  }
  yolo_param {
    input: "426"
    anchor_width: 96.0
    anchor_width: 86.0
    anchor_width: 180.0
    anchor_height: 68.0
    anchor_height: 152.0
    anchor_height: 137.0
  }
  yolo_param {
    ...
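As a side note, the per-head anchors in a meta-arch prototxt like this can be pulled out programmatically for a quick sanity check. A minimal sketch with plain regexes; `parse_yolo_params` is a hypothetical helper for illustration, not part of any TI tool:

```python
import re

def parse_yolo_params(prototxt_text):
    """Extract the input name and anchor widths/heights of each
    yolo_param block from a TIDL meta-arch prototxt string."""
    heads = []
    # each yolo_param { ... } block contains no nested braces,
    # so a non-greedy match up to the first '}' is sufficient
    for block in re.findall(r'yolo_param\s*\{([^}]*)\}', prototxt_text):
        heads.append({
            'input': re.search(r'input:\s*"([^"]+)"', block).group(1),
            'anchor_width': [float(v) for v in re.findall(r'anchor_width:\s*([\d.]+)', block)],
            'anchor_height': [float(v) for v in re.findall(r'anchor_height:\s*([\d.]+)', block)],
        })
    return heads

# the two heads visible in the prototxt above
sample = '''
name: "yolo_v3"
tidl_yolo {
  yolo_param {
    input: "370"
    anchor_width: 19.0
    anchor_width: 44.0
    anchor_width: 38.0
    anchor_height: 27.0
    anchor_height: 40.0
    anchor_height: 94.0
  }
  yolo_param {
    input: "426"
    anchor_width: 96.0
    anchor_width: 86.0
    anchor_width: 180.0
    anchor_height: 68.0
    anchor_height: 152.0
    anchor_height: 137.0
  }
}
'''

heads = parse_yolo_params(sample)
print(heads[0]['input'], heads[0]['anchor_width'])  # → 370 [19.0, 44.0, 38.0]
```

The input names found this way ("370", "426", ...) are the same layer names that appear later in `output_feature_16bit_names_list`.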

Using the https://github.com/TexasInstruments/edgeai-benchmark repo, I want to compile my custom model to run on the TDA4VM.

For this, I set my pipeline_config settings in benchmark_custom.py as follows.

'imagedet-best': dict(
    task_type='detection',
    calibration_dataset=imagedet_calib_dataset,
    input_dataset=imagedet_val_dataset,
    preprocess=preproc_transforms.get_transform_onnx((640, 640), (640, 640), resize_with_pad=[True], backend='cv2'),
    session=sessions.ONNXRTSession(
        **onnx_session_cfg,
        runtime_options=utils.dict_update(settings.runtime_options_onnx_np2(), {
            'object_detection:meta_arch_type': 6,
            'object_detection:meta_layers_names_list': '/home/sefau18/edgeai-modelzoo/models/vision/detection/coco/bests6v2/best.prototxt',
            'advanced_options:output_feature_16bit_names_list': '370, 426, 482, 538'
        }),
        model_path='/home/sefau18/edgeai-modelzoo/models/vision/detection/coco/bests6v2/best.onnx'
    ),
    postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()),
    metric=dict(label_offset_pred=datasets.coco_det_label_offset_90to90()),
    model_info=dict(metric_reference={'accuracy_ap[.5:.95]%': 45.0})
)

Here is the error I got:

(benchmark) sefau18@ubuntu:~/edgeai-benchmark$ ./run_custom_pc.sh
Entering: ./work_dirs/modelartifacts/8bits/cl-3420_tvmdlr_imagenet1k_gluoncv-mxnet_resnet50_v1d-symbol_json.tar.gz.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/cl-3420_tvmdlr_imagenet1k_gluoncv-mxnet_resnet50_v1d-symbol_json.tar.gz.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/cl-3410_tvmdlr_imagenet1k_gluoncv-mxnet_mobilenetv2_1.0-symbol_json.tar.gz.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/cl-3410_tvmdlr_imagenet1k_gluoncv-mxnet_mobilenetv2_1.0-symbol_json.tar.gz.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/cl-3420_tvmdlr_imagenet1k_gluoncv-mxnet_resnet50_v1d-symbol_json.tar.gz.link.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/cl-3420_tvmdlr_imagenet1k_gluoncv-mxnet_resnet50_v1d-symbol_json.tar.gz.link.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/od-5040_tvmdlr_coco_gluoncv-mxnet_ssd_512_mobilenet1.0_coco-symbol_json.tar.gz.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/od-5040_tvmdlr_coco_gluoncv-mxnet_ssd_512_mobilenet1.0_coco-symbol_json.tar.gz.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/cl-3410_tvmdlr_imagenet1k_gluoncv-mxnet_mobilenetv2_1.0-symbol_json.tar.gz.link.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/cl-3410_tvmdlr_imagenet1k_gluoncv-mxnet_mobilenetv2_1.0-symbol_json.tar.gz.link.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/od-5040_tvmdlr_coco_gluoncv-mxnet_ssd_512_mobilenet1.0_coco-symbol_json.tar.gz.link.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/od-5040_tvmdlr_coco_gluoncv-mxnet_ssd_512_mobilenet1.0_coco-symbol_json.tar.gz.link.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/ss-5818_tvmdlr_ti-robokit_edgeai-tv_deeplabv3plus_mobilenetv2_tv_edgeailite_robokit-zed1hd_768x432_qat-p2_onnx.tar.gz.link.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/ss-5818_tvmdlr_ti-robokit_edgeai-tv_deeplabv3plus_mobilenetv2_tv_edgeailite_robokit-zed1hd_768x432_qat-p2_onnx.tar.gz.link.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/ss-5720_tvmdlr_cocoseg21_edgeai-tv_fpn_aspp_regnetx800mf_edgeailite_512x512_20210405_onnx.tar.gz.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/ss-5720_tvmdlr_cocoseg21_edgeai-tv_fpn_aspp_regnetx800mf_edgeailite_512x512_20210405_onnx.tar.gz.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/cl-3480_tvmdlr_imagenet1k_gluoncv-mxnet_hrnet_w18_small_v2_c-symbol_json.tar.gz.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/cl-3480_tvmdlr_imagenet1k_gluoncv-mxnet_hrnet_w18_small_v2_c-symbol_json.tar.gz.link.link/artifacts: Not a directory
Entering: ./work_dirs/modelartifacts/8bits/od-5030_tvmdlr_coco_gluoncv-mxnet_ssd_512_resnet50_v1_coco-symbol_json.tar.gz.link.link
run_set_target_device.sh: line 59: cd: ./work_dirs/modelartifacts/8bits/od-5030_tvmdlr_coco_gluoncv-mxnet_ssd_512_resnet50_v1_coco-symbol_json.tar.gz.link.link/artifacts: Not a directory
...

Hand-editing the prototxt file to match the original example does not seem like the right approach.

Is there a step-by-step documentation on how to compile a custom yolov5 model and perform inference using "edgeai apps" on the TDA4VM?

Thanks already for your help.

  • Can you zip your ONNX model and prototxt and attach them here? We can try them out at our end.

  • Hi Mathew,


    Thank you for the answer.
    https://github.com/TexasInstruments/edgeai-yolov5 

    Below are the files you requested, for two different training runs I did with the same data set.

    1- $ python3 train.py --data data.yaml --cfg yolov5s6.yaml --weights 'yolov5s6.pt' --batch-size 128

    yolov5s6.yaml file

    only the nc value was edited

    # Parameters
    nc: 36  # number of classes
    depth_multiple: 0.33  # model depth multiple
    width_multiple: 0.50  # layer channel multiple
    anchors:
      - [ 19,27, 44,40, 38,94 ]  # P3/8
      - [ 96,68, 86,152, 180,137 ]  # P4/16
      - [ 140,301, 303,264, 238,542 ]  # P5/32
      - [ 436,615, 739,380, 925,792 ]  # P6/64

    # YOLOv5 backbone
    backbone:
      # [from, number, module, args]
      [ [ -1, 1, Focus, [ 64, 3 ] ],  # 0-P1/2
        [ -1, 1, Conv, [ 128, 3, 2 ] ],  # 1-P2/4
        [ -1, 3, C3, [ 128 ] ],
        [ -1, 1, Conv, [ 256, 3, 2 ] ],  # 3-P3/8
        [ -1, 9, C3, [ 256 ] ],
        [ -1, 1, Conv, [ 512, 3, 2 ] ],  # 5-P4/16
        [ -1, 9, C3, [ 512 ] ],
        [ -1, 1, Conv, [ 768, 3, 2 ] ],  # 7-P5/32
        ...
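The yaml stores each detection level's anchors as a flat [w1,h1, w2,h2, w3,h3] list, while the prototxt lists the widths and the heights separately per head. A small sketch of that mapping (`split_anchors` is an illustrative helper, not from the repo):

```python
def split_anchors(flat):
    """Split a flat YOLOv5 anchor list [w1,h1, w2,h2, ...] into the
    separate width/height lists used by the TIDL prototxt."""
    widths = [float(v) for v in flat[0::2]]   # every even index is a width
    heights = [float(v) for v in flat[1::2]]  # every odd index is a height
    return widths, heights

# the P3/8 level from the yaml above
widths, heights = split_anchors([19, 27, 44, 40, 38, 94])
print(widths)   # → [19.0, 44.0, 38.0]
print(heights)  # → [27.0, 40.0, 94.0]
```

These are exactly the anchor_width/anchor_height values shown for input "370" in the prototxt above.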

    best.zip

    2- $ python3 train.py --data data.yaml --cfg yolov5s6.yaml --weights '' --batch-size 128

    Training was started without initial weights, and the anchors entry in the yaml file was changed. The anchor values were created automatically by the software during conversion.
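For context, YOLOv5's autoanchor derives anchors by clustering the dataset's label box sizes (the real implementation uses k-means plus a genetic refinement step). A simplified, illustrative sketch of the clustering idea only, not the actual edgeai-yolov5 code:

```python
def kmeans_anchors(box_wh, k, iters=20):
    """Naive k-means on (w, h) label box sizes; a simplified stand-in
    for YOLOv5's autoanchor, which also applies a genetic refinement."""
    boxes = sorted(box_wh)
    # deterministic init: k evenly spaced boxes from the sorted list
    centers = boxes[::max(1, len(boxes) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            # assign each box to the nearest center (squared distance)
            i = min(range(k), key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centers)

# toy boxes falling into three obvious size groups
boxes = [(20, 30), (22, 28), (90, 150), (100, 140), (400, 600), (380, 620)]
print(kmeans_anchors(boxes, k=3))  # → [(21.0, 29.0), (95.0, 145.0), (390.0, 610.0)]
```

With `anchors: 3` in the yaml, autoanchor generates three anchors per detection level in this fashion from the training labels.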

    yolov5s6.yaml file 

    only the anchors entry was edited

    # Parameters
    nc: 36  # number of classes
    depth_multiple: 0.33  # model depth multiple
    width_multiple: 0.50  # layer channel multiple
    anchors: 3

    # YOLOv5 backbone
    backbone:
      # [from, number, module, args]
      [ [ -1, 1, Focus, [ 64, 3 ] ],  # 0-P1/2
        [ -1, 1, Conv, [ 128, 3, 2 ] ],  # 1-P2/4
        [ -1, 3, C3, [ 128 ] ],
        [ -1, 1, Conv, [ 256, 3, 2 ] ],  # 3-P3/8
        [ -1, 9, C3, [ 256 ] ],
        [ -1, 1, Conv, [ 512, 3, 2 ] ],  # 5-P4/16
        [ -1, 9, C3, [ 512 ] ],
        [ -1, 1, Conv, [ 768, 3, 2 ] ],  # 7-P5/32
        [ -1, 3, C3, [ 768 ] ],
        [ -1, 1, Conv, [ 1024, 3, 2 ] ],  # 9-P6/64
        [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
        [ -1, 3, C3, [ 1024, False ] ],  # 11
      ]
    ...

    2best.zip

    Thanks for the help.

  • I have added an example for this model in the custom script:

    https://github.com/TexasInstruments/edgeai-benchmark/blob/master/scripts/benchmark_custom.py#L260

    (notice that input_optimization is switched off for this model - you can put the path of your model from 2best.zip here - I tried it and it worked)

    It is also possible to switch off input_optimization globally for all models.

    input_optimization: False

    in edgeai-benchmark/settings_base.yaml

    (https://github.com/TexasInstruments/edgeai-benchmark/blob/master/settings_base.yaml#L88)