SK-TDA4VM: The model is compiled and verified on the EVM, but running this script fails.

Part Number: SK-TDA4VM

Hi,

My SDK version is Processor SDK Linux for Edge AI 08.04.00. The model compiles successfully on the PC, and inferencing also completes normally on the EVM using the script [/opt/edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py].

But I want to run inference with the script [root@tda4vm-sk:/opt/edge_ai_apps/apps_python# ./app_edgeai.py ../configs/object_detection.yaml]. So I created a new folder under model_zoo and copied the od-ort-ssd-lite_mobilenetv2_fpn model into it.

When I ran the script [./app_edgeai.py ../configs/object_detection.yaml], there was an error.

I found that my param.yaml file contains much less content than the param.yaml of the SDK's own model.

I manually modified my param.yaml to follow the format of the param.yaml under [/opt/model_zoo/ONR-OD-8030-ssd-lite-mobv2-fpn-mmdet-coco-512x512]. The script now runs, but no detection boxes are shown.
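
For reference, here is a minimal sketch (assuming PyYAML is installed; the first path is a hypothetical name for my copied folder) of how the two param.yaml files can be diffed to see which sections mine is missing:

    import yaml

    # Hypothetical path for the folder I copied into /opt/model_zoo; adjust as needed.
    my_param = "/opt/model_zoo/od-ort-ssd-lite_mobilenetv2_fpn/param.yaml"
    # param.yaml shipped with the SDK's own model.
    sdk_param = "/opt/model_zoo/ONR-OD-8030-ssd-lite-mobv2-fpn-mmdet-coco-512x512/param.yaml"

    with open(my_param) as f:
        mine = yaml.safe_load(f)
    with open(sdk_param) as f:
        sdk = yaml.safe_load(f)

    # Report top-level sections present in the SDK file but missing from mine,
    # then drill one level down for sections both files share.
    print("missing top-level keys:", sorted(set(sdk) - set(mine)))
    for key in sorted(set(sdk) & set(mine)):
        if isinstance(sdk[key], dict) and isinstance(mine[key], dict):
            missing = sorted(set(sdk[key]) - set(mine[key]))
            if missing:
                print(f"missing under '{key}':", missing)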

My questions:

1. How is the param.yaml file obtained? Is it supposed to be modified manually?

2. How can I solve the problem shown in the last picture?

Thanks,

Maiunlei

  • Hi Maiunlei,

    The param.yaml files are generated as a product of model compilation, so they are very model-specific, and it is not recommended to modify them manually.

    However, I could not locate any documentation that follows the exact experiment you tried above. We have verified the EVM-based inferencing flow as described in the section

    Model Inference on EVM : https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md

    If you want to use app_edgeai.py to run a custom-compiled model, you can generate the model artifacts and then use them for edge inferencing.
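
    For example, here is a small sketch (just an illustration; the custom folder name below is hypothetical, and the exact layout can vary by SDK release) to compare the directory layout of your packaged model with one of the SDK's own model-zoo entries, which typically contain a param.yaml plus model/ and artifacts/ sub-directories:

        import os

        def show_tree(root, depth=2):
            # Print the directory tree of 'root' down to 'depth' levels.
            base = root.rstrip(os.sep).count(os.sep)
            for cur, dirs, files in os.walk(root):
                level = cur.rstrip(os.sep).count(os.sep) - base
                if level >= depth:
                    dirs[:] = []  # do not descend further
                    continue
                indent = "  " * level
                print(f"{indent}{os.path.basename(cur)}/")
                for name in sorted(files):
                    print(f"{indent}  {name}")

        # An SDK-provided entry versus the custom one (hypothetical folder name).
        show_tree("/opt/model_zoo/ONR-OD-8030-ssd-lite-mobv2-fpn-mmdet-coco-512x512")
        show_tree("/opt/model_zoo/my-custom-od-model")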

    I would recommend taking a look at the links below for custom model compilation.

    Edgeai Benchmark : https://github.com/TexasInstruments/edgeai-benchmark#readme

    Edgeai ModelMaker : https://github.com/TexasInstruments/edgeai-modelmaker#readme

    Regards,

    Pratik

  • Hi Pratik,

    Following https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md, the model compiled successfully. I copied the model artifacts and models from the PC to the EVM. When I ran the script [./app_edgeai.py ../configs/object_detection.yaml], there was an error.

    How should the param.yaml file be structured so that it works with the script [./app_edgeai.py ../configs/object_detection.yaml]?
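
    For context, my rough understanding (only a sketch; the key names in object_detection.yaml may differ between SDK versions) is that each model entry in the demo config points at a directory under /opt/model_zoo, and app_edgeai.py then reads the param.yaml found inside that directory:

        import os
        import yaml

        # Demo config used by app_edgeai.py.
        demo_cfg = "/opt/edge_ai_apps/configs/object_detection.yaml"
        with open(demo_cfg) as f:
            cfg = yaml.safe_load(f)

        # Assumption: every entry under 'models' carries a 'model_path' that points
        # to a model-zoo style directory containing param.yaml and the artifacts.
        for name, model in (cfg.get("models") or {}).items():
            model_dir = model.get("model_path", "")
            param_file = os.path.join(model_dir, "param.yaml")
            print(name, model_dir, "param.yaml found:", os.path.isfile(param_file))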

    Regards,

    Maiunlei.

  • Hi Pratik,

    My problem is similar to the problem in this link: e2e.ti.com/.../4164033


    However, the link below fails. Can you help me find the file (the one circled in red in the screenshot)?

    Thanks,

    Maiunlei

  • Hi,

    Step 1: I downloaded the yolov5s model from [http://software-dl.ti.com/jacinto7/esd/modelzoo/gplv3/08_04_00_12/edgeai-yolov5/pretrained_models/modelartifacts/8bits/od-8100_onnxrt_coco_edgeai-yolov5_yolov5s6_640_ti_lite_37p4_56p0_onnx.tar.gz].

    Step 2: I created the folders [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits] and [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown_package/8bits] and put the model files into the modelartifacts_myown/8bits folder, like this.

    Step 3: I modified the settings_base.yaml file.

    Step 4: I modified the benchmark_custom.py file (note the hard-coded model_path and prototxt paths; see the path-check sketch after the log below):

            'imagedet-7': dict(
                task_type='detection',
                calibration_dataset=imagedet_calib_dataset,
                input_dataset=imagedet_val_dataset,
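                # resize to 640x640 with letterbox-style padding; 114 is the grey pad value YOLOv5 conventionally uses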
                preprocess=preproc_transforms.get_transform_onnx(640, 640,  resize_with_pad=True, backend='cv2', pad_color=[114,114,114]),
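                # input_scale of ~0.00392 is 1/255, mapping 8-bit pixel values into the [0, 1] range; a mean of 0 leaves them unshifted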
                session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg, input_optimization=False, input_mean=(0.0, 0.0, 0.0), input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
                    runtime_options=settings.runtime_options_onnx_np2(
                        det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                         'object_detection:meta_layers_names_list':f'/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_metaarch.prototxt',
                         'advanced_options:output_feature_16bit_names_list':''
                         }),
                    model_path=f'/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx'),
                postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()), #TODO: check this
                metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
                model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':37.4})
            ),
    Step 5: I ran ./run_custom_pc.sh and got an error.
    (py36) root@VM-8-5-ubuntu:/home/machunlei/opt/edgeai-benchmark-r8.4# ls work_dirs/
    modelartifacts modelartifacts_myown/ modelartifacts_myown_package/ readme.txt
    (py36) root@VM-8-5-ubuntu:/home/machunlei/opt/edgeai-benchmark-r8.4# ls work_dirs/modelartifacts_myown/8bits/
    yolov5s6_640_ti_lite_37p4_56p0.onnx yolov5s6_640_ti_lite_metaarch.prototxt
    (py36) root@VM-8-5-ubuntu:/home/machunlei/opt/edgeai-benchmark-r8.4# ./run_custom_pc.sh
    find: ‘./work_dirs/modelartifacts/8bits/’: No such file or directory
    TIDL_TOOLS_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
    LD_LIBRARY_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
    PYTHONPATH=:
    ===================================================================
    work_dir = ./work_dirs/modelartifacts_myown/8bits
    packaged_dir = ./work_dirs/modelartifacts_myown_package/8bits
    loading annotations into memory...
    Done (t=0.72s)
    creating index...
    index created!
    loading annotations into memory...
    Done (t=0.79s)
    creating index...
    index created!
    configs to run: ['imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx']
    number of configs: 1
    TASKS | | 0% 0/1| [< ]
    INFO:20230222-193156: starting process on parallel_device - 0 0%| || 0/1 [00:00<?, ?it/s]

    INFO:20230222-193206: starting - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx
    INFO:20230222-193206: model_path - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx
    INFO:20230222-193206: model_file - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/yolov5s6_640_ti_lite_37p4_56p0.onnx
    Downloading 1/1: /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx
    Download done for /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx
    Traceback (most recent call last):
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/pipelines/pipeline_runner.py", line 154, in _run_pipeline
    result = cls._run_pipeline_impl(settings, pipeline_config, description)
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/pipelines/pipeline_runner.py", line 125, in _run_pipeline_impl
    accuracy_result = accuracy_pipeline(description)
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 103, in __call__
    self.session.start()
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/sessions/onnxrt_session.py", line 47, in start
    super().start()
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/sessions/basert_session.py", line 140, in start
    self.get_model()
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/sessions/basert_session.py", line 413, in get_model
    onnx.shape_inference.infer_shapes_path(model_file0, model_file0)
    File "/root/apps/conda/envs/py36/lib/python3.6/site-packages/onnx/shape_inference.py", line 60, in infer_shapes_path
    C.infer_shapes_path(model_path, output_path, check_type, strict_mode)
    onnx.onnx_cpp2py_export.checker.ValidationError: Unable to open model file:/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/yolov5s6_640_ti_lite_37p4_56p0.onnx. Please check if it is a valid file.
    Unable to open model file:/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/yolov5s6_640_ti_lite_37p4_56p0.onnx. Please check if it is a valid file.
    TASKS | 100%|██████████|| 1/1 [00:13<00:00, 13.07s/it]
    TASKS | 100%|██████████|| 1/1 [00:13<00:00, 13.06s/it]

    packaging artifacts to ./work_dirs/modelartifacts_myown_package/8bits please wait...
    run_dir=== ./work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx
    WARNING:20230222-193207: could not package - ./work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx
    run_dir=== ./work_dirs/modelartifacts_myown/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx
    run_dir=== ./work_dirs/modelartifacts_myown/8bits/yolov5s6_640_ti_lite_metaarch.prototxt
    packaged_artifacts_dict====== {}
    Traceback (most recent call last):
    File "./scripts/benchmark_custom.py", line 299, in <module>
    tools.run_package(settings, work_dir, packaged_dir, custom_model=True)
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/tools/run_package.py", line 42, in run_package
    package_artifacts(settings, work_dir, out_dir, include_results=include_results, custom_model=custom_model)
    File "/home/machunlei/opt/edgeai-benchmark-r8.4/edgeai_benchmark/tools/run_package.py", line 271, in package_artifacts
    packaged_artifacts_keys = list(packaged_artifacts_dict.values())[0].keys()
    IndexError: list index out of range
    -------------------------------------------------------------------
    ===================================================================
    settings: {'include_files': None, 'pipeline_type': 'accuracy', 'num_frames': 10000, 'calibration_frames': 25, 'calibration_iterations': 25, 'configs_path': './configs', 'models_path': '../edgeai-modelzoo/models', 'modelartifacts_path': './work_dirs/modelartifacts_myown', 'datasets_path': './dependencies/datasets', 'target_device': None, 'target_machine': 'pc', 'run_suffix': None, 'parallel_devices': [0], 'tensor_bits': 8, 'runtime_options': None, 'run_import': True, 'run_inference': True, 'run_missing': True, 'detection_threshold': 0.3, 'detection_top_k': 200, 'detection_nms_threshold': None, 'detection_keep_top_k': None, 'save_output': False, 'num_output_frames': 50, 'model_selection': None, 'model_shortlist': None, 'model_exclusion': None, 'task_selection': None, 'runtime_selection': None, 'session_type_dict': {'onnx': 'onnxrt', 'tflite': 'tflitert', 'mxnet': 'tvmdlr'}, 'dataset_type_dict': {'imagenet': 'imagenetv2c'}, 'dataset_selection': None, 'dataset_loading': True, 'config_range': None, 'enable_logging': True, 'verbose': False, 'capture_log': False, 'experimental_models': False, 'rewrite_results': False, 'with_udp': True, 'flip_test': False, 'model_transformation_dict': None, 'report_perfsim': False, 'tidl_offload': True, 'input_optimization': None, 'run_dir_tree_depth': None, 'settings_file': 'settings_import_on_pc.yaml', 'basic_keys': ['include_files', 'pipeline_type', 'num_frames', 'calibration_frames', 'calibration_iterations', 'configs_path', 'models_path', 'modelartifacts_path', 'datasets_path', 'target_device', 'target_machine', 'run_suffix', 'parallel_devices', 'tensor_bits', 'runtime_options', 'run_import', 'run_inference', 'run_missing', 'detection_threshold', 'detection_top_k', 'detection_nms_threshold', 'detection_keep_top_k', 'save_output', 'num_output_frames', 'model_selection', 'model_shortlist', 'model_exclusion', 'task_selection', 'runtime_selection', 'session_type_dict', 'dataset_type_dict', 'dataset_selection', 'dataset_loading', 'config_range', 'enable_logging', 'verbose', 'capture_log', 'experimental_models', 'rewrite_results', 'with_udp', 'flip_test', 'model_transformation_dict', 'report_perfsim', 'tidl_offload', 'input_optimization', 'run_dir_tree_depth', 'settings_file'], 'dataset_cache': None}
    no results found - no report to generate.
    Report generated at ./work_dirs/modelartifacts_myown
    -------------------------------------------------------------------
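
    One thing I notice in the log above is that its very first line reports ./work_dirs/modelartifacts/8bits/ as missing, while my .onnx and .prototxt sit in ./work_dirs/modelartifacts_myown/8bits/, so the model_path hard-coded in step 4 may not point at the real files. Here is a small sketch (paths copied from the step-4 config) to check them before launching run_custom_pc.sh:

        import os

        # Paths exactly as hard-coded in the 'imagedet-7' config of step 4.
        paths = [
            "/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_37p4_56p0.onnx",
            "/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_640_ti_lite_metaarch.prototxt",
        ]
        for p in paths:
            print("OK     " if os.path.isfile(p) else "MISSING", p)
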
    My steps seem correct to me. Please help me solve this problem.
  • Hi Pratik,

    I did the test again. I copied the .onnx and .prototxt files to [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/].

    I ran ./run_custom_pc.sh on the PC. The result is:

    The versions match.

    The .onnx and .prototxt files are from http://software-dl.ti.com/jacinto7/esd/modelzoo/gplv3/08_04_00_12/edgeai-yolov5/pretrained_models/modelartifacts/8bits/od-8100_onnxrt_coco_edgeai-yolov5_yolov5s6_640_ti_lite_37p4_56p0_onnx.tar.gz

    I don't know how to solve this problem. Please give me some suggestions.

    Regards,

    Maiunlei

  • Hi Pratik,

    I downloaded the .prototxt and .onnx files from https://github.com/TexasInstruments/edgeai-yolov5/blob/r8.4/pretrained_models/models/detection/coco/edgeai-yolov5.

    I ran ./run_custom_pc.sh. The error is:

    (py36) root@VM-8-5-ubuntu:/home/machunlei/opt/edgeai-benchmark-r8.4# ./run_custom_pc.sh
    find: ‘./work_dirs/modelartifacts/8bits/’: No such file or directory
    TIDL_TOOLS_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
    LD_LIBRARY_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
    PYTHONPATH=:
    ===================================================================
    work_dir = ./work_dirs/modelartifacts_myown/8bits
    packaged_dir = ./work_dirs/modelartifacts_myown_package/8bits
    loading annotations into memory...
    Done (t=0.72s)
    creating index...
    index created!
    loading annotations into memory...
    Done (t=0.69s)
    creating index...
    index created!
    configs to run: ['imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx']
    number of configs: 1
    TASKS | | 0% 0/1| [< ]
    INFO:20230223-112750: starting process on parallel_device - 0 0%| || 0/1 [00:00<?, ?it/s]

    INFO:20230223-112801: starting - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx
    INFO:20230223-112801: model_path - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_384_ti_lite_32p8_51p2.onnx
    INFO:20230223-112801: model_file - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx/model/yolov5s6_384_ti_lite_32p8_51p2.onnx

    INFO:20230223-112801: running - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx
    INFO:20230223-112801: pipeline_config - {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f10b7d75080>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f10a1a4fe80>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7f10bd2ddf98>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x7f10a19c9630>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7f10a19c96a0>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31, 27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43, 39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56, 51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72, 63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85, 75: 86, 76: 87, 77: 88, 78: 89, 79: 90, 80: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 37.4}}}
    INFO:20230223-112801: import - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx - this may take some time...Error - libvx_tidl_rt.so: cannot map zero-fill pages
    python3: tidl_onnxRtImport_EP.cpp:142: bool TIDL_populateOptions(std::vector<std::pair<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> > >): Assertion `data_->infer_ops.lib' failed.

    The program gets stuck here. The error comes from tidl_onnxRtImport_EP.cpp.

  • Hi Maiunlei,

    Could you please create a new E2E thread so that I can connect you with the relevant domain expert?

    Please consider adding the last two replies and any other required details to it.

    Closing this issue.

    Regards,

    Pratik