
SK-TDA4VM: Failed to compile YOLOv5 pretrained model

Part Number: SK-TDA4VM

I failed to compile the pretrained YOLOv5 model from edgeai-yolov5 (https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/detection/coco/edgeai-yolov5)

and got the error below when following this Jupyter notebook: github.com/.../tutorial_detection.ipynb

Please check if I missed anything.

Edited pipeline:

pipeline_configs = {
    'od-mlpefmnv1': dict(
        task_type='detection',
        calibration_dataset=calib_dataset,
        input_dataset=val_dataset,
        preprocess=preproc_transforms.get_transform_onnx((640,640), (640,640), backend='cv2'),
        session=session_type(**onnx_session_cfg,
            runtime_options=runtime_options,
            model_path='/home/root/notebooks/my_folder/yolov5s6_640_ti_lite_37p4_56p0.onnx'),
        postprocess=postproc_transforms.get_transform_detection_onnx(),
        metric=dict(label_offset_pred=datasets.coco_det_label_offset_90to90()),
        model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':23.0})
    )
}
print(pipeline_configs)
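
Note that the generated config name in the log below contains "tflitert" and the traceback runs through tflitert_session.py, so a TFLite session appears to have been used even though the model is ONNX. In the later working post, the ONNX Runtime session is selected explicitly before the pipeline config is built; a minimal sketch of that selection (names taken from that post; settings and work_dir are assumed to come from the notebook):

from edgeai_benchmark import *

# select the ONNX Runtime session explicitly (as in the later post);
# settings and work_dir are assumed to be defined earlier in the notebook
session_name = constants.SESSION_NAME_ONNXRT
session_type = settings.get_session_type(session_name)
runtime_options = settings.get_runtime_options(session_name, is_qat=False)
onnx_session_cfg = sessions.get_onnx_session_cfg(settings, work_dir=work_dir)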

Error:

# run the model compilation/import and inference
tools.run_accuracy(settings, work_dir, pipeline_configs)
configs to run: ['od-mlpefmnv1_tflitert_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx']
number of configs: 1
TASKS | 100%|██████████|| 1/1 [00:12<00:00, 12.97s/it]
TASKS                                                       |          |     0% 0/1| [< ]
INFO:20230607-115748: starting process on parallel_device - 0

INFO:20230607-115758: starting - od-mlpefmnv1_tflitert_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx
INFO:20230607-115758: model_path - /home/root/notebooks/my_folder/yolov5s6_640_ti_lite_37p4_56p0.onnx
INFO:20230607-115758: model_file - /tmp/tmpls6095fn/modelartifacts/8bits/od-mlpefmnv1_tflitert_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/yolov5s6_640_ti_lite_37p4_56p0.onnx

INFO:20230607-115758: running - od-mlpefmnv1_tflitert_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx
INFO:20230607-115758: pipeline_config - {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f29c5321da0>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f2a1ad82ef0>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7f2987a30470>, 'session': <edgeai_benchmark.sessions.tflitert_session.TFLiteRTSession object at 0x7f29717eab00>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7f29717eab70>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16, 16: 17, 17: 18, 18: 19, 19: 20, 20: 21, 21: 22, 22: 23, 23: 24, 24: 25, 25: 26, 26: 27, 27: 28, 28: 29, 29: 30, 30: 31, 31: 32, 32: 33, 33: 34, 34: 35, 35: 36, 36: 37, 37: 38, 38: 39, 39: 40, 40: 41, 41: 42, 42: 43, 43: 44, 44: 45, 45: 46, 46: 47, 47: 48, 48: 49, 49: 50, 50: 51, 51: 52, 52: 53, 53: 54, 54: 55, 55: 56, 56: 57, 57: 58, 58: 59, 59: 60, 60: 61, 61: 62, 62: 63, 63: 64, 64: 65, 65: 66, 66: 67, 67: 68, 68: 69, 69: 70, 70: 71, 71: 72, 72: 73, 73: 74, 74: 75, 75: 76, 76: 77, 77: 78, 78: 79, 79: 80, 80: 81, 81: 82, 82: 83, 83: 84, 84: 85, 85: 86, 86: 87, 87: 88, 88: 89, 89: 90, -1: 0, 90: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 23.0}}}
INFO:20230607-115758: import  - od-mlpefmnv1_tflitert_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx - this may take some time...
Traceback (most recent call last):
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 154, in _run_pipeline
    result = cls._run_pipeline_impl(settings, pipeline_config, description)
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 125, in _run_pipeline_impl
    accuracy_result = accuracy_pipeline(description)
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 122, in __call__
    param_result = self._run(description=description)
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 146, in _run
    self._import_model(description)
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 203, in _import_model
    self._run_with_log(session.import_model, calib_data)
  File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 303, in _run_with_log
    return func(*args, **kwargs)
  File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/tflitert_session.py", line 51, in import_model
    self.interpreter = self._create_interpreter(is_import=True)
  File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/tflitert_session.py", line 108, in _create_interpreter
    tidl_delegate = [tflitert_interpreter.load_delegate('tidl_model_import_tflite.so', self.kwargs["runtime_options"])]
  File "/usr/local/lib/python3.6/dist-packages/tflite_runtime/interpreter.py", line 175, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.6/dist-packages/tflite_runtime/interpreter.py", line 83, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 426, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: tidl_model_import_tflite.so: cannot open shared object file: No such file or directory
tidl_model_import_tflite.so: cannot open shared object file: No such file or directory
Exception ignored in: <bound method Delegate.__del__ of <tflite_runtime.interpreter.Delegate object at 0x7f29717eafd0>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tflite_runtime/interpreter.py", line 118, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'
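
For reference, tflite_runtime's load_delegate() resolves the library through dlopen(), so tidl_model_import_tflite.so must be discoverable via LD_LIBRARY_PATH of the process. A minimal check along the same lines (a sketch, not from the original post):

import ctypes
import os

# try to dlopen the TIDL delegate the same way tflite_runtime does;
# this succeeds only if the tidl_tools directory is on LD_LIBRARY_PATH
# of the current process (e.g. the Jupyter kernel)
try:
    ctypes.CDLL('tidl_model_import_tflite.so')
    print('delegate library found')
except OSError as err:
    print('not found:', err)
    print('LD_LIBRARY_PATH =', os.environ.get('LD_LIBRARY_PATH'))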

  • Hi,

    Could you confirm whether you were able to run the notebook "tutorial_detection.ipynb" without any changes?

    Also, if possible, please share the log of the default notebook run for our reference.

    Regards,

    Pratik

  • No changes were made; only the pipeline stated above was changed.

    The code I ran:

    import os
    import tempfile
    import argparse
    import cv2
    from edgeai_benchmark import *

    # the cwd must be the root of the repository
    if os.path.split(os.getcwd())[-1] in ('scripts', 'tutorials'):
        os.chdir('../')
    #
    print(os.environ['TIDL_TOOLS_PATH'])
    print(os.getcwd())

    modelartifacts_tempdir = tempfile.TemporaryDirectory()
    print(modelartifacts_tempdir)
    modelartifacts_custom = os.path.join(modelartifacts_tempdir.name, 'modelartifacts')

    settings = config_settings.ConfigSettings('./settings_import_on_pc.yaml',
                    modelartifacts_path=modelartifacts_custom,
                    calibration_frames=10, calibration_iterations=10, num_frames=100)

    work_dir = os.path.join(settings.modelartifacts_path, f'{settings.tensor_bits}bits')
    print(f'work_dir = {work_dir}')

    dataset_calib_cfg = dict(
        path=f'{settings.datasets_path}/coco',
        split='val2017',
        shuffle=True,
        num_frames=min(settings.calibration_frames,5000),
        name='coco'
    )

    # dataset parameters for actual inference
    dataset_val_cfg = dict(
        path=f'{settings.datasets_path}/coco',
        split='val2017',
        shuffle=False, # can be set to True as well, if needed
        num_frames=min(settings.num_frames,5000),
        name='coco'
    )

    calib_dataset = datasets.COCODetection(**dataset_calib_cfg, download=True)
    val_dataset = datasets.COCODetection(**dataset_val_cfg, download=True)

    session_name = constants.SESSION_NAME_ONNXRT
    #session_name = constants.SESSION_NAME_TVMDLR

    session_type = settings.get_session_type(session_name)
    runtime_options = settings.get_runtime_options(session_name, is_qat=False)

    print(session_type)
    print(runtime_options)

    preproc_transforms = preprocess.PreProcessTransforms(settings)
    postproc_transforms = postprocess.PostProcessTransforms(settings)

    # these session cfgs also have some default input mean and scale.
    # if your model needs a different mean and scale, update the session cfg dict being used with those values
    onnx_session_cfg = sessions.get_onnx_session_cfg(settings, work_dir=work_dir)

    pipeline_configs = {
        'od-mlpefmnv1': dict(
            task_type='detection',
            calibration_dataset=calib_dataset,
            input_dataset=val_dataset,
            preprocess=preproc_transforms.get_transform_onnx(640, 640,  resize_with_pad=True, backend='cv2', pad_color=[114,114,114]),
            session=session_type(**onnx_session_cfg,
                runtime_options=settings.runtime_options_onnx_np2(
                        det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                         'object_detection:meta_layers_names_list':f'yolov5s6_640_ti_lite_metaarch.prototxt',
                         'advanced_options:output_feature_16bit_names_list':'370, 680, 990, 1300'}),
                model_path=f'yolov5s6_640_ti_lite_37p4_56p0.onnx'),
            postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()),
            metric=dict(label_offset_pred=datasets.coco_det_label_offset_90to90()),
            model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':23.0})
        )
    }
    print(pipeline_configs)

    # run the model compilation/import and inference
    tools.run_accuracy(settings, work_dir, pipeline_configs)

    Output:

    {'od-mlpefmnv1': {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f3cf86cbb70>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f3cf86cb828>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7f3ce20226d8>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x7f3ce2022710>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7f3ce2022908>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16, 16: 17, 17: 18, 18: 19, 19: 20, 20: 21, 21: 22, 22: 23, 23: 24, 24: 25, 25: 26, 26: 27, 27: 28, 28: 29, 29: 30, 30: 31, 31: 32, 32: 33, 33: 34, 34: 35, 35: 36, 36: 37, 37: 38, 38: 39, 39: 40, 40: 41, 41: 42, 42: 43, 43: 44, 44: 45, 45: 46, 46: 47, 47: 48, 48: 49, 49: 50, 50: 51, 51: 52, 52: 53, 53: 54, 54: 55, 55: 56, 56: 57, 57: 58, 58: 59, 59: 60, 60: 61, 61: 62, 62: 63, 63: 64, 64: 65, 65: 66, 66: 67, 67: 68, 68: 69, 69: 70, 70: 71, 71: 72, 72: 73, 73: 74, 74: 75, 75: 76, 76: 77, 77: 78, 78: 79, 79: 80, 80: 81, 81: 82, 82: 83, 83: 84, 84: 85, 85: 86, 86: 87, 87: 88, 88: 89, 89: 90, -1: 0, 90: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 23.0}}}}
    
    configs to run: ['od-mlpefmnv1_onnxrt_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx']
    number of configs: 1
    
    TASKS | 100%|██████████|| 1/1 [00:12<00:00, 12.60s/it]
    TASKS                                                       |          |     0% 0/1| [< ]
    INFO:20230609-074556: starting process on parallel_device - 0
    
    INFO:20230609-074606: starting - od-mlpefmnv1_onnxrt_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx
    INFO:20230609-074606: model_path - /home/root/notebooks/my_folder/yolov5s6_640_ti_lite_37p4_56p0.onnx
    INFO:20230609-074606: model_file - /tmp/tmprkm51e2d/modelartifacts/8bits/od-mlpefmnv1_onnxrt_notebooks_my_folder_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/yolov5s6_640_ti_lite_37p4_56p0.onnx
    Downloading 1/1: /home/root/notebooks/my_folder/yolov5s6_640_ti_lite_37p4_56p0.onnx
    Download done for /home/root/notebooks/my_folder/yolov5s6_640_ti_lite_37p4_56p0.onnx
    
    Traceback (most recent call last):
      File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 154, in _run_pipeline
        result = cls._run_pipeline_impl(settings, pipeline_config, description)
      File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 125, in _run_pipeline_impl
        accuracy_result = accuracy_pipeline(description)
      File "/opt/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 103, in __call__
        self.session.start()
      File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/onnxrt_session.py", line 47, in start
        super().start()
      File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 140, in start
        self.get_model()
      File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 484, in get_model
        optimization_done = self._optimize_model(is_new_file=(not model_file_exists))
      File "/opt/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 525, in _optimize_model
        from osrt_model_tools.onnx_tools import onnx_model_opt as onnxopt
    ModuleNotFoundError: No module named 'osrt_model_tools'
    
    No module named 'osrt_model_tools'
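
    For anyone hitting the same ModuleNotFoundError, a quick way to check whether osrt_model_tools is visible to the running kernel (a minimal sketch, not from the original post):

    import importlib.util
    import sys

    # osrt_model_tools is installed by the edgeai tools setup scripts;
    # if find_spec returns None, the module is not on this interpreter's path
    spec = importlib.util.find_spec('osrt_model_tools')
    print('osrt_model_tools:', spec.origin if spec else 'not installed')
    print('python:', sys.executable)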
    
    
    
  • Hi,

    From the posted logs, it appears that the environment paths might not be set correctly.

    Could you please check what the following environment variables are set to?

    1. TIDL_TOOLS_PATH

    2. LD_LIBRARY_PATH

    You can use the commands below:

    echo $TIDL_TOOLS_PATH
    echo $LD_LIBRARY_PATH

    Make sure that these paths point to the tidl_tools directory.
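
    Since the failure happens inside a Jupyter kernel, it is also worth checking these variables from Python directly; the kernel's environment can differ from the shell where the setup script was sourced. A minimal sketch:

    import os

    # print the environment as seen by the running kernel, not the shell
    for var in ('TIDL_TOOLS_PATH', 'LD_LIBRARY_PATH'):
        print(var, '=', os.environ.get(var, '<not set>'))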

    Regards,

    Pratik

  • After exporting the SOC name and running source ./setup.sh, I am now getting this error when running "onnxrt_ep.py -c":

    Running shape inference on model ../../../models/public/resnet18_opset9.onnx

    Traceback (most recent call last):
      File "onnxrt_ep.py", line 281, in <module>
        run_model(model, mIdx)
      File "onnxrt_ep.py", line 185, in run_model
        sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
        self._create_inference_session(providers, provider_options)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 300, in _create_inference_session
        available_providers)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 80, in check_and_normalize_provider_args
        set_provider_options(name, options)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 54, in set_provider_options
        name, ", ".join(available_provider_names)))
    ValueError: Specified provider 'TIDLCompilationProvider' is unavailable. Available providers: 'CPUExecutionProvider'

    Please check if I missed anything.

    Regards

    Nandu
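
    For reference, the execution providers the installed onnxruntime exposes can be printed directly (a sketch, not from the original posts); the output of exactly this check appears at the top of the next post. Per the edgeai-tidl-tools setup, the TIDL providers are only available from TI's onnxruntime build, with tidl_tools on LD_LIBRARY_PATH:

    import onnxruntime as rt

    # stock pip onnxruntime exposes only CPUExecutionProvider; the TIDL
    # providers come with TI's onnxruntime build
    print(rt.__version__)
    print(rt.get_available_providers())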

  • python3 onnxrt_ep.py -c
    /home/quest/t-i/edgeai-tidl-tools/examples/osrt_python/ort
    Available execution providers :  ['CPUExecutionProvider']

    Running 1 Models - ['yolov5s6_640_ti_lite_37p4_56p0']


    Running_Model :  yolov5s6_640_ti_lite_37p4_56p0  


    Running shape inference on model yolov5s6_640_ti_lite_37p4_56p0/yolov5s6_640_ti_lite_37p4_56p0.onnx

    Traceback (most recent call last):
      File "onnxrt_ep.py", line 281, in <module>
        run_model(model, mIdx)
      File "onnxrt_ep.py", line 185, in run_model
        sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
        self._create_inference_session(providers, provider_options)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 300, in _create_inference_session
        available_providers)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 80, in check_and_normalize_provider_args
        set_provider_options(name, options)
      File "/home/quest/.local/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 54, in set_provider_options
        name, ", ".join(available_provider_names)))
    ValueError: Specified provider 'TIDLCompilationProvider' is unavailable. Available providers: 'CPUExecutionProvider'

    Please check if I missed anything.

    Regards

    Nandu

  • Hi,

    Could you please check this.

    Hi,

    From the posted logs, it appears that the environment paths might not be set correctly.

    Could you please check what the following environment variables are set to?

    1. TIDL_TOOLS_PATH

    2. LD_LIBRARY_PATH

    You can use the commands below:

    echo $TIDL_TOOLS_PATH
    echo $LD_LIBRARY_PATH

    Make sure that these paths point to the tidl_tools directory.

    Regards,

    Pratik

    Also, this thread is specific to Jupyter notebook issues inside the edgeai-benchmark repo.

    We recommend creating a new thread for anything that is out of scope of the current discussion topic.

    Regards,

    Pratik

  • When I run the echo commands, both show this path: /edgeai-tidl-tools/tidl_tools

    Can you tell me where the mistake is? (A sanity-check sketch follows this post.)

    Regards,

    Nandu
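
    One sanity check worth adding (a suggestion, not from the thread): an environment variable echoes cleanly even if the directory it points to does not exist, so it is worth verifying that the echoed path is real and actually contains the TIDL shared objects:

    import os

    # '/edgeai-tidl-tools/tidl_tools' is the path echoed in the post above;
    # confirm it exists and holds the TIDL libraries
    p = '/edgeai-tidl-tools/tidl_tools'
    print(os.path.isdir(p))
    print(os.listdir(p) if os.path.isdir(p) else 'directory missing')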

  • We recommend creating a new thread for anything that is not related to the current thread subject.

    If you have anything to post about the subject of this E2E thread, "Issues in running Jupyter notebook", you can post it here.

    Regards,

    Pratik

  • I think I successfully compiled the model, but I am getting an error during inference. My code is:

    import os
    import numpy as np
    import cv2
    import argparse
    import onnxruntime as rt


    model_path = "yolov5s6_640_ti_lite_37p4_56p0.onnx"
    model_prototxt = "yolov5s6_640_ti_lite_metaarch.prototxt"
    img_calib_path = "sample_calib_ips_640x640.txt"
    output_dir = "artifacts1/"
    img_path = "sample_ips_640x640.txt"
    dst_path = "sample_ops_onnxrt_det1"

    os.makedirs(output_dir, exist_ok=True)

    # if compile: clear out any stale artifacts
    for root, dirs, files in os.walk(output_dir, topdown=False):
        [os.remove(os.path.join(root, f)) for f in files]
        [os.rmdir(os.path.join(root, d)) for d in dirs]

    calibration_frames = 2
    calibration_iterations = 5

    compile_options = {
        "artifacts_folder": "artifacts1/",
        "tensor_bits": 8,
        "accuracy_level": 1,
        #"debug_level": 3,
        "advanced_options:calibration_frames": img_calib_path,
        "advanced_options:calibration_iterations": calibration_iterations,
        "advanced_options:output_feature_16bit_names_list": "370, 680, 990, 1300",
        'object_detection:meta_layers_names_list': model_prototxt,
        'object_detection:meta_arch_type': 6,
        "ti_internal_nc_flag": 1601,
        #"add_data_convert_ops": 3,
    }

    # if inference:
    #     compile_options["tidl_tools_path"] = ""
    # else:
    compile_options["tidl_tools_path"] = os.environ["TIDL_TOOLS_PATH"]

    so = rt.SessionOptions()
    # EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
    # sess = rt.InferenceSession(args.model_path, providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
    # input_details = sess.get_inputs()


    _CLASS_COLOR_MAP = [
        (0, 0, 255),   # Person (blue).
        (255, 0, 0),   # Bear (red).
        (0, 255, 0),   # Tree (lime).
        (255, 0, 255), # Bird (fuchsia).
        (0, 255, 255), # Sky (aqua).
        (255, 255, 0), # Cat (yellow).
    ]


    def read_img(img_file, img_mean=127.5, img_scale=1/127.5):
        # read BGR, convert to RGB, resize, normalize, and lay out as NCHW float32
        img = cv2.imread(img_file)[:, :, ::-1]
        img = cv2.resize(img, (640, 640), interpolation=cv2.INTER_LINEAR)
        img = (img - img_mean) * img_scale
        img = np.asarray(img, dtype=np.float32)
        img = np.expand_dims(img, 0)
        img = img.transpose(0, 3, 1, 2)
        return img


    def model_import_image_list_tidl(model_path, img_path=None, mean=None, scale=None):
        "model compilation"
        img_file_list = list(open(img_path))[:calibration_frames]
        EP_list = ['TIDLCompilationProvider', 'CPUExecutionProvider']
        sess = rt.InferenceSession(model_path, providers=EP_list, provider_options=[compile_options, {}], sess_options=so)

        input_name = sess.get_inputs()[0].name
        for img_index, img_file in enumerate(img_file_list):
            img_file = img_file.split(' ')[0].rstrip()
            input = read_img(img_file, mean, scale)
            output = sess.run([], {input_name: input})


    def model_infer_image_list_tidl(model_path, img_path=None, mean=None, scale=None, dst_path=None):
        "inference on sample images"
        os.makedirs(dst_path, exist_ok=True)
        EP_list = ['TIDLExecutionProvider', 'CPUExecutionProvider']
        sess = rt.InferenceSession(model_path, providers=EP_list, provider_options=[compile_options, {}], sess_options=so)
        input_name = sess.get_inputs()[0].name
        img_file_list = list(open(img_path))
        max_index = 20
        print("Starting Inference")
        for img_index, img_file in enumerate(img_file_list):
            print(f"{img_index+1}/{len(img_file_list)}")
            img_file = img_file.split(' ')[0].rstrip()
            input = read_img(img_file, mean, scale)
            output = sess.run([], {input_name: input})
            output = np.squeeze(output[0])
            dst_file = os.path.join(dst_path, os.path.basename(img_file))
            post_process(img_file, dst_file, output, score_threshold=0.5)


    def post_process(img_file, dst_file, output, score_threshold=0.3):
        """
        Draw bounding boxes on the input image. Dump boxes in a txt file.
        """
        det_bboxes, det_scores, det_labels = output[:, 0:4], output[:, 4], output[:, 5]
        img = cv2.imread(img_file)
        # to generate a color based on det_label, see the TensorFlow object detection API codebase
        dst_txt_file = dst_file.replace('png', 'txt')
        f = open(dst_txt_file, 'wt')
        for idx in range(len(det_bboxes)):
            det_bbox = det_bboxes[idx]
            if det_scores[idx] > 0:
                f.write("{:8.0f} {:8.5f} {:8.5f} {:8.5f} {:8.5f} {:8.5f}\n".format(det_labels[idx], det_scores[idx], det_bbox[1], det_bbox[0], det_bbox[3], det_bbox[2]))
            if det_scores[idx] > score_threshold:
                color_map = _CLASS_COLOR_MAP[int(det_labels[idx]) % len(_CLASS_COLOR_MAP)]
                img = cv2.rectangle(img, (int(det_bbox[0]), int(det_bbox[1])), (int(det_bbox[2]), int(det_bbox[3])), color_map[::-1], 2)
                cv2.putText(img, "id:{}".format(int(det_labels[idx])), (int(det_bbox[0]+5), int(det_bbox[1])+15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color_map[::-1], 2)
                cv2.putText(img, "score:{:2.1f}".format(det_scores[idx]), (int(det_bbox[0]+5), int(det_bbox[1])+30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color_map[::-1], 2)
        cv2.imwrite(dst_file, img)
        f.close()


    def main():
        model_import_image_list_tidl(model_path=model_path, img_path=img_calib_path,
                                     mean=1.0, scale=0.00392156862745098)

        model_infer_image_list_tidl(model_path=model_path, img_path=img_path,
                                    mean=1.0, scale=0.00392156862745098,
                                    dst_path=dst_path)


    if __name__ == "__main__":
        main()

    Output:

    Starting Inference
    1/8
    
    RuntimeError Traceback (most recent call last)
    <ipython-input-7-e3978fe9a8f2> in <module>
        136 
        137 if __name__== "__main__":
    --> 138     main()
    
    <ipython-input-7-e3978fe9a8f2> in main()
        132         model_infer_image_list_tidl(model_path=model_path, img_path=img_path,
        133                                mean=1.0, scale=0.00392156862745098,
    --> 134                                dst_path=dst_path)
        135 
        136 
    
    <ipython-input-7-e3978fe9a8f2> in model_infer_image_list_tidl(model_path, img_path, mean, scale, dst_path)
         96         img_file = img_file.split(' ')[0].rstrip()
         97         input = read_img(img_file, mean, scale)
    ---> 98         output = sess.run([], {input_name: input})
         99         output = np.squeeze(output[0])
        100         dst_file = os.path.join(dst_path, os.path.basename(img_file))
    
    /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
        186             output_names = [output.name for output in self._outputs_meta]
        187         try:
    --> 188             return self._sess.run(output_names, input_feed, run_options)
        189         except C.EPFail as err:
        190             if self._enable_fallback:
    
    RuntimeError: std::exception
    
    


    Please check whether I missed something or coded something incorrectly. Also, I ran this code on the TI cloud Model Analyzer; can you tell me which SDK version matches the generated artifacts?

    Regards,
    Nandu
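
    For comparison (an observation, not from the thread): TI's osrt_python examples such as onnxrt_ep.py run compilation and inference as separate invocations selected by a flag, rather than back-to-back in one process as the script above does. A hypothetical split of that script along the same lines:

    import argparse

    # hypothetical flag-based split mirroring onnxrt_ep.py's -c option:
    # compile once, then run inference in a fresh process
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--compile', action='store_true')
    args = parser.parse_args()

    if args.compile:
        model_import_image_list_tidl(model_path=model_path, img_path=img_calib_path,
                                     mean=1.0, scale=0.00392156862745098)
    else:
        model_infer_image_list_tidl(model_path=model_path, img_path=img_path,
                                    mean=1.0, scale=0.00392156862745098,
                                    dst_path=dst_path)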
  • Hi,

    From the posted response I can see that you were able to compile the model and generate the model artifacts.

    I think I successfully compiled the model

    We recommend creating a new thread if you have any more questions not specific to model compilation.

    Regards,

    Pratik