TDA4VM: Infer custom Yolov7 model on the device

Part Number: TDA4VM

I would like to run the "Yolov7.pt" model on the device using a camera. Based on this link: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1228053/faq-sk-tda4vm-how-to-do-inferencing-with-custom-compiled-model-on-target , I understand that I need to compile the model first.

Before compiling my custom model, and just to understand the concepts, I followed the link: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/docs/custom_models.md for compilation. I am getting an error while running the script run_tutorials_pc.sh; the error is below. Please guide me on the right procedure to compile my Yolov7 model for inference on the TDA4VM. I don't want any benchmark results for the model; I just want to run the pretrained COCO-Yolov7 model using a camera. I already know how to run Yolov5 and YoloX on the device, which is well documented, but Yolov7 is not. So please guide me through Yolov7 compilation and inference. Thank you.

*********************************************************************************

Note: I didn't use Jupyter; I copied the code into PyCharm and ran it there, but it gives an error. The error starts from this part of the code:

# run the model compilation/import and inference
tools.run_accuracy(settings, work_dir, pipeline_configs)

Error:

(benchmark) vimal.p@ECON000345L:~/Documents/Texas/Final_hope/edgeai-benchmark$ python my_code.py
/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/tools/TDA4VM/tidl_tools
/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark
<TemporaryDirectory '/tmp/tmpgd8topd6'>
work_dir = /tmp/tmpgd8topd6/modelartifacts/8bits

INFO:20230606-134836: dataset exists - will reuse - ./dependencies/datasets/coco
loading annotations into memory...
Done (t=0.43s)
creating index...
index created!

INFO:20230606-134837: dataset exists - will reuse - ./dependencies/datasets/coco
loading annotations into memory...
Done (t=0.49s)
creating index...
index created!
<class 'edgeai_benchmark.sessions.tflitert_session.TFLiteRTSession'>
{'tensor_bits': 8, 'accuracy_level': 1, 'debug_level': 0, 'priority': 0, 'advanced_options:high_resolution_optimization': 0, 'advanced_options:pre_batchnorm_fold': 1, 'advanced_options:calibration_frames': 10, 'advanced_options:calibration_iterations': 10, 'advanced_options:quantization_scale_type': 0, 'advanced_options:activation_clipping': 1, 'advanced_options:weight_clipping': 1, 'advanced_options:bias_calibration': 1, 'advanced_options:channel_wise_quantization': 0, 'advanced_options:output_feature_16bit_names_list': '', 'advanced_options:params_16bit_names_list': '', 'advanced_options:add_data_convert_ops': 3}
{'od-mlpefmnv1': {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7fe8ac3ddb38>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7fe8f9e49240>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7fe8ae146550>, 'session': <edgeai_benchmark.sessions.tflitert_session.TFLiteRTSession object at 0x7fe8ae1465c0>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7fe86f5abe48>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16, 16: 17, 17: 18, 18: 19, 19: 20, 20: 21, 21: 22, 22: 23, 23: 24, 24: 25, 25: 26, 26: 27, 27: 28, 28: 29, 29: 30, 30: 31, 31: 32, 32: 33, 33: 34, 34: 35, 35: 36, 36: 37, 37: 38, 38: 39, 39: 40, 40: 41, 41: 42, 42: 43, 43: 44, 44: 45, 45: 46, 46: 47, 47: 48, 48: 49, 49: 50, 50: 51, 51: 52, 52: 53, 53: 54, 54: 55, 55: 56, 56: 57, 57: 58, 58: 59, 59: 60, 60: 61, 61: 62, 62: 63, 63: 64, 64: 65, 65: 66, 66: 67, 67: 68, 68: 69, 69: 70, 70: 71, 71: 72, 72: 73, 73: 74, 74: 75, 75: 76, 76: 77, 77: 78, 78: 79, 79: 80, 80: 81, 81: 82, 82: 83, 83: 84, 84: 85, 85: 86, 86: 87, 87: 88, 88: 89, 89: 90, -1: 0, 90: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 23.0}}}}
configs to run: ['od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite']
number of configs: 1
TASKS | | 0% 0/1| [< ]
INFO:20230606-134840: starting process on parallel_device - 0 0%| || 0/1 [00:00<?, ?it/s]

INFO:20230606-134848: starting - od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite
INFO:20230606-134848: model_path - /home/balavignesh/Documents/Texas/Final_hope/edgeai-modelzoo/models/vision/detection/coco/mlperf/ssd_mobilenet_v1_coco_20180128.tflite
INFO:20230606-134848: model_file - /tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite
Downloading 1/1: /home/balavignesh/Documents/Texas/Final_hope/edgeai-modelzoo/models/vision/detection/coco/mlperf/ssd_mobilenet_v1_coco_20180128.tflite
Download done for /home/balavignesh/Documents/Texas/Final_hope/edgeai-modelzoo/models/vision/detection/coco/mlperf/ssd_mobilenet_v1_coco_20180128.tflite
/tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite
Traceback (most recent call last):
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 154, in _run_pipeline
result = cls._run_pipeline_impl(settings, pipeline_config, description)
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 125, in _run_pipeline_impl
accuracy_result = accuracy_pipeline(description)
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 103, in __call__
self.session.start()
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 140, in start
self.get_model()
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 402, in get_model
optimization_done = self._optimize_model(is_new_file=(not model_file_exists))
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/sessions/basert_session.py", line 450, in _optimize_model
tflopt.tidlTfliteModelOptimize(model_file0, model_file0, input_scale, input_mean)
File "/home/balavignesh/benchmark/lib/python3.6/site-packages/osrt_model_tools/tflite_tools/tflite_model_opt.py", line 107, in tidlTfliteModelOptimize
modelBin = open(in_model_path, 'rb').read()
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite'
[Errno 2] No such file or directory: '/tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite'
TASKS | 100%|██████████|| 1/1 [00:10<00:00, 10.56s/it]
TASKS | 100%|██████████|| 1/1 [00:10<00:00, 10.56s/it]

packaging artifacts to /tmp/tmpgd8topd6/modelartifacts/8bits_package please wait...
WARNING:20230606-134849: could not package - /tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite
Traceback (most recent call last):
File "my_code.py", line 106, in <module>
tools.package_artifacts(settings, work_dir, out_dir)
File "/home/balavignesh/Documents/Texas/Final_hope/edgeai-benchmark/edgeai_benchmark/tools/run_package.py", line 265, in package_artifacts
with open(os.path.join(out_dir,'artifacts.yaml'), 'w') as fp:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpgd8topd6/modelartifacts/8bits_package/artifacts.yaml'

******************************************************************
  • The error seems to be that the model is not there:

    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite'

    The model first gets downloaded into your modelzoo folder:

    INFO:20230606-134848: model_path - /home/balavignesh/Documents/Texas/Final_hope/edgeai-modelzoo/models/vision/detection/coco/mlperf/ssd_mobilenet_v1_coco_20180128.tflite

    And then gets copied into the modelartifacts folder:
    INFO:20230606-134848: model_file - /tmp/tmpgd8topd6/modelartifacts/8bits/od-mlpefmnv1_tflitert_coco_mlperf_ssd_mobilenet_v1_coco_20180128_tflite/model/ssd_mobilenet_v1_coco_20180128.tflite

    Something went wrong in that process. Maybe you can debug it and find out. Before you begin, delete or rename the modelartifacts folder if it exists.
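    For reference, the staging step that failed can be reproduced in isolation with stdlib Python. This is a hedged sketch of what that step does, not the actual edgeai-benchmark code; the helper name and paths are illustrative:

    ```python
    import os
    import shutil

    def stage_model(model_path, model_file):
        """Copy a downloaded model from the modelzoo path into the work dir.

        A failed/partial download or a missing parent directory is the usual
        cause of a FileNotFoundError like the one in the log above.
        """
        if not os.path.isfile(model_path):
            raise FileNotFoundError(f"download missing or incomplete: {model_path}")
        os.makedirs(os.path.dirname(model_file), exist_ok=True)  # ensure model/ dir exists
        shutil.copy2(model_path, model_file)                     # copy into the work dir
        return os.path.isfile(model_file)
    ```

    If a check like this succeeds but the run still fails, a stale modelartifacts folder left over from an earlier run is the next thing to rule out.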

  • Thanks, I have rectified the issues. Now I can compile any of the models that are in the Model Zoo. But...

    I want to compile yolov7.tflite, which is not in the model zoo. Based on tutorial_detection.ipynb from the link: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/docs/custom_models.md , what changes do I need to make in the program, especially in 'pipeline_configs' or in 'settings_base.yaml' from edgeai-benchmark, in order to compile the yolov7.tflite model and get the artifacts? Please guide me.

  • YOLOv7 has the same meta-architecture as YOLOv5, so you can use the YOLOv5 example in benchmark_custom.py to compile the model. Where did you get the tflite model for YOLOv7? I thought the YOLOv7 code base, like YOLOv5's, is based on PyTorch and hence would export an ONNX model. You need the ONNX file and the prototxt file.
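    As a starting point, a pipeline_configs entry for a custom detection model might look roughly like the sketch below. This is modeled loosely on the pattern of the YOLOv5 example in benchmark_custom.py; every key name, path, and value here is an assumption to verify against that file, not the confirmed API:

    ```python
    # Illustrative only: check each field against the YOLOv5 entry in
    # benchmark_custom.py before using; names and paths are hypothetical.
    pipeline_configs = {
        'od-custom-yolov7': {
            'task_type': 'detection',
            # ONNX model exported from the PyTorch yolov7.pt checkpoint
            'model_path': './models/yolov7.onnx',
            # prototxt describing the detection heads/anchors (assumed to be
            # required, as it is for the YOLOv5 example)
            'meta_layers_names_list': './models/yolov7.prototxt',
        }
    }
    ```

    The point of the prototxt is to tell the TIDL import tool where the detection heads are, so the post-processing can be offloaded; the YOLOv5 example shows the exact fields expected.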