Hi,
My SDK version is Processor SDK Linux for Edge AI 08.04.00. The model compiles successfully on the PC, and inference also completes normally on the EVM using the script
[/opt/edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py].
But I want to run inference with the script [root@tda4vm-sk:/opt/edge_ai_apps/apps_python# ./app_edgeai.py ../configs/object_detection.yaml]. So I created a new folder under model_zoo and copied the od-ort-ssd-lite_mobilenetv2_fpn model into it.
When I ran the script [/app_edgeai.py ../configs/object_detection.yaml], there was an error.
I found that my param.yaml file contains less content than the one shipped with the SDK's own models.
I manually modified the param.yaml file to follow the format of the one under [/opt/model_zoo/ONR-OD-8030-ssd-lite-mobv2-fpn-mmdet-coco-512x512]. The app can now run, but no detection boxes are drawn.
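For reference, this is roughly how I compare my hand-edited param.yaml against the one shipped with the SDK model (a quick sketch using PyYAML; the second path is just the folder where I copied my model, so adjust as needed):

import yaml

# Reference param.yaml shipped with the SDK model zoo.
ref_path = "/opt/model_zoo/ONR-OD-8030-ssd-lite-mobv2-fpn-mmdet-coco-512x512/param.yaml"
# My hand-edited param.yaml (hypothetical path, wherever the custom model was copied).
own_path = "/opt/model_zoo/od-ort-ssd-lite_mobilenetv2_fpn/param.yaml"

with open(ref_path) as f:
    ref = yaml.safe_load(f)
with open(own_path) as f:
    own = yaml.safe_load(f)

# Print the top-level sections present in the SDK file but missing from mine.
missing = sorted(set(ref) - set(own))
print("sections missing from my param.yaml:", missing)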
My question:
1. How is the param.yaml file obtained? Is it supposed to be modified manually?
2. How can I solve the problem shown in the last picture?
Thanks,
Maiunlei
Hi Maiunlei,
The param.yaml files are generated as a product of model compilation, so they are very specific to each model, and it is not recommended to modify them manually.
However, I could not locate any documentation that follows the experiment you tried above; we have verified the EVM-based inferencing flow as described in the section
Model Inference on EVM : https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md
If you want to use app_edgeai.py to run a custom compiled model, you can generate the model artifacts and then use them for edge inferencing.
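As a rough illustration (folder and file names can vary with the SDK version and the model), after compiling and packaging you can sanity-check that the packaged model folder has everything the edge application expects before copying it to /opt/model_zoo on the EVM:

import os

# Hypothetical path to one packaged model folder produced by compilation;
# adjust it to your own output location.
model_dir = "./work_dirs/modelartifacts_package/8bits/my-custom-model"

# A packaged model folder typically carries the generated param.yaml,
# the model file(s) and the compiled TIDL artifacts (names may differ).
for name in ("param.yaml", "model", "artifacts"):
    path = os.path.join(model_dir, name)
    print(name, "->", "found" if os.path.exists(path) else "MISSING")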
I would recommend taking a look at the links below for custom model compilation.
Edgeai Benchmark : https://github.com/TexasInstruments/edgeai-benchmark#readme
Edgeai ModelMaker : https://github.com/TexasInstruments/edgeai-modelmaker#readme
Regards,
Pratik
Hi Pratik,
Following https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md, the model is compiled, and I copied the model-artifacts and models from the PC to the EVM. When I ran the script [/app_edgeai.py ../configs/object_detection.yaml], there was an error.
How should the param.yaml file be set up to match what the script [/app_edgeai.py ../configs/object_detection.yaml] expects?
Regards,
Maiunlei.
Hi Pratik,
My problem is similar to the one in this link: e2e.ti.com/.../4164033
However, the link in that thread no longer works. Can you help me find the file (the one circled by the red box)?
Thanks,
Maiunlei
Hi,
Step 1: I downloaded the yolov5s model from [http://software-dl.ti.com/jacinto7/esd/modelzoo/gplv3/08_04_00_12/edgeai-yolov5/pretrained_models/modelartifacts/8bits/od-8100_onnxrt_coco_edgeai-yolov5_yolov5s6_640_ti_lite_37p4_56p0_onnx.tar.gz].
Step 2: I created the folders [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits] and [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown_package/8bits] and put the model into the folder, like this.
Step 3: I modified the settings_base.yaml file.
Step 4: I modified the benchmark_custom.py file.
My steps look correct to me (a rough sketch of how I double-check the folder layout is below). Please help me figure out what is going wrong.
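Sketch of the layout check after step 2 (the base path is from my PC; it only lists what is inside each extracted model folder):

import os

base = "/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits"

# List every model folder and confirm it has a model/ subfolder
# with the .onnx and .prototxt files inside.
for entry in sorted(os.listdir(base)):
    model_subdir = os.path.join(base, entry, "model")
    if os.path.isdir(model_subdir):
        print(entry, "->", sorted(os.listdir(model_subdir)))
    else:
        print(entry, "-> no model/ subfolder")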
Hi Pratik,
I did the test again. I copied the .onnx and .prototxt files to [/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model/].
I ran ./run_custom_pc.sh on the PC. The result is:
The versions match.
The .onnx and .prototxt are from http://software-dl.ti.com/jacinto7/esd/modelzoo/gplv3/08_04_00_12/edgeai-yolov5/pretrained_models/modelartifacts/8bits/od-8100_onnxrt_coco_edgeai-yolov5_yolov5s6_640_ti_lite_37p4_56p0_onnx.tar.gz.
I don't know how to solve this problem. Please give some suggestions.
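For what it is worth, this is the quick sketch I use to sanity-check the two files before running (it assumes the onnx Python package is installed; the file names are guessed from the archive name and may differ):

import os
import onnx

model_dir = ("/home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/"
             "8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_640_ti_lite_37p4_56p0_onnx/model")
# File names assumed from the downloaded archive; adjust to the actual names.
onnx_path = os.path.join(model_dir, "yolov5s6_640_ti_lite_37p4_56p0.onnx")
prototxt_path = os.path.join(model_dir, "yolov5s6_640_ti_lite_37p4_56p0.prototxt")

print("onnx exists:", os.path.exists(onnx_path))
print("prototxt exists:", os.path.exists(prototxt_path))

# Load the graph and print the declared input shape to confirm it matches
# the 640x640 resolution expected by the benchmark config.
model = onnx.load(onnx_path)
onnx.checker.check_model(model)
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input:", inp.name, dims)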
Regards,
Maiunlei
Hi Pratik,
I downloaded the .prototxt and .onnx from https://github.com/TexasInstruments/edgeai-yolov5/blob/r8.4/pretrained_models/models/detection/coco/edgeai-yolov5.
I ran ./run_custom_pc.sh. The error is:
(py36) root@VM-8-5-ubuntu:/home/machunlei/opt/edgeai-benchmark-r8.4# ./run_custom_pc.sh
find: ‘./work_dirs/modelartifacts/8bits/’: No such file or directory
TIDL_TOOLS_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
LD_LIBRARY_PATH=/home/machunlei/opt/edgeai-benchmark-r8.4/tidl_tools
PYTHONPATH=:
===================================================================
work_dir = ./work_dirs/modelartifacts_myown/8bits
packaged_dir = ./work_dirs/modelartifacts_myown_package/8bits
loading annotations into memory...
Done (t=0.72s)
creating index...
index created!
loading annotations into memory...
Done (t=0.69s)
creating index...
index created!
configs to run: ['imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx']
number of configs: 1
TASKS | | 0% 0/1| [< ]
INFO:20230223-112750: starting process on parallel_device - 0 0%| || 0/1 [00:00<?, ?it/s]
INFO:20230223-112801: starting - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx
INFO:20230223-112801: model_path - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts/8bits/yolov5s6_384_ti_lite_32p8_51p2.onnx
INFO:20230223-112801: model_file - /home/machunlei/opt/edgeai-benchmark-r8.4/work_dirs/modelartifacts_myown/8bits/imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx/model/yolov5s6_384_ti_lite_32p8_51p2.onnx
INFO:20230223-112801: running - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx
INFO:20230223-112801: pipeline_config - {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f10b7d75080>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7f10a1a4fe80>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7f10bd2ddf98>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x7f10a19c9630>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7f10a19c96a0>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31, 27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43, 39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56, 51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72, 63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85, 75: 86, 76: 87, 77: 88, 78: 89, 79: 90, 80: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 37.4}}}
INFO:20230223-112801: import - imagedet-7_onnxrt_modelartifacts_8bits_yolov5s6_384_ti_lite_32p8_51p2_onnx - this may take some time...Error - libvx_tidl_rt.so: cannot map zero-fill pages
python3: tidl_onnxRtImport_EP.cpp:142: bool TIDL_populateOptions(std::vector<std::pair<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> > >): Assertion `data_->infer_ops.lib' failed.
The program gets stuck here. The error comes from tidl_onnxRtImport_EP.cpp.
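To narrow down where libvx_tidl_rt.so fails to load, I also run a small check like this (just a sketch; it assumes the library sits under TIDL_TOOLS_PATH, which is where LD_LIBRARY_PATH points in the log above):

import ctypes
import os

tidl_tools = os.environ.get("TIDL_TOOLS_PATH", "")
print("TIDL_TOOLS_PATH =", tidl_tools)
print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", ""))

# Assumed location of the runtime library (same folder the log points to).
lib_path = os.path.join(tidl_tools, "libvx_tidl_rt.so")
print("library present:", os.path.exists(lib_path))

# Try to load the library directly so the loader error is reproduced
# outside of the benchmark run.
try:
    ctypes.CDLL(lib_path)
    print("libvx_tidl_rt.so loaded OK")
except OSError as err:
    print("load failed:", err)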
Hi maiunlei,
Could you please create a new e2e thread so that I can connect you with the relevant domain expert?
Please consider adding your last two replies or the required details to it.
Closing this issue.
Regards,
Pratik