Hello forum
I need to train yolox_ti_lite models using Colab or Kaggle. What are the basic steps to follow in order to retrain the model on Kaggle or Colab? If there are any threads or references related to this, kindly attach them too.
Thanks
This thread has been locked.
Hi,
Here are two options I can suggest.
1. You can create and train the model using standard deep learning libraries such as PyTorch or TensorFlow, and then use our edgeai-tidl-tools to compile the ONNX or TFLite model:
https://github.com/TexasInstruments/edgeai-tidl-tools
2. You can explore the edgeai-modelmaker repo to train/retrain (transfer learning) and compile the model here: https://github.com/TexasInstruments/edgeai-modelmaker
Check out how we have utilized these tools to create the demos here: https://github.com/TexasInstruments/edgeai-gst-apps-people-tracking
Thanks.
Hi Pratik
Thanks for the reply. Is there any way to deploy a YOLOv8 ONNX model on TDA4VM, or is it possible to make it compatible with TDA4VM?
End-to-end (full DSP inference) support is not available for YOLOv8; however, you can try out OSRT for model inference on the hardware.
Please read more about the same here :
https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master
Hello Pratik
We are capable of writing DSP intrinsics for YOLOv8. Is there any user guide on how to deploy YOLOv8 on TDA4VM?
Hi,
You can use the OSRT flow to run inference of your model on the target SoC; the supported layers will be delegated to the DSP for acceleration.
Please read more about the same here :
https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master
With every release we add a new set of operators and models; you can look out for future releases to get this supported end to end.
Thanks for the reply. Can I know more about calibration iterations and calibration frames? What do these parameters actually do while compiling models?
During quantization, the scales and offsets are calculated by feeding the provided calibration frames as input for the specified number of iterations.
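To make the role of these two parameters concrete, here is a deliberately simplified sketch of range-based 8-bit calibration. This is not TIDL's actual algorithm (the `calibrate` function and the random frames are invented for illustration); it only shows how the calibration frames and iteration count feed the scale/offset estimation:

```python
# Simplified illustration of 8-bit calibration: run the calibration
# frames for a number of iterations, track the observed value range,
# then derive an asymmetric quantization scale and offset from it.
import random

def calibrate(frames, iterations):
    """Estimate a scale/offset pair mapping the observed float range to 0..255."""
    lo, hi = float("inf"), float("-inf")
    for _ in range(iterations):
        for frame in frames:          # each frame would be run through the model;
            lo = min(lo, min(frame))  # here we just track the min/max of the data
            hi = max(hi, max(frame))
    scale = (hi - lo) / 255.0         # step size of one quantization level
    offset = round(-lo / scale)      # zero-point so that 0.0 maps to an integer
    return scale, offset

random.seed(0)
# 25 hypothetical calibration frames of 16 values each
frames = [[random.uniform(-1.0, 2.0) for _ in range(16)] for _ in range(25)]
scale, offset = calibrate(frames, iterations=25)
print(scale, offset)
```

More frames and iterations give the calibration a better view of the activation statistics, at the cost of longer compile time.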
Thanks.
Hello Pratik!
I have successfully compiled the yolox_m_lite model, but while running inference we faced some issues:
infer : imagedet-6_onnxrt_edgeai-benchmark_model_yolox_m_onn| 0%| || 0/100 [00:22<?, ?it/s]
Traceback (most recent call last):
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 174, in _run_pipeline
    result = cls._run_pipeline_impl(basic_settings, pipeline_config, description)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/pipeline_runner.py", line 147, in _run_pipeline_impl
    accuracy_result = accuracy_pipeline(description)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 123, in __call__
    param_result = self._run(description=description)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 180, in _run
    output_list = self._infer_frames(description)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 273, in _infer_frames
    output, info_dict = postprocess(output, info_dict)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/utils/transforms_utils.py", line 41, in __call__
    tensor, info_dict = t(tensor, info_dict)
  File "/home/mugu/edgeai-benchmark/edgeai_benchmark/postprocess/transforms.py", line 486, in __call__
    bbox_copy[..., self.dst_indices] = bbox[..., self.src_indices]
IndexError: index 5 is out of bounds for axis 1 with size 5
index 5 is out of bounds for axis 1 with size 5
TASKS | 100%|██████████||
TASKS | 100%|██████████||
packaging artifacts to ./work_dirs/modelpackage/TDA4VM/8bits please wait...
SUCCESS:20240225-231436: finished packaging - imagedet-6_onnxrt_edgeai-benchmark_model_yolox_m_onnx
SUCCESS:20240225-231437: finished packaging - imageseg-3_onnxrt_edgeai-benchmark_model_model_bes_onnx
-------------------------------------------------------------------
===================================================================
settings: {'include_files': None, 'pipeline_type': 'accuracy', 'num_frames': 100, 'calibration_frames': 25, 'calibration_iterations': 25, 'configs_path': './configs', 'models_path': '../edgeai-modelzoo/models', 'modelartifacts_path': './work_dirs/modelartifacts/', 'modelpackage_path': './work_dirs/modelpackage/', 'datasets_path': './dependencies/datasets', 'target_device': None, 'target_machine': 'pc', 'run_suffix': None, 'parallel_devices': None, 'parallel_processes': 1, 'tensor_bits': 8, 'runtime_options': None, 'run_import': True, 'run_inference': True, 'run_missing': True, 'detection_threshold': 0.3, 'detection_top_k': 200, 'detection_nms_threshold': None, 'detection_keep_top_k': None, 'save_output': False, 'num_output_frames': 50, 'model_selection': None, 'model_shortlist': None, 'model_exclusion': None, 'task_selection': None, 'runtime_selection': None, 'session_type_dict': {'onnx': 'onnxrt', 'tflite': 'tflitert', 'mxnet': 'tvmdlr'}, 'dataset_type_dict': {'imagenet': 'imagenetv2c'}, 'dataset_selection': None, 'dataset_loading': True, 'config_range': None, 'enable_logging': True, 'verbose': False, 'capture_log': False, 'experimental_models': False, 'rewrite_results': False, 'with_udp': True, 'flip_test': False, 'model_transformation_dict': None, 'report_perfsim': False, 'tidl_offload': True, 'input_optimization': False, 'run_dir_tree_depth': None, 'target_device_preset': True, 'fast_calibration_factor': None, 'skip_pattern': '_package', 'settings_file': 'settings_import_on_pc.yaml', 'basic_keys': ['include_files', 'pipeline_type', 'num_frames', 'calibration_frames', 'calibration_iterations', 'configs_path', 'models_path', 'modelartifacts_path', 'modelpackage_path', 'datasets_path', 'target_device', 'target_machine', 'run_suffix', 'parallel_devices', 'parallel_processes', 'tensor_bits', 'runtime_options', 'run_import', 'run_inference', 'run_missing', 'detection_threshold', 'detection_top_k', 'detection_nms_threshold', 'detection_keep_top_k', 'save_output', 'num_output_frames', 'model_selection', 'model_shortlist', 'model_exclusion', 'task_selection', 'runtime_selection', 'session_type_dict', 'dataset_type_dict', 'dataset_selection', 'dataset_loading', 'config_range', 'enable_logging', 'verbose', 'capture_log', 'experimental_models', 'rewrite_results', 'with_udp', 'flip_test', 'model_transformation_dict', 'report_perfsim', 'tidl_offload', 'input_optimization', 'run_dir_tree_depth', 'target_device_preset', 'fast_calibration_factor', 'skip_pattern', 'settings_file'], 'dataset_cache': None}
results found for 2 models
Report generated at ./work_dirs/modelartifacts/