This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

AM68A: yolov8 object detection model implementation on am68a

Part Number: AM68A


Tool/software:

I am using a TI AM68A board. I have a custom-trained YOLOv8 model and need to port it to the board for object detection.

First, I exported the model to ONNX and tried to convert it to a deployable format using the edgeai-tidl-tools. However, the sample Python conversion script does not work with my model and throws errors. How do I convert the model to a deployable format?

Also, I need to use both custom and built-in GStreamer plugins for my streaming solution (e.g. the onnxruntime plugin and rtmpsink), but GStreamer reports an "element not found" error. How do I fix this? I am afraid that if I reinstall GStreamer, all the configuration already done on the TI board will be lost, so I have not tried it. I also need to use apt for installing packages; is it possible to integrate that as well?

  • Hi,

    Thanks for the question; please expect a response from our analytics expert.

    Thanks

  • Hi,

    Could you provide the complete error logs from compiling your model with debug_level set to 2? This will allow us to debug the issues you are having. If it is also possible to share the ONNX model you are trying to run within edgeai-tidl-tools, that would be beneficial as well.

    Regarding your gstreamer question, can you make a separate E2E post describing the error? This will allow the question to go to the appropriate expert. 
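    In the meantime, one quick check that may narrow down an "element not found" error: ask GStreamer itself whether the element is registered before considering a reinstall. This is only a sketch using rtmpsink as the example element; substitute whichever element your pipeline reports as missing.

    ```shell
    # If gst-inspect-1.0 can describe the element, the error comes from the
    # pipeline string rather than the installation; if it cannot, usually only
    # the missing plugin package is needed, not a full GStreamer reinstall.
    if ! command -v gst-inspect-1.0 >/dev/null 2>&1; then
        status="gst-inspect-1.0 not found on PATH"
    elif gst-inspect-1.0 rtmpsink >/dev/null 2>&1; then
        status="rtmpsink available"
    else
        status="rtmpsink missing (plugin not installed or blacklisted)"
    fi
    echo "$status"
    ```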

    Best,

    Asha

  • Yeah, sure. Here is the error:

    **********  Frame Index 1 : Running float inference **********
    2024-06-04 06:19:01.381133288 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running ReorderInput node. Name:'ReorderInput' Status Message: /onnxruntime/onnxruntime/contrib_ops/cpu/nchwc_ops.cc:17 virtual onnxruntime::common::Status onnxruntime::contrib::ReorderInput::Compute(onnxruntime::OpKernelContext*) const X_rank == 4 was false.

     

    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
        self.run()
      File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 239, in run_model
        imgs, output, proc_time, sub_graph_time, height, width  = infer_image(sess, input_images, config)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 129, in infer_image
        output = list(sess.run(None, {input_name: input_data}))
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
        return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ReorderInput node. Name:'ReorderInput' Status Message: /onnxruntime/onnxruntime/contrib_ops/cpu/nchwc_ops.cc:17 virtual onnxruntime::common::Status onnxruntime::contrib::ReorderInput::Compute(onnxruntime::OpKernelContext*) const X_rank == 4 was false.


    The steps I followed:

    1. Install the edgeai-tidl-tools.

    2. Update the model_configs.py file to include my custom model:
    models_configs = {
        'custom-yolov8': {
            'model_path': '/home/root/examples/osrt_python/ort/Ros/best.onnx',
            'artifacts_dir': '/home/root/examples/osrt_python/ort/Ros/artifacts',
            'model_type': 'od',
            'mean': [0.485, 0.456, 0.406],
            'scale': [0.229, 0.224, 0.225],
            'num_images': 10,
        },
    }

    3. Run onnxrt_ep.py with my custom model:
    root@6e569d2946f0:/home/root/examples/osrt_python/ort# python3 onnxrt_ep.py -c -m custom-yolov8

    complete output:

    root@6e569d2946f0:/home/root/examples/osrt_python/ort# python3 onnxrt_ep.py -c -m custom-yolov8
    Available execution providers :  ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

     

    Running 1 Models - ['custom-yolov8']

     

     

    Running_Model :  custom-yolov8  

     

     

    Running shape inference on model /home/root/examples/osrt_python/ort/Ros/best.onnx

     

     

    Preliminary subgraphs created = 3 
    Final number of subgraphs created are : 3, - Offloaded Nodes - 229, Total Nodes - 233 
    Graph Domain TO version : 17
    ************** Frame index 1 : Running float import ************* 
    ****************************************************
    **                ALL MODEL CHECK PASSED          **
    ****************************************************

     

    The soft limit is 2048
    The hard limit is 2048
    MEM: Init ... !!!
    MEM: Init ... Done !!!
    0.0s:  VX_ZONE_INIT:Enabled
    0.16s:  VX_ZONE_ERROR:Enabled
    0.21s:  VX_ZONE_WARNING:Enabled
    0.6233s:  VX_ZONE_INIT:[tivxInit:190] Initialization Done !!!

     

    **********  Frame Index 1 : Running float inference **********
    2024-06-04 06:19:01.381133288 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running ReorderInput node. Name:'ReorderInput' Status Message: /onnxruntime/onnxruntime/contrib_ops/cpu/nchwc_ops.cc:17 virtual onnxruntime::common::Status onnxruntime::contrib::ReorderInput::Compute(onnxruntime::OpKernelContext*) const X_rank == 4 was false.

     

    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
        self.run()
      File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 239, in run_model
        imgs, output, proc_time, sub_graph_time, height, width  = infer_image(sess, input_images, config)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 129, in infer_image
        output = list(sess.run(None, {input_name: input_data}))
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
        return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ReorderInput node. Name:'ReorderInput' Status Message: /onnxruntime/onnxruntime/contrib_ops/cpu/nchwc_ops.cc:17 virtual onnxruntime::common::Status onnxruntime::contrib::ReorderInput::Compute(onnxruntime::OpKernelContext*) const X_rank == 4 was false.

     

    MEM: Deinit ... !!!
    MEM: Alloc's: 27 alloc's of 138980105 bytes 
    MEM: Free's : 27 free's  of 138980105 bytes 
    MEM: Open's : 0 allocs  of 0 bytes 
    MEM: Deinit ... Done !!!

  • Hi,

    Can you make sure that you have set debug_level=2 (see common_utils.py) and attach the logs? This will provide layer-level information. Based on the onnxruntime error you are seeing, this might be caused by a layer in your model.
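    As a general note, the "X_rank == 4 was false" message means the tensor reaching onnxruntime's CPU ReorderInput node was not 4-dimensional. One common cause is a preprocessed image that lost its batch axis, so it can be worth asserting the shape right before sess.run(). This is only a sketch with a hypothetical helper name (ensure_nchw); it is not part of the edgeai-tidl-tools examples.

    ```python
    import numpy as np

    # Sketch: make sure the preprocessed tensor is 4-D NCHW before sess.run().
    # A YOLOv8 ONNX export typically expects shape (1, 3, H, W); an input that
    # lost its batch axis, e.g. (3, H, W), fails ORT's ReorderInput rank check.
    def ensure_nchw(x: np.ndarray) -> np.ndarray:
        if x.ndim == 3:                      # (C, H, W) -> add the batch axis
            x = np.expand_dims(x, 0)
        assert x.ndim == 4, f"expected a 4-D NCHW input, got shape {x.shape}"
        return x
    ```

    If the shape check passes and the error persists, the rank mismatch is more likely happening inside one of the CPU-delegated subgraphs, which the debug_level=2 logs should reveal.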

    Best,

    Asha

  •

    However, I don't think that is the issue. The tool has been successfully converting all given models. The issue is probably related to the configuration of my custom model. I suspect there might be some incorrect values in my model configuration structure. I developed the ONNX model by creating a YOLOv8 model using the Ultralytics library and then converting it with the Ultralytics export function. You can refer to the process here:

    https://docs.ultralytics.com/modes/export/#key-features-of-export-mode

    I included my model in the tool's configuration file and ran the Python script, setting the model to a custom model. The tool's documentation didn't provide complete guidance on how to configure it. If possible, could you try this process and verify if I did everything properly? If you get the result without any errors, it will indicate that I missed something, and you can provide guidance on how to fix it.

    This is the model configuration I used:
        'custom-yolov8': {
            'model_path': '/home/root/examples/osrt_python/ort/Ros/best.onnx',
            'artifacts_dir': '/home/root/examples/osrt_python/ort/Ros/artifacts',
            'model_type': 'od',  
            'mean': [0.485, 0.456, 0.406], 
            'scale': [0.229, 0.224, 0.225],  
            'num_images': 10,

        },

  • Hi,

    It looks like you tried to attach a file (such as the debug log or your model), but it is not getting included on E2E.

    You said: "The issue is probably related to the configuration of my custom model. I suspect there might be some incorrect values in my model configuration structure."

    In this case, before you tried compiling for C7x, did you verify that your model could compile with "ARM only mode"? The process is documented here https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_osr_debug.md#model-compilation-issues

    But essentially you would run python3 onnxrt_ep.py -d -m custom-yolov8

    Best,

    Asha

  • I am still having some issues. Even when I run it with -d enabled, it expects more details such as the session type, od_type, etc., and I have no idea how to find all of that. It would be helpful if you could try this yourself and guide me from your experience. I am using a YOLOv8 model; you can simply download one from Ultralytics, or generate one, and try to convert it. That will give you an understanding of the issues.
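    For reference, the stock object-detection entries in model_configs.py carry more fields than my entry does. Based on those existing entries, I assume a fuller entry would look something like the sketch below; the values marked as guesses (especially od_type, since there is no YOLOv8 value in the stock file) are exactly what I need TI to confirm.

    ```python
    # Hypothetical fuller entry, modeled on the stock OD entries in
    # model_configs.py. The YOLOv8-specific values are unconfirmed guesses.
    'custom-yolov8': {
        'model_path': '/home/root/examples/osrt_python/ort/Ros/best.onnx',
        'artifacts_dir': '/home/root/examples/osrt_python/ort/Ros/artifacts',
        'session_name': 'onnxrt',   # runtime session, as in the stock entries
        'model_type': 'od',
        'od_type': 'YoloV5',        # guess: closest stock value to YOLOv8
        'mean': [0.485, 0.456, 0.406],
        'scale': [0.229, 0.224, 0.225],
        'num_images': 10,
    },
    ```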
    Additionally, I found a document that describes the YOLOv8 steps, but the link provided returns "not found":
    https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_od_meta_arch.md