This thread has been locked.


TDA4VM: Algorithm model deployment application problem

Part Number: TDA4VM

Hi TI,

     My recent work has been exploring the deployment of various algorithm models on TDA4.
     The current problem is that an ONNX model officially provided by TI fails to import with the tidlModelImport tool. Details are as follows:

     1. I want to try the YOLOX network.

     2. TI provides a TI-lite version of YOLOX; link: TexasInstruments/edgeai-yolox: YOLOX-ti-lite models are optimized for deployment on TI edge processors (github.com).

    3. At that link you can find the ONNX file of the YOLOX-Nano version officially provided by TI: edgeai-yolox/yolox_nano_ti_lite_26p1_41p8.onnx.link at main · TexasInstruments/edgeai-yolox · GitHub

    4. Following the standard import procedure, the import tool reports the following error:

     ONNX Model (Proto) File  : /projects/files/models/onnx/yolox_nano_ti_lite_26p1_41p8.onnx
     TIDL Network File        : /projects/files/models/bin/yolox_n-sim.net.bin
     TIDL IO Info File        : /projects/files/models/bin/yolox_n-sim.para.bin
     Current ONNX OpSet Version   : 11
     ONNX operator Gather is not suported now..  By passing
     ONNX operator Gather is not suported now..  By passing
     *** WARNING : Mul with constant tensor requires input dimensions of mul layer to be present as part of the network.
     If present, this warning can be ignored. If not, please use open source runtimes offering to run this model or run shape inference on this model before executing import  ***
     Unsupported slice - axis parameters, in Slice

   Please help me look into the above problems. Thank you!

   Regards,

   Kong

  • Hi TI,

           Regarding the above question, I would like to add the following points; please take a look. Thank you!

           1. I tried two YOLO projects officially provided by TI; the corresponding project links are as follows:

    YOLOv5: TexasInstruments/edgeai-yolov5: YOLOv5 in PyTorch > ONNX > CoreML > TFLite (github.com)

    YOLOX: TexasInstruments/edgeai-yolox: YOLOX-ti-lite models are optimized for deployment on TI edge processors (github.com)

          2. I want to try deploying and running the YOLOX model on TDA4. The ONNX file of the YOLOX-Nano version officially provided by TI can be downloaded from the following link:

    edgeai-yolox/yolox_nano_ti_lite_26p1_41p8.onnx.link at main · TexasInstruments/edgeai-yolox · GitHub

         3. I tried to import the officially provided ONNX model. If the following two options are not added to the configuration file:

    #metaArchType = 6

    #metaLayersNamesList = "/projects/files/models/onnx/yolox_n.prototxt"

    the import tool reports the following error:

    ONNX Model (Proto) File  : /projects/files/models/onnx/yolox_nano_ti_lite_26p1_41p8.onnx
    TIDL Network File        : /projects/files/models/bin/yolox_n-sim.net.bin
    TIDL IO Info File        : /projects/files/models/bin/yolox_n-sim.para.bin
    Current ONNX OpSet Version   : 11
    ONNX operator Gather is not suported now..  By passing
    ONNX operator Gather is not suported now..  By passing
    *** WARNING : Mul with constant tensor requires input dimensions of mul layer to be present as part of the network.
    If present, this warning can be ignored. If not, please use open source runtimes offering to run this model or run shape inference on this model before executing import  ***
    Unsupported slice - axis parameters, in Slice
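For context, the two commented-out options from point 3 sit inside a TIDL import configuration file. In outline it looks something like the following; this is a sketch only, where the paths are the ones used in this thread, `modelType = 2` selects ONNX per the importer documentation, and other keys and their defaults are omitted:

```
modelType           = 2   # 2 = ONNX
inputNetFile        = "/projects/files/models/onnx/yolox_nano_ti_lite_26p1_41p8.onnx"
outputNetFile       = "/projects/files/models/bin/yolox_n-sim.net.bin"
outputParamsFile    = "/projects/files/models/bin/yolox_n-sim.para.bin"
# Meta-architecture options from point 3, commented out:
#metaArchType        = 6
#metaLayersNamesList = "/projects/files/models/onnx/yolox_n.prototxt"
```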

         4. I exported the ONNX model and the corresponding prototxt file from the YOLOX project with the following command:

    python tools/export_onnx.py --output-name yoloxs.onnx -n yolox-s

          Then I enabled the corresponding metaArchType and metaLayersNamesList options during conversion:

    metaArchType = 6

    metaLayersNamesList = "/projects/files/models/onnx/yoloxs.prototxt"

        The import tool reports the following error:

    ONNX Model (Proto) File  : /projects/files/models/onnx/yoloxs.onnx
    TIDL Network File        : /projects/files/models/bin/yoloxs_net.bin
    TIDL IO Info File        : /projects/files/models/bin/yoloxs.para.bin
    Current ONNX OpSet Version   : 11
    [libprotobuf FATAL ./google/protobuf/repeated_field.h:1537] CHECK failed: (index) < (current_size_):
    terminate called after throwing an instance of 'google::protobuf::FatalException'
      what():  CHECK failed: (index) < (current_size_):
    fish: Job 1, 'out/tidl_model_import.out /proj…' terminated by signal SIGABRT (Abort)

         5. Turning to the YOLOv5 project, I tried to export the ONNX file for YOLOv5s by executing the following command:

    python export.py --weights yolov5s.pt --include onnx

        The following error occurs:

    Traceback (most recent call last):
      File "/projects/TexasInstruments/edgeai-yolov5/export.py", line 253, in <module>
        main(opt)
      File "/projects/TexasInstruments/edgeai-yolov5/export.py", line 248, in main
        run(**vars(opt))
      File "/projects/TexasInstruments/edgeai-yolov5/export.py", line 171, in run
        model = attempt_load(weights, map_location=device)  # load FP32 model
      File "/projects/TexasInstruments/edgeai-yolov5/models/experimental.py", line 119, in attempt_load
        ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
      File "/anaconda3/envs/torch-2.0.1_cu118_cp310/lib/python3.10/site-packages/torch/serialization.py", line 809, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "/anaconda3/envs/torch-2.0.1_cu118_cp310/lib/python3.10/site-packages/torch/serialization.py", line 1172, in _load
        result = unpickler.load()
      File "/anaconda3/envs/torch-2.0.1_cu118_cp310/lib/python3.10/site-packages/torch/serialization.py", line 1165, in find_class
        return super().find_class(mod_name, name)
    AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from '/projects/TexasInstruments/edgeai-yolov5/models/common.py'>
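The AttributeError in point 5 is a generic pickle failure mode rather than anything YOLO-specific: torch.load unpickles the checkpoint, pickle records each object as a (module, class-name) pair, and loading fails when the running code base does not define that class. Here SPPF is missing from edgeai-yolov5's models/common.py, which suggests the checkpoint was saved with a newer upstream yolov5. A stdlib-only sketch of the mechanism, with a made-up module and class standing in for models.common and the layer:

```python
# Demonstrates why torch.load raises "Can't get attribute 'SPPF'":
# pickle stores objects by (module, name) and re-imports them on load.
import pickle
import sys
import types

# Fake module standing in for models.common.
mod = types.ModuleType("fake_models_common")
sys.modules["fake_models_common"] = mod

class SPPF:          # stands in for the layer class stored in the checkpoint
    pass

SPPF.__module__ = "fake_models_common"
mod.SPPF = SPPF

blob = pickle.dumps(SPPF())   # "checkpoint" saved while SPPF exists

del mod.SPPF                  # simulate a code base that lacks SPPF
err = None
try:
    pickle.loads(blob)
except AttributeError as exc:  # same failure mode as the traceback above
    err = exc
print(err)
```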

     

     Based on the above trial and error, I have the following questions:

    1. Considering that the above projects are provided in TI's official GitHub repositories, how can the YOLOv5 ONNX-export error (point 5) be resolved?

    2. Likewise, how can the errors when importing the YOLOX ONNX file into TDA4 bin files (points 3 and 4) be resolved?

    Note: according to the error log, the metaArchType setting in the configuration file may be at fault. However, in the official TI documentation:

    software-dl.ti.com/.../docs/user_guide_html/md_tidl_fsg_meta_arch_support.html

    no metaArchType option corresponding to YOLOX is listed.

    3. During the project, we tried a variety of object-detection models and encountered problems similar to those in point 3. However, these models (e.g. FCOS) do not belong to any of the architectures covered by metaArchType = 1/2/3/4/5/6. Does this mean that for such models we can only try the metaArchType values one by one during conversion?

    Or, to put it another way, if such models (FCOS, etc.) hit problems like point 3 during conversion, does that mean there is no solution for them?

        Regards,

        Kong

  • Hi,

        We have not received a reply on this issue for eight days. Could you please help us resolve it as soon as possible?

        Thank you!

        Regards,

         Kong

  • I have successfully compiled the model using onnxruntime-tidl. I used https://github.com/TexasInstruments/edgeai-benchmark

    which is a wrapper over the underlying tidl_tools / onnxruntime-tidl. 

    Sample configs for edgeai-yolox are here: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/configs/detection_experimental.py#L59

    See the instructions for compiling a custom model: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/docs/custom_models.md
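For anyone following along, the compile step in edgeai-benchmark ultimately drives TI's onnxruntime-tidl build. A rough sketch of that pattern is below; the provider names, option keys, and paths are assumptions based on TI's edgeai-tidl-tools examples, not a verified recipe, and actually running it requires the onnxruntime-tidl fork plus tidl_tools:

```python
# Hedged sketch of offline model compilation with TI's onnxruntime-tidl.
# 'TIDLCompilationProvider' compiles the model and writes artifacts;
# inference later uses 'TIDLExecutionProvider' with the same folder.
compile_options = {
    "tidl_tools_path": "/opt/tidl_tools",    # hypothetical tidl_tools install
    "artifacts_folder": "./tidl_artifacts",  # compiled artifacts land here
}

def compile_model(model_path: str):
    """Create a compilation session (requires the onnxruntime-tidl build)."""
    import onnxruntime as rt  # TI's fork, not stock onnxruntime
    return rt.InferenceSession(
        model_path,
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[compile_options, {}],
        sess_options=rt.SessionOptions(),
    )
```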

  • You can also train and compile your own model using Model Composer in Edge AI Studio

    https://www.ti.com/tool/EDGE-AI-STUDIO