
AM62A7: I plan to use my own YOLOv5 model and convert it into both an ONNX file and a prototxt file. However, the conversion has failed at present

Part Number: AM62A7


Hello, I plan to use my own YOLOv5 model and convert it into both an ONNX file and a prototxt file, in order to compile it into a format that can run on the AM62A. However, the conversion is currently failing. Could you please help me investigate the cause of the failure? I am using the export.py from github.com/.../edgeai-yolov5

  • I would like to upload a .pt file, but the upload failed.

  • Furthermore, could you provide a demo of the post-processing for YOLOv5? We need to retrieve the model's results and then perform subsequent operations based on those results.

  • I'd like to add that the model I'm currently using is the YOLOv5 model from the ultralytics YOLOv5 v6.0 release: https://github.com/ultralytics/yolov5/blob/v6.0

  • Hello,

    The YOLOv5 expected here is the trained version of the model produced by this repository. The original YOLOv5 was modified in accordance with the notes in the readme on that page.

    I expect that your export error is because you are trying to use the upstream ultralytics version of YOLOv5, as opposed to the TI version (aka ti-lite) of the YOLOv5 model. The edgeai-yolov5 code will produce this ti-lite model in ONNX format along with the prototxt.

    We migrated edgeai-yolov5 (and other training/model repos) into the edgeai-tensorlab repo to resolve some inter-repo dependencies and version conflicts. There is a note on issues like yours, covering how to convert upstream YOLO models into "ti-lite" (aka the AM6xA-friendly version of a NN architecture).

    We produced a webinar a few years ago on our version of YOLOv5. It describes some of the changes from the ultralytics version; I would recommend viewing it.

    BR,
    Reese

  • Hello Reese, thank you very much for your detailed answer. As you said, I failed to convert the ONNX and prototxt files because I was using the ultralytics version of YOLOv5, which is the model my team currently uses. So I want to confirm again: if I want to train a model to run on the AM642A, do I have to train the model based on edgeai_yolov5? Or is it possible to generate the ONNX and prototxt files, and do the subsequent compilation, from the ultralytics version of YOLOv5? In addition, can you provide the edgeai_yolov5 post-processing and decoding code? I would like to process the model's output myself and obtain the correct detection results.

    Best wishes.

    zhuang

  • Hello,

    So I want to confirm again, if I want to train a model to run on AM642A, do I have to train the model based on edgeai_yolov5?

    I believe there is some amount of retraining required for a fully accelerated ultralytics yolov5 to run on AM6xA SoCs. This is mainly for the SiLU->ReLU replacement.

    There were a few model changes noted:

    • SiLU -> ReLU
      • This swap changes the activation distributions, so at least fine-tuning is recommended. You could compile and run without it, but I expect accuracy to suffer.
      • SiLU is also called "swish" depending on the source/document.
    • Slice -> Conv
      • We had previously made changes to replace "Slice" layers in the first portion of the model, but recent releases should relax this requirement.
    • Large maxpool -> cascaded maxpools
      • Large kernels were replaced with cascades of small (3x3) kernels, but maxpool layers carry no trained weights, so no retraining is necessary.
    • Make input dimensions static
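    To illustrate why the maxpool change needs no retraining: a stride-1 5x5 max pool is mathematically identical to two cascaded stride-1 3x3 max pools, since max is associative. The check below is my own illustration (not TI code) using a naive numpy pooling function:

```python
import numpy as np

def maxpool2d(x, k, pad):
    """Naive stride-1 2D max pooling with -inf padding (illustration only)."""
    xp = np.pad(x, pad, constant_values=-np.inf)
    H, W = x.shape
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))

big = maxpool2d(x, 5, 2)                      # one 5x5 pool, padding 2
small = maxpool2d(maxpool2d(x, 3, 1), 3, 1)   # two cascaded 3x3 pools, padding 1
assert np.allclose(big, small)                # identical outputs
```

    The same equivalence lets a 9x9 or 13x13 pool (as in YOLOv5's SPP block) be built from longer cascades of 3x3 pools.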

    For best results, I would suggest starting from your pretrained ultralytics weights and doing 20-100 epochs of fine-tuning with edgeai-yolov5. Otherwise, you can keep the weights as-is but swap SiLU for ReLU. If you don't make this change, then many layers will be delegated to the Arm core and performance will suffer greatly.
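    As a minimal sketch of the SiLU-for-ReLU swap on a loaded PyTorch model (the function name and recursive traversal are my own illustration, not code from the edgeai-yolov5 repo):

```python
import torch.nn as nn

def silu_to_relu(module: nn.Module) -> None:
    """Recursively replace every nn.SiLU activation with nn.ReLU, in place."""
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            silu_to_relu(child)  # descend into nested submodules

# Example on a toy model standing in for a YOLOv5 backbone:
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SiLU(),
                      nn.Sequential(nn.Conv2d(8, 8, 3), nn.SiLU()))
silu_to_relu(model)
assert not any(isinstance(m, nn.SiLU) for m in model.modules())
```

    After the swap, fine-tune before export so the weights adapt to the new activation.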

    I will again mention the model optimization workflow: https://github.com/TexasInstruments/edgeai-tensorlab/issues/7. This details using the mmyolo repo to make YOLO-specific optimizations. One of these is SiLU->ReLU replacement; it recommends a small amount of retraining. This is an option worth considering.

    In addition, can you provide the edgeai_yolov5 post-processing and decoding related codes? I hope to be able to process the output of the model myself and get the correct detection result information.
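    As a generic illustration of the decode step (my own sketch, not TI's official demo; it assumes the exported ONNX emits the standard ultralytics head output of shape (N, 5 + num_classes) per image, columns [cx, cy, w, h, objectness, class scores...], with sigmoid already applied):

```python
import numpy as np

def decode_yolov5(pred, conf_thres=0.25):
    """Filter raw YOLOv5 detections and convert center-format boxes to corners.

    pred: (N, 5 + num_classes) array; returns (xyxy boxes, confidences, class ids).
    """
    conf = pred[:, 4] * pred[:, 5:].max(axis=1)   # objectness * best class score
    cls_id = pred[:, 5:].argmax(axis=1)
    keep = conf > conf_thres
    cxcywh = pred[keep, :4]
    xyxy = np.empty_like(cxcywh)
    xyxy[:, 0] = cxcywh[:, 0] - cxcywh[:, 2] / 2  # x1
    xyxy[:, 1] = cxcywh[:, 1] - cxcywh[:, 3] / 2  # y1
    xyxy[:, 2] = cxcywh[:, 0] + cxcywh[:, 2] / 2  # x2
    xyxy[:, 3] = cxcywh[:, 1] + cxcywh[:, 3] / 2  # y2
    return xyxy, conf[keep], cls_id[keep]
```

    Note that non-maximum suppression (NMS) and rescaling boxes back to the original image size are still needed afterwards; when the prototxt meta-architecture is used, TIDL can instead perform this detection post-processing on-device.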

    Reese