TDA4VM: How to create prototxt file for edgeai-yolov5 pose model?

Part Number: TDA4VM

Tool/software:

Hello,

I am trying to use the edgeai-yolov5 pose implementation. I have listed the exact repo link, dataset, and model head description below for quick reference.

 

Repo: https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose

Dataset: coco-pose

TI SDK: ti-processor-sdk-rtos-j721s2-evm-08_05_00_11

 

head: # YOLOv5 head

  [ [ -1, 1, Conv, [ 768, 1, 1 ] ],

    [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],

    [ [ -1, 8 ], 1, Concat, [ 1 ] ],  # cat backbone P5

    [ -1, 3, C3, [ 768, False ] ],  # 15

 

    [ -1, 1, Conv, [ 512, 1, 1 ] ],

    [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],

    [ [ -1, 6 ], 1, Concat, [ 1 ] ],  # cat backbone P4

    [ -1, 3, C3, [ 512, False ] ],  # 19

 

    [ -1, 1, Conv, [ 256, 1, 1 ] ],

    [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],

    [ [ -1, 4 ], 1, Concat, [ 1 ] ],  # cat backbone P3

    [ -1, 3, C3, [ 256, False ] ],  # 23 (P3/8-small)

 

    [ -1, 1, Conv, [ 256, 3, 2 ] ],

    [ [ -1, 20 ], 1, Concat, [ 1 ] ],  # cat head P4

    [ -1, 3, C3, [ 512, False ] ],  # 26 (P4/16-medium)

 

    [ -1, 1, Conv, [ 512, 3, 2 ] ],

    [ [ -1, 16 ], 1, Concat, [ 1 ] ],  # cat head P5

    [ -1, 3, C3, [ 768, False ] ],  # 29 (P5/32-large)

 

    [ -1, 1, Conv, [ 768, 3, 2 ] ],

    [ [ -1, 12 ], 1, Concat, [ 1 ] ],  # cat head P6

    [ -1, 3, C3, [ 1024, False ] ],  # 32 (P6/64-xlarge)

 

    [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors, nkpt ] ],  # Detect(P3, P4, P5, P6)

  ]

 

This branch (yolo-pose) does not have the ‘utils/proto’ folder that the main branch has, nor does it have a prototxt implementation inside its export.py file. I tried using the proto folder from the main branch, but it doesn’t have the message description for a head with key-point information. So, my questions are as follows:

  1. Is there a script to generate the prototxt file for the yolov5-pose model?
  2. I think I might be able to get away with reusing the prototxt messages from the yolov5 bbox model for the pose implementation, but then I would not be able to run NMS in the graph. However, I need NMS to run in the graph and not on the ARM core. Can I modify the proto files in any way to accommodate the pose head with NMS?
  3. Is the pose head supported by the SDK version I am using, i.e. ‘08_05_00_11’?
  • Hi Bhushan,

    My thought is that if there is no .prototxt file associated with the model, it is not needed. Model-zoo models tend to be complete, and in general YOLOv5 models do not need a .prototxt file except for specific applications. What is the intended application for this model?

    Regards,

    Chris

  • Hi Chris,

    This model is yolov5-pose model which is intended to detect human pose: BBOX + 17 keypoints. We always used prototxt files for yolov5 bbox detection model. The repository I am using does not fall under model zoo.

    So are you suggesting that the yolov5-pose model should be imported without a prototxt in the SDK I am using, and that there is no restriction on the SDK side against importing the pose model? I can give it a try and respond with specifics.
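    For context, the prototxt files we have used so far for the bbox models follow the TIDL meta-architecture format, roughly like the sketch below. This is an illustrative fragment only: the field names are recalled from the yolov5 bbox prototxt files shipped with TI's model zoo, and the input/output tensor names, anchors, and thresholds are model-specific placeholder values, not taken from this pose model.

```text
name: "yolo_v3"
tidl_yolo {
  name: "yolo_v3"
  in_width: 640
  in_height: 640
  yolo_param {
    input: "370"          # head output tensor name (placeholder)
    anchor_width: 10.0
    anchor_width: 16.0
    anchor_height: 13.0
    anchor_height: 30.0
  }
  detection_output_param {
    num_classes: 80
    share_location: true
    background_label_id: -1
    nms_param {
      nms_threshold: 0.65
      top_k: 300
    }
    code_type: CODE_TYPE_YOLO_V5
    keep_top_k: 300
    confidence_threshold: 0.3
  }
  output: "detections"
}
```

    It is the `detection_output_param` block above that tells TIDL to run NMS inside the graph; what is unclear to me is how the 17 keypoints per box would be expressed in this message format.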

  • Hi Bhushan,

    Depending on your use case, a prototxt file may not be needed; I am unsure in your specific case. Can you please include the model, any other files associated with the model you are using, the import file, the inference file, and input data, and I will try it out and give more specific information.

    Regards,

    Chris 

  • Hi Chris,

    Please find the model pytorch file, onnx file, model config yaml, coco-pose data sample, export file and tidl import config file in the following folder: https://drive.google.com/drive/folders/1DXdruZYgUE1GpWHvPKohbUij0qNiclsg?usp=drive_link

    I have used the edgeai-yolov5 (branch: yolo-pose) without any modifications with the coco-pose dataset as mentioned in the repo. I am mentioning the link to repo again for your reference: https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose. Please let me know if you need anything else from my side.

    Thanks,

    Bhushan

  • Hi Bhushan,

    Please use the following Python script; it will generate the corresponding prototxt file for an input ONNX model.

    Run it by:

    python3 ./onnx2proto.py -m model_opt.onnx

    import argparse
    import json
    
    import onnx
    
    
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-m",
        "--model",
        required=True,
        help="ONNX model to convert",
    )
    args = parser.parse_args()
    
    
    def onnx_to_prototxt(onnx_path, output_path):
        """
        Dump key fields of an ONNX model to a prototxt-style text file.
        """
        # onnx.load also validates that the file is a readable ONNX model
        model = onnx.load(onnx_path)
    
        with open(output_path, "w") as f:
            # Key model information
            print(f"ir_version: {json.dumps(model.ir_version)}", file=f)
            print(f"producer_name: {json.dumps(model.producer_name)}", file=f)
            print(f"producer_version: {json.dumps(model.producer_version)}", file=f)
            print(f"model_version: {json.dumps(model.model_version)}", file=f)
    
            # Graph details: one text block per node. For a complete
            # representation you would also iterate over model.graph.input,
            # model.graph.output, model.graph.initializer, etc., and format
            # them into the output file the same way.
            for node in model.graph.node:
                print("node {", file=f)
                print(f"  name: {json.dumps(node.name)}", file=f)
                print(f"  op_type: {json.dumps(node.op_type)}", file=f)
                print(f"  input: {json.dumps(list(node.input))}", file=f)
                print(f"  output: {json.dumps(list(node.output))}", file=f)
                print("}", file=f)
    
    
    onnx_to_prototxt(args.model, args.model.replace(".onnx", ".prototext"))
    
    

    Regards,

    Chris

  • Hi Chris, I am receiving the following error while trying to import the model using TI SDK:

    TIDL Meta PipeLine (Proto) File : ../../test/testvecs/models/magna/TAD3/yolov5n_exp_pose_coco/yolov5n_exp_pose_coco.prototxt
    [libprotobuf ERROR google/protobuf/text_format.cc:309] Error parsing text-format tidl_meta_arch.TIDLMetaArch: 1:11: Message type "tidl_meta_arch.TIDLMetaArch" has no field named "ir_version".
    ERROR: google::protobuf::TextFormat::Parse proto file(../../test/testvecs/models/magna/TAD3/yolov5n_exp_pose_coco/yolov5n_exp_pose_coco.prototxt) FAILED !!!

    I have generated the prototxt file for the same onnx model I shared earlier. Can you also let me know whether the https://drive.google.com/file/d/13O0kRU-dXarmjpntL4l5xPGivPxzslZk/view?usp=drive_link onnx model was created correctly? There is a big difference in file size depending on whether or not I include NMS when exporting to onnx. I used the export.py in the shared folder to export the onnx with NMS.

    Thank you for your help,

    Bhushan

  • Hi Bhushan,

    Please send me details of how you are compiling the model and whether you are using OSRT or TIDLRT. Also, I would recommend you move to a newer TIDL version like 11.08, as we no longer support updates to 8.5.
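    For reference, an OSRT compile session is typically assembled along the lines of the sketch below, following the pattern in TI's edgeai-tidl-tools examples. The option values and paths are placeholders, and `make_session_args` is just an illustrative helper for gathering the arguments that would be passed to `onnxruntime.InferenceSession`.

```python
# Placeholder compile options in the style of edgeai-tidl-tools examples;
# adjust paths and values for your setup.
compile_options = {
    "tidl_tools_path": "/opt/tidl_tools",   # assumption: TIDL tools location
    "artifacts_folder": "./artifacts",      # where compiled artifacts are written
    "tensor_bits": 8,
    "accuracy_level": 1,
}


def make_session_args(model_path, options):
    """Assemble the keyword arguments for an OSRT compile session."""
    # The TIDL compilation provider runs first; unsupported layers fall
    # back to the CPU execution provider.
    return dict(
        path_or_bytes=model_path,
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[options, {}],
    )
```

    With `onnxruntime` installed from the edgeai-tidl-tools setup, the returned kwargs would be passed as `onnxruntime.InferenceSession(**make_session_args("model_opt.onnx", compile_options))` to run the compilation.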

    Regards,

    Chris