
TDA4VM: Porting ADAS application into TDA4VM

Part Number: TDA4VM


Tool/software:

Hello Team,

I have a video pipeline application which makes use of tensorflow model and the algorithm has dependencies on OpenCV. Can you please guide in porting this application into TDA4VM?

Best Regards

Chethan

  • Hello,

    Thanks for the question.

    For video pipeline application development, please refer to this link. 

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/vision_apps/docs/user_guide/BUILD_INSTRUCTIONS.html#BUILD_SOURCE

    You can build the demo examples that come with the SDK first. Depending on your application, you can then modify the examples accordingly.

    For TensorFlow, please refer to this link:

    https://github.com/TexasInstruments/edgeai-tidl-tools

    You can git-clone the TIDL edge AI tools repository.

    We recommend using the ONNX format; you can then import your ONNX model into the TIDL flow.
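    If your model is a TensorFlow SavedModel, the tf2onnx converter can produce the ONNX file. As a minimal sketch (the model directory, output path, and opset below are placeholders, and tf2onnx must be installed separately with `pip install tf2onnx`), the conversion command can be assembled like this:

    ```python
    import subprocess
    import sys

    def build_tf2onnx_cmd(saved_model_dir, output_path, opset=11):
        """Assemble the tf2onnx CLI invocation for a TensorFlow SavedModel.

        All paths are placeholders; adjust them for your project, and pick an
        opset that your TIDL tools version supports.
        """
        return [
            sys.executable, "-m", "tf2onnx.convert",
            "--saved-model", saved_model_dir,  # directory containing saved_model.pb
            "--output", output_path,           # destination .onnx file
            "--opset", str(opset),
        ]

    cmd = build_tf2onnx_cmd("./my_tf_model", "my_model.onnx")
    # subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
    print(" ".join(cmd[1:]))
    ```

    The resulting my_model.onnx is what you would then feed into the TIDL import flow.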

    If you already have a TI EVM, you can verify the model inference on the EVM in real time, and test the vision apps in real time as well, including the camera/image sensor and display in your pipeline.

    Best regards

    Wen Li


  • Hello,

    Thanks for the reply.

    In the edgeai-tidl-tools SDK, I can see that scripts are available to compile some pre-configured models. Can you please guide me on compiling a custom ONNX model?

  • Hello,

    The engineer assigned is currently out of office. They will return at the beginning of next week and will be able to provide an update. We appreciate your patience.

    Warm regards,

    Christina

  • Here is the guideline for compiling a custom model:

    1. Please create a model configuration entry in "model_configs.py"; this file is in the downloaded edgeai-tidl-tools folder, under the "../examples/osrt_python/" sub-folder.
    2. I am using a YOLO model as an example and have pasted it below. You should configure your entry with the parameters that fit your model and application.
    3. There is another file, "common_utils.py"; please adjust its parameters according to your model as well.

    Then you can compile your model or run inference with it.
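    As a rough illustration of step 1, a minimal custom entry follows the same shape as the YOLO example pasted further down. The sketch below uses plain dicts and placeholder names (my_custom_model.onnx, the models_base_path value) rather than the repo's create_model_config() helper, so treat it only as a guide to which fields you need to fill in:

    ```python
    import os

    models_base_path = "./models"  # placeholder; model_configs.py defines its own

    # Hypothetical custom-model entry mirroring the structure of the SDK's
    # create_model_config() call; replace every value with your model's.
    my_custom_model_config = {
        "task_type": "detection",          # or "classification", "segmentation"
        "preprocess": {
            "resize": 416,                 # match your model's input resolution
            "crop": 416,
            "data_layout": "NCHW",
        },
        "session": {
            "session_name": "onnxrt",      # ONNX Runtime with the TIDL provider
            "model_path": os.path.join(models_base_path, "my_custom_model.onnx"),
            "input_mean": [0.0, 0.0, 0.0],
            "input_scale": [1.0, 1.0, 1.0],
        },
    }
    ```

    The preprocessing values (resize, crop, mean, scale) must match what the model was trained with, otherwise accuracy after compilation will suffer.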


    Here are the commands to compile a model and run inference with it:

    1. Compile and run inference:
    python3 onnxrt_ep.py -c -m yolox_s_pose_ti_lite

    2. Run inference only:
    python3 onnxrt_ep.py -m yolox_s_pose_ti_lite

    =======================================================================================

    "od-8200_onnxrt_coco_edgeai-mmdet_yolox_nano_lite_416x416_20220214_model_onnx": create_model_config(
    task_type="detection",
    source=dict(
    model_url="">software-dl.ti.com/.../yolox_nano_lite_416x416_20220214_model.onnx",
    meta_arch_url="">software-dl.ti.com/.../yolox_nano_lite_416x416_20220214_model.prototxt",
    infer_shape=True,
    ),
    preprocess=dict(
    resize=416,
    crop=416,
    data_layout="NCHW",
    pad_color=[114, 114, 114],
    resize_with_pad=[True, "corner"],
    reverse_channels=True,
    ),
    session=dict(
    session_name="onnxrt",
    model_path=os.path.join(
    models_base_path, "yolox_nano_lite_416x416_20220214_model.onnx"
    ),
    meta_layers_names_list=os.path.join(
    models_base_path, "yolox_nano_lite_416x416_20220214_model.prototxt"
    ),
    meta_arch_type=6,
    input_mean=[0, 0, 0],
    input_scale=[1, 1, 1],
    input_optimization=True,
    ),
    postprocess=dict(
    formatter="DetectionBoxSL2BoxLS",
    resize_with_pad=True,
    keypoint=False,
    object6dpose=False,
    normalized_detections=False,
    shuffle_indices=None,
    squeeze_axis=None,
    reshape_list=[(-1, 5), (-1, 1)],
    ignore_index=None,
    ),
    extra_info=dict(
    od_type="SSD",
    framework="MMDetection",
    num_images=numImages,
    num_classes=91,
    label_offset_type="80to90",
    label_offset=1,