
PROCESSOR-SDK-AM62A: YOLOX deployment problem

Part Number: PROCESSOR-SDK-AM62A


I tried compiling the original YOLOX model and the compilation was successful. But on deploying it to the board, the Object Detection application does not start. The terminal shows that the Object Detection application was selected, but no inference-time values appear. Could you please shed some light on what the issue may be?

  • Hi Abhy,

    Thank you for your query.

    Are you talking about the original YOLOX model from Megvii or our modified version of YOLOX? We made a few changes to make it more embedded-friendly. See here: https://github.com/TexasInstruments/edgeai-yolox. Please use models with the yolox-ti-lite naming convention. May I also ask how you compiled the model?

    How are you starting this application? Would you please share the run command? If you are using edgeai-gst-apps, the default configuration suppresses output. Adding -n disables this suppression so you can see the output logs, which may describe why the application does not appear to be running.
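
    For example, from the apps_python directory (this mirrors the stock demo layout, with the standard object-detection config):

        ./app_edgeai.py ../configs/object_detection.yaml -n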

    Please include any logs, commands, or files that will help me debug the issue!

    Best
    -Reese

  • Thanks, Mr Reese, for your reply.

    I am trying the original YOLOX model from Megvii. Is it not compatible with the TI AM62A as such? Is it mandatory to make changes to it?

    I will get the logs and reply with the same too.

    Regards,
    -Abhy

  • Hi Abhy,

    The original Megvii model can compile and import, but some layers are not supported on the target's accelerator. Those layers will run on the Arm core and incur a performance penalty from IPC. I have seen some struggles to compile this architecture, because the unsupported layers may require an explicit deny list in the compilation options, as sketched below.
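
    As a rough sketch (a hedged illustration based on the edgeai-tidl-tools examples, not a drop-in script; the model and artifact paths are hypothetical and option names can differ between SDK versions), compilation with a deny list looks like this:

        import onnxruntime as rt

        # Options for the TIDL compilation provider. "deny_list:layer_type"
        # keeps the listed ONNX op types on the Arm core instead of the accelerator.
        compile_options = {
            "tidl_tools_path": "/path/to/tidl_tools",     # hypothetical path
            "artifacts_folder": "model-artifacts/yolox",  # hypothetical path
            "tensor_bits": 8,
            "deny_list:layer_type": "MaxPool",            # example op type only
        }

        sess = rt.InferenceSession(
            "yolox_s.onnx",  # hypothetical model file
            providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
            provider_options=[compile_options, {}],
            sess_options=rt.SessionOptions(),
        )
        # Running a few representative frames through sess.run(...) performs
        # calibration and writes the compiled artifacts to artifacts_folder.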

    Please see this page on our YOLOX repo for information on how this model differs from the original: https://github.com/TexasInstruments/edgeai-yolox/blob/main/README_2d_od.md. There are a few changes from the original architecture, but nothing drastic (SiLU -> ReLU, a modified initial focus layer, and large maxpool kernels replaced with a cascade of smaller ones, as illustrated below). If you are trying to use pretrained weights from YOLOX, most weights should transfer well to the architecture we support, but fine-tuning would still be a good decision in this scenario.
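
    To illustrate the maxpool change (a minimal PyTorch sketch, not code from the repo): with stride 1 and "same" padding, two cascaded 5x5 max pools cover the same 9x9 receptive field as one large pool, so the outputs are identical while each individual kernel stays accelerator-friendly.

        import torch
        import torch.nn as nn

        # Original SPP-style large kernel
        pool9 = nn.MaxPool2d(kernel_size=9, stride=1, padding=4)

        # Cascade of smaller kernels with the same effective receptive field
        cascade = nn.Sequential(
            nn.MaxPool2d(kernel_size=5, stride=1, padding=2),
            nn.MaxPool2d(kernel_size=5, stride=1, padding=2),
        )

        x = torch.randn(1, 256, 20, 20)
        assert torch.equal(pool9(x), cascade(x))  # max-of-max == max over the union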

    Could you tell us more about what you're trying to do?

    Best,
    Reese

  • Hi Reese,

    I am sharing the model that I have used here: https://drive.google.com/file/d/1Y38cH8WyRCkAx1n015bT0UuUyJyGPbvi/view?usp=drive_link

    I am not getting any errors, but the detection screen does not open for this model. I am attaching the log that I get on the command prompt. Please help.

    root@am62axx-evm:/opt/edgeai-gst-apps/apps_python# ./app_edgeai.py ../configs/object_detection.yaml -n
    libtidl_onnxrt_EP loaded 0x3db9be40
    Final number of subgraphs created are : 2, - Offloaded Nodes - 268, Total Nodes - 277
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
        38.942860 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
        38.950740 s:  VX_ZONE_INIT:Enabled
        38.951450 s:  VX_ZONE_ERROR:Enabled
        38.951890 s:  VX_ZONE_WARNING:Enabled
        38.956681 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
        38.958354 s:  VX_ZONE_INIT:[tivxHostInitLocal:96] Initialization Done for HOST !!!
    ==========[INPUT PIPELINE(S)]==========
    
    [PIPE-0]
    
    v4l2src device=/dev/video-usb-cam0 brightness=133 contrast=5 saturation=83 ! capsfilter caps="image/jpeg, width=(int)1280, height=(int)720;" ! jpegdec ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=split_01
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=True name=sen_0
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)640;" ! tiovxdlpreproc out-pool-size=4 tensor-format=1 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=True name=pre_0
    
    
    ==========[OUTPUT PIPELINE]==========
    
    appsrc do-timestamp=True format=3 block=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1280, height=(int)720;" ! queue ! mosaic_0.sink_0
    
    tiovxmosaic target=1 background=/tmp/background_0 name=mosaic_0 src::pool-size=4
    sink_0::startx="<320>" sink_0::starty="<150>" sink_0::widths="<1280>" sink_0::heights="<720>"
    ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1920, height=(int)1080;" ! queue ! tiperfoverlay title=Object Detection ! kmssink sync=False max-lateness=5000000 qos=True processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=31 force-modesetting=True
    
    [  316.580919] EXT4-fs (mmcblk1p2): error count since last fsck: 1
    [  316.586937] EXT4-fs (mmcblk1p2): initial error at time 1697619348: ext4_validate_inode_bitmap:105
    [  316.595870] EXT4-fs (mmcblk1p2): last error at time 1697619348: ext4_validate_inode_bitmap:105

  • Hi Abhy,

    I do not see any errors being reported. Are you saying the display does not show anything? I assume you have a USB camera attached; if not, modify configs/object_detection.yaml to use the video-based input. I have a few suggestions for debugging the model otherwise:

    • Include -v in the command-line options for the app_edgeai.py script
    • In the calling Linux shell, export TIDL_RT_DEBUG=1 for more logging
    • Run /opt/vx_app_arm_remote_log.out in the background
    • Open /usr/lib/python3.10/site-packages/edgeai_dl_inferer.py and modify the runtime_options at line 162 to include "debug_level": 2 (see the sketch below)

    These will provide more information about whether the accelerator is struggling with the model. Best results will come from using our fork of the YOLOX model rather than the original Megvii one, since we have ensured all layers are accelerated.
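
    As a rough sketch of that last change (the surrounding file contents are paraphrased, not exact, and the artifacts path shown is hypothetical), the runtime options dictionary would gain a "debug_level" entry:

        # In edgeai_dl_inferer.py, where runtime_options is built:
        runtime_options = {
            "artifacts_folder": "/opt/model_zoo/<model>/artifacts",  # hypothetical
            "debug_level": 2,  # 0 is quiet; 2 prints verbose TIDL runtime logs
        }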

    Best,
    Reese