SK-TDA4VM: EdgeAI Apps can't run sample OD networks compiled using EdgeAI TIDL Tools

Part Number: SK-TDA4VM

Hi,

https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1086904/sk-tda4vm-i-can-t-compile-any-sample-model-on-edgeai-tidl-tools

I had a model compile issue like the one in the link above; I have now compiled the sample models using EdgeAI TIDL Tools
and copied the compiled models to the SK-TDA4VM's model zoo directory, laid out as below (the copy step itself is sketched after the tree):

od-ort-ssd-lite_mobilenetv2_fpn
│
├── param.yaml
│
├── artifacts
│   ├── allowedNode.txt
│   ├── boxeslabels_tidl_io_1.bin
│   ├── boxeslabels_tidl_net.bin
│   ├── boxeslabels_tidl_net.bin.layer_info.txt
│   ├── boxeslabels_tidl_net.bin.svg
│   ├── boxeslabels_tidl_net.bin_netLog.txt
│   ├── onnxrtMetaData.txt
│   └── runtimes_visualization.svg
│
└── model
    ├── ssd-lite_mobilenetv2_fpn.onnx
    └── ssd-lite_mobilenetv2_fpn.prototxt
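
For reference, this is roughly how the compiled directory was transferred to the board (an illustrative command only; the EVM IP address is a placeholder, and /opt/model_zoo/test matches the model_path in my config below):

scp -r od-ort-ssd-lite_mobilenetv2_fpn root@<evm-ip>:/opt/model_zoo/test/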

After copying the model files, I tested CL/SS/OD models using the C++ based EdgeAI Apps.
The CL and SS networks work, but the OD networks do not.
(The OD networks do work when run with EdgeAI TIDL Tools on the PC.)

Here is the log when running the OD network with the C++ based EdgeAI Apps:

root@j7-evm:/opt/edge_ai_apps/apps_cpp# ./bin/Release/app_edgeai ../configs/object_detection.yaml

 Number of subgraphs:1 , 107 nodes delegated out of 107 nodes

APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=5) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
   664.341727 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
   664.341809 s:  VX_ZONE_INIT:Enabled
   664.341815 s:  VX_ZONE_ERROR:Enabled
   664.341820 s:  VX_ZONE_WARNING:Enabled
   664.342515 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
   664.342834 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
[01:49:16.000.000439]:DEBUG:[OutputInfo:0438] CONSTRUCTOR
[01:49:16.000.000452]:DEBUG:[GstWrapperBuffer:0094] CONSTRUCTOR
[01:49:16.000.000472]:DEBUG:[OutputInfo:0438] CONSTRUCTOR
[01:49:16.000.000484]:DEBUG:[GstWrapperBuffer:0094] CONSTRUCTOR
[01:49:16.000.000504]:DEBUG:[OutputInfo:0438] CONSTRUCTOR
[01:49:16.000.000534]:DEBUG:[MosaicInfo:0649] CONSTRUCTOR
[01:49:16.000.000549]:DEBUG:[FlowInfo:1011] CONSTRUCTOR
libtidl_onnxrt_EP loaded 0x34d2a460
Final number of subgraphs created are : 1, - Offloaded Nodes - 494, Total Nodes - 494
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=4) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
   566.016988 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
   566.017090 s:  VX_ZONE_INIT:Enabled
   566.017097 s:  VX_ZONE_ERROR:Enabled
   566.017103 s:  VX_ZONE_WARNING:Enabled
   566.017880 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
   566.018154 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
[01:49:31.15198.198457]:DEBUG:[DlTensor:0062] DEFAULT CONSTRUCTOR
[01:49:31.15198.198526]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15198.198549]:DEBUG:[~DlTensor:0136] DESTRUCTOR
[01:49:31.15198.198577]:DEBUG:[DlTensor:0062] DEFAULT CONSTRUCTOR
[01:49:31.15198.198589]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15198.198600]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15198.198610]:DEBUG:[~DlTensor:0136] DESTRUCTOR
[01:49:31.15198.198628]:DEBUG:[ORTInferer:0158] CONSTRUCTOR
[01:49:31.15198.198890]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15198.198915]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15198.198927]:DEBUG:[DlTensor:0080] COPY CONSTRUCTOR
[01:49:31.15199.199143]:DEBUG:[InferencePipe:0107] CONSTRUCTOR
[01:49:31.15199.199168]:DEBUG:[SubFlowInfo:0913] CONSTRUCTOR
[01:49:31.15232.232934]:DEBUG:[GstPipe:0134] CONSTRUCTOR
[01:49:31.15232.232985]:INFO:[GstPipe:0136] SRC_CMDS:
[01:49:31.15232.232997]:INFO:[GstPipe:0139]  multifilesrc location=/opt/edge_ai_apps/data/images/%04d.jpg loop=1 index=0 stop-index=-1 caps=image/jpeg,framerate=1/1 ! jpegdec  ! videoscale ! video/x-raw, width=1280, height=720 ! tiovxcolorconvert ! video/x-raw, format=NV12 ! tiovxmultiscaler name=multiscaler_split_20 multiscaler_split_20. ! queue ! video/x-raw, width=1280, height=720 ! tiovxcolorconvert out-pool-size=4 target=1 ! video/x-raw, format=RGB ! appsink drop=true max-buffers=2 name=flow0_sensor0 multiscaler_split_20. ! queue ! video/x-raw, width=512, height=512 ! tiovxdlpreproc data-type=3 channel-order=0 mean-0=0.000000 mean-1=0.000000 mean-2=0.000000 scale-0=1.000000 scale-1=1.000000 scale-2=1.000000 tensor-format=rgb out-pool-size=4 ! application/x-tensor-tiovx ! appsink drop=true max-buffers=2 name=flow0_pre_proc0
[01:49:31.15233.233014]:INFO:[GstPipe:0141] SINK_CMD: appsrc format=GST_FORMAT_TIME is-live=true block=true do-timestamp=true name=flow0_post_proc0 ! tiovxcolorconvert ! video/x-raw,format=NV12 ! queue ! mosaic0.sink_0
appsrc format=GST_FORMAT_TIME block=true num-buffers=1 name=background0 ! tiovxcolorconvert ! video/x-raw,format=NV12 ! queue ! mosaic0.background
tiovxmosaic name=mosaic0
  sink_0::startx=320  sink_0::starty=180  sink_0::width=1280  sink_0::height=720
 ! video/x-raw, format=NV12, width=1920, height=1080 ! kmssink sync=false driver-name=tidss

[01:49:31.15319.319066]:DEBUG:[GstWrapperBuffer:0094] CONSTRUCTOR
[01:49:31.15319.319210]:DEBUG:[GstWrapperBuffer:0094] CONSTRUCTOR
[01:49:31.15319.319221]:INFO:[inferenceThread:0227] Starting inference thread.
MEM: ERROR: Alloc failed with status = 12 !!!
   665.294935 s:  VX_ZONE_ERROR:[tivxMemBufferAlloc:80] Shared mem ptr allocation failed
Segmentation fault (core dumped)

Is there additional work required to run the OD models?

I have attached the compiled OD models.
Download: https://drive.google.com/file/d/1-cuvAOlSoC0TaRyt8KKmNsWTP7hxzxGL/view?usp=sharing

And here is the modified 'object_detection.yaml':

title: "Object Detection Demo"
log_level: 2
inputs:
    input0:
        source: /dev/video1
        format: jpeg
        width: 1280
        height: 720
        framerate: 30
    input1:
        source: /opt/edge_ai_apps/data/videos/video_0000_h264.mp4
        format: h264
        width: 1280
        height: 720
        framerate: 25
        loop: False
    input2:
        source: /opt/edge_ai_apps/data/images/%04d.jpg
        width: 1280
        height: 720
        index: 0
        framerate: 1
        loop: True
models:
    model0:
        model_path: /opt/model_zoo/TVM-OD-5020-yolov3-mobv1-gluon-mxnet-416x416
        viz_threshold: 0.6
    model1:
        model_path: /opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320
        viz_threshold: 0.6
    model2:
        model_path: /opt/model_zoo/ONR-OD-8050-ssd-lite-regNetX-800mf-fpn-bgr-coco-512x512
        viz_threshold: 0.6
    model3:
        model_path: /opt/model_zoo/test/od-ort-ssd-lite_mobilenetv2_fpn
        viz_threshold: 0.3
    model4:
        model_path: /opt/model_zoo/test/od-tfl-ssd_mobilenet_v2_300_float
        viz_threshold: 0.3
outputs:
    output0:
        sink: kmssink
        width: 1920
        height: 1080
    output1:
        sink: /opt/edge_ai_apps/data/output/videos/output_video.mkv
        width: 1920
        height: 1080
    output2:
        sink: /opt/edge_ai_apps/data/output/images/output_image_%04d.jpg
        width: 1920
        height: 1080

flows:
    flow0:
        input: input2
        models: [model3]
        outputs: [output0]
        mosaic:
            mosaic0:
                width:  1280
                height: 720
                pos_x:  320
                pos_y:  180


Regards,
Lee

  • Hi Lee,

    We will get this checked. Did you try running the Python edge_ai_apps? Is the behavior similar?

    Thanks for sharing the precompiled models as well. We mostly rely on a param.yaml file, which gives the edge_ai_apps the information needed to set up the pipeline.

    This param.yaml file is kept with every model available in the ModelZoo. I am not sure whether the import tool generates these param.yaml files.

    Regards,
    Shyam

  • Hi Shyam,

    I checked the Python based EdgeAI Apps. The result was similar.
    For the Python apps test, I modified the model's param.yaml file. (If I don't modify param.yaml, it returns an error.)
    I added the input_dataset and preprocess (reverse_channels) entries, copied from a default sample model's param.yaml file; a rough sketch of the edited file is below.
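
    For illustration, the edited file ended up shaped roughly like this (a minimal sketch; the field names follow the sample models' param.yaml, the values here are assumptions, and this is not the full file):

    task_type: detection
    input_dataset:
        name: coco
    preprocess:
        data_layout: NCHW
        resize: 512
        crop: 512
        reverse_channels: false
    session:
        session_name: onnxrt
        model_path: model/ssd-lite_mobilenetv2_fpn.onnx
        artifacts_folder: artifacts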

    Here is the ONNX OD model's run log:

    root@j7-evm:/opt/edge_ai_apps/apps_python# ./app_edgeai.py -n -v ../configs/object_detection.yaml
    libtidl_onnxrt_EP loaded 0x22976d30
    Final number of subgraphs created are : 1, - Offloaded Nodes - 494, Total Nodes - 494
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       196.995324 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       197.000692 s:  VX_ZONE_INIT:Enabled
       197.000721 s:  VX_ZONE_ERROR:Enabled
       197.000726 s:  VX_ZONE_WARNING:Enabled
       197.003385 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       197.005921 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
    [GST SRC STR]
    [FLOW 0]
    filesrc location=/opt/edge_ai_apps/data/videos/video_0000_h264.mp4 ! qtdemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12  ! tiovxmultiscaler name=split_01
    split_01. ! queue ! video/x-raw, width=512, height=512 ! tiovxdlpreproc data-type=10 channel-order=0 mean-0=0.000000 mean-1=0.000000 mean-2=0.000000 scale-0=1.000000 scale-1=1.000000 scale-2=1.000000 tensor-format=bgr out-pool-size=4 ! application/x-tensor-tiovx ! appsink name=pre_0 max-buffers=2 drop=true
    split_01. ! queue ! video/x-raw, width=1280, height=720 ! tiovxcolorconvert target=1 out-pool-size=4 ! video/x-raw, format=RGB ! appsink name=sen_0 max-buffers=2 drop=true
    
    [GST SINK STR]
    appsrc format=GST_FORMAT_TIME is-live=true block=true do-timestamp=true name=post_0 ! tiovxcolorconvert ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! mosaic_0.sink_0
    appsrc format=GST_FORMAT_TIME block=true num-buffers=1 name=background_0 ! tiovxcolorconvert ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! mosaic_0.background
    tiovxmosaic name=mosaic_0
    sink_0::startx=320  sink_0::starty=180  sink_0::width=1280   sink_0::height=720
    ! video/x-raw,format=NV12, width=1920, height=1080 ! kmssink sync=false driver-name=tidss

    [  191.241171] unclassified data detected!
    [  191.266497] unclassified data detected!
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
        self.run()
      File "/usr/lib/python3.8/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "/opt/edge_ai_apps/apps_python/infer_pipe.py", line 90, in pipeline
        result = self.run_time(input_img)
      File "/opt/edge_ai_apps/apps_python/run_times.py", line 119, in __call__
        return self.interpreter.run(None, {self.input_name:input_img})
      File "/usr/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run
        return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(uint8))
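
    The last line indicates the exported ONNX model declares a uint8 input while the pipeline fed it float32. One way to confirm this on the PC is to inspect the model input with onnxruntime (a minimal sketch; it assumes a plain CPU session without the TIDL EP and a local copy of the model):

    import onnxruntime as ort

    # Open a plain CPU session just to read the declared input type and shape.
    sess = ort.InferenceSession("model/ssd-lite_mobilenetv2_fpn.onnx")
    for inp in sess.get_inputs():
        print(inp.name, inp.type, inp.shape)  # e.g. tensor(uint8) [1, 3, 512, 512]

    If it reports tensor(uint8), the preprocessing stage (the tiovxdlpreproc data-type and the param.yaml input settings) needs to deliver uint8 tensors rather than float.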
    

    Regards,
    Lee

  • An update on this thread.

    I tried another approach: compiling the OD sample model using EdgeAI Benchmark.
    The model compiles successfully with EdgeAI Benchmark,
    and it creates a "param.yaml" file for the model similar to the sample models' "param.yaml" files.
    The compiled model also works on the SK-TDA4VM (EdgeAI Apps).

    I don't know why the model compiled with EdgeAI TIDL Tools is not working.
    I used the same sample model files, but the result is different.

    Regards,
    Lee

  • Hi Manu, Kumar,

    Can you please suggest what Lee should use to run the models properly on the EdgeAI SDK:
    EdgeAI Benchmark or EdgeAI TIDL Tools?


    Regards,
    Shyam

  • edgeai-tidl-tools provides the foundational compilation and inference tools; the compilation and inference tools that edgeai-benchmark uses come from edgeai-tidl-tools. edgeai-tidl-tools itself only provides a few basic compilation scripts as examples.

    However, edgeai-benchmark has 120+ models integrated, and those compiled models are frequently tested to make sure that they work in the EdgeAI SDK (SK-TDA4VM).

    When it comes to running the models in the SDK, edgeai-benchmark will be more reliable. However, to understand the basics of model compilation, the scripts and examples in edgeai-tidl-tools are easier to follow.