
PROCESSOR-SDK-AM62A: Can I stream 3280x2464 resolution video through apps_python?

Part Number: PROCESSOR-SDK-AM62A


Hello Expert,

I tried to stream 4K through apps_python by modifying gst_wrapper.py and the config file.

Attached are the config YAML file and my gst_wrapper.py changes:

title: "Preview"
inputs:
    input0:
        source: /dev/video-imx219-cam1
        subdev-id: /dev/v4l-imx219-subdev1
        format: rggb10
        width: 3280
        height: 2464
        framerate: 15
models:
    model0:
        model_path: /opt/model_zoo/mobileNetV2
        topN: 1
outputs:
    output0:
        sink: kmssink
        width: 3280
        height: 2464
flows:
    flow0: [input0,model0,output0]

gst_wrapper.py, L#645:

property = {
    "sensor-name": sen_name,
    # Select the sensor's DCC ISP tuning file based on the sensor ID
    "dcc-isp-file": "/opt/imaging/%s/linear/7140.dcc_viss.bin" % input.sen_id,
    "format-msb": format_msb,
}
            
gst_wrapper.py, L#1314:

for elem in f.input.gst_inp_elements:
    if elem.get_factory().get_name() == "tiovxisp":
        # Set the 2A (AE/AWB) DCC file and the sensor subdevice on the ISP sink pad
        dcc_2a_file = "/opt/imaging/%s/linear/7140.dcc_2a.bin" % f.input.sen_id
        Gst.ChildProxy.set_property(elem, "sink_0::dcc-2a-file", dcc_2a_file)
        if not f.input.format.startswith("bggi"):
            Gst.ChildProxy.set_property(elem, "sink_0::device", f.input.subdev_id)

Error:

root@am62axx-evm:/opt/edgeai-gst-apps# ./apps_python/app_edgeai.py configs/dms_config.yaml  -n
libtidl_onnxrt_EP loaded 0x234393d0 
Final number of subgraphs created are : 1, - Offloaded Nodes - 103, Total Nodes - 103 
APP: Init ... !!!
  1644.607400 s: MEM: Init ... !!!
  1644.607478 s: MEM: Initialized DMA HEAP (fd=5) !!!
  1644.607660 s: MEM: Init ... Done !!!
  1644.607694 s: IPC: Init ... !!!
  1644.625306 s: IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
  1644.629926 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
  1644.630104 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR                                                                                                                                        
  1644.630126 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING                                                                                                                                      
  1644.630137 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO                                                                                                                                         
  1644.631260 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0                                                                                                                   
  1644.631574 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1                                                                                                                   
  1644.631858 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2                                                                                                                   
  1644.632114 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3                                                                                                                   
  1644.632151 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!                                                                                                                           
  1644.632183 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO                                                                                                                                        
==========[INPUT PIPELINE(S)]==========                                                                                                                                                               
                                                                                                                                                                                                      
                                                                                                                                                                                                      
** (python3:2124): CRITICAL **: 00:05:50.307: gst_tiovx_multi_scaler_fixate_caps: assertion 'src_caps_list' failed                                                                                    
                                                                                                                                                                                                      
** (python3:2124): CRITICAL **: 00:05:50.315: gst_tiovx_multi_scaler_fixate_caps: assertion 'src_caps_list' failed                                                                                    
                                                                                                                                                                                                      
** (python3:2124): CRITICAL **: 00:05:50.356: gst_tiovx_multi_scaler_fixate_caps: assertion 'src_caps_list' failed                                                                                    
                                                                                                                                                                                                      
** (python3:2124): CRITICAL **: 00:05:50.391: gst_tiovx_multi_scaler_fixate_caps: assertion 'src_caps_list' failed                                                                                    
[ERROR] Error pulling tensor from GST Pipeline                                                                                                                                                        
[PIPE-0]                                                                                                                                                                                              
                                                                                                                                                                                                      
v4l2src device=/dev/video-imx219-cam1 io-mode=5 pixel-aspect-ratio=None ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" ! tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/7140.dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=split_01
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)1920, height=(int)1080;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=True name=sen_0
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)1810, height=(int)1360;" ! tiovxmultiscaler target=1 name=tiovxmultiscaler2
                                                                                                                                                                                                      
                                                                                                                                                                                                      
==========[OUTPUT PIPELINE]==========                                                                                                                                                                 
                                                                                                                                                                                                      
appsrc do-timestamp=True format=3 block=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1920, height=(int)1080;" ! kmssink max-lateness=5000000 qos=True processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=41 fd=36
                                                                                                                                                                                                      
APP: Deinit ... !!!                                                                                                                                                                                   
REMOTE_SERVICE: Deinit ... !!!                                                                                                                                                                        
REMOTE_SERVICE: Deinit ... Done !!!                                                                                                                                                                   
  1650.625557 s: IPC: Deinit ... !!!                                                                                                                                                                  
  1650.626119 s: IPC: DeInit ... Done !!!                                                                                                                                                             
  1650.626173 s: MEM: Deinit ... !!!                                                                                                                                                                  
  1650.626262 s: DDR_SHARED_MEM: Alloc's: 56 alloc's of 263115479 bytes                                                                                                                               
  1650.626276 s: DDR_SHARED_MEM: Free's : 56 free's  of 263115479 bytes                                                                                                                               
  1650.626286 s: DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes                                                                                                                                        
  1650.626301 s: MEM: Deinit ... Done !!!                                                                                                                                                             
APP: Deinit ... Done !!!                                               

Note: a simple 4K pipeline runs well.

Warm Regards,
Sajan

  • Hello Sajan,

    Thank you for including the information above. 

    I first note that the output resolution you have for kmssink is too large for AM62A. The display system has the following capabilities, per the datasheet:

    • Display subsystem: single display support; up to 2048x1080 @ 60 fps; up to 165 MHz pixel clock with an independent PLL; DPI 24-bit RGB parallel interface; supports safety features such as freeze-frame detection and MISR data check
    • Alternatively, the video encoder / VPU can support up to 4K resolution (3840x2160)

    I see the lines "** (python3:2124): CRITICAL **: 00:05:50.391: gst_tiovx_multi_scaler_fixate_caps: assertion 'src_caps_list' failed", which tell me the pipeline was not able to initialize the tiovxmultiscaler plugin. It is not obvious which instance this is, since you have multiple instances in your pipeline.

    The pipeline string shown in the log does not have anything at the end of the name=tiovxmultiscaler2 instance, and I am inclined to think this one is the problem -- it has no actual output caps converting down from 1810x1360 resolution. 
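
    For reference, a complete second stage would carry explicit output caps. Here is a sketch in the pipeline-string form that apps_python prints (the 456x342 numbers are purely illustrative, chosen to stay within the scaler's 1/4-per-stage limit):

    second_stage = (
        "split_01. ! queue "
        "! video/x-raw, width=1810, height=1360 "
        "! tiovxmultiscaler target=1 "
        # The capsfilter below is what is missing at the end of your log:
        "! video/x-raw, width=456, height=342 "
        "! ..."  # then the color-convert/preproc elements for the model
    )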

    Note: a simple 4K pipeline runs well.

    Can you give more information here? What portions of that pipeline worked well? Capture+ISP?

    BR,
    Reese

  • Hello Reese,

    Can you give more information here? What portions of that pipeline worked well? Capture+ISP?

    #kmssink
    
    
    #!/bin/bash
    gst-launch-1.0 -e \
    v4l2src device=/dev/video-imx219-cam1 io-mode=5 ! \
    queue max-size-buffers=1 leaky=2 ! \
    video/x-bayer,width=3280,height=2464,framerate=15/1,format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-imx219-subdev1 \
        sensor-name=SENSOR_SONY_IMX219_RPI \
        dcc-isp-file=/opt/imaging/imx219/linear/7140.dcc_viss.bin \
        sink_0::dcc-2a-file=/opt/imaging/imx219/linear/7140.dcc_2a.bin \
        format-msb=9 ! \
    video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! \
    kmssink
    
    
    
    #video sink
    
    #!/bin/bash
    gst-launch-1.0 \
    v4l2src device=/dev/video-imx219-cam1 io-mode=5 ! \
    queue max-size-buffers=1 leaky=2 ! \
    video/x-bayer,width=3280,height=2464,framerate=15/1,format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-imx219-subdev1 \
        sensor-name=SENSOR_SONY_IMX219_RPI \
        dcc-isp-file=/opt/imaging/imx219/linear/7140.dcc_viss.bin \
        sink_0::dcc-2a-file=/opt/imaging/imx219/linear/7140.dcc_2a.bin \
        format-msb=9 ! \
    video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! \
    videoflip method=horizontal-flip ! \
    v4l2h264enc ! \
    h264parse ! \
    mp4mux ! \
    filesink location=/opt/edgeai-test-data/output/4kstream.mp4
    

    These two pipelines are working well. 

    Can you please give me proper instructions about the modifications needed in the GStreamer pipeline (resolution, DCC files, etc.)? Are there any caps restrictions in apps_python?

    I need to run the app at the maximum resolution supported by the AM62A.

    Warm Regards,
    Sajan

  • Hi Sajan,

    I need to run the app at the maximum resolution supported by the AM62A.

    It is surprising that this would work with kmssink, especially since this is outside the SoC spec. Saving to a video file is realistic and within spec.

    Note that the IMX219's 8MP mode is generally meant for still-image capture, and we do not have official support for this resolution. There are no DCC files available for it, even though the driver is known to function at 15 fps at this resolution.

    I'll also quickly note that videoflip is quite slow and will add noticeable latency, especially at a very high resolution. 

    I would suggest trying this with edgeai-gst-apps/optiflow to generate the pipeline before modifying the source in apps_python.

    In apps_python, you should be looking at the get_dl_scaler_elements portion of the pipeline-generation code[0]. This is where it has not created your second scaler stage correctly.

    • This may be giving you problems because your MobileNet model is probably 224x224, which is smaller than 1/4 of the first scaler's output of caps="video/x-raw, width=(int)1810, height=(int)1360;" (1810/4 = 452.5 and 1360/4 = 340, both larger than 224, so a single additional 1/4 stage cannot reach 224x224).

    [0]https://github.com/TexasInstruments/edgeai-gst-apps/blob/799dcbda54f829eb7b234bf93a5669addcc1b919/apps_python/gst_wrapper.py#L847 

    BR,
    Reese

  • Hello Reese,

    I checked 4K in the optiflow app. It seems to work well there. The working pipeline is:

    v4l2src device=/dev/video-imx219-cam1 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=3280, height=2464, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/7140.dcc_viss.bin format-msb=9 sink_0::dcc-2a-file=/opt/imaging/imx219/linear/7140.dcc_2a.bin sink_0::device=/dev/v4l-imx219-subdev1 ! video/x-raw, format=NV12 ! \
    tiovxmultiscaler src_1::pool-size=4 name=split_01 src_0::roi-startx=205 src_0::roi-starty=154 src_0::roi-width=2870 src_0::roi-height=2156 target=0 \
    \
    split_01. ! queue ! video/x-raw, width=718, height=540 ! tiovxmultiscaler target=1 ! video/x-raw, width=224, height=224 ! tiovxdlpreproc model=/opt/model_zoo/mobileNetV2  out-pool-size=4 ! application/x-tensor-tiovx ! tidlinferer target=1  model=/opt/model_zoo/mobileNetV2 ! post_0.tensor \
    split_01. ! queue ! video/x-raw, width=1920, height=1080 ! post_0.sink \
    tidlpostproc name=post_0 model=/opt/model_zoo/mobileNetV2 alpha=0.200000 viz-threshold=0.500000 top-N=1 display-model=true ! \
    kmssink driver-name=tidss sync=false force-modesetting=true
     

    Where should I modify gst_wrapper.py in apps_python to get this output?

    Also, I have a question. From the pipeline, your previous reply, and the documentation, I saw that inference is taken at 224x224 resolution.
    Can I increase this resolution to get better model inference?

    Below is my apps_python pipeline now.

    ==========[INPUT PIPELINE(S)]==========
    
    [ERROR] Error pulling tensor from GST Pipeline
    [PIPE-0]
    
    v4l2src device=/dev/video-imx219-cam1 io-mode=5 pixel-aspect-ratio=None ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" ! tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/7140.dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=split_01
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)3280, height=(int)2464;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=True name=sen_0
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)224, height=(int)224;" ! tiovxmultiscaler target=1 name=tiovxmultiscaler2
    
    
    ==========[OUTPUT PIPELINE]==========
    
    appsrc do-timestamp=True format=3 block=True is-live=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)3280, height=(int)2464;" ! v4l2h264enc extra-controls="controls, frame_level_rate_control_enable=(int)1, video_bitrate=(int)10000000, video_gop_size=(int)30;" ! h264parse ! mp4mux movie-timescale=1800 faststart-file=/tmp/qtmux-657797001 ! filesink sync=False location=/opt/edgeai-test-data/output/video_sink.mp4
    


    Warm Regards,
    Sajan

  • Hi Sajan,

    The multiscaler supports a maximum downscale of 1/4 per stage. In your application you need to downscale from 4K resolution down to 224x224 (this is determined by the input resolution of your AI model), so the pipeline has to perform multiple rounds of downscaling. The edgeai-gst-apps code is not prepared for such a task. You can edit the code here to add a loop which implements the multiple rounds of downscaling: https://github.com/TexasInstruments/edgeai-gst-apps/blob/799dcbda54f829eb7b234bf93a5669addcc1b919/apps_python/gst_wrapper.py#L862. That said, I don't recommend it; instead, I suggest writing the GStreamer pipeline yourself.
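
    For illustration, a minimal sketch of that loop's core (a hypothetical helper, not the actual edgeai-gst-apps code):

    def scaler_stages(src_w, src_h, dst_w, dst_h):
        # Compute the (width, height) steps from source to target so that
        # every tiovxmultiscaler stage downscales by at most 1/4 per axis.
        stages = []
        w, h = src_w, src_h
        while w > 4 * dst_w or h > 4 * dst_h:
            w = max(dst_w, (w + 3) // 4)  # ceil(w/4), capped at the target
            h = max(dst_h, (h + 3) // 4)
            stages.append((w, h))
        stages.append((dst_w, dst_h))
        return stages

    # scaler_stages(3280, 2464, 224, 224) -> [(820, 616), (224, 224)]

    Each tuple would become the output caps of one chained tiovxmultiscaler element.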

    Where should I modify gst_wrapper.py in apps_python to get this output?

    In order for the pipeline to work, optiflow is cropping the input so that only two rounds of downscaling are needed. See the ROI here: "src_0::roi-startx=205 src_0::roi-starty=154 src_0::roi-width=2870 src_0::roi-height=2156". If this is what you would like to do, you can look at the code for optiflow and try to reproduce it in the Python code for edgeai-gst-apps at the same location I linked above.

    I saw that inference is taken at 224x224 resolution.
    Can I increase this resolution to get better model inference?

    This is the input resolution of the model you are using, "mobileNetV2". For models with a higher input resolution you can use YOLOX-s, which has a 640x640 input resolution.

    Best regards,

    Qutaiba

  • Hello Qutaiba,

    For models with a higher input resolution you can use YOLOX-s, which has a 640x640 input resolution

    My understanding is that I can't train a classification model using YOLO; object detection training can be done using YOLO. Is that correct?

    Warm Regards,
    Sajan

  • Hi Sajan,

    Yes, YOLOX-s is an object detection model. For classification models with higher resolution you can look at the edgeai-modelzoo: https://github.com/TexasInstruments/edgeai-tensorlab/tree/r10.0/edgeai-modelzoo/modelartifacts/AM62A/8bits. There are several options, such as versions of mobilenet_v2_tv and resnet50 with resolutions of up to 1024x1024. For performance data about the models, look at the Model Selection tool in Edge AI Studio: https://dev.ti.com/edgeaistudio/. Note that the higher-resolution classification models are not available in all SDK revisions; the link above is for SDK 10.0.

    Best regards,

    Qutaiba

  • Hi Qutaiba,

    For classification models with higher resolution you can look at the edgeai-modelzoo: https://github.com/TexasInstruments/edgeai-tensorlab/tree/r10.0/edgeai-modelzoo/modelartifacts/AM62A/8bits. There are several options, such as versions of mobilenet_v2_tv and resnet50 with resolutions of up to 1024x1024.

    I saw the pre-trained models when downloading the links in the text files located in the above GitHub repo. How can I train my own custom model?
    Below is the training section of config_classification.yaml in ModelMaker; only 3 model options are available there.

    training:
        # enable/disable training
        enable: True #False
    
        # Image Classification model chosen can be changed here if needed
        # options are: 'mobilenet_v2_lite', 'regnet_x_400mf', 'regnet_x_800mf'
        model_name: 'mobilenet_v2_lite'
    
        training_epochs: 99 #30
        batch_size: 64 #8 #32
        learning_rate: 0.002
        # num_gpus: 0 #1 #4


    If I can train at a higher resolution, please give me instructions on how to achieve that.

    Thanks,
    Sajan

  • Hi Sajan, 

    Below is the training section of config_classification.yaml in ModelMaker; only 3 model options are available there.

    Yes, ModelMaker supports training of a limited number of models. For models which are not supported by ModelMaker, you can use any other non-TI tool such as PyTorch to conduct the training. When done, you can use the edgeai-tidl-tools to port/compile the trained model to work on AM62A.
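
    As a minimal sketch of that flow (not TI code; the model and input size are just examples), training in PyTorch and exporting ONNX for edgeai-tidl-tools looks like:

    import torch
    import torchvision

    # Build (or fine-tune) a classifier in plain PyTorch; training loop omitted
    model = torchvision.models.mobilenet_v3_small(num_classes=10)
    # ... train on your own dataset here ...
    model.eval()

    # Export to ONNX so edgeai-tidl-tools can compile it for AM62A
    dummy = torch.randn(1, 3, 224, 224)  # match your training input resolution
    torch.onnx.export(model, dummy, "mobilenet_v3_small.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=11)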

    The last responses on this thread are not related to its original topic, video streaming. I am marking the thread as resolved. Please start a new thread if you have a question not related to this topic.

    Best regards,

    Qutaiba 

  • Hello Qutaiba,

    The multiscaler supports a maximum downscale of 1/4 per stage. In your application you need to downscale from 4K resolution down to 224x224 (this is determined by the input resolution of your AI model), so the pipeline has to perform multiple rounds of downscaling. The edgeai-gst-apps code is not prepared for such a task. You can edit the code here to add a loop which implements the multiple rounds of downscaling: https://github.com/TexasInstruments/edgeai-gst-apps/blob/799dcbda54f829eb7b234bf93a5669addcc1b919/apps_python/gst_wrapper.py#L862. That said, I don't recommend it; instead, I suggest writing the GStreamer pipeline yourself.

    I did this, but I can't stream the output video at 3280x2464 resolution. It works well with 3280x2464 as input and 1920x1080 as output.

    When I tried to fix the error with the help of an AI tool, it appeared that the CMA memory allocation failed (CMA is currently 576M).
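
    For reference, my rough per-frame arithmetic (my own estimate, not taken from the log):

    # One 3280x2464 frame:
    w, h = 3280, 2464
    nv12 = w * h * 3 // 2   # ~11.6 MiB per NV12 frame
    rgb = w * h * 3         # ~23.1 MiB per RGB frame
    print(nv12 / 2**20, rgb / 2**20)
    # With out-pool-size=4 and several graph-internal buffers per plugin,
    # a full-resolution pipeline could plausibly exhaust a 576M CMA region.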

    Below is the log when running the app:

    ==========[INPUT PIPELINE(S)]==========
    
    [PIPE-0]
    
    v4l2src device=/dev/video-imx219-cam1 io-mode=5 pixel-aspect-ratio=None ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" ! tiovxisp dcc-i1
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)3280, height=(int)2464;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffer0
    split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)820, height=(int)616;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)340, height=(int)256;" ! tiovxdlcolorconve0
    
    
    ==========[OUTPUT PIPELINE]==========
    
    appsrc do-timestamp=True format=3 block=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)3280, height=(int)2464;" ! kmssink sync=False max-late6
    
     21991.499172 s: MEM: ERROR: Alloc failed with status = 12 !!!
     21991.499279 s:  VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
     21991.499848 s: MEM: ERROR: Alloc failed with status = 12 !!!
     21991.499891 s:  VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
     21991.503133 s: MEM: ERROR: Alloc failed with status = 12 !!!
     21991.503179 s:  VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
     21991.503192 s:  VX_ZONE_ERROR: [ownAllocRawImageBuffer:359] could not allocate memory
     21991.503203 s:  VX_ZONE_ERROR: [ownGraphAllocateDataObject:1109] Memory allocation for replicated parameter parent object failed
     21991.503243 s:  VX_ZONE_ERROR: [ graph_171 ] Memory alloc for data objects failed
     21991.503255 s:  VX_ZONE_ERROR: [ graph_171 ] Graph verify failed
     21991.503738 s:  VX_ZONE_ERROR: [ownReleaseReferenceInt:747] Invalid reference
    [ERROR] Error pulling tensor from GST Pipeline
     21996.553779 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e9ee80 of type 0000080f at external count 2, internal count 0, releasing it
     21996.553876 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=image_119) now as a part of garbage collection
     21996.553899 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1c560 of type 00000813 at external count 1, internal count 0, releasing it
     21996.553912 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_128) now as a part of garbage collection
     21996.555991 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1ca70 of type 00000813 at external count 1, internal count 0, releasing it
     21996.556015 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_136) now as a part of garbage collection
     21996.557957 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1d490 of type 00000813 at external count 1, internal count 0, releasing it
     21996.557973 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_151) now as a part of garbage collection
     21996.559942 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e6c180 of type 00000816 at external count 1, internal count 0, releasing it
     21996.559959 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=user_data_object_169) now as a part of garbage collection
     21996.560016 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e6c3b0 of type 00000816 at external count 1, internal count 0, releasing it
     21996.560031 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=user_data_object_170) now as a part of garbage collection
     21996.560080 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1db50 of type 00000813 at external count 1, internal count 0, releasing it
     21996.560094 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_174) now as a part of garbage collection
     21996.560114 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e6c810 of type 00000816 at external count 1, internal count 0, releasing it
     21996.560127 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=user_data_object_175) now as a part of garbage collection
     21996.560146 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1dd00 of type 00000813 at external count 1, internal count 0, releasing it
     21996.560159 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_176) now as a part of garbage collection
     21996.560178 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e78f30 of type 00000817 at external count 1, internal count 0, releasing it
     21996.560191 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=raw_image_177) now as a part of garbage collection
     21996.560211 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1deb0 of type 00000813 at external count 1, internal count 0, releasing it
     21996.560224 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_178) now as a part of garbage collection
     21996.560244 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7e6ca40 of type 00000816 at external count 1, internal count 0, releasing it
     21996.560256 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=user_data_object_179) now as a part of garbage collection
     21996.560275 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7f1e060 of type 00000813 at external count 1, internal count 0, releasing it
     21996.560288 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_180) now as a part of garbage collection
     21996.560307 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffffa7ea1310 of type 0000080f at external count 1, internal count 0, releasing it
     21996.560320 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=image_181) now as a part of garbage collection
    APP: Deinit ... !!!
    

    If I need to increase the CMA memory, please give me the instructions to do that.

    Warm Regards,
    Sajan

  • Hi Sajan,

    It seems that you are trying to display 4K frames on the display. This is not supported. The maximum supported resolution is "Display subsystem – Single display support – Up to 2048x1080 @ 60fps".

    Best regards,

    Qutaiba

  • Hello Qutaiba,

    I tried the optiflow app and derived a pipeline from it. It works well with 3280x2464. I am working to build an application with that pipeline. Is it possible to do, or does the display subsystem limit it to 2048x1080 as you mentioned?

    Warm Regards,
    Sajan

  • Hi Sajan,

    I tried the optiflow app and derived a pipeline from it. It works well with 3280x2464.

    Do you mean you actually have a 4K HDMI screen connected to the EVM and this entire setup worked fine? This is not possible, as the AM62A display subsystem is limited to 2048x1080. Optiflow must have inserted a downscaling plugin before the output sink; otherwise, it would not work.
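
    To be concrete, the tail of any working display pipeline has to scale down before the sink. A sketch in the pipeline-string form the apps print (the 1920x1080 value is an example within the 2048x1080 display limit):

    display_tail = (
        "... ! tiovxmultiscaler "
        "! video/x-raw, width=1920, height=1080 "  # within the display limit
        "! kmssink driver-name=tidss"
    )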

    I am working to build an application with that pipeline. Is it possible to do, or does the display subsystem limit it to 2048x1080 as you mentioned?

    No, the AM62A does not support a 4K display.

    I understand that you need to deal with a 4K input stream/camera. I don't understand why it is important to have a 4K display connected directly to the AM62A.

    Best regards,

    Qutaiba

  • Hello Qutaiba,

    you can use any other non-TI tool such as PyTorch to conduct the training. When done, you can use the edgeai-tidl-tools to port/compile the trained model to work on AM62A.

    I trained a model with MobileNetV3-small and exported it as .onnx. How can I compile the model using edgeai-modelmaker? Should I add it in the config_classification.yaml file by setting enable: False for the dataset and training sections?

    common:
        target_module: 'vision'
        task_type: 'classification'
        target_device: 'AM62A'
        # run_name can be any string, but there are some special cases:
        # {date-time} will be replaced with datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        # {model_name} will be replaced with the name of the model
        run_name: '{date-time}/{model_name}'
    
    dataset:
        # enable/disable dataset loading
        enable: True #False
        max_num_files: None
    
        # Image Classification Dataset Examples:
        # -------------------------------------
        # Example 1, (known datasets): 'oxford_flowers102'
        # dataset_name: oxford_flowers102
        # -------------------------------------
        # Example 2, give a dataset_name and input_data_path.
        # input_data_path could be a path to zip file, tar file, folder OR http, https link to zip or tar files
        # for input_data_path these are provided with this repository as examples:
        #    'http://software-dl.ti.com/jacinto7/esd/modelzoo/08_06_00_01/datasets/animal_classification.zip'
        # -------------------------------------
        dataset_name: oxford_flowers102
        input_data_path: data/datasets/oxford_flowers102
    
    training:
        # enable/disable training
        enable: True #False
    
        # Image Classification model chosen can be changed here if needed
        # options are: 'mobilenet_v2_lite', 'regnet_x_400mf', 'regnet_x_800mf'
        model_name: 'mobilenet_v2_lite'
    
        training_epochs: 99 #30
        batch_size: 64 #8 #32
        learning_rate: 0.002
        # num_gpus: 0 #1 #4
    
    compilation:
        # enable/disable compilation
        enable: True #False
        tensor_bits: 16 #16 #32

     
    Or is there any other method to compile it?

    Thanks,
    Sajan

  • Hi Sajan,

    As mentioned in my previous reply, the edgeai-tidl-tools are used to compile a model: https://github.com/TexasInstruments/edgeai-tidl-tools. ModelMaker does not compile custom models; it just trains/compiles a specified set of models.

    To compile your model, which is in ONNX format, you can follow the examples in this link: https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python. Specifically, start with this code: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/ort/onnxrt_ep.py. You also need to set the configs for your model in this file: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/model_configs.py. There is already an example for mobilenetv3_large; you can use it as a starting point for your own model's configs.
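
    For example, an entry for your model could look roughly like this (illustrative only -- the exact schema varies between edgeai-tidl-tools releases, so copy the existing mobilenetv3_large entry and adjust the fields):

    models_configs = {
        "cl-custom-mobilenetv3-small": {
            "model_path": "models/mobilenet_v3_small.onnx",  # your exported ONNX
            "mean": [123.675, 116.28, 103.53],  # must match training preprocessing
            "scale": [0.017125, 0.017507, 0.017429],
            "model_type": "classification",
        },
    }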

    If you have another question about model compilation, please start a new thread. 

    Best regards,

    Qutaiba