SK-AM68: edgeai: Error running multiple models with high resolution

Part Number: SK-AM68


Hi,
I use SDK 10.00.00.08 to do product defect detection on the SK-AM68. Running multiple models at 1920x1080 resolution is OK; running multiple models at 3280x2464 fails. Running a single model is OK.

The configuration is as follows:



The error message is as follows:

  • Hi,

    What is the input (input0) being used in your configuration file? Does the sensor support the 3280x2464 resolution? Did you reconfigure the formats in your media graph to this resolution? See step 3 of this FAQ to do so if you haven't: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1403218/faq-what-are-the-common-reasons-v4l2-based-applications-fail-to-capture-images-from-a-probed-csi-sensor
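    For reference, a rough sketch of what that step can look like on the SK board (assuming the default setup_cameras.sh is used to program the media graph; the media-ctl entity name below is only a placeholder and may differ on your board):

    # select the full-resolution IMX219 mode in setup_cameras.sh
    IMX219_CAM_FMT="${IMX219_CAM_FMT:-[fmt:SRGGB10_1X10/3280x2464]}"
    # or set the sensor pad format directly (entity name is illustrative)
    media-ctl -d /dev/media0 --set-v4l2 '"imx219 6-0010":0 [fmt:SRGGB10_1X10/3280x2464]'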

If your answer to the last two questions is yes, the issue is likely with the values in the flow section of the configuration file. The resolutions 1920x1080 and 3280x2464 have different aspect ratios. Currently the width and height of each flow/channel (640x360) match the aspect ratio of 1920x1080, so you will have to change the width and height of each flow to values that match the aspect ratio of 3280x2464. After doing this, you also need to change the mosaic position coordinates so that the channels do not overlap each other. The (mosaic_pos_x, mosaic_pos_y) coordinates correspond to the top-left corner of each flow/channel.

    flowname : [input,model1,output0,[mosaic_pos_x,mosaic_pos_y,width,height]]
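    For illustration only (placeholder values, not a tested configuration): 3280x2464 has roughly a 4:3 aspect ratio, so two 640x480 channels placed so they do not overlap on the 1920x1080 mosaic could look like:

    flows:
    flow0: [input0,model0,output0,[100,180,640,480]]
    flow1: [input0,model1,output0,[900,180,640,480]]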

    Please let me know if you have any further questions.

    Thank you,

    Fabiana

  • Hi Fabiana,

We have already confirmed that the problem is not on the capture side. The customer uses an IMX219, which supports both 3280x2464 and 1920x1080. The customer has tried using only one flow with 3280x2464, and it runs successfully with mosaic width = 1280, height = 720. The customer also tried running two flows with 1920x1080, and that also works with mosaic width = 640, height = 360.

The app fails in two cases: 1) one flow with 3280x2464 and mosaic width = 640, height = 360; 2) two flows with 3280x2464 and mosaic width = 640, height = 360.

The customer's target is to run two flows with 3280x2464.

    Regards,

    Adam 

  • Hi Adam,

    Could you please share the changes made to the media graph prior to running the application at this larger resolution? It would be helpful for me to see complete logs of the following test runs you mentioned.

The customer has tried using only one flow with 3280x2464, and it runs successfully with mosaic width = 1280, height = 720.

The app fails in two cases: 1) one flow with 3280x2464 and mosaic width = 640, height = 360; 2) two flows with 3280x2464 and mosaic width = 640, height = 360.

    Thank you,

    Fabiana

Any update? Thanks.

  • Hello,

    Could you please share the information requested of the three use cases I mentioned in my last response?

    Thank you,

    Fabiana

1) Changed setup_cameras.sh and replaced the DCC file for 3280x2464:
    IMX219_CAM_FMT="${IMX219_CAM_FMT:-[fmt:SRGGB10_1X10/3280x2464]}"

2) One flow with 3280x2464 runs successfully with mosaic width = 1280, height = 720:

    flows:
    flow0: [input0,model1,output0,[320,150,1280,720]]

3) The app fails in these two cases:

    flows:
    flow0: [input0,model0,output0,[320,150,640,360]]

    or:

    flows:
    flow0: [input0,model0,output0,[320,150,640,360]]
    flow1: [input0,model1,output0,[960,150,640,360]]

We do not support the 3280x2464 resolution on the IMX219 out of the box. Looking into this for you.

    Thank you,

    Fabiana

Could you please help resolve this issue or provide guidance on how to address it? Thanks.

  • Can you try the following pipeline?

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=dmabuf-import ! queue max-size-buffers=1 leaky=2 ! \
    video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    tiovxisp sink_0::pool-size=4  sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=dcc_viss_3280x2464_10b.bin \
    sink_0::dcc-2a-file=dcc_2a_3280x2464_10b.bin format-msb=9 ! \
    video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! queue ! tiovxmultiscaler ! queue ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=15/1 !  \
    kmssink driver-name=tidss sync=false force-modesetting=true

    Thank you,

    Fabiana

  • root@am68a-sk:~# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=dmabuf-import ! queue max-size-buffers=1 leaky=2 ! \
    > video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    > tiovxisp sink_0::pool-size=4 sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    > dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin \
    > sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin format-msb=9 ! \
    > video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! queue ! tiovxmultiscaler ! queue ! \
    > video/x-raw, format=NV12, width=1920, height=1080, framerate=15/1 ! \
    > kmssink driver-name=tidss sync=false force-modesetting=true
    APP: Init ... !!!
    757.626463 s: MEM: Init ... !!!
    757.626515 s: MEM: Initialized DMA HEAP (fd=8) !!!
    757.626629 s: MEM: Init ... Done !!!
    757.626641 s: IPC: Init ... !!!
    757.670289 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
    757.681336 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
    757.683477 s: VX_ZONE_INIT:Enabled
    757.683514 s: VX_ZONE_ERROR:Enabled
    757.683523 s: VX_ZONE_WARNING:Enabled
    757.686652 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-0
    757.686807 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-1
    757.686906 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-2
    757.687010 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-3
    757.687023 s: VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
    757.689542 s: VX_ZONE_INIT:[tivxHostInitLocal:106] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    Redistribute latency...
    0:00:12.4 / 99:99:99.
    ^Chandling interrupt.
    Interrupt: Stopping pipeline ...
    Execution ended after 0:01:01.126448028
    Setting pipeline to NULL ...
    Freeing pipeline ...
    819.478496 s: VX_ZONE_INIT:[tivxHostDeInitLocal:120] De-Initialization Done for HOST !!!
    819.483002 s: VX_ZONE_INIT:[tivxDeInitLocal:206] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    819.484792 s: IPC: Deinit ... !!!
    819.485289 s: IPC: DeInit ... Done !!!
    819.485312 s: MEM: Deinit ... !!!
    819.485367 s: DDR_SHARED_MEM: Alloc's: 35 alloc's of 228033825 bytes
    819.485377 s: DDR_SHARED_MEM: Free's : 35 free's of 228033825 bytes
    819.485384 s: DDR_SHARED_MEM: Open's : 0 allocs of 0 bytes
    819.485394 s: MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    root@am68a-sk:~#



  • Hi,

    Could you try renaming the following files before running the application with your modified configuration file again?

    /opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin ⇒ /opt/imaging/imx219/linear/dcc_viss.bin
    /opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin ⇒ /opt/imaging/imx219/linear/dcc_2a.bin
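    For example, a simple way to do this from the board's shell (this overwrites the current dcc_viss.bin and dcc_2a.bin, so back them up first if you want to keep the 1920x1080 versions):

    cd /opt/imaging/imx219/linear
    cp dcc_viss_3280x2464_10b.bin dcc_viss.bin
    cp dcc_2a_3280x2464_10b.bin dcc_2a.bin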

    Thank you,

    Fabiana

It is the same result, because on my target board dcc_viss.bin and dcc_2a.bin are already the 3280x2464 versions.

  • Hi, 

    It would be helpful for me to see complete logs of the following test runs you mentioned.

The customer has tried using only one flow with 3280x2464, and it runs successfully with mosaic width = 1280, height = 720.

The app fails in two cases: 1) one flow with 3280x2464 and mosaic width = 640, height = 360; 2) two flows with 3280x2464 and mosaic width = 640, height = 360.

I would like to see logs that include the generated GStreamer pipelines of the failing test cases.

    Thanks,

    Fabiana

Because the log flashes by, I can only record the screen.

    root@am68a-sk:/opt/edgeai-gst-apps/apps_cpp/bin/Release# ./app_edgeai /opt/edgeai-gst-apps/configs/single_input_multi_infer_new.yaml

    Number of subgraphs:1 , 129 nodes delegated out of 129 nodes

    APP: Init ... !!!
    588.954620 s: MEM: Init ... !!!
    588.954691 s: MEM: Initialized DMA HEAP (fd=6) !!!
    588.954853 s: MEM: Init ... Done !!!
    588.954873 s: IPC: Init ... !!!
    589.000923 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
    589.005148 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
    589.005251 s: VX_ZONE_INIT:Enabled
    589.005265 s: VX_ZONE_ERROR:Enabled
    589.005272 s: VX_ZONE_WARNING:Enabled
    589.006185 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-0
    589.006360 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-1
    589.006490 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-2
    589.006601 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-3
    589.006721 s: VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
    589.007197 s: VX_ZONE_INIT:[tivxHostInitLocal:106] Initialization Done for HOST !!!
    libtidl_onnxrt_EP loaded 0x32c4c0f0
    Final number of subgraphs created are : 1, - Offloaded Nodes - 283, Total Nodes - 283
    graph
    ==========[INPUT PIPELINE(S)]==========

    [PIPE-0]

    v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" ! tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tee name=input0_split
    input0_split. ! queue ! tiovxmultiscaler name=multiscaler_split_00
    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)820, height=(int)616;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)320, height=(int)320;" ! tiovxdlpreproc out-pool-size=4 channel-order=1 data-type=3 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=true name=flow0_pre_proc0
    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=true name=flow0_sensor0
    input0_split. ! queue ! tiovxmultiscaler name=multiscaler_split_01
    multiscaler_split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)820, height=(int)616;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)416, height=(int)416;" ! tiovxdlpreproc out-pool-size=4 data-type=3 tensor-format=1 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=true name=flow0_pre_proc1
    multiscaler_split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=true name=flow0_sensor1

    ==========[OUTPUT PIPELINE]==========

    appsrc do-timestamp=true format=3 block=true name=flow0_post_proc0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360, format=(string)NV12;" ! queue ! mosaic0.sink0

    appsrc do-timestamp=true format=3 block=true name=flow0_post_proc1 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360, format=(string)NV12;" ! queue ! mosaic0.sink1

    tiovxmosaic target=1 background=/tmp/background0 name=mosaic0 src::pool-size=4
    sink_0::startx="<320>" sink_0::starty="<150>" sink_0::widths="<640>" sink_0::heights="<360>"
    sink_1::startx="<960>" sink_1::starty="<150>" sink_1::widths="<640>" sink_1::heights="<360>"
    ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1920, height=(int)1080;" ! queue ! tiperfoverlay title=Single Input, Multi Inference ! kmssink sync=false max-lateness=5000000 qos=true processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=31 force-modesetting=true fd=65

  • Thank you very much for sharing. I have been able to reproduce the issue using optiflow and I am currently investigating the source of this bug. I appreciate your patience.

    Thank you,

    Fabiana

Any update? Thanks.

  • The expert, Fabiana, assigned to this thread, is out of the office today. Please expect a delay in response to this query. Thanks.

  • Hi csscyt,

    Apologies for the delay in response. Within gst_wrapper.py, could you try changing io-mode from 5 to dmabuf-import? This would be within the camera source and rggb format case (get_input_str).
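    As a rough illustration (the exact code in gst_wrapper.py may differ), the camera source fragment the app generates would then change from:

    v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! ...

    to:

    v4l2src device=/dev/video-imx219-cam0 io-mode=dmabuf-import ! queue leaky=2 ! ...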

    Thank you,

    Fabiana

I have changed io-mode from 5 to dmabuf-import in these files:
    root@am68a-sk:/opt/edgeai-gst-apps# find ./ -name gst_wrapper.py
    ./apps_python/gst_wrapper.py
    ./optiflow/gst_wrapper.py

and I have also changed io-mode from 5 to dmabuf-import here and rebuilt:

    root@am68a-sk:/opt/edgeai-gst-apps# find ./ -name edgeai_demo_config.cpp
    ./apps_cpp/common/src/edgeai_demo_config.cpp

The behavior is still the same; it doesn't work. Were you able to reproduce the problem? Should it work after modifying these? Thanks.

  • The expert, Fabiana, assigned to this thread, is out of the office today. Please expect a delay in response to this query. Thanks.

Another question: we previously ran single-input, multiple-output at 1920x1080. For example, if we run 4 models, the inference time of each model is 20 ms, and when running all 4 models the total inference time is about 80 ms in actual testing. Is it possible to run multiple models in parallel, that is, run 4 models with a total running time of 20~30 ms? Thanks.

  • Hi csscyt,

    As Fabiana is out, I will try to fill in. 

Reading through this thread, I suspect the original issue is due to a limitation of the multi-scaler (MSC) hardware. The MSC can only scale down by 4x. Although a bit hard to find, the TRM and the MSC plugin documentation both note this limitation: https://github.com/TexasInstruments/edgeai-gst-plugins/wiki/tiovxmultiscaler

A method I have seen used for large inputs is to put two MSC elements in the pipeline so the image is downscaled in two stages (see the sketch below). I recommend trying this with the single-input pipeline first, then the dual-input pipeline, since that will be easier to debug.
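    As a rough sketch of the idea (widths and heights are illustrative, not a tested pipeline), the scaling would go through two cascaded tiovxmultiscaler elements, each staying within the x4 limit:

    ... ! video/x-raw, format=NV12, width=3280, height=2464 ! tiovxmultiscaler ! \
    video/x-raw, width=1280, height=720 ! tiovxmultiscaler target=1 ! \
    video/x-raw, width=640, height=360 ! ...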

    Is it possible to run multiple models in parallel, that is, run 4 models with a total running time of 20~30ms?

For this, I recommend creating a new E2E thread. This is in the realm of model optimization, and it would be best to get some deep learning experts involved.

What I can say with my limited knowledge is that it depends. For example, if you have multiple inputs but a single model processes them all, then batch processing can optimize the pipeline. If some layers in the model are not hardware accelerated and run on the Arm A-core instead of the MMA HWA, then swapping those layers for ones supported by the TIDL (TI Deep Learning) library, so that they run on the MMA, can optimize the pipeline. If the model is complex, making it simpler can decrease the inference time; maybe there is a better quantization approach, and so on.

But the simple answer to the model question is that it depends.

    Regards,

    Takuma

Reading through this thread, I suspect the original issue is due to a limitation of the multi-scaler (MSC) hardware. The MSC can only scale down by 4x. Although a bit hard to find, the TRM and the MSC plugin documentation both note this limitation: https://github.com/TexasInstruments/edgeai-gst-plugins/wiki/tiovxmultiscaler

>>>>> The photos captured by the camera are 3280x2464, which is scaled to 640x640 for inference through the MSC, and the MSC can only scale down by 4x at most. In fact, I can run a single-input, single-model case at a resolution of 3280x2464 and inference works normally. In this case 3280 > 640*4 (2560), so is this the reason?

A method I have seen used for large inputs is to put two MSC elements in the pipeline so the image is downscaled in two stages. I recommend trying this with the single-input pipeline first, then the dual-input pipeline, since that will be easier to debug.
>>>>> Is there any reference example on how to do this?

For the second question, I will create a new E2E thread. Thanks.

Hi csscyt,

>>>>> The photos captured by the camera are 3280x2464, which is scaled to 640x640 for inference through the MSC, and the MSC can only scale down by 4x at most. In fact, I can run a single-input, single-model case at a resolution of 3280x2464 and inference works normally. In this case 3280 > 640*4 (2560), so is this the reason?

    Yes, the MSC limitation is my suspicion for the behaviors you are observing.
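    As a quick sanity check against the x4 limit: 3280/640 ≈ 5.1 and 2464/360 ≈ 6.8, both beyond what a single MSC pass can do, while 3280/1280 ≈ 2.6 and 2464/720 ≈ 3.4 are within it. That is consistent with the 1280x720 mosaic case working and the 640x360 case failing.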

    >>>>> Is there any reference example on how to do this?

    Our "OpTIFlow" EdgeAI examples should already have this embedded into the script that generates the GStreamer pipeline:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-am68a/10_01_00/exports/edgeai-docs/common/edgeai_dataflows.html#optiflow


I think you are using the other method, which is the appsrc/appsink method (the 6.2 Python/C++ apps method in the documentation above), to create the GStreamer pipeline. It is a bit questionable whether the app implements the double-multiscaler method to work around the MSC limitation.

    tiovxmultiscaler name=multiscaler_split_00
    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)820, height=(int)616;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)320, height=(int)320;"

However, I do see that the pipeline you shared earlier looks to have this double-multiscaler method. But maybe the multiscaler plugin is too close to the limit?

As an experiment, I would recommend changing the pipeline to use only one camera and changing the width x height of the first capsfilter.

    Regards,

    Takuma

  • Yes, the MSC limitation is my suspicion for the behaviors you are observing.

    Can you help confirm that it is a limitation of MSC?

I think you are using the other method, which is the appsrc/appsink method (the 6.2 Python/C++ apps method in the documentation above), to create the GStreamer pipeline.

Yes, I use the 6.2 Python/C++ apps method in the documentation above.

     

It is a bit questionable whether the app implements the double-multiscaler method to work around the MSC limitation.

However, I do see that the pipeline you shared earlier looks to have this double-multiscaler method. But maybe the multiscaler plugin is too close to the limit?

    How do I confirm these?

As an experiment, I would recommend changing the pipeline to use only one camera and changing the width x height of the first capsfilter.

    In this "single input, multiple models" scenario, I am using 1 camera.

  • Hi csscyt,

To confirm whether it is a limitation of the MSC, could you make the pipeline as simple as possible: single input, single model, and single output, but keep the camera resolution at 3280x2464. Then share the GStreamer pipeline that gets generated, similar to what you posted two months ago on March 29.

    I can then try to simplify the pipeline even more by taking out the appsrc/appsink logic to narrow down the issue.

    Regards,

    Takuma


I have shared a video of the single-input, single-model, single-output case at 3280x2464 resolution.

Using the configuration below, it works:
    flows:
    flow0: [input0,model1,output0,[320,150,1280,720]]


Changing to the following configuration does not work:
    flows:
    flow0: [input0,model1,output0,[320,150,640,360]]

    You can pause the video to check the log.

  • Hi csscyt,

Could you run these two commands (apologies if there are typos, since I transcribed them from the video):

    • gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink
    • gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink

The first pipeline just has a single multiscaler that goes to 640x360. The second pipeline has two multiscalers, where the first scales down to 1280x720 and the second scales down to 640x360. The first pipeline should give a similar error log and behavior to what you are observing, and the second pipeline should work if the issue is with the MSC.

    Regards,

    Takuma

apologies if there are typos, since I transcribed them from the video

There are some typos; I changed it. Please help me confirm whether the command is correct.

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink

I changed it to this:

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin format-msb=9 ! \
    video/x-raw, format=NV12,  ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink


Yes, it gave the same error.

    root@am68a-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    > video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    > tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    > dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
    > sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin format-msb=9 ! \
    > video/x-raw, format=NV12,  ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink
    APP: Init ... !!!
       879.774630 s: MEM: Init ... !!!
       879.774667 s: MEM: Initialized DMA HEAP (fd=8) !!!
       879.774762 s: MEM: Init ... Done !!!
       879.774772 s: IPC: Init ... !!!
       879.820031 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       879.823900 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       879.823969 s:  VX_ZONE_INIT:Enabled
       879.823980 s:  VX_ZONE_ERROR:Enabled
       879.823987 s:  VX_ZONE_WARNING:Enabled
       879.824782 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-0 
       879.824933 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-1 
       879.825036 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-2 
       879.825131 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-3 
       879.825146 s:  VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
       879.825518 s:  VX_ZONE_INIT:[tivxHostInitLocal:106] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    warning: queue 0xffff70000be0 destroyed while proxies still attached:
      xdg_wm_base@6 still attached
      wl_subcompositor@5 still attached
      wl_compositor@4 still attached
      wl_registry@2 still attached
    Pipeline is live and does not need PREROLL ...
    Got context from element 'autovideosink0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayWayland\)\ gldisplaywayland0";
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
       880.080833 s:  VX_ZONE_ERROR:[tivxAddKernelVpacVissValidate:669] Parameters 'output2' and 'raw' should have the same value for VX_IMAGE_WIDTH
       880.080868 s:  VX_ZONE_ERROR:[tivxAddKernelVpacVissValidate:674] Parameters 'output2' and 'raw' should have the same value for VX_IMAGE_HEIGHT
       880.080898 s:  VX_ZONE_ERROR:[ownGraphNodeKernelValidate:568] node kernel validate failed for kernel com.ti.hwa.vpac_viss at index 0
       880.080906 s:  VX_ZONE_ERROR:[vxVerifyGraph:2132] Node kernel Validate failed
       880.080913 s:  VX_ZONE_ERROR:[vxVerifyGraph:2311] Graph verify failed
       880.081197 s:  VX_ZONE_ERROR:[ownReleaseReferenceInt:594] Invalid reference
    ERROR: from element /GstPipeline:pipeline0/GstTIOVXISP:tiovxisp0: Unable to init TIOVX module
    Additional debug info:
    ../gst-libs/gst/tiovx/gsttiovxmiso.c(1512): gst_tiovx_miso_negotiated_src_caps (): /GstPipeline:pipeline0/GstTIOVXISP:tiovxisp0
    Execution ended after 0:00:00.175373822
    Setting pipeline to NULL ...
    warning: queue 0xffff7003a8a0 destroyed while proxies still attached:
      xdg_wm_base@11 still attached
      wl_subcompositor@7 still attached
      wl_compositor@10 still attached
      wl_registry@13 still attached
    Freeing pipeline ...
       880.159719 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c4f7d8 of type 00000816 at external count 1, internal count 0, releasing it
       880.159739 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=user_data_object_119) now as a part of garbage collection
       880.159807 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c4fa00 of type 00000816 at external count 1, internal count 0, releasing it
       880.159818 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=user_data_object_120) now as a part of garbage collection
       880.159855 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89d68948 of type 00000813 at external count 1, internal count 0, releasing it
       880.159864 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=object_array_122) now as a part of garbage collection
       880.159877 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c4fe50 of type 00000816 at external count 1, internal count 0, releasing it
       880.159885 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=user_data_object_123) now as a part of garbage collection
       880.159896 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89d68af0 of type 00000813 at external count 1, internal count 0, releasing it
       880.159904 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=object_array_124) now as a part of garbage collection
       880.159915 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c6aae0 of type 00000817 at external count 1, internal count 0, releasing it
       880.159923 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=raw_image_125) now as a part of garbage collection
       880.159934 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89d68c98 of type 00000813 at external count 1, internal count 0, releasing it
       880.159942 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=object_array_126) now as a part of garbage collection
       880.159953 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c50078 of type 00000816 at external count 1, internal count 0, releasing it
       880.159961 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=user_data_object_127) now as a part of garbage collection
       880.159972 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89d68e40 of type 00000813 at external count 1, internal count 0, releasing it
       880.159980 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=object_array_128) now as a part of garbage collection
       880.159992 s:  VX_ZONE_WARNING:[vxReleaseContext:1213] Found a reference 0xffff89c94538 of type 0000080f at external count 1, internal count 0, releasing it
       880.160000 s:  VX_ZONE_WARNING:[vxReleaseContext:1215] Releasing reference (name=image_129) now as a part of garbage collection
       880.160059 s:  VX_ZONE_INIT:[tivxHostDeInitLocal:120] De-Initialization Done for HOST !!!
       880.164483 s:  VX_ZONE_INIT:[tivxDeInitLocal:206] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
       880.166550 s: IPC: Deinit ... !!!
       880.167154 s: IPC: DeInit ... Done !!!
       880.167180 s: MEM: Deinit ... !!!
       880.167284 s: DDR_SHARED_MEM: Alloc's: 22 alloc's of 103299677 bytes 
       880.167298 s: DDR_SHARED_MEM: Free's : 22 free's  of 103299677 bytes 
       880.167305 s: DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes 
       880.167315 s: MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    root@am68a-sk:/opt/edgeai-gst-apps# 

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)3280, height=(int)2464, format=(string)rggb10;" tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI format-msb=9 ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink

I changed it to this:

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin format-msb=9 ! \
    video/x-raw, format=NV12, ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! autovideosink
    


This time the above error does not occur:
    root@am68a-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    > video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    > tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    > dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin \
    > sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin format-msb=9 ! \
    > video/x-raw, format=NV12, ! tiovxmultiscaler ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)640, height
    =(int)360;" ! autovideosink
    APP: Init ... !!!
      1127.787455 s: MEM: Init ... !!!
      1127.787510 s: MEM: Initialized DMA HEAP (fd=8) !!!
      1127.787605 s: MEM: Init ... Done !!!
      1127.787616 s: IPC: Init ... !!!
      1127.833375 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
      1127.837409 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
      1127.837483 s:  VX_ZONE_INIT:Enabled
      1127.837493 s:  VX_ZONE_ERROR:Enabled
      1127.837500 s:  VX_ZONE_WARNING:Enabled
      1127.840690 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-0 
      1127.840930 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-1 
      1127.841050 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-2 
      1127.841151 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-3 
      1127.841165 s:  VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
      1127.841589 s:  VX_ZONE_INIT:[tivxHostInitLocal:106] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    warning: queue 0xffff74000be0 destroyed while proxies still attached:
      xdg_wm_base@6 still attached
      wl_subcompositor@5 still attached
      wl_compositor@4 still attached
      wl_registry@2 still attached
    Pipeline is live and does not need PREROLL ...
    Got context from element 'autovideosink0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayWayland\)\ gldisplaywayland0";
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    Redistribute latency...
      1129.222777 s:  VX_ZONE_ERROR:[vxMapUserDataObject:457] No available user data object maps
      1129.222810 s:  VX_ZONE_ERROR:[vxMapUserDataObject:458] May need to increase the value of TIVX_USER_DATA_OBJECT_MAX_MAPS in tiovx/include/TI/tivx_config.h
      1129.222833 s:  VX_ZONE_ERROR:[vxMapUserDataObject:457] No available user data object maps
      1129.222841 s:  VX_ZONE_ERROR:[vxMapUserDataObject:458] May need to increase the value of TIVX_USER_DATA_OBJECT_MAX_MAPS in tiovx/include/TI/tivx_config.h
    Caught SIGSEGV
    #0  0x0000ffff8e08906c in poll () from /usr/lib/libc.so.6
    #1  0x0000ffff8e24ac20 in ?? () from /usr/lib/libglib-2.0.so.0
    #2  0x0000ffff8e24b734 in g_main_loop_run () from /usr/lib/libglib-2.0.so.0
    #3  0x000000000040509c in ?? ()
    #4  0x0000ffff8dfd84b4 in ?? () from /usr/lib/libc.so.6
    #5  0x0000ffff8dfd858c in __libc_start_main () from /usr/lib/libc.so.6
    #6  0x0000000000403c30 in ?? ()
    Spinning.  Please run 'gdb gst-launch-1.0 1736' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
    0:00:01.2 / 99:99:99.
    0:00:01.2 / 99:99:99.
    0:00:01.2 / 99:99:99.
    0:00:01.2 / 99:99:99.
    0:00:01.2 / 99:99:99.

  • Hi csscyt,

    Based on the behavior observed, I am pretty sure it is MSC causing issues.

From the pipeline shared above, I suspect the part that is split off to go directly to the post-processing/display path is causing issues, due to not having an extra multiscaler.

    You can try adding an extra multiscaler node in the application to fix the issue.
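    For example (widths and heights are illustrative, mirroring the pre-proc branch of the pipeline you shared), the branch that currently scales 3280x2464 straight to 640x360 (flow0_sensor0) could be split into two stages:

    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)820, height=(int)616;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)640, height=(int)360;" ! tiovxdlcolorconvert out-pool-size=4 ! ...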

    Regards,

    Takuma

  • Thank you very much for your patience in debugging this issue.

  • Hi csscyt,

You're welcome. Please feel free to hit "Resolved" if the extra tiovxmultiscaler solved the issue. Otherwise, feel free to continue the thread or open a new thread if a new issue is encountered.

    Regards,

    Takuma