This thread has been locked.


PROCESSOR-SDK-AM68A: Unable to Stream IMX219 Camera at Full Resolution (3264x2464) on AM68A via CSI2

Part Number: PROCESSOR-SDK-AM68A
Other Parts Discussed in Thread: AM68A


Hi Team,

I'm currently working on integrating the IMX219-160_IR-CUT_Camera module with the AM68A board via the CSI2 interface.

I'm able to stream video successfully at 1920x1080 resolution using the following pipeline:


root@am68a-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=2 ! queue leaky=2 ! \
video/x-bayer,width=1920,height=1080,format=rggb ! \
tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin format-msb=7 \
sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin sink_0::device=/dev/v4l-imx219-subdev0 ! \
video/x-raw,format=NV12 ! timeoverlay ! \
tiovxmultiscaler name=split0 src_0::roi-startx=0 src_0::roi-starty=0 src_0::roi-width=1280 src_0::roi-height=720 \
src_2::roi-startx=0 src_2::roi-starty=0 src_2::roi-width=1280 src_2::roi-height=720 target=0 ! \
queue ! video/x-raw,width=320,height=320 ! \
tiovxdlpreproc model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 out-pool-size=4 ! \
application/x-tensor-tiovx ! \
tidlinferer target=1 model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 ! post_0.tensor \
split0. ! queue ! video/x-raw,width=1920,height=1080 ! post_0.sink \
tidlpostproc name=post_0 model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 alpha=0.400000 viz-threshold=0.600000 top-N=5 display-model=true ! \
queue ! mosaic.sink_0 \
tiovxmosaic name=mosaic sink_0::startx="<0>" sink_0::starty="<0>" ! \
kmssink driver-name=tidss sync=false
APP: Init ... !!!
  3476.128820 s: MEM: Init ... !!!
  3476.128885 s: MEM: Initialized DMA HEAP (fd=8) !!!
  3476.129008 s: MEM: Init ... Done !!!
  3476.129020 s: IPC: Init ... !!!
  3476.190767 s: IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
  3476.201408 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
  3476.201666 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
  3476.201938 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
  3476.202038 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
  3476.202846 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0
  3476.203756 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1
  3476.205391 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2
  3476.205670 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3
  3476.205820 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
  3476.205877 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO

 Number of subgraphs:1 , 129 nodes delegated out of 129 nodes

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
0:00:04.8 / 99:99:99.


However, when I try to stream at the camera's maximum supported resolution (3264x2464) using the same pipeline with updated width and height, I encounter an error:

root@am68a-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=2 ! queue leaky=2 ! \
video/x-bayer,width=3264,height=2464,format=rggb ! \
tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin format-msb=7 \
sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a.bin sink_0::device=/dev/v4l-imx219-subdev0 ! \
video/x-raw,format=NV12 ! timeoverlay ! \
tiovxmultiscaler name=split0 src_0::roi-startx=0 src_0::roi-starty=0 src_0::roi-width=1280 src_0::roi-height=720 \
src_2::roi-startx=0 src_2::roi-starty=0 src_2::roi-width=1280 src_2::roi-height=720 target=0 ! \
queue ! video/x-raw,width=320,height=320 ! \
tiovxdlpreproc model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 out-pool-size=4 ! \
application/x-tensor-tiovx ! \
tidlinferer target=1 model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 ! post_0.tensor \
split0. ! queue ! video/x-raw,width=1920,height=1080 ! post_0.sink \
tidlpostproc name=post_0 model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 alpha=0.400000 viz-threshold=0.600000 top-N=5 display-model=true ! \
queue ! mosaic.sink_0 \
tiovxmosaic name=mosaic sink_0::startx="<0>" sink_0::starty="<0>" ! \
kmssink driver-name=tidss sync=false
APP: Init ... !!!
  3718.199423 s: MEM: Init ... !!!
  3718.199467 s: MEM: Initialized DMA HEAP (fd=8) !!!
  3718.199586 s: MEM: Init ... Done !!!
  3718.199599 s: IPC: Init ... !!!
  3718.251133 s: IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
  3718.258316 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
  3718.258583 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
  3718.258657 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
  3718.258714 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
  3718.259491 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0
  3718.261561 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1
  3718.261894 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2
  3718.262060 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3
  3718.262078 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
  3718.262086 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO

 Number of subgraphs:1 , 129 nodes delegated out of 129 nodes

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Failed to allocate required memory.
Additional debug info:
/usr/src/debug/gstreamer1.0-plugins-good/1.22.12/sys/v4l2/gstv4l2src.c(950): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Buffer pool activation failed
Execution ended after 0:00:00.025853731
Setting pipeline to NULL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
/usr/src/debug/gstreamer1.0/1.22.12/libs/gst/base/gstbasesrc.c(3134): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Freeing pipeline ...
APP: Deinit ... !!!
REMOTE_SERVICE: Deinit ... !!!
REMOTE_SERVICE: Deinit ... Done !!!
  3719.060723 s: IPC: Deinit ... !!!
  3719.063066 s: IPC: DeInit ... Done !!!
  3719.063103 s: MEM: Deinit ... !!!
  3719.063482 s: DDR_SHARED_MEM: Alloc's: 37 alloc's of 94292515 bytes
  3719.063581 s: DDR_SHARED_MEM: Free's : 37 free's  of 94292515 bytes
  3719.063610 s: DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes
  3719.063634 s: MEM: Deinit ... Done !!!
APP: Deinit ... Done !!!



  • Camera module: IMX219-160_IR-CUT_Camera

  • Interface: CSI2

  • Board: AM68A

  • Working resolution: 1920x1080

  • Desired resolution: 3264x2464

  • SDK: 10.1

    I've confirmed that the sensor is capable of this resolution. Are there any bandwidth restrictions, driver limitations, or additional settings required to achieve full-resolution streaming?

    Any guidance or suggestions would be greatly appreciated!

              Noushad

  • Hi Noushad,

    Download and add the attached binaries to the /opt/imaging/imx219/linear directory:

    /cfs-file/__key/communityserver-discussions-components-files/791/3404.dcc_5F00_viss_5F00_3280x2464_5F00_10b.bin

    /cfs-file/__key/communityserver-discussions-components-files/791/3404.dcc_5F00_2a_5F00_3280x2464_5F00_10b.bin

    Make the following format change to /opt/edgeai-gst-apps/scripts/setup_cameras.sh and reboot the device after saving.

    IMX219_CAM_FMT="${IMX219_CAM_FMT:-[fmt:SRGGB10_1X10/3280x2464]}"

    You should be able to run the following GStreamer pipeline at the desired 3264x2464 resolution.

    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=dmabuf-import ! queue max-size-buffers=1 leaky=2 ! \
    video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    tiovxisp sink_0::pool-size=4  sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin format-msb=9 ! \
    video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! queue ! tiovxmultiscaler ! queue ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=15/1 ! \
    kmssink driver-name=tidss sync=false force-modesetting=true
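
    As a rough, illustrative sanity check of why the frame rate drops to 15 fps at full resolution (raw pixel rate only; the actual CSI-2 lane count and link-rate limits for this setup are not stated in this thread, and the bw_mbps helper below is defined here purely for illustration):

```shell
# Raw pixel data rate in Mbps, ignoring blanking and protocol overhead.
# Usage: bw_mbps <width> <height> <bits_per_pixel> <fps>
bw_mbps() {
    echo $(( $1 * $2 * $3 * $4 / 1000000 ))
}

bw_mbps 3280 2464 10 15   # full resolution, 10-bit RAW at 15 fps -> 1212 Mbps
bw_mbps 1920 1080 8 30    # 1080p, 8-bit RAW at 30 fps -> 497 Mbps
```

    The full-resolution stream carries roughly 2.4x the raw data of the 1080p case, which helps explain why the caps above request framerate=15/1.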

    Thank you,

    Fabiana

  • Hi

    Thank you for your valuable support.

    Is it possible to set the camera resolution to 3280x2464 through V3Link by modifying the setup_cameras_v3link.sh script located at /opt/edgeai-gst-apps/scripts/?

    Thank you,

    Noushad



  • Hi Noushad,

    Yes, the process should be similar. Please let me know if you run into any issues after making the change to the script.

    Thank you,

    Fabiana

  • Hi Fabiana,

    Thank you for the confirmation.

    I tested the IMX219-160_IR-CUT camera with a 3280x2464 resolution by modifying the setup_cameras_v3link.sh script. However, after making this change, no camera is being detected. I reverted the resolution back to the original 1920x1080, but unfortunately, the camera is still not detected via V3Link.

    Is there any resolution limitation on V3Link that could be causing this issue?

    Best regards,
    Noushad

  • Hi Noushad,

    Are you running the setup_cameras_v3link.sh script after making the modification? The setup_cameras.sh script runs automatically on each boot because it is included in the initialization script, but sensors connected via V3Link require you to run setup_cameras_v3link.sh manually after every boot.

    If you try this and the sensor is still not detected, please share the output of media-ctl -p after running the setup_cameras_v3link.sh script.
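
    Since the V3Link setup does not persist across reboots, a small guard can catch the missed step. This is only a sketch (the device-node name is the one used earlier in this thread, and cam_ready is a helper defined here for illustration):

```shell
# Sketch: run after every boot, before launching a V3Link pipeline.
# First source the setup script, e.g.:
#   . /opt/edgeai-gst-apps/scripts/setup_cameras_v3link.sh

cam_ready() {
    # Succeeds if the given video device node exists.
    [ -e "$1" ]
}

if cam_ready /dev/video-imx219-cam0; then
    echo "IMX219 camera 0 ready"
else
    echo "IMX219 camera 0 not detected; re-run the setup script and check media-ctl -p" >&2
fi
```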

    Thank you,

    Fabiana

  • Hi,

    Yes, I did run the setup_cameras_v3link.sh script manually after the modification:

    root@am68a-sk:/opt/edgeai-gst-apps# source /opt/edgeai-gst-apps/scripts/setup_cameras_v3link.sh
    USB Camera 0 detected
        device = /dev/video-usb-cam0
        format = jpeg
    USB Camera 1 detected
        device = /dev/video-usb-cam1
        format = jpeg



    After that, I ran media-ctl -p, and here is the output:

    Media controller API version 6.6.44
    
    Media device information
    ------------------------
    driver          j721e-csi2rx
    model           TI-CSI2RX
    serial
    bus info        platform:4500000.ticsi2rx
    hw revision     0x1
    driver version  6.6.44
    
    Device topology
    - entity 1: 4500000.ticsi2rx (9 pads, 0 link, 0 routes)
                type V4L2 subdev subtype Unknown flags 0
            pad0: Sink
            pad1: Source
            pad2: Source
            pad3: Source
            pad4: Source
            pad5: Source
            pad6: Source
            pad7: Source
            pad8: Source
    
    - entity 11: cdns_csi2rx.4504000.csi-bridge (5 pads, 0 link, 0 routes)
                 type V4L2 subdev subtype Unknown flags 0
            pad0: Sink
            pad1: Source
            pad2: Source
            pad3: Source
            pad4: Source


    Please let me know if you need any additional logs or if there's anything else I should check.

    Thank you,
    Noushad

  • Hi


    Additionally, I ran the I2C commands to gather further information, and here is the output from i2cdump for the I2C address 0x30

    root@am68a-sk:/opt/edgeai-gst-apps# i2cdump -y 5 0x30
    No size specified (using byte-data access)
         0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
    00: 60 00 1e 40 d0 01 00 fe 1c 10 7a 7a 0f b9 03 ff    `.?@??.???zz???.
    10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04 00    ..............?.
    20: f0 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ??..............
    30: 00 00 01 42 00 00 00 00 00 00 00 00 00 00 00 00    ..?B............
    40: 00 a9 71 01 00 00 20 00 00 00 00 12 01 03 04 64    .?q?.. ....????d
    50: 00 00 00 04 00 00 00 00 5e 00 00 30 88 00 00 00    ...?....^..0?...
    60: 00 00 00 00 00 00 00 00 00 00 00 00 00 78 88 88    .............x??
    70: 2b 2c e4 00 00 00 00 c5 00 01 00 00 20 00 00 00    +,?....?.?.. ...
    80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
    90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
    a0: 00 00 00 00 00 1e 00 00 00 00 00 00 00 00 00 00    .....?..........
    b0: 04 09 08 08 25 00 18 00 88 33 83 74 80 00 00 00    ????%.?.?3?t?...
    c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
    d0: 00 43 94 00 60 e0 00 02 07 7f 00 00 00 00 00 00    .C?.`?.???......
    e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
    f0: 5f 55 42 39 36 30 00 00 00 00 00 00 00 00 00 00    _UB960..........


    Let me know if you need any further information or if there is anything else I should check.

    Thank you,
    Noushad

  • Hi Noushad,

    Please take a look at the FAQ linked below and verify that you have followed all the steps required to enable V3Link + imx219 using AM68A.

    https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1452909/faq-am67a-enabling-csi2-sensors-via-fusion-board-on-am6x

    Thank you,

    Fabiana

  • Hi,

    Thank you for your valuable response. The IMX219 Camera 0 is now detected after updating the uEnv.txt file:

    root@am68a-sk:/opt/edgeai-gst-apps/configs# sudo cat  /run/media/BOOT-mmcblk1p1/uEnv.txt
    # This uEnv.txt file can contain additional environment settings that you
    # want to set in U-Boot at boot time.  This can be simple variables such
    # as the serverip or custom variables.  The format of this file is:
    #    variable=value
    # NOTE: This file will be evaluated after the bootcmd is run and the
    #       bootcmd must be set to load this file if it exists (this is the
    #       default on all newer U-Boot images.  This also means that some
    #       variables such as bootdelay cannot be changed by this file since
    #       it is not evaluated until the bootcmd is run.
    
    # Update the Linux hostname based on board_name
    # The SK also requires an additional dtbo to boot. Prepend it to name_overlays depending on board_name
    uenvcmd=if test "$board_name" = "am68-sk"; then ; setenv args_all $args_all systemd.hostname=am68a-sk ; fi
    
    # Setting the right U-Boot environment variables
    dorprocboot=1
    name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo
    
    
    root@am68a-sk:/opt/edgeai-gst-apps/configs# 


    I tried running the sample application, but the camera did not display anything:


    root@am68a-sk:/opt/edgeai-gst-apps/configs# source /opt/edgeai-gst-apps/scripts/setup_cameras_v3link.sh
    USB Camera 0 detected
        device = /dev/video-usb-cam0
        format = jpeg
    USB Camera 1 detected
        device = /dev/video-usb-cam1
        format = jpeg
    IMX219 Camera 0 detected
        device = /dev/video-imx219-cam0
        name = imx219
        format = [fmt:SRGGB8_1X8/1920x1080]
        subdev_id = /dev/v4l-imx219-subdev0
        isp_required = yes
        ldc_required = yes
    root@am68a-sk:/opt/edgeai-gst-apps/configs# cd ..
    root@am68a-sk:/opt/edgeai-gst-apps# cd apps_cpp
    root@am68a-sk:/opt/edgeai-gst-apps/apps_cpp# ./bin/Release/app_edgeai ../configs/imx219_cam_example.yaml
    libtidl_onnxrt_EP loaded 0x551b690 
    Final number of subgraphs created are : 1, - Offloaded Nodes - 283, Total Nodes - 283 
    APP: Init ... !!!
      2152.274270 s: MEM: Init ... !!!
      2152.274351 s: MEM: Initialized DMA HEAP (fd=5) !!!
      2152.274549 s: MEM: Init ... Done !!!
      2152.274581 s: IPC: Init ... !!!
      2152.317540 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
      2152.322607 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
      2152.322779 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
      2152.322810 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
      2152.322833 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
      2152.324461 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0 
      2152.324734 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1 
      2152.324903 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2 
      2152.327386 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3 
      2152.327445 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
      2152.327482 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO
    graph
    ==========[INPUT PIPELINE(S)]==========
    
    [PIPE-0]
    
    v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! capsfilter caps="video/x-bayer, width=(int)1920, height=(int)1080, format=(string)rggb;" ! tiovxisp dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss.bin sensor-name=SENSOR_SONY_IMX219_RPI ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=multiscaler_split_00
    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)480, height=(int)416;" ! tiovxmultiscaler target=1 ! capsfilter caps="video/x-raw, width=(int)416, height=(int)416;" ! tiovxdlpreproc out-pool-size=4 data-type=3 tensor-format=1 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=true name=flow0_pre_proc0
    multiscaler_split_00. ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=true name=flow0_sensor0
    
    ==========[OUTPUT PIPELINE]==========
    
    appsrc do-timestamp=true format=3 block=true name=flow0_post_proc0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720, format=(string)NV12;" ! queue ! mosaic0.sink0
    
    tiovxmosaic target=1 background=/tmp/background0 name=mosaic0 src::pool-size=4
    sink_0::startx="<320>" sink_0::starty="<150>" sink_0::widths="<1280>" sink_0::heights="<720>"
    ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1920, height=(int)1080;" ! queue ! tiperfoverlay title=IMX219 Camera ! kmssink sync=false max-lateness=5000000 qos=true processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=31 force-modesetting=true fd=52
    
    How can I resolve this error? Do I need to make any changes to the uEnv.txt file, specifically the name_overlays parameter?

  • Hi Noushad,

    Could you verify whether you are able to stream from the sensor to a connected display by running this GStreamer pipeline?

    gst-launch-1.0 \
    v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=1920, height=1080, framerate=30/1, format=rggb ! \
    tiovxisp sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_1920x1080.bin \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_1920x1080.bin format-msb=7 ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! \
    kmssink driver-name=tidss sync=false

    FAQ for your reference: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1427869/faq-sk-am69-how-to-stream-from-csi-sensor-and-usb-camera-to-display-using-gstreamer

    Thank you,

    Fabiana

  • Hi,

    Thank you for the suggestion.

    I tested the provided GStreamer pipeline with the IMX219 sensor on /dev/video-imx219-cam0 via V3Link. Unfortunately, the pipeline fails with the following error:

    root@am68a-sk:/opt/edgeai-gst-apps/scripts# source /opt/edgeai-gst-apps/scripts/setup_cameras_v3link.sh
    USB Camera 0 detected
        device = /dev/video-usb-cam0
        format = jpeg
    USB Camera 1 detected
        device = /dev/video-usb-cam1
        format = jpeg
    IMX219 Camera 0 detected
        device = /dev/video-imx219-cam0
        name = imx219
        format = [fmt:SRGGB8_1X8/1920x1080]
        subdev_id = /dev/v4l-imx219-subdev0
        isp_required = yes
        ldc_required = yes
    root@am68a-sk:/opt/edgeai-gst-apps/scripts# gst-launch-1.0 v4l2src device="/dev/video-usb-cam0" ! videoconvert ! videoscale ! video/x-raw, width=1280, height=720 ! kmssink sync=false driver-name=tidss
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    Redistribute latency...
    ^Chandling interrupt.
    Interrupt: Stopping pipeline ...
    Execution ended after 0:00:02.847038118
    Setting pipeline to NULL ...
    Freeing pipeline ...
    root@am68a-sk:/opt/edgeai-gst-apps/scripts# gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, framerate=30/1, format=rggb ! tiovxisp sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_1920x1080.bin format-msb=7 ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! kmssink driver-name=tidss sync=false
    APP: Init ... !!!
      2199.210285 s: MEM: Init ... !!!
      2199.210378 s: MEM: Initialized DMA HEAP (fd=8) !!!
      2199.210552 s: MEM: Init ... Done !!!
      2199.210572 s: IPC: Init ... !!!
      2199.271106 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
      2199.290231 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
      2199.292429 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
      2199.292481 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
      2199.292495 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
      2199.300493 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0 
      2199.300711 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1 
      2199.300879 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2 
      2199.301076 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3 
      2199.301092 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
      2199.301100 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Failed to allocate required memory.
    Additional debug info:
    /usr/src/debug/gstreamer1.0-plugins-good/1.22.12/sys/v4l2/gstv4l2src.c(950): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    Buffer pool activation failed
    Execution ended after 0:00:00.024694841
    Setting pipeline to NULL ...
    ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
    Additional debug info:
    /usr/src/debug/gstreamer1.0/1.22.12/libs/gst/base/gstbasesrc.c(3134): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    streaming stopped, reason not-negotiated (-4)
    Freeing pipeline ...
      2199.955246 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9574c9f8 of type 00000817 at external count 1, internal count 0, releasing it
      2199.955290 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=raw_image_101) now as a part of garbage collection
      2199.955313 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584e298 of type 00000813 at external count 1, internal count 0, releasing it
      2199.955326 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_102) now as a part of garbage collection
      2199.955808 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584e448 of type 00000813 at external count 1, internal count 0, releasing it
      2199.955848 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_104) now as a part of garbage collection
      2199.959020 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584e5f8 of type 00000813 at external count 1, internal count 0, releasing it
      2199.959096 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_106) now as a part of garbage collection
      2199.959566 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584e7a8 of type 00000813 at external count 1, internal count 0, releasing it
      2199.960179 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_108) now as a part of garbage collection
      2199.962502 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584e958 of type 00000813 at external count 1, internal count 0, releasing it
      2199.962552 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_110) now as a part of garbage collection
      2199.963095 s:  VX_ZONE_WARNING: [vxReleaseContext:1275] Found a reference 0xffff9584eb08 of type 00000813 at external count 1, internal count 0, releasing it
      2199.963128 s:  VX_ZONE_WARNING: [vxReleaseContext:1277] Releasing reference (name=object_array_112) now as a part of garbage collection
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
      2199.982610 s: IPC: Deinit ... !!!
      2199.989319 s: IPC: DeInit ... Done !!!
      2199.989371 s: MEM: Deinit ... !!!
      2199.989381 s: DDR_SHARED_MEM: Alloc's: 6 alloc's of 12441600 bytes 
      2199.989395 s: DDR_SHARED_MEM: Free's : 6 free's  of 12441600 bytes 
      2199.989403 s: DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes 
      2199.989415 s: MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    root@am68a-sk:/opt/edgeai-gst-apps/scripts# 


    However, streaming from a USB camera using a simpler GStreamer pipeline works successfully and displays video as expected.

    Thank you,

    Noushad

  • Hi Noushad,

    This IMX219 IR variant has not been validated on any of our devices. Based on the logs you've shared, it appears that further changes are needed in the setup cameras script to properly set up the device. I recommend taking a look at the OV2312 setup function, since this is an RGB & IR sensor.

    Thank you,

    Fabiana

  • Hi,

    Thank you for the update.

    I’d like to share that when I connect the IMX219 IR variant directly to the CSI2 port on the AM68A, I am able to display the output at full resolution (width = 3280, height = 2464) without any issues by simply sourcing the existing setup_cameras.sh script — no additional modifications were required in my setup.

    It seems the issue might be related to sourcing with setup_cameras_v3link.sh.

    Please let me know if you’d like me to share any additional details or logs.

    Best regards,

    Noushad

  • Hi Noushad,

    I appreciate the clarification. Did you make any changes to the setup_cameras_v3link script? Could you share the output of media-ctl -p after running the script?

    Thank you,

    Fabiana

  • Hi, 

    Yes, I modified the setup_cameras_v3link script. Specifically, I fixed the IMX219 camera high-resolution format by updating the configuration to:

    IMX219_CAM_FMT="${IMX219_CAM_FMT:-[fmt:SRGGB10_1X10/3280x2464 field:none]}"



    I added field:none in addition to what you suggested above. After this change, when setting up two IMX219 IR cameras over V3Link, everything works properly at high resolution using the following pipeline:


    gst-launch-1.0 v4l2src device=/dev/video-imx219-cam0 io-mode=dmabuf-import ! queue max-size-buffers=1 leaky=2 ! \
    video/x-bayer, width=3280, height=2464, framerate=15/1, format=rggb10 ! \
    tiovxisp sink_0::pool-size=4  sink_0::device=/dev/v4l-imx219-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" \
    dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin format-msb=9 ! \
    video/x-raw, format=NV12, width=3280, height=2464, framerate=15/1 ! queue ! tiovxmultiscaler ! queue ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=15/1 ! \
    kmssink driver-name=tidss sync=false force-modesetting=true


    However, when I try to set up three IMX219 IR cameras with the same high resolution, I start seeing image distortion when displaying the output of a single camera.

    I’ve attached a video showing the issue:

    Thank you,
    Noushad

  • Hi Noushad,

    I assume this is the output stream from /dev/video-imx219-cam0. Do you observe the same image distortion when streaming from /dev/video-imx219-cam1 or /dev/video-imx219-cam2?

    Thank you,

    Fabiana

  • Hi Fabiana,

    I tested all IMX219 cameras separately using the same scripts mentioned above, and all of them show the same behavior. However, when I configure only two IMX219 cameras in uEnv.txt as shown below, they work successfully at high resolution:

    name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo ti/k3-v3link-imx219-0-1.dtbo
    


    Thank you,
    Noushad

  • Hi Noushad,

    Just to clarify, do you only see the issue when you stream from a single camera or a subset of cameras in a multi-camera setup?

    Thank you,

    Fabiana

  • Hi,

    To clarify — when I set up 2 IMX219 cameras in uEnv.txt, displaying each camera separately works successfully. However, when I configure 3 IMX219 cameras in uEnv.txt and test each camera individually, I observe image distortion on the display.

    Thank you,
    Noushad

  • Hi,

    As per your suggestion, I streamed and saved video from both cameras (two IMX219 sensors with a resolution of 3280x2464) simultaneously.

    However, upon playback, I still observe image distortion in the recorded video.

    For further analysis, I’ve attached both the GStreamer streaming code and the recorded video files.



    title: "Multi Input, Multi Inference"
    log_level: 2
    inputs:
        input0:
            source: /dev/video-imx219-cam0
            subdev-id: /dev/v4l-imx219-subdev0
            width: 3280
            height: 2464
            format: rggb10
            framerate: 30
        input1:
            source: /dev/video-imx219-cam1
            subdev-id: /dev/v4l-imx219-subdev1
            width: 3280
            height: 2464
            format: rggb10
            framerate: 30
        input2:
            source: /dev/video-imx219-cam2
            subdev-id: /dev/v4l-imx219-subdev2
            width: 3280
            height: 2464
            format: rggb10
            framerate: 30
    models:
        model0:
            model_path: /opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320
            viz_threshold: 0.6
        model1:
            model_path: /opt/model_zoo/ONR-OD-8200-yolox-nano-lite-mmdet-coco-416x416
            viz_threshold: 0.6
        model2:
            model_path: /opt/model_zoo/ONR-CL-6360-regNetx-200mf
            topN: 5
        model3:
            model_path: /opt/model_zoo/ONR-SS-8610-deeplabv3lite-mobv2-ade20k32-512x512
            alpha: 0.4
    outputs:
        output0:
            sink: kmssink
            width: 1920
            height: 1080
            overlay-perf-type: graph
        output1:
            sink: /opt/edgeai-test-data/output/output_video.mkv
            width: 1920
            height: 1080
        output2:
            sink: /opt/edgeai-test-data/output/output_video2.mkv
            width: 1920
            height: 1080
        output3:
            sink: /opt/edgeai-test-data/output/output_video3.mkv
            width: 1920
            height: 1080
    
    flows:
        
        flow0: [input0,model0,output1,[0,0,1920,1080]]
        flow1: [input1,model0,output2,[0,0,1920,1080]]
        #flow2: [input2,model1,output3,[320,150,1280,720]]
        #flow0: [input0,model1,output0,[320,150,640,360]]
        #flow1: [input1,model1,output2,[960,150,640,360]]
        #flow2: [input2,model2,output3,[320,530,640,360]]
        
    

    Please let me know if you need any additional details.



    - Noushad

  • Hi Noushad,

    Fabiana is currently out, so I will try to fill in for her in the meantime.

    One thing I noticed in the latest response is the flow looks like it needs to be modified:

    • flow0: [input0,model0,output1,[0,0,1920,1080]]
    • flow1: [input1,model0,output2,[0,0,1920,1080]]

    Specifically, the last portion of the two flows "[0,0,1920,1080]".

    This defines the X, Y coordinates on the screen where the output is drawn, along with its resolution. In your case, both flow0 and flow1 draw a 1920x1080 output starting at (0, 0), the corner of the screen. So, if a 1920x1080 screen is used, the two outputs are drawn on top of each other on the same screen. I suspect this will cause some strange behavior, which might be manifesting visually as "distortion".
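    For example, to place two streams side by side on a single 1920x1080 display, each mosaic window could be scaled down so the regions do not overlap. The positions and sizes below are illustrative only, using the same [pos_x, pos_y, width, height] window format as your configuration:

    ```yaml
    flows:
        # [pos_x, pos_y, width, height] on the output mosaic; the two
        # windows below sit side by side and do not overlap.
        flow0: [input0, model0, output0, [0,   270, 960, 540]]
        flow1: [input1, model0, output0, [960, 270, 960, 540]]
    ```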

    I recommend reading the Flows section here and updating the flow configuration: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-am68a/10_01_00/exports/edgeai-docs/common/configuration_file.html#flows

    Regards,

    Takuma

  • Hello,

    Thank you for your response.

    I believe there may have been a misunderstanding. I'm not streaming the output to a display — my intention is only to save two videos to specific file paths.

    Here is the relevant flow configuration:

    flow0: [input0, model0, output1, [0, 0, 1920, 1080]]
    flow1: [input1, model0, output2, [0, 0, 1920, 1080]]
    


    And the outputs are defined as:

    output1:
        sink: /opt/edgeai-test-data/output/output_video.mkv
        width: 1920
        height: 1080
    
    output2:
        sink: /opt/edgeai-test-data/output/output_video2.mkv
        width: 1920
        height: 1080
    


    Since both outputs are saved to files and not displayed on a screen, the [0, 0, 1920, 1080] coordinates shouldn’t cause any overlay issues.

    However, I’ve noticed that image distortion occurs only when I configure three cameras in uEnv.txt as follows:

    name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo ti/k3-v3link-imx219-0-1.dtbo ti/k3-v3link-imx219-0-2.dtbo
    

    When I configure only two cameras, like this, everything works fine:

    name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo ti/k3-v3link-imx219-0-1.dtbo
    


    So it seems the issue may be related to enabling the third camera rather than the flow or output configuration itself.

    Please let me know if this gives any clues or if you have suggestions on how to further debug it.

    Best regards,
    Noushad



  • Hi,

    Just to add more context:

    I also tested using two TI V3Link Arducam SerDes boards and configured three cameras as follows:

    name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo ti/k3-v3link-imx219-0-1.dtbo ti/k3-v3link-imx219-1-0.dtbo
    


    In this configuration, everything works successfully — no image distortion occurs, and the videos are saved correctly.

    So it seems that the issue only arises when using three cameras on a single SerDes board.

    Hope this helps in narrowing down the root cause.


    -Noushad



  • Hi Noushad,

    Since both outputs are saved to files and not displayed on a screen, the [0, 0, 1920, 1080] coordinates shouldn’t cause any overlay issues.

    Ah, yes. That should be fine. Thanks for clearing up my misunderstanding.

    So it seems that the issue only arises when using three cameras on a single SerDes board.

    Before we conclude on this, can you do one more experiment to rule out another hypothesis? The hypothesis is that the issue may lie with the particular camera module enabled by the third dtbo.

    Can you only enable the third camera and see if distortion is observed? So in U-boot, like the following:

    • name_overlays=ti/k3-j721s2-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-2.dtbo

    Regards,

    Takuma

  • Hello,

    Thanks for the suggestion.

    I’m using three identical camera modules: IMX219-160_IR-CUT_Camera. I’ve already tested enabling just one camera at a time on different ports (0, 1, 2, 3), and in all cases, the image is clean — no distortion observed.

    Even enabling two cameras at once works fine without any issues.

    The image distortion only occurs when all three IMX219 cameras are enabled together, and the scripts are set to use high-resolution settings. When accessing any one of the cameras under this configuration, I see distortion in the image.

    Interestingly, when I switch to lower resolution, I can even enable four cameras on a single SerDes board without any image distortion.

    So it seems the issue is not specific to a particular camera or overlay, but more likely related to bandwidth or processing limits when using high-resolution streams from multiple cameras.

    • Camera module: IMX219-160_IR-CUT_Camera

    • Interface: CSI2

    • Board: AM68A

    • Working resolution: 1920x1080

    • Desired resolution: 3264x2464

    • SDK: 10.1

    Let me know if you'd like me to test anything else.

    Best regards,
    Noushad

  • Hi Noushad,

    The image distortion only occurs when all three IMX219 cameras are enabled together, and the scripts are set to use high-resolution settings. When accessing any one of the cameras under this configuration, I see distortion in the image.

    Interestingly, when I switch to lower resolution, I can even enable four cameras on a single SerDes board without any image distortion.

    I agree with you. That does sound like some sort of bandwidth or processing limit being hit.

    Some limitations that are commonly hit are limits in VPAC and in the codec. For AM68A, I believe both have a limit of about 480 MPixels/s. In the current case, you mentioned distortion is seen when outputting to a monitor, so I suspect a VPAC limitation is being hit rather than a codec limitation.

    As an experiment, could you try lowering the frames per second to lessen data throughput? If it is currently 15 FPS, lower it to 10 FPS to see if distortion is still seen. If distortion is still seen at 10 FPS, try lowering it even more.
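    As a rough sanity check of the throughput hypothesis, the aggregate pixel rate can be computed directly. Note the 480 MPix/s figure and the assumption that pixel rate alone is the limiting factor are approximations; the real limit depends on format and overheads:

    ```python
    # Rough aggregate pixel-rate check against the ~480 MPix/s VPAC figure
    # mentioned above (approximate; real limits depend on format/overheads).
    WIDTH, HEIGHT = 3280, 2464       # IMX219 full-resolution mode
    VPAC_LIMIT = 480e6               # pixels per second (quoted above, approximate)

    def throughput(cameras: int, fps: int) -> float:
        """Aggregate pixel rate across all cameras, in pixels per second."""
        return cameras * WIDTH * HEIGHT * fps

    for cams, fps in [(1, 30), (3, 30), (3, 15), (3, 5)]:
        rate = throughput(cams, fps)
        status = "over" if rate > VPAC_LIMIT else "under"
        print(f"{cams} camera(s) @ {fps} fps: {rate / 1e6:6.1f} MPix/s ({status} limit)")
    ```

    By this arithmetic, three full-resolution streams at 30 FPS exceed the quoted figure, while 15 FPS or below stays under it, so lowering the frame rate is a reasonable first experiment.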

    Regards,

    Takuma

  • Hi,

    Thanks for your input.

    I already tried reducing the FPS to as low as 5 FPS, but unfortunately, the image distortion still persists when using high-resolution settings.

    For high-resolution capture, I’m using the TI-provided DCC files:

    Could there be any known issues or configuration limitations associated with these DCC files when using multiple IMX219 cameras at high resolution?

    Let me know if there are other areas I should check or tune.

    Best regards,

    Noushad

  • Hi Noushad,

    If distortion is still seen at 5 FPS, then it may not be a bandwidth issue. Three cameras at 5 FPS should carry roughly the same amount of data as one camera at 15 FPS.

    For high-resolution capture, I’m using the TI-provided DCC files

    Where were these DCC files obtained? I do not think we have a 3280x2464 DCC file in our default pre-built SD card image, but if these files are from an E2E thread, could you link that thread?

    In the meantime, let me ask another team member who worked on bringing up an 8MP camera to see if they saw some similar issues.

    Regards,

    Takuma