SK-TDA4VM: J721EXSKG01EVM TDA4VM Evaluation board CSI Camera IMX219 Stereo Depth Estimation using the SDE Engine onboard.

Part Number: SK-TDA4VM
Other Parts Discussed in Thread: TDA4VM

Please provide instructions on how to perform stereo depth estimation using the onboard SDE engine with two Arducam IMX219 cameras connected via CSI cables. The instructions provided in the documentation cover launch files with a bag file and a ZED camera. Are there any instructions for stereo depth estimation using the live video stream from the IMX219 cameras?

Also, I tried the bag file demo, but it shows "No Image" in RViz.

The gst-launch command works with the demo videos, but I am not sure how to use it with the IMX219 cameras.

Also, I cannot locate stereo_capture.py, stereo_calibrate.py, or stereo_test.py, either in the Robotics SDK 8.6.1 running on my TDA4VM or in the PC Docker container mentioned in your "Process This" webinar on stereo vision. Where can I find these scripts?

  • Hello,

    It is doable but needs some work for stereo calibration. As long as you have LDC LUTs for the two cameras, you should be able to use GStreamer to run the stereo depth engine on two IMX219 cameras. I believe you can still run the GStreamer SDE pipeline without LDC LUTs (you don't need ROS), but the output disparity map would not be correct.

    You can find the stereo calibration scripts in robotics_sdk/tools/stereo_camera/calibration. But note that this is a PC tool for USB cameras. For CSI cameras, you need to modify this tool so that it works with two CSI cameras on the target. (Alternatively, you could capture and save images on the target and run this tool on a PC with some modification.) As you know, the two cameras should be mounted firmly on a stereo rig before calibration.
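
    In case it helps as a starting point, below is a rough, untested sketch of how image pairs could be captured on the target with OpenCV (built with GStreamer support). This is not the SDK calibration tool itself, and the device/subdev nodes, resolution, and number of captures are placeholders you would need to adjust:

    # Hypothetical capture sketch for stereo calibration images (not part of the SDK).
    # Assumes OpenCV built with GStreamer support; adjust device/subdev nodes for your board.
    import time
    import cv2

    def imx219_pipeline(video_dev, subdev):
        # IMX219 ISP front end as discussed later in this thread, ending in appsink
        # so OpenCV can pull BGR frames.
        return (
            f"v4l2src device={video_dev} io-mode=5 ! queue leaky=2 ! "
            "video/x-bayer,width=1920,height=1080,format=rggb10 ! "
            "tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI "
            "dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 "
            "sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin "
            f"sink_0::device={subdev} ! "
            "video/x-raw,format=NV12,width=1920,height=1080 ! "
            "videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
        )

    left = cv2.VideoCapture(imx219_pipeline("/dev/video2", "/dev/v4l-subdev2"), cv2.CAP_GSTREAMER)
    right = cv2.VideoCapture(imx219_pipeline("/dev/video18", "/dev/v4l-subdev5"), cv2.CAP_GSTREAMER)

    for idx in range(30):                  # capture ~30 chart poses
        ok_l, img_l = left.read()
        ok_r, img_r = right.read()
        if not (ok_l and ok_r):
            break
        cv2.imwrite(f"left_{idx:03d}.png", img_l)
        cv2.imwrite(f"right_{idx:03d}.png", img_r)
        time.sleep(2)                      # move the calibration chart between captures

    left.release()
    right.release()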

    Best regards,

    Do-Kyoung

  • Can you share the correct GStreamer SDE command for taking the video source directly from the IMX219 cameras? Also, how do you add LDC LUTs to the GStreamer pipeline? Can this be run from Python/C++ code as well?

    I had tried earlier to just add sde.left_sink and sde.right_sink at the end of the individual camera pipelines, but it showed only one image and then hung.

  • Hi Aditya,

    I think in the previous E2E thread, Fabiana was helping you get the IMX219 camera stream: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1281917/sk-tda4vm-j721exskg01evm-tda4vm-evaluation-board-csi-camera-imx219-getting-detected-but-get-error-while-trying-to-open.

    Please reference that E2E thread for getting the camera stream, and reference our plugin wiki for how LDC can be integrated: https://github.com/TexasInstruments/edgeai-gst-plugins/wiki/tiovxldc#pipeline-examples

    And using something like gscam should make it possible to get the GStreamer pipeline into the ROS framework.

    Regards,

    Takuma

  • We are trying to run the GStreamer SDE command with v4l2src from the two IMX219 cameras instead of the filesrc given in the documentation.

    I tried using this command, but it doesn't seem to work.

    gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.left_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.right_sink tiovxsde name=sde1 ! tiovxsdeviz ! kmssink driver-name=tidss sync=false


    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
    3070.736495 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
    3070.736603 s: VX_ZONE_INIT:Enabled
    3070.736622 s: VX_ZONE_ERROR:Enabled
    3070.736638 s: VX_ZONE_WARNING:Enabled
    3070.738330 s: VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
    3070.739611 s: VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    WARNING: erroneous pipeline: could not link queue1 to sde

    Can you please tell us the right GStreamer pipeline to apply SDE to the two IMX219 cameras, given that this is the command to open each camera:

    For First Camera:

    gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! kmssink driver-name=tidss sync=false

    For Second Camera:

    gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! kmssink driver-name=tidss sync=false

  • Any updates on this?

  • I am not familiar with GStreamer, but I think you can't instantiate SDE twice. This is an example GStreamer SDE pipeline with file inputs:

    gst-launch-1.0                                                              \
    filesrc location=/opt/edge_ai_apps/data/videos/left-1280x720.avi !          \
    avidemux ! h264parse ! v4l2h264dec !                                        \
    video/x-raw, format=NV12 ! queue ! sde.left_sink                            \
    filesrc location=/opt/edge_ai_apps/data/videos/right-1280x720.avi !         \
    avidemux ! h264parse ! v4l2h264dec !                                        \
    video/x-raw, format=NV12 ! queue ! sde.right_sink                           \
    tiovxsde name=sde ! tiovxsdeviz ! kmssink sync=false

  • Hi Aditya,

    Apologies for the late reply. Replies on E2E are slightly delayed currently due to US Thanksgiving holidays.

    I will need a couple of days to take a deeper look into this issue, but in the meantime, I can give some quick suggestions.

    First, we have an example of how tiovxsde can be used in the wiki section of the GitHub repository hosting the GStreamer plugins, which I recommend using as a reference when building a pipeline: https://github.com/TexasInstruments/edgeai-gst-plugins/wiki/tiovxsde

    Second, I see kmssink used twice in the pipeline, so instead of two inputs merging into one, it looks like two complete but separate pipelines are being made. For example, the "Single SDE" example in the wiki page's "Pipeline examples" section creates one long string with two sections:

    1. filesrc location=left.avi ! avidemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12 ! queue ! sde.left_sink 

    2. filesrc location=right.avi ! avidemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink sync=false

    And combining the two sections and adding the gst-launch-1.0 command creates the final complete string: gst-launch-1.0 filesrc location=left.avi ! avidemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12 ! queue ! sde.left_sink filesrc location=right.avi ! avidemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink sync=false

    Similarly, I would recommend building the pipeline so that the first camera input terminates with sde.left_sink, and the second camera input terminates with sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink sync=false.
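
    On the earlier question about running this from code: as a rough sketch (not an official example), the same pipeline string can be handed to GStreamer from Python via Gst.parse_launch(); the file names below are just the wiki example's placeholders, and the camera branches can be swapped in once they work standalone:

    # Rough sketch: launching the combined SDE pipeline string from Python using the
    # GStreamer GObject introspection bindings (python3-gi).
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    pipeline_str = (
        "filesrc location=left.avi ! avidemux ! h264parse ! v4l2h264dec ! "
        "video/x-raw, format=NV12 ! queue ! sde.left_sink "
        "filesrc location=right.avi ! avidemux ! h264parse ! v4l2h264dec ! "
        "video/x-raw, format=NV12 ! queue ! sde.right_sink "
        "tiovxsde name=sde ! tiovxsdeviz ! kmssink sync=false"
    )

    pipeline = Gst.parse_launch(pipeline_str)
    pipeline.set_state(Gst.State.PLAYING)

    # Block until an error or end-of-stream, then shut the pipeline down cleanly.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)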

    Regards,

    Takuma

  • Hi Aditya,

    Please reference the pipeline in my previous post if you have not already.

    Regards,

    Takuma

  • This didn't quite work. Can you please try it out and let me know the right pipeline?

  • Hi Aditya,

    Let us troubleshoot this one step at a time. Does this example pipeline using file input work?

    If it does not work, please post the full logs from running this pipeline.

    If it does work, please try running something like the following:

    • gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.right_sink tiovxsde name=sde1 ! tiovxsdeviz ! kmssink driver-name=tidss sync=false

    Assuming the video# and subdev# numbers are correct, this should give an output. If it does not, again, please post the full logs.

    Regards,

    Takuma

  • Hi Takuma,

    I have tried the pipeline mentioned in the link for a single output, but I am facing these issues:

    root@tda4vm-sk:/opt/edgeai-gst-apps# DCC_FILE="/opt/imaging/imx390/dcc_ldc_wdr.bin"
    root@tda4vm-sk:/opt/edgeai-gst-apps# INPUT_FILE="/opt/edgeai-tiovx-modules/data/input/imx390_fisheye_1936x1096_nv12.yuv"
    root@tda4vm-sk:/opt/edgeai-gst-apps# WIDTH=1936
    root@tda4vm-sk:/opt/edgeai-gst-apps# HEIGHT=1096
    root@tda4vm-sk:/opt/edgeai-gst-apps# OUTPUT_WIDTH=1980
    root@tda4vm-sk:/opt/edgeai-gst-apps# OUTPUT_HEIGHT=1080
    root@tda4vm-sk:/opt/edgeai-gst-apps# FORMAT="NV12"
    root@tda4vm-sk:/opt/edgeai-gst-apps# FORMAT_LOWERCASE="nv12"
    root@tda4vm-sk:/opt/edgeai-gst-apps# OUTPUT_FILE="output.raw"
    root@tda4vm-sk:/opt/edgeai-gst-apps# SENSOR="SENSOR_SONY_IMX390_UB953_D3"
    root@tda4vm-sk:/opt/edgeai-gst-apps# IN_POOL_SIZE=4
    root@tda4vm-sk:/opt/edgeai-gst-apps# OUT_POOL_SIZE=4
    root@tda4vm-sk:/opt/edgeai-gst-apps# 
    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 -e filesrc location=${INPUT_FILE} !                                                                \
    > videoparse format=${FORMAT_LOWERCASE} width=${WIDTH} height=${HEIGHT} !                                           \
    > tiovxldc dcc-file=${DCC_FILE} sensor-name=${SENSOR} in-pool-size=${IN_POOL_SIZE} out-pool-size=${OUT_POOL_SIZE} ! \
    > video/x-raw,width=${OUTPUT_WIDTH},height=${OUTPUT_HEIGHT} !                                                       \
    > filesink location=${OUTPUT_FILE}
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       284.082499 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       284.082726 s:  VX_ZONE_INIT:Enabled
       284.082805 s:  VX_ZONE_ERROR:Enabled
       284.082815 s:  VX_ZONE_WARNING:Enabled
       284.083421 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       284.085493 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    ERROR: Pipeline doesn't want to pause.
    ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Resource not found.
    Additional debug info:
    ../gstreamer-1.16.3/plugins/elements/gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
    No such file "/opt/edgeai-tiovx-modules/data/input/imx390_fisheye_1936x1096_nv12.yuv"
    Setting pipeline to NULL ...
    Freeing pipeline ...
       284.090636 s:  VX_ZONE_INIT:[tivxHostDeInitLocal:107] De-Initialization Done for HOST !!!
       284.095042 s:  VX_ZONE_INIT:[tivxDeInitLocal:193] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    IPC: Deinit ... !!!
    IPC: DeInit ... Done !!!
    MEM: Deinit ... !!!
    DDR_SHARED_MEM: Alloc's: 0 alloc's of 0 bytes 
    DDR_SHARED_MEM: Free's : 0 free's  of 0 bytes 
    DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes 
    DDR_SHARED_MEM: Total size: 536870912 bytes 
    MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    root@tda4vm-sk:/opt/edgeai-gst-apps# 
    
    

    In addition, while running the gst-launch pipeline mentioned above, I am facing these issues:

    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.right_sink tiovxsde name=sde1 ! tiovxsdeviz ! kmssink driver-name=tidss sync=false
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       815.236972 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       815.237038 s:  VX_ZONE_INIT:Enabled
       815.237047 s:  VX_ZONE_ERROR:Enabled
       815.237068 s:  VX_ZONE_WARNING:Enabled
       815.237727 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       815.239076 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    WARNING: erroneous pipeline: No sink-element named "(null)" - omitting link
    root@tda4vm-sk:/opt/edgeai-gst-apps# 
    


    Please help me resolve this.

    Best,
    Aditya



  • Hi Aditya,

    As for the first pipeline, the directory structure must have changed in the underlying prebuilt micro SD card image. I will look into what changed and why.

    That was a mistake on my part in the second pipeline. Could you try this pipeline instead:

    gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false

    The main change is from name=sde1 to name=sde. sde.right_sink and sde.left_sink look for the right_sink and left_sink pads of an element named sde, so naming the element sde1 leaves them trying to link to an element that does not exist.

    Regards,

    Takuma

  • I tried the second pipeline, and this is the issue I am facing after executing it:

    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       104.692334 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       104.692397 s:  VX_ZONE_INIT:Enabled
       104.692405 s:  VX_ZONE_ERROR:Enabled
       104.692413 s:  VX_ZONE_WARNING:Enabled
       104.693064 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       104.694297 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    WARNING: erroneous pipeline: could not link queue1 to sde
    

    Please help me resolve this.

  • The error you have now might have nothing to do with resolution, but can you test with a smaller resolution, e.g. 1280x720? The maximum input resolution for SDE is 2048x1024, I believe, so a height of 1080 is not supported. I recommend trying a smaller resolution.

  • This is the issue after making the modifications mentioned above:

    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1280, height=720, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       983.521317 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       983.521389 s:  VX_ZONE_INIT:Enabled
       983.521398 s:  VX_ZONE_ERROR:Enabled
       983.521411 s:  VX_ZONE_WARNING:Enabled
       983.522178 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       983.523432 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
       983.708242 s:  VX_ZONE_ERROR:[tivxAddKernelVpacVissValidate:643] Parameters 'output2' and 'raw' should have the same value for VX_IMAGE_WIDTH
       983.708486 s:  VX_ZONE_ERROR:[tivxAddKernelVpacVissValidate:648] Parameters 'output2' and 'raw' should have the same value for VX_IMAGE_HEIGHT
       983.708550 s:  VX_ZONE_ERROR:[ownGraphNodeKernelValidate:531] node kernel validate failed for kernel com.ti.hwa.vpac_viss at index 0
       983.708613 s:  VX_ZONE_ERROR:[vxVerifyGraph:1941] Node kernel Validate failed
       983.708681 s:  VX_ZONE_ERROR:[vxVerifyGraph:2109] Graph verify failed
       983.709129 s:  VX_ZONE_ERROR:[ownReleaseReferenceInt:294] Invalid reference
    ERROR: from element /GstPipeline:pipeline0/GstTIOVXISP:tiovxisp0: Unable to init TIOVX module
    Additional debug info:
    ../git/gst-libs/gst/tiovx/gsttiovxmiso.c(1511): gst_tiovx_miso_negotiated_src_caps (): /GstPipeline:pipeline0/GstTIOVXISP:tiovxisp0
    Execution ended after 0:00:00.094501414
    Setting pipeline to NULL ...
    

  • I still see 1920x1080 in the first camera branch: root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080

  • I tried this, but it is not displaying on the PC:

    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1280, height=720, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1280, height=720, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false
    
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       250.846698 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       250.849952 s:  VX_ZONE_INIT:Enabled
       250.849973 s:  VX_ZONE_ERROR:Enabled
       250.849981 s:  VX_ZONE_WARNING:Enabled
       250.850787 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       250.851848 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    
    

  • Hi Aditya,

    Since SDE seems to have a resolution limitation, could you try some experiments?

    First, confirm that this pipeline still works:

    • gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1920, height=1080 ! queue ! kmssink driver-name=tidss sync=false

    And since changing the capsfilter did not seem to work, could you try adding a tiovxmultiscaler to downscale the stream to 720p:

    • gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1920, height=1080, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin sink_0::device=/dev/v4l-subdev2 !  tiovxmultiscaler ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! kmssink driver-name=tidss sync=false

    Then, once you confirm the resolution can be downscaled, use two of these pipelines and feed them into SDE.

    Regards,

    Takuma

  • The first pipeline works, and I also tried this gst-launch pipeline:

    root@tda4vm-sk:/opt/edgeai-gst-apps# gst-launch-1.0 v4l2src device=/dev/video2 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1280, height=720, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1280x720.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1280x720.bin sink_0::device=/dev/v4l-subdev2 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.left_sink v4l2src device=/dev/video18 io-mode=5 ! queue leaky=2 ! video/x-bayer, width=1280, height=720, format=rggb10 ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1280x720.bin format-msb=6 sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1280x720.bin sink_1::device=/dev/v4l-subdev5 ! video/x-raw,format=NV12, width=1280, height=720 ! queue ! sde.right_sink tiovxsde name=sde ! tiovxsdeviz ! kmssink driver-name=tidss sync=false
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=5) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       238.325247 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       238.325310 s:  VX_ZONE_INIT:Enabled
       238.325319 s:  VX_ZONE_ERROR:Enabled
       238.325327 s:  VX_ZONE_WARNING:Enabled
       238.326009 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
       238.328325 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
       238.570125 s:  VX_ZONE_WARNING:[ownLogRtTraceAddEventClass:220] Log RT event 281473619131712 of event class 3 already exists, not adding again
       238.570217 s:  VX_ZONE_WARNING:[ownLogRtTraceAddEventClass:220] Log RT event 281473619131713 of event class 3 already exists, not adding again
    
    
    It is showing an output depth map on the PC over HDMI. Please assist us with how to calculate depth values from the depth map output.

    Best regards,

    Aditya 

  • depth = focal_length * baseline / disparity. You can easily find lots of material online.
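
    For example, a small illustrative sketch of that relation in Python (the baseline and focal length values are made up; use the ones from your own calibration):

    # Illustrative only: pinhole stereo relation depth = focal_length * baseline / disparity.
    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        if disparity_px <= 0:
            return float("inf")        # zero or negative disparity -> no valid depth
        return focal_length_px * baseline_m / disparity_px

    # e.g. f = 1400 px, baseline = 0.06 m, disparity = 42 px -> 2.0 m
    print(depth_from_disparity(42.0, 1400.0, 0.06))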

  • We know the formula.

    But we wanted to know whether using OpenCV's block matcher is the only way, or whether there is:

    a) a way to get the disparity value (which might differ from what is shown as the disparity-map image pixel value, 0-255) from the GStreamer pipeline via a Python or C++ script, or

    b) a script that TI provides on the TDA4VM that can directly give the depth or disparity values, or

    c) a way to read the SDE image displayed by the GStreamer pipeline into OpenCV in real time and get disparity values from it?

  • Can you capture and share the disparity output displayed on the screen?

  • The output doesn't look correct.

    1. I see the same output duplicated on the left and the right sides. 

    2. You didn't rectify the cameras. You have to calibrate the stereo cameras.

  • Yeah. I have to add the calibration files. Where do I add them in the pipeline? Also, how do I get disparity and depth values from it?

  • Hi Aditya,

    I am currently checking with the engineer who took part in developing the SDE plugin. We will get back to you in 1-2 business days.

    Regards,

    Takuma

  • I understand that the output of the pipelines is NV12, but the ROS sde.launch files require YUV422 input, whereas tiovxmultiscaler can only convert to NV12, GRAY8, and GRAY16. So is there a way to make this conversion?

  • Hi Aditya,

    If the color format needs conversion, the videoconvert plugin can be used as a test to convert between most formats. It does not run on our hardware accelerators and runs purely on the CPU, so performance would take a big hit, but it is useful for debugging.

    Regards,

    Takuma

  • 1. For your previous questions,

    a) a way to get the disparity value (which might be different than what is shown as disparityMap image pixel value (0-255)) from the gstreamer pipeline via a python or C++ script or

    => You can save the SDE output in binary by appending ... tiovxsde name=sde ! multifilesink location=output_%05d.bin at the end of your GStreamer string. This will create a file for each frame, named sequentially: output_00001.bin, output_00002.bin, and so on. The SDE output is a 16-bit value for every pixel, with bit packing Sign[15], Integer[14:7], Fractional[6:3], Confidence[2:0], so you can write a Python script to read the disparity values for all pixels (see the sketch at the end of this post). More about the SDE output can be found here: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/group__group__vision__function__dmpac__sde.html

    b) whether there is a script that TI provides in the TDA4VM that can directly give the depth or disparity values as well? or

    => You can easily write a Python script to read the disparities from the binary files you saved. You can also convert disparity (d) to depth (Z) using Z = B*f/d; the sketch at the end of this post covers this step as well.

    c) a way to read in the SDE image displayed by the gstreamer pipeline in OpenCV realtime and get disparity values from it?

    => No.

    2. For stereo rectification, please refer to https://software-dl.ti.com/jacinto7/esd/robotics-sdk/latest/docs/source/tools/stereo_camera/calibration/README.html. You can use the script in the Robotics SDK, but you have to save chart images from the EVM or SK board using GStreamer.

    3. I understand that the output of the pipelines are NV12 but the ROS sde.launch files require input of YUV422. Whereas, tiovxmultiscaler can only convert to NV12, GRAY8 and GRAY18. So is there a way to make this conversion?

    => The ROS sde.launch file can take YUV420 as well. Even if the input is YUV422 (UYVY format), LDC (used for rectification) can also convert YUV422 to YUV420.
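
    Tying 1(a) and 1(b) together, here is a rough, untested sketch (not a TI-provided script) of unpacking one saved frame and converting it to depth, assuming the bit packing above; the frame size, baseline, and focal length are placeholders that must come from your own pipeline and calibration:

    # Rough sketch: unpack one SDE frame saved with multifilesink, using the packing
    # Sign[15], Integer[14:7], Fractional[6:3], Confidence[2:0].
    import numpy as np

    WIDTH, HEIGHT = 1280, 720                  # placeholder: must match the SDE input size
    raw = np.fromfile("output_00001.bin", dtype=np.uint16)
    raw = raw[: WIDTH * HEIGHT].reshape(HEIGHT, WIDTH)

    sign       = (raw >> 15) & 0x1             # bit 15
    integer    = (raw >> 7)  & 0xFF            # bits 14:7
    fraction   = (raw >> 3)  & 0xF             # bits 6:3 (1/16-pixel steps)
    confidence = raw & 0x7                     # bits 2:0

    disparity = (integer + fraction / 16.0) * np.where(sign == 1, -1.0, 1.0)

    # Z = B * f / d, masking low-confidence and near-zero disparities.
    BASELINE_M, FOCAL_PX = 0.06, 1400.0        # placeholders: from your calibration
    valid = (confidence >= 1) & (np.abs(disparity) > 0.5)
    depth = np.where(valid, BASELINE_M * FOCAL_PX / np.maximum(np.abs(disparity), 1e-6), 0.0)

    print("median depth of valid pixels [m]:", np.median(depth[valid]))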

  • We have created a C++/OpenCV/GStreamer/ROS publisher for the IMX219 cameras, using the ZED camera publisher script as a reference. However, we observed that it uses a lot of CPU, while the SDE part takes barely 20% CPU. Is there a way to optimize this?

    Secondly, we observed that the SDE nodes and the image publisher have trouble compiling with catkin but compile easily via CMake.

    Third, the official QSG (Quick Start Guide for the TDA4VM J721EXSKG01EVM evaluation starter kit) blog has been taken down. Do you happen to have a copy of it? Can you please share it? If we need to start over at any point, the quick start guide would be a necessity.

    Will share the publisher code and CMakeLists.txt soon.

  • Hi Aditya,

    Third, the official QSG (Quick Start Guide for the TDA4VM J721EXSKG01EVM evaluation starter kit) blog has been taken down. Do you happen to have a copy of it? Can you please share it? If we need to start over at any point, the quick start guide would be a necessity.

    By the quick start guide, is this the one: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/latest/exports/edgeai-docs/devices/TDA4VM/linux/getting_started.html

    I think there have been a couple of changes to some of the URLs.

    Regards,

    Takuma