AM62A7: ISP: Is it possible to use the output channels of the multiscaler as an input for a separate gstreamer pipeline?

Part Number: AM62A7

Hi,

We want to capture video from one MIPI-CSI2 camera, and would like to have two separate output streams with different parameters. Stream 1 will probably be the full image, and stream 2 will have either the full image or a ROI. Is it possible to implement this in such a way that I can restart (and reconfigure) stream 2 independently of stream 1?

I was thinking of starting one gstreamer pipeline for the camera capture and ISP operations (with the multiscaler set up to output two streams), and then having the same program start two separate pipelines to further process the output channels of the camera pipeline. The goal is to keep the camera capture pipeline running, and to be able to restart the processing pipelines at will without them interfering with each other.

Is it possible to do this with the current framework?

Regards,

Bas Vermeulen

  • Hello Bas,

    I do not think this is feasible. I assume the processing pipelines would take their input from the multiscaler outputs, but the multiscaler cannot act as a source element for a separate gstreamer pipeline.

    Regards,

    Jianzhong

  • Hi Bas,

    Yes, you can use the multiscaler to downscale the incoming camera stream and use the outputs for different purposes. In the gstreamer pipeline below, one src pad of the multiscaler displays the full image, and the other src pad encodes a scaled-down resolution and stores it to a file.

    Example:

    gst-launch-1.0 -v v4l2src device=/dev/video-rpi-cam0 io-mode=dmabuf-import num-buffers=1000 ! \
    video/x-bayer, width=1920, height=1080, framerate=60/1, format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_10b.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_10b.bin format-msb=9 ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! \
    tiovxmultiscaler name=msc \
    msc. ! queue ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! kmssink driver-name=tidss sync=false \
    msc. ! queue ! video/x-raw, format=NV12, width=640, height=480, framerate=30/1 ! v4l2h265enc ! filesink location=480.265

    Hope this helps.

    Best Regards,

    Suren

  • Hi Suren, thanks for the help.

    It doesn't do exactly what I want, but combined with v4l2loopback it should get me there.

    gst-launch-1.0 -v v4l2src device=/dev/video-rpi-cam0 io-mode=dmabuf-import num-buffers=1000 ! \
    video/x-bayer, width=1920, height=1080, framerate=60/1, format=rggb10 ! \
    tiovxisp sink_0::device=/dev/v4l-rpi-subdev0 sensor-name="SENSOR_SONY_IMX219_RPI" dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_10b.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_10b.bin format-msb=9 ! \
    video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! \
    tiovxmultiscaler name=msc \
    msc. ! queue ! video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! v4l2sink device=/dev/videoloopback0 \
    msc. ! queue ! video/x-raw, format=NV12, width=640, height=480, framerate=30/1 ! v4l2sink device=/dev/videoloopback1

    Two other pipelines would then pick up the stream from the videoloopback devices for further processing and streaming with H.265 RTP.
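
    A rough sketch of what those two consumer pipelines might look like, assuming the loopback device nodes above and the SDK's v4l2h265enc encoder used earlier in this thread; the host address, ports, and caps are placeholders and this is untested:

```shell
# Consumer pipeline 1 (hypothetical): full-resolution feed from the first
# loopback device, hardware H.265 encode, then RTP over UDP.
gst-launch-1.0 v4l2src device=/dev/videoloopback0 ! \
  video/x-raw, format=NV12, width=1920, height=1080, framerate=30/1 ! \
  v4l2h265enc ! h265parse ! rtph265pay ! udpsink host=192.168.0.10 port=5000

# Consumer pipeline 2 (hypothetical): scaled-down feed; it can be stopped and
# restarted without touching the capture pipeline or consumer pipeline 1.
gst-launch-1.0 v4l2src device=/dev/videoloopback1 ! \
  video/x-raw, format=NV12, width=640, height=480, framerate=30/1 ! \
  v4l2h265enc ! h265parse ! rtph265pay ! udpsink host=192.168.0.10 port=5001
```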

    I may need to make some modifications to v4l2loopback to allow it to pass on the dmabuffers to keep things efficient, but we'll see.

  • Hi Bas Vermeulen,

    Can you please tell me how you added v4l2loopback to your build?

    Did you follow https://github.com/umlaeute/v4l2loopback/tree/main? If so, which kernel headers did you point it at?

  • Hi,

    I haven't implemented this yet; it's just what I intend to do.

    You should be able to cross-build the module against the kernel in your SDK though; just set the KERNEL_DIR variable to the directory your kernel was built from (it's ~/ti-processor-sdk-linux-edgeai-am62axx-evm-09_01_00_07/board-support/ti-linux-kernel-6.1.46+gitAUTOINC+247b2535b2-g247b2535b2 for me).
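
    For reference, an out-of-tree cross build of the module might look roughly like this (the toolchain prefix and the devices count are assumptions; adjust them to your SDK toolchain):

```shell
git clone https://github.com/umlaeute/v4l2loopback.git
cd v4l2loopback

# KERNEL_DIR points at the configured and built kernel tree from the SDK
make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- \
     KERNEL_DIR=~/ti-processor-sdk-linux-edgeai-am62axx-evm-09_01_00_07/board-support/ti-linux-kernel-6.1.46+gitAUTOINC+247b2535b2-g247b2535b2

# Copy v4l2loopback.ko to the target, then create two loopback devices:
# insmod v4l2loopback.ko devices=2
```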

    Please note that v4l2loopback currently doesn't support dmabufs, so to keep things efficient that support would need to be implemented.

  • Hello Udhayamoorthi Ramasamy,

    Please open a new thread for your inquiry.

    Thanks,

    Jianzhong

  • Hi Bas,

    we also experimented with pipeline splits.

    v4l2loopback is not zero-copy (no dmabuf support) and puts an undesired load on the system.

    Take a look at https://www.ridgerun.com/gstd

    It also loads the system, but not as badly as v4l2loopback; and, more importantly, it is a much more flexible system.
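
    With GstD, the goal of restarting one stream independently could look roughly like this. The pipeline descriptions are illustrative placeholders; the gst-client commands (pipeline_create, pipeline_play, pipeline_stop, pipeline_delete) are GstD's standard CLI:

```shell
# Start the daemon that owns all pipelines
gstd &

# Create two independently controllable pipelines by name
gst-client pipeline_create full "v4l2src device=/dev/videoloopback0 ! fakesink"
gst-client pipeline_create roi  "v4l2src device=/dev/videoloopback1 ! fakesink"
gst-client pipeline_play full
gst-client pipeline_play roi

# Restart one stream without disturbing the other
gst-client pipeline_stop roi
gst-client pipeline_delete roi
gst-client pipeline_create roi "v4l2src device=/dev/videoloopback1 ! fakesink"
gst-client pipeline_play roi
```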