Linux/AM4379: gst-launch-1.0 v4l2sink missing video1 device

Part Number: AM4379

Tool/software: Linux

My client has designed a custom board that uses an AM4379 processor to capture video from a Techwell TW9906 video decoder.

The TW9906 has eight data lines which are connected to CAM_0 through CAM_7 and the decoder's PCLK, VSYNC and HSYNC are also connected to their respective pins on the AM4379. On this design the decoder's I2C signals are routed to the AM4379's I2C2_SDA and I2C2_SCL pins.

I've configured the kernel according to the information contained within the "Linux Core VPFE User's Guide" and the “Kernel Configuration Options” section of the “Linux Core DSS User’s Guide”.

The kernel boots successfully to the command line and I’m able to stream video from /dev/video0 to /dev/fb0 using: “gst-launch-0.10 -ev v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=640,height=480' ! ffmpegcolorspace ! fbdevsink”.

However, when I use “./gst-launch-1.0 -ev v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=640,height=480' ! videoconvert ! v4l2sink” to stream video to the display, I get the following message: “ERROR: from element /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: Cannot identify device '/dev/video1'.”

How do I configure the Kernel to create the /dev/video1 device file?

Gary

  • Hi Gary,

    I've notified the software team. Their feedback will be posted here.

    Best Regards,
    Yordan
  • The display driver on the AM437x platform is based on the DRM framework, not on V4L2, so no V4L2 device node is created for the display. The DRM-based display driver supports fbdev emulation, which is why you were able to run the gstreamer pipeline with fbdevsink. To use a DRM-API-based sink, you need to use waylandsink. Note that for waylandsink to run, Weston must be running. On PLSDK, we do so by running the script below -

    #/etc/init.d/weston start

    You can refer to below wiki page to learn more on display driver -

    http://processors.wiki.ti.com/index.php/Linux_Core_DSS_User%27s_Guide
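    As a sketch (reusing the device node and caps from your earlier 1.x pipeline; untested on this particular board), the waylandsink version would look like:

```shell
# Start Weston first, then render through Wayland instead of a
# /dev/video* sink (device path and caps are assumed from the
# pipelines quoted earlier in this thread):
/etc/init.d/weston start
gst-launch-1.0 -ev v4l2src device=/dev/video0 ! \
  'video/x-raw,format=YUY2,width=640,height=480' ! \
  videoconvert ! waylandsink
```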

  • Manisha,

    Thank you for your reply.

    My ultimate goal is to overlay a 640x480 YCbCr video image on top of a static 1280x800 RGB graphic. I’m able to do this using the following pipeline (note that in this example the static graphic is represented by the videotestsrc).

    ./gst-launch-0.10 -ev videomixer name=mix sink_1::xpos=100 sink_1::ypos=100 sink_1::alpha=1.0 sink_1::zorder=3 sink_2::xpos=0 sink_2::ypos=0 sink_2::zorder=2 ! ffmpegcolorspace ! fbdevsink v4l2src device=/dev/video0 ! videoscale ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! mix. videotestsrc ! video/x-raw-rgb, width=1280, height=800 ! mix.

    This example works, but there is a 20-30 millisecond delay between an action occurring in front of the camera and that action appearing on the display.

    What is the most expedient way to reduce this latency? This is a simple embedded application and it seems to me that implementing xwindow/wayland is overly complicated.

    Is it possible to create two frame buffers (one video and one graphic) that can be overlaid without having to implement the Wayland protocol?

    I’ve also looked at the dual-camera-demo-1.0 that is included in the SDK.

    Would setting up the DRM overlays in user space be the better approach? That is, would it be easier and less complicated?

    Gary

  • Hi Gary,

    The Display Sub System (DSS) IP on AM437x supports hardware-accelerated, on-the-fly overlaying and display.

    Please follow the dual-camera demo; it meets your need and is easier and less complicated. In dual-camera-demo, the graphics are drawn in software. If you want the graphics to be drawn by SGX, then please follow video-graphics-test.
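    If you do end up driving the DSS overlays from user space (the dual-camera-demo approach), the modetest utility that ships with libdrm is a quick way to see which planes and CRTCs the display driver exposes before writing any code (the module name below is an assumption for this platform; output varies per board):

```shell
# List the DRM planes exposed by the display driver; plane IDs,
# supported formats and CRTC bindings are board-specific:
modetest -p
# If more than one DRM device is present, the driver module can be
# named explicitly (module name assumed here):
# modetest -M omapdrm -p
```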

    The latency is a function of the camera capture rate, the number of frames queued before processing starts, and the display rate, assuming the overlay happens in DSS. If you send each image to the display immediately after capture, the latency you observe should be about one display period. Otherwise, latency = capture_period * num_frames_queued_before_sending_to_display + display_period.

  • Thank you, Manisha.

    I don't specifically set up my camera sensor driver to capture a certain number of frames. Does V4L2/VPFE use a default number of frames? Is there a way to check the current setting for the number of frames that are captured before the image is sent to the display?

    Gary
  • The default number is three, set by the VIP driver; the VIP driver is designed with this assumption.
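    Putting this together with the latency formula above gives a back-of-the-envelope estimate (a sketch only: the 30 fps capture rate and 60 Hz display rate are assumed numbers, and only the 3-frame queue depth comes from the VIP driver default):

```shell
# Worst-case latency per the formula
# capture_period * frames_queued + display_period.
capture_ms=$(( 1000 / 30 ))   # ~33 ms per frame at an assumed 30 fps
queued=3                      # VIP driver default queue depth
display_ms=$(( 1000 / 60 ))   # ~16 ms per refresh at an assumed 60 Hz
latency_ms=$(( capture_ms * queued + display_ms ))
echo "estimated latency: ${latency_ms} ms"
```

    This rough bound is noticeably larger than the 20-30 ms observed earlier in the thread, which is consistent with frames being sent to the display before the queue fills.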