
How to understand video capturing on the DM816x?

Other Parts Discussed in Thread: TVP7002

Hi All , 

-> I am starting with video processing and capture for the first time and have no prior exposure to it.

-> I have the Spectrum Digital EVM, on which I have just tested capture of component video fed in through the TVP7002, using a prebuilt executable.

-> I am sorry if the questions below are not clear, or if I am missing some important information.

On the same note, can anyone from the group help me with the following:

a. What is the flow (i.e. the hardware and software sections) involved in capturing component video from the TVP7002?

b. How are the driver, V4L2, and GStreamer linked (just briefly, to start)?

c. Which sample programs work together to provide the complete functionality?

d. How are OpenMAX and V4L2 related for capture?

I am trying to make sense of these terms that I have just come across and to find the proper flow, so that I can understand video capture using GStreamer.

(As of now I have just started going through the wiki for the terms mentioned above!)

Thank You,

Ashish Kumar Mishra.

  • Moving this to the DM816x forum.

    BR
    Pavel

  • Hello,

    Ashish Mishra1 said:

    a. What is the flow (i.e. the hardware and software sections) involved in capturing component video from the TVP7002?

    EZSDK controls external interfaces like the TVP7002 through the media controller binary, but if you want to use a different external decoder you can add it. We have a wiki page about this.

    Ashish Mishra1 said:
    b. How are the driver, V4L2, and GStreamer linked (just briefly, to start)?

    For v4l2 capture you could use:

    - the saLoopBack demo in EZSDK:

    http://processors.wiki.ti.com/index.php/TI81XX_PSP_VIDEO_CAPTURE_Driver_User_Guide

    - gstreamer (example pipelines):

    gst-launch -e -v v4l2src device="/dev/video0" always-copy=false queue-size=12 num-buffers=-1 ! 'video/x-raw-yuv-strided,format=(fourcc)NV12,width=800,height=480,framerate=(fraction)60/1' ! omxbufferalloc silent=false numBuffers=12 ! omx_scaler ! omx_ctrl display-mode=OMX_DC_MODE_1080P_60  ! omx_videosink sync=true

    gst-launch -v  v4l2src always-copy=false queue-size=12 num-buffers=2000 ! 'video/x-raw-yuv-strided,format=(fourcc)YUV2,width=1280,height=720,framerate=(fraction)30/1' ! omxbufferalloc numBuffers=12 ! omx_h264enc bitrate=5000000 ! gstperf ! filesink location=v4l2.h264

    Ashish Mishra1 said:
    c. Which sample programs work together to provide the complete functionality?

    If I understand you correctly, please check the previous answer.

    Ashish Mishra1 said:
    d. How are OpenMAX and V4L2 related for capture?

    For capture you could use either OMX or V4L2.

    You could check the user guide in EZSDK on how to execute the capture_encode demo:

    /ti-ezsdk_dm816x-evm_5_05_02_00/docs

    or the OMX user guide, page 70:

    ti-ezsdk_dm816x-evm_5_05_02_00/component-sources/omx_05_02_00_48

    Please notice that we have two binaries: dm816x_hdvpss_v4l2.xem3 and dm816x_hdvpss.xem3.

    The difference between dm816x_hdvpss.xem3 and dm816x_hdvpss_v4l2.xem3 is that one controls I2C from the M3 and the other does not. If you want to use V4L2 capture, you should use the binary without M3 I2C control, which is the one with v4l2 in its name. This is because we support two application stacks: one is OpenMAX based and the other is V4L2 based, and each exposes its own API to the final product. So for OpenMAX, I2C is controlled through the M3, while for V4L2, I2C is controlled through the A8. That is why we have two binaries.

    dm816x_hdvpss.xem3:
       - This is used for OpenMAX applications, i.e. the user's application uses the OpenMAX interface.
            * The display driver is controlled by the A8 through OpenMAX. Does not support V4L2.
            * The capture driver is controlled by the A8 through OpenMAX. Does not support V4L2.
       - I2C can NOT be used from the A8 while the HDVPSS driver is running.
       - The features are the same as dm816x_hdvpss_v4l2.xem3, except for the I2C control.

    dm816x_hdvpss_v4l2.xem3:
       - This is used for V4L2 applications, i.e. the user's application uses the V4L2 interface.
            * The display driver is controlled by the A8 through V4L2. Does not support OpenMAX.
            * The capture driver is controlled by the A8 through V4L2. Does not support OpenMAX.
       - I2C can be used from the A8 while the HDVPSS driver is running.
       - The features are the same as dm816x_hdvpss.xem3, except for the I2C control.

    Please let me know if you have further questions.

    Best Regards,

    Margarita

  • Dear Margarita, 

    Thanks for the initial pointers; I will test the steps you have just mentioned at my end.

    1. "..EZSDK controls external interfaces like TVP7002 through media controller.."

          -> As I am starting in this domain, I would first try to understand the available setup and the solution you pointed to above.

          -> Is the code below the actual driver being used for capturing component video from the TVP7002?

           http://lxr.free-electrons.com/source/drivers/media/video/tvp7002.c?v=3.0

     

    2. Along the same lines, which code should I refer to in order to understand the data flow from

         TVP7002 -> V4L2 layer -> GStreamer -> Application?

         If I get the set of source files that are collectively responsible for the actual operation, I can study those files (as I mentioned, I do not have a clear picture of how V4L2 and GStreamer interact with each other).

    Basically, if I get the capture flow along with the names of the source files, I can put in some effort and understand it. I hope my query is clear to you.

    So could you please help me with the same? I am sorry if I am missing any important information you have already shared, as I am still trying to get a clear idea.

    Thank You,

    Ashish Kumar Mishra 

  • Hello,

    In the GStreamer plugins we have the v4l2src element, which reads frames from a Video4Linux2 device and is used for capture. You can set properties such as "device", which specifies the device location and defaults to /dev/video0.

    V4L2 capture streaming is supported on /dev/video0 node with TVP7002 as decoder.

    In GStreamer we also have the v4l2sink element, which is responsible for displaying frames.

    You could find tvp7002.c in ti-ezsdk_dm816x-evm_5_05_02_00/board-support/linux-2.6.37-psp04.04.00.01/drivers/media/video

    As I wrote in my previous post, please check these user guides:

    http://processors.wiki.ti.com/index.php/TI81XX_PSP_VIDEO_CAPTURE_Driver_User_Guide

    http://processors.wiki.ti.com/index.php/TI81XX_PSP_VPSS_Video_Driver_User_Guide

    Best Regards,

    Margarita

  • Dear Margarita Gashova, 

     

    Thanks for the input. I am able to test component video capture using the DM816x EVM.

    The sample application used is "capture_encode_a8host_debug.xv5T" at /usr/share/ti/ti-omx/, run as:

    ./capture_encode_a8host_debug.xv5T -o sample.h264 -m 720p -f 60 -b 1000000 -d 0 -n 1000

    Thanks ,

    Ashish Kumar Mishra 

  • Hello,

    Let me know if you need further details.

    Best Regards,

    Margarita

  • Dear Margarita Gashova , 

    Thanks for your valuable inputs. I have started a new thread today where I have a few basic queries:

    http://e2e.ti.com/support/dsp/davinci_digital_media_processors/f/717/t/370768.aspx

     

    Could you please provide some inputs on the same?

    Thanks 
    Ashish Kumar Mishra 

     

  • Hello,

    It has been answered there.

    Best Regards,

    Margarita