DM8148: SGX530 Live video

Hello,


I have a DM8148 media processor with EZSDK 5.05.02.00.

My question is: is it possible to capture video frames from a camera with the ARM processor, do some processing on that video with the SGX530 GPU, and then display the processed video on a screen?

How can we make the link between the CPU and the GPU?


Thanks in advance.


Dylan

  • Please refer to http://processors.wiki.ti.com/index.php/GPU_Compositing

    The steps are applicable for DM814x as well.

  • Hi Prashant,


    Thank you for your reply. I've already looked at this document (not in detail, but I've read the main points).

    This solution is only adapted to the gstreamer module and not to OpenMax components, isn't it? The goal here is to keep the work already done with the OpenMax APIs and be able to use it with the OpenGL APIs.

    My main aim is to do some rotation processing on the captured video with high precision, such as one-degree rotations. To do this I had the idea of using the GPU for the rotation processing. Does that seem like a good way to do it?

    Thanks

    Dylan

  • Dylan,

    The main item to pick up from the document would be the gpu-composition module. This would allow one to associate textures with a given surface, and perform rotation, resize etc.

    The solution is split into two parts -

    1) The composition daemon, which receives YUV frames over a named pipe, and performs OpenGL operations on them. You could consider this to be a renderer of sorts.

    2) The video source. The example cited in the document uses gstreamer(gpuvsink). The gpuvsink submits frames over a named pipe, which are then used by the composition daemon.

    In fact, this could be any image/video source - a gstreamer pipeline, an OpenMax application, etc.

    All that is needed for the composition daemon to render correctly are the physical addresses of the buffers, the frame dimensions, and the chroma format. You can look at the implementation of the gpuvsink plugin to find out how this is done.

    You can then use the composition daemon and write your own OpenMax application that submits buffers over a named pipe, along the lines of the sketch below.
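
    As a rough, hypothetical sketch (the pipe path below is a placeholder - the actual path and message layout must be taken from the composition daemon / gpuvsink sources), the video-source side would create and open the named pipe like this:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define VIDEO_PIPE_PATH "/tmp/gpucomp_video0"   /* placeholder name */

    /* Create the FIFO (if the daemon has not already) and open it for writing */
    int open_video_pipe(void)
    {
        if (mkfifo(VIDEO_PIPE_PATH, 0666) < 0 && errno != EEXIST)
            return -1;
        /* open() blocks until the daemon opens the read end */
        return open(VIDEO_PIPE_PATH, O_WRONLY);
    }

    Each frame descriptor is then written into this pipe by the application.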

    Hope this helps.

    -Prashant.

  • Prashant,


    Thank you very much for this information. It is really good news to know that this is manageable.

    If I understood correctly, the gpu-composition module takes in video frame buffers (such as YUV frames) and "converts" these frames into textures.

    To allow the daemon to access these frames, we have to tell it where to look. That is why we need the physical addresses of the frame buffers. This operation can be done directly from an OpenMax application.

    There is just one thing I didn't understand: what is the purpose of the named pipe? How can I create a named pipe for this application?

    Thank you again for your valuable help.

    Dylan

  • Prashant,


    One more thing: do you have any example of this kind of implementation for an OpenMax application?

    Regards,

    Dylan

  • You can use the following transform to obtain the physical address -

    guint32 srptr;
    gushort id;
    unsigned long phyaddr;

    /* Find which SysLink shared region the OMX buffer's data pointer belongs to */
    id = SharedRegion_getId(omxbuffer->pBuffer);
    /* Convert the virtual address into a shared-region pointer */
    srptr = SharedRegion_getSRPtr(omxbuffer->pBuffer, id);
    /* SR2_BASE is the physical base address of shared region 2 on the platform */
    phyaddr = SR2_BASE + (srptr & 0x3fffffff);

    We don't have a sample OMX application that does this. We only have a gstreamer implementation (which in turn uses the openmax gst-omx plugin) that does this.

    What is the exact use case of your product?

    -Prashant.

  • Prashant,

    Thank you for your reply and your help.

    I looked at the gpuvsink plugin, but it is completely tied to the gstreamer framework and I just can't figure out how to adapt this plugin for an OMX application.

    My goal is to keep the simple OpenMax implementation with video capture, pass the physical addresses of the frame buffers over a named pipe, and then be able to manage the received frames as OpenGL textures. But is it possible to combine an OpenMax application with an OpenGL one?

    For the project I'm working on, I would like to capture HD video from a camera and compensate for the camera's movements using an inertial sensor (with a gyroscope, an accelerometer and GPS) which is fixed to the camera.

    For this, I have to take the sensor parameters into account: if the camera does a 90-degree rotation, we will detect it from the collected sensor data and compensate by applying a rotation filter which rotates the image by -90 degrees.

    Because the rotation property is not yet implemented in the OMX release, I thought using an OpenGL texture would be a nice way to do this whole processing.

    Thank you very much

    Dylan

  • The gpuvsink basically populates the structure below and passes it to the composition daemon over a named pipe.

    typedef struct
    {
       int config_data;   /* 1 - config   0 - data   2 - close the video plane (to close the named pipe corresponding to video plane) */
       int buf_index;     /* if data, buffer index */
       int enable;        /* 1 - enable the video plane; 0 - disable */
       int overlayongfx;  /* 0 - gfx on video; 1 - video on gfx */
    
       /* Video plane config structure */
       struct in {
           float rotate;  /* rotate angle in decimal degrees [-180.0 to 180.0]*/
           int count;     /* Number of video buffers */
           int width;     /* video frame width in pixels */
           int height;    /* video frame height in pixels */
           unsigned int fourcc;    /* pixel format */
           unsigned long phyaddr[MAX_VIDEO_BUFFERS_PER_CHANNEL]; /* Physical addresses of video buffers */
       } in;
       /* output video window position and resolution in normalized device co-ordinates */
       struct out {
           float  xpos;   /* x position [-1.0 to 1.0] */
           float  ypos;   /* y position [-1.0 to 1.0] */
           float  width;  /*  width  - [0.0 to 2.0], 2.0 correspond to fullscreen width */
           float  height; /*  height - [0.0 to 2.0], 2.0 correspond to fullscreen height */
       } out;
    } videoConfig_s;

    A named pipe is an IPC (Inter-Process Communication) mechanism in Linux.

    If the value of the field "config_data" in the above structure is '1', the composition daemon uses the values in the embedded structure "struct in" to configure the texture, using the physical addresses of the passed buffers, the number of buffers, the width, height, pixel format, and the rotation angle.

    If the value of the field "config_data" is '0', it renders the texture using the buffer whose index is specified by "buf_index". Note that when configuring the texture, we register the physical addresses of all the buffers with the driver, and subsequently all references to buffers are made using "buf_index".
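
    As a minimal sketch of this protocol (assuming the pipe has already been opened for writing as 'fd'; the variable names for buffer addresses and frame parameters are placeholders for values coming from your capture setup), the sender side could look like this:

    #include <string.h>   /* memset */
    #include <unistd.h>   /* write  */

    /* Hypothetical helper: register 'count' capture buffers with the daemon,
       then request rendering of buffer 'index'. Field meanings follow the
       structure above. */
    static void submit_to_daemon(int fd, unsigned long *phyaddr, int count,
                                 int width, int height, unsigned int fourcc,
                                 float rotate, int index)
    {
        videoConfig_s cfg;
        int i;

        memset(&cfg, 0, sizeof(cfg));

        /* Configuration message: register buffers, rotation and output window */
        cfg.config_data  = 1;                 /* 1 = config */
        cfg.enable       = 1;                 /* enable this video plane */
        cfg.overlayongfx = 0;                 /* 0 = gfx drawn on top of video */
        cfg.in.rotate    = rotate;            /* e.g. -90.0f to undo a +90 turn */
        cfg.in.count     = count;
        cfg.in.width     = width;
        cfg.in.height    = height;
        cfg.in.fourcc    = fourcc;
        for (i = 0; i < count; i++)
            cfg.in.phyaddr[i] = phyaddr[i];
        cfg.out.xpos  = -1.0f; cfg.out.ypos   = -1.0f;   /* fullscreen window */
        cfg.out.width =  2.0f; cfg.out.height =  2.0f;
        write(fd, &cfg, sizeof(cfg));

        /* Data message: render the buffer whose index was just filled */
        cfg.config_data = 0;                  /* 0 = data */
        cfg.buf_index   = index;
        write(fd, &cfg, sizeof(cfg));
    }

    In practice the configuration message would be sent once at start-up and a data message written for every captured frame; whether the daemon expects the raw structure written as-is over the pipe should be confirmed against the gpuvsink sources.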

    For your application, I can recommend two approaches -

    1) Populate and pass the above structure appropriately over named pipes to the composition daemon.

    Or better yet-

    2) Integrate the relevant code from the composition daemon into your own application. I would recommend this approach, since the latency introduced by passing buffers to another process (as done in the first approach) may not be tolerable in your use case.

    -Prashant.

  • This is great news!

    So now, I need to add this structure as well as the gpu_composition daemon code (main.c, gpu_composition.h) to my OMX application, and then configure my application to fill these structure fields - am I right?

    I have some more questions:

    1) When do I fill the structure fields in the OMX application? Should it be when FillBufferDone is called after capturing a frame with the VFCC component (which implies the buffer is filled with the frame), or elsewhere?

    2) Since I will use the second approach, how and when can I call the composition daemon code now that it is integrated into my application?

    Thank you so much for your help!


    Dylan

  • Dylan,

    What I meant was, you can use the OpenGL APIs that are used in the composition daemon in the OMX application.

    You would have to implement a suitable mechanism to hand over buffers once they become available from the FillBufferDone callback - something along the lines of the sketch below.
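
    For example (a minimal sketch; 'find_buffer_index' and 'notify_renderer' are hypothetical helpers standing in for whatever queueing mechanism your application uses), the callback would only record which buffer is ready and let another thread do the rendering or the pipe write, so that the OMX callback returns quickly:

    #include <OMX_Core.h>
    #include <OMX_Component.h>

    static OMX_ERRORTYPE app_fill_buffer_done(OMX_HANDLETYPE hComponent,
                                              OMX_PTR pAppData,
                                              OMX_BUFFERHEADERTYPE *pBuffer)
    {
        /* Map the buffer header to the index registered with the compositor */
        int index = find_buffer_index(pBuffer);

        /* Hand the index to the rendering thread (e.g. via a local pipe/queue) */
        notify_renderer(pAppData, index);

        return OMX_ErrorNone;
    }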

    -Prashant.

  • Prashant,


    I will try to implement this in my application. First of all, I would like to simply use the named pipe and pass the structure over it, just to test the application without worrying about the latency issue (the first approach).

    My current application uses the capture and display components. With the OpenGL driver, I won't need the display component any more, right?

    So I should use only the capture component. When the FillBufferDone callback is called, the buffer contains data (captured frames), so I should fill the composition structure at that point (outside the FBD callback, of course) and then pass the structure over the named pipe. Is this a correct approach?

    The suitable mechanism you are talking about is precisely the point where I'm stuck, because I'm not exactly sure when to fill and pass the composition structure over the named pipe.


    Thank you.

    Dylan

  • Prashant,

    I'm updating this thread to ask whether I need to keep the display component (VFDC) in my application. Will it work if I just have the capture component, which fills the buffers with captured frames, and pass the composition structure over the named pipe to the GPU?


    Thank you.

    Regards,

    Dylan

  • I've successfully got just the capture component working: I wrote the FTB command to the local pipe when the FBD pipe function was called, instead of ETB. Now I'll try using the gpu composition daemon.

    Thank you for your help

    Dylan

  • Regarding VFDC: the SGX doesn't have the capability to display anything directly. Generally, fbdev drivers are used to display the SGX-composited output.

    The fbdev drivers in turn use the HDVPSS graphics pipeline to display the buffer.

    So you don't need the VFDC component, but you do require fbdev, which is present by default.
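
    If you want to confirm that the fbdev node is available on the target, a minimal check could be (a sketch; /dev/fb0 is an assumption, the node used by the graphics plane may differ):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct fb_var_screeninfo vinfo;
        int fd = open("/dev/fb0", O_RDWR);

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
            perror("fbdev not available");
            return 1;
        }
        /* Print the resolution and depth the graphics plane is configured for */
        printf("fbdev: %ux%u, %u bpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);
        close(fd);
        return 0;
    }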