Using Graphics Pipeline for video on DM8148

Hello Friends

We are planning to display two videos on a single display on the DM8148. Our use case requires us to show or hide dynamically changing regions of two video planes.

Can we implement this with relative ease if we convert one video stream to ARGB and send it through the graphics pipeline, with the second video stream used as the background? That way, for the pixels where we want the background video to show through, we can set the alpha value to 0 in the ARGB foreground graphics.

Would it be a good idea to write to the fbdev display buffers at, say, 30 fps, since we will need to refresh the foreground at almost that rate?
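
To make the idea concrete, here is a minimal sketch of the alpha-keying approach, assuming the graphics plane is exposed as /dev/fb0 and configured for 32-bit ARGB (the device node, pixel layout, and window coordinates are illustrative assumptions):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);   /* graphics plane (assumed node) */
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        uint32_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

        /* Punch a "hole" in the foreground: alpha = 0x00 lets the background
         * video plane show through, alpha = 0xFF keeps the pixel opaque. */
        for (uint32_t y = 0; y < var.yres; y++) {
            uint32_t *row = fb + y * (fix.line_length / 4);   /* 32 bpp assumed */
            for (uint32_t x = 0; x < var.xres; x++) {
                int hole = (x > 100 && x < 400 && y > 100 && y < 300);
                row[x] = hole ? 0x00000000    /* fully transparent */
                              : 0xFF202020;   /* opaque dark gray  */
            }
        }

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }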

Please suggest a better way to implement this if there is one.

Thanks a lot

Ravikant

  • Hi,

    Blending can be done between the graphics and video pipelines, or between two graphics pipelines, so you are on the right track. The only thing I would suggest is that instead of updating the alpha values every time, you can set the alpha value once and then enable/disable alpha blending as required. This will save bandwidth and let you reach 30 fps.
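
    For reference, a sketch of what such a toggle might look like with the TI81xx fbdev extensions (TIFB_GET_PARAMS/TIFB_SET_PARAMS and struct ti81xxfb_region_params from linux/ti81xxfb.h in the EZSDK kernel); the field and constant names below are written from memory and should be verified against the header shipped with your SDK:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/ti81xxfb.h>   /* TI81xx fbdev extensions (EZSDK kernel) */

    /* Toggle pixel-alpha blending on the graphics region without touching
     * the per-pixel alpha values already written into the frame buffer. */
    static int set_blending(int fbfd, int enable)
    {
        struct ti81xxfb_region_params regp;

        if (ioctl(fbfd, TIFB_GET_PARAMS, &regp) < 0)
            return -1;

        /* Field and constant names as recalled from linux/ti81xxfb.h;
         * check them against your header before use. */
        regp.blendtype = enable ? TI81XXFB_BLENDING_PIXEL
                                : TI81XXFB_BLENDING_NO;

        return ioctl(fbfd, TIFB_SET_PARAMS, &regp);
    }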

    Regards,

    Hardik Shah

  • Hi Hardik

    Thanks for your quick reply.

    Probing further along this track: what are the possible roadblocks of using the graphics pipeline to display video instead of static data like a GUI? I am asking because all the sample applications using the fbdev frame buffer first fill the mmaped buffers with data and then only play with the blending mode inside the while loop.

    What if I try to write to the frame buffer inside the while loop? Do I need to take care of anything else to implement this?

    Thanks

    Ravikant

  • Hi,

    Graphics pipelines are sometimes used for displaying video; that is not a problem. The only thing is that graphics pipelines support only RGB data, so if you want YUV data in the future you will have to use the video pipelines.

    Ravikant said:
    What if I try to write to the frame buffer inside the while loop? Do I need to take care of anything else to implement this?

    I do not clearly understand this question.

    Regards,

    Hardik Shah

  • Hi Hardik

    I will try to explain my concern using the graphics thread of the decode_MosiacDisplay OMX example (FB_Blending.c) as a reference.

    In this example, the following steps are performed (a condensed sketch in code follows the list):

    1) Open the FB device.

    2) Configure the screeninfo.

    3) Mmap the driver buffers into application space so that the application can write to them.

    4) Fill the mmaped buffers with a color-bar pattern.

    5) Enter the while(1) loop and enable/disable blending for demo purposes.
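
    For reference, a condensed sketch of steps 1-5 using only the standard fbdev API (error handling and the board-specific blend-toggle ioctl are omitted):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);        /* 1) open the FB device */

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);     /* 2) screen info        */
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        uint8_t *fb = mmap(NULL, fix.smem_len,    /* 3) mmap the buffers   */
                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        for (uint32_t y = 0; y < var.yres; y++) { /* 4) fill a pattern     */
            uint32_t *row = (uint32_t *)(fb + y * fix.line_length);
            for (uint32_t x = 0; x < var.xres; x++)
                row[x] = 0xFF000000 | ((x * 255 / var.xres) << 16);
        }

        for (;;) {                                /* 5) demo loop          */
            /* enable/disable blending here (board-specific ioctl) */
            sleep(1);
        }
    }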

    Now my concern is as follows:

    1) Is there a mechanism similar to queue/dequeue for controlling the frame buffers?

    2) How can I modify the above demo application if I want to change the contents of the display buffers in step 5 along with modifying the blend parameter?

    Thanks a lot

    Ravi

  • Hi,

    A queue/dequeue model is not available in the framebuffer framework, but you can do panning to change the buffer address. You can look at the saFbdevPanDisplay example to see how this is done; a minimal sketch follows below.
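
    For illustration, a double-buffered pan loop might look like the following sketch, assuming the driver accepts a virtual y-resolution of twice the visible one, as in the saFbdevPanDisplay example:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);

        /* Request two screens of virtual resolution for double buffering. */
        var.yres_virtual = var.yres * 2;
        ioctl(fd, FBIOPUT_VSCREENINFO, &var);
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        int back = 1;   /* index of the buffer that is currently off screen */

        for (;;) {
            /* Draw the next frame into the off-screen buffer. */
            memset(fb + back * var.yres * fix.line_length,
                   back ? 0xFF : 0x00, var.yres * fix.line_length);

            /* Flip: pan the display to the freshly drawn buffer. */
            var.yoffset = back * var.yres;
            ioctl(fd, FBIOPAN_DISPLAY, &var);

            back = 1 - back;
            usleep(33333);   /* ~30 fps */
        }
    }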

    Regards,

    Hardik Shah

  • We are taking YUYV into the SGX530 and rendering it as a texture. We got to 57 Hz without much effort for a 720p60 source on a DM8168, and the SGX530 also does the scaling and CSC to ARGB. You could render only the areas needed for display each frame and use your other video as the background. Except for acquiring the two inputs and buffering, the SGX530 might do pretty much everything if you can split your video1/video2 areas into triangles and render them both. You might even get away with an AM387x.
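
    For reference, the usual trick is to upload each YUYV line as an RGBA texture of half the width and unpack it in the fragment shader; here is a sketch of such a shader (GLSL ES embedded as a C string, BT.601 coefficients; the uniform and varying names are illustrative):

    /* Fragment shader sketch for sampling a YUYV buffer uploaded as a
     * GL_RGBA texture of width/2 texels: each texel carries (Y0, U, Y1, V).
     * texWidth is the full video width in pixels. */
    static const char *yuyv_frag_src =
        "precision mediump float;                                    \n"
        "uniform sampler2D yuyvTex;                                  \n"
        "uniform float texWidth;                                     \n"
        "varying vec2 vTexCoord;                                     \n"
        "void main() {                                               \n"
        "    vec4 p = texture2D(yuyvTex, vTexCoord);                 \n"
        "    /* even pixels take Y0 (.r), odd pixels take Y1 (.b) */ \n"
        "    float x = vTexCoord.x * texWidth;                       \n"
        "    float y = (mod(floor(x), 2.0) < 0.5) ? p.r : p.b;       \n"
        "    float u = p.g - 0.5;                                    \n"
        "    float v = p.a - 0.5;                                    \n"
        "    /* BT.601 YUV -> RGB */                                 \n"
        "    y = 1.164 * (y - 0.0625);                               \n"
        "    gl_FragColor = vec4(y + 1.596 * v,                      \n"
        "                        y - 0.391 * u - 0.813 * v,          \n"
        "                        y + 2.018 * u,                      \n"
        "                        1.0);                               \n"
        "}                                                           \n";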

    Note - I'm not a TI guy, and they have better information than I do.

  • Hello TI experts,

    I have some development issues and need your help.

    I am developing on the DM8148; my package version is DVRRDK_02_00_00_23.
    I run the demo application, use-case 6, to display 4 channels of H.264 decode.
    I can display a logo and draw dots, lines, or rectangles on the SD display through the frame buffer.
    But I need to make the logo or the lines transparent over the video display.
    How and where can I set the alpha value and enable/disable alpha blending on the frame buffer (graphics)?
    Another question: can I implement the OSD function with alpha blending in use-case 6?
    Thanks for your answer.