
Mixing OpenGL and DMAI

Running on a DM3730. I have an application that captures video from an external source, displays it on an LCD, and draws a semi-transparent graphical overlay over the video.

I'm currently using DMAI for the V4L2 capture/display of the video, and simply passing 32-bit RGBA buffers to FBdev to draw the graphical overlay. With that setup, making the graphics blend transparently over the video is simple. I just do something like this:

    struct v4l2_framebuffer framebuffer;
    // Query the current framebuffer (overlay) parameters.
    if(ioctl(Display_getHandle(hDisplay), VIDIOC_G_FBUF, &framebuffer) == -1)
        cout << "Error VIDIOC_G_FBUF" << endl;
    else
    {
        if((framebuffer.capability & V4L2_FBUF_CAP_LOCAL_ALPHA) == 0)
            cout << "Device does not support Alpha Blending." << endl;
        // Disable chroma keying and enable per-pixel (local) alpha blending.
        framebuffer.flags &= ~V4L2_FBUF_FLAG_CHROMAKEY;
        framebuffer.flags |= V4L2_FBUF_FLAG_LOCAL_ALPHA;
        if(ioctl(Display_getHandle(hDisplay), VIDIOC_S_FBUF, &framebuffer) == -1)
            cout << "Error setting Alpha Blending" << endl;
    }

So that all works. Now fast-forward: I'd like to start doing fancier, more dynamic things with the graphical overlay, so instead of drawing the buffers "by hand" I'd like to leverage OpenGL to draw my graphics. That works great as well, except that I cannot figure out how to make the OpenGL overlay blend transparently with the DMAI video. Any idea what needs to be done to enable that?

For reference, I've attached my current EGL init (since I assume that's probably what needs to change)...
5148.eglinit.txt

Thanks,
Glenn Wainwright
Senior Software Engineer, Verathon Inc.


  • You might have to change DMAI internals to support this.

    Alternatively, you can run the entire pipeline through OpenGL (including video capture/display). We now have classes that support both YUV and RGB streaming through SGX. This way, with simple OpenGL code, you can achieve what you need; or possibly use Qt to do it.

    Take a look at the samples below; a rough sketch of the texture-compositing idea follows the links.

    http://tigraphics.blogspot.in/2012/02/8-cpu-ultrasound-viewer-with-v3dfx-base.html

    http://tigraphics.blogspot.in/2012/02/sgx-video-streaming-with-qglwidget.html

    https://github.com/prabindh/v3dfx-base/wiki/V3dfx-base---Getting-Started-Guide
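
    To give a concrete feel for the all-GL approach, here is a minimal compositing sketch. This is not the v3dfx-base API itself; frame_rgb, overlay_rgba, width, height, and drawFullscreenQuad() are placeholder names you would supply from your own capture and rendering code:

    #include <GLES2/gl2.h>

    extern void drawFullscreenQuad(GLuint tex);  /* placeholder: draws a textured quad */

    /* Sketch: composite a captured video frame and an RGBA overlay entirely in GL. */
    void composite(GLuint videoTex, GLuint overlayTex,
                   const void *frame_rgb, const void *overlay_rgba,
                   int width, int height)
    {
        /* Upload the captured frame (shown as RGB here; YUV needs either a
         * conversion shader or the streaming-texture path v3dfx-base uses). */
        glBindTexture(GL_TEXTURE_2D, videoTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, frame_rgb);

        /* Pass 1: draw the video opaquely. */
        glDisable(GL_BLEND);
        drawFullscreenQuad(videoTex);

        /* Pass 2: draw the overlay on top, blended by its per-pixel alpha. */
        glBindTexture(GL_TEXTURE_2D, overlayTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, overlay_rgba);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawFullscreenQuad(overlayTex);
    }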

  • Additionally, I looked through your code, and the EGL code itself is fine. What is happening is that you are drawing the OpenGL output to /dev/fb, while the V4L2 output goes to the video window, which is a separate hardware plane. Irrespective of which EGL config you choose, the GL output will never get blended with the video this way, because all GL operations happen in /dev/fb and not on that external plane.

    You can accomplish what you need by setting global alpha and blending the two planes with a method similar to the one described below (note that this documents an older release); a sketch of the idea follows the link.

    http://processors.wiki.ti.com/index.php/UserGuideDisplayDrivers_PSP_03.00.00.05#Alpha_Blending
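
    As a rough illustration, assuming the driver follows the standard V4L2 output-overlay API (V4L2_FBUF_FLAG_GLOBAL_ALPHA plus the global_alpha field of struct v4l2_window; check the wiki page above for the exact sequence in your PSP release):

    #include <string.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: switch the video device from per-pixel to global alpha.
     * fd is the already-open V4L2 video output device. */
    int enable_global_alpha(int fd, unsigned char alpha)
    {
        struct v4l2_framebuffer fb;
        struct v4l2_format fmt;

        if (ioctl(fd, VIDIOC_G_FBUF, &fb) < 0) {
            perror("VIDIOC_G_FBUF");
            return -1;
        }
        fb.flags &= ~(V4L2_FBUF_FLAG_LOCAL_ALPHA | V4L2_FBUF_FLAG_CHROMAKEY);
        fb.flags |= V4L2_FBUF_FLAG_GLOBAL_ALPHA;
        if (ioctl(fd, VIDIOC_S_FBUF, &fb) < 0) {
            perror("VIDIOC_S_FBUF");
            return -1;
        }

        /* The blend factor itself is carried in the overlay window format. */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0) {
            perror("VIDIOC_G_FMT");
            return -1;
        }
        fmt.fmt.win.global_alpha = alpha;  /* 0 = transparent, 255 = opaque */
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
            perror("VIDIOC_S_FMT");
            return -1;
        }
        return 0;
    }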

  • Thank you very much for your excellent responses.  I'll have to take some time to go through all those links you provided. 

    I have a question about your last post, however. As you pointed out, since the OpenGL operations happen on /dev/fb and the video goes through V4L2, I should be able to use the alpha-blending operations mentioned in the post you linked to. Global alpha isn't quite what I was looking for, but this:

    struct v4l2_framebuffer framebuffer;
    /* Query the current overlay parameters. */
    ret = ioctl (fd, VIDIOC_G_FBUF, &framebuffer);
    if (ret < 0) {
        perror ("VIDIOC_G_FBUF");
        close(fd);
        return 0;
    }
    /* Enable per-pixel alpha; disable both chroma-key modes. */
    framebuffer.flags |= V4L2_FBUF_FLAG_LOCAL_ALPHA;
    framebuffer.flags &= ~(V4L2_FBUF_FLAG_CHROMAKEY | V4L2_FBUF_FLAG_SRC_CHROMAKEY);
    ret = ioctl (fd, VIDIOC_S_FBUF, &framebuffer);
    if (ret < 0) {
        perror ("VIDIOC_S_FBUF");
        close(fd);
        return 0;
    }

    ... should allow blending based on the per-pixel alpha values in /dev/fb, yes? I know this works when I set up /dev/fb via ioctls myself, since that's how I've been operating so far. Yet for some reason it doesn't seem to function properly when I'm rendering through the SGX.

    In any case, you've given me a lot to think about. Streaming the video through SGX is an interesting option.
    I suppose another option would be to render OpenGL into an offscreen framebuffer instead and set up /dev/fb the old way, using the GL framebuffer as a staging area that gets copied into the fb buffer once drawing is complete, something like the sketch below.
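
    (A rough sketch of what I mean; overlayFbo, drawOverlay(), staging, and fb_mem, the mmap'd /dev/fb region, are placeholders, and the pixel formats and strides are assumed to match:)

    /* Render the overlay offscreen, read it back, and copy it into /dev/fb. */
    glBindFramebuffer(GL_FRAMEBUFFER, overlayFbo);   /* offscreen FBO */
    drawOverlay();                                   /* my GL drawing code */
    glReadPixels(0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, staging);
    memcpy(fb_mem, staging, width * height * 4);     /* CPU copy to /dev/fb */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);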

    Thanks again,
    Glenn Wainwright
  • The SGX ultimately outputs in pre-multiplied format, i.e. the alpha component is folded into the RGB components and the alpha channel itself is ignored. For example, 50%-transparent red (255, 0, 0, 128) leaves the SGX with its color already scaled toward (128, 0, 0), so there is no usable per-pixel alpha left for the display hardware to blend with. Hence the suggestion to use global alpha. Also, do not use CPU-based copy operations after the output; they will increase CPU loading significantly.


  • Okay, that definitely explains it.  Thank you.