Linux/TDA2PXEVM: OpenGL & EGL Between Qt and VisionSDK

Part Number: TDA2PXEVM

Tool/software: Linux

Hi:

  As we know, in VisionSDK OpenGL uses GL_TEXTURE_EXTERNAL_OES so that the GPU can process the YUV data directly.

As discussed before, if we want to render a JPG picture, we should convert the JPG to YUV and use the following code to map the YUV data to a texture:

 GLuint ECarx_eglWindowSetupYuvTexSurface(int width, int height, int dmaBufFd, GLuint *outTexIndex)
{
    EGLint attr[32];
    int attrIdx;
    PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR;
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES;

    attrIdx = 0;

    attr[attrIdx++] = EGL_LINUX_DRM_FOURCC_EXT;
    attr[attrIdx++] = FOURCC_STR("NV12");

    attr[attrIdx++] = EGL_WIDTH;
    attr[attrIdx++] = width;
    printf("width  %d  \n",width);

    attr[attrIdx++] = EGL_HEIGHT;
    attr[attrIdx++] = height;
     printf("height  %d \n", height);

    attr[attrIdx++] = EGL_DMA_BUF_PLANE0_PITCH_EXT;
    attr[attrIdx++] = width;
   // printf("pitch %d  \n",pProp->pitch[0]);

    attr[attrIdx++] = EGL_DMA_BUF_PLANE1_PITCH_EXT;
    attr[attrIdx++] = width; /* pProp->pitch[0] */
    //printf("pitch %d  \n",pProp->pitch[0]);

    attr[attrIdx++] = EGL_DMA_BUF_PLANE0_OFFSET_EXT;
    attr[attrIdx++] = 0;

    attr[attrIdx++] = EGL_DMA_BUF_PLANE1_OFFSET_EXT;
    attr[attrIdx++] = width * height; /* assuming the UV plane follows the Y plane in the same contiguous NV12 buffer */

    attr[attrIdx++] = EGL_DMA_BUF_PLANE0_FD_EXT;
    attr[attrIdx++] = dmaBufFd;

    attr[attrIdx++] = EGL_DMA_BUF_PLANE1_FD_EXT;
    attr[attrIdx++] = dmaBufFd;

    attr[attrIdx++] = EGL_NONE;

    eglCreateImageKHR =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    glEGLImageTargetTexture2DOES =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

  printf("EGLWindowDisplay %p \n",EGLWindowDisplay);
   EGLImageKHR textImg = eglCreateImageKHR(
                                EGLWindowDisplay,
                                EGL_NO_CONTEXT,
                                EGL_LINUX_DMA_BUF_EXT,
                                NULL,
                                attr
                              );

    System_eglCheckEglError("eglCreateImageKHR", EGL_TRUE);
    if (textImg == EGL_NO_IMAGE_KHR) {
        Vps_printf(" EGL: ERROR: eglCreateImageKHR failed !!!\n");
        return -1;
    }

    glGenTextures(1, outTexIndex);
    System_eglCheckGlError("glGenTextures");

    glBindTexture(GL_TEXTURE_EXTERNAL_OES, *outTexIndex);
    System_eglCheckGlError("glBindTexture");

    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    System_eglCheckGlError("glTexParameteri");

    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)textImg);
    System_eglCheckGlError("glEGLImageTargetTexture2DOES");

    return 0;
}

For now, we have succeeded in running Qt on VisionSDK, and we face the same questions under Qt:

  1. How can we render a JPG/BMP with OpenGL?
  2. If we do it as above, how can we get the EGL native window and native display?
  3. Should we do the same as in system_gl_egl_utils.c, i.e. call System_eglOpen()?
  4. Can you explain the relationship between the OpenGL native display and the Qt display?

Thanks.

  • Hi,

    OpenGL doesn't support JPG directly. You need to decode the JPG and then add the decoded YUV as a texture.

    Refer to viddec3test with the --kmscube option from the omapdrmtest repo: there, H.264/MPEG-2/MPEG-4 streams are decoded and the decoded YUV is added as a texture on the kmscube with the eglCreateImageKHR API.
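
    For the JPG case specifically, the software decode on the A15 side could look roughly like below (an untested sketch using the TurboJPEG API from libjpeg-turbo, just to illustrate the idea; it is not part of the SDK or omapdrmtest):

    #include <stdlib.h>
    #include <turbojpeg.h>

    /* Sketch: decode a JPG in memory into planar YUV. Note TurboJPEG outputs
       planar YUV (e.g. YUV420p for 4:2:0 JPEGs), so for the NV12 import shown
       above the U and V planes still have to be interleaved into one UV plane. */
    static unsigned char *decodeJpgToYuv(const unsigned char *jpegBuf,
                                         unsigned long jpegSize,
                                         int *width, int *height)
    {
        tjhandle handle = tjInitDecompress();
        int subsamp, colorspace;
        unsigned char *yuvBuf = NULL;

        if (handle == NULL)
            return NULL;

        if (tjDecompressHeader3(handle, jpegBuf, jpegSize,
                                width, height, &subsamp, &colorspace) == 0) {
            unsigned long yuvSize = tjBufSizeYUV2(*width, 1 /* pad */, *height, subsamp);

            yuvBuf = malloc(yuvSize);
            if (yuvBuf != NULL &&
                tjDecompressToYUV2(handle, jpegBuf, jpegSize, yuvBuf,
                                   *width, 1 /* pad */, *height, 0) != 0) {
                free(yuvBuf);
                yuvBuf = NULL;
            }
        }

        tjDestroy(handle);
        return yuvBuf;
    }

    The decoded data would then still need to end up in a buffer that can be shared as a dma-buf fd, so it can be imported with eglCreateImageKHR as in the code above.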

    Thanks

    Ramprasad

  • Hi Ramprasad:

    Thanks, I need to open this up again to discuss more details.

    Ramprasad said:
    the decoded YUV is added as a texture on the kmscube with the eglCreateImageKHR API.

    I have already gone through all the source code from omapdrmtest.

    My question is:

    	disp_kmsc->gl.eglCreateImageKHR(disp_kmsc->gl.display, EGL_NO_CONTEXT,
    					EGL_RAW_VIDEO_TI2, (EGLClientBuffer)fd, attr);

    Regarding disp_kmsc->gl.display here: can we use the display that Qt OpenGL has already created? If so, there is no need to integrate other source code such as the gbm-related parts.

    For Qt 5.6 there is a bug where the native display cannot be obtained:

    https://bugreports.qt.io/browse/QTBUG-43223
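
    As a possible workaround I am thinking of simply asking EGL for the display that Qt itself made current, instead of going through the platform native interface. A minimal sketch, assuming it is called from Qt's render thread while Qt's EGL context is current (not yet verified on the TDA2Px):

    #include <EGL/egl.h>

    /* Returns the EGLDisplay that the calling thread (Qt's render thread)
       currently has bound, or EGL_NO_DISPLAY if no context is current. */
    static EGLDisplay getQtEglDisplay(void)
    {
        if (eglGetCurrentContext() == EGL_NO_CONTEXT) {
            /* Wrong thread, or called before Qt created its GL context */
            return EGL_NO_DISPLAY;
        }
        return eglGetCurrentDisplay();
    }

    If that works, this display could be passed as the first argument to eglCreateImageKHR instead of disp_kmsc->gl.display.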