RTOS/TDA2EVM5777: Split screen in LINUX TDA2xx

Part Number: TDA2EVM5777


Tool/software: TI-RTOS

Hi All,

I am working with the Linux OS on a TDA2xx board, and I want to split the display into two parts (with different resolutions). In one part I have to display the stitched image created by the sgxfrmcpy link, and in the second part one of the input camera views. I found the following ways to solve this:

1] Using sgxfrmcpy and 1x2 rendering

I have seen the lvds_vip_multi_cam_view_sgx_display usecase, in which 4 camera views are shown. Using the same concept, is it possible to solve the problem mentioned above? I mean, is it possible to create a 1x2 display (with different resolutions, or each taking half the screen)? If yes, what are the changes needed for this?

2] Using DssM2mwb

Can I use the DssM2mwb link? And does it support one part being the result from the sgxfrmcpy link and the other being the view from any one camera (out of the 4 cameras)?

Please provide input and suggest the optimal solution from the above. And if there is any other way, please share it.


Thank you.

Regards,

Salman

  • Hi,

    The 2nd option is not possible, as that link is used to dump (write back) the display data, not to split it.
    For the 1st option, SGX does not support a 1x2 render type.

    Please refer to the usecase file below and check all the display-related params; in this usecase we also display 2 different kinds of data.
    vision_sdk\apps\src\hlos\adas\src\usecases\csi2Cal_multi_cam_3d_srv_cbb\chains_csi2CalMultiCam_Sgx3Dsrv_carBlackBox.c

    Regards,
    Anuj
  • Hi Anuj,

    Thank you for the prompt reply.

    I am new to openGL-ES. Pardon me if any question is not appropriately phrased.

    I have added a 1x2 render type in sgxFrmCpy, and I am able to get two camera views in those 2 sections. I would like to know a few things:

    1] I want to give joystick control to one of the views, while the second view should be independent of the joystick movement. Is this possible? As far as I know, OpenGL ES has one frustum in which everything is rendered, so I think the second view will also move with the joystick. Please let me know whether this is possible and how to do it.

    2] To use different program objects with their own shaders we have to use glUseProgram, but switching between them will cause an FPS drop, and only the objects belonging to the active program object can be rendered, so everything cannot be rendered at the same time. Is there any way to overcome this?

    Thank you.

    Regards,

    Salman

  • Salman,

    1. You can use glViewport to render to only a specific portion of the screen, and then call glViewport again (for the rest of the screen) and draw again. When all the draw calls are complete, call eglSwapBuffers.

    2. This answers your second question as well. If you switch between multiple programs before calling eglSwapBuffers, it will not cause any appreciable drop in frame rate, as all the operations happen in one shot.
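
    A minimal sketch of this two-viewport flow (illustrative only: WIDTH, HEIGHT, the program objects and the draw helpers are placeholder names, not Vision SDK symbols):

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    #define WIDTH  1920                      /* placeholder display size */
    #define HEIGHT 1080

    extern void drawStitchedView(void);      /* placeholder draw helpers */
    extern void drawSingleCameraView(void);

    static void renderSplitFrame(EGLDisplay dpy, EGLSurface surf,
                                 GLuint progStitched, GLuint progSingleCam)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Left half: stitched view, with its own program and matrices. */
        glViewport(0, 0, WIDTH / 2, HEIGHT);
        glUseProgram(progStitched);
        drawStitchedView();                  /* issues its own glDrawArrays calls */

        /* Right half: single camera view, unaffected by the left half. */
        glViewport(WIDTH / 2, 0, WIDTH / 2, HEIGHT);
        glUseProgram(progSingleCam);
        drawSingleCameraView();

        /* One swap presents both halves together. */
        eglSwapBuffers(dpy, surf);
    }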

    - Subhajit
  • Salman,

    Did this resolve your issue?

    I am closing this thread.

    - Subhajit

  • Hi Subhajit,

    Thank you, and pardon me for the late reply.
    I have already implemented the 2 parts in one window using glScissor in PC-based code.
    The flow of the code is as follows:
    (WIDTH and HEIGHT are #define values for the width and height of the display window)
    1. Set the viewport
        glViewport ( 0, 0, WIDTH, HEIGHT );
    2. esPerspective 
    3. glScissor (0 , 0 , WIDTH/2 , HEIGHT );
    4. First window part object
    esMatrixLoadIdentity(&modelview);
    translate
    rotate


    5. Find the model-view-projection matrix, then
       glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
       glDrawArrays for first window part rendering


    6. Second window part object
      esMatrixLoadIdentity(&modelview);
      translate
      rotate


    7. Find the model-view-projection matrix, then
       glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
       glDrawArrays for second window part rendering


    8. eglSwapBuffers

    I would like to know whether the above process can be followed in the same way with eglObj, or whether I have to use eglWindowObj.

    Thank you.

    Regards,

    Salman

    You can do this in both eglObj and eglWindowObj. I will mark this thread as resolved now.

  • Hi Subhajit,

    Can you please help me out? How can I use eglSwapBuffers with eglObj, since it doesn't contain an EGLSurface member?

     

    Thank you.

     

    Regards,

    Salman

  • Hi all,

    I have shifted to EglWindowObj (I mean the window surface). I have stored my video frames in pVideoFrame->bufAddr[0] (in the previous link, i.e. before sgxFrmCpy) and I have used this as a texture:

    texYuv[0] = System_eglWindowGetTexYuv(&pObj->eglWindowObj, &texProp, pVideoFrame->dmaFd[0]);

    As "dmaFd is internally mapped to video data in case of linux"  taken from https://e2e.ti.com/support/processors/f/791/p/759364/2805346?tisearch=e2e-sitesearch&keymatch=dmafd%20mapping#2805346

    But I am unable to get the texture on my 3D object. The object is rendered, but without the texture.

    1] Could you please explain how dmaFd is mapped internally?

    2] How is the texture image data loaded? Is the glTexImage2D function used for loading the image data, or is some other implementation used (perhaps an EGLImage)?

    Could you please help me out with this? Thank you.

    Regards,

    Salman

  • Salman, 

    glTexImage2D cannot be used for loading EGLImage textures. Please refer to the Vision SDK file system_gl_egl_utils.c to see how EGLImages are used for texturing.
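
    In outline, the pattern used there is: wrap the buffer in an EGLImage with eglCreateImageKHR, attach it to a GL_TEXTURE_EXTERNAL_OES texture with glEGLImageTargetTexture2DOES, and sample it in the fragment shader through samplerExternalOES. A condensed sketch (not the exact SDK code; error handling is omitted and "nativeBuf" is a placeholder for the platform buffer handle):

    /* Needs EGL/egl.h, EGL/eglext.h, GLES2/gl2.h and GLES2/gl2ext.h. */
    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTex2D =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR img = createImage(dpy, EGL_NO_CONTEXT, EGL_NATIVE_PIXMAP_KHR,
                                  (EGLClientBuffer)nativeBuf, NULL);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    imageTargetTex2D(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)img);

    /* Fragment shader side: the external image must be sampled with
     * samplerExternalOES, not sampler2D. */
    static const char fragSrc[] =
        "#extension GL_OES_EGL_image_external : require\n"
        "precision mediump float;\n"
        "uniform samplerExternalOES uTex;\n"
        "varying vec2 vTexCoord;\n"
        "void main() { gl_FragColor = texture2D(uTex, vTexCoord); }\n";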

    - Subhajit 

  • Hi  Subhajit,

    Thank you. I have gone through system_gl_egl_utils.c. I am able to get a texture from System_eglWindowGetTexYuv for each of the pEglWindowObj->numBuf buffers. But while rendering, it is just blank (meaning it doesn't get textured onto the object).

    Thank you.

    Regards,

    Salman

  • Hi,

    I am able to get a valid texYuv from System_eglWindowGetTexYuv (I have printed those values and they seem to be right). I think I am missing something; please can you help me out?

    My video frame data stored in pVideoFrame->bufAddr[0] is in ARGB32_8888 format. What changes need to be done to use eglCreateImageKHR? Where can I find the details of this API for the Vision SDK?

    I have seen the https://e2e.ti.com/support/processors/f/791/p/763784/2864110?tisearch=e2e-sitesearch&keymatch=samplerExternalOES%27#pi320966=2 thread, in which it is mentioned that RGB texturing can be done (with an EGLImage) in 2 ways:

    1. You can use the standard render to texture option with glTexImage2D. 
    2. Render to a pixmap surface and create an EGLImage out of it with the target set to EGL_NATIVE_PIXMAP_KHR.

    For the 2nd way I am referring to https://e2e.ti.com/support/legacy_forums/embedded/linux/f/354/p/563420/2066253#2066253.

    With "Window surface" can I create EGLImage with target EGL_NATIVE_PIXMAP_KHR (As KHR_image_pixmap extension says :

    The EGL implementation must define an EGLNativePixmapType (although it
        is not required either to export any EGLConfigs supporting rendering to
        native pixmaps, or to support eglCreatePixmapSurface). )

    Thank you.

    Regards,

    Salman

  • Hi All,

    Gentle reminder. Please update as soon as possible.

    Thank you.

    Regards,

    Salman

  • Hi Salman,
    You can refer to the create_texture() function in this file.

    It is source code demonstrating how to add NV12/YUYV/ARGB raw data as a texture on the kmscube using the eglCreateImageKHR API.

    This can be tested with the viddec3test and filevpedisplay applications from the omapdrmtest repository.
    In the case of viddec3test, decoded NV12 data is added as a texture, and in filevpedisplay, VPE-processed YUYV or ARGB data is added as a texture.
    The EGL_NATIVE_PIXMAP_KHR target is used for the ARGB texture.
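
    A condensed sketch of that ARGB path (paraphrased, not the exact demo code; it assumes the GBM device and EGL display are already set up, and whether a gbm_bo is accepted as a native pixmap depends on the EGL implementation):

    /* Needs gbm.h plus the EGL headers. Allocate an ARGB8888 GBM buffer and
     * wrap it as an EGLImage through the native-pixmap target; error handling
     * omitted for brevity. */
    struct gbm_bo *bo = gbm_bo_create(gbmDev, width, height,
                                      GBM_FORMAT_ARGB8888, GBM_BO_USE_RENDERING);

    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");

    EGLImageKHR img = createImage(eglDpy, EGL_NO_CONTEXT, EGL_NATIVE_PIXMAP_KHR,
                                  (EGLClientBuffer)bo, NULL);

    /* 'img' is then attached to a GL_TEXTURE_EXTERNAL_OES texture with
     * glEGLImageTargetTexture2DOES, as in the earlier sketch in this thread. */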

    Thanks
    Ramprasad

  • Hi Ramprasad,

    Thank you for the explanation.

    When I follow it, I am getting an error as follows:

    _____

    ...

       pEglWindowObj->surface = eglCreateWindowSurface(pEglWindowObj->display, pEglWindowObj->config, pEglWindowObj->windowNative, attribList);

    ....

    _____

    void * BO;//global

    _________

    ....

    struct omap_bo* lpOmapBo = omap_bo_new( odev, width * height * 4, 3 );

    stride = width * 4;

    exp.vaddr = (unsigned long) lpOmapBo;//buffer;
    exp.size = (width * 4 * height);

    exp.fd = omap_bo_dmabuf( lpOmapBo );

    struct gbm_import_fd_data ifdd = {
        .width  = width,
        .height = height,
        .stride = stride,
        .format = GBM_FORMAT_ARGB8888, //GBM_FORMAT_XRGB8888,
        .fd     = exp.fd //omap_bo_dmabuf( lpOmapBo )
    };
    struct gbm_bo *bo = gbm_bo_import(dev,
                                      GBM_BO_IMPORT_FD,
                                      &ifdd,
                                      GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

    BO = (void *)bo;

    eglCBuf.pixmapNative = (void *)lpOmapBo;

    ....

    _____

    EGLint attrib_list1 = EGL_NONE;

    pObj->texImg[texIndex] = eglCreateImageKHR(
        pObj->display,
        EGL_NO_CONTEXT,
        EGL_NATIVE_PIXMAP_KHR,
        (EGLNativePixmapType)BO,
        attrib_list1
    );

    I am getting a segmentation fault at eglCreateImageKHR.

    I have a doubt:

    In the function gbm_allocator_get_native_buffer(), does ioctl(dbuf_fd, DBUFIOC_EXPORT_VIRTMEM, &exp); import the dma_buf as a gbm bo, or do I have to use omap_bo_dmabuf() explicitly?

    Could you please help me with this? How can I import a dma_buf as a gbm bo: using ioctl(dbuf_fd, DBUFIOC_EXPORT_VIRTMEM, &exp);, using omap_bo_dmabuf(), or in some other way?

    Thank you.

    Regards,

    Salman

  • Hi Salman,

    Can you refer to this thread once?

    https://e2e.ti.com/support/processors/f/791/t/815175#pi320966=2

    Here an NV12 texture is being added with eglCreateImageKHR(), and it also uses the virtual memory exporter to get the texture buffer.
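
    For reference, the exporter usage there boils down to something like the sketch below (the device node, ioctl and field names are taken from the code quoted elsewhere in this thread; the exact struct layout and header come from the BSP, so treat this as an assumption to verify):

    /* Export an existing virtual-memory frame buffer as a dma-buf fd via
     * /dev/vmemexp, then hand exp.fd to the EGLImage/texture creation path.
     * Needs fcntl.h, sys/ioctl.h and the BSP header defining the ioctl. */
    struct dmabuf_vmem_export exp;
    int dbuf_fd = open("/dev/vmemexp", O_RDWR | O_CLOEXEC);

    exp.vaddr = (unsigned long)buffer;   /* userspace address of the frame data */
    exp.size  = width * 4 * height;      /* ARGB8888: 4 bytes per pixel */

    if (ioctl(dbuf_fd, DBUFIOC_EXPORT_VIRTMEM, &exp) < 0)
        Vps_printf("Error exporting buffer as dma-buf\n");
    /* exp.fd now holds the exported dma-buf file descriptor. */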

    Thanks

    Ram

  • Hi Ramprasad,

    I have gone through the given link and implemented it in a similar manner. I am exporting the buffer as a dmaFd and I am getting a valid dmaFd. But I am still getting a segmentation fault at eglCreateImageKHR.

     


    My code is as follows:

    void * BO=NULL; //global

    EGLNativeDisplayType gbm_allocator_get_native_display ()
    {
        if (fd == -1) {
            fd = drmOpen("omapdrm", NULL);
        }
        if (fd > 0 && dev == NULL) {
            dev = gbm_create_device(fd);
            odev = omap_device_new(fd);
        }

        return (EGLNativeDisplayType)dev;
    }

    EGLCompatBuffer gbm_allocator_get_native_buffer (uint32_t width, uint32_t height)
    {
        EGLCompatBuffer eglCBuf;
        Void *buffer;
        UInt32 stride = 0;
        Int32 dbuf_fd = -1;
        struct dmabuf_vmem_export exp;

        dbuf_fd = open("/dev/vmemexp", O_RDWR | O_CLOEXEC);
        if (dbuf_fd < 0) {
            Vps_printf("Error opening virt mem dmabuf exporter");
        }

        stride = width * 4;

        struct gbm_bo *bo = gbm_bo_create(dev, width, height, GBM_FORMAT_ARGB8888, GBM_BO_USE_RENDERING);
        Int32 dmafd1 = gbm_bo_get_fd(bo); //Export as DMAbuf handle
        struct omap_bo *omap_bo1 = omap_bo_from_dmabuf(odev, dmafd1);
        void *omap_bo_userspace = omap_bo_map(omap_bo1);

        struct gbm_import_fd_data ifdd = {
            .width  = width,
            .height = height,
            .stride = stride,
            .format = GBM_FORMAT_ARGB8888,
            .fd     = dmafd1 //exp.fd
        };

        struct gbm_bo *bo1 = gbm_bo_import(dev,
                                           GBM_BO_IMPORT_FD,
                                           &ifdd,
                                           GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        Vps_printf("fd=%d dmafd1=%d bo=%p BO=%p omap_bo_=%p omap_bo_userspc=%p\n",fd,dmafd1,bo,BO,omap_bo_,omap_bo_userspace);

        BO = (void *)bo1;

        eglCBuf.width = gbm_bo_get_width(bo1);
        eglCBuf.height = gbm_bo_get_height(bo1);
        eglCBuf.stride = gbm_bo_get_stride(bo1);
        eglCBuf.eglPixmap = (EGLNativePixmapType) bo1;
        eglCBuf.pixmapNative = (void *)omap_bo_userspace; //buffer

        Vps_printf("after fd=%d dmafd1=%d bo=%p bo1=%p BO=%p omap_bo_=%p omap_bo_userspace=%p eglPixmap=%p\n",fd,dmafd1,bo,bo1,BO,omap_bo_,omap_bo_userspace,eglCBuf.eglPixmap);
        close(dmafd1); //exp.fd;
        close(dbuf_fd);

        return eglCBuf;
    }

    ....

    static GLuint System_eglWindowSetupYuvTexSurface(System_EglWindowObj *pObj, System_EglTexProperty *pProp, int dmaBufFd, int texIndex)
    {
        EGLint attrib_list1 = EGL_NONE;

        PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR;
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES;

        eglCreateImageKHR =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        glEGLImageTargetTexture2DOES =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

        pObj->texImg[texIndex] = eglCreateImageKHR(
            pObj->display,
            EGL_NO_CONTEXT,
            EGL_NATIVE_PIXMAP_KHR,
            BO,
            attrib_list1
        );

        if (pObj->texImg[texIndex] == EGL_NO_IMAGE_KHR) {
            Vps_printf(" EGL: ERROR: eglCreateImageKHR failed !!!\n");
            return -1;
        }

        glGenTextures(1, &pObj->texYuv[texIndex]);
        System_eglCheckGlError("glGenTextures");

        glBindTexture(GL_TEXTURE_EXTERNAL_OES, pObj->texYuv[texIndex]);
        System_eglCheckGlError("glBindTexture");

        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        System_eglCheckGlError("glTexParameteri");

        glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)pObj->texImg[texIndex]);
        System_eglCheckGlError("glEGLImageTargetTexture2DOES");

        pObj->dmaBufFd[texIndex] = dmaBufFd;
        return 0;
    }

    Is anything wrong with this?

    I am getting prints as follows:

    [HOST] [HOST ] 47.443298 s: fd=15 dmafd1=19 bo=0x8bf0e2e0 BO=0x8bf0e2e0 omap_bo_=0x8bf0e328 omap_bo_userspace=0x8b48b000
    [HOST] [HOST ] 47.443329 s: after fd=15 dmafd1=19 bo=0x8bf0e2e0 BO=0x8bf0e2e0 omap_bo_=0x8bf0e328 omap_bo_userspace=0x8b48b0000
    [HOST] [HOST ] 47.443756 s: fd=15 dmafd1=19 bo=0x8bf0e4f0 BO=0x8bf0e4f0 omap_bo_=0x8bf0e538 omap_bo_userspace=0x8aca2000
    [HOST] [HOST ] 47.443786 s: after fd=15 dmafd1=19 bo=0x8bf0e4f0 BO=0x8bf0e4f0 omap_bo_=0x8bf0e538 omap_bo_userspace=0x8aca20000
    [HOST] [HOST ] 47.444152 s: fd=15 dmafd1=19 bo=0x8bf0e700 BO=0x8bf0e700 omap_bo_=0x8bf0e748 omap_bo_userspace=0x8a4b9000
    [HOST] [HOST ] 47.444183 s: after fd=15 dmafd1=19 bo=0x8bf0e700 BO=0x8bf0e700 omap_bo_=0x8bf0e748 omap_bo_userspace=0x8a4b90000
    [HOST] [HOST ] 47.444549 s: fd=15 dmafd1=19 bo=0x8bf0e910 BO=0x8bf0e910 omap_bo_=0x8bf0e958 omap_bo_userspace=0x89cd0000
    [HOST] [HOST ] 47.444580 s: after fd=15 dmafd1=19 bo=0x8bf0e910 BO=0x8bf0e910 omap_bo_=0x8bf0e958 omap_bo_userspace=0x89cd00000

    Thank you.

    Regards,

    Salman

  • Hi All,

    I have tried to render an ARGB_8888 formatted texture using an eglImage by converting ARGB_8888 to YUV with an OpenGL shader. I knew that for SYSTEM_DF_YUV420SP_UV we need 2 buffers, but I gave it a try (using only one buffer) and, as I expected, I got some random coloured texture on my object (an OpenGL-ES 3D object). Can anyone please elaborate on how this is processed?

    Any update on my earlier post? What could be wrong?

    Any help is appreciated.

    Thank you.

    Regards,

    Salman

  • Hi Salman,

    A segmentation fault in userspace can occur if any of the pointers being accessed is NULL or invalid.

    Can you please check this in your application once?
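
    For example, two quick checks worth adding just before the eglCreateImageKHR call (a sketch, reusing the variable names from the code you posted):

    /* A NULL function pointer returned by eglGetProcAddress, or a NULL/stale
     * native buffer handle, would both crash exactly at this call. */
    if ((eglCreateImageKHR == NULL) || (glEGLImageTargetTexture2DOES == NULL))
        Vps_printf(" EGL: ERROR: required EGL/GL extension entry points not found !!!\n");

    if (BO == NULL)
        Vps_printf(" EGL: ERROR: native buffer handle (BO) is NULL !!!\n");

    /* After the call, eglGetError() distinguishes a failed creation
     * (EGL_NO_IMAGE_KHR returned) from a crash inside the driver. */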

    Thanks

    Ramprasad

  • Hi Ramprasad,

    I have checked my application. I am not getting any NULL pointers; memory is allocated successfully, as shown in the debug prints above, and I am not accessing any other memory. Still, I am getting a segmentation fault at the eglCreateImageKHR function.

    Can you tell me whether the implementation steps I am following above for creating the eglImage are correct? Is anything missing?

    Help appreciated.

    Thank you.

    Regards,

    Salman


  • Hi Salman,
    I don't have a ready application to reproduce the issue and root-cause it.
    Please share an application/source which I can compile so that I can reproduce the issue.

    Thanks
    RamPrasad