This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

TDA2HG: [OpenGL] Stitching with GL_TEXTURE_EXTERNAL_OES

Part Number: TDA2HG

Hi:

The following picture is the result we get when running on Ubuntu with GL_TEXTURE_2D and RGB input.

 

This picture is the surround view running on TDA2HG with GL_TEXTURE_EXTERNAL_OES and NV12 input.

Please see the difference between these two pictures (highlighted in red);

we are using the same surround-view algorithm with the same calibration table data.

The only difference is that the first picture uses GL_TEXTURE_2D and the second uses GL_TEXTURE_EXTERNAL_OES.

Any suggestions?
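For reference, the only shader-side difference between the two paths is the sampler declaration. This is a minimal sketch (the uniform/varying names are hypothetical, not taken from the app above); with samplerExternalOES the driver performs the YUV-to-RGB conversion implicitly, so the choice of conversion matrix is outside the application's control:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical GLES2 fragment shaders for the two texture paths.
 * GL_TEXTURE_2D uses a plain sampler2D; GL_TEXTURE_EXTERNAL_OES
 * requires the OES_EGL_image_external extension and a
 * samplerExternalOES uniform. */
static const char *frag_2d =
    "precision mediump float;\n"
    "uniform sampler2D uTex;\n"
    "varying vec2 vUV;\n"
    "void main() { gl_FragColor = texture2D(uTex, vUV); }\n";

static const char *frag_oes =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uTex;\n"
    "varying vec2 vUV;\n"
    "void main() { gl_FragColor = texture2D(uTex, vUV); }\n";
```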

  • Hi,

    Is the issue the appearance of the green bands with GL_TEXTURE_EXTERNAL_OES as opposed to GL_TEXTURE_2D? Can you confirm?

    As you noted, GL_TEXTURE_2D expects RGB data, while with EXTERNAL_OES YUV data can be used. If you are noticing differences, can you confirm that the sources are as expected and that no difference is seen in the source images?

    Thanks,

    Gowtham

  • Gowtham Tammana said:
    Is the issue the appearance of the green bands with GL_TEXTURE_EXTERNAL_OES as opposed to GL_TEXTURE_2D? Can you confirm?

    Yes, we marked it; the green band is the blend region.

    Gowtham Tammana said:
    As you noted, GL_TEXTURE_2D expects RGB data, while with EXTERNAL_OES YUV data can be used. If you are noticing differences, can you confirm that the sources are as expected and that no difference is seen in the source images?

    Yes, the source images are the same, except that the picture above has no color because we take only the Y plane from NV12 as RGB. You can check the vehicles in both pictures; it is the same scene.

  • Hi,

    With your second note it is a little clearer. The issue you are seeing is that the image is not calibrated the same in the RGB standalone output compared to the NV12 TDA output; it is not the appearance of the green color in the output image. Is that correct?

    The calibration file seems to be OK; the output image wouldn't look that close if there were something wrong with it. Can you confirm whether you are using the same lens file in both tests as well?

    Also, can you share the details of the TI baseline release you are using here?

    Thanks,
    Gowtham

  • Gowtham Tammana said:
    With your second note it is a little clearer. The issue you are seeing is that the image is not calibrated the same in the RGB standalone output compared to the NV12 TDA output; it is not the appearance of the green color in the output image. Is that correct?

      yes.

    Gowtham Tammana said:
    Also, can you share the details of the TI baseline release you are using here?

    We are using VisionSDK 3.0.5.

  • Hi,

    Can you also confirm whether the same lens file is being used?

    Thanks,
    Gowtham

  • Hello,

    Could you confirm that the CALMAT.BIN in the standalone app matches the files found at /home/root/.calibtable on the target file system?  Also, could you confirm that the LENS.BIN file you are using in the standalone app matches the file found at /opt/vision_sdk/LENS.BIN on the target file system?

    Regards,

    Lucas

  • Hi:

    We are not using the surround-view use case from VisionSDK, so the file "/opt/vision_sdk/LENS.BIN" is not used.

    We are using Qt with our standalone parameters, and have confirmed that the parameters on the desktop (Ubuntu) and on the target are the same.

  • Hello:

      Any update?

      Thanks....

  • Hi, 

    I see you mentioned that you are not using the VisionSDK version of the use case and have your own Qt version.

    Are any components from the VisionSDK use case being used here? If so, what are they? Is the calibration scheme also different from that of the SDK?

    From the SDK perspective, the calibration scheme is not dependent on the format of the images, and the use of RGB/NV12 format shouldn't have any impact on the final image.

    Thanks,

    Gowtham

  • Hi:

    Here is our AVM-with-Qt architecture.

    As you can see, we can test with still pictures, so the issue should not be related to camera capture. The display output is located on IPU2, which runs VisionSDK.

    Is there any simple OpenGL test to validate such an issue?

  • Hi Andy,

    > Is there any simple OpenGL test to validate such an issue?

    Are you looking for an example usage of EXTERNAL_OES textures here? I believe you are having issues with calibration more than with the texture itself, and the calibration output seems to be dependent on the format.
    Thanks,
    Gowtham
  • Hi Gowtham:

    Meanwhile, we found another two issues:

    1. A black line on the car head
    2. The center of the wheel deviates a little

              

    None of the above three issues is reproduced on our desktop AVM, which uses the RGB texture; the target uses the NV12 texture (same algorithm and calibration data).

    So far we think it could be related to the EXTERNAL_OES textures.

    Please share your experience: how can we verify this, or what small test can we do to check it?

    Thanks very much!!
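One small check for the geometric deviation that does not need GL at all: a CPU reference fetch from the NV12 buffer. Because the interleaved UV plane is at half resolution in both axes, a half-texel misalignment between the Y and UV lookups can produce exactly this kind of small positional offset. A minimal sketch (the nv12_fetch helper is hypothetical, for illustration only):

```c
#include <assert.h>

/* CPU reference fetch from an NV12 buffer: the Y plane is full
 * resolution, followed by an interleaved UV plane subsampled 2x2.
 * Each 2x2 block of luma pixels shares one chroma (U,V) pair. */
typedef struct { unsigned char y, u, v; } YUV;

static YUV nv12_fetch(const unsigned char *buf, int w, int h, int x, int y) {
    const unsigned char *uv = buf + w * h;  /* UV plane follows Y plane */
    int cx = x / 2, cy = y / 2;             /* chroma coordinates        */
    YUV t;
    t.y = buf[y * w + x];
    t.u = uv[cy * w + 2 * cx];              /* UV row stride equals w    */
    t.v = uv[cy * w + 2 * cx + 1];
    return t;
}
```

Comparing this reference against what the EXTERNAL_OES path samples at the same coordinates would show whether the chroma alignment differs between the two paths.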

  • Hi Andy,

    None of the three issues mentioned indicates an NV12 rendering artifact; they are more likely due to how the animation sequences/shaders are structured, and probably some additional handling is required for non-RGB formats. The texture usage with EXTERNAL_OES might be OK, since the issues don't show any color artifacts.

    Are there any assumptions about the NV12->RGB conversion here - e.g. full range, BT.709, etc.?

    Can you run the EXTERNAL_OES version on your desktop setup?

    Thanks,
    Gowtham
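To make the range/matrix question above concrete, here is a small CPU sketch of the two common limited-range YCbCr-to-RGB conversions (the coefficients are the widely used video approximations; the function name is hypothetical). If the desktop path decodes with one matrix while the GPU's implicit EXTERNAL_OES conversion uses the other, colors will shift even though geometry should not:

```c
#include <assert.h>

typedef struct { int r, g, b; } RGB;

/* Clamp to [0,255] with round-to-nearest. */
static int clamp8(double v) {
    if (v < 0.0) return 0;
    if (v > 255.0) return 255;
    return (int)(v + 0.5);
}

/* Limited-range (Y in 16..235, C in 16..240) YCbCr -> RGB.
 * bt709 != 0 selects BT.709 coefficients, otherwise BT.601. */
static RGB yuv_to_rgb(int y, int u, int v, int bt709) {
    double yc = 1.164 * (y - 16);
    double uc = u - 128.0, vc = v - 128.0;
    RGB out;
    if (bt709) {
        out.r = clamp8(yc + 1.793 * vc);
        out.g = clamp8(yc - 0.213 * uc - 0.533 * vc);
        out.b = clamp8(yc + 2.112 * uc);
    } else { /* BT.601 */
        out.r = clamp8(yc + 1.596 * vc);
        out.g = clamp8(yc - 0.392 * uc - 0.813 * vc);
        out.b = clamp8(yc + 2.017 * uc);
    }
    return out;
}
```

Both matrices map limited-range black (16,128,128) and white (235,128,128) to the same RGB, but chromatic samples diverge, which is why a matrix mismatch shows up as a color shift rather than a geometric one.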

  • Hi Gowtham:

    We plan to make several samples to validate this issue; that may take quite a long time.

    We would like to close the ticket, as there have been no updates for a long time.

    Thanks for your support.

  • Hi Andy,

    Thanks for the update. Please feel free to create a new ticket if you need any further assistance.

    Thanks,

    Gowtham