Render Surface When Not Using A Display

All,

  I am attempting a GPGPU application using OpenGL ES v2.  I've laid out the groundwork using demos and emulators.  I am now trying to transfer this to my DM8168 board and use the SGX530.

  My goal is to render directly to an FBO with a texture as the color attachment, and pull the information out using glReadPixels.  However, I am confused about setting up the render context.  I initially thought that I would need a PixelBuffer, but after a few attempts, tests, and some reading, it seems that may not be the best approach.
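
  For reference, the render-to-texture part I have in mind is roughly the following (a minimal sketch only; WIDTH, HEIGHT and pixels are placeholders and error checking is mostly omitted):

      #include <GLES2/gl2.h>

      /* Texture used as the FBO's color attachment, read back with glReadPixels. */
      GLuint fbo, tex;

      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

      glGenFramebuffers(1, &fbo);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, tex, 0);

      if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
          /* handle error */
      }

      /* ... draw the full-screen quad with the processing shader ... */

      glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);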

  What is the best approach to setting up an EGLSurface when X is not running on my DM8168?  Thank you.

Constantin

http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES#PixelBuffers

P.S. -  The above wiki page is great, but doesn't include the EGLSurface setup when not using a windowing system.  Also, it relies on PVRShell, which is C++.  I need a C approach.

  • Can you please share your data flow - is this a real-time application?  After glReadPixels, is the result to be used further as a GL texture, or used on the CPU only?  What performance are you expecting, and what is the input resolution of the image?  There are several offscreen examples in the sgxperf link below.  Let me know if you face issues.

    https://gitorious.org/tigraphics/sgxperf/blobs/1be449b21d588f2817a08e63f37e336b2f1a07fb/sgxperf_gles20_vg.cpp

  • Prabindh, thank you for your response.

    I have figured out a few things.  Hopefully, if others stumble upon this they will get some quick answers.

    Data flow:  Stream Video -> OpenGL -> Continue Streaming Video (CPU and dedicated HW), expected to run as fast as possible at resolutions of up to 1920x1080.

    The concept of the application is to use OpenGL to make whole-frame modifications to each frame of a video stream without affecting the frame rate.

    Initial testing, using gstreamer and a videotestsrc, shows around 5 fps at 1920x1080.  Note: I'm sure MANY optimizations exist; this is the first iteration of the software.  The only optimizations I'm using so far are a framebuffer object and glReadPixels.  I also do a vertical flip in the shader (since glReadPixels reads from the bottom up) so that the image ends up stored in the standard top-down format; see the shader sketch at the end of this post.  I am not doing double buffering yet, but I assume that would be a good first step.  (Issue the OpenGL calls, continue with other work, and by the time you return they should be done.  This requires a thread which loads and unloads textures as well.)

    Regarding my initial question: I will post some background (my sources of confusion), my solution, and some weird observations.

    I initially thought I needed a RenderBuffer surface, and so attempted to make one with the emulated OpenGL ES on two operating systems, a VMware guest running Ubuntu and a native Windows 7 machine.  I have a dedicated graphics card, so I figured it shouldn't be a problem.  I could not get the guest VM to work with 3D HW acceleration, and because of this I couldn't get ANY offscreen surface working.  On the Windows 7 machine, I may have gotten offscreen buffers working, but I was getting an error which I attributed to the same issue as on VMware.  However, I later determined it was an unrelated error ... leading me to believe that the PowerVR emulator can indeed do offscreen buffering on native Windows with a discrete graphics card.

    So what was the solution?  Well, on Windows it was making a PbufferSurface, but this question wasn't really about Windows.  Below I link a document; if you Ctrl-F "no display" you will find a mention of what to do when there is no display.  Apparently, you make a WindowSurface and pass NULL for the native window (or EGL_NO_DISPLAY or EGL_NO_SURFACE, whatever you feel describes it best, since they all equate to 0x00).  A minimal sketch of this setup is at the end of this post.  Note... I think this approach may work on Windows as well!

    http://www.imgtec.com/powervr/insider/docs/Migration%20from%20OpenGL%20ES%201.x%20to%20OpenGL%20ES%202.0%20API.1.1f.External.pdf

    All my confusion here stemmed from the supposed need for a render surface.  We don't really need a render surface; all we really need is a render context... but in order to make a context current we need a surface... at least that's the way I understand it.  I mean, we bind our own framebuffer object to GL_FRAMEBUFFER instead of the default framebuffer 0, so that render surface never even gets used!

    Some really weird stuff.  On the DM8168, I print out different configs than I requested (maybe it's a header values issue?).  For example, I requested OpenGL ES 2 and EGL_WINDOW_BIT, but I got back an OpenVG and pixmap surface config from eglChooseConfig (a small config-query sketch is at the end of this post).  Regardless, I still used this config and everything worked fine...  Some other weird values occurred with glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE).  I got FORMATs and TYPEs which were not declared in any of my headers... neither the Graphics SDK given by TI nor the ones given by ImgTec.

    Anyway, I hope someone finds this helpful.  As for me, if you have any suggestions on optimization, they would be greatly appreciated.
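
    The vertical flip mentioned above, as a sketch (the attribute names are just placeholders, not necessarily what I use):

        /* Flip the y texture coordinate in the vertex shader so the
         * glReadPixels output (bottom-up) ends up stored top-down. */
        static const char *vertex_src =
            "attribute vec2 aPosition;\n"
            "attribute vec2 aTexCoord;\n"
            "varying   vec2 vTexCoord;\n"
            "void main() {\n"
            "    vTexCoord = vec2(aTexCoord.x, 1.0 - aTexCoord.y);\n"
            "    gl_Position = vec4(aPosition, 0.0, 1.0);\n"
            "}\n";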
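
    The EGL setup without a display, roughly (a sketch only; the config attribute list and error handling are illustrative, and passing a NULL native window is what the document linked above describes for the no-display case):

        #include <EGL/egl.h>

        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, NULL, NULL);

        EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint ncfg = 0;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &ncfg);

        /* No windowing system: the native window handle is just NULL. */
        EGLSurface surf =
            eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)NULL, NULL);

        EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

        eglMakeCurrent(dpy, surf, surf, ctx);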
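
    And checking what eglChooseConfig actually handed back (dpy and cfg as in the sketch above; this is only an illustration, not exactly how I printed the values):

        #include <stdio.h>

        EGLint rtype = 0, stype = 0;
        eglGetConfigAttrib(dpy, cfg, EGL_RENDERABLE_TYPE, &rtype);
        eglGetConfigAttrib(dpy, cfg, EGL_SURFACE_TYPE,    &stype);
        printf("renderable 0x%x (ES2 %s), surface 0x%x (window %s)\n",
               rtype, (rtype & EGL_OPENGL_ES2_BIT) ? "yes" : "no",
               stype, (stype & EGL_WINDOW_BIT)     ? "yes" : "no");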