Optimizing glReadPixels - GPGPU Application

Hello all,

  I have written a GPGPU application that uses the SGX chip on my DM8168 with OpenGL ES 2.0. I send my data in as a texture, draw a few triangles, and render to a Framebuffer Object with a color buffer attachment. I then NEED to get the resulting memory out of OpenGL, so I use glReadPixels.
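
  Roughly, the setup looks like this (a trimmed-down sketch; identifiers are placeholders and error checking is omitted):

#include <GLES2/gl2.h>

static GLubyte pixels[800 * 600 * 4];   /* readback destination in user space */

static void render_and_read(void)
{
    GLuint fbo, colorTex;

    /* color texture to render into */
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 800, 600, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* FBO with the texture as its color attachment */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    /* ... upload the input texture and draw the triangles ... */

    /* the expensive part: blocks until the SGX finishes rendering,
       then copies the whole frame across to the ARM */
    glReadPixels(0, 0, 800, 600, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}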

  My initial tests, however, were not good and ended up being very inefficient. On an 800x600 buffer I was running at 8 fps with the ARM load at 100%! When I removed the glReadPixels call I easily achieved 30 fps with very low ARM load, so I am sure my problem lies in extracting data from the SGX.

  I came across a few different posts while researching and just wanted some suggestions/thoughts on which approach would be best. I see two possible solutions: using the Pixmap support that comes with my platform, or double-buffering the FBO (however, this one I am not so sure of; see the sketch below).
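
  For the double-buffering idea, what I have in mind is something like the following ping-pong scheme (just a sketch, assuming two identically configured FBOs; as far as I can tell it only hides the readback stall, it does not remove the copy itself):

#include <GLES2/gl2.h>

extern GLuint fbo[2];                   /* two complete FBOs, set up as above */
static GLubyte pixels[800 * 600 * 4];

void process_frames(void)
{
    unsigned frame = 0;

    for (;;) {
        /* render frame N into one FBO */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[frame & 1]);
        /* ... draw frame N ... */
        glFlush();                      /* kick the GPU without blocking */

        if (frame > 0) {
            /* read back frame N-1 from the other FBO, which should
               be finished (or nearly so) by now */
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[(frame + 1) & 1]);
            glReadPixels(0, 0, 800, 600,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            /* ... hand pixels to the next pipeline stage ... */
        }
        frame++;
    }
}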

  I was linked to this source: https://gitorious.org/tigraphics/sgxperf/blobs/1be449b21d588f2817a08e63f37e336b2f1a07fb/sgxperf_gles20_vg.cpp and I think it makes sense. I just wanted to see if anyone had experience with this before I start digging. As I see it, I need to install the CMEM module and use the CMEM calls to allocate memory for a Pixmap... That should let the SGX render directly into user-space memory... I then need to pass/copy that memory to GStreamer.
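
  Based on my reading of sgxperf, the flow would look something like the sketch below. The NATIVE_PIXMAP_STRUCT layout is copied from that source, the CMEM calls are from TI's cmem.h, and the pixel-format code and the helper name are my own guesses, so this needs verifying against the actual driver headers:

#include <EGL/egl.h>
#include <cmem.h>

/* layout taken from the sgxperf source; the SGX driver interprets
   the EGLNativePixmapType handle as a pointer to this struct */
typedef struct {
    long ePixelFormat;   /* 2 == RGB565 per sgxperf -- verify for your driver */
    long eRotation;
    long lWidth;
    long lHeight;
    long lStride;        /* bytes per row */
    long lSizeInBytes;
    long pvAddress;      /* physical address (seen by the SGX) */
    long lAddress;       /* virtual address (seen by the ARM) */
} NATIVE_PIXMAP_STRUCT;

EGLSurface create_cmem_pixmap_surface(EGLDisplay dpy, EGLConfig cfg)
{
    static NATIVE_PIXMAP_STRUCT pix;          /* must outlive the surface */
    CMEM_AllocParams params = CMEM_DEFAULTPARAMS;
    void *vaddr;

    CMEM_init();
    vaddr = CMEM_alloc(800 * 600 * 2, &params);   /* contiguous RGB565 buffer */

    pix.ePixelFormat = 2;                     /* RGB565 code per sgxperf */
    pix.eRotation    = 0;
    pix.lWidth       = 800;
    pix.lHeight      = 600;
    pix.lStride      = 800 * 2;
    pix.lSizeInBytes = 800 * 600 * 2;
    pix.lAddress     = (long)vaddr;
    pix.pvAddress    = (long)CMEM_getPhys(vaddr);

    /* cfg must have been chosen with EGL_PIXMAP_BIT in its
       EGL_SURFACE_TYPE and a color depth matching the pixmap */
    return eglCreatePixmapSurface(dpy, cfg,
                                  (EGLNativePixmapType)&pix, NULL);
}

/* After eglMakeCurrent() on this surface, drawing plus eglWaitGL()
   should leave the finished frame directly in the CMEM buffer: no
   glReadPixels copy, and CMEM_getPhys() gives the physical address
   to hand to GStreamer. */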

Thank you for your time!