
OMAP3530: underrun!!! (at least 30.640 ms long) ... WHY????


I am using an OMAP3530 with the DSS driving a 640x480 LCD, with WindowSystem=libpvrPVR2D_LINUXFBWSEGL.so and 3 framebuffers.  When drawing the screen I split it vertically: I reduce the viewport and draw 3D terrain on the left side, then restore the viewport to the full screen and draw 2D over the whole screen.  When doing this I get a seemingly random hang of the graphics drivers (gdb backtrace below), accompanied by a kernel message like "underrun!!! (at least 30.640 ms long)".  When I don't split the screen vertically (i.e., the viewport is only altered in the y direction) there are no problems.  Has anyone seen this, or can anyone tell me how to stop it?

 

Program received signal SIGINT, Interrupt.
0x402f5c3c in ioctl () from /lib/libc.so.6
(gdb) bt
#0  0x402f5c3c in ioctl () from /lib/libc.so.6
#1  0x40370874 in PVRSRVBridgeCall (hServices=<value optimized out>,
    ui32FunctionID=3223086917, pvParamIn=<value optimized out>,
    ui32InBufferSize=<value optimized out>, pvParamOut=0xbec54790,
    ui32OutBufferSize=8)
    at /home/prabu/gfxsdkcreate_new/ti_references/sources/GFX_Linux_DDK/src/eurasia/services4/srvclient/env/linux/common/pvr_bridge_u.c:201
#2  0x403701c4 in PVRSRVEventObjectWait (psConnection=<value optimized out>,
    hOSEvent=0xf)
    at /home/prabu/gfxsdkcreate_new/ti_references/sources/GFX_Linux_DDK/src/eurasia/services4/srvclient/env/linux/common/osfunc_um.c:307
#3  0x40372e80 in PVRSRVPollForValue (psConnection=0x36, hOSEvent=0xf,
    pui32LinMemAddr=0x410b231c, ui32Value=1, ui32Mask=4294967295,
    ui32Waitus=1000, ui32Tries=920)
    at /home/prabu/gfxsdkcreate_new/ti_references/sources/GFX_Linux_DDK/src/eurasia/services4/srvclient/common/resources.c:85
#4  0x4014554c in HardwareTextureUpload (gc=0x4fad858, psTex=0x5043368,
    ui32OffsetInBytes=0, psLevel=0x5048668) at texdata.c:1535
#5  0x40145eb0 in TranslateLevel (gc=0x4fad858, psTex=0x5043368, ui32Face=0,
    ui32Lod=2097152) at texdata.c:2155
#6  0x40147d90 in TextureMakeResident (gc=0x4fad858, psTex=0x5043368)
    at texmgmt.c:1015
#7  0x40147f64 in SetupTextureState (gc=0x4fad858) at texmgmt.c:2481
#8  0x40150cb8 in ValidateState (gc=0x4fad858) at validate.c:4286
#9  0x4012dbc8 in glDrawArrays (mode=4, first=0, count=6) at drawvarray.c:2296
#10 0x000a7c4c in GRL2_DrawFontList (list=0x7c709c) at GRL2/GRL2_util.c:318
#11 0x0000f1fc in UpdateScreen () at AFS/AFS_event.c:250
#12 0x0000f42c in EventLoop () at AFS/AFS_event.c:121
#13 0x000f57cc in main (argc=<value optimized out>, argv=0xbec54da4)
    at AFS/AFS_main.c:105
(gdb)
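For reference, a minimal sketch of the per-frame viewport sequence described above; the 320-pixel split point and the helper name are assumptions for illustration, not from the post:

```c
#include <assert.h>

typedef struct { int x, y, w, h; } Rect;

/* Hypothetical helper: the left portion of the screen used for the 3D pass. */
static Rect terrain_rect(int screen_w, int screen_h, int split_x)
{
    Rect r = { 0, 0, split_x, screen_h };
    (void)screen_w;  /* full width unused for the left-side pass */
    return r;
}

/* Per frame, the post describes roughly:
 *   glViewport(t.x, t.y, t.w, t.h);   draw 3D terrain (left side only)
 *   glViewport(0, 0, 640, 480);       draw the 2D overlay over everything
 *   eglSwapBuffers(dpy, surface);
 */
```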

  •    How often do you reload glViewport()?  Every frame?

       I have not seen any app attempt to reload glViewport() mid-scene in this way on the SGX, and I suspect it is causing problems with the PowerVR deferred rendering architecture.  I recommend you search the PowerVR documentation for any mention of this.  Have you seen this done anywhere in the SDK demos or the training course examples?  It would be a good experiment to modify one of them to see if you get the same problem.

       There are examples of reloading glViewport() to render to an FBO or pbuffer, and that approach would probably solve the problem.  Switch your render target to an FBO, render your 2D to a texture, then use that texture in your 3D scene.  This way, the viewport for the framebuffer stays constant.

    http://wiki.davincidsp.com/index.php/Render_to_Texture_with_OpenGL_ES

    Regards, Clay

     

  • Clay,

     

    We do reload glViewport on every frame, but I tried disabling that and it made no difference.  It definitely has something to do with a large texture being posted at about 2-3 Hz.

     

    Also, I am only seeing about 10Hz on my frame rate for a pretty busy VGA screen.  Does this seem right?

     

    Thanks,

     

    Ken
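A back-of-the-envelope check on the 10 Hz figure (this is arithmetic only, not a measurement): at 10 Hz each frame has a 100 ms budget, so a single stall as long as the reported underrun (30.64 ms) already consumes roughly a third of it.

```c
#include <assert.h>

/* Milliseconds available per frame at a given frame rate. */
static double ms_per_frame(double hz)
{
    return 1000.0 / hz;
}

/* Fraction of the frame budget consumed by a stall of the given length. */
static double stall_fraction(double stall_ms, double hz)
{
    return stall_ms / ms_per_frame(hz);
}
```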

  • Ken,

        Loading and binding large textures is expensive: glTexImage2D must copy the texture, and glBindTexture performs the twiddling.  You should probably reduce their size or load frequency, or look at using texture streaming.

    http://wiki.davincidsp.com/index.php/OpenGLES_Texture_Streaming_-_bc-cat_User_Guide

    Regards, Clay
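To put rough numbers on the upload cost described above (the texture dimensions here are illustrative assumptions; the thread never states them): each glTexImage2D of a w x h RGBA8888 texture copies w*h*4 bytes, so halving both dimensions quarters the copy and twiddle work.

```c
#include <assert.h>

/* Bytes copied per glTexImage2D call for a w x h texture. */
static unsigned long tex_bytes(unsigned long w, unsigned long h,
                               unsigned long bytes_per_texel)
{
    return w * h * bytes_per_texel;
}

/* Bytes per second when the texture is re-posted at the given rate. */
static unsigned long upload_rate(unsigned long w, unsigned long h,
                                 unsigned long bytes_per_texel,
                                 unsigned long posts_per_sec)
{
    return tex_bytes(w, h, bytes_per_texel) * posts_per_sec;
}
```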

  • I checked the SGX speed by placing the following lines in GFX_Linux_KM/services4/system/omap3430/sysutils_linux.c:EnableSGXClocks():

     

        if (atomic_read(&psSysSpecData->sSGXClocksEnabled) != 0)
        {
            return PVRSRV_OK;
        }

        PVR_DPF((PVR_DBG_MESSAGE, "EnableSGXClocks: Enabling SGX Clocks"));
            /*
            rateMpu = clk_get_rate(psSysSpecData->psMPU_CK);
            printk(KERN_ERR "EnableSGXClocks: CPU Clock is %luMhz", HZ_TO_MHZ(rateMpu));

            */

    #if defined(DEBUG)
        {
           
            IMG_UINT32 rate = clk_get_rate(psSysSpecData->psMPU_CK);
            PVR_DPF((PVR_DBG_MESSAGE, "EnableSGXClocks: CPU Clock is %dMhz", HZ_TO_MHZ(rate)));
        }
    #endif

     

    ....

        res = clk_set_rate(psSysSpecData->psSGX_FCK, lNewRate);
        if (res < 0)
        {
            PVR_DPF((PVR_DBG_ERROR, "EnableSGXClocks: Couldn't set SGX function clock rate (%d)", res));
            return PVRSRV_ERROR_GENERIC;
        }
            /*
            rateSgx = clk_get_rate(psSysSpecData->psSGX_FCK);
            printk(KERN_ERR "EnableSGXClocks: SGX Functional Clock is %luMhz", HZ_TO_MHZ(rateSgx));

            */
    #if defined(DEBUG)
        {
           
            IMG_UINT32 rate = clk_get_rate(psSysSpecData->psSGX_FCK);
            PVR_DPF((PVR_DBG_MESSAGE, "EnableSGXClocks: SGX Functional Clock is %dMhz", HZ_TO_MHZ(rate)));
        }
    #endif

     

    The output of these changes seems to appear at or near the frame rate:

    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz
    EnableSGXClocks: CPU Clock is 0Mhz
    EnableSGXClocks: SGX Functional Clock is 110Mhz

    Why is EnableSGXClocks called so often, and is there something I can do to eliminate it?  Also, is it common for the CPU clock to read zero?

     

    Thanks,
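On the "0Mhz" reading: a likely definition of HZ_TO_MHZ (an assumption here; check the actual DDK header) is plain integer division, so the message prints 0 whenever clk_get_rate() returns 0, i.e. when the clock framework has no rate recorded for psMPU_CK, rather than the CPU actually running at 0 MHz.  The frequent calls would also be expected if the driver's active power management enables and disables the SGX clocks around each render.

```c
#include <assert.h>

/* Assumed definition -- verify against the real macro in the DDK headers. */
#define HZ_TO_MHZ(hz) ((hz) / 1000000)

/* Integer division truncates: any rate below 1 MHz, including a
 * clk_get_rate() result of 0, prints as "0Mhz". */
```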