Irregular video capture timings

I am seeing a problem with long-term video capture on the DM368. When I start capturing video (via the DMAI interface at 720p), I get regular frame-capture intervals all fairly close to 33.333 ms, with the occasional sample slightly shorter or longer. But when I do a run overnight, at some point these values start to diverge and the time between frame captures ranges from 1.5 ms to 110 ms. The long-term average still runs close to the 33.333 ms one would expect for a 30 frames-per-second capture rate.

 

In one case, after about 30 minutes of capture, I start seeing two captures close together at about 16 ms apart and an occasional capture at about 65 ms; then these start to happen more and more frequently. I am not seeing any contention or waiting for buffers at my application level. I have narrowed it down to the call to the DMAI routine Capture_get(): I take the timings immediately before and after Capture_get(). I am encoding with the H.264 encoder, whose timings remain fairly constant and are much shorter than the capture times. I am also capturing and encoding AAC audio at the same time.
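
For reference, the timing measurement is essentially the following (a simplified sketch of my instrumentation; the buffer handoff to the encoder and most error handling are trimmed, and the function names are illustrative):

    #include <stdio.h>
    #include <time.h>

    #include <ti/sdo/dmai/Dmai.h>
    #include <ti/sdo/dmai/Buffer.h>
    #include <ti/sdo/dmai/Capture.h>

    /* Millisecond difference between two monotonic timestamps */
    static double delta_ms(const struct timespec *t0, const struct timespec *t1)
    {
        return (t1->tv_sec  - t0->tv_sec)  * 1000.0 +
               (t1->tv_nsec - t0->tv_nsec) / 1000000.0;
    }

    /* Time a single Capture_get() call; hCapture was created earlier */
    static Buffer_Handle timed_capture_get(Capture_Handle hCapture)
    {
        Buffer_Handle   hCapBuf = NULL;
        struct timespec before, after;

        clock_gettime(CLOCK_MONOTONIC, &before);   /* link with -lrt */
        if (Capture_get(hCapture, &hCapBuf) < 0) {
            return NULL;                           /* capture failed */
        }
        clock_gettime(CLOCK_MONOTONIC, &after);

        printf("Capture_get: %.3f ms\n", delta_ms(&before, &after));
        return hCapBuf;
    }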

 

In another case, I ran for several hours and got a zero-size buffer out of the H.264 encoder. My application detected this, destroyed all the DMAI objects, and recreated them; immediately after this I started seeing the large variances in the capture times. But in the overnight runs where I have seen the problem, I was still running with the originally created DMAI objects.

 

Via top I see plenty of idle time: more than 30% while streaming and more than 50% when not streaming, and I have seen the problem in both situations. We also have 90 megabytes of memory free, so it does not look like a CPU or memory contention problem.


My environment is DVSDK 3_10_00_19 and Linux 2.6.32-rc2 (PSP-37).

 

Any idea what could be going on and how to fix it?

 

  • Are you trying this on the DM368 EVM or the IPNC camera? One of the things to check is the number of buffers you are using on the video capture driver. I am wondering if the number of buffers is not enough to keep up with the video capture rate. How are you capturing the video?

    If you look at the capture driver architecture, the driver takes empty buffers and fills them with captured video data, and these full buffers are then used by the application. The application takes ownership of these buffers by calling the select() function and, once the buffers are consumed, returns ownership back to the driver so they can be filled with data again.

    I am just wondering if the Capture_get() function (which internally calls select()) is stalling when there are no full buffers available for the application to consume, in which case you might see the increase in frame time. You might want to try increasing the number of buffers on the capture driver and priming these buffers during the initialization phase of your application, along the lines of the sketch below.
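
    Something like this (a rough sketch; I am assuming the DMAI DM365-family default attributes apply to your DM368 and that your BufTab already holds enough appropriately sized buffers, so adjust the names and counts to your setup):

        #include <ti/sdo/dmai/Dmai.h>
        #include <ti/sdo/dmai/BufTab.h>
        #include <ti/sdo/dmai/Capture.h>

        Capture_Handle create_capture(BufTab_Handle hBufTab)
        {
            /* Start from the DM365-family defaults (the DM368 is a
             * DM365 derivative) */
            Capture_Attrs cAttrs = Capture_Attrs_DM365_DEFAULT;

            /* Give the driver more buffers to fill so a late dequeue by
             * the application does not starve the capture hardware */
            cAttrs.numBufs = 6;    /* e.g. up from 3 */

            /* Capture_create() queues (primes) the driver buffers and
             * starts streaming; Capture_get()/Capture_put() then cycle
             * them between driver and application */
            return Capture_create(hBufTab, &cAttrs);
        }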

    Prateek

     

  • The board is a proprietary design. I did not notice the problem with the 2_10_01_18 DVSDK, but did when we recently migrated to the 3_10_00_19 DVSDK. I have 6 buffers, and basically have a loop where I do the Capture_get(), send the captured buffer to the encoder, get a free buffer back from the encoder, and do the Capture_put() to return the buffer to the capture device (see the sketch at the end of this post). When the capture device is created it is assigned 3 buffers, and every time I take one out I return one back, so it should always have at least 2 buffers to work with.

    I take the time stamp right before the Capture_get() and again right after it, so by then it should have a buffer available. The encode only averages ~20 ms, so it seems to be keeping up, and I do not seem to be waiting for the buffers it returns. If all the times were less than 33.333 ms, then I could see the capture device waiting on a buffer, but what about the 110 ms times? It seems like something happens that triggers a small number of the variances, and once they start they slowly become more frequent, occurring every couple of minutes at first and eventually multiple times a second.
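
    For clarity, the loop is essentially the following (simplified; in the real application the encoder runs in its own thread, the FIFO handles are illustrative stand-ins for however buffers are exchanged with it, and error handling is trimmed):

        #include <ti/sdo/dmai/Dmai.h>
        #include <ti/sdo/dmai/Buffer.h>
        #include <ti/sdo/dmai/Capture.h>
        #include <ti/sdo/dmai/Fifo.h>

        void capture_loop(Capture_Handle hCapture, Fifo_Handle hToEncoder,
                          Fifo_Handle hFromEncoder, volatile int *running)
        {
            Buffer_Handle hCapBuf, hFreeBuf;

            while (*running) {
                /* Dequeue a filled frame from the capture driver */
                if (Capture_get(hCapture, &hCapBuf) < 0)
                    break;

                /* Hand the filled frame to the encoder thread... */
                if (Fifo_put(hToEncoder, hCapBuf) < 0)
                    break;

                /* ...and take back a frame the encoder is done with */
                if (Fifo_get(hFromEncoder, (Ptr *)&hFreeBuf) < 0)
                    break;

                /* Return the free frame to the driver to be refilled */
                Capture_put(hCapture, hFreeBuf);
            }
        }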