Delay from DMAI Loader priming

I am trying to create a system that encodes video on one DaVinci DM365 EVM, then transmits it (via serial port) to another DM365 to be decoded and displayed.  I decided to use the encode and decode demos as starting points for the software on each DaVinci.

I was able to modify the encode demo's writer.c file to write video data to the serial port rather than to a file, and that part of the project is working great.

I have also modified the decode side to read video data from the serial port rather than from a file, though in this case I had to modify some DMAI Loader functions: Loader_create, Loader_prime, and Loader_readData.  I've been relatively successful on this side as well.  I am able to receive video from the transmitting DaVinci, then decode and display it (at a very low frame rate, but that's to be expected with a serial connection).
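
Roughly, the serial-port setup on both boards looks like this (a simplified sketch, not my actual demo modifications; the device name and baud rate are just placeholders):

    /* Open and configure a serial port in raw (non-canonical) mode so the
       encoded video bytes pass through unmodified. */
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    static int openSerialRaw(const char *dev)   /* e.g. "/dev/ttyS0" (placeholder) */
    {
        struct termios tio;
        int fd = open(dev, O_RDWR | O_NOCTTY);

        if (fd < 0) {
            return -1;
        }
        if (tcgetattr(fd, &tio) < 0) {
            close(fd);
            return -1;
        }

        tio.c_iflag = 0;                        /* no input processing */
        tio.c_oflag = 0;                        /* no output processing */
        tio.c_lflag = 0;                        /* non-canonical, no echo */
        tio.c_cflag = CS8 | CREAD | CLOCAL;     /* 8 data bits, receiver enabled */
        tio.c_cc[VMIN]  = 1;                    /* block until at least 1 byte arrives */
        tio.c_cc[VTIME] = 0;

        cfsetispeed(&tio, B115200);             /* assumed baud rate */
        cfsetospeed(&tio, B115200);

        if (tcsetattr(fd, TCSANOW, &tio) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }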

However, the video that's being decoded is nowhere close to real time; it's delayed by about 5 minutes.  This is because the Loader must be primed before the DaVinci can start decoding, and priming requires the board to read in about 500 kB over the serial port.  The delay introduced by priming with 500 kB of data wasn't a problem in the original decode demo, since it was simply reading from a file, but it is a big problem in this application.

I'm wondering if anybody has any ideas on how to reduce this delay.  I'm fairly confident that it's possible, because the encodedecode demo doesn't seem to introduce any delay, but I haven't been able to figure out how that's accomplished.


I appreciate any thoughts you have on the matter.

Thanks,

Brian

  • Hi Brian,

    That is an interesting use case you have got there. Ultimately, the amount of data the Loader initially reads during the priming phase is determined by this line in the video.c file in the decode demo:


        /* Ask the codec how much input data it needs */
        lAttrs.readSize = Vdec2_getInBufSize(hVd2);


    readSize corresponds to the minimum data size the decoder 'thinks' it needs for its process function call. Typically this varies depending on the decoder being used and on the parameters the decoder is given during its creation (e.g. maxHeight, maxWidth). You may want to play with these parameters to see if the codec reduces its requirement (e.g. if the max resolution your application supports is QCIF, then set the maxHeight and maxWidth accordingly).
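
    For illustration, something along these lines in video.c would lower the declared maximum resolution before the decoder is created, so that Vdec2_getInBufSize() reports a smaller size (a rough sketch only, not the exact demo code; the engine handle and codec name are placeholders):

        /* Declare a smaller max resolution so the decoder asks for less
           input data; the codec name below is just an example */
        VIDDEC2_Params        params    = Vdec2_Params_DEFAULT;
        VIDDEC2_DynamicParams dynParams = Vdec2_DynamicParams_DEFAULT;
        Vdec2_Handle          hVd2;

        params.maxWidth  = 176;     /* QCIF width  (example) */
        params.maxHeight = 144;     /* QCIF height (example) */

        hVd2 = Vdec2_create(hEngine, "h264dec", &params, &dynParams);
        if (hVd2 == NULL) {
            /* handle the creation failure */
        }

        /* The Loader is then primed with the (smaller) size the codec reports */
        lAttrs.readSize = Vdec2_getInBufSize(hVd2);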

    You can also try to hardcode this to a lower value, but depending on the decoder you may get an error saying there is not enough data to proceed.
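
    For example (the value below is only illustrative):

        /* Prime with a fixed, smaller amount of data; too small a value
           may cause the decoder to report an error */
        lAttrs.readSize = 50 * 1024;    /* e.g. ~50 kB instead of ~500 kB */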

    Best regards,

    Vincent