PROCESSOR-SDK-AM57X: Improving Ducati JPEG decoder latency

Part Number: PROCESSOR-SDK-AM57X


Hello, I am designing an application that streams MJPEG over UDP to the AM57x EVM under strict video-latency constraints.  At the moment I am using GStreamer to handle the streaming, parsing, and decoding of the stream, but I am interested in whether changing some of the decoder options would have much effect.  By default, gstducatividdec.c decodes an entire frame at a time, and I would like to know whether time can be saved by decoding the image in chunks instead.  The HDVICP2 manual notes that if I use IVIDEO_FIXEDLENGTH instead of IVIDEO_ENTIREFRAME, I can send buffers of 8K (or multiples of 8K).

To make this change, would I just replace the three instances of IVIDEO_ENTIREFRAME in gstducatividdec.c with IVIDEO_FIXEDLENGTH and recompile the Ducati plugin?  And if so, would I have to change my application to pass those image chunks rather than the entire frame, which I assume is GStreamer's default mode of operation?

My current pipeline is: udpsrc ! jpegparse ! ducatijpegdec
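
For concreteness, this is roughly the change I have in mind. This is only a sketch: I'm assuming the stock IVIDDEC3 create-time parameters and the IVIDEO_DataMode values from TI's ivideo.h, not the exact layout of gstducatividdec.c.

    #include <ti/xdais/dm/ivideo.h>
    #include <ti/xdais/dm/ividdec3.h>

    /* Hypothetical excerpt of the decoder setup in the Ducati plugin:
     * switch the input side from whole-frame to fixed-length chunks. */
    static void
    configure_data_modes (IVIDDEC3_Params *params)
    {
        /* Default: the codec consumes one complete bitstream frame.
         *   params->inputDataMode  = IVIDEO_ENTIREFRAME;
         *   params->outputDataMode = IVIDEO_ENTIREFRAME;
         */

        /* Proposed: feed the bitstream in 8K (or N*8K) chunks instead. */
        params->inputDataMode     = IVIDEO_FIXEDLENGTH;
        params->numInputDataUnits = 1;   /* data units per sync callback */
        params->outputDataMode    = IVIDEO_ENTIREFRAME;
    }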

  • Hi Tom,
    Handling low-latency data communication calls for slice-level processing, which is complex to implement. In the Linux SDK there is no sample application available for this feature. Just setting inputDataMode or outputDataMode will not be sufficient:
    it requires implementing callback functions, which the codec calls when data is ready or when data is exhausted.
    This has been sample-tested on QNX with an application; there, too, only the decoder's output mode is configured for slice level, while the input mode still uses ENTIREFRAME.
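
    As a rough sketch only (assuming the standard XDM 1.x data-sync interface from xdm.h/ividdec3.h; dynParams and the two handles are placeholders, and this is not working code), the callbacks are registered through the decoder's dynamic parameters, and the codec then invokes them with an XDM_DataSyncDesc:

        #include <ti/xdais/xdas.h>
        #include <ti/xdais/dm/xdm.h>
        #include <ti/xdais/dm/ividdec3.h>

        /* Codec calls this when the current input chunk is exhausted
         * and it needs more bitstream data. */
        static Void app_getData(XDM_DataSyncHandle h, XDM_DataSyncDesc *desc)
        {
            /* Fill desc->numBlocks / desc->baseAddr / desc->blockSizes
             * with the next chunk(s) of the bitstream. */
        }

        /* Codec calls this when a block of decoded output is ready. */
        static Void app_putData(XDM_DataSyncHandle h, XDM_DataSyncDesc *desc)
        {
            /* Consume the partial output described by desc. */
        }

        /* Registered via the dynamic params before decoding starts: */
        dynParams->getDataFxn    = app_getData;
        dynParams->getDataHandle = myInputState;   /* app-defined handles */
        dynParams->putDataFxn    = app_putData;
        dynParams->putDataHandle = myOutputState;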

    Refer to this for the changes required in libdce to handle the callbacks:
    git.omapzoom.org/.../

    and this is a sample decode application:
    git.omapzoom.org/.../

    Ram
  • Hi Ram,

    I've come across an old document describing how the MJPEG decoder works: SPRUH44 (MJPEG Decoder on HDVICP2 and Media Controller Based Platform).  It seems outdated, but possibly useful for understanding the handshaking and callbacks needed between the application and the decoder running on the Ducati subsystem.  How valid is this document?

    Secondly, is there any other documentation for libdce or Ducati that explains the callback procedure?

  • Hi Tom,

    The document you mentioned is the user guide for the MJPEG decoder on IVA-HD. It is still valid, but it only covers how the handshaking happens between the IVA-HD and the M4. In the real use case the application runs on the A15, so the callback has to travel from the M4 to the A15 to fetch the partial data, and that is where libdce, rpmsg, DCE, etc. come in.

    You can refer to libdce and ipumm for the handshaking between the M4 and the A15.
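
    Purely for illustration (every name below is invented; this is not the actual libdce/ipumm API), the IPU-side shim would have to turn the codec's data-sync callback into an IPC message and block until the A15 answers:

        /* Hypothetical M4-side shim -- all names are made up.
         * The codec calls this when it needs more input; the shim
         * forwards the request to the A15 over the rpmsg-based DCE
         * transport and waits for the reply. */
        static Void ipu_getDataFxn(XDM_DataSyncHandle h, XDM_DataSyncDesc *desc)
        {
            DceDataSyncMsg msg = { .event = DCE_EVENT_GETDATA, .handle = h };

            dce_ipc_send(&msg);         /* message to the host (A15)      */
            dce_ipc_wait_reply(&msg);   /* block until A15 supplies data  */

            desc->numBlocks  = msg.numBlocks;   /* describe the new chunk */
            desc->baseAddr   = msg.baseAddr;    /* (IVA-HD view address)  */
            desc->blockSizes = msg.blockSizes;
        }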

    Ram

  • Thank you.  As far as that document goes, I came to understand where it fits in after reading through it yesterday.  Going through the ipumm repo, can you recommend any changes that would have to be made to rpmsg?  You mentioned it above, but it looks to be just an inter-processor communication driver, so if the IVA-HD can handle slice decoding and libdce expects slice decoding, wouldn't rpmsg simply keep doing what it already does?

    As for changes to libdce, would I more or less have to alter the codec algorithm it requests so that it performs slice decoding, and ensure that both input and output are set to slice?
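
    i.e., when creating the codec through libdce, something along these lines?  (A sketch only; I'm assuming the usual dce_alloc()/VIDDEC3_create() flow, and the engine/codec names are my guesses, not verified.)

        #include <dce.h>                 /* libdce; header name varies by SDK */
        #include <ti/xdais/dm/ividdec3.h>

        Engine_Error ec;
        Engine_Handle engine = Engine_open("ivahd_vidsvr", NULL, &ec);

        IVIDDEC3_Params *params = dce_alloc(sizeof(IVIDDEC3_Params));
        params->size           = sizeof(IVIDDEC3_Params);
        params->inputDataMode  = IVIDEO_FIXEDLENGTH;   /* the modes in question */
        params->outputDataMode = IVIDEO_ENTIREFRAME;   /* (or a partial mode?)  */
        /* ... remaining fields as the Ducati plugin already sets them ... */

        VIDDEC3_Handle codec = VIDDEC3_create(engine, "ivahd_mjpegvdec", params);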

  • Furthermore, if I were to split a JPEG into fixed-size slices to transfer to the ready codec, what would the process be for pushing each slice into the decoder?  Would I send a slice, wait for a 'completed' message from the decoder, and then send the next slice?
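
    In other words, would the application-side loop look roughly like the following?  (Pseudocode: send_slice(), wait_for_consumed(), and wait_for_frame_done() are placeholders for whatever the real callback/IPC path provides.)

        #define CHUNK 8192                      /* 8K fixed-length units */
        #define MIN(a, b) ((a) < (b) ? (a) : (b))

        /* Hypothetical feed loop for one JPEG frame. */
        for (size_t off = 0; off < jpeg_size; off += CHUNK) {
            size_t len = MIN(CHUNK, jpeg_size - off);

            send_slice(codec, jpeg_buf + off, len); /* hand over one chunk */
            wait_for_consumed(codec);               /* decoder asks again  */
        }
        wait_for_frame_done(codec);                 /* full frame decoded  */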