
Understanding AVSYNC


Hello,

I am trying to understand the avsync concept as implemented in RDK. Is the PTS timestamp used for avsync? What else is used for avsync?
I would appreciate it if you could give some pointers about the avsync concept in RDK.
I am trying to determine whether this feature must be used in my project, which only requires ~200 ms of delay, and where lip-sync errors are not a concern (the audio is related to the video, but more as a narrator describing the video).

Thank you,

Ran

  • Avsync in RDK provides the following functionality:

    - Provides audio <-> video and video <-> video sync functionality
        - Synchronization of system time to audio reference clock
        - Synchronization of system time to video reference clock
        - No clock adjust mode
    - Trick Play functionality
        - Pause
        - Slow Play
        - Fast Play
        - Step Forward
        - Scan mode playback (I-frame only display)
        - Seek

    As you are aware, in a multicore SoC like the 816x, video playback happens on one core (VPSS_M3) and audio playback happens on another core (A8/C674 DSP).

    Avsync schedules playback of a video frame when the PTS of the frame matches the system time. The PTS is set by the application when feeding the bitstream to the mcfw, and it should be converted to the msec scale.
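    For a standard MPEG-2 TS/PES stream, where the PTS runs on a 90 kHz clock, the conversion to the msec scale is just a division. A minimal sketch (not actual RDK code; other containers may use a different timebase):

```c
#include <stdint.h>

/* Convert a 90 kHz MPEG PTS value to a millisecond timestamp.
 * 90 kHz is the standard MPEG-2 TS/PES clock; if the stream's
 * container uses a different timebase, scale accordingly. */
static uint64_t pts90k_to_msec(uint64_t pts90k)
{
    return pts90k / 90;   /* 90 ticks per millisecond */
}
```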

    The system time is maintained in shared memory and is common across all cores. Avsync also ensures the system time gets adjusted if the audio PTS leads or lags the current system time beyond a threshold.
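    That threshold adjustment can be sketched as below. The variable names and the threshold value are illustrative, not the RDK's (and in RDK the clock lives in shared memory visible to all cores; a plain variable stands in here):

```c
#include <stdint.h>
#include <stdlib.h>

#define RESYNC_THRESHOLD_MS 100   /* hypothetical threshold */

/* Shared wall clock in milliseconds (stand-in for the shared-memory
 * system time that all cores see in RDK). */
static uint64_t g_sysTimeMs;

/* If the PTS of the audio frame currently being played leads or lags
 * the system time beyond the threshold, snap the system time back to
 * the audio reference clock. */
static void avsync_adjust_systime(uint64_t audioPtsMs)
{
    int64_t delta = (int64_t)audioPtsMs - (int64_t)g_sysTimeMs;
    if (llabs((long long)delta) > RESYNC_THRESHOLD_MS)
        g_sysTimeMs = audioPtsMs;
}
```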

    Video avsync functionality is integrated into the SwMs link, which means you need the SwMs link in your data flow if you want to use avsync functionality.

    You can disable avsync functionality as long as the difference between audio playout and video playout stays within the ~200 ms you mentioned. The total delay in the video processing pipeline for decode -> swms -> display would be about 100 ms. If you see a sync issue, or if you require any trick play functionality, you would have to enable avsync.

  • Hi Badri,

    Thank you very much for the answer. 
    I would like to ask:

    1.

    >Avsync schedules playback of video frame when the PTS of the video frame matches the system time.

    Does it mean that when using AVSYNC a much larger queue has to be allocated for storing buffers, e.g. should more buffers be allocated in SwMs for avsync?

    2.

    >The PTS is set by the application when feeding bitstream to the mcfw

    The PTS is set on the encoder side, as I understand it, so shouldn't it be that the PTS is set by the application when receiving the bitstream from the mcfw (instead of when feeding it)?

    3.

    >Synchronization of system time to audio reference clock

    Isn't the synchronization between system time and the PCR? What is meant by "audio reference clock" or "video reference clock"?

    4.

    >The total delay in the video processing pipeline for decode -> swms -> display would be about 100 ms

    >You can disable avsync functionality as long as difference between audio playout and video playout is within ~200ms as you mentioned.
    Most decoder chains in TI's examples are decode -> swms -> display, so the delay should be limited to 100 ms, as you said. Why then is there a need for AVSYNC in such chains? Is it possible that AV drift will occur during long streaming?

    5.

    What is done with the DTS (decode timestamp)?

    Thank you very much for your time!

    Ran

  • Does it mean that when using AVSYNC a much larger queue has to be allocated for storing buffers, e.g. should more buffers be allocated in SwMs for avsync?

    - I don't see why this would be the case. Avsync doesn't require a larger queue. When SwMs has to compose a new frame, it checks its input queue for each channel to determine if the next frame is ready for display. Three decisions are possible, based on the delta between the frame PTS and the current system time:

        - Play (Frame is selected for display)

        - Skip (Frame is dropped, and the next frame in the queue is checked to see if it is suitable for display)

        - Replay (Frame is left in the queue, and the previous frame is replayed)
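    The three decisions can be sketched as follows. The threshold names and values are illustrative, not the RDK defaults:

```c
#include <stdint.h>

typedef enum { AVSYNC_PLAY, AVSYNC_SKIP, AVSYNC_REPLAY } AvsyncDecision;

#define LEAD_THRESHOLD_MS 15   /* hypothetical tolerance values */
#define LAG_THRESHOLD_MS  15

/* Decide what to do with the frame at the head of a channel's queue,
 * based on the delta between its PTS and the current system time. */
static AvsyncDecision avsync_decide(uint64_t framePtsMs, uint64_t sysTimeMs)
{
    int64_t delta = (int64_t)framePtsMs - (int64_t)sysTimeMs;

    if (delta > LEAD_THRESHOLD_MS)
        return AVSYNC_REPLAY;  /* not yet due: keep it queued, repeat previous frame */
    if (delta < -LAG_THRESHOLD_MS)
        return AVSYNC_SKIP;    /* late: drop it and check the next frame */
    return AVSYNC_PLAY;        /* within tolerance: select it for display */
}
```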

    The PTS is set on the encoder side, as I understand it, so shouldn't it be that the PTS is set by the application when receiving the bitstream from the mcfw (instead of when feeding it)?

     - Yes, on the encoder side the capture link will timestamp the frame, so frames received from the mcfw will already have the timestamp set. On the decoder side, the application should set the PTS. Frames for decoding may be received from any source, such as the network or SATA, so it is the application that should set the PTS (converted to the msec scale) before giving the frames to the mcfw for decoding.
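    A sketch of that decode-side stamping, using a hypothetical buffer struct in place of the real mcfw bitstream descriptor (the actual RDK structure and submit call differ):

```c
#include <stdint.h>

/* Hypothetical stand-in for an mcfw bitstream buffer descriptor. */
typedef struct {
    uint8_t *data;         /* compressed frame from network/SATA/... */
    uint32_t numBytes;
    uint64_t timestampMs;  /* PTS in milliseconds, set by the app   */
} BitstreamBuf;

/* Stamp a buffer with its PTS, converted from the container's
 * timebase (90 kHz for MPEG streams) to the msec scale, before
 * handing it over for decoding. */
static void app_set_pts(BitstreamBuf *buf, uint64_t pts90k)
{
    buf->timestampMs = pts90k / 90;
}
```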

    Isn't the synchronization between system time and the PCR? What is meant by "audio reference clock" or "video reference clock"?

      -- When a drift is seen between the system time and the reference clock, the system time is readjusted to the reference clock. The reference clock may be audio (if the system time drifts from the PTS of the audio frame being played, the system time is adjusted). We don't support PCR decoding, as TS demux is not part of the mcfw.

    Why then is there a need for AVSYNC in such chains? Is it possible that AV drift will occur during long streaming?

    -- Avsync is required for the following reasons:

        - The synchronization requirement between audio and video is much tighter than 200 ms (around 45 ms).

        - Support for trick play features requires enabling avsync.

        - When doing multi-channel video display, video frames across channels need to be synchronized exactly; for example, all video channels should display frame #100 in the sequence at the same time.

        - Adjustment for clock drift between the source of the video stream and the TI816x doing the playback.

    What is done with the DTS (decode timestamp)?

      - DTS is not used in RDK.