I'm trying to stream live video between two DM365s, and at the receiving end it's prohibitively slow. I was wondering if anyone could help me construct a better GStreamer pipeline that will work for true live streaming, or suggest code modifications if necessary.
I am using version 4.02 of the SDK, developing on an Ubuntu 10.04 host.
Here are my boot arguments on the 365s:
Sender:
bootargs=console=ttyS0,115200n8 rw mem=65M video=davincifb:vid0=OFF:vid1=OFF:osd0=640x480x32,4050K dm365_imp.oper_mode=0 davinci_enc_mngr.ch0_output=LCD davinci_enc_mngr.ch0_mode=640x480 vpfe_capture.cont_bufoffset=0 vpfe_capture.cont_bufsize=6291456 root=/dev/nfs nfsroot=<nfs host>:/home/dm365 ip=dhcp
(Note: The sender's GStreamer and DMAI code was modified to support displaying video on an LCD, but this is not the board I'm trying to display on at this time.)
Receiver:
bootargs=console=ttyS0,115200n8 rw mem=65M video=davincifb:vid0=OFF:vid1=OFF:osd0=720x576x16,4050K dm365_imp.oper_mode=0 davinci_capture.device_type=4 vpfe_capture.cont_bufsize=6291456 davinci_enc_mngr.ch0_output=COMPOSITE davinci_enc_mngr.ch0_mode=NTSC root=/dev/nfs nfsroot=<nfs host>:/home/dm365 ip=dhcp
Here are my gst-launch commands:
Sender (with a camera hooked up to the composite input):
gst-launch -v \
v4l2src input-src=composite always-copy=false \
! video/x-raw-yuv, format=\(fourcc\)NV12, framerate=\(fraction\)30000/1001, \
width=640, height=480 \
! dmaiperf \
! queue \
! TIVidenc1 codecName=h264enc engineName=codecServer \
! rtph264pay \
! udpsink host=<receiver IP> port=5000
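(One sender variant I plan to test, in case part of the startup delay is the receiver waiting for H.264 parameter sets: newer 0.10 releases of gst-plugins-good add a config-interval property to rtph264pay that re-sends SPS/PPS periodically, so a receiver that starts after the sender doesn't have to wait for the next in-band copy. This is a sketch and assumes the property exists in the SDK's GStreamer build; `gst-inspect rtph264pay` will confirm.)

```shell
# Sender sketch: identical to the pipeline above, except rtph264pay
# re-sends SPS/PPS every second via config-interval. This property only
# exists in later 0.10 releases of gst-plugins-good -- check with
# gst-inspect before relying on it.
gst-launch -v \
v4l2src input-src=composite always-copy=false \
! video/x-raw-yuv, format=\(fourcc\)NV12, framerate=\(fraction\)30000/1001, \
width=640, height=480 \
! queue \
! TIVidenc1 codecName=h264enc engineName=codecServer \
! rtph264pay config-interval=1 \
! udpsink host=<receiver IP> port=5000
```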
Receiver:
CAPS="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96"
gst-launch -v \
udpsrc port=5000 caps="$CAPS" \
! rtph264depay \
! TIViddec2 displayBuffer=true codecName=h264dec engineName=codecServer \
! queue \
! TIDmaiVideoSink \
videoStd=D1_NTSC \
videoOutput=composite \
sync=false \
contiguousInputFrame=true
I don't see anything odd (no warnings/errors) in the buffer display and caps output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:src: caps = video/x-h264
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:sink: caps = video/x-h264
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)1280, height=(int)720
[B | ]
[RW | ]
[R-W | ]
[R--W | ]
[R---W | ]
[R----W | ]
[R-----W | ]
[R------W | ]
[R-------W | ]
[R--------W | ]
[R---------W | ]
[R----------W | ]
[R-----------W | ]
[R------------W | ]
[R-------------W | ]
[R--------------W | ]
[R---------------W | ]
[R----------------W | ]
[R-----------------W | ]
[R------------------W | ]
[R-------------------W | ]
[R--------------------W| ]
[R=====================W ]
[R=====================|W ]
[R=====================|=W ]
[R=====================|==W ]
[R=====================|===W ]
[R=====================|====W ]
/GstPipeline:pipeline0/GstTIViddec2:tividdec20.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480
/GstPipeline:pipeline0/GstTIDmaiVideoSink:tidmaivideosink0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480
At this point, the receiver displays video, but it takes around 45 seconds for video to display, and there's a 1-2 second delay between printing each line of the buffer status!
I found this thread explaining how the TIViddec2 buffer works:
https://gstreamer.ti.com/gf/project/gstreamer_ti/forum/%3C/?_forum_action=ForumMessageBrowse&thread_id=3676&action=ForumBrowse&forum_id=187
If I am understanding correctly, it sounds like nothing will display on the receiver until TIViddec2's internal buffer has filled up enough, which is what happens at the "|====W" line when the video plays.
However, again, it takes around 45 seconds for the buffer to fill up and for the video to display. This is way, way too slow for a "live" stream. When I play a pre-recorded video file on the 365 (over NFS), the video buffer is filled up immediately (or close enough), and the video displays right away. When I send a live video stream from a 365 to my Ubuntu host, the stream experiences a delay of only around half a second, one second maximum.
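If the holdup is before the decoder rather than inside it, one experiment is to make the RTP jitter buffer explicit and shrink its latency. The verbose output above shows a GstRtpJitterBuffer in the pipeline, and that element exposes a latency property in milliseconds (the 0.10-era default is 200 ms). A sketch of the receiver with an explicit jitter buffer, assuming the element is available standalone in this build and that 100 ms of jitter tolerance is acceptable:

```shell
# Receiver sketch with an explicit, lower-latency jitter buffer.
# latency is in milliseconds; 100 here is an experiment, not a known fix.
CAPS="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96"
gst-launch -v \
udpsrc port=5000 caps="$CAPS" \
! rtpjitterbuffer latency=100 \
! rtph264depay \
! TIViddec2 codecName=h264dec engineName=codecServer \
! queue \
! TIDmaiVideoSink videoStd=D1_NTSC videoOutput=composite \
sync=false contiguousInputFrame=true
```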
Here's my command to decode a movie on a 365:
gst-launch -v \
filesrc location=/usr/share/ti/data/videos/davincieffect.264 \
! TIViddec2 codecName=h264dec engineName=codecServer \
! dmaiperf print-arm-load=TRUE \
! TIDmaiVideoSink useUserptrBufs=TRUE \
displayStd=v4l2 displayDevice=/dev/video2 \
videoStd=D1_NTSC videoOutput=composite sync=false
Here's my command to receive live video (from a 365) on the Ubuntu host:
CAPS="application/x-rtp, format=(fourcc)NV12, framerate=(fraction)30000/1001, width=(int)640, height=(int)480"
gst-launch -v \
udpsrc port=5000 caps="$CAPS" \
! rtph264depay \
! ffdec_h264 \
! ffmpegcolorspace \
! queue \
! autovideosink sync=false
Since there is no delay when sending from a 365 to the Ubuntu host, and no delay when playing a pre-recorded video on a 365 (i.e. I know the decoder's buffer is capable of filling up quickly), I am baffled by the slow performance when streaming between two 365s. Is there an intermediate buffer that's waiting to fill up before passing data along to the decoder? Is the issue in the udpsrc or rtph264depay element on the receiver? Or is this an issue with TIViddec2? Is there a way to configure TIViddec2 to be more responsive? Or should I try a different decoder?
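Before patching anything, I also plan to check whether TIViddec2 exposes a knob for its input circular buffer; if it doesn't, the fallback would presumably be modifying the plugin source (gsttividdec2.c in the gstreamer_ti tree, if I have the file name right). I'm not assuming any particular property name exists, so the first step is just to inspect:

```shell
# Dump the TI decoder element's properties and pick out anything
# buffer-related. This only shows what is tunable -- no specific
# property name is assumed to exist.
gst-inspect TIViddec2 | grep -B1 -A2 -i buf
```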
Any suggestions, including code modifications, are greatly appreciated!
(P.S. Sorry if this should have gone in a GStreamer forum. I recently submitted a similar question to support and was directed to this forum.)