I'm trying to encode a picture coming from a video camera and transmit it over the network. The encoding platform is the OMAP3530.

The pipeline I'm using:

gst-launch -v -e --gst-debug=2 v4l2src always-copy=false queue-size=4 do-timestamp=true ! 'video/x-raw-yuv,width=720,height=480,format=(fourcc)UYVY' ! TIVidenc1 codecName=mpeg4enc engineName=codecServer iColorSpace=UYVY rateControlPreset=1 bitRate=2000000 frameRate=25 resolution=720x480 displayBuffer=false ! rtpmp4vpay ! udpsink host=192.168.2.101 port=5000 sync=false

With this pipeline I get a good picture on the receiving end, but CPU usage is very high (over 80%). If I add "contiguousInputFrame=TRUE" to the encoder settings, CPU usage drops to about 60%, but on the receiving end I get something close to garbage, intertwined with the actual video.
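One thing I was considering trying, assuming the dmaiaccel element from the gstreamer_ti plugin is available in my build (I'm not sure it is on the OMAP3530), is copying the capture buffers into physically contiguous memory before the encoder, since my understanding is that contiguousInputFrame=TRUE expects exactly that:

```shell
# Hypothetical variant (untested): dmaiaccel copies each incoming buffer
# into physically contiguous (CMEM) memory so that TIVidenc1 can consume
# it directly when contiguousInputFrame=TRUE is set.
gst-launch -v -e v4l2src always-copy=false queue-size=4 do-timestamp=true \
  ! 'video/x-raw-yuv,width=720,height=480,format=(fourcc)UYVY' \
  ! dmaiaccel \
  ! TIVidenc1 codecName=mpeg4enc engineName=codecServer iColorSpace=UYVY \
      rateControlPreset=1 bitRate=2000000 frameRate=25 resolution=720x480 \
      contiguousInputFrame=TRUE displayBuffer=false \
  ! rtpmp4vpay ! udpsink host=192.168.2.101 port=5000 sync=false
```

I haven't been able to verify whether this is the intended usage, so I'd appreciate confirmation either way.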

Here is the pipeline on the receiving end:

export CAPS="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)5, config=(string)000001b005000001b509000001000000012000845d4c28b421e0a31f, ssrc=(guint)4140046796, payload=(int)96, clock-base=(guint)0, seqnum-base=(guint)0"

gst-launch -v udpsrc caps="$CAPS" port=5000 ! queue  ! rtpmp4vdepay ! ffdec_mpeg4  ! videoparse framerate=25/1 width=720 height=480 ! xvimagesink

Is "contiguousInputFrame=TRUE" supported on the OMAP3530? Or is there something I can change in my pipeline(s) to make it work?

Thanks,

Arik