I'm working on an application to transcode streaming video on the DM3730. The input is H.264 video at 740x480, 30 fps. My GStreamer pipeline can't keep up, and I'm not sure whether it's a limitation of the DM3730 or a problem with my pipeline.
Here is my pipeline:
gst-launch udpsrc multicast-group=239.255.0.1 port=1841 \
    ! mpegtsdemux ! video/x-h264 ! h264parse ! queue \
    ! TIViddec2 numOutputBufs=12 ! queue \
    ! TIVidenc1 codecName=h264enc engineName=codecServer rateControlPreset=3 \
      bitRate=358400 framerate=30/1 contiguousInputFrame=true \
    ! dmaiperf print-arm-load=true engine-name=codecServer ! queue \
    ! rtph264pay name=pay0 \
    ! udpsink host=239.255.0.2 port=8000
And the output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
INFO:
gsttidmaiperf.c(302): gst_dmaiperf_start (): /GstPipeline:pipeline0/GstDmaiperf:dmaiperf0:
Printing DSP load every 1 second...
Setting pipeline to PLAYING ...
New clock: MpegTSClock
INFO:
Timestamp: 0:09:40.920562747; bps: 0; fps: 0; CPU: 0; DSP: 79; mem_seg: DDR2; base: 0x87c42e80; size: 0x20000; maxblocklen: 0xd858; used: 0x127a8; mem_seg: DDRALGHEAP; base: 0x85a00000; size: 0x2000000; maxblocklen: 0x464190; used: 0x1b9bad0; mem_seg: L1DSRAM; base: 0x10f04000; size: 0x10000; maxblocklen: 0x0; used: 0x10000;
INFO:
Timestamp: 0:09:41.979125980; bps: 69202; fps: 15; CPU: 16; DSP: 97; mem_seg: DDR2; base: 0x87c42e80; size: 0x20000; maxblocklen: 0xd858; used: 0x127a8; mem_seg: DDRALGHEAP; base: 0x85a00000; size: 0x2000000; maxblocklen: 0x464190; used: 0x1b9bad0; mem_seg: L1DSRAM; base: 0x10f04000; size: 0x10000; maxblocklen: 0x0; used: 0x10000;
INFO:
Timestamp: 0:09:42.999542239; bps: 13252; fps: 14; CPU: 7; DSP: 98; mem_seg: DDR2; base: 0x87c42e80; size: 0x20000; maxblocklen: 0xd858; used: 0x127a8; mem_seg: DDRALGHEAP; base: 0x85a00000; size: 0x2000000; maxblocklen: 0x464190; used: 0x1b9bad0; mem_seg: L1DSRAM; base: 0x10f04000; size: 0x10000; maxblocklen: 0x0; used: 0x10000;
INFO:
Timestamp: 0:09:44.017120363; bps: 12517; fps: 14; CPU: 10; DSP: 97; mem_seg: DDR2; base: 0x87c42e80; size: 0x20000; maxblocklen: 0xd858; used: 0x127a8; mem_seg: DDRALGHEAP; base: 0x85a00000; size: 0x2000000; maxblocklen: 0x464190; used: 0x1b9bad0; mem_seg: L1DSRAM; base: 0x10f04000; size: 0x10000; maxblocklen: 0x0; used: 0x10000;
INFO:
Timestamp: 0:09:45.052001956; bps: 20894; fps: 14; CPU: 7; DSP: 98; mem_seg: DDR2; base: 0x87c42e80; size: 0x20000; maxblocklen: 0xd858; used: 0x127a8; mem_seg: DDRALGHEAP; base: 0x85a00000; size: 0x2000000; maxblocklen: 0x464190; used: 0x1b9bad0; mem_seg: L1DSRAM; base: 0x10f04000; size: 0x10000; maxblocklen: 0x0; used: 0x10000;
I've looked through the example GStreamer pipelines, but I couldn't find any that use hardware-accelerated decoding and encoding in the same pipeline.
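To try to isolate the bottleneck, the next thing I plan to test is a decode-only variant of the pipeline (a sketch, using the same elements as above with the encoder dropped and a fakesink in its place; I haven't run this yet), which should show whether TIViddec2 alone can sustain 30 fps:

```shell
# Decode-only test: remove TIVidenc1 and sink the raw decoded frames
# into fakesink, so dmaiperf reports the DSP load of the decoder alone.
gst-launch udpsrc multicast-group=239.255.0.1 port=1841 \
    ! mpegtsdemux ! video/x-h264 ! h264parse ! queue \
    ! TIViddec2 numOutputBufs=12 \
    ! dmaiperf print-arm-load=true engine-name=codecServer \
    ! fakesink silent=true
```

If the decode-only case stays near 30 fps with DSP headroom to spare, that would point at the encoder (or the decode+encode combination) rather than the decoder.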
Any advice would be appreciated.
Brian