Linux: How to save h264 as yuv

Tool/software: Linux

1. I use the following command to decode h264 into yuv format:
gst-launch-1.0 -v filesrc location=airshow_p352x288_nv12.h264 num-buffers=600 ! queue ! h264parse ! ducatih264dec ! queue ! filesink location=1.yuv

2. The debug information is as follows:

Setting pipeline to PAUSED ...

Pipeline is PREROLLING ...

/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)352, height=(int)288, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au

/GstPipeline:pipeline0/GstDucatiH264Dec:ducatih264dec0.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1

/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1

/GstPipeline:pipeline0/GstDucatiH264Dec:ducatih264dec0.GstPad:sink: caps = video/x-h264, width=(int)352, height=(int)288, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au

/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1

/GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1

/GstPipeline:pipeline0/GstDucatiH264Dec:ducatih264dec0.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1, max-ref-frames=(int)19

/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1, max-ref-frames=(int)19

/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1, max-ref-frames=(int)19

/GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)512, height=(int)384, framerate=(fraction)0/1, max-ref-frames=(int)19

Pipeline is PREROLLED ...

Setting pipeline to PLAYING ...

New clock: GstSystemClock

Got EOS from element "pipeline0".

Execution ended after 0:00:01.493697666

Setting pipeline to PAUSED ...

Setting pipeline to READY ...

Setting pipeline to NULL ...

Freeing pipeline ...
I found that with ducatih264dec the resolution changes from 352x288 to 512x384. What is the reason for this?
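
For reference, the file written by this pipeline is raw NV12 at the padded 512x384 resolution. A minimal sketch for previewing it with a desktop GStreamer install (assumptions: rawvideoparse is available from gst-plugins-bad, on builds older than 1.12 the element is called videoparse instead, and the 30/1 framerate is only there to drive playback):

gst-launch-1.0 filesrc location=1.yuv ! rawvideoparse format=nv12 width=512 height=384 framerate=30/1 ! videoconvert ! autovideosink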

  • The decoder's output will have padding on all 4 sides (left, right, top and bottom), and the amount of padding varies with the codec. Hence you see the resolution 512x384, which is the padded output from the decoder. You can use vpe to crop the padded region.
  • Hello:

    Could you give an example command that uses vpe to crop?
  • Just add vpe after ducatih264dec; that should work:

    ducatih264dec ! vpe ! filesink

    You can add a caps filter after vpe for scaling or format conversion if required, something like below (a complete pipeline is sketched at the end of this thread):

    ducatih264dec ! vpe ! 'video/x-raw,width=480,height=320' ! filesink
  • Hello:

    After vpe processing, the image is normal.

    I still have a few more questions.

    1. Is the decoder's output video enlarged?

    2. What are the rules for the decoder's output resolution?

    3. As shown below, I used the YUVviewerOlus tool to play the yuv file (the yuv file without vpe processing).

    I found that the image without vpe processing is surrounded by green, but when using vpe the image is normal.

    How does vpe handle the image: is it scaled or cropped?
  • 1) The decoder's output is not enlarged. It is only padded with a green region on all 4 sides. This is expected decoder behavior.
    2) Refer to git.ti.com/.../viddec3test.c and look for padded_width and padded_height; the decoder expects this as the resolution of the output buffer (a rough sketch of the calculation is shown below).
    3) The padding info (refer to the topLeft and bottomRight co-ordinates in viddec3test.c) is the information from the decoder about the cropping needed to extract the active frame region from the whole padded region.
    vpe will handle this cropping, and it can also scale to a different resolution or convert to a different fourcc format like yuyv or rgb.
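
Regarding the output-resolution rule in point 2 above, here is a rough, hedged sketch of the H.264 padding calculation as described in viddec3test.c (the PADX=32 / PADY=24 constants and the 128-pixel width alignment are my assumptions from reading that file; please verify against the actual source):

# Hypothetical reconstruction of the padded-resolution rule for H.264
WIDTH=352; HEIGHT=288; PADX=32; PADY=24
PADDED_WIDTH=$(( ((WIDTH + 2*PADX + 127) / 128) * 128 ))    # 352 + 64, rounded up to a multiple of 128 -> 512
PADDED_HEIGHT=$(( HEIGHT + 4*PADY ))                        # 288 + 96 -> 384
echo "${PADDED_WIDTH}x${PADDED_HEIGHT}"                     # prints 512x384, matching the caps in the log above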
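
Finally, a complete example of the cropping pipeline suggested earlier in this thread, assuming vpe honours a downstream caps filter that requests the active 352x288 region (a sketch, not a verified command: the exact caps accepted may vary with the SDK version, and the output filename is arbitrary):

gst-launch-1.0 -v filesrc location=airshow_p352x288_nv12.h264 num-buffers=600 ! queue ! h264parse ! ducatih264dec ! vpe ! 'video/x-raw,format=NV12,width=352,height=288' ! filesink location=1_cropped.yuv

With the crop applied, the resulting file should be viewable at 352x288 without the green padded border.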