
PROCESSOR-SDK-AM62X: Gstreamer png encode

Part Number: PROCESSOR-SDK-AM62X
Other Parts Discussed in Thread: SK-AM62

Hello,

I'm trying to encode an input image to PNG using a GStreamer pipeline.
Here is my environment:

* SK-AM62 EVM (Rev E3)
* Processor Linux SDK ver 08.06.00.42

I'm using an OV5640 as the CSI camera and ran the following two commands:

1. $ gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw, width=640, height=480 ! jpegenc ! filesink location=/usr/test_640x480.jpg
2. $ gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw, width=640, height=480 ! pngenc ! filesink location=/usr/test_640x480.png

The only difference is the software encoder: the first command uses "jpeg" encoding, and the second uses "png" encoding.
The JPEG pipeline produced the expected result. However, I got the following error when I used "png" encoding:

* ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.

I believe the input size from the camera (640x480) is not the issue, but have you ever observed the above error?

Best Regards,

  • Hi Machida-san,

    Can you add a videoconvert element before pngenc in your pipeline?

    So your pipeline should look like this:

     gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw, width=640, height=480 ! videoconvert ! pngenc ! filesink location=/usr/test_640x480.png

    Let me know if this works. 

    Best Regards,

    Suren

  • Hello Suren-san,

    Thank you for your reply.
    I was able to capture an image by adding the "videoconvert" element.
    I have three additional questions about the settings.

    Q1. When I use "jpegenc", I do not need the "videoconvert" element, but it is required for "pngenc". Can you explain why?

    Q2. I'm using 640x480 as the capture image size. However, when I use other sizes such as 1280x720 or 1920x1080, I do not get the expected result.
    My CSI camera supports those settings, so the restriction seems to come from the pipeline configuration.
    Is it possible to use a larger size?

    Q3. When I run "jpegenc" or "pngenc", I have to interrupt the pipeline ("Ctrl+C") after the "New clock: GstSystemClock" log message.
    However, if the execution time is too short, the output file is not written correctly.
    It seems that even when the pipeline has finished its work, no message appears on the console.
    How can I determine the correct execution time?
    (Currently I wait about 30 s, but I think that is too long. On the other hand, about 10 s does not seem to be enough.)

    BR,

  • Hi Machida-san,

    1. Videoconvert is required because the PNG encoder only accepts the input formats shown below, and the CSI camera might not be providing one of them. Videoconvert converts the stream into one of these formats:

    video/x-raw:
             format: { RGBA, RGB, GRAY8, GRAY16_BE }
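    For reference, a rough illustration of what videoconvert does here (a sketch in plain Python, not the actual GStreamer implementation; it assumes the sensor outputs a 4:2:2 YUV format such as YUY2, which jpegenc can typically consume directly but pngenc cannot):

```python
# Sketch: convert one YUY2 macropixel (two pixels sharing one U/V pair)
# to two RGB pixels using the BT.601 conversion. videoconvert performs
# this kind of colorspace conversion so pngenc can receive RGB input.

def _clamp(v):
    # Keep each channel in the valid 8-bit range.
    return max(0, min(255, int(round(v))))

def yuy2_to_rgb(y0, u, y1, v):
    """Return the two RGB pixels encoded by one YUY2 macropixel."""
    def convert(y):
        c, d, e = y - 16, u - 128, v - 128
        return (_clamp(1.164 * c + 1.596 * e),
                _clamp(1.164 * c - 0.391 * d - 0.813 * e),
                _clamp(1.164 * c + 2.018 * d))
    return convert(y0), convert(y1)
```

    For example, a mid-gray sample (Y=128, U=V=128) maps to roughly (130, 130, 130) in RGB.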

    2. Are you initializing the camera with the required resolution each time with the media-ctl command?
    Please refer: https://dev.ti.com/tirex/explore/node?node=A__Afvqyi8mUm05676JZJ-UlQ__AM62-ACADEMY__uiYMDcq__LATEST


    3. You can use the num-buffers property of the v4l2src element. For example, with 100 buffers you would modify the pipeline like below:
    gst-launch-1.0 v4l2src device="/dev/video0" num-buffers=100 ! video/x-raw, width=640, height=480 ! videoconvert ! pngenc ! filesink location=/usr/test_640x480.png
    With this you won't have to send a signal interrupt to stop the encoding.
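    As a rule of thumb, a num-buffers-capped pipeline runs for roughly num-buffers divided by the camera frame rate, which also addresses the "how long do I wait" question. A small sketch (plain Python, illustrative only; the function names are my own):

```python
import math

def capture_duration_s(num_buffers, fps):
    """Approximate run time of a v4l2src pipeline capped with num-buffers."""
    return num_buffers / fps

def buffers_for_duration(seconds, fps):
    """Smallest num-buffers value that covers the requested duration."""
    return math.ceil(seconds * fps)
```

    For example, at 30 fps, num-buffers=100 runs for about 3.3 seconds, and covering 10 seconds takes num-buffers=300.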

    Best Regards,

    Suren

  • Hello Suren-san,

    I understood Answers 1 and 2.
    I was also able to change the input size by setting the camera parameters.

    I confirmed that the pipeline stops automatically when using "num-buffers" (no interrupt command needed).
    However, I'm not sure how a user should decide the buffer number. For example, since my use case is a still image, I tried "num-buffers=1".
    However, I could not get an image in that case. Is there a way to estimate a suitable value?

     BR,

  • Hi Machida-san,

    Does this help in capturing a single frame:

    gst-launch-1.0 v4l2src num-buffers=1 ! pngenc snapshot=true ! filesink location=capture.png

    or 

    gst-launch-1.0 v4l2src device="/dev/video0" num-buffers=100 ! video/x-raw, width=640, height=480, framerate=1/100000 ! videoconvert ! pngenc snapshot=true ! filesink location=/usr/test_640x480.png
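    A plausible reason num-buffers=1 yields no usable image is that the very first frames can be delivered before the sensor's auto-exposure and white balance have settled, so a few warm-up frames are usually captured and discarded (snapshot=true then makes pngenc encode a single frame and stop). The frame-selection idea, sketched in plain Python (the function name is my own):

```python
def select_snapshot(frames, warmup=5):
    """Skip the first `warmup` frames (sensor settling) and keep the next one."""
    if len(frames) <= warmup:
        raise ValueError("not enough frames captured to skip the warm-up period")
    return frames[warmup]
```

    With this idea, num-buffers should be at least warmup + 1 so at least one settled frame arrives.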

    Best Regards,

    Suren