This thread has been locked.


TDA4VM: How to display images over DP?

Part Number: TDA4VM

Tool/software:

Hi,

My SDK version is Processor SDK RTOS J721E 08_06_00. I want to display .bmp images through the DP port. How can I do this in Python?

thanks

  • Hi,

    Due to a holiday, half of our team is currently out of office. Please expect a 1~2 day delay in responses.

    Apologies for the delay and thank you for your patience.

    -Fabiana

  • Hi, 

    This is all my code:

    ************************************************************

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst,GLib

    Gst.init(None)

    pipeline_str = ( "appsrc name=source ! tiovxdlcolorconvert ! video/x-raw,format=NV12 ! kmssink driver-name=tidss sync=true")
    pipeline = Gst.parse_launch(pipeline_str)

    gst_pipeline = 'multifilesrc'
    source = Gst.ElementFactory.make(gst_pipeline, 'file-source')
    source.set_property('location', '/opt/edge_ai_apps/data/images/0000.jpg') 


    decoder = Gst.ElementFactory.make('jpegdec', 'jpeg-decoder')
    encoder = Gst.ElementFactory.make('jpegdec', 'jpeg-encoder')


    encoder.set_property('idct-method', 0) 

    convert = Gst.ElementFactory.make('tiovxdlcolorconvert', 'video-converter') 

    sink = Gst.ElementFactory.make("kmssink", "sink")


    pipeline.add(source)
    pipeline.add(decoder)
    pipeline.add(encoder)
    pipeline.add(convert)
    pipeline.add(sink)
    source.link(decoder)
    decoder.link(encoder)
    encoder.link(convert)
    convert.link(sink)

    pipeline.set_state(Gst.State.PLAYING)


    main_loop = GLib.MainLoop()
    main_loop.run()

    **************************************************************

    The running result is as follows:

    It seems to loop continuously, but the monitor connected to the EVM over eDP shows nothing.

    How should I modify my code?

    thank you

  • Hi Maiunlei,

    As a start, I would recommend using the gst-launch-1.0 command-line tool to construct a demo pipeline.

    For example, the following starts a pipeline that reads a series of .jpg images and displays them on screen:

    • gst-launch-1.0 multifilesrc location=/opt/edgeai-test-data/images/%04d.jpg index=1 stop-index=-1 loop=True caps=image/jpeg,framerate=1/1 ! jpegdec ! videoscale qos=True ! tiovxdlcolorconvert ! video/x-raw, format=NV12, width=1920, height=1080 ! kmssink max-lateness=5000000 qos=True processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=31 force-modesetting=True fd=68

    Or a simpler version that abstracts away many GStreamer elements (more flexibility and compatibility, at the cost of sub-optimal performance):

    • gst-launch-1.0 multifilesrc location=/opt/edgeai-test-data/images/%04d.jpg index=1 stop-index=-1 loop=True caps=image/jpeg,framerate=1/1 ! jpegdec ! autovideosink

    If the first one does not work but the second one does, either there is an issue with the elements after jpegdec (missing or wrong properties, etc.), or an issue with the system (such as Weston holding the display and not letting kmssink access it, which can be solved with "systemctl stop weston").
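Since the original question asked about Python: once a gst-launch-1.0 description works from the command line, the same string can be handed to Gst.parse_launch. This is a minimal sketch, not SDK code; the import guard and helper names are mine, and paths/properties may need adjusting for your SDK version:

```python
# Minimal sketch: run a working gst-launch-1.0 description from Python.
# The pipeline string mirrors the command-line example above.
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    HAVE_GST = True
except (ImportError, ValueError):
    HAVE_GST = False  # PyGObject not installed; string helper still usable

def build_pipeline_str(location="/opt/edgeai-test-data/images/%04d.jpg",
                       width=1920, height=1080):
    """Assemble the same description string gst-launch-1.0 would parse."""
    return (
        f"multifilesrc location={location} index=1 stop-index=-1 loop=True "
        "caps=image/jpeg,framerate=1/1 ! jpegdec ! videoscale ! "
        f"video/x-raw,width={width},height={height} ! "
        "tiovxdlcolorconvert ! video/x-raw,format=NV12 ! "
        "kmssink driver-name=tidss"
    )

def run(description):
    """Parse, play, and block until an error or end-of-stream."""
    Gst.init(None)
    pipeline = Gst.parse_launch(description)
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE,
        Gst.MessageType.ERROR | Gst.MessageType.EOS)
    if msg and msg.type == Gst.MessageType.ERROR:
        err, debug = msg.parse_error()
        print(f"Pipeline error: {err.message} ({debug})")
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__" and HAVE_GST:
    run(build_pipeline_str())
```

Waiting on the bus instead of spinning a GLib.MainLoop also surfaces element errors (e.g. kmssink failing to open the display) instead of looping silently.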

    Regards,

    Takuma

  • Hi ,

    I ran them on the SK-TDA4VM development board. Neither of them seems to work. My SDK is Processor SDK Linux for Edge AI 08.04.00.


    I am a novice, so could you please help me solve this problem? Please provide as much detail as possible.

    Thank you.

  • Hi Maiunlei,

    I see... most likely this is due to changes in some of the GStreamer elements between SDK versions. I used the latest 10.0 SDK to create the pipeline.

    If you are just getting started, I would recommend running one of our out-of-box demos: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_04_00/exports/docs/running_simple_demos.html

    The default most likely uses a camera instead of a file source, so you may change the configuration file based on this documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_04_00/exports/docs/configuration_file.html

    Running the demos should print out a large string that can be used with gst-launch-1.0. It is also a good sanity check that the hardware and software are set up for streaming video output. You may take these pipelines as reference, or refer to this documentation that has the pipelines written out: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_04_00/exports/docs/data_flows.html

    Using the "Video source" example from the data flow documentation:

    1. You may take the first portion of the pipeline: filesrc location=/opt/edge_ai_apps/data/videos/video_0000_h264.mp4 ! qtdemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12  !
    2. Connect it to "autovideosink", which again has suboptimal performance but the best compatibility with other elements
    3. And you can get a pipeline like: "gst-launch-1.0 filesrc location=/opt/edge_ai_apps/data/videos/video_0000_h264.mp4 ! qtdemux ! h264parse ! v4l2h264dec ! video/x-raw, format=NV12  ! autovideosink"

    If all of the above fails, another alternative is to gradually build up the pipeline from the most basic GStreamer pipeline: "gst-launch-1.0 videotestsrc ! autovideosink". From there, you can replace the videotestsrc portion with your input pipeline, and/or replace the autovideosink with your output pipeline. This narrows the debugging down to just the input, or just the output.
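This divide-and-conquer idea can be sketched as plain string substitution; the helper name and candidate ordering here are illustrative, not part of the SDK:

```python
# Illustrative helper for the debug strategy above: start from the known-good
# "videotestsrc ! autovideosink" pipeline and swap in the real source or sink
# one at a time, so a failure points at exactly one half of the pipeline.

KNOWN_GOOD_SRC = "videotestsrc"
KNOWN_GOOD_SINK = "autovideosink"

def make_candidates(real_src, real_sink):
    """Return pipeline descriptions ordered from safest to full pipeline."""
    return [
        f"{KNOWN_GOOD_SRC} ! {KNOWN_GOOD_SINK}",  # basic sanity check
        f"{real_src} ! {KNOWN_GOOD_SINK}",        # exercises only the input
        f"{KNOWN_GOOD_SRC} ! {real_sink}",        # exercises only the output
        f"{real_src} ! {real_sink}",              # the full target pipeline
    ]

# Example using the "Video source" pipeline from the data flow docs:
candidates = make_candidates(
    "filesrc location=/opt/edge_ai_apps/data/videos/video_0000_h264.mp4 "
    "! qtdemux ! h264parse ! v4l2h264dec ! video/x-raw,format=NV12",
    "kmssink driver-name=tidss",
)
for c in candidates:
    print("gst-launch-1.0", c)
```

Running the candidates in order tells you whether the input half, the output half, or only their combination is failing.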

    Regards,

    Takuma

  • Hi ,

    Thank you for your help, I have succeeded. Here is the code:

    gst-launch-1.0 \
    multifilesrc location=/opt/edge_ai_apps/data/images/%04d.jpg index=1 stop-index=-1 loop=True caps=image/jpeg,framerate=1/1 ! \
    jpegdec ! videoscale ! video/x-raw, width=1920, height=1080 ! \
    tiovxdlcolorconvert ! video/x-raw, format=NV12 ! \
    tiovxdlcolorconvert target=1 out-pool-size=4 ! video/x-raw, format=RGB ! \
    kmssink sync=false

    But there is still one unresolved issue. I want it to play at 1 frame per second, yet playback is very fast. How can I control the frame rate? Setting framerate=1/1 or framerate=30/1 doesn't seem to have any effect. Can you help solve this problem?

    Also, is there any explanation or user manual for the GStreamer elements? I don't know the role each element plays on TDA4.

    thank you

  • Hi Maiunlei,

    Great to hear that you were able to create a GStreamer pipeline. As for next steps, you may use this pipeline in your code, and/or use appsrc/appsink to bring data in and out of the application. The Python/C++ examples can be used for reference: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_04_00/exports/docs/running_simple_demos.html 

    In terms of playback speed, the sync=false property in "kmssink sync=false" tells the sink to render buffers as quickly as possible instead of honoring their timestamps, so try removing it.
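Dropping sync=false is the fix suggested above; a further option (my own suggestion, not from this thread) is inserting videorate, a stock GStreamer element that duplicates or drops buffers to match the framerate in the downstream caps. Both can be sketched as a string rewrite; the helper name is hypothetical:

```python
# Hypothetical helper combining two frame-rate fixes: remove sync=false
# (so kmssink paces buffers by their timestamps), and insert videorate
# followed by a 1/1 framerate caps filter right after the decoder.

def fix_playback_rate(description):
    # Drop sync=false and normalize whitespace left behind.
    fixed = " ".join(description.replace("sync=false", "").split())
    if "videorate" not in fixed:
        # videorate only has an effect when a framerate is fixed downstream,
        # hence the explicit caps filter after it.
        fixed = fixed.replace(
            "jpegdec !", "jpegdec ! videorate ! video/x-raw,framerate=1/1 !", 1)
    return fixed

before = ("multifilesrc location=img/%04d.jpg caps=image/jpeg,framerate=1/1 ! "
          "jpegdec ! videoscale ! video/x-raw,width=1920,height=1080 ! "
          "kmssink sync=false driver-name=tidss")
print(fix_playback_rate(before))
```

With sync left at its default (true) and a fixed framerate negotiated, the sink should hold each image for the full frame duration.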

    As for documentation for the GStreamer elements: the standard elements are documented on gstreamer.freedesktop.org, and the TI-specific tiovx* elements are documented in the edgeai-gst-plugins project.

    Regards,

    Takuma

  • Hi Takuma,

    I am using J721E to display images. My SDK version is Processor SDK RTOS J721E 08_06_00. I used the command that previously succeeded on the SK-TDA4VM:

    gst-launch-1.0 \
    multifilesrc location=/opt/edgeai-test-data/images/%04d.jpg index=1 stop-index=-1 loop=True caps=image/jpeg,framerate=1/1 ! \
    jpegdec ! videoscale ! video/x-raw, width=1920, height=1080 ! \
    tiovxdlcolorconvert ! video/x-raw, format=NV12 ! \
    tiovxdlcolorconvert ! video/x-raw, format=RGB ! \
    kmssink sync=false driver-name=tidss

    But an error occurred.

    It seems there is an issue with the kmssink element. To verify, I tried running ./run_app_tidl.sh, and it works; the display is fine.

    Do these two commands use different display paths? Please help me solve this problem.

    Thank you

  • Hi Maiunlei,

    In general, our GStreamer demos are incompatible with our RTOS SDK. Please refer to this FAQ: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/10_00_00/exports/edgeai-docs/devices/TDA4VM/linux/faq.html

    The main reason the GStreamer pipeline does not work is that the display and capture drivers (in your particular case, the display driver) are hosted on the R5F instead of the A72 core.

    In theory, you may modify the uEnv.txt file in the boot partition to use the k3-j721e-edgeai-apps.dtbo overlay instead of the vision-apps overlay to switch from the RTOS drivers to the Linux drivers, but doing so will prevent run_app_tidl.sh from working, since it assumes the R5F driver.
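For reference, that overlay switch is done by editing uEnv.txt on the boot partition. The variable name and overlay paths below are illustrative only; check the uEnv.txt shipped with your SDK version for the exact spelling:

```
# uEnv.txt (boot partition) -- illustrative fragment only
# Before: the vision-apps overlay from the RTOS SDK (display/capture
#         hosted on R5F, as assumed by run_app_tidl.sh)
# After:  Linux/A72 display driver, as needed by kmssink
name_overlays=k3-j721e-edgeai-apps.dtbo
```

Remember to switch back to the vision-apps overlay before using the RTOS demos again.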

    Regards,

    Takuma