
AM69A: Alternative to the imshow function


Hi,

I have an AM69A board and have connected a monitor using a DP cable.

I am developing an application where I take input from a USB camera and do some processing. Now I want to display the result on the monitor that I have connected.

When I call the cv2.imshow function, I get this error:

Traceback (most recent call last):
  File "/opt/edgeai-gst-apps/akhilesh/hackathon/driver.py", line 147, in <module>
    cv2.imshow("image ", frame)
cv2.error: OpenCV(4.5.5) /usr/src/debug/opencv/4.5.5-r0/git/modules/highgui/src/window.cpp:1268: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'

If this function is not implemented, is there any alternative to it?

How can I display my results on the monitor without using imshow? I have to use Python code, and there is no alternative for the postprocessing I am doing.

Thanks

Akhilesh

  • Hi Akhilesh,

    I have a few starter-level questions.

    Is the OpenCV module installed on the target? And do you see this error only for this specific function?

  • Hi Pratik,

    It is already installed, as I can use other cv2 functions such as imread, resize, etc. It's only with imshow that I am seeing this error.

    I have not separately installed OpenCV; it comes with the AM69A SD card image 9.2.

    Thanks

    Akhilesh

    Thanks. I am not the expert on this; let me check with my team members to get some insights.

    Will update the thread.

    Thanks

  • Sure. Thanks Pratik.

Will update the thread; I am checking internally.

    Thanks

  • Hi Akhilesh,

    OpenCV's imshow cannot be used on the board directly, since the SDK's OpenCV build has no GUI backend.

    For displaying an image, what I suggest is using a GStreamer pipeline with kmssink at the end.

    If you just want to display a static image, follow these steps (see the sketch after this list):

        1. Save the frame generated in your script as a JPEG image.

        2. Run this command on the AM69A terminal:

           gst-launch-1.0 multifilesrc location=*location_of_saved_jpg* ! jpegdec ! videoconvert ! kmssink driver-name=tidss sync=false
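
    For step 1, a minimal sketch, assuming the processed frame is already a BGR NumPy array named frame (OpenCV's imwrite convention); the output path here is illustrative:

    import cv2

    # Save the processed frame to disk; point multifilesrc's location= at this path.
    cv2.imwrite("/tmp/output/img_0.jpeg", frame)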

    In case you want the display to run live with your script (i.e. instead of saving frames and manually running the pipeline), there is a way to integrate a GStreamer pipeline directly with the Python script. If that is the case, I still recommend you try the above steps first, and then I can help you run the pipeline live from within the script.

    Regards,

    Abhay

  • Hi Abhay,

    Thanks for replying. I tested the above method and it's working fine. I would also like to display the images in real time and want to use GStreamer for that.

    Here is what I did:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline_str = ("appsrc name=source ! tiovxdlcolorconvert ! video/x-raw,format=NV12 ! kmssink driver-name=tidss sync=true")
    pipeline = Gst.parse_launch(pipeline_str)
    source = pipeline.get_by_name("source")
    pipeline.set_state(Gst.State.PLAYING)

    # xxxxxxxxxxxxxxxxxxxxx
    # code for image processing
    # output image: frame
    # xxxxxxxxxxxxxxxxxxxxx

    raw_frame = frame.flatten()
    gst_buffer = Gst.Buffer.new_allocate(None, len(raw_frame), None)
    gst_buffer.fill(0, raw_frame)  ## this takes around 300 ms to fill and is causing the delay
    caps = Gst.caps_from_string(f"video/x-raw,format=RGB,width={frame.shape[1]},height={frame.shape[0]}")
    source.set_property("caps", caps)
    source.emit("push-buffer", gst_buffer)

    gst_buffer.fill(0, raw_frame)  ## this takes around 300 ms to fill and is causing the delay

This function is the one interfering with real-time display. Do you have any other approach that could minimize the delay? All the other functions take hardly any time.

    Let me know.

    Thanks

    Akhilesh

    gst_buffer.fill essentially copies the whole frame into a GStreamer buffer, which is not optimal when the frame is huge.
    Instead, I would suggest using a GStreamer pipeline inside the cv2.VideoWriter function. There is a way to embed a GStreamer pipeline directly into the OpenCV VideoWriter class. This should be more optimal.
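
    (As an aside on the copy overhead: a minimal sketch of one possible variant of the earlier appsrc snippet, using Gst.Buffer.new_wrapped to build the buffer directly from the frame bytes instead of allocate-then-fill. Whether this actually reduces the 300 ms cost on this platform would need measuring; it is not the approach recommended above.)

    # Variant of the push step from the earlier snippet; assumes `source`
    # and `frame` are defined as there. tobytes() still makes one copy,
    # but the separate allocate + fill pass is avoided.
    gst_buffer = Gst.Buffer.new_wrapped(frame.tobytes())
    source.emit("push-buffer", gst_buffer)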

Can you provide sample code? I am new to GStreamer.

Here is some sample code that uses VideoWriter with GStreamer:

    import time
    import cv2

    fps = 30
    frame_width = 640
    frame_height = 480

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    cap.set(cv2.CAP_PROP_FPS, fps)

    gst_str = "appsrc name=source ! tiovxdlcolorconvert ! video/x-raw,format=NV12 ! kmssink driver-name=tidss sync=false"
    out = cv2.VideoWriter(gst_str, 0, fps, (frame_width, frame_height), True)

    while True:
        ret, frame = cap.read()
        out.write(frame)

    out.release()
    cap.release()
  • Hi Abhay,

    I created test.py with the above code, with just one change:

    cap = cv2.VideoCapture("/dev/video-usb-cam0")

    I am not able to see anything on my monitor.

    Logs:

    root@am69a-sk:/opt/edgeai-gst-apps/akhilesh/hackathon# python3 test.py
    [ WARN:0@0.137] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (2401) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module source reported: Could not read from resource.
    [ WARN:0@0.137] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (1356) open OpenCV | GStreamer warning: unable to start pipeline
    [ WARN:0@0.137] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=6) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
       862.672320 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
       862.672392 s:  VX_ZONE_INIT:Enabled
       862.672404 s:  VX_ZONE_ERROR:Enabled
       862.672413 s:  VX_ZONE_WARNING:Enabled
       862.673026 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-0
       862.673149 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-1
       862.673261 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-2
       862.673365 s:  VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-3
       862.673410 s:  VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
       862.673953 s:  VX_ZONE_INIT:[tivxHostInitLocal:101] Initialization Done for HOST !!!
    [ WARN:0@0.221] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (2042) open OpenCV | GStreamer warning: GStreamer: cannot find appsrc in manual pipeline
    
       862.678701 s:  VX_ZONE_INIT:[tivxHostDeInitLocal:115] De-Initialization Done for HOST !!!
       862.683118 s:  VX_ZONE_INIT:[tivxDeInitLocal:204] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    IPC: Deinit ... !!!
    IPC: DeInit ... Done !!!
    MEM: Deinit ... !!!
    DDR_SHARED_MEM: Alloc's: 0 alloc's of 0 bytes
    DDR_SHARED_MEM: Free's : 0 free's  of 0 bytes
    DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes
    MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    
    

The previous solution was working (saving the image to a folder and using GStreamer to display from that folder), but this one is not; nothing is coming up on the display.

    Can you please check?

    Thanks

    The above OpenCV code was just an example I provided on how to use VideoWriter; you don't need the VideoCapture part. It was just boilerplate.

    I am assuming that you already have a frame generated and you just want to push it to the display:

    gst_str = "appsrc name=source ! video/x-raw, format=RGB ! tiovxdlcolorconvert ! video/x-raw,format=NV12 ! kmssink driver-name=tidss sync=false"
    
    out = cv2.VideoWriter(gst_str, 0, 30, (frame_width, frame_height), True)  ## Provide the frame_width and frame_heigh of your generated frame
    while 1:
    ## Create the frame however you're creating out.write(frame) ## Just push the frame to videowriter at the end

    out.release()
Yeah, you got it right. But that did not work in my application, so just to cross-check, I created test.py. It did not work that way either.

    Only this was working:

    gst-launch-1.0 multifilesrc location=/opt/edgeai-gst-apps/akhilesh/hackathon/output/img_%d.jpeg ! jpegdec ! videoconvert ! kmssink driver-name=tidss sync=false
     
The frame generated by your application is an RGB frame; can you tell me its shape?

This is the shape of the generated frame: (480, 640, 3).

  • Let me get back to you on this.

Sure. I tried converting it to grayscale too, but that did not work either.

  • Hi Abhay,

    Any update on this?

I tried manually converting the RGB image to NV12 format and then displaying it on the monitor using the code you provided, but that did not work either.
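
    (For reference, a minimal sketch of one way such a manual RGB-to-NV12 conversion could look with OpenCV and NumPy, assuming frame is an RGB array of shape (h, w, 3) with even dimensions; this is illustrative, not the exact code used here.)

    import cv2
    import numpy as np

    h, w = frame.shape[:2]
    # RGB -> planar I420 (YUV 4:2:0); result is a single-channel (h*3//2, w) array
    i420 = cv2.cvtColor(frame, cv2.COLOR_RGB2YUV_I420)
    y = i420[:h, :].reshape(-1)
    u = i420[h:h + h // 4, :].reshape(-1)
    v = i420[h + h // 4:, :].reshape(-1)
    # NV12 keeps the Y plane, followed by a single plane of interleaved U/V samples
    uv = np.empty(u.size + v.size, dtype=np.uint8)
    uv[0::2] = u
    uv[1::2] = v
    nv12 = np.concatenate([y, uv]).reshape(h * 3 // 2, w)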

    Thanks

Akhilesh, it seems we have to explicitly define the "caps" in the GStreamer pipeline. Can you quickly try this piece of code, which should display random pixels on the monitor?


    import cv2
    import numpy as np

    width = 640
    height = 480
    GST_PIPE = f'appsrc caps="video/x-raw,width={width},height={height},format=RGB,framerate=30/1" ! kmssink driver-name=tidss sync=false'
    video_out = cv2.VideoWriter(GST_PIPE, cv2.CAP_GSTREAMER, 0, 30, (width, height), True)

    while True:
        frame = np.random.randint(255, size=(height, width, 3), dtype=np.uint8)
        video_out.write(frame)
    video_out.release()

    That worked like butter.

    Thanks, Abhay, for this. It's working very well now.

    Regards

    Akhilesh