SK-AM68: Edge AI Studio Model Composer detects device and camera, but Live Preview shows a white camera feed with no error.

Part Number: SK-AM68
Other Parts Discussed in Thread: AM68

Hello,

I am learning to use the Edge AI Model Composer, so for now I just tried to use a sample project: Classification of animals.

I have successfully trained, compiled, and deployed the project. I followed the given UART connection steps, including the Windows driver setup, and Edge AI Studio can detect the camera connected to the AM68. Yet when I open Live Preview, the camera window stays white, and as far as I can tell there is no error message in the log. For context, I tested my camera and board with the sample apps before I started using Edge AI Studio, and they all work well. The log from the Live Preview is as follows:

-----------

libtidl_onnxrt_EP loaded 0x198df740
Final number of subgraphs created are : 1, - Offloaded Nodes - 289, Total Nodes - 289
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=4) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
   275.331005 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
   275.334544 s:  VX_ZONE_INIT:Enabled
   275.334555 s:  VX_ZONE_ERROR:Enabled
   275.334558 s:  VX_ZONE_WARNING:Enabled
   275.335644 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
   275.336509 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
 +--------------------------------------------------------------------------+
 | Object Detection Demo|
 +--------------------------------------------------------------------------+
 +--------------------------------------------------------------------------+
 | Input Src: /dev/video2|
 | Model Name: 0cd1c3a0|
 | Model Type: detection|
 +--------------------------------------------------------------------------+
 +--------------------------------------------------------------------------+
==========[INPUT PIPELINE(S)]==========

[PIPE-0]

v4l2src device=/dev/video2 pixel-aspect-ratio=None ! capsfilter caps="image/jpeg, width=(int)1280, height=(int)720;" ! jpegdec ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=split_01
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=True name=sen_0
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)416, height=(int)416;" ! tiovxdlpreproc out-pool-size=4 data-type=3 tensor-format=1 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=True name=pre_0


==========[OUTPUT PIPELINE]==========

appsrc do-timestamp=True format=3 block=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1280, height=(int)720;" ! jpegenc ! multipartmux boundary=spionisto ! rndbuffersize max=65000 ! udpsink sync=False clients=127.0.0.1:8081 host=127.0.0.1 port=8081

   279.650923 s:  VX_ZONE_INIT:[tivxHostDeInitLocal:107] De-Initialization Done for HOST !!!
   279.655515 s:  VX_ZONE_INIT:[tivxDeInitLocal:193] De-Initialization Done !!!
APP: Deinit ... !!!
REMOTE_SERVICE: Deinit ... !!!
REMOTE_SERVICE: Deinit ... Done !!!
IPC: Deinit ... !!!
IPC: DeInit ... Done !!!
MEM: Deinit ... !!!
DDR_SHARED_MEM: Alloc's: 13 alloc's of 14029636 bytes
DDR_SHARED_MEM: Free's : 13 free's  of 14029636 bytes
DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes
DDR_SHARED_MEM: Total size: 536870912 bytes
MEM: Deinit ... Done !!!
APP: Deinit ... Done !!!

--------------
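
In case it helps, I suppose the capture side of that pipeline could be checked by hand on the target with something like the sketch below. The device node and the JPEG caps are simply copied from the log above, so I am not sure they are actually right for my camera, and the fakesink is there only to see whether frames flow at all; the v4l2-ctl call would list what the camera node really advertises.

-----------

# Dry run of the capture side of the logged Live Preview pipeline
# (device node and caps copied from the log above; they may not be
#  correct for every camera). fakesink only checks that frames flow.
gst-launch-1.0 -v v4l2src device=/dev/video2 ! \
  'image/jpeg, width=1280, height=720' ! jpegdec ! fakesink

# List the formats the camera node actually advertises:
v4l2-ctl -d /dev/video2 --list-formats-ext

-----------

I can post the output of these commands if it would help.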

I would also like to use the deployed project, which is in the /opt/projects directory, but I am very new to this and do not know how to run it on the board. Can you help me out?

Thanks,

İbrahim Aşık

  • Hi,

    Could you verify whether my understanding of your question, as summarized below, is correct?

    You have used the Edge AI Model Composer tool and created a full pipeline for a classification project that classifies animals.

    Furthermore, you have an AM68 board that you are connected to via UART, with a camera (which type? USB, CSI, etc.) connected to the target.

    Moreover, you have verified from the logs that the camera is connected, and you ran the standard object detection demo that comes with the SDK.

    The demo ran successfully (as I infer from the logs posted above).

    You want to run inference with your custom model, whose artifacts were generated using the Model Composer tool, on the SK-AM68?

    Regards,

    Pratik

  • Hello,

    I use a CSI camera (Raspberry Pi Camera V2.1).

    I tested the edge AI demo apps that come with the SDK, using the same camera, before starting to use the Studio. Yes, they worked well, so I know my board and camera work.

    The logs I posted were not from the standard SDK demos, but from the Live Preview of Model Composer.

    I want to see the Live Preview before deploying the project generated by Model Composer, but although there is no error in the logs and the camera is detected by Edge AI Studio, I cannot see any camera feed in the stream window, nor the class detected for the animal captured by the camera.

    Of course, once I am able to test my model in Live Preview, I hope to run inference with it on the board as well. I have already deployed the model to the board; I just do not know how to run it (preferably with a UI similar to the demo apps, with the video stream shown and the detected classes written on it or in the logs).
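
    My guess, going by how the stock demos are launched, is something like the sketch below; the launcher path and config name are taken from the edgeai-gst-apps shipped with my SDK, and the /opt/projects/<project_name> layout is only my assumption, so please correct me if a Model Composer project is meant to be run differently.

    -----------

    # How the stock demos are started (this already works for me):
    cd /opt/edgeai-gst-apps/apps_python
    ./app_edgeai.py ../configs/image_classification.yaml

    # My guess for the deployed project: copy a stock config, point its
    # model_path entry at the artifacts under /opt/projects/<project_name>/
    # (placeholder name), and launch it the same way:
    cp ../configs/image_classification.yaml ../configs/animal_classification.yaml
    # ...edit animal_classification.yaml so the model_path points at the
    #    deployed project folder...
    ./app_edgeai.py ../configs/animal_classification.yaml

    -----------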

    Thanks,

    İbrahim

  • So, long story short, what should I do at this point, where the Live Preview shows no stream for some reason:

  • Hi,

    Thanks for posting a detailed elaboration of the issue.

    We will get back to you on this.

    Regards,

    Pratik