SK-AM62A-LP: Maximum number of camera feeds

Part Number: SK-AM62A-LP

Hello,

How many cameras can the board support simultaneously? I am planning to use one Arducam V3Link Camera Kit for TI Development Boards, which has 4 CSI cameras connected together in one module, along with 4 USB cameras. Is that possible?

Also, I am using app_edgeai.py to access three USB camera feeds and one CSI camera feed. I am creating multiple flows with various input and output combinations. It works fine up to 6 flows; with more than that, a green screen appears. The framerate is also quite low. Can I get at least 8 feeds?

This is the YAML file:

title: "Multi Input, Multi Inference"
log_level: 2
inputs:
    input0:
        source: /dev/video-usb-cam0
        format: jpeg
        width: 1280
        height: 720
        framerate: 30
    input1:
        source: /dev/video-rpi-cam0
        subdev-id: /dev/v4l-rpi-subdev0
        width: 1920
        height: 1080
        format: rggb
        framerate: 30
            
    input2:
        source: /dev/video-usb-cam1
        format: jpeg
        width: 1280
        height: 720
        framerate: 30

    input3:
        source: /dev/video-usb-cam2
        format: jpeg
        width: 1280
        height: 720
        framerate: 30
    
    input4:
        source: /opt/edgeai-test-data/videos/video0_1280_768.h264
        format: h264
        width: 1280
        height: 768
        framerate: 30
        loop: True
models:
    model0:
        model_path: /opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320
        viz_threshold: 0.6
    model1:
        model_path: /opt/model_zoo/ONR-OD-8200-yolox-nano-lite-mmdet-coco-416x416
        viz_threshold: 0.6
    model2:
        model_path: /opt/model_zoo/ONR-CL-6360-regNetx-200mf
        topN: 5
    model3:
        model_path: /opt/model_zoo/ONR-SS-8610-deeplabv3lite-mobv2-ade20k32-512x512
        alpha: 0.4
    
    model4:
        model_path: /opt/model_zoo/ONR-KD-7060-human-pose-yolox-s-640x640
        viz_threshold: 0.6
    model5:
        model_path: /opt/model_zoo/ONR-OD-8420-yolox-s-lite-mmdet-widerface-640x640
        viz_threshold: 0.6
    model6:
        model_path: /opt/model_zoo/ONR-SS-8818-deeplabv3lite-mobv2-qat-robokit-768x432
        viz_threshold: 0.6
    model7:
        model_path: /opt/model_zoo/TVM-CL-3090-mobileNetV2-tv
        alpha: 0.4
outputs:
    output0:
        sink: kmssink
        width: 1920
        height: 1080
        overlay-perf-type: graph
    output1:
        sink: /opt/edgeai-test-data/output/output_video.mkv
        width: 1920
        height: 1080
    output2:
        sink: /opt/edgeai-test-data/output/output_image_%04d.jpg
        width: 1920
        height: 1080
    output3:
        sink: remote
        width: 1920
        height: 1080
        port: 8081
        host: 127.0.0.1
        encoding: jpeg
        overlay-perf-type: graph

flows:
    flow0: [input0,model0,output0,[0,30,480,270]]
    flow1: [input1,model2,output0,[480,30,480,270]]
    flow2: [input2,model4,output0,[960,30,480,270]]
    flow3: [input3,model3,output0,[1440,30,480,270]]
    flow4: [input4,model1,output0,[30,360,480,270]]
    # flow5: [input1,model5,output0,[480,360,480,270]]
    # flow6: [input0,model6,output0,[960,360,480,270]]
    # #flow7: [input1,model7,output0,[1440,360,480,270]]

Any help is really appreciated.

  • Hello Prabrit,

    First, I would like to point out the recently published application note "Developing Multiple-Camera Applications on AM6x", which contains details about handling the V3Link kit and multi-camera ML inference: https://www.ti.com/lit/an/spradh2/spradh2.pdf

    It seems that scaling down the frames is the limiting factor in your application. When receiving a 1920x1080 feed, each frame has to be scaled down three times using the hardware multiscaler accelerator. Figure 4-3 of the application note linked above shows the three places where these scaling operations occur.

    What is the load on MSC0 and MSC1 when you run the application with 6 flows? This should be shown in the performance graphs overlaid at the bottom of the screen.

    The first step I suggest is to use a lower-resolution input, such as 640x480. This might reduce the need for the downscaling operations; see the sketch below.
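
    As a rough sketch only, one of the USB camera inputs could be reconfigured along these lines (this assumes the camera advertises a 640x480 MJPEG mode, which you can confirm with v4l2-ctl --list-formats-ext on the corresponding /dev/video node):

    input0:
        source: /dev/video-usb-cam0    # same device node as in your config
        format: jpeg                   # MJPEG from the USB camera, as before
        width: 640                     # assumed 640x480 mode; verify with v4l2-ctl
        height: 480
        framerate: 30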

    Best regards,

    Qutaiba

  • Thank you for the response.