This thread has been locked.

AM68A: GStreamer pipeline fails with three 3280x2464 cameras streaming

Part Number: AM68A


Hello,

Issue with 3-Camera GStreamer Pipeline on TI Device

I’m working with a GStreamer pipeline that uses three IMX219 cameras (each at a resolution of 3280x2464) for object detection. The pipeline performs the following tasks:

  • Saves video recordings from all three cameras into separate folders.

  • Records audio from a single microphone and merges it into all three video recordings (i.e., one common audio track for all three).

  • Extracts detection data per camera.

  • Streams live video from all three cameras to a media server via RTMP (using rtmp2sink).

The issue I’m encountering occurs when running this pipeline with all three cameras. I receive the following error:

Error received from element stream_sink3: Connection error: connection closed remotely
Debugging information: /usr/src/debug/gstreamer1.0-plugins-bad/1.22.12/gst/rtmp2/gstrtmp2sink.c(1085): error_callback (): /GstPipeline:Video-Pipeline/GstRtmp2Sink:stream_sink3: domain g-io-error-quark, code 44

This error happens specifically with the third RTMP stream (stream_sink3). When the pipeline is limited to two cameras, it works without any issues.

Question:
Could this error be due to overloading of the TI device’s resources (e.g., CPU, memory, or I/O bandwidth) when handling three high-resolution camera streams, simultaneous audio processing, and RTMP streaming?

Thank you

  • Hi,

    Can you send the pipeline?

    What's the framerate of the videos?

    What does top say when you run the pipeline?

    Best,
    Jared
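For anyone following along, a quick way to capture the diagnostics requested above is sketched below; `./video_pipeline` is a placeholder for the compiled application name.

```shell
# Sketch: capture the diagnostics requested above while the pipeline runs.
# "./video_pipeline" is a placeholder for the compiled application name.
GST_DEBUG=2 ./video_pipeline > pipeline.log 2>&1 &
APP_PID=$!

# One batch-mode snapshot of per-thread CPU and memory usage
# (add -p "$APP_PID" to restrict it to the pipeline process):
top -H -b -n 1 > top_snapshot.txt

# Overall memory picture; CMA/shared-memory counters show up here too:
cat /proc/meminfo > meminfo_snapshot.txt

wait "$APP_PID" 2>/dev/null || true
```

A single `top -H -b -n 1` snapshot shows per-thread load, which helps tell CPU saturation apart from memory pressure.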


  • APP: Init ... !!!
    405.982632 s: MEM: Init ... !!!
    405.982681 s: MEM: Initialized DMA HEAP (fd=10) !!!
    405.982801 s: MEM: Init ... Done !!!
    405.982817 s: IPC: Init ... !!!
    406.024955 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
    406.029213 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
    406.029307 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
    406.029321 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
    406.029333 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
    406.030070 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0
    406.030219 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1
    406.030340 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2
    406.030459 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3
    406.030476 s: VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
    406.030484 s: VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO

    Number of subgraphs:1 , 129 nodes delegated out of 129 nodes


    Number of subgraphs:1 , 129 nodes delegated out of 129 nodes


    Number of subgraphs:1 , 129 nodes delegated out of 129 nodes

    Pipeline state changed from NULL to READY
    Pipeline state changed from READY to PAUSED
    407.231253 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.231303 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
    407.354002 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.354055 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
    407.494729 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.494775 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed

    Hello, thanks a lot for the response.

    I modified the code so that it only saves (no streaming) from the three cameras. The shared-memory allocation error persists, but the stream sink connection error no longer appears, so the issue doesn’t seem to be caused by streaming from three cameras alone.

    The error log I posted above appeared with both versions of the code.

    To investigate further, I ran two separate programs simultaneously: one saving and streaming from the first two cameras, which worked fine, and a second one handling the third camera alone (both saving and streaming); that’s when the error appeared in the terminal.

    To confirm, I also tested the third camera by itself, and it works without any issues.

    In conclusion, the problem arises only when the third camera is added to the combined setup. Do you know how to resolve this? I’ll share both the 3-camera save + stream code and the 3-camera save-only code below.

    First code: 3 cameras saving and streaming, along with audio
    Second code: 3 cameras saving, along with audio

    Since it's a memory issue, I tried setting io-mode=4 (DMABUF), but nothing was saved to the file; it seems io-mode=5 (DMABUF import) is necessary for this capture path. I can't lower the resolution due to project requirements, so I reduced the framerate to 15 in the save-only script, but the issue still persists.

    Can we increase shared memory here?
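    For context, "Alloc failed with status = 12" matches ENOMEM, and it comes from the tivx shared-memory allocator (a DMA heap, typically backed by a CMA carveout) rather than ordinary system RAM, so `top` can look healthy while these allocations fail. A rough way to check how large the carveout is and how much of it is free is sketched below; as far as I know, on the TI SDKs the reservation size itself is fixed in the device tree / memory map, so enlarging it means changing that configuration and rebuilding.

    ```shell
    # How much contiguous (CMA) memory is reserved and currently free.
    # On kernels without CMA these counters are simply absent.
    grep -i cma /proc/meminfo || echo "no CMA counters in /proc/meminfo"

    # Recent kernel messages about contiguous/DMA-heap allocation failures
    # (dmesg may need root):
    dmesg 2>/dev/null | grep -i -E "cma|dma.heap" | tail -n 20
    ```

    Comparing CmaFree in the working two-camera case against the three-camera case should show whether the carveout is simply exhausted when the third set of buffer pools is allocated.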

    #include <gst/gst.h>
    #include <glib.h>
    
    int main(int argc, char *argv[]){
    
    //Declare pipeline
    GstElement *pipeline;
    
    //Audio
    GstElement *alsasrc, *audio_src_queue, *audioconvert, *audioresample, *audio_encoder, *audio_parser, *audio_tee, *audio_sink_queue1, *audio_sink_queue2, *audio_sink_queue3;
    
    //Declaration for CAM1
    GstElement *source1, *source_queue1, *source_caps_cam1, *tiovxisp_cam1, *source_videoconvert_cam1, *nv12_caps_cam1, *tee1;
    GstElement *scaler_split1, *sink_queue1, *sink_caps_cam1;
    GstElement *detect_queue1, *detect_scaler1_cam1, *intermediate_caps_cam1, *detect_scaler2_cam1, *detect_caps_cam1, *preproc_cam1, *tensor_caps_cam1, *inference_cam1;
    GstElement *postproc_cam1, *final_scaler_save_cam1, *final_caps_save_cam1, *split_videoconvert_cam1, *encoder_queue1, *encoder_cam1, *parser_save_cam1, *split_sink1, *text_queue_cam1, *filesink1;
    GstElement *stream_queue_cam1, *final_scaler_stream_cam1, *final_caps_stream_cam1, *stream_videoconvert_cam1, *stream_encoder_cam1, *parser_stream_cam1, *stream_muxer_cam1, *stream_sink1;
    
    //Declaration for CAM2
    GstElement *source2, *source_queue2, *source_caps_cam2, *tiovxisp_cam2, *source_videoconvert_cam2, *nv12_caps_cam2, *tee2;
    GstElement *scaler_split2, *sink_queue2, *sink_caps_cam2;
    GstElement *detect_queue2, *detect_scaler1_cam2, *intermediate_caps_cam2, *detect_scaler2_cam2, *detect_caps_cam2, *preproc_cam2, *tensor_caps_cam2, *inference_cam2;
    GstElement *postproc_cam2, *final_scaler_save_cam2, *final_caps_save_cam2, *split_videoconvert_cam2, *encoder_queue2, *encoder_cam2, *parser_save_cam2, *split_sink2, *text_queue_cam2, *filesink2;
    GstElement *stream_queue_cam2, *final_scaler_stream_cam2, *final_caps_stream_cam2, *stream_videoconvert_cam2, *stream_encoder_cam2, *parser_stream_cam2, *stream_muxer_cam2, *stream_sink2;
    
    //Declaration for CAM3
    GstElement *source3, *source_queue3, *source_caps_cam3, *tiovxisp_cam3, *source_videoconvert_cam3, *nv12_caps_cam3, *tee3;
    GstElement *scaler_split3, *sink_queue3, *sink_caps_cam3;
    GstElement *detect_queue3, *detect_scaler1_cam3, *intermediate_caps_cam3, *detect_scaler2_cam3, *detect_caps_cam3, *preproc_cam3, *tensor_caps_cam3, *inference_cam3;
    GstElement *postproc_cam3, *final_scaler_save_cam3, *final_caps_save_cam3, *split_videoconvert_cam3, *encoder_queue3, *encoder_cam3, *parser_save_cam3, *split_sink3, *text_queue_cam3, *filesink3;
    GstElement *stream_queue_cam3, *final_scaler_stream_cam3, *final_caps_stream_cam3, *stream_videoconvert_cam3, *stream_encoder_cam3, *parser_stream_cam3, *stream_muxer_cam3, *stream_sink3;
    
    GstBus *bus;
    GstMessage *msg;
    GstStateChangeReturn ret;
    gboolean terminate = FALSE;
    gst_init(&argc, &argv);
    
    //audio
    alsasrc = gst_element_factory_make("alsasrc","alsasrc");
    audio_src_queue = gst_element_factory_make("queue","audio_src_queue");
    audioconvert = gst_element_factory_make("audioconvert","audioconvert");
    audioresample = gst_element_factory_make("audioresample","audioresample");
    audio_encoder = gst_element_factory_make("avenc_aac","audio_encoder");
    audio_parser = gst_element_factory_make("aacparse","audio_parser");
    audio_tee = gst_element_factory_make("tee", "audio_tee");
    audio_sink_queue1 = gst_element_factory_make("queue", "audio_sink_queue1");
    audio_sink_queue2 = gst_element_factory_make("queue","audio_sink_queue2");
    audio_sink_queue3 = gst_element_factory_make("queue", "audio_sink_queue3");
    
    
    //create elements for CAM1
    pipeline = gst_pipeline_new("Video-Pipeline");
    source1 = gst_element_factory_make("v4l2src", "source1");
    source_queue1 = gst_element_factory_make("queue","source_queue1");
    source_caps_cam1 = gst_element_factory_make("capsfilter","source_caps_cam1");
    tiovxisp_cam1 = gst_element_factory_make("tiovxisp","tiovxisp_cam1");
    source_videoconvert_cam1 = gst_element_factory_make("videoconvert","source_videoconvert_cam1");
    nv12_caps_cam1 = gst_element_factory_make("capsfilter", "nv12_caps_cam1");
    tee1 = gst_element_factory_make("tee","tee1");
    scaler_split1 = gst_element_factory_make("tiovxmultiscaler","scaler_split1");
    sink_queue1 = gst_element_factory_make("queue","sink_queue1");
    sink_caps_cam1 = gst_element_factory_make("capsfilter","sink_caps_cam1");
    detect_queue1 = gst_element_factory_make("queue","detect_queue1");
    detect_scaler1_cam1 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam1");
    intermediate_caps_cam1 = gst_element_factory_make("capsfilter","intermediate_caps_cam1");
    detect_scaler2_cam1 = gst_element_factory_make("tiovxmultiscaler","detect_scaler2_cam1");
    detect_caps_cam1 = gst_element_factory_make("capsfilter","detect_caps_cam1");
    preproc_cam1 = gst_element_factory_make("tiovxdlpreproc","preproc_cam1");
    tensor_caps_cam1 = gst_element_factory_make("capsfilter","tensor_caps_cam1");
    inference_cam1 = gst_element_factory_make("tidlinferer","inference_cam1");
    postproc_cam1 = gst_element_factory_make("tidlpostproc","postproc_cam1");
    final_scaler_save_cam1 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_save_cam1");
    final_caps_save_cam1 = gst_element_factory_make("capsfilter","final_caps_save_cam1");
    split_videoconvert_cam1 = gst_element_factory_make("videoconvert","split_videoconvert_cam1");
    encoder_queue1 = gst_element_factory_make("queue","encoder_queue1");
    encoder_cam1 = gst_element_factory_make("v4l2h264enc","encoder_cam1");
    parser_save_cam1 = gst_element_factory_make("h264parse","parser_save_cam1");
    split_sink1 = gst_element_factory_make("splitmuxsink", "split_sink1");
    text_queue_cam1 = gst_element_factory_make("queue","text_queue_cam1");
    filesink1 = gst_element_factory_make("filesink","filesink1");
    stream_queue_cam1 = gst_element_factory_make("queue","stream_queue_cam1");
    final_scaler_stream_cam1 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_stream_cam1");
    final_caps_stream_cam1 = gst_element_factory_make("capsfilter","final_caps_stream_cam1");
    stream_videoconvert_cam1 = gst_element_factory_make("tiovxcolorconvert","stream_videoconvert_cam1");
    stream_encoder_cam1 = gst_element_factory_make("v4l2h264enc","stream_encoder_cam1");
    parser_stream_cam1 = gst_element_factory_make("h264parse","parser_stream_cam1");
    stream_muxer_cam1 = gst_element_factory_make("flvmux","stream_muxer_cam1");
    stream_sink1 = gst_element_factory_make("rtmp2sink","stream_sink1");
    
    //create elements for CAM2
    source2 = gst_element_factory_make("v4l2src","source2");
    source_queue2 = gst_element_factory_make("queue","source_queue2");
    source_caps_cam2 = gst_element_factory_make("capsfilter","source_caps_cam2");
    tiovxisp_cam2 = gst_element_factory_make("tiovxisp","tiovxisp_cam2");
    source_videoconvert_cam2 = gst_element_factory_make("videoconvert","source_videoconvert_cam2");
    nv12_caps_cam2 = gst_element_factory_make("capsfilter", "nv12_caps_cam2");
    tee2 = gst_element_factory_make("tee", "tee2");
    scaler_split2 = gst_element_factory_make("tiovxmultiscaler","scaler_split2");
    sink_queue2 = gst_element_factory_make("queue","sink_queue2");
    sink_caps_cam2 = gst_element_factory_make("capsfilter","sink_caps_cam2");
    detect_queue2 = gst_element_factory_make("queue","detect_queue2");
    detect_scaler1_cam2 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam2");
    intermediate_caps_cam2 = gst_element_factory_make("capsfilter","intermediate_caps_cam2");
    detect_scaler2_cam2 = gst_element_factory_make("tiovxmultiscaler","detect_scaler2_cam2");
    detect_caps_cam2 = gst_element_factory_make("capsfilter","detect_caps_cam2");
    preproc_cam2 = gst_element_factory_make("tiovxdlpreproc","preproc_cam2");
    tensor_caps_cam2 = gst_element_factory_make("capsfilter","tensor_caps_cam2");
    inference_cam2 = gst_element_factory_make("tidlinferer","inference_cam2");
    postproc_cam2 = gst_element_factory_make("tidlpostproc","postproc_cam2");
    final_scaler_save_cam2 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_save_cam2");
    final_caps_save_cam2 = gst_element_factory_make("capsfilter","final_caps_save_cam2");
    split_videoconvert_cam2 = gst_element_factory_make("videoconvert","split_videoconvert_cam2");
    encoder_queue2 = gst_element_factory_make("queue","encoder_queue2");
    encoder_cam2 = gst_element_factory_make("v4l2h264enc","encoder_cam2");
    parser_save_cam2 = gst_element_factory_make("h264parse","parser_save_cam2");
    split_sink2 = gst_element_factory_make("splitmuxsink", "split_sink2");
    text_queue_cam2 = gst_element_factory_make("queue","text_queue_cam2");
    filesink2 = gst_element_factory_make("filesink","filesink2");
    stream_queue_cam2 = gst_element_factory_make("queue","stream_queue_cam2");
    final_scaler_stream_cam2 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_stream_cam2");
    final_caps_stream_cam2 = gst_element_factory_make("capsfilter","final_caps_stream_cam2");
    stream_videoconvert_cam2 = gst_element_factory_make("tiovxcolorconvert","stream_videoconvert_cam2");
    stream_encoder_cam2 = gst_element_factory_make("v4l2h264enc","stream_encoder_cam2");
    parser_stream_cam2 = gst_element_factory_make("h264parse","parser_stream_cam2");
    stream_muxer_cam2 = gst_element_factory_make("flvmux","stream_muxer_cam2");
    stream_sink2 = gst_element_factory_make("rtmp2sink","stream_sink2");
    
    //create elements for CAM3
    source3 = gst_element_factory_make("v4l2src","source3");
    source_queue3 = gst_element_factory_make("queue","source_queue3");
    source_caps_cam3 = gst_element_factory_make("capsfilter","source_caps_cam3");
    tiovxisp_cam3 = gst_element_factory_make("tiovxisp","tiovxisp_cam3");
    source_videoconvert_cam3 = gst_element_factory_make("videoconvert","source_videoconvert_cam3");
    nv12_caps_cam3 = gst_element_factory_make("capsfilter", "nv12_caps_cam3");
    tee3 = gst_element_factory_make("tee", "tee3");
    scaler_split3 = gst_element_factory_make("tiovxmultiscaler","scaler_split3");
    sink_queue3 = gst_element_factory_make("queue","sink_queue3");
    sink_caps_cam3 = gst_element_factory_make("capsfilter","sink_caps_cam3");
    detect_queue3 = gst_element_factory_make("queue","detect_queue3");
    detect_scaler1_cam3 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam3");
    intermediate_caps_cam3 = gst_element_factory_make("capsfilter","intermediate_caps_cam3");
    detect_scaler2_cam3 = gst_element_factory_make("tiovxmultiscaler","detect_scaler2_cam3");
    detect_caps_cam3 = gst_element_factory_make("capsfilter","detect_caps_cam3");
    preproc_cam3 = gst_element_factory_make("tiovxdlpreproc","preproc_cam3");
    tensor_caps_cam3 = gst_element_factory_make("capsfilter","tensor_caps_cam3");
    inference_cam3 = gst_element_factory_make("tidlinferer","inference_cam3");
    postproc_cam3 = gst_element_factory_make("tidlpostproc","postproc_cam3");
    final_scaler_save_cam3 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_save_cam3");
    final_caps_save_cam3 = gst_element_factory_make("capsfilter","final_caps_save_cam3");
    split_videoconvert_cam3 = gst_element_factory_make("videoconvert","split_videoconvert_cam3");
    encoder_queue3 = gst_element_factory_make("queue","encoder_queue3");
    encoder_cam3 = gst_element_factory_make("v4l2h264enc","encoder_cam3");
    parser_save_cam3 = gst_element_factory_make("h264parse","parser_save_cam3");
    split_sink3 = gst_element_factory_make("splitmuxsink", "split_sink3");
    text_queue_cam3 = gst_element_factory_make("queue","text_queue_cam3");
    filesink3 = gst_element_factory_make("filesink","filesink3");
    stream_queue_cam3 = gst_element_factory_make("queue","stream_queue_cam3");
    final_scaler_stream_cam3 = gst_element_factory_make("tiovxmultiscaler", "final_scaler_stream_cam3");
    final_caps_stream_cam3 = gst_element_factory_make("capsfilter","final_caps_stream_cam3");
    stream_videoconvert_cam3 = gst_element_factory_make("tiovxcolorconvert","stream_videoconvert_cam3");
    stream_encoder_cam3 = gst_element_factory_make("v4l2h264enc","stream_encoder_cam3");
    parser_stream_cam3 = gst_element_factory_make("h264parse","parser_stream_cam3");
    stream_muxer_cam3= gst_element_factory_make("flvmux","stream_muxer_cam3");
    stream_sink3 = gst_element_factory_make("rtmp2sink","stream_sink3");
    
    //verify elements created
    if(!pipeline 
        //CAM1 element verification
        || !source1 || !source_queue1 || !source_caps_cam1 || !tiovxisp_cam1 || !source_videoconvert_cam1 || !nv12_caps_cam1 || !tee1 
        || !scaler_split1 || !sink_queue1 || !sink_caps_cam1 || !detect_queue1 || !detect_scaler1_cam1 || !intermediate_caps_cam1 || !detect_scaler2_cam1 || !detect_caps_cam1 
        || !preproc_cam1 || !tensor_caps_cam1 || !inference_cam1 || !postproc_cam1
        || !final_scaler_save_cam1 || !final_caps_save_cam1 || !split_videoconvert_cam1 || !encoder_queue1 || !encoder_cam1 || !parser_save_cam1 || !split_sink1 || !text_queue_cam1 || !filesink1 
        || !stream_queue_cam1 || !final_scaler_stream_cam1 || !final_caps_stream_cam1 || !stream_videoconvert_cam1 || !stream_encoder_cam1 || !parser_stream_cam1 || !stream_muxer_cam1 || !stream_sink1   
        
        //CAM2 element verification
        || !source2  || !source_queue2 || !source_caps_cam2 || !tiovxisp_cam2 || !source_videoconvert_cam2 || !nv12_caps_cam2 || !tee2
        || !scaler_split2 || !sink_queue2 || !sink_caps_cam2 || !detect_queue2 || !detect_scaler1_cam2 || !intermediate_caps_cam2 || !detect_scaler2_cam2 || !detect_caps_cam2
        || !preproc_cam2 || !tensor_caps_cam2 || !inference_cam2 || !postproc_cam2 
        || !final_scaler_save_cam2 || !final_caps_save_cam2 || !split_videoconvert_cam2 || !encoder_queue2 || !encoder_cam2 || !parser_save_cam2 || !split_sink2 || !text_queue_cam2 || !filesink2 
        || !stream_queue_cam2 || !final_scaler_stream_cam2 || !final_caps_stream_cam2 || !stream_videoconvert_cam2 || !stream_encoder_cam2 || !parser_stream_cam2 || !stream_muxer_cam2 || !stream_sink2
        //CAM3 element verification
        || !source3 || !source_queue3 || !source_caps_cam3 || !tiovxisp_cam3 || !source_videoconvert_cam3 || !nv12_caps_cam3 || !tee3 
        || !scaler_split3 || !sink_queue3 || !sink_caps_cam3 || !detect_queue3 || !detect_scaler1_cam3 || !intermediate_caps_cam3 || !detect_scaler2_cam3 || !detect_caps_cam3 
        || !preproc_cam3 || !tensor_caps_cam3 || !inference_cam3 || !postproc_cam3
        || !final_scaler_save_cam3 || !final_caps_save_cam3 || !split_videoconvert_cam3 || !encoder_queue3 || !encoder_cam3 || !parser_save_cam3 || !split_sink3 || !text_queue_cam3 || !filesink3 
        || !stream_queue_cam3 || !final_scaler_stream_cam3 || !final_caps_stream_cam3 || !stream_videoconvert_cam3 || !stream_encoder_cam3 || !parser_stream_cam3 || !stream_muxer_cam3 || !stream_sink3 
        //Audio elements verification
        || !alsasrc || !audio_src_queue || !audioconvert || !audioresample || !audio_encoder || !audio_parser || !audio_tee || !audio_sink_queue1 || !audio_sink_queue2 || !audio_sink_queue3){
        
        g_printerr("Failed to create all elements\n");
        return -1;
    }
    
    //PROPERTIES
    
    //Set properties for CAM1
    
    //source1
    g_object_set(G_OBJECT(source1),
        "device", "/dev/video-imx219-cam0",
        "io-mode", 5,
        NULL);
    
    //source_queue1
    g_object_set(G_OBJECT(source_queue1),
        "max-size-buffers",4,
        "leaky",2,
        NULL);
    
    //source_caps_cam1
    GstCaps *source_caps_cam1_val = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format" ,G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1,
        NULL);
    g_object_set(G_OBJECT(source_caps_cam1),"caps",source_caps_cam1_val,NULL);
    gst_caps_unref(source_caps_cam1_val);
    
    //tiovxisp_1
    g_object_set(G_OBJECT(tiovxisp_cam1),
        "sensor-name","SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file","/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb",9,
        NULL);
    
    //nv12_caps_cam1
    GstCaps *nv12_caps_val_cam1 = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "NV12",
        NULL);
    g_object_set(G_OBJECT(nv12_caps_cam1), "caps", nv12_caps_val_cam1, NULL);
    gst_caps_unref(nv12_caps_val_cam1);
    
    //detect_queue1
    g_object_set(G_OBJECT(detect_queue1),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //intermediate_caps_cam1
    GstCaps *intermediate_caps_val_cam1 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 820,
        "height", G_TYPE_INT, 616,
        NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam1),"caps",intermediate_caps_val_cam1,NULL);
    gst_caps_unref(intermediate_caps_val_cam1);
    
    //detect_caps_cam1
    GstCaps  *detect_caps_val_cam1 = gst_caps_new_simple("video/x-raw",
        "format",G_TYPE_STRING,"NV12",
        "width",G_TYPE_INT,320,
        "height",G_TYPE_INT,320,
        NULL);
    g_object_set(G_OBJECT(detect_caps_cam1),"caps",detect_caps_val_cam1,NULL);
    gst_caps_unref(detect_caps_val_cam1);
    
    //preproc_cam1
    g_object_set(G_OBJECT(preproc_cam1),
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "out-pool-size",4,
        NULL);
    
    //tensor_caps_cam1
    GstCaps *tensor_caps_val_cam1 = gst_caps_new_simple("application/x-tensor-tiovx",NULL);
    g_object_set(G_OBJECT(tensor_caps_cam1),"caps",tensor_caps_val_cam1,NULL);
    gst_caps_unref(tensor_caps_val_cam1);
    
    //inference_cam1
    g_object_set(G_OBJECT(inference_cam1),"target",1,
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        NULL);
    
    //postproc_cam1
    g_object_set(G_OBJECT(postproc_cam1),
            "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
            "alpha",0.4,
            "viz-threshold", 0.6,
            "top-N",5,
            "display-model",TRUE,
            NULL);
    
    //sink_queue1
    g_object_set(G_OBJECT(sink_queue1),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //sink_caps_cam1
    GstCaps *sink_caps_val1 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,3280,
        "height",G_TYPE_INT,2464,
        NULL);
    g_object_set(G_OBJECT(sink_caps_cam1),"caps",sink_caps_val1,NULL);
    gst_caps_unref(sink_caps_val1);
    
    //text_queue_cam1
    g_object_set(G_OBJECT(text_queue_cam1),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //final_caps_save_cam1
    GstCaps *final_caps_save_val_cam1 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_save_cam1),"caps",final_caps_save_val_cam1 ,NULL);
    gst_caps_unref(final_caps_save_val_cam1);
    
    //final_caps_stream_cam1
    GstCaps *final_caps_stream_val_cam1 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_stream_cam1),"caps",final_caps_stream_val_cam1 ,NULL);
    gst_caps_unref(final_caps_stream_val_cam1);
    
    //encoder_queue1 
    g_object_set(G_OBJECT(encoder_queue1),"max-size-buffers",1,NULL);
    
    //encoder_cam1
    GstStructure *extra_controls_save_cam1 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10,
        NULL);
    g_object_set(G_OBJECT(encoder_cam1),
        "extra-controls", extra_controls_save_cam1,
        NULL); 
    
    //splitmuxsink1
    g_object_set(G_OBJECT(split_sink1),
        "location","camera1_event/video%02d.mkv", 
        "max-size-time",10000000000, 
        "max-files",4, 
        "muxer-factory","matroskamux",
        NULL);
    
    //filesink1
    g_object_set(G_OBJECT(filesink1),
        "location","camera1_event/class.yaml",
        "sync",TRUE,
        NULL);
    
    //stream_queue_cam1
    g_object_set(G_OBJECT(stream_queue_cam1),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //stream_encoder_cam1
    GstStructure *extra_controls_stream_cam1 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 30,
          "video_bitrate", G_TYPE_INT, 4000000,
          "video_bitrate_mode",G_TYPE_INT, 1,
          "h264_profile",G_TYPE_INT,0,
          "h264_level",G_TYPE_INT,31,
          "video_gop_size",G_TYPE_INT,30,
          "video_b_frame_count",G_TYPE_INT,0,
          "iframeinterval", G_TYPE_INT, 1,
          "ratecontrol_enable", G_TYPE_BOOLEAN, TRUE,
         NULL);
    g_object_set(G_OBJECT(stream_encoder_cam1),
          "extra-controls", extra_controls_stream_cam1,
          NULL); 
    
    //stream_sink1
    g_object_set(G_OBJECT(stream_sink1),
            "location","rtmp://4.197.203.77/WebRTCAppEE/3cm7cnXajUVvZSqH1738732466240",
            "sync", TRUE,
            "async", TRUE,
            "max-lateness", 20000000,
            "qos", TRUE,
            NULL);
    
    //Set properties for CAM2
    
    //source2 
    g_object_set(G_OBJECT(source2),
        "device", "/dev/video-imx219-cam1",
        "io-mode", 5,
        NULL);
    
    //source_queue2
    g_object_set(G_OBJECT(source_queue2),
        "max-size-buffers",4,
        "leaky",2,
        NULL);
    
    //Source caps cam2
    GstCaps *source_caps_val_cam2 = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format" ,G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1,
        NULL);
    g_object_set(G_OBJECT(source_caps_cam2),"caps",source_caps_val_cam2,NULL);
    gst_caps_unref(source_caps_val_cam2);
    
    //tiovxisp_2
    g_object_set(G_OBJECT(tiovxisp_cam2),
        "sensor-name","SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file","/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb",9,
        NULL);
    
    //nv12_caps_cam2
    GstCaps *nv12_caps_val_cam2 = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "NV12",
        NULL);
    g_object_set(G_OBJECT(nv12_caps_cam2), "caps", nv12_caps_val_cam2, NULL);
    gst_caps_unref(nv12_caps_val_cam2);
    
    //detect_queue2
    g_object_set(G_OBJECT(detect_queue2),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //intermediate_caps_cam2
    GstCaps *intermediate_caps_val_cam2 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 820,
        "height", G_TYPE_INT, 616,
        NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam2),"caps",intermediate_caps_val_cam2,NULL);
    gst_caps_unref(intermediate_caps_val_cam2);
    
    //detect_caps_cam2
    GstCaps  *detect_caps_val_cam2 = gst_caps_new_simple("video/x-raw",
        "format",G_TYPE_STRING,"NV12",
        "width",G_TYPE_INT,320,
        "height",G_TYPE_INT,320,
        NULL);
    g_object_set(G_OBJECT(detect_caps_cam2),"caps",detect_caps_val_cam2,NULL);
    gst_caps_unref(detect_caps_val_cam2);
    
    //preproc_cam2
    g_object_set(G_OBJECT(preproc_cam2),
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "out-pool-size",4,
        NULL);
    
    //tensor_caps_cam2
    GstCaps *tensor_caps_val_cam2 = gst_caps_new_simple("application/x-tensor-tiovx",NULL);
    g_object_set(G_OBJECT(tensor_caps_cam2),"caps",tensor_caps_val_cam2,NULL);
    gst_caps_unref(tensor_caps_val_cam2);
    
    //inference_cam2
    g_object_set(G_OBJECT(inference_cam2),"target",1,
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        NULL);
    
    //postproc_cam2
    g_object_set(G_OBJECT(postproc_cam2),
            "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
            "alpha",0.4,
            "viz-threshold", 0.6,
            "top-N",5,
            "display-model",TRUE,
            NULL);
    
    //sink_queue2
    g_object_set(G_OBJECT(sink_queue2),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //sink_caps_cam2
    GstCaps *sink_caps_val2 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,3280,
        "height",G_TYPE_INT,2464,
        NULL);
    g_object_set(G_OBJECT(sink_caps_cam2),"caps",sink_caps_val2,NULL);
    gst_caps_unref(sink_caps_val2);
    
    //text_queue_cam2
    g_object_set(G_OBJECT(text_queue_cam2),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //final_caps_save_cam2
    GstCaps *final_caps_save_val_cam2 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_save_cam2),"caps",final_caps_save_val_cam2 ,NULL);
    gst_caps_unref(final_caps_save_val_cam2);
    
    //final_caps_stream_cam2
    GstCaps *final_caps_stream_val_cam2 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_stream_cam2),"caps",final_caps_stream_val_cam2 ,NULL);
    gst_caps_unref(final_caps_stream_val_cam2);
    
    //encoder_queue2
    g_object_set(G_OBJECT(encoder_queue2),"max-size-buffers",1,NULL);
    
    //encoder_cam2
    GstStructure *extra_controls_save_cam2 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10,
        NULL);
    g_object_set(G_OBJECT(encoder_cam2),
        "extra-controls", extra_controls_save_cam2,
        NULL); 
    
    //splitmuxsink2
    g_object_set(G_OBJECT(split_sink2),
        "location","camera2_event/video%02d.mkv", 
        "max-size-time",10000000000, 
        "max-files",4, 
        "muxer-factory","matroskamux",
        NULL);
    
    //filesink2
    g_object_set(G_OBJECT(filesink2),
        "location","camera2_event/class.yaml",
        "sync",TRUE,
        NULL);
    
    //stream_queue_cam2
    g_object_set(G_OBJECT(stream_queue_cam2),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //stream_encoder_cam2
    GstStructure *extra_controls_stream_cam2 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 30,
          "video_bitrate", G_TYPE_INT, 4000000,
          "video_bitrate_mode",G_TYPE_INT, 1,
          "h264_profile",G_TYPE_INT,0,
          "h264_level",G_TYPE_INT,31,
          "video_gop_size",G_TYPE_INT,30,
          "video_b_frame_count",G_TYPE_INT,0,
          "iframeinterval", G_TYPE_INT, 1,
          "ratecontrol_enable", G_TYPE_BOOLEAN, TRUE,
         NULL);
    g_object_set(G_OBJECT(stream_encoder_cam2),
          "extra-controls", extra_controls_stream_cam2,
          NULL); 
    
    //stream_sink2
    g_object_set(G_OBJECT(stream_sink2),
            "location","rtmp://4.197.203.77/WebRTCAppEE/9MrnW97nimztoNvA1742282493064",
            "sync", TRUE,
            "async", TRUE,
            "max-lateness", 20000000,
            "qos", TRUE,
            NULL);
    
    //Set properties for CAM3
    
    //source3
    g_object_set(G_OBJECT(source3),
        "device", "/dev/video-imx219-cam2",
        "io-mode", 5,
        NULL);
    
    //source_queue3
    g_object_set(G_OBJECT(source_queue3),
        "max-size-buffers",4,
        "leaky",2,
        NULL);
    
    //source_caps_cam3
    GstCaps *source_caps_cam3_val = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format" ,G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1,
        NULL);
    g_object_set(G_OBJECT(source_caps_cam3),"caps",source_caps_cam3_val,NULL);
    gst_caps_unref(source_caps_cam3_val);
    
    //tiovxisp_3
    g_object_set(G_OBJECT(tiovxisp_cam3),
        "sensor-name","SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file","/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb",9,
        NULL);
    
    //nv12_caps_cam3
    GstCaps *nv12_caps_val_cam3 = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "NV12",
        NULL);
    g_object_set(G_OBJECT(nv12_caps_cam3), "caps", nv12_caps_val_cam3, NULL);
    gst_caps_unref(nv12_caps_val_cam3);
    
    //detect_queue3
    g_object_set(G_OBJECT(detect_queue3),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //intermediate_caps_cam3
    GstCaps *intermediate_caps_val_cam3 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 820,
        "height", G_TYPE_INT, 616,
        NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam3),"caps",intermediate_caps_val_cam3,NULL);
    gst_caps_unref(intermediate_caps_val_cam3);
    
    //detect_caps_cam3
    GstCaps  *detect_caps_val_cam3 = gst_caps_new_simple("video/x-raw",
        "format",G_TYPE_STRING,"NV12",
        "width",G_TYPE_INT,320,
        "height",G_TYPE_INT,320,
        NULL);
    g_object_set(G_OBJECT(detect_caps_cam3),"caps",detect_caps_val_cam3,NULL);
    gst_caps_unref(detect_caps_val_cam3);
    
    //preproc_cam3
    g_object_set(G_OBJECT(preproc_cam3),
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "out-pool-size",4,
        NULL);
    
    //tensor_caps_cam3
    GstCaps *tensor_caps_val_cam3 = gst_caps_new_empty_simple("application/x-tensor-tiovx");
    g_object_set(G_OBJECT(tensor_caps_cam3),"caps",tensor_caps_val_cam3,NULL);
    gst_caps_unref(tensor_caps_val_cam3);
    
    //inference_cam3
    g_object_set(G_OBJECT(inference_cam3),"target",1,
        "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        NULL);
    
    //postproc_cam3
    g_object_set(G_OBJECT(postproc_cam3),
            "model","/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
            "alpha",0.4,
            "viz-threshold", 0.6,
            "top-N",5,
            "display-model",TRUE,
            NULL);
    
    //sink_queue3
    g_object_set(G_OBJECT(sink_queue3),
        "max-size-buffers", 4,
        "leaky", 2,
        NULL);
    
    //sink_caps_cam3
    GstCaps *sink_caps_val3 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,3280,
        "height",G_TYPE_INT,2464,
        NULL);
    g_object_set(G_OBJECT(sink_caps_cam3),"caps",sink_caps_val3,NULL);
    gst_caps_unref(sink_caps_val3);
    
    //text_queue_cam3
    g_object_set(G_OBJECT(text_queue_cam3),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //final_caps_save_cam3
    GstCaps *final_caps_save_val_cam3 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_save_cam3),"caps",final_caps_save_val_cam3 ,NULL);
    gst_caps_unref(final_caps_save_val_cam3);
    
    //final_caps_stream_cam3
    GstCaps *final_caps_stream_val_cam3 = gst_caps_new_simple("video/x-raw",
        "width",G_TYPE_INT,1280,
        "height",G_TYPE_INT,720,
        "framerate",GST_TYPE_FRACTION,15,1,
        NULL);
    g_object_set(G_OBJECT(final_caps_stream_cam3),"caps",final_caps_stream_val_cam3 ,NULL);
    gst_caps_unref(final_caps_stream_val_cam3);
    
    //encoder_queue3
    g_object_set(G_OBJECT(encoder_queue3),"max-size-buffers",1,NULL);
    
    //encoder_cam3
    GstStructure *extra_controls_save_cam3 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10,
        NULL);
    g_object_set(G_OBJECT(encoder_cam3),
        "extra-controls", extra_controls_save_cam3,
        NULL);
    gst_structure_free(extra_controls_save_cam3); //the property takes a copy, so free the local structure
    
    //splitmuxsink3
    g_object_set(G_OBJECT(split_sink3),
        "location","camera3_event/video%02d.mkv", 
        "max-size-time",(guint64)10000000000, //guint64 property: cast the literal for varargs
        "max-files",4, 
        "muxer-factory","matroskamux",
        NULL);
    
    //filesink3
    g_object_set(G_OBJECT(filesink3),
        "location","camera3_event/class.yaml",
        "sync",TRUE,
        NULL);
    
    //stream_queue_cam3
    g_object_set(G_OBJECT(stream_queue_cam3),
        "max-size-buffers",4,
        "leaky", 2,
        NULL);
    
    //stream_encoder_cam3
    GstStructure *extra_controls_stream_cam3 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 30,
        "video_bitrate", G_TYPE_INT, 4000000,
        "video_bitrate_mode", G_TYPE_INT, 1,
        "h264_profile", G_TYPE_INT, 0,
        "h264_level", G_TYPE_INT, 31,
        "video_gop_size", G_TYPE_INT, 30,
        "video_b_frame_count", G_TYPE_INT, 0,
        "iframeinterval", G_TYPE_INT, 1,
        "ratecontrol_enable", G_TYPE_BOOLEAN, TRUE,
        NULL);
    g_object_set(G_OBJECT(stream_encoder_cam3),
        "extra-controls", extra_controls_stream_cam3,
        NULL);
    gst_structure_free(extra_controls_stream_cam3); //the property takes a copy, so free the local structure
    
    //stream_sink3
    g_object_set(G_OBJECT(stream_sink3),
            "location","rtmp://4.197.203.77/WebRTCAppEE/msg7lRxnULDLh9vL1748261276306r",
            "sync", TRUE,
            "async", TRUE,
            "max-lateness", 20000000,
            "qos", TRUE,
            NULL);
    //Add elements to pipeline
    gst_bin_add_many(GST_BIN(pipeline),
        //CAM1
        source1, source_queue1, source_caps_cam1, tiovxisp_cam1, source_videoconvert_cam1, nv12_caps_cam1, tee1, 
        scaler_split1, detect_queue1, detect_scaler1_cam1, intermediate_caps_cam1, detect_scaler2_cam1, detect_caps_cam1, preproc_cam1, tensor_caps_cam1, inference_cam1,
        sink_queue1, sink_caps_cam1,
        postproc_cam1, text_queue_cam1, filesink1, final_scaler_save_cam1, final_scaler_stream_cam1, final_caps_save_cam1, final_caps_stream_cam1, split_videoconvert_cam1, 
        encoder_queue1, encoder_cam1, parser_save_cam1, split_sink1, 
        stream_queue_cam1, stream_videoconvert_cam1, stream_encoder_cam1, parser_stream_cam1, stream_muxer_cam1, stream_sink1, 
        //CAM2
        source2, source_queue2, source_caps_cam2, tiovxisp_cam2, source_videoconvert_cam2, nv12_caps_cam2, tee2,
        scaler_split2, detect_queue2, detect_scaler1_cam2, intermediate_caps_cam2, detect_scaler2_cam2, detect_caps_cam2, preproc_cam2, tensor_caps_cam2, inference_cam2,
        sink_queue2, sink_caps_cam2,
        postproc_cam2, text_queue_cam2, filesink2, final_scaler_save_cam2, final_scaler_stream_cam2, final_caps_save_cam2, final_caps_stream_cam2, split_videoconvert_cam2, 
        encoder_queue2, encoder_cam2, parser_save_cam2, split_sink2,
        stream_queue_cam2, stream_videoconvert_cam2, stream_encoder_cam2, parser_stream_cam2, stream_muxer_cam2, stream_sink2,
        //CAM3
        source3, source_queue3, source_caps_cam3, tiovxisp_cam3, source_videoconvert_cam3, nv12_caps_cam3, tee3,
        scaler_split3, detect_queue3, detect_scaler1_cam3, intermediate_caps_cam3, detect_scaler2_cam3, detect_caps_cam3, preproc_cam3, tensor_caps_cam3, inference_cam3,
        sink_queue3, sink_caps_cam3,
        postproc_cam3, text_queue_cam3, filesink3, final_scaler_save_cam3, final_scaler_stream_cam3, final_caps_save_cam3, final_caps_stream_cam3, split_videoconvert_cam3,
        encoder_queue3, encoder_cam3, parser_save_cam3, split_sink3,
        stream_queue_cam3, stream_videoconvert_cam3, stream_encoder_cam3, parser_stream_cam3, stream_muxer_cam3, stream_sink3,
        //Audio
        alsasrc, audio_src_queue, audioconvert, audioresample, audio_encoder, audio_parser, audio_tee, audio_sink_queue1, audio_sink_queue2, audio_sink_queue3, NULL);
    
    
    //Pad Properties   
    
    //Set pad properties for CAM1
    
    //Set tiovxisp_cam1 sink pad properties
    GstPad *tiovx_pad1 = gst_element_request_pad_simple(tiovxisp_cam1,"sink_%u");
    if (!tiovx_pad1){
        g_printerr("Failed to request sink pad from tiovxisp_cam1.\n");
        return -1;
        }
    g_object_set(G_OBJECT(tiovx_pad1),
        "dcc-2a-file","/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device","/dev/v4l-imx219-subdev0",
        NULL);
    gst_object_unref(tiovx_pad1);
    
    //detect_scaler1_cam1
    
    GstPad *detect_scaler1_cam1_src = gst_element_request_pad_simple(detect_scaler1_cam1, "src_%u"); //src
    if (!detect_scaler1_cam1_src){
        g_printerr("Failed to request detect_scaler1 src pad of camera 1.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam1_src),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_cam1_src);
    
    GstPad *detect_scaler1_cam1_sink = gst_element_get_static_pad(detect_scaler1_cam1,"sink"); //sink
    if (!detect_scaler1_cam1_sink){
        g_printerr("Failed to get detect_scaler1 sink pad of camera 1.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam1_sink),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_cam1_sink);
    
    //detect_scaler2_cam1
    GstPad *detect_scaler2_cam1_src = gst_element_request_pad_simple(detect_scaler2_cam1, "src_%u"); //src
    if (!detect_scaler2_cam1_src){
        g_printerr("Failed to request detect_scaler2 src pad of camera 1.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam1_src),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_cam1_src);
    
    GstPad *detect_scaler2_cam1_sink = gst_element_get_static_pad(detect_scaler2_cam1,"sink"); //sink
    if (!detect_scaler2_cam1_sink){
        g_printerr("Failed to get detect_scaler2 sink pad of camera 1.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam1_sink),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_cam1_sink);
    
    //Set pad properties for CAM2
    
    //Set tiovxisp_cam2 sink pad properties
    GstPad *tiovx_pad2 = gst_element_request_pad_simple(tiovxisp_cam2,"sink_%u");
    if (!tiovx_pad2){
        g_printerr("Failed to request sink pad from tiovxisp_cam2.\n");
        return -1;
        }
    g_object_set(G_OBJECT(tiovx_pad2),
        "dcc-2a-file","/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device","/dev/v4l-imx219-subdev1",
        NULL);
    gst_object_unref(tiovx_pad2);
    
    //detect_scaler1_cam2
    GstPad *detect_scaler1_cam2_src = gst_element_request_pad_simple(detect_scaler1_cam2, "src_%u"); //src
    if (!detect_scaler1_cam2_src){
        g_printerr("Failed to request detect_scaler1 src pad of camera 2.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam2_src),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_cam2_src);
    
    GstPad *detect_scaler1_cam2_sink = gst_element_get_static_pad(detect_scaler1_cam2,"sink"); //sink
    if (!detect_scaler1_cam2_sink){
        g_printerr("Failed to get detect_scaler1 sink pad of camera 2.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam2_sink),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_cam2_sink);
    
    //detect_scaler2_cam2
    GstPad *detect_scaler2_cam2_src = gst_element_request_pad_simple(detect_scaler2_cam2, "src_%u"); //src
    if (!detect_scaler2_cam2_src){
        g_printerr("Failed to request detect_scaler2 src pad of camera 2.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam2_src),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_cam2_src);
    
    GstPad *detect_scaler2_cam2_sink = gst_element_get_static_pad(detect_scaler2_cam2,"sink"); //sink
    if (!detect_scaler2_cam2_sink){
        g_printerr("Failed to get detect_scaler2 sink pad of camera 2.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam2_sink),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_cam2_sink);
    
    //Set pad properties for CAM3
    
    //Set tiovxisp_cam3 sink pad properties
    GstPad *tiovx_pad3 = gst_element_request_pad_simple(tiovxisp_cam3,"sink_%u");
    if (!tiovx_pad3){
        g_printerr("Failed to request sink pad from tiovxisp_cam3.\n");
        return -1;
        }
    g_object_set(G_OBJECT(tiovx_pad3),
        "dcc-2a-file","/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device","/dev/v4l-imx219-subdev2",
        NULL);
    gst_object_unref(tiovx_pad3);
    
    //detect_scaler1_cam3
    
    GstPad *detect_scaler1_cam3_src = gst_element_request_pad_simple(detect_scaler1_cam3, "src_%u"); //src
    if (!detect_scaler1_cam3_src){
        g_printerr("Failed to request detect_scaler1 src pad of camera 3.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam3_src),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_cam3_src);
    
    GstPad *detect_scaler1_cam3_sink = gst_element_get_static_pad(detect_scaler1_cam3,"sink"); //sink
    if (!detect_scaler1_cam3_sink){
        g_printerr("Failed to get detect_scaler1 sink pad of camera 3.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_cam3_sink),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_cam3_sink);
    
    //detect_scaler2_cam3
    GstPad *detect_scaler2_cam3_src = gst_element_request_pad_simple(detect_scaler2_cam3, "src_%u"); //src
    if (!detect_scaler2_cam3_src){
        g_printerr("Failed to request detect_scaler2 src pad of camera 3.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam3_src),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_cam3_src);
    
    GstPad *detect_scaler2_cam3_sink = gst_element_get_static_pad(detect_scaler2_cam3,"sink"); //sink
    if (!detect_scaler2_cam3_sink){
        g_printerr("Failed to get detect_scaler2 sink pad of camera 3.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_cam3_sink),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_cam3_sink);
    
    //LINK ELEMENTS
    
    //Link elements for CAM1
    
    //Link source1 to tee1
    if (!gst_element_link_many(source1, source_queue1, source_caps_cam1, tiovxisp_cam1, source_videoconvert_cam1, nv12_caps_cam1, tee1, NULL)){
        g_printerr("Failed to link source1 to tee1.\n");
        return -1;
    }
    
    //Link tee1 to saving and streaming branch
    GstPad *tee1_pad1 = gst_element_request_pad_simple(tee1, "src_%u");
    GstPad *tee1_pad2 = gst_element_request_pad_simple(tee1, "src_%u");
    GstPad *scaler_split_pad1 = gst_element_get_static_pad(scaler_split1,"sink");
    GstPad *stream_pad1 = gst_element_get_static_pad(stream_queue_cam1,"sink");
    if(gst_pad_link(tee1_pad1, scaler_split_pad1) != GST_PAD_LINK_OK || 
    gst_pad_link(tee1_pad2, stream_pad1) != GST_PAD_LINK_OK){
        g_printerr("Failed to link tee1 from CAM1 to saving and streaming branch.\n");
        return -1;
        }
    gst_object_unref(tee1_pad1);
    gst_object_unref(tee1_pad2);
    gst_object_unref(scaler_split_pad1);
    gst_object_unref(stream_pad1);
    
    //Multiscaler pad connection to sink and detection of CAM1
    // Link multiscaler to both detection and sink branches
    GstPad *scaler1_src_pad1 = gst_element_request_pad_simple(scaler_split1, "src_%u");
    if(!scaler1_src_pad1){
        g_printerr("Failed to get source pad1 from multiscaler of CAM1.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler1_src_pad1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler1_src_pad2 = gst_element_request_pad_simple(scaler_split1, "src_%u");
    if(!scaler1_src_pad2){
        g_printerr("Failed to get source pad2 from multiscaler of camera1.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler1_src_pad2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad1 = gst_element_get_static_pad(detect_queue1, "sink");
    GstPad *sink_sink_pad1 = gst_element_get_static_pad(sink_queue1, "sink");
    
    if (gst_pad_link(scaler1_src_pad1, detect_sink_pad1) != GST_PAD_LINK_OK ||
    gst_pad_link(scaler1_src_pad2, sink_sink_pad1) != GST_PAD_LINK_OK) {
        g_printerr("Failed to link scaler1 tee branches of CAM1.\n");
        return -1;
        }
    
    gst_object_unref(scaler1_src_pad1);
    gst_object_unref(scaler1_src_pad2);
    gst_object_unref(detect_sink_pad1);
    gst_object_unref(sink_sink_pad1);
    
    //Link detection branch
    if (!gst_element_link_many(detect_queue1, detect_scaler1_cam1, intermediate_caps_cam1, detect_scaler2_cam1, detect_caps_cam1, preproc_cam1, tensor_caps_cam1, inference_cam1, NULL)){
        g_printerr("Failed to link detection branch of CAM1.\n");
        return -1;
        }
    
    //manually link inference output to post.tensor of CAM1
    GstPad *inference_src1 = gst_element_get_static_pad(inference_cam1, "src");
    GstPad *post_tensor1 = gst_element_get_static_pad(postproc_cam1, "tensor");
    if (gst_pad_link(inference_src1, post_tensor1) != GST_PAD_LINK_OK){
            g_printerr("Failed to link inference to post.tensor of CAM1.\n");
            return -1;
        }
    gst_object_unref(inference_src1);
    gst_object_unref(post_tensor1);
    
    //Link sink branch of CAM1
    if(!gst_element_link(sink_queue1, sink_caps_cam1)){
            g_printerr("Failed to link sink queue of CAM1.\n");
            return -1;
    }
    
    //Link display branch to post.sink of CAM1
    GstPad *sink_src1 = gst_element_get_static_pad(sink_caps_cam1, "src");
    GstPad *post_sink1 = gst_element_get_static_pad(postproc_cam1, "sink");
    if (gst_pad_link(sink_src1, post_sink1) != GST_PAD_LINK_OK){
        g_printerr("Failed to link sink_caps.src to post.sink of CAM1.\n");
        return -1;
    }
    gst_object_unref(sink_src1);
    gst_object_unref(post_sink1);
    
    //Link postproc to splitsink of CAM1
    if (!gst_element_link_many(postproc_cam1, final_scaler_save_cam1, final_caps_save_cam1, split_videoconvert_cam1, encoder_queue1, encoder_cam1, parser_save_cam1, split_sink1, NULL )){
        g_printerr("Failed to link postproc to split saving of CAM1.\n");
        return -1;
    }
    
    //Link post.text to text_queue.sink pad
    GstPad *text_src1 = gst_element_request_pad_simple(postproc_cam1, "text");
    GstPad *text_queue_pad1 = gst_element_get_static_pad(text_queue_cam1, "sink");
    if (gst_pad_link(text_src1, text_queue_pad1) != GST_PAD_LINK_OK){
        g_printerr("Failed to link text pad branch of CAM1.\n");
        return -1;
    }
    gst_object_unref(text_src1);
    gst_object_unref(text_queue_pad1);
    
    //Link text branch of CAM1
    if (!gst_element_link(text_queue_cam1, filesink1)){
        g_printerr("Failed to link text branch of CAM1.\n");
        return -1;
    }
    
    //Link stream branch of CAM1
    if (!gst_element_link_many(stream_queue_cam1, final_scaler_stream_cam1, final_caps_stream_cam1, stream_videoconvert_cam1, stream_encoder_cam1, 
        parser_stream_cam1, stream_muxer_cam1, stream_sink1, NULL)){
        g_printerr("Failed to link stream branch of CAM1.\n");
        return -1;
    }
    
    
    //Link elements of CAM2
    
    //Link source2 to tee2 of CAM2
    if (!gst_element_link_many(source2, source_queue2, source_caps_cam2, tiovxisp_cam2, source_videoconvert_cam2, nv12_caps_cam2, tee2, NULL)){
        g_printerr("Failed to link source2 to tee2.\n");
        return -1;
    }
    
    //Link tee2 to saving and streaming branch of CAM2
    GstPad *tee2_pad1 = gst_element_request_pad_simple(tee2, "src_%u");
    GstPad *tee2_pad2 = gst_element_request_pad_simple(tee2, "src_%u");
    GstPad *scaler_split_pad2 = gst_element_get_static_pad(scaler_split2,"sink");
    GstPad *stream_pad2 = gst_element_get_static_pad(stream_queue_cam2,"sink");
    if(gst_pad_link(tee2_pad1, scaler_split_pad2) != GST_PAD_LINK_OK || 
    gst_pad_link(tee2_pad2, stream_pad2) != GST_PAD_LINK_OK){
        g_printerr("Failed to link tee2 from CAM2 to saving and streaming branch.\n");
        return -1;
        }
    gst_object_unref(tee2_pad1);
    gst_object_unref(tee2_pad2);
    gst_object_unref(scaler_split_pad2);
    gst_object_unref(stream_pad2);
    
    //Multiscaler pad connection to sink and detection of CAM2
    // Link multiscaler to both detection and sink branches
    GstPad *scaler2_src_pad1 = gst_element_request_pad_simple(scaler_split2, "src_%u");
    if(!scaler2_src_pad1){
        g_printerr("Failed to get source pad1 from multiscaler of CAM2.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler2_src_pad1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler2_src_pad2 = gst_element_request_pad_simple(scaler_split2, "src_%u");
    if(!scaler2_src_pad2){
        g_printerr("Failed to get source pad2 from multiscaler of CAM2.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler2_src_pad2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad2 = gst_element_get_static_pad(detect_queue2, "sink");
    GstPad *sink_sink_pad2 = gst_element_get_static_pad(sink_queue2, "sink");
    
    if (gst_pad_link(scaler2_src_pad1, detect_sink_pad2) != GST_PAD_LINK_OK ||
    gst_pad_link(scaler2_src_pad2, sink_sink_pad2) != GST_PAD_LINK_OK) {
        g_printerr("Failed to link scaler 2 tee branches of CAM2.\n");
        return -1;
        }
    
    gst_object_unref(scaler2_src_pad1);
    gst_object_unref(scaler2_src_pad2);
    gst_object_unref(detect_sink_pad2);
    gst_object_unref(sink_sink_pad2);
    
    //Link detection branch of CAM2
    if (!gst_element_link_many(detect_queue2, detect_scaler1_cam2, intermediate_caps_cam2, detect_scaler2_cam2, detect_caps_cam2, preproc_cam2, tensor_caps_cam2, inference_cam2, NULL)){
        g_printerr("Failed to link detection branch of CAM2.\n");
        return -1;
        }
    
    //manually link inference output to post.tensor
    GstPad *inference_src2 = gst_element_get_static_pad(inference_cam2, "src");
    GstPad *post_tensor2 = gst_element_get_static_pad(postproc_cam2, "tensor");
    if (gst_pad_link(inference_src2, post_tensor2) != GST_PAD_LINK_OK){
        g_printerr("Failed to link inference to post.tensor of CAM2.\n");
        return -1;
        }
    gst_object_unref(inference_src2);
    gst_object_unref(post_tensor2);
    
    //Link sink branch of CAM2
    if(!gst_element_link(sink_queue2,sink_caps_cam2)){
        g_printerr("Failed to link sink queue of CAM2.\n");
        return -1;
    }
    
    //Link display branch to post.sink of CAM2
    GstPad *sink_src2 = gst_element_get_static_pad(sink_caps_cam2, "src");
    GstPad *post_sink2 = gst_element_get_static_pad(postproc_cam2, "sink");
    if (gst_pad_link(sink_src2, post_sink2) != GST_PAD_LINK_OK){
        g_printerr("Failed to link sink src to post.sink of CAM2.\n");
        return -1;
    }
    gst_object_unref(sink_src2);
    gst_object_unref(post_sink2);
    
    //Link postproc to splitsink of CAM2
    if (!gst_element_link_many(postproc_cam2, final_scaler_save_cam2, final_caps_save_cam2, split_videoconvert_cam2, encoder_queue2, encoder_cam2, parser_save_cam2, split_sink2, NULL)){
        g_printerr("Failed to link postproc to split saving of CAM2.\n");
        return -1;
    }
    
    //Link post.text to text_queue.sink pad
    GstPad *text_src2 = gst_element_request_pad_simple(postproc_cam2, "text");
    GstPad *text_queue_pad2 = gst_element_get_static_pad(text_queue_cam2, "sink");
    if (gst_pad_link(text_src2, text_queue_pad2) != GST_PAD_LINK_OK){
        g_printerr("Failed to link text pad branch of CAM2.\n");
        return -1;
    }
    gst_object_unref(text_src2);
    gst_object_unref(text_queue_pad2);
    
    //Link text branch of CAM2
    if (!gst_element_link(text_queue_cam2, filesink2)){
        g_printerr("Failed to link text branch of CAM2.\n");
        return -1;
    }
    
    //Link stream branch of CAM2
    if (!gst_element_link_many(stream_queue_cam2, final_scaler_stream_cam2, final_caps_stream_cam2, stream_videoconvert_cam2, stream_encoder_cam2, 
        parser_stream_cam2, stream_muxer_cam2, stream_sink2 ,NULL)){
        g_printerr("Failed to link stream branch of CAM2.\n");
        return -1;
    }
    
    //Link elements of CAM3
    
    //Link source3 to tee3 of CAM3
    if (!gst_element_link_many(source3, source_queue3, source_caps_cam3, tiovxisp_cam3, source_videoconvert_cam3, nv12_caps_cam3, tee3, NULL)){
        g_printerr("Failed to link source3 to tee3.\n");
        return -1;
    }
    
    //Link tee3 to saving and streaming branch of CAM3
    GstPad *tee3_pad1 = gst_element_request_pad_simple(tee3, "src_%u");
    GstPad *tee3_pad2 = gst_element_request_pad_simple(tee3, "src_%u");
    GstPad *scaler_split_pad3 = gst_element_get_static_pad(scaler_split3,"sink");
    GstPad *stream_pad3 = gst_element_get_static_pad(stream_queue_cam3,"sink");
    if(gst_pad_link(tee3_pad1, scaler_split_pad3) != GST_PAD_LINK_OK || 
    gst_pad_link(tee3_pad2, stream_pad3) != GST_PAD_LINK_OK){
        g_printerr("Failed to link tee3 from CAM3 to saving and streaming branch.\n");
        return -1;
        }
    gst_object_unref(tee3_pad1);
    gst_object_unref(tee3_pad2);
    gst_object_unref(scaler_split_pad3);
    gst_object_unref(stream_pad3);
    
    //Multiscaler pad connection to sink and detection of CAM3
    // Link multiscaler to both detection and sink branches
    GstPad *scaler3_src_pad1 = gst_element_request_pad_simple(scaler_split3, "src_%u");
    if(!scaler3_src_pad1){
        g_printerr("Failed to get source pad1 from multiscaler of CAM3.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler3_src_pad1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler3_src_pad2 = gst_element_request_pad_simple(scaler_split3, "src_%u");
    if(!scaler3_src_pad2){
        g_printerr("Failed to get source pad2 from multiscaler of CAM3.\n");
        return -1;
        }
    g_object_set(G_OBJECT(scaler3_src_pad2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad3 = gst_element_get_static_pad(detect_queue3, "sink");
    GstPad *sink_sink_pad3 = gst_element_get_static_pad(sink_queue3, "sink");
    
    if (gst_pad_link(scaler3_src_pad1, detect_sink_pad3) != GST_PAD_LINK_OK ||
    gst_pad_link(scaler3_src_pad2, sink_sink_pad3) != GST_PAD_LINK_OK) {
        g_printerr("Failed to link scaler 3 tee branches of CAM3.\n");
        return -1;
        }
    
    gst_object_unref(scaler3_src_pad1);
    gst_object_unref(scaler3_src_pad2);
    gst_object_unref(detect_sink_pad3);
    gst_object_unref(sink_sink_pad3);
    
    //Link detection branch of CAM3
    if (!gst_element_link_many(detect_queue3, detect_scaler1_cam3, intermediate_caps_cam3, detect_scaler2_cam3, detect_caps_cam3, preproc_cam3, tensor_caps_cam3, inference_cam3, NULL)){
        g_printerr("Failed to link detection branch of CAM3.\n");
        return -1;
        }
    
    //manually link inference output to post.tensor
    GstPad *inference_src3 = gst_element_get_static_pad(inference_cam3, "src");
    GstPad *post_tensor3 = gst_element_get_static_pad(postproc_cam3, "tensor");
    if (gst_pad_link(inference_src3, post_tensor3) != GST_PAD_LINK_OK){
        g_printerr("Failed to link inference to post.tensor of CAM3.\n");
        return -1;
        }
    gst_object_unref(inference_src3);
    gst_object_unref(post_tensor3);
    
    //Link sink branch of CAM3
    if(!gst_element_link(sink_queue3,sink_caps_cam3)){
        g_printerr("Failed to link sink queue of CAM3.\n");
        return -1;
    }
    
    //Link display branch to post.sink of CAM3
    GstPad *sink_src3 = gst_element_get_static_pad(sink_caps_cam3, "src");
    GstPad *post_sink3 = gst_element_get_static_pad(postproc_cam3, "sink");
    if (gst_pad_link(sink_src3, post_sink3) != GST_PAD_LINK_OK){
        g_printerr("Failed to link sink src to post.sink of CAM3.\n");
        return -1;
    }
    gst_object_unref(sink_src3);
    gst_object_unref(post_sink3);
    
    //Link postproc to splitsink of CAM3
    if (!gst_element_link_many(postproc_cam3, final_scaler_save_cam3, final_caps_save_cam3, split_videoconvert_cam3, encoder_queue3, encoder_cam3, parser_save_cam3, split_sink3, NULL)){
        g_printerr("Failed to link postproc to split saving of CAM3.\n");
        return -1;
    }
    
    //Link post.text to text_queue.sink pad
    GstPad *text_src3 = gst_element_request_pad_simple(postproc_cam3, "text");
    GstPad *text_queue_pad3 = gst_element_get_static_pad(text_queue_cam3, "sink");
    if (gst_pad_link(text_src3, text_queue_pad3) != GST_PAD_LINK_OK){
        g_printerr("Failed to link text pad branch of CAM3.\n");
        return -1;
    }
    gst_object_unref(text_src3);
    gst_object_unref(text_queue_pad3);
    
    //Link text branch of CAM3
    if (!gst_element_link(text_queue_cam3, filesink3)){
        g_printerr("Failed to link text branch of CAM3.\n");
        return -1;
    }
    
    //Link stream branch of CAM3
    if (!gst_element_link_many(stream_queue_cam3, final_scaler_stream_cam3, final_caps_stream_cam3, stream_videoconvert_cam3, stream_encoder_cam3, 
        parser_stream_cam3, stream_muxer_cam3, stream_sink3 ,NULL)){
        g_printerr("Failed to link stream branch of CAM3.\n");
        return -1;
    }
    
    //AUDIO BRANCH
    //Link audio branch
    if(!gst_element_link_many(alsasrc, audio_src_queue, audioconvert, audioresample, audio_encoder, audio_parser, audio_tee, NULL)){
        g_printerr("Failed to link audio branch.\n");
        return -1;
        }
    //Link audio_tee to audio_queue's
    GstPad *audio_tee_pad1 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_tee_pad2 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_tee_pad3 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_sink_sink_queue1_pad = gst_element_get_static_pad(audio_sink_queue1, "sink");
    GstPad *audio_sink_sink_queue2_pad = gst_element_get_static_pad(audio_sink_queue2, "sink");
    GstPad *audio_sink_sink_queue3_pad = gst_element_get_static_pad(audio_sink_queue3, "sink");
    if(gst_pad_link(audio_tee_pad1, audio_sink_sink_queue1_pad) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_tee_pad2, audio_sink_sink_queue2_pad) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_tee_pad3, audio_sink_sink_queue3_pad) != GST_PAD_LINK_OK){
        g_printerr("Failed to link audio_tee to audio queue pads.\n");
        return -1;
        }
        
    //Note: do not call gst_element_release_request_pad() here. Releasing a tee
    //request pad removes it from the element and unlinks the audio branches;
    //release the request pads only during pipeline teardown.
    gst_object_unref(audio_tee_pad1);
    gst_object_unref(audio_tee_pad2);
    gst_object_unref(audio_tee_pad3);
    gst_object_unref(audio_sink_sink_queue1_pad);
    gst_object_unref(audio_sink_sink_queue2_pad);
    gst_object_unref(audio_sink_sink_queue3_pad);
    
        
    
    //Manually link audio branch to CAM1, CAM2 and CAM3 videos
    
    GstPad *audio_sink_src_queue1_pad = gst_element_get_static_pad(audio_sink_queue1, "src");
    GstPad *audio_sink_src_queue2_pad = gst_element_get_static_pad(audio_sink_queue2, "src");
    GstPad *audio_sink_src_queue3_pad = gst_element_get_static_pad(audio_sink_queue3, "src");
    GstPad *muxer_audio_pad_cam1 = gst_element_request_pad_simple(split_sink1, "audio_%u");
    GstPad *muxer_audio_pad_cam2 = gst_element_request_pad_simple(split_sink2, "audio_%u");
    GstPad *muxer_audio_pad_cam3 = gst_element_request_pad_simple(split_sink3, "audio_%u");
    
    
    if (gst_pad_link(audio_sink_src_queue1_pad, muxer_audio_pad_cam1) != GST_PAD_LINK_OK ||
        gst_pad_link(audio_sink_src_queue2_pad, muxer_audio_pad_cam2) != GST_PAD_LINK_OK ||
        gst_pad_link(audio_sink_src_queue3_pad, muxer_audio_pad_cam3) != GST_PAD_LINK_OK) {
        g_printerr("Failed to link pads to splitmuxsink.\n");
        return -1;
    }
      
    
    gst_object_unref(audio_sink_src_queue1_pad);
    gst_object_unref(audio_sink_src_queue2_pad);
    gst_object_unref(audio_sink_src_queue3_pad);
    gst_object_unref(muxer_audio_pad_cam1);
    gst_object_unref(muxer_audio_pad_cam2);
    gst_object_unref(muxer_audio_pad_cam3);
    
    
    
    //Start pipeline
    ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE) {
        g_printerr("Unable to set the pipeline to the playing state.\n");
        gst_object_unref(pipeline);
        return -1;
    }
    
    // Bus message handling
    bus = gst_element_get_bus(pipeline);
    do {
        msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS | GST_MESSAGE_STATE_CHANGED);
          
        if (msg != NULL) {
            GError *err;
            gchar *debug_info;

            switch (GST_MESSAGE_TYPE(msg)) {
                case GST_MESSAGE_ERROR:
                    gst_message_parse_error(msg, &err, &debug_info);
                    g_printerr("Error received from element %s: %s\n",
                        GST_OBJECT_NAME(msg->src), err->message);
                    g_printerr("Debugging information: %s\n",
                        debug_info ? debug_info : "none");
                    g_clear_error(&err);
                    g_free(debug_info);
                    terminate = TRUE;
                    break;
                case GST_MESSAGE_EOS:
                    g_print("End of stream reached.\n");
                    terminate = TRUE;
                    break;
                case GST_MESSAGE_STATE_CHANGED:
                    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(pipeline)) {
                        GstState old_state, new_state, pending_state;
                        gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
                        g_print("Pipeline state changed from %s to %s\n",
                            gst_element_state_get_name(old_state),
                            gst_element_state_get_name(new_state));
                    }
                    break;
                default:
                    g_printerr("Unexpected message received.\n");
                    break;
            }
            gst_message_unref(msg);
        }
    } while (!terminate);
    
    // Cleanup
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    
    return 0;
    }
    

    #include <gst/gst.h>
    #include <glib.h>
    
    int main(int argc, char *argv[]){
    GstElement *pipeline; 
    
    //cam1
    GstElement *source_cam1, *source_queue_cam1, *source_caps_cam1, *tiovxisp_cam1, *videoconvert_cam1, *nv12_caps_cam1, *scaler_split_cam1;
    GstElement *detect_queue_cam1, *detect_scaler1_cam1, *intermediate_caps_cam1, *detect_scaler2_cam1,  *detect_caps_cam1, *preproc_cam1, *tensor_caps_cam1, *inference_cam1;
    GstElement *sink_queue_cam1, *sink_caps_cam1;
    GstElement *text_queue_cam1, *filesink_cam1;
    GstElement *postproc_cam1, *final_scaler_cam1, *final_caps_cam1, *videoconvert_split_cam1, *encoder_queue_cam1, *encoder_cam1, *parser_cam1, *split_sink_cam1;
    
    //cam2
    GstElement *source_cam2, *source_queue_cam2, *source_caps_cam2, *tiovxisp_cam2, *videoconvert_cam2, *nv12_caps_cam2, *scaler_split_cam2;
    GstElement *detect_queue_cam2, *detect_scaler1_cam2, *intermediate_caps_cam2, *detect_scaler2_cam2, *detect_caps_cam2, *preproc_cam2, *tensor_caps_cam2, *inference_cam2;
    GstElement *sink_queue_cam2, *sink_caps_cam2;
    GstElement *text_queue_cam2, *filesink_cam2;
    GstElement *postproc_cam2, *final_scaler_cam2, *final_caps_cam2, *videoconvert_split_cam2, *encoder_queue_cam2, *encoder_cam2, *parser_cam2, *split_sink_cam2;
    //cam3
    GstElement *source_cam3, *source_queue_cam3, *source_caps_cam3, *tiovxisp_cam3, *videoconvert_cam3, *nv12_caps_cam3, *scaler_split_cam3;
    GstElement *detect_queue_cam3, *detect_scaler1_cam3, *intermediate_caps_cam3, *detect_scaler2_cam3, *detect_caps_cam3, *preproc_cam3, *tensor_caps_cam3, *inference_cam3;
    GstElement *sink_queue_cam3, *sink_caps_cam3;
    GstElement *text_queue_cam3, *filesink_cam3;
    GstElement *postproc_cam3, *final_scaler_cam3, *final_caps_cam3, *videoconvert_split_cam3, *encoder_queue_cam3, *encoder_cam3, *parser_cam3, *split_sink_cam3;
    
    //audio
    GstElement *audio_src_queue, *alsasrc, *audioconvert, *audioresample, *audio_encoder, *audio_parser, *audio_tee, *audio_sink_queue_cam1, *audio_sink_queue_cam2, *audio_sink_queue_cam3;
    
    GstBus *bus;
    GstMessage *msg;
    GstStateChangeReturn ret;
    gboolean terminate = FALSE;
    
    gst_init(&argc, &argv);
    
    //create elements
    
    pipeline = gst_pipeline_new("Video-Pipeline");
    //cam1
    source_cam1 = gst_element_factory_make("v4l2src","source_cam1");
    source_queue_cam1 = gst_element_factory_make("queue","source_queue_cam1");
    source_caps_cam1 = gst_element_factory_make("capsfilter","source_caps_cam1");
    tiovxisp_cam1 = gst_element_factory_make("tiovxisp","tiovxisp_cam1");
    videoconvert_cam1 = gst_element_factory_make("videoconvert","videoconvert_cam1");
    nv12_caps_cam1 = gst_element_factory_make("capsfilter","nv12_caps_cam1");
    scaler_split_cam1 = gst_element_factory_make("tiovxmultiscaler","scaler_split_cam1");
    detect_queue_cam1 = gst_element_factory_make("queue","detect_queue_cam1");
    detect_scaler1_cam1 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam1");
    intermediate_caps_cam1 = gst_element_factory_make("capsfilter", "intermediate_caps_cam1");
    detect_scaler2_cam1 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler2_cam1");
    detect_caps_cam1 = gst_element_factory_make("capsfilter","detect_caps_cam1");
    preproc_cam1 = gst_element_factory_make("tiovxdlpreproc","preproc_cam1");
    tensor_caps_cam1 = gst_element_factory_make("capsfilter","tensor_caps_cam1");
    inference_cam1 = gst_element_factory_make("tidlinferer","inference_cam1");
    sink_queue_cam1 = gst_element_factory_make("queue","sink_queue_cam1");
    sink_caps_cam1 = gst_element_factory_make("capsfilter","sink_caps_cam1");
    postproc_cam1 = gst_element_factory_make("tidlpostproc","postproc_cam1");
    final_scaler_cam1 = gst_element_factory_make("tiovxmultiscaler","final_scaler_cam1");
    final_caps_cam1 = gst_element_factory_make("capsfilter","final_caps_cam1");
    videoconvert_split_cam1 = gst_element_factory_make("videoconvert","videoconvert_split_cam1");
    encoder_queue_cam1 = gst_element_factory_make("queue","encoder_queue_cam1");
    encoder_cam1 = gst_element_factory_make("v4l2h264enc","encoder_cam1");
    parser_cam1 = gst_element_factory_make("h264parse","parser_cam1");
    split_sink_cam1 = gst_element_factory_make("splitmuxsink","split_sink_cam1");
    text_queue_cam1 = gst_element_factory_make("queue","text_queue_cam1");
    filesink_cam1 = gst_element_factory_make("filesink","filesink_cam1");
    
    //cam2
    source_cam2 = gst_element_factory_make("v4l2src","source_cam2");
    source_queue_cam2 = gst_element_factory_make("queue","source_queue_cam2");
    source_caps_cam2 = gst_element_factory_make("capsfilter","source_caps_cam2");
    tiovxisp_cam2 = gst_element_factory_make("tiovxisp","tiovxisp_cam2");
    videoconvert_cam2 = gst_element_factory_make("videoconvert","videoconvert_cam2");
    nv12_caps_cam2 = gst_element_factory_make("capsfilter","nv12_caps_cam2");
    scaler_split_cam2 = gst_element_factory_make("tiovxmultiscaler","scaler_split_cam2");
    detect_queue_cam2 = gst_element_factory_make("queue","detect_queue_cam2");
    detect_scaler1_cam2 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam2");
    intermediate_caps_cam2 = gst_element_factory_make("capsfilter", "intermediate_caps_cam2");
    detect_scaler2_cam2 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler2_cam2");
    detect_caps_cam2 = gst_element_factory_make("capsfilter","detect_caps_cam2");
    preproc_cam2 = gst_element_factory_make("tiovxdlpreproc","preproc_cam2");
    tensor_caps_cam2 = gst_element_factory_make("capsfilter","tensor_caps_cam2");
    inference_cam2 = gst_element_factory_make("tidlinferer","inference_cam2");
    sink_queue_cam2 = gst_element_factory_make("queue","sink_queue_cam2");
    sink_caps_cam2 = gst_element_factory_make("capsfilter","sink_caps_cam2");
    postproc_cam2 = gst_element_factory_make("tidlpostproc","postproc_cam2");
    final_scaler_cam2 = gst_element_factory_make("tiovxmultiscaler","final_scaler_cam2");
    final_caps_cam2 = gst_element_factory_make("capsfilter","final_caps_cam2");
    videoconvert_split_cam2 = gst_element_factory_make("videoconvert","videoconvert_split_cam2");
    encoder_queue_cam2 = gst_element_factory_make("queue","encoder_queue_cam2");
    encoder_cam2 = gst_element_factory_make("v4l2h264enc","encoder_cam2");
    parser_cam2 = gst_element_factory_make("h264parse","parser_cam2");
    split_sink_cam2 = gst_element_factory_make("splitmuxsink","split_sink_cam2");
    text_queue_cam2 = gst_element_factory_make("queue","text_queue_cam2");
    filesink_cam2 = gst_element_factory_make("filesink","filesink_cam2");
    
    //cam3
    source_cam3 = gst_element_factory_make("v4l2src","source_cam3");
    source_queue_cam3 = gst_element_factory_make("queue","source_queue_cam3");
    source_caps_cam3 = gst_element_factory_make("capsfilter","source_caps_cam3");
    tiovxisp_cam3 = gst_element_factory_make("tiovxisp","tiovxisp_cam3");
    videoconvert_cam3 = gst_element_factory_make("videoconvert","videoconvert_cam3");
    nv12_caps_cam3 = gst_element_factory_make("capsfilter","nv12_caps_cam3");
    scaler_split_cam3 = gst_element_factory_make("tiovxmultiscaler","scaler_split_cam3");
    detect_queue_cam3 = gst_element_factory_make("queue","detect_queue_cam3");
    detect_scaler1_cam3 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler1_cam3");
    intermediate_caps_cam3 = gst_element_factory_make("capsfilter", "intermediate_caps_cam3");
    detect_scaler2_cam3 = gst_element_factory_make("tiovxmultiscaler", "detect_scaler2_cam3");
    detect_caps_cam3 = gst_element_factory_make("capsfilter","detect_caps_cam3");
    preproc_cam3 = gst_element_factory_make("tiovxdlpreproc","preproc_cam3");
    tensor_caps_cam3 = gst_element_factory_make("capsfilter","tensor_caps_cam3");
    inference_cam3 = gst_element_factory_make("tidlinferer","inference_cam3");
    sink_queue_cam3 = gst_element_factory_make("queue","sink_queue_cam3");
    sink_caps_cam3 = gst_element_factory_make("capsfilter","sink_caps_cam3");
    postproc_cam3 = gst_element_factory_make("tidlpostproc","postproc_cam3");
    final_scaler_cam3 = gst_element_factory_make("tiovxmultiscaler","final_scaler_cam3");
    final_caps_cam3 = gst_element_factory_make("capsfilter","final_caps_cam3");
    videoconvert_split_cam3 = gst_element_factory_make("videoconvert","videoconvert_split_cam3");
    encoder_queue_cam3 = gst_element_factory_make("queue","encoder_queue_cam3");
    encoder_cam3 = gst_element_factory_make("v4l2h264enc","encoder_cam3");
    parser_cam3 = gst_element_factory_make("h264parse","parser_cam3");
    split_sink_cam3 = gst_element_factory_make("splitmuxsink","split_sink_cam3");
    text_queue_cam3 = gst_element_factory_make("queue","text_queue_cam3");
    filesink_cam3 = gst_element_factory_make("filesink","filesink_cam3");
    
    //audio
    alsasrc = gst_element_factory_make("alsasrc","alsasrc");
    audio_src_queue = gst_element_factory_make("queue","audio_src_queue");
    audioconvert = gst_element_factory_make("audioconvert","audioconvert");
    audioresample = gst_element_factory_make("audioresample","audioresample");
    audio_encoder = gst_element_factory_make("avenc_aac","audio_encoder");
    audio_parser = gst_element_factory_make("aacparse","audio_parser");
    audio_tee = gst_element_factory_make("tee", "audio_tee");
    audio_sink_queue_cam1 = gst_element_factory_make("queue","audio_sink_queue_cam1");
    audio_sink_queue_cam2 = gst_element_factory_make("queue","audio_sink_queue_cam2");
    audio_sink_queue_cam3 = gst_element_factory_make("queue","audio_sink_queue_cam3");
    
    //verify element created
    if(!pipeline 
        // Camera 1
        || !source_cam1 || !source_queue_cam1 || !source_caps_cam1 || !tiovxisp_cam1 || !videoconvert_cam1 || !nv12_caps_cam1 || !scaler_split_cam1
        || !detect_queue_cam1 || !detect_scaler1_cam1 || !intermediate_caps_cam1 || !detect_scaler2_cam1 || !detect_caps_cam1 || !preproc_cam1 || !tensor_caps_cam1 || !inference_cam1
        || !sink_queue_cam1 || !sink_caps_cam1
        || !postproc_cam1 || !final_scaler_cam1 || !final_caps_cam1 || !videoconvert_split_cam1 || !encoder_queue_cam1 || !encoder_cam1 || !parser_cam1
        || !split_sink_cam1 || !text_queue_cam1 || !filesink_cam1
    
        // Camera 2
        || !source_cam2 || !source_queue_cam2 || !source_caps_cam2 || !tiovxisp_cam2 || !videoconvert_cam2 || !nv12_caps_cam2 || !scaler_split_cam2
        || !detect_queue_cam2 || !detect_scaler1_cam2 || !intermediate_caps_cam2 || !detect_scaler2_cam2 || !detect_caps_cam2 || !preproc_cam2 || !tensor_caps_cam2 || !inference_cam2
        || !sink_queue_cam2 || !sink_caps_cam2
        || !postproc_cam2 || !final_scaler_cam2 || !final_caps_cam2 || !videoconvert_split_cam2 || !encoder_queue_cam2 || !encoder_cam2 || !parser_cam2
        || !split_sink_cam2 || !text_queue_cam2 || !filesink_cam2
    
        // Camera 3
        || !source_cam3 || !source_queue_cam3 || !source_caps_cam3 || !tiovxisp_cam3 || !videoconvert_cam3 || !nv12_caps_cam3 || !scaler_split_cam3
        || !detect_queue_cam3 || !detect_scaler1_cam3 || !intermediate_caps_cam3 || !detect_scaler2_cam3 || !detect_caps_cam3 || !preproc_cam3 || !tensor_caps_cam3 || !inference_cam3
        || !sink_queue_cam3 || !sink_caps_cam3
        || !postproc_cam3 || !final_scaler_cam3 || !final_caps_cam3 || !videoconvert_split_cam3 || !encoder_queue_cam3 || !encoder_cam3 || !parser_cam3
        || !split_sink_cam3 || !text_queue_cam3 || !filesink_cam3
        //audio
        || !alsasrc || !audio_src_queue || !audioconvert || !audioresample || !audio_encoder || !audio_parser || !audio_tee || !audio_sink_queue_cam1 || !audio_sink_queue_cam2 || !audio_sink_queue_cam3){
        g_printerr("Failed to create one or more elements.\n");
        return -1;
    }
    
    //Set properties for elements 
    
    
    // ========== CAM1 ==========
    g_object_set(G_OBJECT(source_cam1), "device","/dev/video-imx219-cam0", "io-mode",5, NULL);
    
    g_object_set(G_OBJECT(source_queue_cam1), "max-size-buffers",4, "leaky",2, NULL);
    
    GstCaps *source_caps_val_cam1 = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format", G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(source_caps_cam1), "caps", source_caps_val_cam1, NULL);
    gst_caps_unref(source_caps_val_cam1);
    
    g_object_set(G_OBJECT(tiovxisp_cam1),
        "sensor-name", "SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file", "/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb", 9, NULL);
    
    GstCaps *nv12_caps_val_cam1 = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "NV12", NULL);
    g_object_set(G_OBJECT(nv12_caps_cam1), "caps", nv12_caps_val_cam1, NULL);
    gst_caps_unref(nv12_caps_val_cam1);
    
    g_object_set(G_OBJECT(detect_queue_cam1), "leaky",2, "max-size-buffers",2, NULL);
    
    GstCaps *intermediate_caps_val_cam1 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 820, "height", G_TYPE_INT, 616, NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam1), "caps", intermediate_caps_val_cam1, NULL);
    gst_caps_unref(intermediate_caps_val_cam1);
    
    GstCaps *detect_caps_val_cam1 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 320, "height", G_TYPE_INT, 320, NULL);
    g_object_set(G_OBJECT(detect_caps_cam1), "caps", detect_caps_val_cam1, NULL);
    gst_caps_unref(detect_caps_val_cam1);
    
    g_object_set(G_OBJECT(preproc_cam1),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", 
        "out-pool-size", 4, NULL);
    
    GstCaps *tensor_caps_val_cam1 = gst_caps_new_simple("application/x-tensor-tiovx", NULL);
    g_object_set(G_OBJECT(tensor_caps_cam1), "caps", tensor_caps_val_cam1, NULL);
    gst_caps_unref(tensor_caps_val_cam1);
    
    g_object_set(G_OBJECT(inference_cam1),
        "target", 1,
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", NULL);
    
    g_object_set(G_OBJECT(sink_queue_cam1), "leaky", 2, "max-size-buffers", 2, NULL);
    
    GstCaps *sink_caps_val_cam1 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 3280, "height", G_TYPE_INT, 2464, NULL);
    g_object_set(G_OBJECT(sink_caps_cam1), "caps", sink_caps_val_cam1, NULL);
    gst_caps_unref(sink_caps_val_cam1);
    
    g_object_set(G_OBJECT(postproc_cam1),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "alpha", 0.4,
        "viz-threshold", 0.6,
        "top-N", 5,
        "display-model", TRUE, NULL);
    
    GstCaps *final_caps_val_cam1 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 1920,
        "height", G_TYPE_INT, 1080,
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(final_caps_cam1), "caps", final_caps_val_cam1, NULL);
    gst_caps_unref(final_caps_val_cam1);
    
    g_object_set(G_OBJECT(encoder_queue_cam1), "max-size-buffers", 1, NULL);
    
    GstStructure *extra_controls_cam1 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10, NULL);
    g_object_set(G_OBJECT(encoder_cam1), "extra-controls", extra_controls_cam1, NULL);
    
    g_object_set(G_OBJECT(split_sink_cam1),
        "location", "camera1_event/video%02d.mkv",
        "max-size-time", (guint64) 10000000000, /* 10 s */
        "max-files", 4,
        "muxer-factory", "matroskamux", NULL);
    
    g_object_set(G_OBJECT(filesink_cam1),
        "location", "camera1_event/class.yaml",
        "sync", TRUE, NULL);
    
    // ========== CAM2 ==========
    g_object_set(G_OBJECT(source_cam2), "device","/dev/video-imx219-cam1", "io-mode",5, NULL);
    
    g_object_set(G_OBJECT(source_queue_cam2), "max-size-buffers",4, "leaky",2, NULL);
    
    GstCaps *source_caps_val_cam2 = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format", G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(source_caps_cam2), "caps", source_caps_val_cam2, NULL);
    gst_caps_unref(source_caps_val_cam2);
    
    g_object_set(G_OBJECT(tiovxisp_cam2),
        "sensor-name", "SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file", "/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb", 9, NULL);
    
    GstCaps *nv12_caps_val_cam2 = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "NV12", NULL);
    g_object_set(G_OBJECT(nv12_caps_cam2), "caps", nv12_caps_val_cam2, NULL);
    gst_caps_unref(nv12_caps_val_cam2);
    
    g_object_set(G_OBJECT(detect_queue_cam2), "leaky",2, "max-size-buffers",2, NULL);
    
    GstCaps *intermediate_caps_val_cam2 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 820, "height", G_TYPE_INT, 616, NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam2), "caps", intermediate_caps_val_cam2, NULL);
    gst_caps_unref(intermediate_caps_val_cam2);
    
    GstCaps *detect_caps_val_cam2 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 320, "height", G_TYPE_INT, 320, NULL);
    g_object_set(G_OBJECT(detect_caps_cam2), "caps", detect_caps_val_cam2, NULL);
    gst_caps_unref(detect_caps_val_cam2);
    
    g_object_set(G_OBJECT(preproc_cam2),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", 
        "out-pool-size", 4, NULL);
    
    GstCaps *tensor_caps_val_cam2 = gst_caps_new_simple("application/x-tensor-tiovx", NULL);
    g_object_set(G_OBJECT(tensor_caps_cam2), "caps", tensor_caps_val_cam2, NULL);
    gst_caps_unref(tensor_caps_val_cam2);
    
    g_object_set(G_OBJECT(inference_cam2),
        "target", 1,
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", NULL);
    
    g_object_set(G_OBJECT(sink_queue_cam2), "leaky", 2, "max-size-buffers", 2, NULL);
    
    GstCaps *sink_caps_val_cam2 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 3280, "height", G_TYPE_INT, 2464, NULL);
    g_object_set(G_OBJECT(sink_caps_cam2), "caps", sink_caps_val_cam2, NULL);
    gst_caps_unref(sink_caps_val_cam2);
    
    g_object_set(G_OBJECT(postproc_cam2),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "alpha", 0.4,
        "viz-threshold", 0.6,
        "top-N", 5,
        "display-model", TRUE, NULL);
    
    GstCaps *final_caps_val_cam2 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 1920,
        "height", G_TYPE_INT, 1080,
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(final_caps_cam2), "caps", final_caps_val_cam2, NULL);
    gst_caps_unref(final_caps_val_cam2);
    
    g_object_set(G_OBJECT(encoder_queue_cam2), "max-size-buffers", 1, NULL);
    
    GstStructure *extra_controls_cam2 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10, NULL);
    g_object_set(G_OBJECT(encoder_cam2), "extra-controls", extra_controls_cam2, NULL);
    
    g_object_set(G_OBJECT(split_sink_cam2),
        "location", "camera2_event/video%02d.mkv",
        "max-size-time", (guint64) 10000000000, /* 10 s */
        "max-files", 4,
        "muxer-factory", "matroskamux", NULL);
    
    g_object_set(G_OBJECT(filesink_cam2),
        "location", "camera2_event/class.yaml",
        "sync", TRUE, NULL);
    
    // ========== CAM3 ==========
    g_object_set(G_OBJECT(source_cam3), "device","/dev/video-imx219-cam2", "io-mode",5, NULL);
    
    g_object_set(G_OBJECT(source_queue_cam3), "max-size-buffers",4, "leaky",2, NULL);
    
    GstCaps *source_caps_val_cam3 = gst_caps_new_simple("video/x-bayer",
        "width", G_TYPE_INT, 3280,
        "height", G_TYPE_INT, 2464,
        "format", G_TYPE_STRING, "rggb10",
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(source_caps_cam3), "caps", source_caps_val_cam3, NULL);
    gst_caps_unref(source_caps_val_cam3);
    
    g_object_set(G_OBJECT(tiovxisp_cam3),
        "sensor-name", "SENSOR_SONY_IMX219_RPI",
        "dcc-isp-file", "/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin",
        "format-msb", 9, NULL);
    
    GstCaps *nv12_caps_val_cam3 = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "NV12", NULL);
    g_object_set(G_OBJECT(nv12_caps_cam3), "caps", nv12_caps_val_cam3, NULL);
    gst_caps_unref(nv12_caps_val_cam3);
    
    g_object_set(G_OBJECT(detect_queue_cam3), "leaky",2, "max-size-buffers",2, NULL);
    
    GstCaps *intermediate_caps_val_cam3 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 820, "height", G_TYPE_INT, 616, NULL);
    g_object_set(G_OBJECT(intermediate_caps_cam3), "caps", intermediate_caps_val_cam3, NULL);
    gst_caps_unref(intermediate_caps_val_cam3);
    
    GstCaps *detect_caps_val_cam3 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 320, "height", G_TYPE_INT, 320, NULL);
    g_object_set(G_OBJECT(detect_caps_cam3), "caps", detect_caps_val_cam3, NULL);
    gst_caps_unref(detect_caps_val_cam3);
    
    g_object_set(G_OBJECT(preproc_cam3),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", 
        "out-pool-size", 4, NULL);
    
    GstCaps *tensor_caps_val_cam3 = gst_caps_new_simple("application/x-tensor-tiovx", NULL);
    g_object_set(G_OBJECT(tensor_caps_cam3), "caps", tensor_caps_val_cam3, NULL);
    gst_caps_unref(tensor_caps_val_cam3);
    
    g_object_set(G_OBJECT(inference_cam3),
        "target", 1,
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320", NULL);
    
    g_object_set(G_OBJECT(sink_queue_cam3), "leaky", 2, "max-size-buffers", 2, NULL);
    
    GstCaps *sink_caps_val_cam3 = gst_caps_new_simple("video/x-raw", "width", G_TYPE_INT, 3280, "height", G_TYPE_INT, 2464, NULL);
    g_object_set(G_OBJECT(sink_caps_cam3), "caps", sink_caps_val_cam3, NULL);
    gst_caps_unref(sink_caps_val_cam3);
    
    g_object_set(G_OBJECT(postproc_cam3),
        "model", "/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320",
        "alpha", 0.4,
        "viz-threshold", 0.6,
        "top-N", 5,
        "display-model", TRUE, NULL);
    
    GstCaps *final_caps_val_cam3 = gst_caps_new_simple("video/x-raw",
        "width", G_TYPE_INT, 1920,
        "height", G_TYPE_INT, 1080,
        "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
    g_object_set(G_OBJECT(final_caps_cam3), "caps", final_caps_val_cam3, NULL);
    gst_caps_unref(final_caps_val_cam3);
    
    g_object_set(G_OBJECT(encoder_queue_cam3), "max-size-buffers", 1, NULL);
    
    GstStructure *extra_controls_cam3 = gst_structure_new("controls",
        "h264_i_frame_period", G_TYPE_INT, 10, NULL);
    g_object_set(G_OBJECT(encoder_cam3), "extra-controls", extra_controls_cam3, NULL);
    
    g_object_set(G_OBJECT(split_sink_cam3),
        "location", "camera3_event/video%02d.mkv",
        "max-size-time", (guint64) 10000000000, /* 10 s */
        "max-files", 4,
        "muxer-factory", "matroskamux", NULL);
    
    g_object_set(G_OBJECT(filesink_cam3),
        "location", "camera3_event/class.yaml",
        "sync", TRUE, NULL);
    
    
    // Add elements to pipeline
    gst_bin_add_many(GST_BIN(pipeline),
        //cam1
        source_cam1, source_queue_cam1, source_caps_cam1, tiovxisp_cam1, videoconvert_cam1, nv12_caps_cam1, scaler_split_cam1,
        detect_queue_cam1, detect_caps_cam1, detect_scaler1_cam1, intermediate_caps_cam1, detect_scaler2_cam1, preproc_cam1, tensor_caps_cam1, inference_cam1,
        sink_queue_cam1, sink_caps_cam1,
        postproc_cam1, final_scaler_cam1, final_caps_cam1, videoconvert_split_cam1, encoder_queue_cam1, encoder_cam1, parser_cam1, split_sink_cam1,
        text_queue_cam1, filesink_cam1,
        //cam2
        source_cam2, source_queue_cam2, source_caps_cam2, tiovxisp_cam2, videoconvert_cam2, nv12_caps_cam2, scaler_split_cam2,
        detect_queue_cam2, detect_caps_cam2, detect_scaler1_cam2, intermediate_caps_cam2, detect_scaler2_cam2, preproc_cam2, tensor_caps_cam2, inference_cam2,
        sink_queue_cam2, sink_caps_cam2,
        postproc_cam2, final_scaler_cam2, final_caps_cam2, videoconvert_split_cam2, encoder_queue_cam2, encoder_cam2, parser_cam2, split_sink_cam2,
        text_queue_cam2, filesink_cam2,
        //cam3
        source_cam3, source_queue_cam3, source_caps_cam3, tiovxisp_cam3, videoconvert_cam3, nv12_caps_cam3, scaler_split_cam3,
        detect_queue_cam3, detect_caps_cam3, detect_scaler1_cam3, intermediate_caps_cam3, detect_scaler2_cam3, preproc_cam3, tensor_caps_cam3, inference_cam3,
        sink_queue_cam3, sink_caps_cam3,
        postproc_cam3, final_scaler_cam3, final_caps_cam3, videoconvert_split_cam3, encoder_queue_cam3, encoder_cam3, parser_cam3, split_sink_cam3,
        text_queue_cam3, filesink_cam3,
        //audio
        alsasrc, audio_src_queue, audioconvert, audioresample, audio_encoder, audio_parser, audio_tee, audio_sink_queue_cam1,audio_sink_queue_cam2, audio_sink_queue_cam3,
        NULL);
    
    //Set Pad Properties
    //CAM1   
    GstPad *tiovx_pad_cam1 = gst_element_request_pad_simple(tiovxisp_cam1,"sink_%u");
    if (!tiovx_pad_cam1){
        g_printerr("Failed to get sink pad from tiovxisp of cam1\n");
        return -1;
        }
    g_object_set(G_OBJECT(tiovx_pad_cam1),
        "dcc-2a-file","/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device","/dev/v4l-imx219-subdev0",
        NULL);
    gst_object_unref(tiovx_pad_cam1);
    //CAM2
    GstPad *tiovx_pad_cam2 = gst_element_request_pad_simple(tiovxisp_cam2, "sink_%u");
    if (!tiovx_pad_cam2) {
        g_printerr("Failed to get sink pad from tiovxisp of cam2\n");
        return -1;
    }
    g_object_set(G_OBJECT(tiovx_pad_cam2),
        "dcc-2a-file", "/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device", "/dev/v4l-imx219-subdev1",
        NULL);
    gst_object_unref(tiovx_pad_cam2);
    
    //CAM3
    GstPad *tiovx_pad_cam3 = gst_element_request_pad_simple(tiovxisp_cam3, "sink_%u");
    if (!tiovx_pad_cam3) {
        g_printerr("Failed to get sink pad from tiovxisp of cam3\n");
        return -1;
    }
    g_object_set(G_OBJECT(tiovx_pad_cam3),
        "dcc-2a-file", "/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin",
        "device", "/dev/v4l-imx219-subdev2",
        NULL);
    gst_object_unref(tiovx_pad_cam3);
    
        
    // Link elements
    // CAM1: Link elements of CAM1
    
    // CAM1: Link source to scaler split    
    if (!gst_element_link_many(source_cam1, source_queue_cam1, source_caps_cam1, tiovxisp_cam1, videoconvert_cam1, nv12_caps_cam1, scaler_split_cam1, NULL)) {
        g_printerr("CAM1: Source to nv12_caps link failed.\n");
        return -1;
    }
    
    // CAM1: detect_scaler1_cam1
    GstPad *detect_scaler1_src_cam1 = gst_element_request_pad_simple(detect_scaler1_cam1, "src_%u");
    if (!detect_scaler1_src_cam1){
        g_printerr("CAM1: Failed to request detect_scaler1 src pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_src_cam1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_src_cam1);
    
    GstPad *detect_scaler1_sink_cam1 = gst_element_get_static_pad(detect_scaler1_cam1,"sink");
    if (!detect_scaler1_sink_cam1){
        g_printerr("CAM1: Failed to get detect_scaler1 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_sink_cam1),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_sink_cam1);
    
    // CAM1: detect_scaler2_cam1
    GstPad *detect_scaler2_src_cam1 = gst_element_request_pad_simple(detect_scaler2_cam1, "src_%u");
    if (!detect_scaler2_src_cam1){
        g_printerr("CAM1: Failed to request detect_scaler2 src pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_src_cam1),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_src_cam1);
    
    GstPad *detect_scaler2_sink_cam1 = gst_element_get_static_pad(detect_scaler2_cam1,"sink");
    if (!detect_scaler2_sink_cam1){
        g_printerr("CAM1: Failed to get detect_scaler2 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_sink_cam1),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_sink_cam1);
    
    // CAM1: Link detection branch
    if (!gst_element_link_many(detect_queue_cam1, detect_scaler1_cam1, intermediate_caps_cam1, detect_scaler2_cam1, detect_caps_cam1, preproc_cam1, tensor_caps_cam1, inference_cam1, NULL)){
        g_printerr("CAM1: Detection branch link failed!\n");
        return -1;
    }
    
    // CAM1: manually link inference output to post.tensor
    GstPad *inference_src_cam1 = gst_element_get_static_pad(inference_cam1, "src");
    GstPad *post_tensor_cam1 = gst_element_get_static_pad(postproc_cam1, "tensor");
    if (gst_pad_link(inference_src_cam1, post_tensor_cam1) != GST_PAD_LINK_OK){
        g_printerr("CAM1: Failed to link inference to post.tensor.\n");
        return -1;
    }
    gst_object_unref(inference_src_cam1);
    gst_object_unref(post_tensor_cam1);
    
    // CAM1: Link sink branch
    if(!gst_element_link(sink_queue_cam1, sink_caps_cam1)){
        g_printerr("CAM1: sink_queue to sink_caps link failed!\n");
        return -1;
    }
          
    // CAM1: Link display branch to post.sink
    GstPad *sink_src_cam1 = gst_element_get_static_pad(sink_caps_cam1, "src");
    GstPad *post_sink_cam1 = gst_element_get_static_pad(postproc_cam1, "sink");
    if (gst_pad_link(sink_src_cam1, post_sink_cam1) != GST_PAD_LINK_OK){
        g_printerr("CAM1: Failed to link sink src to post.sink\n");
        return -1;
    }
    gst_object_unref(sink_src_cam1);
    gst_object_unref(post_sink_cam1);
        
    // CAM1: Link postproc to save branch   
    if(!gst_element_link_many(postproc_cam1, final_scaler_cam1, final_caps_cam1, videoconvert_split_cam1, encoder_queue_cam1, encoder_cam1, parser_cam1, split_sink_cam1,NULL)){
        g_printerr("CAM1: Failed to link postproc to splitsink.\n");
        return -1;
    }
    
    // CAM1: Link multiscaler to both detection and sink branches
    GstPad *scaler_src_pad1_cam1 = gst_element_request_pad_simple(scaler_split_cam1, "src_%u");
    if(!scaler_src_pad1_cam1){
        g_printerr("CAM1: Failed to get source pad1 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad1_cam1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler_src_pad2_cam1 = gst_element_request_pad_simple(scaler_split_cam1, "src_%u");
    if(!scaler_src_pad2_cam1){
        g_printerr("CAM1: Failed to get source pad2 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad2_cam1),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad_cam1 = gst_element_get_static_pad(detect_queue_cam1, "sink");
    GstPad *sink_sink_pad_cam1 = gst_element_get_static_pad(sink_queue_cam1, "sink");
    
    if (gst_pad_link(scaler_src_pad1_cam1, detect_sink_pad_cam1) != GST_PAD_LINK_OK ||
        gst_pad_link(scaler_src_pad2_cam1, sink_sink_pad_cam1) != GST_PAD_LINK_OK) {
        g_printerr("CAM1: Failed to link tee branches\n");
        return -1;
    }
    
    gst_object_unref(scaler_src_pad1_cam1);
    gst_object_unref(scaler_src_pad2_cam1);
    gst_object_unref(detect_sink_pad_cam1);
    gst_object_unref(sink_sink_pad_cam1);
      
    // CAM1: Link post.text to text_queue pad
    GstPad *text_src_cam1 = gst_element_request_pad_simple(postproc_cam1,"text");
    GstPad *text_queue_pad_cam1 = gst_element_get_static_pad(text_queue_cam1,"sink");
    if (gst_pad_link(text_src_cam1,text_queue_pad_cam1) != GST_PAD_LINK_OK){
        g_printerr("CAM1: Failed to link text pad branch\n");
        return -1;
    }
    gst_object_unref(text_src_cam1);
    gst_object_unref(text_queue_pad_cam1);
    
    // CAM1: Link text branch
    if(!gst_element_link(text_queue_cam1, filesink_cam1)){
        g_printerr("CAM1: Failed to link text branch.\n");
        return -1;
    }
    
    // CAM2: Link elements of CAM2
    
    // CAM2: Link source to scaler split    
    if (!gst_element_link_many(source_cam2, source_queue_cam2, source_caps_cam2, tiovxisp_cam2, videoconvert_cam2, nv12_caps_cam2, scaler_split_cam2, NULL)) {
        g_printerr("CAM2: Source to scaler split link failed.\n");
        return -1;
    }
    
    // CAM2: detect_scaler1_cam2
    GstPad *detect_scaler1_src_cam2 = gst_element_request_pad_simple(detect_scaler1_cam2, "src_%u");
    if (!detect_scaler1_src_cam2){
        g_printerr("CAM2: Failed to request detect_scaler1 source pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_src_cam2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_src_cam2);
    
    GstPad *detect_scaler1_sink_cam2 = gst_element_get_static_pad(detect_scaler1_cam2,"sink");
    if (!detect_scaler1_sink_cam2){
        g_printerr("CAM2: Failed to get detect_scaler1 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_sink_cam2),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_sink_cam2);
    
    // CAM2: detect_scaler2_cam2
    GstPad *detect_scaler2_src_cam2 = gst_element_request_pad_simple(detect_scaler2_cam2, "src_%u");
    if (!detect_scaler2_src_cam2){
        g_printerr("CAM2: Failed to request detect_scaler2 source pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_src_cam2),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_src_cam2);
    
    GstPad *detect_scaler2_sink_cam2 = gst_element_get_static_pad(detect_scaler2_cam2,"sink");
    if (!detect_scaler2_sink_cam2){
        g_printerr("CAM2: Failed to get detect_scaler2 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_sink_cam2),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_sink_cam2);
    
    // CAM2: Link detection branch
    if (!gst_element_link_many(detect_queue_cam2, detect_scaler1_cam2, intermediate_caps_cam2, detect_scaler2_cam2, detect_caps_cam2, preproc_cam2, tensor_caps_cam2, inference_cam2, NULL)){
        g_printerr("CAM2: Detection branch link failed!\n");
        return -1;
    }
    
    // CAM2: manually link inference output to post.tensor
    GstPad *inference_src_cam2 = gst_element_get_static_pad(inference_cam2, "src");
    GstPad *post_tensor_cam2 = gst_element_get_static_pad(postproc_cam2, "tensor");
    if (gst_pad_link(inference_src_cam2, post_tensor_cam2) != GST_PAD_LINK_OK){
        g_printerr("CAM2: Failed to link inference to post.tensor.\n");
        return -1;
    }
    gst_object_unref(inference_src_cam2);
    gst_object_unref(post_tensor_cam2);
    
    // CAM2: Link sink branch
    if(!gst_element_link(sink_queue_cam2, sink_caps_cam2)){
        g_printerr("CAM2: sink_queue to sink_caps link failed!\n");
        return -1;
    }
          
    // CAM2: Link display branch to post.sink
    GstPad *sink_src_cam2 = gst_element_get_static_pad(sink_caps_cam2, "src");
    GstPad *post_sink_cam2 = gst_element_get_static_pad(postproc_cam2, "sink");
    if (gst_pad_link(sink_src_cam2, post_sink_cam2) != GST_PAD_LINK_OK){
        g_printerr("CAM2: Failed to link sink src to post.sink\n");
        return -1;
    }
    gst_object_unref(sink_src_cam2);
    gst_object_unref(post_sink_cam2);
        
    // CAM2: Link postproc to save branch   
    if(!gst_element_link_many(postproc_cam2, final_scaler_cam2, final_caps_cam2, videoconvert_split_cam2, encoder_queue_cam2, encoder_cam2, parser_cam2, split_sink_cam2,NULL)){
        g_printerr("CAM2: Failed to link postproc to splitsink.\n");
        return -1;
    }
    
    // CAM2: Link multiscaler to both detection and sink branches
    GstPad *scaler_src_pad1_cam2 = gst_element_request_pad_simple(scaler_split_cam2, "src_%u");
    if(!scaler_src_pad1_cam2){
        g_printerr("CAM2: Failed to get source pad1 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad1_cam2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler_src_pad2_cam2 = gst_element_request_pad_simple(scaler_split_cam2, "src_%u");
    if(!scaler_src_pad2_cam2){
        g_printerr("CAM2: Failed to get source pad2 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad2_cam2),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad_cam2 = gst_element_get_static_pad(detect_queue_cam2, "sink");
    GstPad *sink_sink_pad_cam2 = gst_element_get_static_pad(sink_queue_cam2, "sink");
    
    if (gst_pad_link(scaler_src_pad1_cam2, detect_sink_pad_cam2) != GST_PAD_LINK_OK ||
        gst_pad_link(scaler_src_pad2_cam2, sink_sink_pad_cam2) != GST_PAD_LINK_OK) {
        g_printerr("CAM2: Failed to link tee branches\n");
        return -1;
    }
    
    gst_object_unref(scaler_src_pad1_cam2);
    gst_object_unref(scaler_src_pad2_cam2);
    gst_object_unref(detect_sink_pad_cam2);
    gst_object_unref(sink_sink_pad_cam2);
      
    // CAM2: Link post.text to text_queue pad
    GstPad *text_src_cam2 = gst_element_request_pad_simple(postproc_cam2,"text");
    GstPad *text_queue_pad_cam2 = gst_element_get_static_pad(text_queue_cam2,"sink");
    if (gst_pad_link(text_src_cam2,text_queue_pad_cam2) != GST_PAD_LINK_OK){
        g_printerr("CAM2: Failed to link text pad branch\n");
        return -1;
    }
    gst_object_unref(text_src_cam2);
    gst_object_unref(text_queue_pad_cam2);
    
    // CAM2: Link text branch
    if(!gst_element_link(text_queue_cam2, filesink_cam2)){
        g_printerr("CAM2: Failed to link text branch.\n");
        return -1;
    }
    
    // CAM3: Link elements of CAM3
    
    // CAM3: Link source to scaler split    
    if (!gst_element_link_many(source_cam3, source_queue_cam3, source_caps_cam3, tiovxisp_cam3, videoconvert_cam3, nv12_caps_cam3, scaler_split_cam3, NULL)) {
        g_printerr("CAM3: Source to scaler split link failed.\n");
        return -1;
    }
    
    // CAM3: detect_scaler1_cam3
    GstPad *detect_scaler1_src_cam3 = gst_element_request_pad_simple(detect_scaler1_cam3, "src_%u");
    if (!detect_scaler1_src_cam3){
        g_printerr("CAM3: Failed to request detect_scaler1 source pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_src_cam3),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    gst_object_unref(detect_scaler1_src_cam3);
    
    GstPad *detect_scaler1_sink_cam3 = gst_element_get_static_pad(detect_scaler1_cam3,"sink");
    if (!detect_scaler1_sink_cam3){
        g_printerr("CAM3: Failed to get detect_scaler1 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler1_sink_cam3),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler1_sink_cam3);
    
    // CAM3: detect_scaler2_cam3
    GstPad *detect_scaler2_src_cam3 = gst_element_request_pad_simple(detect_scaler2_cam3, "src_%u");
    if (!detect_scaler2_src_cam3){
        g_printerr("CAM3: Failed to request detect_scaler2 source pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_src_cam3),
        "roi-width",820,
        "roi-height",616,
        NULL);
    gst_object_unref(detect_scaler2_src_cam3);
    
    GstPad *detect_scaler2_sink_cam3 = gst_element_get_static_pad(detect_scaler2_cam3,"sink");
    if (!detect_scaler2_sink_cam3){
        g_printerr("CAM3: Failed to get detect_scaler2 sink pad.\n");
        return -1;
    }
    g_object_set(G_OBJECT(detect_scaler2_sink_cam3),
        "roi-width",320,
        "roi-height",320,
        NULL);
    gst_object_unref(detect_scaler2_sink_cam3);
    
    // CAM3: Link detection branch
    if (!gst_element_link_many(detect_queue_cam3, detect_scaler1_cam3, intermediate_caps_cam3, detect_scaler2_cam3, detect_caps_cam3, preproc_cam3, tensor_caps_cam3, inference_cam3, NULL)){
        g_printerr("CAM3: Detection branch link failed!\n");
        return -1;
    }
    
    // CAM3: manually link inference output to post.tensor
    GstPad *inference_src_cam3 = gst_element_get_static_pad(inference_cam3, "src");
    GstPad *post_tensor_cam3 = gst_element_get_static_pad(postproc_cam3, "tensor");
    if (gst_pad_link(inference_src_cam3, post_tensor_cam3) != GST_PAD_LINK_OK){
        g_printerr("CAM3: Failed to link inference to post.tensor.\n");
        return -1;
    }
    gst_object_unref(inference_src_cam3);
    gst_object_unref(post_tensor_cam3);
    
    // CAM3: Link sink branch
    if(!gst_element_link(sink_queue_cam3, sink_caps_cam3)){
        g_printerr("CAM3: sink_queue to sink_caps link failed!\n");
        return -1;
    }
          
    // CAM3: Link display branch to post.sink
    GstPad *sink_src_cam3 = gst_element_get_static_pad(sink_caps_cam3, "src");
    GstPad *post_sink_cam3 = gst_element_get_static_pad(postproc_cam3, "sink");
    if (gst_pad_link(sink_src_cam3, post_sink_cam3) != GST_PAD_LINK_OK){
        g_printerr("CAM3: Failed to link sink src to post.sink\n");
        return -1;
    }
    gst_object_unref(sink_src_cam3);
    gst_object_unref(post_sink_cam3);
        
    // CAM3: Link postproc to save branch   
    if(!gst_element_link_many(postproc_cam3, final_scaler_cam3, final_caps_cam3, videoconvert_split_cam3, encoder_queue_cam3, encoder_cam3, parser_cam3, split_sink_cam3,NULL)){
        g_printerr("CAM3: Failed to link postproc to splitsink.\n");
        return -1;
    }
    
    // CAM3: Link multiscaler to both detection and sink branches
    GstPad *scaler_src_pad1_cam3 = gst_element_request_pad_simple(scaler_split_cam3, "src_%u");
    if(!scaler_src_pad1_cam3){
        g_printerr("CAM3: Failed to get source pad1 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad1_cam3),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *scaler_src_pad2_cam3 = gst_element_request_pad_simple(scaler_split_cam3, "src_%u");
    if(!scaler_src_pad2_cam3){
        g_printerr("CAM3: Failed to get source pad2 from multiscaler.\n");
        return -1;
    }
    g_object_set(G_OBJECT(scaler_src_pad2_cam3),
        "roi-width",3280,
        "roi-height",2464,
        NULL);
    GstPad *detect_sink_pad_cam3 = gst_element_get_static_pad(detect_queue_cam3, "sink");
    GstPad *sink_sink_pad_cam3 = gst_element_get_static_pad(sink_queue_cam3, "sink");
    
    if (gst_pad_link(scaler_src_pad1_cam3, detect_sink_pad_cam3) != GST_PAD_LINK_OK ||
        gst_pad_link(scaler_src_pad2_cam3, sink_sink_pad_cam3) != GST_PAD_LINK_OK) {
        g_printerr("CAM3: Failed to link tee branches\n");
        return -1;
    }
    
    gst_object_unref(scaler_src_pad1_cam3);
    gst_object_unref(scaler_src_pad2_cam3);
    gst_object_unref(detect_sink_pad_cam3);
    gst_object_unref(sink_sink_pad_cam3);
      
    // CAM3: Link post.text to text_queue pad
    GstPad *text_src_cam3 = gst_element_request_pad_simple(postproc_cam3,"text");
    GstPad *text_queue_pad_cam3 = gst_element_get_static_pad(text_queue_cam3,"sink");
    if (gst_pad_link(text_src_cam3,text_queue_pad_cam3) != GST_PAD_LINK_OK){
        g_printerr("CAM3: Failed to link text pad branch\n");
        return -1;
    }
    gst_object_unref(text_src_cam3);
    gst_object_unref(text_queue_pad_cam3);
    
    // CAM3: Link text branch
    if(!gst_element_link(text_queue_cam3, filesink_cam3)){
        g_printerr("CAM3: Failed to link text branch.\n");
        return -1;
    }
    
    //AUDIO BRANCH
    //Link audio branch
    if(!gst_element_link_many(alsasrc, audio_src_queue, audioconvert, audioresample, audio_encoder, audio_parser, audio_tee, NULL)){
        g_printerr("Failed to link audio branch.\n");
        return -1;
        }
    //Link audio_tee to audio_queue's
    GstPad *audio_tee_pad1 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_tee_pad2 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_tee_pad3 = gst_element_request_pad_simple(audio_tee, "src_%u");
    GstPad *audio_sink_sink_queue1_pad = gst_element_get_static_pad(audio_sink_queue_cam1, "sink");
    GstPad *audio_sink_sink_queue2_pad = gst_element_get_static_pad(audio_sink_queue_cam2, "sink");
    GstPad *audio_sink_sink_queue3_pad = gst_element_get_static_pad(audio_sink_queue_cam3, "sink");
    if(gst_pad_link(audio_tee_pad1, audio_sink_sink_queue1_pad) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_tee_pad2, audio_sink_sink_queue2_pad) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_tee_pad3, audio_sink_sink_queue3_pad) != GST_PAD_LINK_OK){
        g_printerr("Failed to link audio_tee to audio queue pads.\n");
        return -1;
        }
        
    gst_element_release_request_pad(audio_tee, audio_tee_pad1);
    gst_element_release_request_pad(audio_tee, audio_tee_pad2);
    gst_element_release_request_pad(audio_tee, audio_tee_pad3);
    
    gst_object_unref(audio_tee_pad1);
    gst_object_unref(audio_tee_pad2);
    gst_object_unref(audio_tee_pad3);
    gst_object_unref(audio_sink_sink_queue1_pad);
    gst_object_unref(audio_sink_sink_queue2_pad); 
    gst_object_unref(audio_sink_sink_queue3_pad);   
    
        
    
    //manually link audio branch  to CAM1,CAM2 and CAM3 videos
    
    GstPad *audio_sink_src_queue1_pad = gst_element_get_static_pad(audio_sink_queue_cam1, "src");
    GstPad *audio_sink_src_queue2_pad = gst_element_get_static_pad(audio_sink_queue_cam2, "src");
    GstPad *audio_sink_src_queue3_pad = gst_element_get_static_pad(audio_sink_queue_cam3, "src");
    GstPad *muxer_audio_pad_cam1 = gst_element_request_pad_simple(split_sink_cam1, "audio_%u");
    GstPad *muxer_audio_pad_cam2 = gst_element_request_pad_simple(split_sink_cam2, "audio_%u");
    GstPad *muxer_audio_pad_cam3 = gst_element_request_pad_simple(split_sink_cam3, "audio_%u");
    
    
    if(gst_pad_link(audio_sink_src_queue1_pad, muxer_audio_pad_cam1) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_sink_src_queue2_pad, muxer_audio_pad_cam2) != GST_PAD_LINK_OK ||
      gst_pad_link(audio_sink_src_queue3_pad, muxer_audio_pad_cam3) != GST_PAD_LINK_OK) {
      g_printerr("Failed to link pads to splitmuxsink.\n");
      return -1;
      }
      
    
    gst_object_unref(audio_sink_src_queue1_pad);
    gst_object_unref(audio_sink_src_queue2_pad);
    gst_object_unref(audio_sink_src_queue3_pad);
    gst_object_unref(muxer_audio_pad_cam1);
    gst_object_unref(muxer_audio_pad_cam2);
    gst_object_unref(muxer_audio_pad_cam3);
    
          
    // Start pipeline
    ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE) {
        g_printerr("Unable to set the pipeline to the playing state.\n");
        gst_object_unref(pipeline);
        return -1;
        }
    
    // Bus message handling
    bus = gst_element_get_bus(pipeline);
    do {
        msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS | GST_MESSAGE_STATE_CHANGED);
            
        if (msg != NULL) {
            GError *err;
            gchar *debug_info;
    
            switch (GST_MESSAGE_TYPE(msg)) {
                case GST_MESSAGE_ERROR:
                    gst_message_parse_error(msg, &err, &debug_info);
                    g_printerr("Error received from element %s: %s\n",
                        GST_OBJECT_NAME(msg->src), err->message);
                    g_printerr("Debugging information: %s\n",
                        debug_info ? debug_info : "none");
                    g_clear_error(&err);
                    g_free(debug_info);
                    terminate = TRUE;
                    break;
                case GST_MESSAGE_EOS:
                    g_print("End of stream reached.\n");
                    terminate = TRUE;
                    break;
                case GST_MESSAGE_STATE_CHANGED:
                    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(pipeline)) {
                        GstState old_state, new_state, pending_state;
                        gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
                        g_print("Pipeline state changed from %s to %s\n",
                            gst_element_state_get_name(old_state),
                            gst_element_state_get_name(new_state));
                        }
                    break;
                default:
                    g_printerr("Unexpected message received.\n");
                    break;
                }
            gst_message_unref(msg);
            }
        } while (!terminate);
    
    // Cleanup
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    
    return 0;
    }
    
    
    

  • Hello, thank you for your response. However, that doesn't seem to be the issue. When I tested streaming with three cameras alone, without running object detection, it worked fine. I plan to write additional test code to pinpoint exactly what is consuming the memory.
    Do you have any suggestions or potential solutions for this?

  • Hi,

    The multiscaler can only downscale to at most 1/4 of the input size in a single stage. Does your pipeline try to downscale by more than 1/4 in one stage?

    Have you tried launching the pipeline with gst-launch-1.0 as a proof of concept?

    Best,
    Jared
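
    To stay within that 1/4-per-stage limit, the downscale can be split across two tiovxmultiscaler stages, e.g. 3280x2464 to 820x616 (exactly 1/4), then 820x616 to 320x320. A minimal single-camera sketch for illustration; the device names and DCC file paths are assumed to match the IMX219 setup used elsewhere in this thread:

    ```shell
    # Two-stage downscale: 3280x2464 -> 820x616 -> 320x320, discarded at fakesink.
    gst-launch-1.0 \
      v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
      video/x-bayer, width=3280, height=2464, format=rggb10, framerate=15/1 ! \
      tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
        dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin format-msb=9 \
        sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin \
        sink_0::device=/dev/v4l-imx219-subdev0 ! \
      video/x-raw, format=NV12 ! \
      tiovxmultiscaler ! video/x-raw, width=820, height=616 ! \
      tiovxmultiscaler ! video/x-raw, width=320, height=320 ! \
      fakesink
    ```

    This requires the target hardware, so it is only a proof-of-concept sketch of the staged scaling, not a drop-in replacement for the application pipeline.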

  • Hello,
    I had implemented both object detection with saving from 3 cameras and streaming with object detection separately, and both worked fine on their own. Initially, there was an issue with streaming, which I resolved by properly freeing the encoder-related resources.

    However, the 3-camera saving with object detection didn't work correctly just by freeing resources. I managed to resolve that by setting the final output resolution of the saved videos to 1280x720.

    But when I combined both functionalities, saving (with object detection) and streaming (without object detection), the issue reappeared, even after applying both fixes.

    I haven't done a gst-launch proof of concept, since the application gives better control over the pipeline.

    In my multiscaler setup for 3 cameras, the pipeline is structured as follows:

    Object Detection Branch (Tensor Path):

    1. Initial Input Resolution: 3280x2464

    2. The first multiscaler downscales the input from 3280x2464 to 820x616.

    3. A capsfilter is applied at 820x616.

    4. A second multiscaler then further downscales it from 820x616 to 320x320.

    5. Another capsfilter is applied at 320x320 before passing the frame to the detection process.

    6. The output tensor from detection is passed to the post-processing module.

    7. The post-processing sink uses the original sink_caps resolution (3280x2464).

    Sink/Display Branch:

    • From the first multiscaler, a parallel branch is used for display.

    • A capsfilter with resolution 3280x2464 is applied directly for display/overlay purposes.

    Final Saving and Streaming Branches:

    • After detection and post-processing, separate multiscalers are used for:

      • Saving the video

      • Streaming the video

    • In these branches, the ROI width and height are not explicitly set on the multiscaler.

    • However, a capsfilter is used after the multiscaler to enforce the final resolution of 1280x720.


      Thank You

  • Hi Nihal,

    I haven't done a gst-launch proof of concept, since the application gives better control over the pipeline.

    I ask this because sometimes there are errors in the GStreamer application, so this is a simple way to verify whether there's an issue with the pipeline or the code.

    Can you share what line is causing the error using gdb or another debugger?

    Best,
    Jared
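
    For reference, a typical way to capture the failing location (the binary name ./video-pipeline here is just a placeholder for your application):

    ```shell
    # Increase GStreamer log verbosity and keep the log for inspection:
    GST_DEBUG=3 ./video-pipeline 2> gst.log

    # Run under gdb to get a backtrace at the point of failure:
    gdb --args ./video-pipeline
    # at the gdb prompt:
    #   (gdb) run
    #   (gdb) bt      <- after gdb stops on the crash/error
    ```

    GST_DEBUG can also be scoped to specific elements, e.g. GST_DEBUG=rtmp2sink:6, to narrow down the failing stream.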

  • Hello,
    Is there a way to log memory usage or get the memory usage per element in the pipeline?
    Also, when I reduce the output resolution, by setting both the final_caps and the final_scaler src pad properties to 816x616, the memory issue goes away, but the video gets cropped.
    I can't set final_caps to 816x616 alone without also setting the final_scaler src pad property. With 1280x720 it worked by setting final_caps alone, without configuring the final_scaler src pad, and the output video wasn't cropped.

  • Hi Nihal,

    You could run top or free to see the total memory usage. I don't know a way to show the memory usage per element.

    Best,
    Jared
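
    As a rough illustration, total usage can be logged over time while the pipeline runs; this assumes a standard Linux /proc/meminfo and the procps free tool, and the helper name snapshot_mem is just illustrative:

    ```shell
    #!/bin/sh
    # Print one timestamped line of overall memory usage, plus CmaFree
    # (which only appears when the kernel reserves a CMA region).
    snapshot_mem() {
        printf '%s ' "$(date +%T)"
        # second line of `free -m` is the physical-memory row: total used free ...
        free -m | awk 'NR==2 {printf "used=%sMB free=%sMB ", $3, $4}'
        grep CmaFree /proc/meminfo 2>/dev/null || echo 'CmaFree: n/a'
    }

    snapshot_mem
    ```

    Running e.g. `while sleep 1; do snapshot_mem; done >> mem.log` alongside the pipeline gives a simple timeline to correlate against the failure.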

  • Hi Nihal,

    Please let me know if you have any further questions.

    Best,
    Jared

  • Hello Jared,

    I was able to check shared memory usage using top, which shows the Linux shared memory, and that works well. However, the allocation that is failing seems to come from the TI system's internal shared memory. Is there a way to log or monitor that specifically?

    Thanks
    Nihal
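
    For what it's worth, two generic Linux views can sometimes approximate this; both are assumptions about the kernel configuration (debugfs mounted, dma-buf stats enabled, and the TI heap backed by a region the kernel reports), so they may or may not apply to a given SDK:

    ```shell
    # CMA/carveout summary, if the kernel reports one:
    grep -E 'Cma(Total|Free)' /proc/meminfo

    # Per-buffer view of exported DMA buffers (size, exporter, refcount);
    # needs root and a kernel built with dma-buf debugfs support:
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    cat /sys/kernel/debug/dma_buf/bufinfo
    ```

    Summing the sizes in bufinfo before and after adding the third camera can show how close the pipeline gets to the heap limit.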

  • Hi Nihal,

    Pipeline state changed from NULL to READY
    Pipeline state changed from READY to PAUSED
    407.231253 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.231303 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
    407.354002 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.354055 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed
    407.494729 s: MEM: ERROR: Alloc failed with status = 12 !!!
    407.494775 s: VX_ZONE_ERROR: [tivxMemBufferAlloc:111] Shared mem ptr allocation failed

    So are you seeing this error when trying to stream from 3 cameras?

    If so, it's due to an issue with allocating the DMA buffers in the heap. Can you confirm that this is the error you are seeing?

    Best,
    Jared
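
    One way to reduce pressure on that heap is to shrink the per-element buffer pools. This assumes the edgeai-gst-plugins build in use exposes a pool-size property on the tiovx elements, which should be verified first:

    ```shell
    # Check whether the property exists on this SDK build:
    gst-inspect-1.0 tiovxmultiscaler | grep -i pool

    # If present, request smaller pools where buffers are large, e.g.:
    #   ... ! tiovxmultiscaler pool-size=2 ! ...
    ```

    At 3280x2464 NV12, each pooled buffer is roughly 12 MB, so even one fewer buffer per element across three cameras frees a meaningful amount of the shared heap.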

  • Hello Jared,

    Yes, I am encountering this error when trying to combine saving with object detection and streaming from all three cameras. However, streaming alone—both with and without object detection—works fine with all three cameras, and the same applies to saving as well.

    Thanks
    Nihal

  • Hi Nihal,

    Can you try running the following pipeline:

    $ gst-launch-1.0 \
    v4l2src device=/dev/video-imx219-cam0 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=3280, height=2464, format=rggb10, framerate=15/1 ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin format-msb=9 \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin sink_0::device=/dev/v4l-imx219-subdev0 ! \
    video/x-raw, format=NV12 ! queue ! \
    \
    v4l2h264enc output-io-mode=5 ! filesink location=cam0-out.h264 \
    \
    v4l2src device=/dev/video-imx219-cam1 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=3280, height=2464, format=rggb10, framerate=15/1 ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin format-msb=9 \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin sink_0::device=/dev/v4l-imx219-subdev1 ! \
    video/x-raw, format=NV12 ! queue ! \
    \
    v4l2h264enc output-io-mode=5 ! filesink location=cam1-out.h264 \
    \
    v4l2src device=/dev/video-imx219-cam2 io-mode=5 ! queue leaky=2 ! \
    video/x-bayer, width=3280, height=2464, format=rggb10, framerate=15/1 ! \
    tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_3280x2464_10b.bin format-msb=9 \
    sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_3280x2464_10b.bin sink_0::device=/dev/v4l-imx219-subdev2 ! \
    video/x-raw, format=NV12 ! queue ! \
    \
    v4l2h264enc output-io-mode=5 ! filesink location=cam2-out.h264

    Best,
    Jared

  • Hello Jared,
    I tried the pipeline you suggested, but it didn’t work as expected.
    I was able to resolve the issue by setting the sink caps resolution to 1280x720 instead of 3280x2464 on the post-processing sink pad (saving branch).
    Thank you for your continued support.

    Best regards,
    Nihal