TDA4VM: Single camera mosaic view consisting of OD, SD, TV

Part Number: TDA4VM

Hello Team,

I'm working with SDK 08_02_00_05 (Linux + RTOS).

I'm trying to develop an application which captures the feed from a single camera and displays the output in a mosaic view, consisting of: 1st view - camera 0 feed, 2nd view - OD of cam 0, 3rd view - SD of cam 0, 4th view - top view of cam 0.

For this, I have a working application which displays the camera 0 feed in all 4 views of the mosaic.

Could you please guide me on how I can proceed from here?

Regards,

Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Do you have all four outputs available, i.e. the camera 0 feed, OD of cam 0, SD of cam 0, and top view of cam 0?

    Are all of them object arrays of the same size?

    Regards,

    Nikhil

  • Hi Nikhil,

    Please find the attached image for the application pipeline.

    Do you have all four outputs available, i.e. the camera 0 feed, OD of cam 0, SD of cam 0, and top view of cam 0?

    I have an individual working application for each of these, but each of them uses 4 cameras.

    Regards,

    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    The mosaic node accepts an array of object arrays, so you can feed each object array output to the mosaic node's inputs as shown below:

    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr1;

    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr2;   /* ... and so on */

    After that, since there is only one channel available in each object array, you should use the input_select parameter:

    imgMosaicObj->params.windows[idx++].input_select   = 0;

    imgMosaicObj->params.windows[idx++].input_select   = 1;   /* ... and so on */

    This way, each window's input_select picks the corresponding input.

    You can refer to set_img_mosaic_params() in the multi-cam demo (where input_select changes after 4 channels).
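
    Putting that together, a minimal sketch of the wiring (the mosaic_in_arr* names are placeholders for your four single-channel object arrays, not actual variables from your application) would be:

    vx_int32 idx = 0;

    /* one single-channel object array per mosaic input slot */
    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr1;   /* e.g. camera feed */
    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr2;   /* e.g. OD output   */
    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr3;   /* e.g. SD output   */
    obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr4;   /* e.g. top view    */
    obj->imgMosaicObj.num_inputs = idx;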

    Regards,

    Nikhil

  • Hello Nikhil,

    Yesterday I tried modifying the code as shown below. I took a working single-camera semantic segmentation mosaic view application and made the changes.

    static vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[2];
        vx_int32 graph_parameter_index;
    
        obj->graph = vxCreateGraph(obj->context);
        status = vxGetStatus((vx_reference)obj->graph);
        vxSetReferenceName((vx_reference)obj->graph, "app_btc_seg_cam_graph");
        APP_PRINTF("Graph create done!\n");
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done!\n");
        }
        if(status == VX_SUCCESS)
            {
                status = app_create_graph_color_conv(obj->graph, &obj->colorConvObj, obj->captureObj.raw_image_arr[0]);
                APP_PRINTF("Color Conversion graph done!\n");
            }
       /* if(status == VX_SUCCESS)
        {
            status = app_create_graph_viss(obj->graph, &obj->vissObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("VISS graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_aewb(obj->graph, &obj->aewbObj, obj->vissObj.h3a_stats_arr);
            APP_PRINTF("AEWB graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->vissObj.output_arr);
            APP_PRINTF("LDC graph done!\n");
        }*/
    //    if(status == VX_SUCCESS)
    //        {
    //    //        status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->captureObj.raw_image_arr[0]);
    //            status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->colorConvObj.dst_image_arr);
    //            APP_PRINTF("LDC graph done!\n");
    //        }
    
        if(status == VX_SUCCESS)
        {
            //app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->ldcObj.output_arr);
            app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->colorConvObj.dst_image_arr);
    
            APP_PRINTF("Scaler graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->preProcObj, obj->scalerObj.output[0].arr);
            APP_PRINTF("Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->tidlObj, obj->preProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_post_proc(obj->graph, &obj->postProcObj, obj->scalerObj.output[1].arr, obj->tidlObj.out_args_arr, obj->tidlObj.output_tensor_arr[0]);
            APP_PRINTF("Draw detections graph done!\n");
        }
    
        vx_int32 idx = 0;
        for (int i = 0; i < 3; i++)
        {
            obj->imgMosaicObj.input_arr[idx++] = obj->colorConvObj.dst_image_arr;
        }
    
        obj->imgMosaicObj.input_arr[idx++] = obj->postProcObj.output_image_arr;
        //obj->imgMosaicObj.input_arr[idx++] = obj->colorConvObj.dst_image_arr;
        obj->imgMosaicObj.num_inputs = idx;
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_display(obj->graph, &obj->displayObj, obj->imgMosaicObj.output_image[0]);
            APP_PRINTF("Display graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            graph_parameter_index = 0;
            add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
            obj->captureObj.graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.raw_image_arr[0];
            graph_parameter_index++;
    
            vxSetGraphScheduleConfig(obj->graph,
                    VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                    graph_parameter_index,
                    graph_parameters_queue_params_list);
    
            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);
    
          /*  tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 6, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 9, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->aewbObj.node, 4, APP_BUFFER_Q_DEPTH);*/
            tivxSetNodeParameterNumBufByIndex(obj->colorConvObj.node, 1, 4);
    
         //   tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            /*This output is accessed slightly later in the pipeline by mosaic node so queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 6);
    
            tivxSetNodeParameterNumBufByIndex(obj->preProcObj.node, 2, 4);
    
            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 4, 4);
            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 7, 4);
    
            tivxSetNodeParameterNumBufByIndex(obj->postProcObj.node, 4, 4);
    
            if(!((obj->en_out_img_write == 1) || (obj->test_mode == 1)))
            {
                status = tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, 4);
            }
            APP_PRINTF("Pipeline params setup done!\n");
        }
    
        return status;
    }
    
    
    
    static void update_img_mosaic_defaults(ImgMosaicObj *imgMosaicObj, vx_uint32 in_width, vx_uint32 in_height, vx_int32 numCh)
    {
            vx_int32 idx = 0;
            imgMosaicObj->out_width = 1920;
            imgMosaicObj->out_height = 1080;
            imgMosaicObj->num_inputs = 4; // We have 4 windows, 3 for camera feed, 1 for segmentation
    
            tivxImgMosaicParamsSetDefaults(&imgMosaicObj->params);
    
            // Window 1 - Camera 0 feed
            imgMosaicObj->params.windows[idx].startX = 0;
            imgMosaicObj->params.windows[idx].startY = 0;
            imgMosaicObj->params.windows[idx].width = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].height = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].input_select = 0; // Camera 0 feed
            imgMosaicObj->params.windows[idx].channel_select = 0;
            idx++;
    
            // Window 2 - Camera 0 feed
            imgMosaicObj->params.windows[idx].startX = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].startY = 0;
            imgMosaicObj->params.windows[idx].width = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].height = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].input_select = 0; // Camera 0 feed
            imgMosaicObj->params.windows[idx].channel_select = 0;
            idx++;
    
            // Window 3 - Camera 0 feed
            imgMosaicObj->params.windows[idx].startX = 0;
            imgMosaicObj->params.windows[idx].startY = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].width = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].height = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].input_select = 0; // Camera 0 feed
            imgMosaicObj->params.windows[idx].channel_select = 0;
            idx++;
    
            // Window 4 - Semantic Segmentation output
            imgMosaicObj->params.windows[idx].startX = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].startY = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].width = imgMosaicObj->out_width / 2;
            imgMosaicObj->params.windows[idx].height = imgMosaicObj->out_height / 2;
            imgMosaicObj->params.windows[idx].input_select = 1; // Assuming input_select 1 is for segmentation output
            imgMosaicObj->params.windows[idx].channel_select = 0;
            idx++;
    
            imgMosaicObj->params.num_windows = idx; // Set the number of windows to the index count
    
            // Number of times to clear the output buffer before it gets reused
            imgMosaicObj->params.clear_count = APP_BUFFER_Q_DEPTH;
    
    
            // Check if the input_select is set correctly for each window
            printf("Window 1 input_select: %d\n", imgMosaicObj->params.windows[0].input_select);
            printf("Window 2 input_select: %d\n", imgMosaicObj->params.windows[1].input_select);
            printf("Window 3 input_select: %d\n", imgMosaicObj->params.windows[2].input_select);
            printf("Window 4 input_select (should be for segmentation): %d\n", imgMosaicObj->params.windows[3].input_select);
    }
    

    Note: These changes are not for the pipeline I showed above; they only show the semantic segmentation output in one window, with the capture output from the sensor in the remaining three windows.

    But I haven't got the expected output. With this change, all 4 windows in the mosaic display the natural image captured from the sensor, and the output freezes after 1 frame. If I revert to my original code without these changes, the application runs fine without freezing.

    Also, when I passed the capture output to the mosaic instead of the semantic segmentation post-proc output, I observed the same behaviour (all 4 windows in the mosaic display the natural image captured from the sensor and freeze after 1 frame). PFA the display output.

    Regards,

    Chaitanya Prakash Uppala

  • Hi,

    You have 4 inputs to the mosaic node, right?

    But you have set input_select = 0 for the first three windows and input_select = 1 for the fourth.

    Could you try input_select = 0, 1, 2, 3 for windows 0, 1, 2, 3 respectively?
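
    For reference, a minimal sketch of that change in update_img_mosaic_defaults() (window positions and sizes omitted, assuming 4 single-channel inputs) would be:

    vx_int32 win;
    for (win = 0; win < 4; win++)
    {
        /* one single-channel object array per window */
        imgMosaicObj->params.windows[win].input_select   = win;
        imgMosaicObj->params.windows[win].channel_select = 0;
    }
    imgMosaicObj->params.num_windows = 4;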

    Regards,

    Nikhil

  • Hi Nikhil,

    After modifying as per your suggestion, I am able to get the required view, i.e. 3 windows in the mosaic with the camera feed and one window with the seg output. But after the 1st frame is displayed, the output freezes.

    vx_int32 idx = 0;
    for (int i = 0; i < 3; i++)
    {
        obj->imgMosaicObj.input_arr[idx++] = obj->scalerObj.output[0].arr;
    }
    obj->imgMosaicObj.input_arr[idx++] = obj->postProcObj.output_image_arr;
    obj->imgMosaicObj.num_inputs = idx;

    When I change the above snippet to the one shown below, I get continuous semantic segmentation output in all 4 windows.

    vx_int32 idx = 0;
    obj->imgMosaicObj.input_arr[idx++] = obj->postProcObj.output_image_arr;
    obj->imgMosaicObj.num_inputs = idx;

    Please find the attachment for the display output.

    Regards,

    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Please find the attached image for the application pipeline.

    I thought the above was your pipeline. 

    In that pipeline, the 4 inputs are OD, Seg, BEV, and scaler.

    But currently you are trying 2 scaler outputs and 1 seg output.

    May I know which is the correct flow?

    Regards,

    Nikhil

  • Hi Nikhil,

    My correct flow is the one in the attached image.

    I started developing the application from a working 4-camera segmentation application and modified it for a single camera at the mosaic level. Later, I modified app_create_graph and update_mosaic_defaults to get the camera feed in 3 windows and the seg output in 1 window. After reaching this point, my plan was to add OD to one more window, and after OD was successful, to add the top view to one of the windows.

    I'm trying to develop an application which captures the feed from a single camera and displays the output in a mosaic view, consisting of: 1st view - camera 0 feed, 2nd view - OD of cam 0, 3rd view - SD of cam 0, 4th view - top view of cam 0.

    This is how the application output should be.

    Let me know if I'm going about this the wrong way, or if there is any suggestion from your end.

    Regards,

    Chaitanya Prakash Uppala

    OK, but in this case, since your mosaic is working as configured, we would have to look at the source code to see why this hang occurs.

    Do you see any error logs? 

    Have you identified the place in the code where it hangs?

    Regards,

    Nikhil

  • Hi Nikhil,

    Do you see any error logs? 

    No, there are no error logs at runtime or build time.

    Please find the attached run time logs.

    root@j7-evm:/opt/vision_apps# ./test_4cam_app.out --cfg btc_seg_cam.cfg 
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
        48.174146 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
        48.180784 s:  VX_ZONE_INIT:Enabled
        48.180797 s:  VX_ZONE_ERROR:Enabled
        48.180810 s:  VX_ZONE_WARNING:Enabled
        48.186947 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
        48.187106 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
    Default param set! 
    Parsed user params! 
        48.191302 s: ISS: Enumerating sensors ... !!!
    [MCU2_0]     48.191541 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_ENUMERATE 
    [MCU2_0]     48.191798 s: write 0xfe to TCA6408 register 0x3 
    [MCU2_0]     48.291396 s: UB960 config start 
    [MCU2_0]     50.787656 s: End of UB960 config 
    [MCU2_0]     50.787722 s: UB960 config start 
        50.987896 s: ISS: Enumerating sensors ... found 0 : IMX390-UB953_D3
        50.987911 s: ISS: Enumerating sensors ... found 1 : AR0233-UB953_MARS
        50.987931 s: ISS: Enumerating sensors ... found 2 : AR0820-UB953_LI
        50.987936 s: ISS: Enumerating sensors ... found 3 : UB9xxx_RAW12_TESTPATTERN
        50.987941 s: ISS: Enumerating sensors ... found 4 : UB96x_UYVY_TESTPATTERN
        50.987946 s: ISS: Enumerating sensors ... found 5 : GW_AR0233_UYVY
        50.987951 s: ISS: Enumerating sensors ... found 6 : ISX016_UB913A_Q1
    7 sensor(s) found 
    Supported sensor list: 
    a : IMX390-UB953_D3 
    b : AR0233-UB953_MARS 
    c : AR0820-UB953_LI 
    d : UB9xxx_RAW12_TESTPATTERN 
    e : UB96x_UYVY_TESTPATTERN 
    f : GW_AR0233_UYVY 
    g : ISX016_UB913A_Q1 
    Select a sensor above or press '0' to autodetect the sensor 
    [MCU2_0]     50.987658 s: End of UB960 config 
    g
    Sensor selected : ISX016_UB913A_Q1
    Querying ISX016_UB913A_Q1 
        51.670185 s: ISS: Querying sensor [ISX016_UB913A_Q1] ... !!!
        51.670668 s: ISS: Querying sensor [ISX016_UB913A_Q1] ... Done !!!
    LDC Selection Yes(1)/No(0)
    Invalid selection 
    . Try again 
    LDC Selection Yes(1)/No(0)
    [MCU2_0]     51.670412 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_QUERY 
    [MCU2_0]     51.670485 s: Received Query for ISX016_UB913A_Q1 
    1
    Max number of cameras supported by sensor ISX016_UB913A_Q1 = 4 
    Please enter number of cameras to be enabled 
    Invalid selection 
    . Try again 
    Max number of cameras supported by sensor ISX016_UB913A_Q1 = 4 
    Please enter number of cameras to be enabled 
    1
    Sensor params queried! 
     ### APP DEBUG: width = 1280, height = 944Window 1 input_select: 0
    Window 2 input_select: 1
    Window 3 input_select: 2
    Window 4 input_select (should be for segmentation): 3
    Updated user params! 
    Creating context done!
    Kernel loading done!
        52.915503 s: ISS: Initializing sensor [ISX016_UB913A_Q1], doing IM_SENSOR_CMD_PWRON ... !!!
        52.916031 s: ISS: Initializing sensor [ISX016_UB913A_Q1], doing IM_SENSOR_CMD_CONFIG ... !!!
    [MCU2_0]     52.915709 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_PWRON 
    [MCU2_0]     52.915784 s: IM_SENSOR_CMD_PWRON : channel_mask = 0x1 
    [MCU2_0]     52.915855 s: ISX016_PowerOn : chId = 0x0 
    [MCU2_0]     52.916202 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_CONFIG 
    [MCU2_0]     52.916254 s: Application requested features = 0x158 
    [MCU2_0]  
    [MCU2_0]     52.916301 s: UB960 config start 
    [MCU2_0]     53.172489 s: End of UB960 config 
    [MCU2_0]     53.172556 s: UB960 config start 
    [MCU2_0]     53.428485 s: End of UB960 config 
    [MCU2_0]     53.428689 s: UB960 config start 
    [MCU2_0]     53.432489 s: End of UB960 config 
    [MCU2_0]     53.432541 s: ub953 config start : slaveAddr = 0xb0 
    [MCU2_0]     53.640539 s:  End of UB953 config 
    [MCU2_0]     53.641422 s: Configuring camera # 0 
    [MCU2_0]     53.641614 s: UB960 config start 
    [MCU2_0]     53.645488 s: End of UB960 config 
    [MCU2_0]     53.645539 s: ub953 config start : slaveAddr = 0xb0 
    [MCU2_0]     53.853533 s:  End of UB953 config 
    [MCU2_0]     53.853626 s:  Configuring ISX016 imager 0x50.. Please wait till it finishes 
        53.953616 s: ISS: Initializing sensor [ISX016_UB913A_Q1] ... Done !!!
    Sensor init done!
    Capture init done!
    app_init_color_conv() : ENTERING 
    app_init_color_conv() : EXITING 
    Color Conv init done!
    Scaler init done!
    [MCU2_0]     53.953394 s: IM_SENSOR_CMD_CONFIG returning status = 0 
    Computing checksum at 0x0000FFFFA638A540, size = 937904
    TIDL Init Done! 
    Pre Proc Update Done! 
    Pre Proc Init Done! 
    Post Proc Update Done! 
    Post Proc Init Done! 
    Img Mosaic init done!
    Display init done!
    App Init Done! 
     app_create_graph() : ENTERING 
    Graph create done!
    Capture graph done!
    app_create_graph_color_conv() : ENTERING 
    Color convert node create started
    Color convert node create done 
    app_create_graph_color_conv() : EXITING 
    Color Conversion graph done!
    Scaler graph done!
    Pre proc graph done!
    TIDL graph done!
    Draw detections graph done!
    Img Mosaic graph done!
    Display graph done!
    Pipeline params setup done!
     app_create_graph : Exiting 
    App Create Graph Done! 
    Grapy verify SUCCESS!
    App Verify Graph Done! 
    ### APP DEBUG: app_run_graph() :: ENTERING 
        54.302285 s: ISS: Starting sensor [ISX016_UB913A_Q1] ... !!!
    [MCU2_0]     54.302830 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_STREAM_ON 
    [MCU2_0]     54.302913 s: IM_SENSOR_CMD_STREAM_ON:  channel_mask = 0x1
    [MCU2_0]     54.302961 s: UB960 config start 
    [MCU2_0]     54.558490 s: End of UB960 config 
    [MCU2_0]     54.558558 s: UB960 config start 
    [MCU2_0]     54.814491 s: End of UB960 config 
    [MCU2_0]     54.824536 s: UB960 config start 
        54.856717 s: ISS: Starting sensor [ISX016_UB913A_Q1] ... !!!
    appStartImageSensor returned with status: 0
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline <= 0
     exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline <= 0
     exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    
    
     ==========================================
     ITC Demo - 1-Camera Mosaic Display
     ==========================================
    
     p: Print performance statistics
    
     x: Exit
    
     Enter Choice: 
    inside obj->pipeline <= 0
     exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    
    
     ==========================================
     ITC Demo - 1-Camera Mosaic Display
     ==========================================
    
     p: Print performance statistics
    
     x: Exit
    
     Enter Choice: inside obj->pipeline <= 0
    [MCU2_0]     54.856497 s: End of UB960 config 
     inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    inside obj->pipeline > 0
    exiting app-run_graph_for_one_frame_pipeline
    after app_run_graph_for_one_frame_pipeline
    before app_run_graph_for_one_frame_pipeline
    inside app_run_graph_for_one_frame_pipeline
    

    Regards,
    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Have you used any graph parameters here in your application?

    What is the buffer depth for the scaler output?

    Could you try increasing the same?

    Regards,

    Nikhil

  • Hi Nikhil,

    Please find the attached snippet.

            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);
            
            tivxSetNodeParameterNumBufByIndex(obj->colorConvObj.node, 1, APP_BUFFER_Q_DEPTH);
    
            //tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            /*This output is accessed slightly later in the pipeline by mosaic node so queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 6);
    
            tivxSetNodeParameterNumBufByIndex(obj->sdpreProcObj.node, 2, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->sdtidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->sdtidlObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->sdpostProcObj.node, 4, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, 4);

    I tried modifying the value from 6 to 2 and then to 8 in this line: tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);

    I also tried increasing the buffer depth for mosaicObj, but it made no difference.

    Regards,

    Chaitanya Prakash Uppala

  • Hi,

    Could you try giving obj->scalerObj.output[1].arr to the mosaic node instead of obj->scalerObj.output[0].arr and check if you are facing the same issue?

    Regards,

    Nikhil

  • Hi Nikhil,

    When I passed it as shown below, the application was not freezing, but I got a green background, as shown in the attached video.

    in app_create_graph function
    
    vx_int32 idx = 0;
    
    obj->imgMosaicObj.input_arr[idx++] = obj->scalerObj.output[1].arr;
    
    ------------------------------------------------------------
    
    static void update_img_mosaic_defaults(ImgMosaicObj *imgMosaicObj, vx_uint32 in_width, vx_uint32 in_height, vx_int32 numCh)
    {
        vx_int32 idx, ch;
        vx_int32 grid_size = calc_grid_size(numCh);
        imgMosaicObj->out_width    = DISPLAY_WIDTH;
        imgMosaicObj->out_height   = DISPLAY_HEIGHT;
        imgMosaicObj->num_inputs   = 1;
    
        tivxImgMosaicParamsSetDefaults(&imgMosaicObj->params);
    
        idx = 0;
        for(ch = 0; ch < numCh; ch++)
        {
            vx_int32 startX, startY, winX, winY, winWidth, winHeight;
    
            winX = ch%grid_size;
            winY = ch/grid_size;
    
            if((in_width * grid_size) >= imgMosaicObj->out_width)
            {
                winWidth = imgMosaicObj->out_width / grid_size;
                startX = 0;
            }
            else
            {
                winWidth = in_width;
                startX = (imgMosaicObj->out_width - (in_width * grid_size)) / 2;
            }
    
            if((in_height * grid_size) >= imgMosaicObj->out_height)
            {
                winHeight = imgMosaicObj->out_height / grid_size;
                startY = 0;
            }
            else
            {
                winHeight = in_height;
                startY = (imgMosaicObj->out_height - (in_height * grid_size)) / 2;
            }
    
            imgMosaicObj->params.windows[idx].startX  = startX + (winWidth * winX);
            imgMosaicObj->params.windows[idx].startY  = startY + (winHeight * winY);
            imgMosaicObj->params.windows[idx].width   = winWidth;
            imgMosaicObj->params.windows[idx].height  = winHeight;
            imgMosaicObj->params.windows[idx].input_select   = 0;
            imgMosaicObj->params.windows[idx].channel_select = 0;
            idx++;
        }
    
        imgMosaicObj->params.num_windows  = 4;
    
        /* Number of time to clear the output buffer before it gets reused */
        imgMosaicObj->params.clear_count  = APP_BUFFER_Q_DEPTH;
    }

    As the output was not freezing, I modified the application further to show the camera feed in 3 windows and the seg output in 1 window, but there is no improvement in the output. It still freezes with a green background, as shown in the attached image.

      

    Regards,

    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Let me try this at my end and get back to you by the end of this week.

    Regards,

    Nikhil

  • Okay, I will be waiting for your reply.

    Also, I have a query: please confirm whether the node replication in nodes like the scaler, pre-proc, and post-proc should be removed or not, since I'm working with a single camera.

    Regards,

    Chaitanya Prakash Uppala

  • Since your object array from the capture node is of size 1, the node would not be replicated anyway, even if node replication is present.

    So it doesn't matter whether the replication is there or not; it would just be dead code.
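
    For context, node replication in the vision_apps demos goes through the standard vxReplicateNode() call; the rough sketch below (the node and its 2-parameter layout are only illustrative, not taken from your application) shows why it becomes dead code with a one-element object array:

    /* Sketch only: replicate[] has one entry per node parameter; vx_true_e
     * marks parameters that come from an object array and vary per channel. */
    vx_bool replicate[] = { vx_true_e, vx_true_e };
    vxReplicateNode(graph, node, replicate, 2);

    /* When the object arrays were created with only 1 element, just the
     * original node instance runs, so keeping or removing the replication
     * makes no functional difference. */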

    Regards,

    Nikhil

  • Hi Nikhil,

    Thanks for the response to my query.

    Let me try this at my end and get back to you by the end of this week.

    Since you said this, in the meantime I started with a different application and integrated everything from scratch again.

    I have integrated it as per the pipeline I described to you at the beginning. Please find the attached display output from the recent run; there are no build-time or run-time errors.

    There is no freezing in the display, but a lot of distortion is present. Kindly help me resolve that distortion/fluctuation in the output.

    Note: The top view output in the 4th window is not coming out as expected. I'm currently looking into resolving that.

    Regards,

    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Could you please share the source code for this application with me? It would be easier for me to review it and to try building a similar application at my end to test.

    Regards,

    Nikhil

  • Hi Nikhil,

    Sorry, I can't share the source code with you as per company norms; instead, I can provide you with snippets of the functions you need to look into.

    Hope you understand.

    Regards,

    Chaitanya Prakash Uppala

  • Ok,

    So can you share the snippet of the create-graph implementation, the graph parameter implementation, and the place where you are setting all the buffer depths?

    Please also share the snippet of the run-graph implementation.

    Regards,

    Nikhil

  • Hi Nikhil,

    Please find the attached snippets.

    1) app_create_graph

    static vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[2];
        vx_int32 graph_parameter_index;
    
        obj->graph = vxCreateGraph(obj->context);
        status = vxGetStatus((vx_reference)obj->graph);
        vxSetReferenceName((vx_reference)obj->graph, "app_btc_tidl_od_cam_graph");
        APP_PRINTF("Graph create done!\n");
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done!\n");
        }
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_color_conv(obj->graph, &obj->colorConvObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("Color Conversion graph done!\n");
        }
        
        if(status == VX_SUCCESS)
        {
          //  app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->ldcObj.output_arr);
    
            app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->colorConvObj.dst_image_arr);
    
            APP_PRINTF("Scaler graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->odpreProcObj, obj->scalerObj.output_1.arr[0]);
            APP_PRINTF("od Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->segpreProcObj, obj->scalerObj.output_2.arr[0]);
            APP_PRINTF("seg Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->tvpreProcObj, obj->scalerObj.output_3.arr[0]);
            APP_PRINTF("seg Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->odtidlObj, obj->odpreProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->segtidlObj, obj->segpreProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            //app_create_graph_draw_detections(obj->graph, &obj->drawDetectionsObj, obj->tidlObj.output_tensor_arr[0], obj->scalerObj.output[1].arr);
            APP_PRINTF("Draw detections graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_rci_create_graph_postproc(obj->graph, &obj->rcipostprocObj, obj->odtidlObj.output_tensor_arr, obj->scalerObj.output_1.arr[0],obj->odtidlObj.postproc_tensor_arr);
            APP_PRINTF("od RCI postproc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_post_proc(obj->graph, &obj->segpostprocObj, obj->scalerObj.output_2.arr[0], obj->segtidlObj.out_args_arr, obj->segtidlObj.output_tensor_arr[0]);
            APP_PRINTF("seg postproc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
             app_create_graph_view_conv(obj->graph, &obj->viewConvObj, &obj->vcObj,obj->tvpreProcObj.output_tensor_arr);
             APP_PRINTF("View Conversion graph done!\n");
        }
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_color_conv_RGB_NV12(obj->graph, &obj->colorConvRGBNV12Obj, obj->viewConvObj.output_image_arr);
            APP_PRINTF("Color Conv RGB_NV12 graph done!\n");
        }
    
    //******************m2m is used for upscaling output image*****************************//
        //status = app_create_graph_display_m2m(obj->graph, &obj->displaym2mObj, obj->rcipostprocObj.output_image_arr);
    
        vx_int32 idx = 0;
        //obj->imgMosaicObj.input_arr[idx++] = obj->displaym2mObj.dst_image_arr;
        //for(int i=0;i<2;i++){
        obj->imgMosaicObj.input_arr[idx++] = obj->scalerObj.output_4.arr[0];
        obj->imgMosaicObj.input_arr[idx++] = obj->rcipostprocObj.output_image_arr;
        //}
        //for(int i=0;i<2;i++){
        obj->imgMosaicObj.input_arr[idx++] = obj->segpostprocObj.output_image_arr;
        //}
        obj->imgMosaicObj.input_arr[idx++] = obj->colorConvRGBNV12Obj.dst_image_arr;
        obj->imgMosaicObj.num_inputs = idx;
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_display(obj->graph, &obj->displayObj, obj->imgMosaicObj.output_image[0]);
            APP_PRINTF("Display graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            graph_parameter_index = 0;
            add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
            obj->captureObj.graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.raw_image_arr[0];
            graph_parameter_index++;
    
            vxSetGraphScheduleConfig(obj->graph,
                    VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                    graph_parameter_index,
                    graph_parameters_queue_params_list);
    
            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);
            
            tivxSetNodeParameterNumBufByIndex(obj->colorConvObj.node, 1, APP_BUFFER_Q_DEPTH);
    
            //tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            /*This output is accessed slightly later in the pipeline by mosaic node so queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 6);
    
            tivxSetNodeParameterNumBufByIndex(obj->odpreProcObj.node, 2, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->segpreProcObj.node, 2, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->tvpreProcObj.node, 2, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->odtidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->odtidlObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->segtidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->segtidlObj.node, 7, APP_BUFFER_Q_DEPTH);
            //printf("after this -1\n");
            //tivxSetNodeParameterNumBufByIndex(obj->tvtidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            //tivxSetNodeParameterNumBufByIndex(obj->tvtidlObj.node, 7, APP_BUFFER_Q_DEPTH);
            //printf("after this 0\n");
            //tivxSetNodeParameterNumBufByIndex(obj->drawDetectionsObj.node, 3, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->rcipostprocObj.node, 5, APP_BUFFER_Q_DEPTH);
            //printf("after this 1\n");
            tivxSetNodeParameterNumBufByIndex(obj->segpostprocObj.node, 4, APP_BUFFER_Q_DEPTH);
            //printf("after this 2\n");
            tivxSetNodeParameterNumBufByIndex(obj->viewConvObj.node, 2, 4);
            //printf("after this 2.1\n");
            tivxSetNodeParameterNumBufByIndex(obj->colorConvRGBNV12Obj.node, 2, APP_BUFFER_Q_DEPTH);
            //printf("after this 2.2\n");
            //tivxSetNodeParameterNumBufByIndex(obj->displaym2mObj.node, 2, APP_BUFFER_Q_DEPTH);
            //printf("after this 3\n");
            tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, APP_BUFFER_Q_DEPTH);
            //printf("after this 4\n");
            APP_PRINTF("Pipeline params setup done!\n");
        }
    
        return status;
    }

    2) run_graph

    static vx_status app_run_graph_for_one_frame_pipeline(AppObj *obj, vx_int32 frame_id)
    {
        vx_status status = VX_SUCCESS;
    
        appPerfPointBegin(&obj->total_perf);
        CaptureObj *captureObj = &obj->captureObj;
    
        if(obj->pipeline <= 0)
        {
            /* Enqueue outputs */
            /* Enqueue inputs during pipe-up; the graph does not execute yet */
            vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&captureObj->raw_image_arr[obj->enqueueCnt], 1);
    
            obj->enqueueCnt++;
            obj->enqueueCnt   = (obj->enqueueCnt  >= APP_BUFFER_Q_DEPTH)? 0 : obj->enqueueCnt;
            obj->pipeline++;
        }
    
    
        if(obj->pipeline > 0)
        {
            vx_image capture_input_image;
            uint32_t num_refs;
    
            /* Dequeue input */
            vxGraphParameterDequeueDoneRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&capture_input_image, 1, &num_refs);
    
            /* Enqueue input - start execution */
            vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&capture_input_image, 1);
    
            obj->enqueueCnt++;
            obj->dequeueCnt++;
    
            obj->enqueueCnt = (obj->enqueueCnt >= APP_BUFFER_Q_DEPTH)? 0 : obj->enqueueCnt;
            obj->dequeueCnt = (obj->dequeueCnt >= APP_BUFFER_Q_DEPTH)? 0 : obj->dequeueCnt;
        }
    
        appPerfPointEnd(&obj->total_perf);
    
        return status;
    }
    
    
    static vx_status app_run_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
    
        SensorObj *sensorObj = &obj->sensorObj;
        vx_int32 frame_id;
        int32_t ch_mask = obj->sensorObj.ch_mask;
    
        app_pipeline_params_defaults(obj);
    
        if(NULL == sensorObj->sensor_name)
        {
            printf("sensor name is NULL \n");
            return VX_FAILURE;
        }
        status = appStartImageSensor(sensorObj->sensor_name, ch_mask);
        APP_PRINTF("appStartImageSensor returned with status: %d\n", status);
    
        for(frame_id = 0; frame_id < obj->num_frames_to_run; frame_id++)
        {
    #ifdef APP_WRITE_INTERMEDIATE_OUTPUTS
            if(obj->write_file == 1)
            {
                if(obj->captureObj.en_out_capture_write == 1)
                {
                    app_send_cmd_capture_write_node(&obj->captureObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->vissObj.en_out_viss_write == 1)
                {
                    app_send_cmd_viss_write_node(&obj->vissObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->ldcObj.en_out_ldc_write == 1)
                {
                    app_send_cmd_ldc_write_node(&obj->ldcObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->scalerObj.en_out_scaler_write == 1)
                {
                    app_send_cmd_scaler_write_node(&obj->scalerObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->preProcObj.en_out_pre_proc_write == 1)
                {
                    app_send_cmd_pre_proc_write_node(&obj->preProcObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                obj->write_file = 0;
            }
    #endif
            app_run_graph_for_one_frame_pipeline(obj, frame_id);
            /*for(int i=0;i<10000000;i++){
    
            }*/
            //tivxTaskWaitMsecs(1000);
            /* user asked to stop processing */
            if(obj->stop_task)
                break;
        }
    
        vxWaitGraph(obj->graph);
    
        obj->stop_task = 1;
    
        status = appStopImageSensor(obj->sensorObj.sensor_name, ch_mask);
    
        return status;
    }

    Also, please find the attached run time log.

    root@j7-evm:/opt/vision_apps# ./app_btc_tidl_od_cam.out --cfg sd_od_tv.cfg 
    APP: Init ... !!!
    MEM: Init ... !!!
    MEM: Initialized DMA HEAP (fd=4) !!!
    MEM: Init ... Done !!!
    IPC: Init ... !!!
    IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
        51.203432 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
        51.210229 s:  VX_ZONE_INIT:Enabled
        51.210257 s:  VX_ZONE_ERROR:Enabled
        51.210269 s:  VX_ZONE_WARNING:Enabled
        51.214366 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
        51.214541 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
    Default param set! 
    Parsed user params! 
        51.218732 s: ISS: Enumerating sensors ... !!!
    [MCU2_0]     51.218948 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_ENUMERATE 
    [MCU2_0]     51.219225 s: write 0xfe to TCA6408 register 0x3 
    [MCU2_0]     51.318565 s: UB960 config start 
    [MCU2_0]     53.878831 s: End of UB960 config 
    [MCU2_0]     53.878896 s: UB960 config start 
        54.079068 s: ISS: Enumerating sensors ... found 0 : IMX390-UB953_D3
        54.079092 s: ISS: Enumerating sensors ... found 1 : AR0233-UB953_MARS
        54.079112 s: ISS: Enumerating sensors ... found 2 : AR0820-UB953_LI
        54.079118 s: ISS: Enumerating sensors ... found 3 : UB9xxx_RAW12_TESTPATTERN
        54.079123 s: ISS: Enumerating sensors ... found 4 : UB96x_UYVY_TESTPATTERN
        54.079128 s: ISS: Enumerating sensors ... found 5 : GW_AR0233_UYVY
        54.079133 s: ISS: Enumerating sensors ... found 6 : ISX016-UB913
    7 sensor(s) found 
    Supported sensor list: 
    a : IMX390-UB953_D3 
    b : AR0233-UB953_MARS 
    c : AR0820-UB953_LI 
    d : UB9xxx_RAW12_TESTPATTERN 
    e : UB96x_UYVY_TESTPATTERN 
    f : GW_AR0233_UYVY 
    g : ISX016-UB913 
    Select a sensor above or press '0' to autodetect the sensor 
    [MCU2_0]     54.078824 s: End of UB960 config 
    g
    Sensor selected : ISX016-UB913
    Querying ISX016-UB913 
        55.404618 s: ISS: Querying sensor [ISX016-UB913] ... !!!
        55.405092 s: ISS: Querying sensor [ISX016-UB913] ... Done !!!
    LDC Selection Yes(1)/No(0)
    Invalid selection 
    . Try again 
    LDC Selection Yes(1)/No(0)
    [MCU2_0]     55.404829 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_QUERY 
    [MCU2_0]     55.404906 s: Received Query for ISX016-UB913 
    1
    Max number of cameras supported by sensor ISX016-UB913 = 4 
    Please enter number of cameras to be enabled 
    Invalid selection 
    . Try again 
    Max number of cameras supported by sensor ISX016-UB913 = 4 
    Please enter number of cameras to be enabled 
    1
    Sensor params queried! 
    Window 1 input_select: 0
    Window 2 input_select: 1
    Window 3 input_select: 2
    Window 4 input_select (should be for segmentation): 3
    Updated user params! 
    Creating context done!
    Kernel loading done!
        57.782328 s: ISS: Initializing sensor [ISX016-UB913], doing IM_SENSOR_CMD_PWRON ... !!!
        57.782894 s: ISS: Initializing sensor [ISX016-UB913], doing IM_SENSOR_CMD_CONFIG ... !!!
    [MCU2_0]     57.782535 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_PWRON 
    [MCU2_0]     57.782644 s: IM_SENSOR_CMD_PWRON : channel_mask = 0x1 
    [MCU2_0]     57.782718 s: ISX016_PowerOn : chId = 0x0 
    [MCU2_0]     57.783069 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_CONFIG 
    [MCU2_0]     57.783124 s: Application requested features = 0x158 
    [MCU2_0]  
    [MCU2_0]     57.783170 s: UB960 config start 
    [MCU2_0]     58.038660 s: End of UB960 config 
    [MCU2_0]     58.038731 s: UB960 config start 
    [MCU2_0]     58.294654 s: End of UB960 config 
    [MCU2_0]     58.294717 s: Configuring camera # 0 
    [MCU2_0]     58.294769 s: Iss_sensor_isx016.c ...i am in ISX016_Config config script 
    [MCU2_0]     58.294957 s: UB960 config start 
    [MCU2_0]     58.298670 s: End of UB960 config 
    [MCU2_0]     58.298725 s: ub953 config start : slaveAddr = 0x74 
    [MCU2_0]     58.490698 s:  End of UB953 config 
    [MCU2_0]     58.490791 s:  Configuring ISX016 imager 0x40.. Please wait till it finishes 
        58.808872 s: ISS: Initializing sensor [ISX016-UB913] ... Done !!!
    Sensor init done!
    Capture init done!
    app_init_color_conv() : ENTERING 
    app_init_color_conv() : EXITING 
    Color Conv init done!
    Scaler init done!
    [MCU2_0]     58.808578 s:  ISX016 config done 
    [MCU2_0]     58.808645 s:   iss_sensor_isx016.c ---> ISX016_Config finished 
    [MCU2_0]     58.808689 s: IM_SENSOR_CMD_CONFIG returning status = 0 
    Computing checksum at 0x0000FFFF75039280, size = 823280
    od TIDL Init Done! 
    Computing checksum at 0x0000FFFF7437A540, size = 937904
    seg TIDL Init Done! 
    Computing checksum at 0x0000FFFF73830000, size = 6232088
    TV TIDL Init Done! 
    od Pre Proc Update Done! 
    seg Pre Proc Update Done! 
    tv Pre Proc Update Done! 
    Pre Proc Init Done! 
    seg Pre Proc Init Done! 
    tv Pre Proc Init Done! 
    APP_INIT Number of Input Tensors in draw detections 3
    rci postproc Update Done! 
    seg Post Proc Update Done! 
    seg Post Proc Init Done! 
    rci postproc Init Done! 
    end app_init_view_conv 
    inside status mesh_img
    after vxGetstatus
    above in_arg_right
    above in_arg_rear
    above in_arg_front
    View Conv Init Done! 
    app_init_color_conv_RGB_NV12() : ENTERING 
    app_init_color_conv_RGB_NV12() : EXITING 
    Color Conv RGB-NV12 Init Done! 
    Img Mosaic init done!
    Display init done!
    App Init Done! 
    Graph create done!
    Capture graph done!
    app_create_graph_color_conv() : ENTERING 
    Color convert node create started
    Color convert node create done 
    app_create_graph_color_conv() : EXITING 
    Color Conversion graph done!
    Scaler graph done!
    od Pre proc graph done!
    seg Pre proc graph done!
    seg Pre proc graph done!
    TIDL graph done!
    TIDL graph done!
    Draw detections graph done!
    od RCI postproc graph done!
    seg postproc graph done!
    I am inside app_create_graph_view_conv
      Before tivxViewconvertNode !
     tivxViewconvertNode Done!
     Tensor releasing Start Done!
     Tensor releasing  Done!
     View Conversion graph done!
    app_create_graph_color_conv_RGB_NV12() : ENTERING 
    Color convert RGB_NV12 node create started
    Color convert node RGB_NV12 create done 
    app_create_graph_color_conv_RGB_NV12() : EXITING 
    Color Conv RGB_NV12 graph done!
    Img Mosaic graph done!
    Display graph done!
    Pipeline params setup done!
    App Create Graph Done! 
    Grapy verify SUCCESS!
    App Verify Graph Done! 
        59.686725 s: ISS: Starting sensor [ISX016-UB913] ... !!!
    [MCU2_0]     59.687262 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_STREAM_ON 
    [MCU2_0]     59.687346 s: IM_SENSOR_CMD_STREAM_ON:  channel_mask = 0x1
    [MCU2_0]     59.687393 s: UB960 config start 
    [MCU2_0]     59.942652 s: End of UB960 config 
    [MCU2_0]     59.942722 s: UB960 config start 
    [MCU2_0]     60.198667 s: End of UB960 config 
    [MCU2_0]     60.208703 s: UB960 config start 
    [MCU2_0]     60.240655 s: End of UB960 config 
    [MCU2_0]     60.240720 s: UB960 config start 
    [MCU2_0]     60.496654 s: End of UB960 config 
    [MCU2_0]     60.496730 s: 
    [MCU2_0]     60.496789 s: 
    [MCU2_0]  I2C: Reading 0x35 registers starting from REG 0x00 of device 0x74 ... !!!
    [MCU2_0]     60.506472 s: 
    [MCU2_0]     60.506584 s: 
    [MCU2_0]  I2C: Reading 0xfe registers starting from REG 0x00 of device 0x3d ... !!!
        60.538653 s: ISS: Starting sensor [ISX016-UB913] ... !!!
    appStartImageSensor returned with status: 0
    
    
     ==========================================
     BTC Demo - Camera based Object Detection
     ==========================================
    
     p: Print performance statistics
    
     x: Exit
    
     Enter Choice: 
    
    
     ==========================================
     BTC Demo - Camera based Object Detection
     ==========================================
    
     p: Print performance statistics
    
     x: Exit
    
     Enter Choice: p
    
    
    Summary of CPU load,
    ====================
    
    CPU: mpu1_0: TOTAL LOAD =  49.16 % ( HWI =   0.40 %, SWI =   0. 4 % )
    CPU: mcu2_0: TOTAL LOAD =  27. 0 % ( HWI =   0. 0 %, SWI =   0. 0 % )
    CPU: mcu2_1: TOTAL LOAD =   1. 0 % ( HWI =   0. 0 %, SWI =   0. 0 % )
    CPU:  c6x_1: TOTAL LOAD =   7. 0 % ( HWI =   0. 0 %, SWI =   0. 0 % )
    CPU:  c6x_2: TOTAL LOAD =  11. 0 % ( HWI =   0. 0 %, SWI =   0. 0 % )
    CPU:  c7x_1: TOTAL LOAD =  53. 0 % ( HWI =   0. 0 %, SWI =   0. 0 % )
    
    
    HWA performance statistics,
    ===========================
    
    HWA:   MSC0: LOAD =  18.53 % ( 106 MP/s )
    HWA:   MSC1: LOAD =   9.26 % ( 25 MP/s )
    
    
    DDR performance statistics,
    ===========================
    
    DDR: READ  BW: AVG =   2230 MB/s, PEAK =  10051 MB/s
    DDR: WRITE BW: AVG =    768 MB/s, PEAK =   3696 MB/s
    DDR: TOTAL BW: AVG =   2998 MB/s, PEAK =  13747 MB/s
    
    
    Detailed CPU performance/memory statistics,
    ===========================================
    
    CPU: mcu2_0: TASK:           IPC_RX:   1.14 %
    CPU: mcu2_0: TASK:       REMOTE_SRV:   0.34 %
    CPU: mcu2_0: TASK:        LOAD_TEST:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CPU_0:   0. 0 %
    CPU: mcu2_0: TASK:          TIVX_NF:   0. 0 %
    CPU: mcu2_0: TASK:        TIVX_LDC1:   0. 0 %
    CPU: mcu2_0: TASK:        TIVX_MSC1:   8.53 %
    CPU: mcu2_0: TASK:        TIVX_MSC2:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_VISS1:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT1:   1.18 %
    CPU: mcu2_0: TASK:       TIVX_CAPT2:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_DISP1:   0.99 %
    CPU: mcu2_0: TASK:       TIVX_DISP2:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CSITX:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT3:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT4:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT5:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT6:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT7:   0. 0 %
    CPU: mcu2_0: TASK:       TIVX_CAPT8:   0. 0 %
    CPU: mcu2_0: TASK:      TIVX_DISP_M:   3.49 %
    CPU: mcu2_0: TASK:      TIVX_DISP_M:   0. 0 %
    CPU: mcu2_0: TASK:      TIVX_DISP_M:   0. 0 %
    CPU: mcu2_0: TASK:      TIVX_DISP_M:   0. 0 %
    
    CPU: mcu2_0: HEAP:   DDR_SHARED_MEM: size =   16777216 B, free =   16693248 B ( 99 % unused)
    CPU: mcu2_0: HEAP:           L3_MEM: size =     262144 B, free =     261888 B ( 99 % unused)
    
    CPU: mcu2_1: TASK:           IPC_RX:   0. 0 %
    CPU: mcu2_1: TASK:       REMOTE_SRV:   0. 8 %
    CPU: mcu2_1: TASK:        LOAD_TEST:   0. 0 %
    CPU: mcu2_1: TASK:         TIVX_SDE:   0. 0 %
    CPU: mcu2_1: TASK:         TIVX_DOF:   0. 0 %
    CPU: mcu2_1: TASK:       TIVX_CPU_1:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_RX:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU: mcu2_1: TASK:      IPC_TEST_TX:   0. 0 %
    
    CPU: mcu2_1: HEAP:   DDR_SHARED_MEM: size =   16777216 B, free =   16773376 B ( 99 % unused)
    CPU: mcu2_1: HEAP:           L3_MEM: size =     262144 B, free =     262144 B (100 % unused)
    
    CPU:  c6x_1: TASK:           IPC_RX:   0. 9 %
    CPU:  c6x_1: TASK:       REMOTE_SRV:   0. 0 %
    CPU:  c6x_1: TASK:        LOAD_TEST:   0. 0 %
    CPU:  c6x_1: TASK:         TIVX_CPU:   7. 1 %
    CPU:  c6x_1: TASK:      IPC_TEST_RX:   0. 0 %
    CPU:  c6x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_1: TASK:      IPC_TEST_TX:   0. 0 %
    
    CPU:  c6x_1: HEAP:   DDR_SHARED_MEM: size =   16777216 B, free =   16752384 B ( 99 % unused)
    CPU:  c6x_1: HEAP:           L2_MEM: size =     229376 B, free =          0 B (  0 % unused)
    CPU:  c6x_1: HEAP:  DDR_SCRATCH_MEM: size =   50331648 B, free =   50331648 B (100 % unused)
    
    CPU:  c6x_2: TASK:           IPC_RX:   0.11 %
    CPU:  c6x_2: TASK:       REMOTE_SRV:   0. 1 %
    CPU:  c6x_2: TASK:        LOAD_TEST:   0. 0 %
    CPU:  c6x_2: TASK:         TIVX_CPU:  11. 4 %
    CPU:  c6x_2: TASK:      IPC_TEST_RX:   0. 0 %
    CPU:  c6x_2: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_2: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_2: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_2: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c6x_2: TASK:      IPC_TEST_TX:   0. 0 %
    
    CPU:  c6x_2: HEAP:   DDR_SHARED_MEM: size =   16777216 B, free =   16773376 B ( 99 % unused)
    CPU:  c6x_2: HEAP:           L2_MEM: size =     229376 B, free =     229376 B (100 % unused)
    CPU:  c6x_2: HEAP:  DDR_SCRATCH_MEM: size =   50331648 B, free =   50331648 B (100 % unused)
    
    CPU:  c7x_1: TASK:           IPC_RX:   0. 9 %
    CPU:  c7x_1: TASK:       REMOTE_SRV:   0. 0 %
    CPU:  c7x_1: TASK:        LOAD_TEST:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:  52.59 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      TIVX_CPU_PR:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_RX:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_TX:   0. 0 %
    CPU:  c7x_1: TASK:      IPC_TEST_TX:   0. 0 %
    
    CPU:  c7x_1: HEAP:   DDR_SHARED_MEM: size =  268435456 B, free =  198496256 B ( 73 % unused)
    CPU:  c7x_1: HEAP:           L3_MEM: size =    8159232 B, free =          0 B (  0 % unused)
    CPU:  c7x_1: HEAP:           L2_MEM: size =     458752 B, free =     458752 B (100 % unused)
    CPU:  c7x_1: HEAP:           L1_MEM: size =      16384 B, free =          0 B (  0 % unused)
    CPU:  c7x_1: HEAP:  DDR_SCRATCH_MEM: size =  385875968 B, free =  385858750 B ( 99 % unused)
    
    
    GRAPH: app_btc_tidl_od_cam_graph (#nodes =  14, #executions =    742)
     NODE:       CAPTURE1:             capture_node: avg =    280 usecs, min/max =     67 /  39641 usecs, #executions =        742
     NODE:          A72-0:           colorConv_node: avg =   1128 usecs, min/max =    583 /   1366 usecs, #executions =        742
     NODE:      VPAC_MSC1:               ScalerNode: avg =   2681 usecs, min/max =   2551 /   3156 usecs, #executions =        742
     NODE:          DSP-1:              PreProcNode: avg =    658 usecs, min/max =    623 /   4209 usecs, #executions =        742
     NODE:          A72-0:             ViewConvNode: avg =   6062 usecs, min/max =   5890 /   6593 usecs, #executions =        742
     NODE:       DSS_M2M1:   colorConv_RGB_NV12node: avg =   1512 usecs, min/max =     53 /   3333 usecs, #executions =        742
     NODE:          DSP-1:              PreProcNode: avg =    675 usecs, min/max =    606 /   3965 usecs, #executions =        742
     NODE:       DSP_C7-1:                tidl_node: avg =   6726 usecs, min/max =   6693 /   7516 usecs, #executions =        742
     NODE:          DSP-2:             PostProcNode: avg =   2984 usecs, min/max =   2959 /   4049 usecs, #executions =        742
     NODE:          DSP-1:              PreProcNode: avg =    316 usecs, min/max =    299 /   1643 usecs, #executions =        742
     NODE:       DSP_C7-1:                tidl_node: avg =   7788 usecs, min/max =   7342 /   8284 usecs, #executions =        742
     NODE:          A72-0: RCIDrawBoxDetectionsNode: avg =  19048 usecs, min/max =  17551 /  25707 usecs, #executions =        742
     NODE:      VPAC_MSC1:              mosaic_node: avg =   3670 usecs, min/max =   3211 /  35122 usecs, #executions =        742
     NODE:       DISPLAY1:              DisplayNode: avg =   8515 usecs, min/max =     83 /  16928 usecs, #executions =        742
    
     PERF:           FILEIO: avg =      0 usecs, min/max = 4294967295 /      0 usecs, #executions =          0
     PERF:            TOTAL: avg =  26396 usecs, min/max =      3 /  68801 usecs, #executions =        751
    
     PERF:            TOTAL:   37.88 FPS
    
    
    
     ==========================================
     BTC Demo - Camera based Object Detection
     ==========================================
    
     p: Print performance statistics
    
     x: Exit
    
     Enter Choice: 
    
    

    The app_create_graph attached above corresponds to the video reference attached for you. I have since modified the part of the snippet where we generally set the buffer depths for the parameters. Please find the attached code below.

    static vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[2];
        vx_int32 graph_parameter_index;
    
        obj->graph = vxCreateGraph(obj->context);
        status = vxGetStatus((vx_reference)obj->graph);
        vxSetReferenceName((vx_reference)obj->graph, "app_btc_tidl_od_cam_graph");
        APP_PRINTF("Graph create done!\n");
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done!\n");
        }
    
        /*if(status == VX_SUCCESS)
        {
            status = app_create_graph_viss(obj->graph, &obj->vissObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("VISS graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_aewb(obj->graph, &obj->aewbObj, obj->vissObj.h3a_stats_arr);
            APP_PRINTF("AEWB graph done!\n");
        }*/
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_color_conv(obj->graph, &obj->colorConvObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("Color Conversion graph done!\n");
        }
        //if(status == VX_SUCCESS)
       // {
    //        status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->captureObj.raw_image_arr[0]);
           // status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->colorConvObj.dst_image_arr);
         //   APP_PRINTF("LDC graph done!\n");
       // }
    
        if(status == VX_SUCCESS)
        {
          //  app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->ldcObj.output_arr);
    
            app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->colorConvObj.dst_image_arr);
    
            APP_PRINTF("Scaler graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->odpreProcObj, obj->scalerObj.output_1.arr[0]);
            APP_PRINTF("od Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->segpreProcObj, obj->scalerObj.output_2.arr[0]);
            APP_PRINTF("seg Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->tvpreProcObj, obj->scalerObj.output_3.arr[0]);
            APP_PRINTF("seg Pre proc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->odtidlObj, obj->odpreProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->segtidlObj, obj->segpreProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            //app_create_graph_draw_detections(obj->graph, &obj->drawDetectionsObj, obj->tidlObj.output_tensor_arr[0], obj->scalerObj.output[1].arr);
            APP_PRINTF("Draw detections graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_rci_create_graph_postproc(obj->graph, &obj->rcipostprocObj, obj->odtidlObj.output_tensor_arr, obj->scalerObj.output_1.arr[0],obj->odtidlObj.postproc_tensor_arr);
            APP_PRINTF("od RCI postproc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_post_proc(obj->graph, &obj->segpostprocObj, obj->scalerObj.output_2.arr[0], obj->segtidlObj.out_args_arr, obj->segtidlObj.output_tensor_arr[0]);
            APP_PRINTF("seg postproc graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
             app_create_graph_view_conv(obj->graph, &obj->viewConvObj, &obj->vcObj,obj->tvpreProcObj.output_tensor_arr);
             APP_PRINTF("View Conversion graph done!\n");
        }
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_color_conv_RGB_NV12(obj->graph, &obj->colorConvRGBNV12Obj, obj->viewConvObj.output_image_arr);
            APP_PRINTF("Color Conv RGB_NV12 graph done!\n");
        }
    
    //******************m2m is used for upscaling output image*****************************//
        //status = app_create_graph_display_m2m(obj->graph, &obj->displaym2mObj, obj->rcipostprocObj.output_image_arr);
    
        vx_int32 idx = 0;
        //obj->imgMosaicObj.input_arr[idx++] = obj->displaym2mObj.dst_image_arr;
        //for(int i=0;i<4;i++){
        obj->imgMosaicObj.input_arr[idx++] = obj->scalerObj.output_4.arr[0];
        obj->imgMosaicObj.input_arr[idx++] = obj->rcipostprocObj.output_image_arr;
        //}
        //for(int i=0;i<2;i++){
        obj->imgMosaicObj.input_arr[idx++] = obj->segpostprocObj.output_image_arr;
        //}
        obj->imgMosaicObj.input_arr[idx++] = obj->colorConvRGBNV12Obj.dst_image_arr;
        //}
        obj->imgMosaicObj.num_inputs = idx;
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_display(obj->graph, &obj->displayObj, obj->imgMosaicObj.output_image[0]);
            APP_PRINTF("Display graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            graph_parameter_index = 0;
            add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
            obj->captureObj.graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.raw_image_arr[0];
            graph_parameter_index++;
    
            vxSetGraphScheduleConfig(obj->graph,
                    VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                    graph_parameter_index,
                    graph_parameters_queue_params_list);
    
            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);
    
    //        tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 6, APP_BUFFER_Q_DEPTH);
    //        tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 9, APP_BUFFER_Q_DEPTH);
    //        tivxSetNodeParameterNumBufByIndex(obj->aewbObj.node, 4, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->colorConvObj.node, 1, 2);
    
            //tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            /*This output is accessed slightly later in the pipeline by mosaic node so queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 3);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 3);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 3, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 4, 3);
            tivxSetNodeParameterNumBufByIndex(obj->odpreProcObj.node, 2, 2);
            tivxSetNodeParameterNumBufByIndex(obj->segpreProcObj.node, 2, 2);
            tivxSetNodeParameterNumBufByIndex(obj->tvpreProcObj.node, 2, 2);
    
            tivxSetNodeParameterNumBufByIndex(obj->odtidlObj.node, 4, 4);
            tivxSetNodeParameterNumBufByIndex(obj->odtidlObj.node, 7, 4);
    
            tivxSetNodeParameterNumBufByIndex(obj->segtidlObj.node, 4, 4);
            tivxSetNodeParameterNumBufByIndex(obj->segtidlObj.node, 7, 4);
            //printf("after this -1\n");
            //tivxSetNodeParameterNumBufByIndex(obj->tvtidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            //tivxSetNodeParameterNumBufByIndex(obj->tvtidlObj.node, 7, APP_BUFFER_Q_DEPTH);
            //printf("after this 0\n");
            //tivxSetNodeParameterNumBufByIndex(obj->drawDetectionsObj.node, 3, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->rcipostprocObj.node, 5, 3);
            //printf("after this 1\n");
            tivxSetNodeParameterNumBufByIndex(obj->segpostprocObj.node, 4, 2);
            //printf("after this 2\n");
            tivxSetNodeParameterNumBufByIndex(obj->viewConvObj.node, 2, 8);
            //printf("after this 2.1\n");
            tivxSetNodeParameterNumBufByIndex(obj->colorConvRGBNV12Obj.node, 2, 4);
            //printf("after this 2.2\n");
            //tivxSetNodeParameterNumBufByIndex(obj->displaym2mObj.node, 2, APP_BUFFER_Q_DEPTH);
            //printf("after this 3\n");
            tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, 4);
            //printf("after this 4\n");
            APP_PRINTF("Pipeline params setup done!\n");
        }
    
        return status;
    }

    In the performance statistics, the PreProcNode appears multiple times; is this because I'm using multiple pre-proc instances in the same graph pipeline?
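
    To tell the three instances apart in the statistics, one option could be to rename each pre-proc node after creation. A rough sketch (illustrative only, assuming the node handles obj->odpreProcObj.node etc. are valid at this point, as they are where the buffer depths are set):

    /* Illustrative: give each pre-proc node a distinct reference name so the
       performance print can distinguish the three instances */
    vxSetReferenceName((vx_reference)obj->odpreProcObj.node,  "od_pre_proc_node");
    vxSetReferenceName((vx_reference)obj->segpreProcObj.node, "seg_pre_proc_node");
    vxSetReferenceName((vx_reference)obj->tvpreProcObj.node,  "tv_pre_proc_node");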

    Regards,

    Chaitanya Prakash Uppala

  • Hi,

    Thank you for sharing this. 

    Let me review and get back early next week

    Regards,

    Nikhil

  • Hi Nikhil,

    Let me review and get back early next week

    Okay. I will be waiting for your response.

    Let me try this at my end and get back to you by end of this week.

    If possible and you have time, could you please check this too, so that I know the reason behind that behaviour? It will help me in the future if I face a similar issue again.

    Regards,

    Chaitanya Prakash Uppala 

  • Ok sure.

    Let me try modifying the semantic segmentation demo, add the mosaic node to it, and display 3 scaler output[0] windows and 1 segmentation output.

    Will that be fine?

    Regards

    Nikhil

  • Fine, but I feel a combination of OD and SD would be better.

    Thanks in advance.

    Regards,

    Chaitanya Prakash Uppala

  • Hi Nikhil,

    While experimenting I ran into a query; kindly answer it as soon as possible so that I can continue with my experiments.

    In the /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/vision_apps/modules/include/app_scaler_module.h

    #define APP_MODULES_MAX_SCALER_OUTPUTS (5) is defined.

    Can I change this value from 5 to 7 and modify app_scaler_module.c to generate more outputs? Is that allowed? When I tried, several other places inside tiovx/kernels_j7 also had to be modified, as shown in the build output below.

    [TIARM] Compiling C tivx_hwa_node_api.c
    Linking /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/out/J7/R5F/FREERTOS/release/vx_kernels_hwa.lib
    vx_app_srv_calibration links against libtivision_apps.so
    [TIARM] Compiling C test_csitx_csirx.c
    vx_app_srv_camera links against libtivision_apps.so
    [TIARM] Compiling C test_display_buffer.c
    [TIARM] Compiling C test_vpac_msc_scale_multi_output.c
    [TIARM] Compiling C test_vpac_viss.c
    vx_app_srv_fileio links against libtivision_apps.so
    [TIARM] Compiling C test_display_m2m.c
    [TIARM] Compiling C test_capture.c
    [TIARM] Compiling C test_capture_display.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:86:46: error: too few arguments to function call, expected 9, have 7
                dst_image, NULL, NULL, NULL, NULL), VX_TYPE_NODE);
                                                 ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    [TIARM] Compiling C test_vpac_msc_halfscalegaussian.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:803:46: error: too few arguments to function call, expected 9, have 7
                dst_image, NULL, NULL, NULL, NULL), VX_TYPE_NODE);
                                                 ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    [TIARM] Compiling C test_capture_splitMode.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1029:46: error: too few arguments to function call, expected 9, have 7
                dst_image, NULL, NULL, NULL, NULL), VX_TYPE_NODE);
                                                 ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    [TIARM] Compiling C test_capture_vpac_display.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1356:42: error: too few arguments to function call, expected 9, have 7
            dst_image, NULL, NULL, NULL, NULL), VX_TYPE_NODE);
                                             ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1554:46: error: too few arguments to function call, expected 9, have 7
                dst_image, NULL, NULL, NULL, NULL), VX_TYPE_NODE);
                                                 ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1732:53: error: too few arguments to function call, expected 9, have 7
                dst_image0, dst_image1, NULL, NULL, NULL), VX_TYPE_NODE);
                                                        ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    [TIARM] Compiling C test_vpac_nf_generic.c
    [TIARM] Compiling C test_vpac_msc_gaussian_pyramid.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1846:65: error: too few arguments to function call, expected 9, have 7
                dst_image[0], dst_image[1], dst_image[2], NULL, NULL), VX_TYPE_NODE);
                                                                    ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:1985:73: error: too few arguments to function call, expected 9, have 7
                dst_image[0], dst_image[1], dst_image[2], dst_image[3], NULL), VX_TYPE_NODE);
                                                                            ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    [TIARM] Compiling C test_display_buffer_YUV422_YUYV.c
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_vpac_msc_scale_multi_output.c:2102:81: error: too few arguments to function call, expected 9, have 7
                dst_image[0], dst_image[1], dst_image[2], dst_image[3], dst_image[4]), VX_TYPE_NODE);
                                                                                    ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    9 errors generated.
    make[1]: *** [concerto/finale.mak:313: /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/out/J7/R5F/FREERTOS/release/module/.home.prakash.ti-processor-sdk-rtos-j721e-evm-08_02_00_05.tiovx.kernels_j7.hwa.test/test_vpac_msc_scale_multi_output.obj] Error 1
    make[1]: *** Waiting for unfinished jobs....
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/hwa/test/test_capture_vpac_display.c:368:75: error: too few arguments to function call, expected 9, have 7
                viss_nv12_out_img, scaler_nv12_out_img, NULL, NULL, NULL, NULL),
                                                                              ^
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/conformance_tests/test_engine/test_utils.h:43:85: note: expanded from macro 'ASSERT_VX_OBJECT'
    #define ASSERT_VX_OBJECT(ref, type) do{ if (ct_assert_reference_impl((vx_reference)(ref), (type), VX_SUCCESS, #ref, __FUNCTION__, __FILE__, __LINE__)) {} else {CT_DO_FAIL;}}while(0)
                                                                                        ^~~
    /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/kernels_j7/include/TI/j7_vpac_msc.h:425:34: note: 'tivxVpacMscScaleNode' declared here
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
                                     ^
    1 error generated.
    make[1]: *** [concerto/finale.mak:313: /home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx/out/J7/R5F/FREERTOS/release/module/.home.prakash.ti-processor-sdk-rtos-j721e-evm-08_02_00_05.tiovx.kernels_j7.hwa.test/test_capture_vpac_display.obj] Error 1
    Build Skipped for kernels.stereo.target.bam.J7.LINUX.A72.release:vx_target_kernels_stereo_bam
    Nothing to be done for j721e mcu2_0 csl_intc
    Nothing to be done for j721e pm_hal_optimized
    Nothing to be done for j721e pm_hal
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi
    Nothing to be done for j721e mcu2_0 sbl_lib_mmcsd
    Nothing to be done for j721e mcu2_0 sbl_lib_uart
    Nothing to be done for j721e mcu2_0 sbl_lib_hyperflash
    Nothing to be done for j721e mcu2_0 sbl_lib_cust
    Nothing to be done for j721e mcu2_0 sbl_lib_mmcsd_hlos
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_hlos
    Nothing to be done for j721e mcu2_0 sbl_lib_hyperflash_hlos
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_nondma
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_nondma_hlos
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_mmcsd_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_uart_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_hyperflash_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_cust_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_mmcsd_hlos_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_hlos_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_nondma_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_ospi_nondma_hlos_hs
    Nothing to be done for j721e mcu2_0 sbl_lib_hyperflash_hlos_hs
    Nothing to be done for j721e mcu2_0 dmautils
    Nothing to be done for j721e mcu2_0 lpm
    make[1]: Leaving directory '/home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tiovx'
    make: *** [makerules/makefile_tiovx_ptk_imaging_remote_device.mak:53: tiovx] Error 2
    make: *** Waiting for unfinished jobs....
    HOST_ROOT=/home/prakash/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/vxlib
    

    Note: After modifying the app_scaler_module.c and .h files, I also modified the function prototype inside tiovx/kernels_j7/include/TI/j7_vpac_msc.h and inside tiovx/kernels_j7/hwa/host/tivx_hwa_node_api.c to:

    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
        vx_image in_img,
        vx_image out0_img,
        vx_image out1_img,
        vx_image out2_img,
        vx_image out3_img,
        vx_image out4_img,
        vx_image out5_img,
        vx_image out6_img);
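
    For reference, the unmodified SDK prototype has seven parameters (graph, one input, five outputs), which is why the test files under tiovx/kernels_j7/hwa/test/ now fail with "expected 9, have 7": every existing call site would also need two extra NULL arguments after the change. If I recall the stock header correctly, the original declaration is:

    /* Stock 5-output prototype from j7_vpac_msc.h (before the modification) */
    VX_API_ENTRY vx_node VX_API_CALL tivxVpacMscScaleNode(vx_graph graph,
        vx_image in_img,
        vx_image out0_img,
        vx_image out1_img,
        vx_image out2_img,
        vx_image out3_img,
        vx_image out4_img);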

    Regards,
    Chaitanya Prakash Uppala

  • Hi,

    #define APP_MODULES_MAX_SCALER_OUTPUTS (5) is defined.

    The MSC scaler node can only output 5 images; this is defined by the node architecture. Hence you cannot increase this macro value.
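
    If you need more than 5 scaled outputs from the same input, a simpler option is to add a second scaler module instance on the same input object array instead of modifying the kernel. A rough sketch, reusing the helper names from your app_create_graph above (scalerObj2 is a hypothetical second ScalerObj member you would add to AppObj):

    /* Sketch only: two MSC scaler instances fed from the same color-convert
       output, giving up to 10 scaled images without touching the node API.
       The second instance can be mapped to the other MSC hardware instance
       (e.g. TIVX_TARGET_VPAC_MSC2) inside the module, if required. */
    if(status == VX_SUCCESS)
    {
        app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj,
                                obj->colorConvObj.dst_image_arr);
    }
    if(status == VX_SUCCESS)
    {
        app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj2,
                                obj->colorConvObj.dst_image_arr);
    }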

    Regards,

    Nikhil

  • Hence you cannot increase this macro value.

    Thanks for the confirmation Nikhil.

    Let me review and get back early next week

    Will be waiting for your response.

    Note: The topview output in the 4th window is not coming as expected. I'm currently looking into resolving that.

    As mentioned here, I have reviewed the code but couldn't find anything wrong. So I have taken the working top-view application as the base for my project and started integrating SD and OD. Please find the attached image and video of the output displayed on the monitor.

    output_images.zip

    If you observe the pre-proc output in the above zip file, the bottom-left portion is filled with some blue and yellow lines. The scaler output is as expected.

    Could you please let me know what could be the reason for this behaviour, as there are no modifications made in the pre-proc node apart from disabling padding (i.e. setting the padding values to 0)?

    Regards,

    Chaitanya Prakash Uppala

  • Hi Nikhil,

    I'm able to get the 4 windows (camera feed, OD, SD, TV views) without freezing, but with slight distortion. The issue is in OD: there are no detections on the display output.

    When I integrated OD into the TV application (TV application taken as the base), the OD and TV outputs were good, without freezing or fluctuation, but mostly false detections appeared. After integrating SD, no detections appear in the OD display output window. Please find the attached video for reference.

    Note: I observed weird behaviour when I integrated the TV module after integrating SD and OD: the TV output used to turn greenish. Now the OD module detections have stopped after integrating SD.

    Could you please help me from here? What might be the issue?

    Note: The OD module post-proc is a custom kernel which consists of post-processing and drawBox functionalities. The kernel works fine; I have checked it with different applications. When running this application, execution enters the kernel but does not enter the drawBox functionality.

    Regards,

    Chaitanya Prakash Uppala

  • Hi Nikhil,

    Could I know if there is any update on the query, or in case you need some more input from my end?

    Regards,

    Chaitanya Prakash Uppala

  • Hi Chaitanya,

    Sorry for the delay in response. I was modifying the application and was facing a few issues in the setup. I'm currently working on this.

    Please find the attached image for the application pipeline.

    Meanwhile, I believe your latest response still corresponds to the above flow, right?

    Could you tell me which path is TV?

    Also, have you checked the node execution time for all the nodes in your case? Do you see a bottleneck in any of the nodes (i.e. a longer node execution time)?

    Could you please provide this data for all the nodes?
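
    For reference, per-node execution time can also be read at runtime through the standard OpenVX node performance attribute (values are in nanoseconds). A minimal sketch, using one of the node handles from your application as an example:

    /* Query avg/min/max execution time of a node after the graph has run */
    vx_perf_t perf;
    vxQueryNode(obj->odtidlObj.node, VX_NODE_PERFORMANCE, &perf, sizeof(perf));
    printf("tidl_node: avg = %lu us, min = %lu us, max = %lu us\n",
           (unsigned long)(perf.avg / 1000u),
           (unsigned long)(perf.min / 1000u),
           (unsigned long)(perf.max / 1000u));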

    Regards,

    Nikhil

  • Hi Nikhil,

    Meanwhile, I believe your latest response still corresponds to the above flow, right?

    Yes. No change in the flow.

    Could you tell me which path is TV?

    The 3rd path corresponds to TV. Scaler2->BEV_PreProc->BEV->Color_Conv_RGBtoNV12->Mosaic

    Also, have you checked the node execution time for all the nodes in your case? Do you see a bottleneck in any of the nodes (i.e. a longer node execution time)?

    Yes, I have checked the execution times but haven't found any bottlenecks.

    Could you please provide this data for all the nodes?

    Sure. I have sent it out through a private message. Kindly have a look at it.

    Regards,

    Chaitanya Prakash Uppala

  • Thank you. 

    Let me look into this and get back to you with my analysis

    Regards,

    Nikhil

  • Let me look into this and get back to you with my analysis

    I will be waiting for your response. Could you let me know when I might expect it?

    Regards,

    Chaitanya Prakash Uppala

  • Hi Nikhil,

    Is there any update on this thread?

    Regards,

    Chaitanya Prakash Uppala

  • Could you please respond a little faster?

    Thanks,

    Chaitanya Prakash Uppala