
SK-TDA4VM: Using Mosaic to merge 4 camera video streams and then input to pre_proc node

Part Number: SK-TDA4VM
Other Parts Discussed in Thread: J721EXSOMXEVM

Dear experts:

My camera outputs YUV format (NV12). Based on the OD demo code app_tidl_od_cam,

what I would like to implement is using Mosaic to merge the 4 images (the flow is shown in the attached file OD-20230118a.pdf):

Step (1): Replace the Multi-scaler MSC with the Mosaic MSC.

Purpose: use Mosaic to merge 4 input camera streams (1920x1080 each) into one stream with resolution 1024x512.

This means each camera stream is scaled down to 512x256, and the four streams are then merged into a single 1024x512 frame.

Step (2): Input the merged frame (1024x512) to pre-proc (C66-1) => Object Detection (C7x/MMA) => post-proc (C66-2), i.e., the same flow as the TI demo code (app_tidl_od_cam).

Step (3): Skip the Mosaic MSC that follows post-proc, and send the post-proc output directly to DSS.

My questions are:

(1) Is Step 1 doable, since Mosaic can both scale down and merge?

(2) The input data type of pre-proc is ImgObj, which does not match the output type (vx_image) of Mosaic. I need to do a format conversion; how can I do this?

(3) Regarding the format conversion from vx_image to ImgObj, what else do I need to take care of?

Thanks.

  • Hi Ming-Jiun,

Before we go further, can you confirm that this is on the SK-TDA4VM board (small, blue board) and not the J721EXSOMXEVM board (large, green board)?

    Also, is this using PSDK RTOS: https://www.ti.com/tool/download/PROCESSOR-SDK-RTOS-J721E ?

    Regards,

    Takuma

  • Hello, Takuma:

(1) We use the J721EXSOMXEVM board (large, green board).

(2) Yes, we are using it. The version is 08.02.00.05.

  • Hello, Takuma:

    The output data type of Mosaic node is vx_image;

    The input data type of Pre_Proc is vx_object_array.

So, we modified the tidl_od_cam main.c to convert the Mosaic output from vx_image to vx_object_array, as shown in the following.

We then got an error message, as shown in the following.

How can we fix it?

    The following is the main.c we modified. 

    /*
     *
     * Copyright (c) 2020 Texas Instruments Incorporated
     *
     * All rights reserved not granted herein.
     *
     * Limited License.
     *
     * Texas Instruments Incorporated grants a world-wide, royalty-free, non-exclusive
     * license under copyrights and patents it now or hereafter owns or controls to make,
     * have made, use, import, offer to sell and sell ("Utilize") this software subject to the
     * terms herein.  With respect to the foregoing patent license, such license is granted
     * solely to the extent that any such patent is necessary to Utilize the software alone.
     * The patent license shall not apply to any combinations which include this software,
     * other than combinations with devices manufactured by or for TI ("TI Devices").
     * No hardware patent is licensed hereunder.
     *
     * Redistributions must preserve existing copyright notices and reproduce this license
     * (including the above copyright notice and the disclaimer and (if applicable) source
     * code license limitations below) in the documentation and/or other materials provided
     * with the distribution
     *
     * Redistribution and use in binary form, without modification, are permitted provided
     * that the following conditions are met:
     *
     * *       No reverse engineering, decompilation, or disassembly of this software is
     * permitted with respect to any software provided in binary form.
     *
     * *       any redistribution and use are licensed by TI for use only with TI Devices.
     *
     * *       Nothing shall obligate TI to provide you with source code for the software
     * licensed and provided to you in object code.
     *
     * If software source code is provided to you, modification and redistribution of the
     * source code are permitted provided that the following conditions are met:
     *
     * *       any redistribution and use of the source code, including any resulting derivative
     * works, are licensed by TI for use only with TI Devices.
     *
     * *       any redistribution and use of any object code compiled from the source code
     * and any resulting derivative works, are licensed by TI for use only with TI Devices.
     *
     * Neither the name of Texas Instruments Incorporated nor the names of its suppliers
     *
     * may be used to endorse or promote products derived from this software without
     * specific prior written permission.
     *
     * DISCLAIMER.
     *
     * THIS SOFTWARE IS PROVIDED BY TI AND TI'S LICENSORS "AS IS" AND ANY EXPRESS
     * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
     * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
     * IN NO EVENT SHALL TI AND TI'S LICENSORS BE LIABLE FOR ANY DIRECT, INDIRECT,
     * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
     * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
     * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
     * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
     * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
     * OF THE POSSIBILITY OF SUCH DAMAGE.
     *
     */
    
    #include <utils/draw2d/include/draw2d.h>
    #include <utils/perf_stats/include/app_perf_stats.h>
    #include <utils/console_io/include/app_get.h>
    #include <utils/grpx/include/app_grpx.h>
    #include <VX/vx_khr_pipelining.h>
    
    #include "app_common.h"
    #include "app_sensor_module.h"
    #include "app_capture_module.h"
    #include "app_viss_module.h"
    #include "app_aewb_module.h"
    #include "app_ldc_module.h"
    #include "app_scaler_module.h"
    #include "app_pre_proc_module.h"
    #include "app_tidl_module.h"
    #include "app_draw_detections_module.h"
    #include "app_img_mosaic_module.h"
    #include "app_display_module.h"
    
    #define APP_BUFFER_Q_DEPTH   (4)
    #define APP_PIPELINE_DEPTH   (7)
    
    typedef struct {
    
        SensorObj         sensorObj;
        CaptureObj        captureObj;
        VISSObj           vissObj;
        AEWBObj           aewbObj;
        LDCObj            ldcObj;
        ScalerObj         scalerObj;
        PreProcObj        preProcObj;
        TIDLObj           tidlObj;
        DrawDetectionsObj drawDetectionsObj;
        ImgMosaicObj      imgMosaicObj;
        DisplayObj        displayObj;
    
        vx_char input_file_path[APP_MAX_FILE_PATH];
        vx_char output_file_path[APP_MAX_FILE_PATH];
    
        /* OpenVX references */
        vx_context context;
        vx_graph   graph;
    
        vx_uint32 is_interactive;
    
        vx_uint32 num_frames_to_run;
    
        vx_uint32 num_frames_to_write;
        vx_uint32 num_frames_to_skip;
    
        tivx_task task;
        vx_uint32 stop_task;
        vx_uint32 stop_task_done;
    
        app_perf_point_t total_perf;
        app_perf_point_t fileio_perf;
        app_perf_point_t draw_perf;
    
        int32_t pipeline;
    
        int32_t enqueueCnt;
        int32_t dequeueCnt;
    
        int32_t write_file;
    
    } AppObj;
    
    AppObj gAppObj;
    
    static void app_parse_cmd_line_args(AppObj *obj, vx_int32 argc, vx_char *argv[]);
    static vx_status app_init(AppObj *obj);
    static void app_deinit(AppObj *obj);
    static vx_status app_create_graph(AppObj *obj);
    static vx_status app_verify_graph(AppObj *obj);
    static vx_status app_run_graph(AppObj *obj);
    static vx_status app_run_graph_interactive(AppObj *obj);
    static void app_delete_graph(AppObj *obj);
    static void app_default_param_set(AppObj *obj);
    static void app_update_param_set(AppObj *obj);
    static void add_graph_parameter_by_node_index(vx_graph graph, vx_node node, vx_uint32 node_parameter_index);
    static void app_pipeline_params_defaults(AppObj *obj);
    #ifndef x86_64
    static void app_draw_graphics(Draw2D_Handle *handle, Draw2D_BufInfo *draw2dBufInfo, uint32_t update_type);
    #endif
    
    static vx_status app_run_graph_for_one_frame_pipeline(AppObj *obj, vx_int32 frame_id);
    
    
    static void app_show_usage(vx_int32 argc, vx_char* argv[])
    {
        printf("\n");
        printf(" TIDL Demo - Camera based Object Detection (c) Texas Instruments Inc. 2020\n");
        printf(" ========================================================\n");
        printf("\n");
        printf(" Usage,\n");
        printf("  %s --cfg <config file>\n", argv[0]);
        printf("\n");
    }
    
    static char menu[] = {
        "\n"
        "\n =========================================="
        "\n TIDL Demo - Camera based Object Detection"
        "\n =========================================="
    #ifdef APP_WRITE_INTERMEDIATE_OUTPUTS
        "\n"
        "\n s: Save intermediate outputs"
    #endif
        "\n"
        "\n p: Print performance statistics"
        "\n"
        "\n x: Exit"
        "\n"
        "\n Enter Choice: "
    };
    
    static void app_run_task(void *app_var)
    {
        AppObj *obj = (AppObj *)app_var;
        vx_status status = VX_SUCCESS;
        while(!obj->stop_task && (status == VX_SUCCESS))
        {
            status = app_run_graph(obj);
        }
        obj->stop_task_done = 1;
    }
    
    static vx_status app_run_task_create(AppObj *obj)
    {
        tivx_task_create_params_t params;
        vx_status status;
    
        tivxTaskSetDefaultCreateParams(&params);
        params.task_main = app_run_task;
        params.app_var = obj;
    
        obj->stop_task_done = 0;
        obj->stop_task = 0;
    
        status = tivxTaskCreate(&obj->task, &params);
    
        return status;
    }
    
    static void app_run_task_delete(AppObj *obj)
    {
        while(obj->stop_task_done==0)
        {
            tivxTaskWaitMsecs(100);
        }
    
        tivxTaskDelete(&obj->task);
    }
    
    static vx_status app_run_graph_interactive(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        uint32_t done = 0;
    
        char ch;
        FILE *fp;
        app_perf_point_t *perf_arr[1];
    
        status = app_run_task_create(obj);
        if(status != VX_SUCCESS)
        {
            printf("app_tidl: ERROR: Unable to create task\n");
        }
        else
        {
            appPerfStatsResetAll();
            while(!done && (status == VX_SUCCESS))
            {
                printf(menu);
                ch = getchar();
                printf("\n");
    
                switch(ch)
                {
                    case 'p':
                        appPerfStatsPrintAll();
                        status = tivx_utils_graph_perf_print(obj->graph);
                        appPerfPointPrint(&obj->fileio_perf);
                        appPerfPointPrint(&obj->total_perf);
                        printf("\n");
                        appPerfPointPrintFPS(&obj->total_perf);
                        appPerfPointReset(&obj->total_perf);
                        printf("\n");
    
                        break;
                    case 'e':
                        perf_arr[0] = &obj->total_perf;
                        fp = appPerfStatsExportOpenFile(".", "dl_demos_app_tidl_od_cam");
                        if (NULL != fp)
                        {
                            appPerfStatsExportAll(fp, perf_arr, 1);
                            status = tivx_utils_graph_perf_export(fp, obj->graph);
                            appPerfStatsExportCloseFile(fp);
                            appPerfStatsResetAll();
                        }
                        else
                        {
                            printf("fp is null\n");
                        }
                        break;
    #ifdef APP_WRITE_INTERMEDIATE_OUTPUTS
                    case 's':
                        obj->write_file = 1;
                        break;
    #endif
                    case 'x':
                        obj->stop_task = 1;
                        done = 1;
                        break;
                }
            }
            app_run_task_delete(obj);
        }
        return status;
    }
    
    static void app_set_cfg_default(AppObj *obj)
    {
        snprintf(obj->captureObj.output_file_path,APP_MAX_FILE_PATH, ".");
        snprintf(obj->vissObj.output_file_path,APP_MAX_FILE_PATH, ".");
        snprintf(obj->ldcObj.output_file_path,APP_MAX_FILE_PATH, ".");
        snprintf(obj->scalerObj.output_file_path,APP_MAX_FILE_PATH, ".");
    
        obj->captureObj.en_out_capture_write = 0;
        obj->vissObj.en_out_viss_write = 0;
        obj->ldcObj.en_out_ldc_write = 0;
        obj->scalerObj.en_out_scaler_write = 0;
    
        obj->num_frames_to_write = 0;
        obj->num_frames_to_skip = 0;
    
        snprintf(obj->tidlObj.config_file_path,APP_MAX_FILE_PATH, ".");
        snprintf(obj->tidlObj.network_file_path,APP_MAX_FILE_PATH, ".");
    
        snprintf(obj->input_file_path,APP_MAX_FILE_PATH, ".");
    
    }
    
    static void app_parse_cfg_file(AppObj *obj, vx_char *cfg_file_name)
    {
        FILE *fp = fopen(cfg_file_name, "r");
        vx_char line_str[1024];
        vx_char *token;
    
        if(fp==NULL)
        {
            printf("# ERROR: Unable to open config file [%s]\n", cfg_file_name);
            exit(-1);
        }
    
        while(fgets(line_str, sizeof(line_str), fp)!=NULL)
        {
            vx_char s[]=" \t";
    
            if (strchr(line_str, '#'))
            {
                continue;
            }
    
            /* get the first token */
            token = strtok(line_str, s);
            if(token != NULL)
            {
                if(strcmp(token, "sensor_index")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        obj->sensorObj.sensor_index = atoi(token);
                    }
                }
                else
                if(strcmp(token, "is_interactive")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->is_interactive = atoi(token);
                        if(obj->is_interactive > 1)
                        obj->is_interactive = 1;
                    }
                    obj->sensorObj.is_interactive = obj->is_interactive;
                }
                else
                if(strcmp(token, "tidl_config")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        strcpy(obj->tidlObj.config_file_path, token);
                    }
                }
                else
                if(strcmp(token, "tidl_network")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        strcpy(obj->tidlObj.network_file_path, token);
                    }
                }
                else
                if(strcmp(token, "dl_size")==0)
                {
                    vx_int32 width, height;
    
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        width =  atoi(token);
                        obj->scalerObj.output[0].width   = width;
    
                        token = strtok(NULL, s);
                        if(token != NULL)
                        {
                            if(token[strlen(token)-1] == '\n')
                            token[strlen(token)-1]=0;
    
                            height =  atoi(token);
                            obj->scalerObj.output[0].height  = height;
                        }
                    }
                }
                else
                if(strcmp(token, "out_size")==0)
                {
                    vx_int32 width, height;
    
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        width =  atoi(token);
                        obj->scalerObj.output[1].width   = width;
    
                        token = strtok(NULL, s);
                        if(token != NULL)
                        {
                            if(token[strlen(token)-1] == '\n')
                            token[strlen(token)-1]=0;
    
                            height =  atoi(token);
                            obj->scalerObj.output[1].height  = height;
                        }
                    }
                }
                else
                if(strcmp(token, "viz_th")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        obj->drawDetectionsObj.params.viz_th = atof(token);
                    }
                }
                else
                if(strcmp(token, "num_classes")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        obj->drawDetectionsObj.params.num_classes = atoi(token);
                    }
                }
                else
                if(strcmp(token, "display_option")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        obj->displayObj.display_option = atoi(token);
                    }
                }
    #ifdef APP_WRITE_INTERMEDIATE_OUTPUTS
                else
                if(strcmp(token, "num_frames_to_run")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->num_frames_to_run = atoi(token);
                    }
                }
                else
                if(strcmp(token, "en_out_capture_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->captureObj.en_out_capture_write = atoi(token);
                        if(obj->captureObj.en_out_capture_write > 1)
                        obj->captureObj.en_out_capture_write = 1;
                    }
                }
                else
                if(strcmp(token, "en_out_viss_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->vissObj.en_out_viss_write = atoi(token);
                        if(obj->vissObj.en_out_viss_write > 1)
                        obj->vissObj.en_out_viss_write = 1;
                    }
                }
                else
                if(strcmp(token, "en_out_ldc_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->ldcObj.en_out_ldc_write = atoi(token);
                        if(obj->ldcObj.en_out_ldc_write > 1)
                        obj->ldcObj.en_out_ldc_write = 1;
                    }
                }
                else
                if(strcmp(token, "en_out_scaler_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->scalerObj.en_out_scaler_write = atoi(token);
                        if(obj->scalerObj.en_out_scaler_write > 1)
                            obj->scalerObj.en_out_scaler_write = 1;
                    }
                }
                else
                if(strcmp(token, "en_out_pre_proc_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->preProcObj.en_out_pre_proc_write = atoi(token);
                        if(obj->preProcObj.en_out_pre_proc_write > 1)
                            obj->preProcObj.en_out_pre_proc_write = 1;
                    }
                }
                else
                if(strcmp(token, "output_file_path")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        strcpy(obj->captureObj.output_file_path, token);
                        strcpy(obj->vissObj.output_file_path, token);
                        strcpy(obj->ldcObj.output_file_path, token);
                        strcpy(obj->scalerObj.output_file_path, token);
                        strcpy(obj->preProcObj.output_file_path, token);
                    }
                }
                else
                if(strcmp(token, "num_frames_to_write")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->num_frames_to_write = atoi(token);
                    }
                }
                else
                if(strcmp(token, "num_frames_to_skip")==0)
                {
                    token = strtok(NULL, s);
                    if(token != NULL)
                    {
                        token[strlen(token)-1]=0;
                        obj->num_frames_to_skip = atoi(token);
                    }
                }
    #endif
            }
        }
    
        fclose(fp);
    }
    
    static void app_parse_cmd_line_args(AppObj *obj, vx_int32 argc, vx_char *argv[])
    {
        vx_int32 i;
    
        app_set_cfg_default(obj);
    
        if(argc==1)
        {
            app_show_usage(argc, argv);
            exit(0);
        }
    
        for(i=0; i<argc; i++)
        {
            if(strcmp(argv[i], "--cfg")==0)
            {
                i++;
                if(i>=argc)
                {
                    app_show_usage(argc, argv);
                }
                app_parse_cfg_file(obj, argv[i]);
                break;
            }
            else
            if(strcmp(argv[i], "--help")==0)
            {
                app_show_usage(argc, argv);
                exit(0);
            }
        }
    
        #ifdef x86_64
        obj->displayObj.display_option = 0;
        obj->is_interactive = 0;
        #endif
    
        return;
    }
    
    vx_status app_tidl_od_cam_main(vx_int32 argc, vx_char* argv[])
    {
        vx_status status = VX_SUCCESS;
    
        AppObj *obj = &gAppObj;
    
        /*Optional parameter setting*/
        app_default_param_set(obj);
        APP_PRINTF("Default param set! \n");
    
        /*Config parameter reading*/
        app_parse_cmd_line_args(obj, argc, argv);
        APP_PRINTF("Parsed user params! \n");
    
/* Query sensor parameters */
        app_querry_sensor(&obj->sensorObj);
        APP_PRINTF("Sensor params queried! \n");
    
/* Update of parameters after config file read */
        app_update_param_set(obj);
        APP_PRINTF("Updated user params! \n");
    
        status = app_init(obj);
        APP_PRINTF("App Init Done! \n");
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph(obj);
            APP_PRINTF("App Create Graph Done! \n");
        }
        if(status == VX_SUCCESS)
        {
            status = app_verify_graph(obj);
            APP_PRINTF("App Verify Graph Done! \n");
        }
        if(obj->is_interactive && (status == VX_SUCCESS))
        {
            status = app_run_graph_interactive(obj);
        }
        else
        if (status == VX_SUCCESS)
        {
            status = app_run_graph(obj);
        }
    
        APP_PRINTF("App Run Graph Done! \n");
    
        app_delete_graph(obj);
        APP_PRINTF("App Delete Graph Done! \n");
    
        app_deinit(obj);
        APP_PRINTF("App De-init Done! \n");
    
        return status;
    }
    
    static vx_status app_init(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        app_grpx_init_prms_t grpx_prms;
    
        /* Create OpenVx Context */
        obj->context = vxCreateContext();
        status = vxGetStatus((vx_reference) obj->context);
        APP_PRINTF("Creating context done!\n");
        if(status == VX_SUCCESS)
        {
            tivxHwaLoadKernels(obj->context);
            tivxImagingLoadKernels(obj->context);
            tivxImgProcLoadKernels(obj->context);
            tivxTIDLLoadKernels(obj->context);
            tivxFileIOLoadKernels(obj->context);
        }
        APP_PRINTF("Kernel loading done!\n");
    
        /* Initialize modules */
    
        app_init_sensor(&obj->sensorObj, "sensor_obj");
        APP_PRINTF("Sensor init done!\n");
    
        app_init_capture(obj->context, &obj->captureObj, &obj->sensorObj, "capture_obj", APP_BUFFER_Q_DEPTH);
        APP_PRINTF("Capture init done!\n");
    
        app_init_viss(obj->context, &obj->vissObj, &obj->sensorObj, "viss_obj");
        APP_PRINTF("VISS init done!\n");
    
        app_init_aewb(obj->context, &obj->aewbObj, &obj->sensorObj, "aewb_obj");
        APP_PRINTF("AEWB init done!\n");
    
        app_init_ldc(obj->context, &obj->ldcObj, &obj->sensorObj, "ldc_obj");
        APP_PRINTF("LDC init done!\n");
    
    
        printf("Scaler output1 width   = %d\n", obj->scalerObj.output[0].width);
        printf("Scaler output1 height  = %d\n", obj->scalerObj.output[0].height);
        printf("Scaler output2 width   = %d\n", obj->scalerObj.output[1].width);
        printf("Scaler output2 height  = %d\n", obj->scalerObj.output[1].height);
    
        app_init_scaler(obj->context, &obj->scalerObj, "scaler_obj", obj->sensorObj.num_cameras_enabled, 2);
        APP_PRINTF("Scaler init done!\n");
    
    // Bert: Add MosaicObj here or above, May be Mosaic can work as scaler
    //
    //
    
    
    
        /* Initialize TIDL first to get tensor I/O information from network */
        app_init_tidl(obj->context, &obj->tidlObj, "tidl_obj", obj->sensorObj.num_cameras_enabled);
        APP_PRINTF("TIDL Init Done! \n");
    
        /* Update pre-proc parameters with TIDL config before calling init */
        app_update_pre_proc(obj->context, &obj->preProcObj, obj->tidlObj.config, obj->sensorObj.num_cameras_enabled);
        APP_PRINTF("Pre Proc Update Done! \n");
    
        app_init_pre_proc(obj->context, &obj->preProcObj, "pre_proc_obj");
        APP_PRINTF("Pre Proc Init Done! \n");
    
        /* Update ioBufDesc in draw detections object */
        app_update_draw_detections(&obj->drawDetectionsObj, obj->tidlObj.config);
        APP_PRINTF("Draw detections Update Done! \n");
    
        app_init_draw_detections(obj->context, &obj->drawDetectionsObj, "draw_detections_obj", obj->sensorObj.num_cameras_enabled);
        APP_PRINTF("Draw Detections Init Done! \n");
    
        app_init_img_mosaic(obj->context, &obj->imgMosaicObj, "img_mosaic_obj", APP_BUFFER_Q_DEPTH);
        APP_PRINTF("Img Mosaic init done!\n");
    
        app_init_display(obj->context, &obj->displayObj, "display_obj");
        APP_PRINTF("Display init done!\n");
    
        #ifndef x86_64
        if(obj->displayObj.display_option == 1)
        {
            appGrpxInitParamsInit(&grpx_prms, obj->context);
            grpx_prms.draw_callback = app_draw_graphics;
            appGrpxInit(&grpx_prms);
        }
        #endif
    
        appPerfPointSetName(&obj->total_perf , "TOTAL");
        appPerfPointSetName(&obj->fileio_perf, "FILEIO");
    
        return status;
    }
    
    static void app_deinit(AppObj *obj)
    {
        app_deinit_sensor(&obj->sensorObj);
        APP_PRINTF("Sensor deinit done!\n");
    
        app_deinit_capture(&obj->captureObj, APP_BUFFER_Q_DEPTH);
        APP_PRINTF("Capture deinit done!\n");
    
        app_deinit_viss(&obj->vissObj);
        APP_PRINTF("VISS deinit done!\n");
    
        app_deinit_aewb(&obj->aewbObj);
        APP_PRINTF("AEWB deinit done!\n");
    
        app_deinit_ldc(&obj->ldcObj);
        APP_PRINTF("LDC deinit done!\n");
    
        app_deinit_scaler(&obj->scalerObj);
        APP_PRINTF("Scaler deinit done!\n");
    
        app_deinit_pre_proc(&obj->preProcObj);
        APP_PRINTF("Pre proc deinit done!\n");
    
        app_deinit_tidl(&obj->tidlObj);
        APP_PRINTF("TIDL deinit done!\n");
    
        app_deinit_draw_detections(&obj->drawDetectionsObj);
        APP_PRINTF("Draw detections deinit done!\n");
    
        app_deinit_img_mosaic(&obj->imgMosaicObj, APP_BUFFER_Q_DEPTH);
        APP_PRINTF("Img Mosaic deinit done!\n");
    
        app_deinit_display(&obj->displayObj);
        APP_PRINTF("Display deinit done!\n");
    
        #ifndef x86_64
        if(obj->displayObj.display_option == 1)
        {
            appGrpxDeInit();
        }
        #endif
    
        tivxTIDLUnLoadKernels(obj->context);
        tivxHwaUnLoadKernels(obj->context);
        tivxImagingUnLoadKernels(obj->context);
        tivxImgProcUnLoadKernels(obj->context);
        tivxFileIOUnLoadKernels(obj->context);
        APP_PRINTF("Kernels unload done!\n");
    
        vxReleaseContext(&obj->context);
        APP_PRINTF("Release context done!\n");
    }
    
    static void app_delete_graph(AppObj *obj)
    {
        app_delete_capture(&obj->captureObj);
        APP_PRINTF("Capture delete done!\n");
    
        app_delete_viss(&obj->vissObj);
        APP_PRINTF("VISS delete done!\n");
    
        app_delete_aewb(&obj->aewbObj);
        APP_PRINTF("AEWB delete done!\n");
    
        app_delete_ldc(&obj->ldcObj);
        APP_PRINTF("LDC delete done!\n");
    
        app_delete_scaler(&obj->scalerObj);
        APP_PRINTF("Scaler delete done!\n");
    
        app_delete_pre_proc(&obj->preProcObj);
        APP_PRINTF("Pre Proc delete done!\n");
    
        app_delete_tidl(&obj->tidlObj);
        APP_PRINTF("TIDL delete done!\n");
    
        app_delete_draw_detections(&obj->drawDetectionsObj);
        APP_PRINTF("Post Proc delete done!\n");
    
        app_delete_img_mosaic(&obj->imgMosaicObj);
        APP_PRINTF("Img Mosaic delete done!\n");
    
        app_delete_display(&obj->displayObj);
        APP_PRINTF("Display delete done!\n");
    
        vxReleaseGraph(&obj->graph);
        APP_PRINTF("Graph delete done!\n");
    }
    
    static vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[2];
        vx_int32 graph_parameter_index;
    
        obj->graph = vxCreateGraph(obj->context);
        status = vxGetStatus((vx_reference)obj->graph);
        vxSetReferenceName((vx_reference)obj->graph, "app_tidl_od_cam_graph");
        APP_PRINTF("Graph create done!\n");
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_viss(obj->graph, &obj->vissObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("VISS graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_aewb(obj->graph, &obj->aewbObj, obj->vissObj.h3a_stats_arr);
            APP_PRINTF("AEWB graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->vissObj.output_arr);
            APP_PRINTF("LDC graph done!\n");
        }
    
    //Bert: Mosaic added here to act as the scaler as well
    vx_int32 idx = 0;
    obj->imgMosaicObj.input_arr[idx++] = obj->ldcObj.output_arr;
    obj->imgMosaicObj.num_inputs = idx;
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }
    
    /* Transfer the Mosaic output from 'vx_image' to 'vx_object_array' for pre_proc.
     * Caveat: vxCreateObjectArray() creates num_ch NEW images from the exemplar;
     * it does not wrap the Mosaic output buffer itself, so this array will not
     * carry the mosaic pixels. */
    vx_uint32 num_ch = 1; /* single merged stream */
    vx_image out_img = obj->imgMosaicObj.output_image[0];
    vx_object_array tmp_MosaicOutput_array = vxCreateObjectArray(obj->context, (vx_reference)out_img, num_ch);
    /* Do not release out_img here: the reference is still owned by imgMosaicObj. */
    
    if(status == VX_SUCCESS)
    {
        app_create_graph_pre_proc(obj->graph, &obj->preProcObj, tmp_MosaicOutput_array);
        APP_PRINTF("Pre proc graph done!\n");
    }
    
    /* by Bert
        if(status == VX_SUCCESS)
        {
            app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->ldcObj.output_arr);
            APP_PRINTF("Scaler graph done!\n");
        }
    */
    /*
        if(status == VX_SUCCESS)
        {
           app_create_graph_pre_proc(obj->graph, &obj->preProcObj, obj->scalerObj.output[0].arr);
           APP_PRINTF("Pre proc graph done!\n");
        }
    */	
    
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->tidlObj, obj->preProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            app_create_graph_draw_detections(obj->graph, &obj->drawDetectionsObj, obj->tidlObj.output_tensor_arr[0], obj->scalerObj.output[1].arr);
            APP_PRINTF("Draw detections graph done!\n");
        }
    
    
    //Bert: Need to mark??? (second Mosaic instance, post-proc -> display)

    idx = 0; /* reuse idx; redeclaring it here would not compile */
    obj->imgMosaicObj.input_arr[idx++] = obj->drawDetectionsObj.output_image_arr;
    obj->imgMosaicObj.num_inputs = idx;
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_display(obj->graph, &obj->displayObj, obj->imgMosaicObj.output_image[0]);
            APP_PRINTF("Display graph done!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            graph_parameter_index = 0;
            add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
            obj->captureObj.graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.raw_image_arr[0];
            graph_parameter_index++;
    
            vxSetGraphScheduleConfig(obj->graph,
                    VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                    graph_parameter_index,
                    graph_parameters_queue_params_list);
    
            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 6, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 9, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->aewbObj.node, 4, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            /*This output is accessed slightly later in the pipeline by mosaic node so queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 6);
    
            tivxSetNodeParameterNumBufByIndex(obj->preProcObj.node, 2, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 7, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->drawDetectionsObj.node, 3, APP_BUFFER_Q_DEPTH);
    
            tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, APP_BUFFER_Q_DEPTH);
    
            APP_PRINTF("Pipeline params setup done!\n");
        }
    
        return status;
    }
    
    
    
    static vx_status app_verify_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
    
        status = vxVerifyGraph(obj->graph);
    
        if(status == VX_SUCCESS)
        {
        APP_PRINTF("Graph verify SUCCESS!\n");
        }
        else
        {
        APP_PRINTF("Graph verify FAILURE!\n");
            status = VX_FAILURE;
        }
    
        #if 1
        if(VX_SUCCESS == status)
        {
            status = tivxExportGraphToDot(obj->graph,".", "vx_app_tidl_od_cam");
        }
        #endif
    
        if(VX_SUCCESS == status)
        {
            if (obj->captureObj.enable_error_detection)
            {
                status = app_send_error_frame(&obj->captureObj);
                APP_PRINTF("App Send Error Frame Done! %d \n", obj->captureObj.enable_error_detection);
            }
        }
        /* wait a while for prints to flush */
        tivxTaskWaitMsecs(100);
    
        return status;
    }
    
    
    static vx_status app_run_graph_for_one_frame_pipeline(AppObj *obj, vx_int32 frame_id)
    {
        vx_status status = VX_SUCCESS;
    
        appPerfPointBegin(&obj->total_perf);
        CaptureObj *captureObj = &obj->captureObj;
    
        if(obj->pipeline <= 0)
        {
        /* Enqueue outputs */
        /* Enqueue inputs during pipe-up; don't execute yet */
            vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&captureObj->raw_image_arr[obj->enqueueCnt], 1);
    
            obj->enqueueCnt++;
            obj->enqueueCnt   = (obj->enqueueCnt  >= APP_BUFFER_Q_DEPTH)? 0 : obj->enqueueCnt;
            obj->pipeline++;
        }
    
    
        if(obj->pipeline > 0)
        {
            vx_image capture_input_image;
            uint32_t num_refs;
    
            /* Dequeue input */
            vxGraphParameterDequeueDoneRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&capture_input_image, 1, &num_refs);
    
            /* Enqueue input - start execution */
            vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&capture_input_image, 1);
    
            obj->enqueueCnt++;
            obj->dequeueCnt++;
    
            obj->enqueueCnt = (obj->enqueueCnt >= APP_BUFFER_Q_DEPTH)? 0 : obj->enqueueCnt;
            obj->dequeueCnt = (obj->dequeueCnt >= APP_BUFFER_Q_DEPTH)? 0 : obj->dequeueCnt;
        }
    
        appPerfPointEnd(&obj->total_perf);
    
        return status;
    }
    
    
    static vx_status app_run_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
    
        SensorObj *sensorObj = &obj->sensorObj;
        vx_int32 frame_id;
        int32_t ch_mask = obj->sensorObj.ch_mask;
    
        app_pipeline_params_defaults(obj);
    
        if(NULL == sensorObj->sensor_name)
        {
            printf("sensor name is NULL \n");
            return VX_FAILURE;
        }
        status = appStartImageSensor(sensorObj->sensor_name, ch_mask);
        APP_PRINTF("appStartImageSensor returned with status: %d\n", status);
    
        for(frame_id = 0; frame_id < obj->num_frames_to_run; frame_id++)
        {
    #ifdef APP_WRITE_INTERMEDIATE_OUTPUTS
            if(obj->write_file == 1)
            {
                if(obj->captureObj.en_out_capture_write == 1)
                {
                    app_send_cmd_capture_write_node(&obj->captureObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->vissObj.en_out_viss_write == 1)
                {
                    app_send_cmd_viss_write_node(&obj->vissObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->ldcObj.en_out_ldc_write == 1)
                {
                    app_send_cmd_ldc_write_node(&obj->ldcObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->scalerObj.en_out_scaler_write == 1)
                {
                    app_send_cmd_scaler_write_node(&obj->scalerObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                if(obj->preProcObj.en_out_pre_proc_write == 1)
                {
                    app_send_cmd_pre_proc_write_node(&obj->preProcObj, frame_id, obj->num_frames_to_write, obj->num_frames_to_skip);
                }
                obj->write_file = 0;
            }
    #endif
            app_run_graph_for_one_frame_pipeline(obj, frame_id);
    
            /* user asked to stop processing */
            if(obj->stop_task)
                break;
        }
    
        vxWaitGraph(obj->graph);
    
        obj->stop_task = 1;
    
        status = appStopImageSensor(obj->sensorObj.sensor_name, ch_mask);
    
        return status;
    }
    
    static void set_display_defaults(DisplayObj *displayObj)
    {
        displayObj->display_option = 1;
    }
    
    static void app_pipeline_params_defaults(AppObj *obj)
    {
        obj->pipeline       = -APP_BUFFER_Q_DEPTH + 1;
        obj->enqueueCnt     = 0;
        obj->dequeueCnt     = 0;
    }
    
    static void set_sensor_defaults(SensorObj *sensorObj)
    {
        strcpy(sensorObj->sensor_name, SENSOR_SONY_IMX390_UB953_D3);
    
        sensorObj->num_sensors_found = 0;
        sensorObj->sensor_features_enabled = 0;
        sensorObj->sensor_features_supported = 0;
        sensorObj->sensor_dcc_enabled = 0;
        sensorObj->sensor_wdr_enabled = 0;
        sensorObj->sensor_exp_control_enabled = 0;
        sensorObj->sensor_gain_control_enabled = 0;
        sensorObj->ch_mask = 1;
        sensorObj->enable_ldc = 1;
        sensorObj->num_cameras_enabled = 1;
        sensorObj->usecase_option = APP_SENSOR_FEATURE_CFG_UC0;
        sensorObj->is_interactive = 1;
    
    }
    
    static void set_scaler_defaults(ScalerObj *scalerObj)
    {
        scalerObj->color_format = VX_DF_IMAGE_NV12;
    }
    
    static void set_pre_proc_defaults(PreProcObj *preProcObj)
    {
        vx_int32 i;
        for(i = 0; i < 4; i++ )
        {
            preProcObj->params.pad_pixel[i] = 0;
        }
    
        for(i = 0; i< 3 ; i++)
        {
            preProcObj->params.scale_val[i] = 1.0;
            preProcObj->params.mean_pixel[i] = 0.0;
        }
    
        preProcObj->params.ip_rgb_or_yuv = 1; /* YUV-NV12 default */
        preProcObj->params.color_conv_flag = TIADALG_COLOR_CONV_YUV420_BGR;
    
        /* Number of time to clear the output buffer before it gets reused */
        preProcObj->params.clear_count  = 4;
    }
    
    static void app_default_param_set(AppObj *obj)
    {
        set_sensor_defaults(&obj->sensorObj);
    
        set_scaler_defaults(&obj->scalerObj);
    
        set_pre_proc_defaults(&obj->preProcObj);
    
        set_display_defaults(&obj->displayObj);
    
        app_pipeline_params_defaults(obj);
    
        obj->captureObj.enable_error_detection = 1; /* enable by default */
        obj->is_interactive = 1;
        obj->write_file = 0;
        obj->num_frames_to_run = 1000000000;
    }
    
    static vx_int32 calc_grid_size(vx_uint32 ch)
    {
        if(0==ch)
        {
            return -1;
        }
        else if(1==ch)
        {
            return 1;
        }
        else if(4>=ch)
        {
            return 2;
        }
        else if(9>=ch)
        {
            return 3;
        }
        else if(16>=ch)
        {
            return 4;
        }
        else
        {
            return -1;
        }
    }
    
    static void update_img_mosaic_defaults(ImgMosaicObj *imgMosaicObj, vx_uint32 in_width, vx_uint32 in_height, vx_int32 numCh)
    {
        vx_int32 idx, ch;
        vx_int32 grid_size = calc_grid_size(numCh);
        imgMosaicObj->out_width    = DISPLAY_WIDTH;
        imgMosaicObj->out_height   = DISPLAY_HEIGHT;
        imgMosaicObj->num_inputs   = 1;
    
        tivxImgMosaicParamsSetDefaults(&imgMosaicObj->params);
    
        idx = 0;
        for(ch = 0; ch < numCh; ch++)
        {
            vx_int32 startX, startY, winX, winY, winWidth, winHeight;
    
            winX = ch%grid_size;
            winY = ch/grid_size;
    
            if((in_width * grid_size) >= imgMosaicObj->out_width)
            {
                winWidth = imgMosaicObj->out_width / grid_size;
                startX = 0;
            }
            else
            {
                winWidth = in_width;
                startX = (imgMosaicObj->out_width - (in_width * grid_size)) / 2;
            }
    
            if((in_height * grid_size) >= imgMosaicObj->out_height)
            {
                winHeight = imgMosaicObj->out_height / grid_size;
                startY = 0;
            }
            else
            {
                winHeight = in_height;
                startY = (imgMosaicObj->out_height - (in_height * grid_size)) / 2;
            }
    
            imgMosaicObj->params.windows[idx].startX  = startX + (winWidth * winX);
            imgMosaicObj->params.windows[idx].startY  = startY + (winHeight * winY);
            imgMosaicObj->params.windows[idx].width   = winWidth;
            imgMosaicObj->params.windows[idx].height  = winHeight;
            imgMosaicObj->params.windows[idx].input_select   = 0;
            imgMosaicObj->params.windows[idx].channel_select = idx;
            idx++;
        }
    
        imgMosaicObj->params.num_windows  = idx;
    
        /* Number of time to clear the output buffer before it gets reused */
        imgMosaicObj->params.clear_count  = APP_BUFFER_Q_DEPTH;
    }
    
    static void update_draw_detections_defaults(AppObj *obj, DrawDetectionsObj *drawDetectionsObj)
    {
        vx_int32 i;
    
        drawDetectionsObj->params.width  = obj->scalerObj.output[1].width;
        drawDetectionsObj->params.height = obj->scalerObj.output[1].height;
    
        for(i = 0; i < drawDetectionsObj->params.num_classes; i++)
        {
            drawDetectionsObj->params.color_map[i][0] = (vx_uint8)(rand() % 256);
            drawDetectionsObj->params.color_map[i][1] = (vx_uint8)(rand() % 256);
            drawDetectionsObj->params.color_map[i][2] = (vx_uint8)(rand() % 256);
        }
    }
    
    static void app_update_param_set(AppObj *obj)
    {
        obj->sensorObj.sensor_index = 0; /* App works only for IMX390 2MP cameras */
    
        update_draw_detections_defaults(obj, &obj->drawDetectionsObj);
        update_img_mosaic_defaults(&obj->imgMosaicObj, obj->scalerObj.output[1].width, obj->scalerObj.output[1].height, obj->sensorObj.num_cameras_enabled);
    }
    
    /*
     * Utility API used to add a graph parameter from a node, node parameter index
     */
    static void add_graph_parameter_by_node_index(vx_graph graph, vx_node node, vx_uint32 node_parameter_index)
    {
        vx_parameter parameter = vxGetParameterByIndex(node, node_parameter_index);
    
        vxAddParameterToGraph(graph, parameter);
        vxReleaseParameter(&parameter);
    }
    
    #ifndef x86_64
    static void app_draw_graphics(Draw2D_Handle *handle, Draw2D_BufInfo *draw2dBufInfo, uint32_t update_type)
    {
        appGrpxDrawDefault(handle, draw2dBufInfo, update_type);
    
        if(update_type == 0)
        {
            Draw2D_FontPrm sHeading;
    
            sHeading.fontIdx = 4;
            Draw2D_drawString(handle, 580, 5, "TIDL - Object Detection Demo", &sHeading);
        }
    
        return;
    }
    #endif
    

  • The revised code above, which converts the Mosaic output data type, is Method A.

    My colleague tried another approach; let's call it Method B in the following reply.

  • [Method B]: change the input data type of Pre_proc to vx_image.

    The attached zip file contains the modified code.

     methodB_PreProc.zip

    He got the following error message.

    How can we fix the error based on method B?

    Thanks.

  • The flowchart is what we want to implement.

  • Hello, Takuma:

    Based on Method B, we tried to fix the bug by removing (commenting out) vxReplicateNode in app_create_graph_pre_proc, as shown in the following.

    The old error message disappears, but a new error message appears, as shown in the following:

    How can we fix it?

    Thanks.

  • Hi,

    Sorry for the delay in the response.

    May I know what error you are currently facing?

    In the logs shared, there is no error seen.

    [MCU2_0]     55.685587 s: IMX390_GetWBPrgFxn: sensor_pre_gain = 0
    [MCU2_0]     55.690072 s: IMX390_GetWBPrgFxn: sensor_pre_gain = 0

    The 2 logs highlighted above are not errors.

    Could you press "p" and let me know if streaming is occurring? Or has the A72 crashed?

    Regards,
    Nikhil

  • The above 2 logs highlighted are not error.

    (a) No images show on the LCD.

    (b) We do not know whether commenting out the vxReplicateNode is proper or not.

  • As to your request to press 'p' and see what happens:

    We did it and found CPU mpu1_0, total load = 0.12%, as shown in the following image.

  • Hi,

    A few follow-up questions to understand the issue better.

    1. Could you please confirm if my understanding is correct regarding your usecase?

        -> Your graph would be such that the multi-cam output from LDC is sent to the Mosaic node, and then a single stream from Mosaic (consisting of the merged multi-cam images) is sent to the pre-proc node -> TIDL node -> post-proc -> display?

    The attached zip files are codes modified

    2. Thank you for sharing the code. I reviewed the code you attached for Method B.
        In this, I see that the flow is: output from LDC (multi-cam) -> Scaler node -> Mosaic node (merged single output) -> Display, and also Mosaic node (merged single output) -> Preproc -> TIDL.

    -> Are the post-proc and display of the same not required here?

    -> Is the pipeline ... LDC (multi-cam) -> Scaler node -> Mosaic node (merged single output) -> Display working?

    -> Since we have a single image stream output from mosaic, the pre-proc as well as the TIDL node would not have an array as input. Hence, changes should also be made to the TIDL node.

    Please clarify my above questions.

    Regards,
    Nikhil

  • Hello, Nikhil:

    (1) Your understanding is right. Our camera input is YUV, so VISS, LDC, and AEWB will be disabled.

    (2) 

    (a)  Is the post proc and display of the same not required here?

    => Ans.: for run-time operation, the display node is not necessary. Nevertheless, for the post_proc, may I know its function first?

    (b) Is the pipeline ... LDC (multi-cam) -> Scaler node -> Mosaic node (merged single output) -> Display working?

    =>Ans.: Yes, it works successfully. 

    (c) Since we have single image stream output from mosaic, pre-proc as well as tidl node would not have array as input. Hence, changes should also be done for TIDL node too.

    => How do we change pre_proc, tidl, and (maybe) post_proc, respectively?

    Thanks.

  • Hi,

    Our camera is yuv input, so , the VISS, LDC and AEAWB will be disabled

    Oh.. ok. So you want your graph to be like Capture node -> Mosaic Node -> Preproc -> TIDL -> Postproc -> Display ?

    Is your model able to do object detection on a mosaic image (i.e., a single image composed of the multi-cam inputs)?

    for the post_proc, may I know its function first

    In the object-detection use case, the post-proc node used is tivxDrawBoxDetectionsNode(), which draws detection boxes, using the tensor output from the TIDL node, onto the input image taken before pre-proc.

    Are you also trying to do the same (i.e., drawing bounding boxes on a mosaiced input)? May I know if the model is capable of providing this information?

    How to change pre_proc, tidl, and (post_proc, maybe) respectively????

    In the same way you made the change for pre-proc (i.e., changed object_array to vx_image and removed the replicate node), the same should be done for TIDL (object_array to vx_tensor, removing the replicate node) and also for the post-proc node (if being used).

    The nodes themselves need not be changed, as they do not take object_array as input.

    The changes would be in the APIs for preproc, tidl, and postproc (e.g., app_create_graph_tidl(), app_create_graph_draw_detections(), app_create_graph_pre_proc()).

    Regards,
    Nikhil

  • Hi, Nikhil:
    As to your questions, please see my feedback below marked by '=>Ans.:'

    (1) Is your model compatible to do object detection on a mosaic image (which is a single image output of multi-cam input) ?
    =>Ans.: Yes. We already trained an AI model on offline merged 4-camera images and ran it successfully with our revision of 'app_tidl_od', which reads the merged image from a folder.

    (2)Are you also trying to do the same? (i.e., drawing bounding box on a mosaiced input). May I know if the model is capable to provide this information?
    =>Ans.: We have our own post-proc to do so in the case of 'app_tidl_od'.
    So, for easy debugging, we will use our own post-proc by porting it to the TI post-proc node.
    But at run time, we do not need to draw bounding boxes on the LCD.

    ======================================
    As to your suggestions of :
    In the similar way how you made the change for pre-proc (i.e., change from object_Array to vx_image and remove the replicate node), the same should be done for the TIDL (object_array to vx_tensor and remove the replicate node) and also to post-proc node (if being used)

    =>Feedback: We will try it. Thanks very much.

    But I am a little confused, because I thought the output datatype of pre_proc = input datatype of TIDL.
    Method B only changes the input data type of pre_proc and does NOT change its output datatype, so why do I need to change the input data type of TIDL?
    Or, when we comment out the replicate node, does that imply we change the output datatype of pre_proc from vx_object_array to vx_tensor?

  • Hi

    But I am a little confused, because I thought the output datatype of pre_proc = input datatype of TIDL.
    Method B only changes the input data type of pre_proc and does NOT change its output datatype, so why do I need to change the input data type of TIDL?
    Or, when we comment out the replicate node, does that imply we change the output datatype of pre_proc from vx_object_array to vx_tensor?

    As mentioned above, there is no change in the nodes themselves (TIDL or pre-proc), as they already take single inputs such as vx_tensor / vx_image rather than an object array.

    Hence, in the create functions of these nodes, you will find the object_array being unpacked to get the vx_image or vx_tensor that is sent to the node.

    Now since you are already sending a single image as input to the app_create_graph_pre_proc(), the output from the pre-proc would be preProcObj->output_tensor (instead of preProcObj->output_tensor_arr[0])

    This would be the input to app_create_graph_tidl(), and similarly TIDL would output tidlObj->output_tensor[] (instead of tidlObj->output_tensor_arr[]).

    Regards,
    Nikhil

  • Hello, Nikhil:

    I named this modification expB.02.

    We ran it and there is no error message.

    Nevertheless, no image (or video) is shown on the LCD.

    Do we need to modify the display as well if we would like to see detection results on the LCD?

    The log output of expB.02 is shown in the following.

    The modified code is in the zip file.

    expB.02.zip

    What we can do next?

    Thanks.

  • Hi,

    In the attached code, i have few queries.

    1. In main.c, since your input format from the imager is YUV (i.e., obj->sensorObj.sensor_out_format == 1), Line 681 sets obj->enable_mosaic = 0;

    I do not see this being enabled anywhere else. Could you please check this?

    This would be the input to app_create_graph_tidl() and similarly the tidl shall output tidlObj->output_tensor[] (instead of tidlObj->output_tensor_arr[])

    These changes are still not made.

    You are sending obj->preProcObj.output_tensor_arr to app_create_graph_tidl(), and in app_tidl_module_our.c, all the places in app_init_tidl() where num_cameras is used, such as

    tidlObj->in_args_arr  = vxCreateObjectArray(context, (vx_reference)inArgs, num_cameras);
    
    tidlObj->out_args_arr  = vxCreateObjectArray(context, (vx_reference)outArgs, num_cameras);
    
    tidlObj->output_tensor_arr[i]  = vxCreateObjectArray(context, (vx_reference)output_tensors[i], num_cameras);

    will be as shown below (because you have just a single stream output, which is equivalent to getting output from a single camera):

    tidlObj->inArgs = vxCreateUserDataObject(context, "TIDL_InArgs", capacity, NULL );
    
    setInArgs(context, tidlObj->inArgs);
    
    tidlObj->outArgs = vxCreateUserDataObject(context, "TIDL_outArgs", capacity, NULL );
    
    setOutArgs(context, tidlObj->outArgs);
    
    createOutputTensors(context, tidlObj->config, tidlObj->output_tensors);

    The same should be done in app_update_pre_proc() where instead of

    preProcObj->output_tensor_arr[i] = vxCreateObjectArray(context, (vx_reference)output_tensors[i], num_cameras);

    you can use

    createOutputTensors(context, config, preProcObj->output_tensors);

    with the above changes, in main.c, it would be

    app_create_graph_tidl(obj->context, obj->graph, &obj->tidlObj, obj->preProcObj.output_tensor);

    and in app_create_graph_tidl(), instead of xx_arr, you should use xx itself (where xx could be in_args, out_args, output_tensor and input_tensor)

    Please try the above change and let me know if there are further queries.

    Regards,
    Nikhil

  • Hi Nikhil:

    We modified the code based on our understanding of your suggestions.

    There seems to be no error message, but we still cannot see anything on the LCD.

    The revised code (expB.03) and its log after pressing 'p' are in the attached zip file.

    Please help us to check what we can do next.

    Thanks.

    expB.03.zip

  • Hi,

    Thank you for making the changes. I reviewed your current changes. Please find my comments below.

    1. in main.c, since your input format from imager is YUV, i.e., (obj->sensorObj.sensor_out_format == 1). Hence in Line 681, obj->enable_mosaic = 0;

    I do not see this being enabled anywhere else. Could you please check this?

    1. I still do not see where obj->enable_mosaic is enabled in the code. Could you please clarify this?

    2. In the function app_create_graph_pre_proc(), the line below is no longer valid, as output_tensors is no longer an object array; it is just a tensor.

    vx_tensor output  = (vx_tensor)vxGetObjectArrayItem((vx_object_array)preProcObj->output_tensors, 0);

    Hence please remove this. You could send preProcObj->output_tensors[0] directly to the node.

    3. The same issue applies to app_create_graph_tidl(); the lines below are no longer needed, and you can send input and tidlObj->output_tensors as-is to the node.

        input_tensor[0] = (vx_tensor)vxGetObjectArrayItem((vx_object_array)input, 0);
        output_tensor[0] = (vx_tensor)vxGetObjectArrayItem((vx_object_array)tidlObj->output_tensors, 0);

    4. The output tensor (for both pre-proc and TIDL) should depend on the number of output tensors. Hence, instead of vx_tensor output_tensors, it should be

    vx_tensor output_tensors[APP_MODULE_TIDL_MAX_TENSORS];

    5. The function createOutputTensors() takes an array of output tensors (please refer to the original demo). Hence, as mentioned earlier, output_tensor should be an array (not an object array).

    Note : if num_output_tensors = 1, then your implementation in (4) and (5) is valid.

    From your current implementation, your graph is as shown below. Please let me know if this is the flow you are expecting. Also, may I know why a Scaler node is required here, since the Mosaic node itself can scale?


    If yes, could you let me know if you are getting mosaic output on the display with just the top nodes (i.e., Capture + Scaler + Mosaic + Display)?

    Regards,
    Nikhil

  • Hello, Nikhil:

    (1) We modified our code based on your suggestions 1-4, but we get the error message shown in the zip file: expB.04.zip

    How can we fix it?

    Thanks.

    (2) Based on the expB.04 code, we commented out pre_proc and tidl, so the data flow is Capture + Scaler + Mosaic + Display.

    We can see images on the LCD as shown in the following (we plugged in just 2 cameras).

  • Hello, Nikhil:

    I found that VISS, Scaler and tidl use the following API twice.

    tivxSetNodeParameterNumBufByIndex.

    Do I need to use this API to set Mosaic twice?

    If yes,

    (1) Which parameter index is still available, e.g., 10 or 11?

    (2) What else I need to modify as well?

    Thanks.

  • Hi,

    We modified our code based on your suggestions from 1~4, but get error message as shown in the zip file

    The error you see is because of the below change in app_tidl_module_our.c:

         if(status == VX_SUCCESS)
         {
            capacity = sizeof(TIDL_InArgs);
            tidlObj->outArgs = vxCreateUserDataObject(context, "TIDL_outArgs", capacity, NULL );
            setOutArgs(context, tidlObj->outArgs);   
         }

    For outArgs, the capacity should be sizeof(TIDL_outArgs) and not sizeof(TIDL_InArgs);

    Do I need to use this API to set Mosaic twice

    Here, the syntax of tivxSetNodeParameterNumBufByIndex is tivxSetNodeParameterNumBufByIndex(node, parameter, buffer_depth)

    So, the below means that the output (parameter 1) of mosaic node has a buffer depth of APP_BUFFER_Q_DEPTH.

    tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, APP_BUFFER_Q_DEPTH);

    The parameter index can be obtained from the node definition.

    For example, in the case of the mosaic node, prms[1] = (vx_reference)output_image;

    VX_API_ENTRY vx_node VX_API_CALL tivxImgMosaicNode(vx_graph             graph,
                                                       vx_kernel            kernel,
                                                       vx_user_data_object  config,
                                                       vx_image             output_image,
                                                       vx_image             background_image,
                                                       vx_object_array      input_arr[],
                                                       vx_uint32            num_inputs)
    {
        vx_reference prms[TIVX_IMG_MOSAIC_MAX_PARAMS];
        vx_int32 i;
    
        vx_int32 num_params = TIVX_IMG_MOSAIC_BASE_PARAMS + num_inputs;
    
        prms[0] = (vx_reference)config;
        prms[1] = (vx_reference)output_image;
        prms[2] = (vx_reference)background_image;
    
        for(i = 0; i < num_inputs; i++){
            prms[TIVX_IMG_MOSAIC_INPUT_START_IDX + i] = (vx_reference)input_arr[i];
        }
    
        vx_node node = tivxCreateNodeByKernelRef(graph,
                                                 kernel,
                                                 prms,
                                                 num_params);
        return(node);
    
    }
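    To make the index arithmetic concrete, here is a small self-contained sketch of the prms[] layout above. The two #defines are assumptions consistent with the node code shown (3 base parameters, inputs starting immediately after them), not the literal TIOVX header values:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Assumed values, consistent with the prms[] layout in the node above:
     * prms[0] = config, prms[1] = output_image, prms[2] = background_image,
     * and input i lands at index TIVX_IMG_MOSAIC_INPUT_START_IDX + i. */
    #define TIVX_IMG_MOSAIC_BASE_PARAMS     3
    #define TIVX_IMG_MOSAIC_INPUT_START_IDX TIVX_IMG_MOSAIC_BASE_PARAMS

    int main(void)
    {
        /* tivxSetNodeParameterNumBufByIndex(node, 1, depth) targets this index. */
        int output_param_index = 1;

        /* The first input object array of the mosaic node. */
        int first_input_index = TIVX_IMG_MOSAIC_INPUT_START_IDX + 0;

        printf("output=%d first_input=%d\n", output_param_index, first_input_index);
        assert(output_param_index == 1);
        assert(first_input_index == 3);
        return 0;
    }
    ```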


    I found that VISS, Scaler and TIDL use the following API twice

    I still do not understand why the VISS and LDC nodes are being used in your code. If you are not using them, please remove them or put them under a flag and disable it.

    Regards,
    Nikhil

  • Hello, Nikhil:

    We tried to remove the 3 nodes that we don't use: VISS, AEWB, and LDC.

    We then assigned the capture node output as the input of the Mosaic node.

    The following is our code:

    Nevertheless, we got the following error messages:

    What can I do to fix the issue?

    Thanks.

  • Hi,

    Could you please share the latest source code with which you are facing the error?

    May I know the size of the captured image?

    Is the verify graph successful? Please send me the full logs.

    Regards,
    Nikhil

  • Hello, Nikhil:

    We can successfully comment out AEWB and LDC and still get the video stream shown on the LCD.

    Nevertheless, when we also comment out the line status = app_create_graph_viss(obj->graph, &obj->vissObj, obj->captureObj.raw_image_arr[0]);

    we get the error message.

    What can we do to fix it?

    Thanks.

    The revised code is as follows:

    ================================================

    static vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[2];
        vx_int32 graph_parameter_index;

        obj->graph = vxCreateGraph(obj->context);
        status = vxGetStatus((vx_reference)obj->graph);
        vxSetReferenceName((vx_reference)obj->graph, "app_tidl_od_cam_graph");
        APP_PRINTF("Graph create done!\n");

        if(status == VX_SUCCESS)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done!\n");
        }

        /*
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_viss(obj->graph, &obj->vissObj, obj->captureObj.raw_image_arr[0]);
            APP_PRINTF("VISS graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            status = app_create_graph_aewb(obj->graph, &obj->aewbObj, obj->vissObj.h3a_stats_arr);
            APP_PRINTF("AEWB graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            status = app_create_graph_ldc(obj->graph, &obj->ldcObj, obj->vissObj.output_arr);
            APP_PRINTF("LDC graph done!\n");
        }
        */

        vx_int32 idx = 0;
        obj->imgMosaicObj.input_arr[idx++] = obj->captureObj.raw_image_arr[0];
        obj->imgMosaicObj.num_inputs = idx;

        if(status == VX_SUCCESS)
        {
            status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
            APP_PRINTF("Img Mosaic graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->imgMosaicObj.output_image[0]);
            APP_PRINTF("Scaler graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            app_create_graph_pre_proc(obj->graph, &obj->preProcObj, obj->scalerObj.output[0].arr);
            APP_PRINTF("Pre proc graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            app_create_graph_tidl(obj->context, obj->graph, &obj->tidlObj, obj->preProcObj.output_tensor_arr);
            APP_PRINTF("TIDL graph done!\n");
        }

        if(status == VX_SUCCESS)
        {
            app_create_graph_draw_detections(obj->graph, &obj->drawDetectionsObj, obj->tidlObj.output_tensor_arr[0], obj->scalerObj.output[1].arr);
            APP_PRINTF("Draw detections graph done!\n");
        }

        /*
        if(status == VX_SUCCESS)
        {
            status = app_create_graph_display(obj->graph, &obj->displayObj, obj->drawDetectionsObj.output_image_arr);
            APP_PRINTF("Display graph done!\n");
        }
        */

        if(status == VX_SUCCESS)
        {
            graph_parameter_index = 0;
            add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
            obj->captureObj.graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
            graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.raw_image_arr[0];
            graph_parameter_index++;

            vxSetGraphScheduleConfig(obj->graph,
                                     VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                                     graph_parameter_index,
                                     graph_parameters_queue_params_list);

            tivxSetGraphPipelineDepth(obj->graph, APP_PIPELINE_DEPTH);

            //tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 6, APP_BUFFER_Q_DEPTH);
            //tivxSetNodeParameterNumBufByIndex(obj->vissObj.node, 9, APP_BUFFER_Q_DEPTH);
            //tivxSetNodeParameterNumBufByIndex(obj->aewbObj.node, 4, APP_BUFFER_Q_DEPTH);

            //tivxSetNodeParameterNumBufByIndex(obj->ldcObj.node, 7, APP_BUFFER_Q_DEPTH);

            tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, APP_BUFFER_Q_DEPTH);

            /* This output is accessed slightly later in the pipeline by the mosaic node, so its queue depth is larger */
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, 6);
            tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, 6);

            tivxSetNodeParameterNumBufByIndex(obj->preProcObj.node, 2, APP_BUFFER_Q_DEPTH);

            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 4, APP_BUFFER_Q_DEPTH);
            tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 7, APP_BUFFER_Q_DEPTH);

            tivxSetNodeParameterNumBufByIndex(obj->drawDetectionsObj.node, 3, APP_BUFFER_Q_DEPTH);

            APP_PRINTF("Pipeline params setup done!\n");
        }

        return status;
    }

  • Hello, Nikhil:

    How can we fix the issue?

    Thanks.

  • Hi,

    Sorry for the delay in response.

    Could you check the return value of the Fvid2_processRequest() call inside tivxKernelImgMosaicMscDrvSubmit() in vision_apps/kernels/img_proc/r5f/vx_img_mosaic_msc_target.c, from where we see the above error?

    Is the verify graph successful? Please send me the full logs.

    Could you please address the above query? Could you please also send the full logs as a text file (not as a screenshot)?

    May I know why you are using a scaler node at the output of the mosaic node, as shown below?

        status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj, NULL);
        APP_PRINTF("Img Mosaic graph done!\n");
    }

    if(status == VX_SUCCESS)
    {
        app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->imgMosaicObj.output_image[0]);
        APP_PRINTF("Scaler graph done!\n");
    }

    Could you let me know if you are getting the mosaic output on the display with just the top three nodes? (i.e., Capture + Scaler + Mosaic + Display)

    I would suggest testing just this first, i.e. Capture + Mosaic + Display, considering your camera is giving an output of YUV images. [Is it YUV420 or YUV422?]

    We could remove all the other nodes for now to keep the code clean with just these 3 nodes.

    We can integrate the other nodes once this is up.

    Could you please try this at your end?

    Regards,
    Nikhil

  • Hello, Nikhil:

    We will try your suggestion. 

    Let me answer your question first:

    ============

    Q: May I know why you are using a scaler node at the output of the mosaic node, as shown below?

    Ans.: Because we found that without a following scaler node, the mosaic output cannot be fed into Pre-proc, and no video is shown on the LCD.

  • Hello, Nikhil:

    == As to your suggestion of:

    Could you let me know if you are getting the mosaic output on the display with just the top three nodes? (i.e., Capture + Scaler + Mosaic + Display)

    We could remove all the other nodes for now to keep the code clean with just these 3 nodes.

    =======================

    We did the test and cannot see images on the LCD.

    What can we do next?

    Thanks.

    Error message:

    code:

    expC.04.zip

  • Hi,

    Thank you for sharing the code.

    We did the test and cannot see images on LCD

    Ok, so let us focus on this currently before adding the AI sensing part.

    You currently have Capture + Mosaic + Display not working, right?

    Could you please confirm whether the output of capture is in NV12 or UYVY format? Note: Mosaic only supports NV12 as input.
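    One way to check this programmatically is to compare the capture output's format against NV12 before wiring it into the mosaic; on target, the format would come from vxQueryImage(img, VX_IMAGE_FORMAT, &fmt, sizeof(fmt)). The sketch below re-creates the OpenVX FourCC macro locally so it is self-contained:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* OpenVX image formats are FourCC codes; this mirrors the VX_DF_IMAGE
     * macro from VX/vx_types.h so the sketch compiles standalone. */
    #define VX_DF_IMAGE(a,b,c,d) ((unsigned)(a) | ((unsigned)(b) << 8) | \
                                  ((unsigned)(c) << 16) | ((unsigned)(d) << 24))
    #define VX_DF_IMAGE_NV12 VX_DF_IMAGE('N','V','1','2')
    #define VX_DF_IMAGE_UYVY VX_DF_IMAGE('U','Y','V','Y')

    /* The mosaic node only accepts NV12 input. */
    static int mosaic_input_ok(unsigned int fmt)
    {
        return fmt == VX_DF_IMAGE_NV12;
    }

    int main(void)
    {
        assert(mosaic_input_ok(VX_DF_IMAGE_NV12));
        /* A UYVY (YUV422) capture output must be converted to NV12 first,
         * e.g. by routing it through a node that outputs NV12. */
        assert(!mosaic_input_ok(VX_DF_IMAGE_UYVY));
        printf("format check ok\n");
        return 0;
    }
    ```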

    I made a few changes to the code you sent.
    Could you please replace your main.c file with the one below and check whether you still see the issue?

    /cfs-file/__key/communityserver-discussions-components-files/791/2703.main.c

    Regards,
    Nikhil