
TDA4VM: What issues should be considered when using the scaler node output for encoding?

Part Number: TDA4VM

Dear experts,

    I was using the codec module and ran into some problems. I wrote a capture + m2m + encode demo, which works and saves the video. The m2m stage here is similar to the LDC node in the app_multi_cam_codec demo. Building on this, I added only a scaler node after the m2m node and mapped the data in the same way. When I use the output of the scaler node as the input to the GStreamer encoder, the program runs incorrectly and gets stuck in appCodecDeqAppSrc(). I also tried adding other nodes after the scaler node and mapping their data, and hit the same problem. I am confused and hope you can give some suggestions: what issues should be considered when using the scaler node output for encoding? The SDK I am using is 08_05.

Regards,
xin

  • Hi Suman,

    They are the same problem.

    Regards,

    xin

  • Hi,

    We shall use this thread to work on the issue.

    So your use case is:

    1. Capture + DisplayM2M + Encode -> Working

    2. Capture + DisplayM2M + Scaler Node + Encode -> Not Working

    Could you please share the code changes in a .txt file?

    A better way would be to share the working code, plus a patch on top of this working code that adds the Scaler node.
    This would help us identify the cause of the issue.

    Regards,

    Nikhil

  • Hi Nikhil,

    After adding the scaler node, I can now encode successfully. The only difference is that I previously used vision_apps/modules/src/app_scaler_module.c and now use vision_apps/apps/basic_demos/app_multi_cam_codec/multi_cam_codec_scaler_module.c. Does this make a difference? The code below shows the key parts.

    Regards,

    xin

    /*init encode*/
        if (VX_SUCCESS == status)
        {
            vx_image intermediate_img = vxCreateImage(obj->context, obj->enc_pool.width, obj->enc_pool.height, VX_DF_IMAGE_NV12);
            status = vxGetStatus((vx_reference)intermediate_img);
            vx_int32 q;
            if(status == VX_SUCCESS)
            {
                for(q = 0; q < obj->enc_pool.bufq_depth; q++)
                {
                    obj->enc_pool.arr[q] = vxCreateObjectArray(obj->context, (vx_reference)intermediate_img, NUM_CAPT_CHANNELS_TOTAL);
                    status = vxGetStatus((vx_reference)obj->enc_pool.arr[q]);
                    if(status != VX_SUCCESS)
                    {
                        printf("[APP_INIT]: Unable to create image object array! \n");
                        break;
                    }
                    else
                    {
                        vx_char name[VX_MAX_REFERENCE_NAME];
    
                        snprintf(name, VX_MAX_REFERENCE_NAME, "enc_pool.arr_%d", q);
    
                        vxSetReferenceName((vx_reference)obj->enc_pool.arr[q], name);
                    }
                }
                vxReleaseImage(&intermediate_img);
            }
            else
            {
                printf("[APP_INIT]: Unable to create intermediate_img\n");
            }
        }
    
    static vx_status app_create_graphs(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
    
        obj->graph = vxCreateGraph(obj->context);
        if (vxGetStatus((vx_reference)obj->graph) != VX_SUCCESS)
        {
            APP_PRINTF("graph create failed\n");
            return VX_FAILURE;
        }
        status = vxSetReferenceName((vx_reference)obj->graph, "Capture_CODEC_Demo");
        if (VX_SUCCESS == status)
        {
            APP_PRINTF("graph Set Reference Name  Success !!\n");
        }
        else
        {
            APP_PRINTF("graph Set Reference Name  Failed !!\n");
        }
    
        if (VX_SUCCESS == status)
        {
            status = app_create_graph_capture(obj->graph, &obj->captureObj);
            APP_PRINTF("Capture graph done !\n");
        }
    
        if (VX_SUCCESS == status)
        {
            status = app_create_graph_displayM2M(obj->graph, &obj->displayM2MObj, obj->captureObj.capt_frames[0]);
            APP_PRINTF("DisplayM2M graph done !\n");
        }
        
        if (VX_SUCCESS == status)
        {
            obj->scalerObj.output[0].width = obj->enc_pool.width;
            obj->scalerObj.output[0].height = obj->enc_pool.height;
            obj->scalerObj.output[0].arr = obj->enc_pool.arr[0];
            status = app_create_graph_scaler(obj->context, obj->graph, &obj->scalerObj, obj->displayM2MObj.arr_Output);
            APP_PRINTF("Scaler graph done!\n");
        }
    
        if (VX_SUCCESS == status)
        {
            // vx_image input_image = (vx_image)vxGetObjectArrayItem( obj->captureObj.capt_frames[1], 0);
            // vx_image input_image = (vx_image)vxGetObjectArrayItem( obj->scalerObj.output[0].arr, 0);
            // vx_image input_image = (vx_image)vxGetObjectArrayItem( obj->displayM2MObj.arr_Output, 0);
            vx_image input_image = (vx_image)vxGetObjectArrayItem( obj->enc_pool.arr[0], 0);
            status = app_create_graph_display(obj->graph, &obj->displayObj, input_image);
            vxReleaseImage(&input_image);
            APP_PRINTF("Display graph done !\n");
        }
    
        int graph_parameter_index = 0;
        int graph_parameters_list_depth = 2;
    
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[graph_parameters_list_depth];
    
        /* Set graph schedule config such that graph parameter @ index 0 is
            * enqueuable */
        add_graph_parameter_by_node_index(obj->graph, obj->captureObj.node, 1);
        obj->captureObj.graph_parameter_index = graph_parameter_index;
        graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
        graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH;
        graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->captureObj.capt_frames[0];
        graph_parameter_index++;
    
        add_graph_parameter_by_node_index(obj->graph, obj->scalerObj.node, 1);
        obj->scalerObj.graph_parameter_index = graph_parameter_index;
        obj->enc_pool.graph_parameter_index = graph_parameter_index;
        graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
        graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = obj->enc_pool.bufq_depth;
        graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->enc_pool.arr[0];
        graph_parameter_index++;
    
        if(status == VX_SUCCESS)
        {
            status = tivxSetGraphPipelineDepth(obj->graph, PIPE_DEPTH);
        }
    
        /* Schedule mode auto is used, here we don't need to call vxScheduleGraph
            * Graph gets scheduled automatically as refs are enqueued to it
            */
        if(status == VX_SUCCESS)
        {
            status = vxSetGraphScheduleConfig(obj->graph,
                            VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                            graph_parameters_list_depth,
                            graph_parameters_queue_params_list
                            );
        }
    
        if(status == VX_SUCCESS)
        {
            APP_PRINTF("vxSetGraphScheduleConfig done...\n");
        }
    
        #if OPEN_ENCODER
        {
            if ( obj->encode ) 
            {
                if(status == VX_SUCCESS)
                {
                    status = appCodecInit(&obj->codec_pipe_params);
                    APP_PRINTF("Codec Pipeline done!\n");
                }
            }
        }
        #endif
    
        return status;
    }
    
    static vx_status app_verify_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
    
        status = vxVerifyGraph(obj->graph);
    
        if(status == VX_SUCCESS)
        {
            APP_PRINTF("Capture Graph verify done!\n");
        }
        else
        {
            APP_PRINTF("Capture Graph verify failure!\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = SendCmdtoScaler(&obj->scalerObj);
        }
    
        #if OPEN_ENCODER
        {
            if ( obj->encode ) 
            {
                for (vx_int8 buf_id=0; buf_id<obj->enc_pool.bufq_depth; buf_id++)
                {
                    if(VX_SUCCESS == status)
                    {
                        status = map_vx_object_arr(obj->enc_pool.arr[buf_id], obj->enc_pool.data_ptr[buf_id], obj->enc_pool.map_id[buf_id], obj->enc_pool.num_channels);
                    }
                }
    
                if(VX_SUCCESS == status)
                {
                    status = appCodecSrcInit(obj->enc_pool.data_ptr);
                    if(VX_SUCCESS == status)
                    {
                        APP_PRINTF("\nappCodecSrcInit Done!\n");
                    }
                    else
                    {
                        APP_PRINTF("\nappCodecSrcInit Failed!\n");
                    }
                }
                
                for (vx_int8 buf_id=0; buf_id<obj->enc_pool.bufq_depth; buf_id++)
                {
                    if(VX_SUCCESS == status)
                    {
                        status = unmap_vx_object_arr(obj->enc_pool.arr[buf_id], obj->enc_pool.map_id[buf_id], obj->enc_pool.num_channels);
                    }
                }
            }
    
            /* wait a while for prints to flush */
            tivxTaskWaitMsecs(100);
        }
        #endif
    
    
        return status;
    }
    
    
    static vx_status app_run_graph_for_one_frame_pipeline(AppObj *obj, vx_int32 frame_id)
    {
        vx_status status = VX_SUCCESS;
    
        appPerfPointBegin(&obj->total_perf);
    
            /* enc_pool buffer recycling */
            if (status==VX_SUCCESS && (obj->encode==1))
            {
                status = capture_encode(obj, frame_id);
            }
    
        appPerfPointEnd(&obj->total_perf);
        return status;
    }
    
    static vx_status capture_encode(AppObj* obj, vx_int32 frame_id)
    {
        APP_PRINTF("capture_encode: frame %d beginning\n", frame_id);
        vx_status status = VX_SUCCESS;
    
        CaptureObj *captureObj = &obj->captureObj;
        AppGraphParamRefPool *enc_pool = &obj->enc_pool;
    
        vx_object_array capture_input_arr;
        vx_object_array scaler_output_arr;
    
        uint32_t num_refs;
    
        if ( frame_id >= APP_BUFFER_Q_DEPTH )
        {
            if (status == VX_SUCCESS)
            {
                status = vxGraphParameterDequeueDoneRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&capture_input_arr, 1, &num_refs);
                // APP_PRINTF("----------------Dequeue capture-------------\n");
            }
            if (status == VX_SUCCESS)
            {
                status = vxGraphParameterDequeueDoneRef(obj->graph, enc_pool->graph_parameter_index, (vx_reference*)&scaler_output_arr, 1, &num_refs);
                // APP_PRINTF("----------------Dequeue enc_pool-------------\n");
            }
    
            #if OPEN_ENCODER
            {
                if ( frame_id >= enc_pool->bufq_depth )
                {
                    if (status == VX_SUCCESS && obj->encode==1)
                    {
                        // APP_PRINTF("--------------start Dequeue codec-------------\n");
                        status = appCodecDeqAppSrc(obj->scaler_enq_id);
                        // APP_PRINTF("--------------Dequeue codec-------------\n");
                    }
                    if(status==VX_SUCCESS)
                    {
                        status = unmap_vx_object_arr(enc_pool->arr[obj->scaler_enq_id], enc_pool->map_id[obj->scaler_enq_id], NUM_CAPT_CHANNELS_TOTAL);
                        // APP_PRINTF("----------------unmap_vx_object_arr------------\n");
                    }
                }
    
                if(status==VX_SUCCESS)
                {
                    status = map_vx_object_arr(enc_pool->arr[obj->appsrc_push_id], enc_pool->data_ptr[obj->appsrc_push_id], enc_pool->map_id[obj->appsrc_push_id], NUM_CAPT_CHANNELS_TOTAL);
                    // APP_PRINTF("----------------map_vx_object_arr------------\n");
                }
                if(status==VX_SUCCESS && obj->encode==1)
                {
                    status = appCodecEnqAppSrc(obj->appsrc_push_id);
                    // APP_PRINTF("----------------Enqueue codec-------------\n");
                }
                obj->appsrc_push_id++;
                obj->appsrc_push_id     = (obj->appsrc_push_id  >= enc_pool->bufq_depth)? 0 : obj->appsrc_push_id; 
            }
            #endif
            
        }
    
        if (status == VX_SUCCESS)
        {
            status = vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&captureObj->capt_frames[obj->capture_id], 1);
        }
        if (status == VX_SUCCESS)
        {
            status = vxGraphParameterEnqueueReadyRef(obj->graph, enc_pool->graph_parameter_index, (vx_reference*)&enc_pool->arr[obj->scaler_enq_id], 1);
        }
        obj->scaler_enq_id++;
        obj->scaler_enq_id         = (obj->scaler_enq_id  >= enc_pool->bufq_depth)? 0 : obj->scaler_enq_id;
        obj->capture_id++;
        obj->capture_id         = (obj->capture_id  >= APP_BUFFER_Q_DEPTH)? 0 : obj->capture_id;
    
        return status;
    }
    
    

  • Hi, 

    Yes, the only difference between app_scaler_module.c and multi_cam_codec_scaler_module.c is that in the first one the output object array is already created, whereas in the second one it is not created and the output is NULL. The application then passes in enc_pool.arr[ ] with a buffer depth that can differ from the capture, VISS, and LDC buffer depths.

    Hence, multi_cam_codec_scaler_module.c was used so that the application has control over the buffer depth required by the encoder.

    Regards,

    Nikhil