Hi there:
The picture above shows my SRV graph. For background, there were originally two Display nodes after Convert_1 & Convert_2, and all of the parameters shown above were set to 4. The graph ran well and the video shown on screen was normal.
Then I replaced the two Display nodes with a Mosaic node to merge the images from those two Convert nodes. I ran many tests with different parameter settings and finally found a set of suitable parameters.
But I'm confused and I wonder:
① Why does it only work when I set NumBufs of Convert_1 & Convert_2 to 1?
② What factors should be considered when setting these parameters, i.e., what conditions does the parameter setting depend on?
I hope I have described the problem clearly!
Looking forward to your reply!
Hello,
Are you creating the output of convert_1 and convert_2 as graph parameters?
Also, we have some documentation on these topics below. Please let me know if this helps clarify:
Regards,
Lucas
Thank you Lucas,
I only set the output of the Capture node as a graph parameter. I also made some modifications to the Capture node to fit our camera (1280x960 YUYV output), but I don't think that has anything to do with my problem, because the demo always ran well before I added the Mosaic node.
As for the documentation you mentioned, my colleagues and I have read it many times and didn't find any clue about a solution to this problem.
Hi,
By any chance, is any parameter to the mosaic node going wrong? For example, if a pointer becomes null, the node will return an error.
Regards,
Brijesh
Hi, Brijesh:
Thank you for looking into this issue!
I can't rule out your guess, although I've checked the code over and over again. However, given that the test with the last set of parameters (NumBufs of Convert_1 & Convert_2 set to 1) runs successfully, I think the parameters are probably right.
One more piece of information: when I set NumBufs of Convert_1 & Convert_2 to a value bigger than 1, the program completes 4 enqueue/dequeue loops before it stops, and a static frame stays on the screen (as shown below). It seems that the data flow is blocked, or there are not enough buffers between some of the nodes?
It might be asking a bit much for you to reproduce my situation, but I've been stuck on this problem for a long time and still haven't found any clues.
Hello,
Could you please provide the SDK version you are using?
Also, do you have any source code where you add the mosaic node to the graph that we could take a look at?
Regards,
Lucas
Hi,
Glad you asked. The SDK version I am currently using is 7.01.
Here is part of the key code from my SrvDemo. Please forgive me for not finding a suitable format or tool to upload the code in this forum interface; I'd be glad to send you the complete code by mail if you need it.
/*********************** Init Graph****************************/
static vx_status app_init(AppObj *obj)
{
vx_status status = VX_FAILURE;
app_grpx_init_prms_t grpx_prms;
obj->stop_task = 0;
obj->stop_task_done = 0;
status = appCommonInit();
if(status==0)
{
tivxInit();
tivxHostInit();
}
obj->context = vxCreateContext();
APP_ASSERT_VALID_REF(obj->context);
tivxSrvLoadKernels(obj->context);
tivxHwaLoadKernels(obj->context);
#if ENABLE_ORIGNAL_CODE
tivxImagingLoadKernels(obj->context);
APP_PRINTF("tivxImagingLoadKernels done\n");
#endif
/* init convert_1 node */
{
obj->convertObj_1.info_Output.width = obj->pArg->inWidth;
obj->convertObj_1.info_Output.height = obj->pArg->inHeight;
obj->convertObj_1.info_Output.dataFormat = VX_DF_IMAGE_NV12;
strcpy(obj->convertObj_1.name_Target, TIVX_TARGET_DSP2);
strcpy(obj->convertObj_1.name_NodeObj, "Convert_1");
obj->convertObj_1.numCH = NUM_CAPT_CHANNELS;
obj->convertObj_1.arr_Replicate[0] = vx_true_e;
obj->convertObj_1.arr_Replicate[1] = vx_true_e;
app_init_convert(obj->context, &obj->convertObj_1);
}
/* init scaler node */
{
obj->scalerObj.info_Input.width = obj->pArg->inWidth;
obj->scalerObj.info_Input.height = obj->pArg->inHeight;
obj->scalerObj.info_Input.dataFormat = VX_DF_IMAGE_NV12;
obj->scalerObj.info_Output1.width = obj->pArg->inWidth;
obj->scalerObj.info_Output1.height = obj->pArg->inHeight;
obj->scalerObj.info_Output1.dataFormat = VX_DF_IMAGE_NV12;
obj->scalerObj.info_Output2.width = obj->pArg->inWidth;
obj->scalerObj.info_Output2.height = obj->pArg->inHeight;
obj->scalerObj.info_Output2.dataFormat = VX_DF_IMAGE_NV12;
strcpy(obj->scalerObj.name_NodeObj, "ScalerNode");
strcpy(obj->scalerObj.name_Target, TIVX_TARGET_VPAC_MSC2);
obj->scalerObj.arr_Replicate[0] = vx_true_e;
obj->scalerObj.arr_Replicate[1] = vx_true_e;
obj->scalerObj.arr_Replicate[2] = vx_true_e;
obj->scalerObj.arr_Replicate[3] = vx_false_e;
obj->scalerObj.arr_Replicate[4] = vx_false_e;
obj->scalerObj.arr_Replicate[5] = vx_false_e;
app_init_scaler(obj->context, &obj->scalerObj, NUM_CAPT_CHANNELS);
APP_PRINTF("MSC_Scaler init done!\n");
}
/* init srv_1 node */
{
obj->srvObj_1.info_Output.width = obj->pArg->inWidth/2;
obj->srvObj_1.info_Output.height = obj->pArg->inHeight/2;
obj->srvObj_1.info_Output.dataFormat = VX_DF_IMAGE_RGBX;
obj->srvObj_1.whichView = 1;
strcpy(obj->srvObj_1.name_Target, TIVX_TARGET_A72_0);
strcpy(obj->srvObj_1.name_NodeObj, "Srv_1");
app_init_srv(obj->context, &obj->srvObj_1);
}
/* init srv_2 node */
{
obj->srvObj_2.info_Output.width = obj->pArg->inWidth/2;
obj->srvObj_2.info_Output.height = obj->pArg->inHeight/2;
obj->srvObj_2.info_Output.dataFormat = VX_DF_IMAGE_RGBX;
obj->srvObj_2.whichView = 2;
strcpy(obj->srvObj_2.name_Target, TIVX_TARGET_A72_0);
strcpy(obj->srvObj_2.name_NodeObj, "Srv_2");
app_init_srv(obj->context, &obj->srvObj_2);
}
/* init convert_2 node */
{
obj->convertObj_2.info_Output.width = obj->pArg->inWidth/2;
obj->convertObj_2.info_Output.height = obj->pArg->inHeight/2;
obj->convertObj_2.info_Output.dataFormat = VX_DF_IMAGE_NV12;
strcpy(obj->convertObj_2.name_Target, TIVX_TARGET_DSP1);
strcpy(obj->convertObj_2.name_NodeObj, "Convert_2");
obj->convertObj_2.numCH = 1;
app_init_convert(obj->context, &obj->convertObj_2);
}
/* init convert_3 node */
{
obj->convertObj_3.info_Output.width = obj->pArg->inWidth/2;
obj->convertObj_3.info_Output.height = obj->pArg->inHeight/2;
obj->convertObj_3.info_Output.dataFormat = VX_DF_IMAGE_NV12;
strcpy(obj->convertObj_3.name_Target, TIVX_TARGET_DSP2);
strcpy(obj->convertObj_3.name_NodeObj, "Convert_3");
obj->convertObj_3.numCH = 1;
app_init_convert(obj->context, &obj->convertObj_3);
}
#if ENABLE_MOSAIC
/* init mosaic node */
{
set_img_mosaic_defaults(obj, &obj->imgMosaicObj);
strcpy(obj->imgMosaicObj.name_NodeObj, "MosaicNode");
#if DEMO_1
strcpy(obj->imgMosaicObj.name_Target, TIVX_TARGET_VPAC_MSC1);
#else
strcpy(obj->imgMosaicObj.name_Target, TIVX_TARGET_VPAC_MSC2);
#endif
app_init_img_mosaic(obj->context, &obj->imgMosaicObj, NUM_BUFS);
APP_PRINTF("Mosaic Init Done! \n");
}
#endif
#if !USE_CAPTURE_NODE
/* init ldc node */
{
obj->LdcObj.info_Input.width = 1280;
obj->LdcObj.info_Input.height = 960;
obj->LdcObj.info_Input.dataFormat = VX_DF_IMAGE_UYVY;
obj->LdcObj.info_Output1.width = 1280;
obj->LdcObj.info_Output1.height = 960;
obj->LdcObj.info_Output1.dataFormat = VX_DF_IMAGE_NV12;
strcpy(obj->LdcObj.name_NodeObj, "ldc_node");
strcpy(obj->LdcObj.name_Target, TIVX_TARGET_VPAC_LDC1);
app_init_ldc(obj->context, &obj->LdcObj, NUM_CAPT_CHANNELS);
APP_PRINTF("LDC init done!\n");
}
#endif
if(obj->is_enable_gui)
{
appGrpxInitParamsInit(&grpx_prms, obj->context);
grpx_prms.draw_callback = app_draw_graphics;
appGrpxInit(&grpx_prms);
}
appPerfPointSetName(&obj->total_perf , "TOTAL");
return status;
}
/*********************** Create Graph****************************/
static vx_status app_create_graphs(AppObj *obj)
{
vx_status status = VX_SUCCESS;
int graph_parameter_num = 0;
int graph_parameters_list_depth = 1;
if(obj->test_mode == 1)
{
graph_parameters_list_depth = 2;
}
vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[graph_parameters_list_depth];
obj->outWidth = obj->pArg->outWidth;
obj->outHeight = obj->pArg->outHeight;
obj->inWidth = obj->pArg->inWidth;
obj->inHeight = obj->pArg->inHeight;
obj->cam_dcc_id = 0;
if ((vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_DISPLAY1)) &&
(vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_CAPTURE1)) &&
(vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_VPAC_MSC1)) &&
(vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_VPAC_LDC1))
&& (vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_VPAC_VISS1))
)
{
obj->graph = vxCreateGraph(obj->context);
if (vxGetStatus((vx_reference)obj->graph) != VX_SUCCESS)
{
APP_PRINTF("graph create failed\n");
return VX_FAILURE;
}
status = vxSetReferenceName((vx_reference)obj->graph, "3DSRV");
if (VX_SUCCESS == status)
{
status = app_create_capture(obj);
}
if (VX_SUCCESS == status)
{
status = app_create_graph_convert(obj->graph, &obj->convertObj_1, obj->capt_frames[0]);
}
/* create Msc scaler node */
if (VX_SUCCESS == status)
{
status = app_create_graph_scaler(obj->graph, &obj->scalerObj, obj->convertObj_1.arr_Output);
}
if (VX_SUCCESS == status)
{
status = app_create_graph_srv(obj->graph, &obj->srvObj_1, obj->scalerObj.arr_Output1);
}
if (VX_SUCCESS == status)
{
status = app_create_graph_srv(obj->graph, &obj->srvObj_2, obj->scalerObj.arr_Output2);
}
if (VX_SUCCESS == status)
{
status = app_create_graph_convert(obj->graph, &obj->convertObj_2, obj->srvObj_1.arr_Output); /* RGBX -> NV12*/
}
if (VX_SUCCESS == status)
{
status = app_create_graph_convert(obj->graph, &obj->convertObj_3, obj->srvObj_2.arr_Output); /* RGBX -> NV12*/
}
vx_int32 idx = 0;
obj->imgMosaicObj.arr_Input[idx++] = obj->convertObj_2.arr_Output;
obj->imgMosaicObj.arr_Input[idx++] = obj->convertObj_3.arr_Output;
obj->imgMosaicObj.num_inputs = idx;
if (VX_SUCCESS == status)
{
status = app_create_graph_img_mosaic(obj->graph, &obj->imgMosaicObj);
}
if (VX_SUCCESS == status)
{
status = app_create_display2(obj, obj->imgMosaicObj.arr_Output);
}
/* set Node NumBufs */
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->convertObj_1.node, 1, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: convert_1 OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 1, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: scalerObj_1 OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->scalerObj.node, 2, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: scalerObj_2 OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->srvObj_1.node, 5, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: srv_node OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->srvObj_2.node, 5, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: srv_node2 OK\n");
}
if (VX_SUCCESS == status)
{
/* note: an error occurs if this param is set bigger than 1 once the mosaic node is added to the graph */
status = tivxSetNodeParameterNumBufByIndex(obj->convertObj_2.node, 1, 1);
APP_PRINTF("Set Node Param Num BUfs: Convert2_node OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->convertObj_3.node, 1, 1);
APP_PRINTF("Set Node Param Num BUfs: Convert3_node OK\n");
}
if (VX_SUCCESS == status)
{
status = tivxSetNodeParameterNumBufByIndex(obj->imgMosaicObj.node, 1, NUM_BUFS);
APP_PRINTF("Set Node Param Num BUfs: imgMosaicObj OK\n");
}
add_graph_parameter_by_node_index(obj->graph, obj->captureNode, 1);
/* Set graph schedule config such that graph parameter @ index 0 is
* enqueuable */
graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = NUM_BUFS;
graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference*)&obj->capt_frames[0];
graph_parameter_num++;
APP_PRINTF("obj->test_mode = %d \n", obj->test_mode);
if(obj->test_mode == 1)
{
add_graph_parameter_by_node_index(obj->graph, obj->displayNode, 1);
/* Set graph schedule config such that graph parameter @ index 0 is
* enqueuable */
obj->displayNodeGraphParamNum = graph_parameter_num;
graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = NUM_BUFS;
graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference*)&obj->dispInput_img1;
graph_parameter_num++;
}
if(status == VX_SUCCESS)
{
status = tivxSetGraphPipelineDepth(obj->graph, PIPE_DEPTH);
}
/* Schedule mode auto is used, here we don't need to call vxScheduleGraph
* Graph gets scheduled automatically as refs are enqueued to it
*/
if(status == VX_SUCCESS)
{
status = vxSetGraphScheduleConfig(obj->graph,
VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
graph_parameters_list_depth,
graph_parameters_queue_params_list
);
}
if(status == VX_SUCCESS)
{
APP_PRINTF("app_linux_opengl_integrated_srv: Verifying graph 2 ... .\n");
status = vxVerifyGraph(obj->graph);
APP_ASSERT(status == VX_SUCCESS);
}
if(status == VX_SUCCESS)
{
status = tivxExportGraphToDot(obj->graph, ".", "integrated_srv_graph");
APP_PRINTF("app_linux_opengl_integrated_srv: Verifying graph 2... Done\n");
}
if (status == VX_SUCCESS)
{
status = app_sendcmd2Scaler(&obj->scalerObj);
}
APP_PRINTF("vxSetGraphScheduleConfig done\n");
APP_PRINTF("app_create_graph exiting\n");
}
else
{
APP_PRINTF("app_create_graph failed: appropriate cores not enabled\n");
status = VX_FAILURE;
}
if (obj->test_mode == 1)
{
vx_int32 bytes_read = 0;
obj->fs_test_raw_image = read_test_image_raw(obj->context, &(sensorParams.sensorInfo),
"/opt/vision_apps/test_data/psdkra/app_single_cam/IMX390_001/input2.raw",
&bytes_read);
APP_PRINTF("%d bytes were read by read_error_image_raw()\n", bytes_read);
if(obj->fs_test_raw_image == NULL)
{
printf("read_error_image_raw returned a null pointer - test case failed\n");
status = VX_FAILURE;
}
if((bytes_read <= 0) && (status == VX_SUCCESS))
{
status = tivxReleaseRawImage(&obj->fs_test_raw_image);
obj->fs_test_raw_image = NULL;
}
if(status == VX_SUCCESS)
{
status = vxVerifyGraph(obj->graph);
}
if((status == VX_SUCCESS) && (NULL != obj->fs_test_raw_image) && (NULL != obj->captureNode))
{
status = app_send_test_frame(obj->captureNode, obj->fs_test_raw_image);
}
}
return status;
}
/*********************** Run Graph ****************************/
static vx_status app_run_graph(AppObj *obj)
{
vx_status status = VX_SUCCESS;
vx_uint32 num_refs, buf_id;
int graph_parameter_num = 0;
vx_uint32 iteration = 0;
#if USE_CAPTURE_NODE
vx_uint32 loop_id;
AppSensorCmdParams cmdPrms;
cmdPrms.numSensors = NUM_CAPT_CHANNELS;
cmdPrms.portNum = NUM_CAPT_INST;
for (loop_id = 0U; loop_id < cmdPrms.portNum; loop_id++)
{
cmdPrms.portIdMap[loop_id] = loop_id;
}
if (VX_SUCCESS != appRemoteServiceRun(APP_IPC_CPU_MCU2_0, APP_REMOTE_SERVICE_SENSOR_NAME,
APP_REMOTE_SERVICE_SENSOR_CMD_CONFIG_UB960, &cmdPrms, sizeof(cmdPrms), 0))
{
APP_PRINTF("failed to start remote service!!!\n");
status = VX_FAILURE;
return status;
}
graph_parameter_num = 0;
for (buf_id=0; buf_id<4; buf_id++)
{
if (status == VX_SUCCESS)
{
status = vxGraphParameterEnqueueReadyRef(obj->graph, graph_parameter_num, (vx_reference*)&obj->capt_frames[buf_id], 1);
}
}
#else
char input_file_path[MAXPATHLENGTH];
const char *SimplePic[4] = {"front.uyvy", "rear.uyvy", "left.uyvy", "right.uyvy"};
vx_image tmp_image;
vx_uint32 i;
/* Enqueue buf for pipe up but don't trigger graph execution */
graph_parameter_num = 0;
for (buf_id=0; buf_id<NUM_BUFS; buf_id++)
{
for (i=0; i<4; i++)
{
snprintf(input_file_path, MAXPATHLENGTH, "%s/input/%s", get_test_file_path(), SimplePic[i]);
tmp_image = (vx_image)vxGetObjectArrayItem(obj->LdcObj.arr_Input[buf_id], i);
readYUVInput(input_file_path, tmp_image);
printf("load %s success!\n", input_file_path);
vxReleaseImage(&tmp_image);
}
status = vxGraphParameterEnqueueReadyRef(obj->graph, graph_parameter_num,(vx_reference*)&obj->LdcObj.img_Input[buf_id], 1);
}
graph_parameter_num++;
#endif
printf("Enqueue input image success!!!\n");
if(obj->test_mode == 1)
{
if(status == VX_SUCCESS)
{
status = vxGraphParameterEnqueueReadyRef(obj->graph, obj->displayNodeGraphParamNum,
(vx_reference*)&obj->dispInput_img1, 1);
}
}
/* wait for graph instances to complete, compare output and
* recycle data buffers, schedule again */
vx_uint32 actual_checksum = 0;
static vx_int32 iflag = 0;
while(status == VX_SUCCESS)
{
vx_image test_image;
graph_parameter_num = 0;
appPerfPointBegin(&obj->total_perf);
printf(" ------------ dequeue loop start\n");
/* Get output reference, waits until a frame is available */
#if USE_CAPTURE_NODE
vx_object_array out_capture_frames;
if(status == VX_SUCCESS)
{
status = vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num,
(vx_reference*)&out_capture_frames, 1, &num_refs);
}
graph_parameter_num++;
#else
vx_image input_image;
if(status == VX_SUCCESS)
{
status = vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num,
(vx_reference*)&input_image, 1, &num_refs);
}
graph_parameter_num++;
#endif
printf(" ------------ dequeue graph params success\n");
// app_sendcmd2Display(obj);
/* save convert1 output nv12 img to file */
if ((iflag++ % 20) == 0)
{
#if ADD_GRAPH_PRM_CONVERT1
char output_file_path[MAXPATHLENGTH];
vx_image save_image;
dq_convert1_out_array = vxCreateObjectArray(obj->context, (vx_reference)dq_convert1_out_image, NUM_CAPT_CHANNELS);
for(buf_id=0; buf_id<NUM_CAPT_CHANNELS; buf_id++)
{
save_image = (vx_image)vxGetObjectArrayItem(dq_convert1_out_array, buf_id);
snprintf(output_file_path, MAXPATHLENGTH, "%s/output/convert1_out_%d.bin", get_test_file_path(), buf_id);
// status = app_save_vximage_yuyv_to_bin_file(output_file_path, tmp_capt_image);
status = app_save_vximage_nv12_to_bin_file(output_file_path, save_image);
if (status != VX_SUCCESS)
{
APP_PRINTF("save nv12 img to %s failed !!!\n", output_file_path);
}
else
{
APP_PRINTF("save nv12 img to %s success!!!\n", output_file_path);
}
vxReleaseImage(&save_image); /* release the array item retrieved in this iteration */
}
vxReleaseObjectArray(&dq_convert1_out_array);
#endif
}
if(obj->test_mode == 1)
{
/* Get output reference, waits until a frame is available */
if(status == VX_SUCCESS)
{
status = vxGraphParameterDequeueDoneRef(obj->graph, obj->displayNodeGraphParamNum,
(vx_reference*)&test_image, 1, &num_refs);
}
printf("test iteration: %d of %d\n", iteration, TEST_BUFFER+1);
if(iteration > TEST_BUFFER)
{
if(app_test_check_image(test_image, checksums_expected[0][0], &actual_checksum) == vx_false_e)
{
test_result = vx_false_e;
}
populate_gatherer(0, 0, actual_checksum);
obj->stop_task = 1;
}
/* Get output reference, waits until a frame is available */
if(status == VX_SUCCESS)
{
status = vxGraphParameterEnqueueReadyRef(obj->graph, obj->displayNodeGraphParamNum,
(vx_reference*)&test_image, 1);
}
}
graph_parameter_num = 0;
#if USE_CAPTURE_NODE
if(status == VX_SUCCESS)
{
status = vxGraphParameterEnqueueReadyRef(obj->graph, graph_parameter_num, (vx_reference*)&out_capture_frames, 1);
}
#else
if(status == VX_SUCCESS)
{
status = vxGraphParameterEnqueueReadyRef(obj->graph, graph_parameter_num, (vx_reference*)&input_image, 1);
}
#endif
graph_parameter_num++;
printf(" enqueue graph params success------------\n");
appPerfPointEnd(&obj->total_perf);
if(iteration==100)
{
/* after first 'n' iteration reset performance stats */
appPerfStatsResetAll();
}
iteration++;
if((obj->stop_task) || (status != VX_SUCCESS))
{
break;
}
}
/* ensure all graph processing is complete */
vxWaitGraph(obj->graph);
printf("After WaitGraph \n");
/* Dequeue buf for pipe down */
num_refs = 0xFF;
graph_parameter_num = 0;
while((num_refs > 0) && (status == VX_SUCCESS))
{
if(status == VX_SUCCESS)
{
status = vxGraphParameterCheckDoneRef(obj->graph, graph_parameter_num, &num_refs);
}
if(num_refs > 0)
{
if(status == VX_SUCCESS)
{
#if USE_CAPTURE_NODE
vx_object_array out_capture_frames;
status = vxGraphParameterDequeueDoneRef(
obj->graph,
graph_parameter_num,
(vx_reference*)&out_capture_frames,
1,
&num_refs);
#else
vx_image out_image;
status = vxGraphParameterDequeueDoneRef(
obj->graph,
graph_parameter_num,
(vx_reference*)&out_image,
1,
&num_refs);
#endif
}
}
}
graph_parameter_num++;
num_refs = 0xFF;
while((num_refs > 0) && (obj->test_mode == 1) && (status == VX_SUCCESS))
{
vx_image out_image;
if(status == VX_SUCCESS)
{
status = vxGraphParameterCheckDoneRef(obj->graph, obj->displayNodeGraphParamNum, &num_refs);
}
if(num_refs > 0)
{
APP_PRINTF("Dequeue display \n");
if(status == VX_SUCCESS)
{
status = vxGraphParameterDequeueDoneRef(
obj->graph,
obj->displayNodeGraphParamNum,
(vx_reference*)&out_image,
1,
&num_refs);
}
}
}
return status;
}
Hello,
Could you provide the source for the app_create_graph_convert and the app_create_graph_img_mosaic? I am trying to review the parameter setting for each of these. I think you can attach a zip file to this E2E if that makes it easier to share.
Regards,
Lucas
Hi,
I'm not sure if there is some problem with the webpage today, but I couldn't upload files or pictures, so I pasted the source code here. Please take a look!
/******************************* Convert node init **************************************************/
vx_status app_init_convert(vx_context context, ConvertObj *convertObj)
{
vx_status status = VX_SUCCESS;
vx_image output = vxCreateImage(context, convertObj->info_Output.width, convertObj->info_Output.height, convertObj->info_Output.dataFormat);
convertObj->arr_Output = vxCreateObjectArray(context, (vx_reference)output, convertObj->numCH);
vxReleaseImage(&output);
if (vxGetStatus((vx_reference)convertObj->arr_Output) != VX_SUCCESS)
{
status = VX_FAILURE;
APP_PRINTF("init convert failed\n");
}
return status;
}
/******************************* Convert node create **************************************************/
vx_status app_create_graph_convert(vx_graph graph, ConvertObj *convertObj, vx_object_array arrInput)
{
vx_status status = VX_SUCCESS;
vx_image imgInput = (vx_image)vxGetObjectArrayItem(arrInput, 0);
vx_image imgOutput = (vx_image)vxGetObjectArrayItem(convertObj->arr_Output, 0);
convertObj->node = vxColorConvertNode(graph, imgInput, imgOutput);
vxSetNodeTarget(convertObj->node, VX_TARGET_STRING, convertObj->name_Target);
vxSetReferenceName((vx_reference)convertObj->node, convertObj->name_NodeObj);
if (1 < convertObj->numCH)
{
vxReplicateNode(graph, convertObj->node, convertObj->arr_Replicate, 2);
}
if (vxGetStatus((vx_reference)convertObj->node) != VX_SUCCESS)
{
status = VX_FAILURE;
APP_PRINTF("convertObj->node create failed\n");
}
vxReleaseImage(&imgInput);
vxReleaseImage(&imgOutput);
return status;
}
/******************************* Mosaic node init **************************************************/
vx_status app_init_img_mosaic(vx_context context, ImgMosaicObj *imgMosaicObj, vx_int32 bufq_depth)
{
vx_status status = VX_SUCCESS;
vx_int32 i;
imgMosaicObj->config = vxCreateUserDataObject(context, "ImgMosaicConfig", sizeof(tivxImgMosaicParams), NULL);
status = vxGetStatus((vx_reference)imgMosaicObj->config);
if(status == VX_SUCCESS)
{
status = vxCopyUserDataObject(imgMosaicObj->config, 0, sizeof(tivxImgMosaicParams),\
&imgMosaicObj->params, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
}
vx_image img_output = vxCreateImage(context, imgMosaicObj->info_Output.width, imgMosaicObj->info_Output.height, imgMosaicObj->info_Output.dataFormat);
imgMosaicObj->arr_Output = vxCreateObjectArray(context, (vx_reference)img_output, 1);
vxReleaseImage(&img_output);
if(status == VX_SUCCESS)
{
imgMosaicObj->kernel = tivxAddKernelImgMosaic(context, imgMosaicObj->num_inputs);
status = vxGetStatus((vx_reference)imgMosaicObj->kernel);
}
for(i = 0; i < TIVX_IMG_MOSAIC_MAX_INPUTS; i++)
{
imgMosaicObj->arr_Input[i] = NULL;
}
if (status!= VX_SUCCESS)
{
APP_PRINTF("init imgMosaicObj failed\n");
}
return status;
}
/******************************* Mosaic node create **************************************************/
vx_status app_create_graph_img_mosaic(vx_graph graph, ImgMosaicObj *imgMosaicObj)
{
vx_status status = VX_SUCCESS;
vx_image imgOutput = (vx_image)vxGetObjectArrayItem(imgMosaicObj->arr_Output, 0);
imgMosaicObj->node = tivxImgMosaicNode(graph,
imgMosaicObj->kernel,
imgMosaicObj->config,
imgOutput,
imgMosaicObj->arr_Input,
imgMosaicObj->num_inputs);
if (vxGetStatus((vx_reference)imgMosaicObj->node) != VX_SUCCESS)
{
status = VX_FAILURE;
APP_PRINTF("imgMosaicObj->node create failed\n");
}
APP_ASSERT_VALID_REF(imgMosaicObj->node);
vxSetNodeTarget(imgMosaicObj->node, VX_TARGET_STRING, imgMosaicObj->name_Target);
vxSetReferenceName((vx_reference)imgMosaicObj->node, imgMosaicObj->name_NodeObj);
vxReleaseImage(&imgOutput);
return status;
}
Hello,
I am not seeing the below in the source code that you provided. Could you please either point me to it or include this as well?
obj->convertObj_2.arr_Replicate
obj->convertObj_3.arr_Replicate
Regards,
Lucas
Hi,
Actually, I didn't set Convert_2 & Convert_3 to replicate; you can see this logic by looking at 'Init Graph' together with 'Convert Init'. The reason is that the images from the four channels have already been combined into one channel after the SRV node, so I don't think the replicate operation is needed in the nodes after SRV. Am I right?
Hello,
Yes, you are correct. I did not notice that there was a check inside app_create_graph_convert to only replicate when more than one channel is being used.
Do you also have the source of the app_create_graph_srv? I presume this is creating an object array of a single image and using that output to create the SRV node, but I wanted to check.
Given that this use case is very large, it is a bit tricky to determine what issues could be occurring. So that I can better understand your questions, are you getting the expected SoC performance and are curious about why the parameter requires only one buffer, or are you not getting the expected performance and therefore trying to increase this number of buffers?
Regards,
Lucas
Hi,
As mentioned in my first post, there are two points I'm currently concerned about, and I'm not pursuing high performance yet. It seems this demo can only run normally when I set NumBufs of Convert_2 & Convert_3 to one.
I'm using four cameras as the input of this graph, and I created an object array of four images to be the input of the SRV node.
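Roughly like this (a simplified sketch, not my exact code; the real function follows the same pattern as app_init_convert above, and the helper name here is just illustrative):
static vx_object_array create_srv_input_arr(vx_context context, vx_uint32 width, vx_uint32 height)
{
    /* exemplar image; each object array item copies its attributes */
    vx_image exemplar = vxCreateImage(context, width, height, VX_DF_IMAGE_NV12);
    /* one item per camera channel (four cameras) */
    vx_object_array arr = vxCreateObjectArray(context, (vx_reference)exemplar, 4);
    vxReleaseImage(&exemplar);
    return arr;
}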
Thank you very much for following up on my questions so responsibly! I think it would be easier, more accurate, and clearer if I could send you the source code. May I?
My email address is 'yh.feng@invo.cn'. Please let me know if that's OK.
Respect,
Damon
Hello,
Thanks for the background. If you are ok with sharing here on the forum, you should be able to attach a zip file of your full source code by selecting Insert -> Image/Video/File then selecting upload.
Regards,
Lucas
Hi, Lucas
I can finally upload the zip file. Somehow I couldn't upload files here before.
There are two folders in the zip file: app_srv_camera from 'psdk_rtos_j7/vision_apps/apps/srv_demos/' and gpu from 'psdk_rtos_j7/vision_apps/kernels/srv/'
Please take some time to check my code.
Regards,
Damon
Hello,
Sorry, but I have not gotten a chance to look into it further. I can hopefully take a further look today.
Regards,
Lucas
Hello,
When I tried building this on my end, I got compilation issues due to the tivx_display_transparency_params_t structure not being found. Was this missed somewhere in the code you uploaded?
Regards,
Lucas
Hi,Lucas
Sorry, I forgot about that. It's used in the function "app_sendcmd2Display" in app_srv_camera.c, but this function is disabled for now, so you can delete the code related to it.
Regards,
Damon
Hi, Lucas
Is the review going well? Or were there any problems during the compilation process?
Hello Damon,
Apologies for the delay here. I will be looking at this more in depth later this week and will provide an update at that time.
Regards,
Lucas
Hi Lucas,
Thank you for the reply. I hope we can figure out this problem as soon as possible.
Regards,
Damon
Hi Damon,
I had to change one other thing in the app to get it to build. I changed the APP_REMOTE_SERVICE_SENSOR_CMD_CONFIG_UB960 to APP_REMOTE_SERVICE_SENSOR_CMD_CONFIG_IMX390 since APP_REMOTE_SERVICE_SENSOR_CMD_CONFIG_UB960 was not defined. However, when I run, the app appears to hang, perhaps because of this change.
Are there any additional changes beyond what you have sent that I need in order to run the app?
Also, if you are able to reproduce this in a smaller app, that would be helpful, given that this app is fairly large and will be difficult to debug.
Regards,
Lucas
Hello Lucas,
Since the camera I'm using is different from yours, I added an additional macro: APP_REMOTE_SERVICE_SENSOR_CMD_CONFIG_UB960.
But my issue seems to have nothing to do with the camera node. I just ran another test in which I used the LDC node instead of the Camera + Convert nodes and used pictures to simulate the input. The same error is still there when I set NumBufs on the Mosaic node's parameters to greater than 1.
There is a macro named USE_CAPTURE_NODE in app_srv_camera.c which determines whether to use the camera or the LDC node. Could you try setting it to 0 and see if it prints the same error as mine?
Hello Damon,
I have set this to 0 and the app runs. However, it looks like it needs some test files at /opt/vision_apps/test_data/input/. Could you please provide these?
Regards,
Lucas
Hello Damon,
Sorry for the delays here and thank you for your patience.
I was able to duplicate your issue on my end. The issue ultimately has to do with how object array items are supported with pipelining: there is a limitation when using these "composite objects" with our pipelining implementation. However, there is a workaround that can be used in this case to enable multiple buffers at the given parameter.
The workaround is to enable vxReplicateNode for convertObj_2 and convertObj_3 even though there will be only a single instance of each of these nodes. Therefore, in app_convert_module.c, you can comment out the line "if (1 < convertObj->numCH)" that checks for multiple channels and instead call vxReplicateNode regardless of the number of channels enabled.
In addition, you will need to set the below parameters in the convertObj_2 and convertObj_3 structures:
obj->convertObj_2.arr_Replicate[0] = vx_false_e;
obj->convertObj_2.arr_Replicate[1] = vx_true_e;
obj->convertObj_3.arr_Replicate[0] = vx_false_e;
obj->convertObj_3.arr_Replicate[1] = vx_true_e;
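For reference, a sketch of what app_create_graph_convert could look like with that change applied; it is essentially the function you posted earlier with the numCH check removed so that vxReplicateNode is always called:
vx_status app_create_graph_convert(vx_graph graph, ConvertObj *convertObj, vx_object_array arrInput)
{
    vx_status status = VX_SUCCESS;
    vx_image imgInput  = (vx_image)vxGetObjectArrayItem(arrInput, 0);
    vx_image imgOutput = (vx_image)vxGetObjectArrayItem(convertObj->arr_Output, 0);
    convertObj->node = vxColorConvertNode(graph, imgInput, imgOutput);
    vxSetNodeTarget(convertObj->node, VX_TARGET_STRING, convertObj->name_Target);
    vxSetReferenceName((vx_reference)convertObj->node, convertObj->name_NodeObj);
    /* Replicate unconditionally so the framework associates this node's parameters
     * with the parent object array, even when numCH == 1 */
    vxReplicateNode(graph, convertObj->node, convertObj->arr_Replicate, 2);
    if (vxGetStatus((vx_reference)convertObj->node) != VX_SUCCESS)
    {
        status = VX_FAILURE;
        APP_PRINTF("convertObj->node create failed\n");
    }
    vxReleaseImage(&imgInput);
    vxReleaseImage(&imgOutput);
    return status;
}
With the replicate call in place, the NumBufs for convertObj_2 and convertObj_3 should no longer need to be pinned to 1 and can use NUM_BUFS like the other nodes.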
With these changes above, you should be able to set the number of buffers to the desired value.
Please let me know if you are able to duplicate this setup.
Regards,
Lucas
Hi Lucas,
So excited to hear such good news, and congratulations on finally getting it working!
I can only verify this in a few days because the board is not at hand right now, but I'm still a little confused about this issue:
1. Is the issue related to the use of the Mosaic node specifically, or might it occur in some other cases as well?
2. Will this limitation be resolved in a future version?
3. What I care about most is understanding the internal mechanism of pipelining. I've read the docs you mentioned earlier several times and searched some other documents, but I still haven't been able to understand graph pipelining in depth. Are there any docs like the introduction to chains & links on J6?
I know the questions above may not be simple to explain clearly in writing, so it's OK if you don't answer these additional questions.
I'll tell you the result as soon as I finish the test. Thank you again for your support!
Regards,
Damon
Hello Damon,
A few answers to your questions below:
1. This issue is not specific to the mosaic node; it is something that may be faced with other nodes that have similar interfaces. It is similar to the considerations mentioned in the documentation here. What is occurring is that the object produced by the convert node is an image, but the object consumed by the mosaic node is an object array. In the initial case, before adding the replicate node, the call to set multiple buffers at that output was setting multiple image buffers rather than object array buffers. Therefore, when this object was passed to the mosaic node, it was an image object descriptor rather than an object array object descriptor, which caused the mosaic node to fail. When the replicate node is used, however, the framework expects the object array at that parameter and therefore sets up multiple object array buffers successfully.
2. Since a workaround exists (using the replicate node as described above), we do not have plans to fix this in an upcoming SDK release.
3. We have linked to papers that our team has written on the subject of pipelining with OpenVX that describe more details about the internal implementation here. Please let me know if you have further questions though.
Regards,
Lucas
Hello Lucas,
Thank you very much for your answers!
I've tested the workaround you mentioned and the app runs normally. I'll take a close look at the docs in the links you provided.
Regards,
Damon
Hello Damon,
Good to hear. Please respond here if you have further questions on this topic.
Regards,
Lucas