TDA4VH-Q1: uyvy cam to U16

Part Number: TDA4VH-Q1

Hello, I am using SDK version 9.2. My camera format is UYVY.

I want to access each pixel through vxMapImagePatch and apply my own processing after capture, before the frame enters the display node, in the single-camera application.
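For reference, the core UYVY-to-U16 conversion itself (independent of OpenVX) can be sketched in plain C. The function name and the left-shift widening are my assumptions, not SDK API; in UYVY the bytes are ordered U0 Y0 V0 Y1, so luma samples sit at odd byte offsets:

```c
#include <stdint.h>
#include <stddef.h>

/* Extract the luma (Y) channel of a UYVY frame into a 16-bit buffer.
 * UYVY byte order is U0 Y0 V0 Y1, so Y samples are at odd offsets.
 * The 8-bit luma is widened to 16 bits by a left shift here; whether you
 * shift or simply zero-extend depends on what your processing expects. */
static void uyvy_to_u16_luma(const uint8_t *uyvy, uint16_t *dst,
                             size_t width, size_t height, size_t stride)
{
    for (size_t y = 0; y < height; y++) {
        const uint8_t *row = uyvy + y * stride;
        for (size_t x = 0; x < width; x++) {
            dst[y * width + x] = (uint16_t)row[2 * x + 1] << 8;
        }
    }
}
```

Inside a node you would run this loop over the pointer returned by vxMapImagePatch instead of a raw buffer.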

obj->capt_yuv_image = (vx_image)vxGetObjectArrayItem(obj->cap_frames[0], 0);

...


    for (i = 0; i < frm_loop_cnt; i++)
    {
        vx_image test_image;
        appPerfPointBegin(&obj->total_perf);
        graph_parameter_num = 0;
        if (status == VX_SUCCESS)
        {
            status = vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num, (vx_reference*)&out_capture_frames, 1, &num_refs_capture);
        }
        graph_parameter_num++;
        if ((status == VX_SUCCESS) && (obj->test_mode == 1))
        {
            status = vxGraphParameterDequeueDoneRef(obj->graph, 1, (vx_reference*)&test_image, 1, &num_refs_capture);
        }

        if ((status == VX_SUCCESS) && (obj->mosaic_enable == 1) && (obj->mosaic_enqueue == 1))
        {
            status = vxGraphParameterDequeueDoneRef(obj->graph, 1, (vx_reference*)&obj->imgMosaicObj.output_image, 1, &num_refs_capture);
            graph_parameter_num++;
        }

        vx_image image = (vx_image)vxGetObjectArrayItem(out_capture_frames, 0);

        converToU16(image);
     
...



obj->display_image = obj->capt_yuv_image;



     

However, after this code modification, the display output does not change, and an afterimage persists for as many as 4 frames, which is the number of buffers.

If I capture using the 's' key, the first image is processed correctly, the second comes out as the raw UYVY image, and this pattern repeats.

I want to modify the output of the capture node, pass it to the display node, and show it on the display. Which part is wrong?

thank you

  • Hi Kim,

    What do you mean by "uyvy cam to U16"?

    But I want to access each pixel through vxMapImagePatch and process the image I want

    That is possible, and it is typically done by creating a new OpenVX node for your desired processing and adding it between the capture and display nodes.

    the display screen remains unchanged, and the afterimage remains as much as 4 frames, which is the number of buffers
    vx_image image = (vx_image)vxGetObjectArrayItem(out_capture_frames, 0);

    converToU16(image);
    ...

    obj->display_image = obj->capt_yuv_image;

    I suppose you would have to "get", "process", and then "release" those images so that the entire pipeline can keep running.
    In your code above, I am not sure whether you have done that properly.
    If you create a new OpenVX node for your own processing, the buffer management is handled by the OpenVX framework automatically.
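    As a rough sketch of that get/process/release pattern, using the standard OpenVX pipelining calls (`converToU16` is the function from your code; error checking is omitted, and this is an outline, not a drop-in fix):

    ```c
    /* Sketch only: dequeue a completed capture frame, process it, then
     * enqueue the reference back so the capture node can reuse the buffer.
     * Without the re-enqueue, the graph runs out of buffers after a few
     * frames and the display appears frozen. */
    vx_object_array out_capture_frames;
    vx_uint32 num_refs;
    vxGraphParameterDequeueDoneRef(graph, 0,
            (vx_reference *)&out_capture_frames, 1, &num_refs);

    vx_image image = (vx_image)vxGetObjectArrayItem(out_capture_frames, 0);
    converToU16(image);        /* your per-pixel processing */
    vxReleaseImage(&image);    /* drop the extra reference taken above */

    /* Hand the frame back to the graph so capture can refill it. */
    vxGraphParameterEnqueueReadyRef(graph, 0,
            (vx_reference *)&out_capture_frames, 1);
    ```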

    Our S/W expert is copied for his comments and any further questions you may have.

  • Hello
    As you suggested, I created a custom node between the capture node and the display node and applied the algorithm I wanted.

    As a result, I confirmed that it comes out on the display as I want it to.

    However, there is one small issue: a faint horizontal line appears and disappears in areas of the image where there is movement.

    The custom node uses vx_image for both its input and output.

    capture node -> obj->cap_frames[0] -> vx_image -> custom node -> vx_image -> display node

    What could be the problem?

    Thank you

  • Hi Kim,

    As you suggested, I created a custom node between the capture node and the display node and applied the algorithm I wanted.

    Thanks for the update and that is good news!

    but there is an issue where a slight horizontal line only appears and disappears in the area where there is movement.

    That sounds like a synchronization or performance overload issue to me.
    Theoretically, the vx_image should be completely updated before it is sent to display to avoid these horizontal lines.

    Hi ,
    Could you please advise?

  • Hello
    The camera frame is normally 30fps and there seems to be no delay on the display.


    What I'm curious about is this: if I convert the capture output vx_object_array to a vx_image, pass it as the node input, receive the output as a vx_image, and hand it over to the display, will the contents of all four capture buffers be delivered to the display?

  • Hi Kim,

    If you look at the sample code below, you can see that we take the input image of a node from the capture array and create a separate image for the node output.
    That should work well for your node as well.

    https://git.ti.com/cgit/processor-sdk/vision_apps/tree/apps/utilities/app_heterogeneous/app_heterogeneous.c?h=main#n539

                raw_viss[ci] = (tivx_raw_image) vxGetObjectArrayItem(capture_frames[ci][0], 0);
                yuv_viss[ci] = vxCreateImage(context, width[ci], height[ci], VX_DF_IMAGE_NV12);
     

    A video artifact like this is typically a software issue in your node implementation, e.g., cache coherence.