
TDA4VM: Need Suggestions for Four Camera Application use-case designing

Part Number: TDA4VM

Dear Sir,

I am working on a four-camera application use case and am looking for your suggestions on designing it.

Use case:

I have a single buffer holding the real-time synced output of four cameras, Buffer[cam0, cam1, cam2, cam3]. From this buffer I have to access each individual camera's output by selecting the camera ID, and feed it to the OD pipeline for each of the four cameras separately.

What would be your suggestion for creating a pipeline with a single camera buffer holding four camera outputs?

Second, what is your suggestion for keeping the single buffer for all four cameras while developing four graphs, with each individual camera's output mapped to its own graph?

Thanks in advance.

Regards,

Vyom Mishra

  • Thanks for posting your question to the TI processors E2E forum. The expert assigned to this thread is out of office due to public holidays in India today. Please expect a response by the end of this week.

  • Hi Vyom,

    Please refer to the multi-camera example in the SDK. It outputs exactly what you describe, a synced output buffer from all cameras, which you can then pass on to your OD pipeline.

    Regards,

    Brijesh 

  • Dear Sir,

    Thanks for the reply!

    I have a few queries related to my understanding; please correct me if my understanding is wrong.

    In the above snippet from "app_run_graph_for_one_frame_pipeline":

    Can you please help me understand how the API below enqueues/dequeues in the single-camera and multi-camera cases?

    status = vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, (vx_reference*)&obj->captureObj.raw_image_arr[obj->enqueueCnt], 1);

    Where do we get control of the camera buffer input for the different cameras?

    If any references are available to understand, please do share.

    Thanks and Regards,

    Vyom Mishra

  • Hi Vyom,

    I did not get your question. What we are dequeuing is an object array, and that object array includes an image from every camera.
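    For illustration, a minimal sketch of that pattern (names such as `input_arr`, `enqueue_cnt` and `graph_parameter_index` are placeholders, not SDK code). The unit that is enqueued and dequeued is one vx_object_array, so the calls look the same whether it holds one image or four:

    ```c
    /* Sketch of the pipelined enqueue/dequeue pattern. The graph parameter
     * is the capture output object array: one vx_object_array carries one
     * synchronized frame from every enabled camera. */
    vx_object_array done_arr;
    vx_uint32 num_refs;

    /* hand an empty object array to the graph; the capture node fills it
     * with one frame per camera */
    vxGraphParameterEnqueueReadyRef(graph, graph_parameter_index,
            (vx_reference *)&input_arr[enqueue_cnt], 1);

    /* get back a processed object array once the graph is done with it */
    vxGraphParameterDequeueDoneRef(graph, graph_parameter_index,
            (vx_reference *)&done_arr, 1, &num_refs);
    ```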

    Regards,

    Brijesh

  • Hello Sir,

    Thanks for the response. I now understand how the multi-cam application works and how it handles buffers for the single-sensor and multi-sensor cases.

    In the file tivx_obj_desc.h, a vx_image can be referenced using the structure _tivx_obj_desc_image:

     typedef struct _tivx_obj_desc_image
     {
         tivx_obj_desc_t base;
         tivx_shared_mem_ptr_t mem_ptr[TIVX_IMAGE_MAX_PLANES];
         volatile uint32_t width;
         volatile uint32_t height;
         volatile uint32_t format;
         volatile uint32_t planes;
         volatile uint32_t color_space;
         volatile uint32_t color_range;
         volatile uint32_t mem_size[TIVX_IMAGE_MAX_PLANES];
         volatile uint32_t rsv[1];
         volatile uint32_t uniform_image_pixel_value;
         volatile uint32_t create_type;
         vx_imagepatch_addressing_t imagepatch_addr[TIVX_IMAGE_MAX_PLANES];
         vx_rectangle_t valid_roi;
     } tivx_obj_desc_image_t;

     It has a tivx_shared_mem_ptr_t from which to get the host_ptr for a memcpy.

    But for copying data to a vx_object_array, which structure should be used? As per my understanding, the structure below will be referenced (correct me if I am wrong), but it has no tivx_shared_mem_ptr_t from which to get a host_ptr for a memcpy:

     typedef struct _tivx_obj_desc_object_array
     {
         tivx_obj_desc_t base;
         volatile vx_enum item_type;
         volatile uint32_t num_items;
         volatile uint16_t obj_desc_id[TIVX_OBJECT_ARRAY_MAX_ITEMS];
     
     } tivx_obj_desc_object_array_t;

    Could you please suggest a way or an API to copy the vx_image buffer[4] to the vx_object_array?

    I found a way in which the filling of the vx_object_array happens via scalar_module.c. Can you please comment on whether that is the correct approach given real-time performance expectations for a 4-camera application, or suggest a better way?

    Thanks and Regards,
    Vyom Mishra

  • Hi Vyom,

    But why do you want to copy images into an object array?

    The ideal way to copy images into an object array is to get the image at an index, map it into the application, and then copy the contents. This can be done for all the images in the object array.

    You can use the vxGetObjectArrayItem API to get an item from the object array, and you can refer to the sample implementation in ti-processor-sdk-rtos-j721e-evm-08_06_00_12\vision_apps\modules\src\app_scaler_module.c.
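    As an illustration of that map-and-copy approach, here is a sketch (not SDK code; it assumes a single-plane image format for brevity, while NV12 would repeat the same loop for plane 1):

    ```c
    /* Sketch: copy one externally produced vx_image into item `i` of a
     * vx_object_array by mapping both images into application space. */
    static vx_status copy_image_to_array_item(vx_image src,
                                              vx_object_array dst_arr,
                                              vx_uint32 i)
    {
        vx_image dst = (vx_image)vxGetObjectArrayItem(dst_arr, i);
        vx_uint32 width = 0, height = 0;
        vxQueryImage(src, VX_IMAGE_WIDTH,  &width,  sizeof(width));
        vxQueryImage(src, VX_IMAGE_HEIGHT, &height, sizeof(height));

        vx_rectangle_t rect = { 0, 0, width, height };
        vx_map_id src_map, dst_map;
        vx_imagepatch_addressing_t src_addr, dst_addr;
        void *src_ptr = NULL, *dst_ptr = NULL;

        vx_status status = vxMapImagePatch(src, &rect, 0, &src_map, &src_addr,
                &src_ptr, VX_READ_ONLY, VX_MEMORY_TYPE_HOST, VX_NOGAP_X);
        if (status == VX_SUCCESS)
        {
            status = vxMapImagePatch(dst, &rect, 0, &dst_map, &dst_addr,
                    &dst_ptr, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST, VX_NOGAP_X);
            if (status == VX_SUCCESS)
            {
                /* copy row by row; the two images may have different strides */
                for (vx_uint32 y = 0; y < src_addr.dim_y; y++)
                {
                    memcpy((vx_uint8 *)dst_ptr + y * dst_addr.stride_y,
                           (vx_uint8 *)src_ptr + y * src_addr.stride_y,
                           src_addr.dim_x * src_addr.stride_x);
                }
                vxUnmapImagePatch(dst, dst_map);
            }
            vxUnmapImagePatch(src, src_map);
        }
        vxReleaseImage(&dst); /* drop the ref from vxGetObjectArrayItem */
        return status;
    }
    ```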

    Regards,

    Brijesh

  • Dear Sir,

    Thanks for the reply.

    Sir, I am getting the camera frames of four cameras in vx_image buffer[4] (the capture application is running separately in the background). I have to create a 4-camera application for the segmentation model using the same buffer available to me.

    Note: As the vx_image buffer[4] is shared among many applications, I have to use it as it is for my application.

    As the available data from the cameras is in the vx_image buffer, I need to map it to an object array so that the application runs smoothly with multi-camera input data in the same pipeline.

    I would like to know the way to copy the vx_image buffer[4] to the vx_object_array buffer.

    FYR: In the below snippet, I am copying vx_image to vx_image. Similarly, I need your help to copy the vx_image to a vx_object_array.

    Thanks and Regards,

    Vyom Mishra

  • But why do you store the images in vx_image image[4]? I mean, why don't you use a vx_object_array for capturing the images? This is what is used in the capture node. Once you use an object array for the images, you can pass it on to other nodes and select the image to be processed in each node. It would be easier to use an object array than to unnecessarily spend CPU cycles copying the images.
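    For reference, creating such an object array from an exemplar image is short; the resolution and format below are placeholders, not values from this thread:

    ```c
    /* Sketch: a 4-element object array of NV12 images created from an
     * exemplar; the array items can then be used instead of vx_image
     * buffer[4]. */
    vx_image exemplar = vxCreateImage(context, 1920, 1080, VX_DF_IMAGE_NV12);
    vx_object_array input_arr = vxCreateObjectArray(context,
            (vx_reference)exemplar, 4);
    vxReleaseImage(&exemplar); /* the array holds its own image objects */
    ```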

    Regards,

    Brijesh

  • Dear Sir,

    Thanks for the patience and reply!

    I understand your concern, and your suggestions are absolutely correct, but I am only getting the vx_image buffer[4] from another module due to some design restrictions, as I am not the only consumer of the camera output.

    So I am in a situation where I do not operate the capture and sensor nodes; I directly get/access the captured frames as a vx_image buffer[4].

    So, I am looking for your help to copy the vx_image buffer[4] to a vx_object_array.

    Thanks and Regards,

    Vyom Mishra

  • But even in this case you are getting vx_image image[4], which is nothing but an array of images, and that can easily be converted into a vx_object_array, which is also an array of images. I would suggest changing this.

    Regarding copying, as I recommended, you need to copy each image one by one: get the reference to an image by calling vxGetObjectArrayItem, map its buffer into the application space, and then copy the image.

    Regards,

    Brijesh

  • Dear sir,

    Thanks for the suggestions.

    I would like to request you to help me with suggestion 1 for converting vx_image to vx_object_array. Any reference to this will be helpful.

    I am already trying suggestion 2, but I am facing build warnings.

    Thanks and Regards,

    Vyom Mishra 

  • Hi,

    What is the build warning? Also, you should not keep such a large variable as "vx_uint32 medium[1812480];" as a local variable.
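    To illustrate the point about large locals (a generic C sketch, not SDK code): an array of 1812480 vx_uint32 is roughly 7 MB, which can easily overflow a task stack. Such working buffers belong on the heap (or in SDK-managed shared memory):

    ```c
    #include <stdlib.h>

    /* Allocate the working buffer on the heap instead of declaring
     * "vx_uint32 medium[1812480];" as a ~7 MB local variable. */
    unsigned int *alloc_medium_buffer(size_t count)
    {
        /* calloc also zero-initializes, avoiding garbage pixel data */
        return (unsigned int *)calloc(count, sizeof(unsigned int));
    }
    ```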

    Regards,

    Brijesh

  • Dear Sir,

    This is my code, but I am getting pink output while dumping the output from the object array.

    Can you please check and let me know your comment on this?

    One more query I have, regarding the below snippet:

    In the above code, input_images[] is referenced from the object array input.arr[q] using vxGetObjectArrayItem. So if I update input_images[] (vx_image type), will the change be reflected at input.arr[q] as well?

    Thanks and Regards,

    Vyom Mishra

  • Hi,

    I don't understand the below from your code. Why are you getting the host pointer and copying data from it? Instead, can you map this buffer into the application space and then copy the data into the array?

    from = app_utils_get_image_descriptor(arr);

    memcpy((uint64_t *)medium_Y,(void *)from->mem_ptr[0].host_ptr,from->mem_size[0]);

    memcpy((uint64_t *)medium_UV,(void *)from->mem_ptr[1].host_ptr,from->mem_size[1]);

    In the above code, input_images[] is referenced from the object array input.arr[q] using vxGetObjectArrayItem. So if I update input_images[] (vx_image type), will the change be reflected at input.arr[q] as well?

    Yes, it would be, but only for index 0.

    Regards,

    Brijesh

  • Dear Sir,

    Thanks for the patience and response.

    I am new to TIOVX and in the early stage of learning.

    Why are you getting a host pointer copying data from it? 

    While researching, I found the structure struct _tivx_obj_desc_image which can hold the actual data of the vx_image type, so I used the same.

    Instead, can you also map this buffer in the application space and then try copying data into the array? 

    So, I am requesting your help to understand this, with example code if possible.

    Yes, it would be, but only for the index0.

    So, is it possible to do this for all the image objects, from index 0 to N?

    Thanks and Regards,

    Vyom Mishra

  • Hi,

    While researching, I found the structure struct _tivx_obj_desc_image which can hold the actual data of the vx_image type, so I used the same.

    But this structure should not be accessed directly.

    So, I am requesting you to help me know about this with example code

    You are already doing this for the images from the object array. You need to do the same for the "from" image as well.

    So, is it possible to do this for whole image objects from index 0 to N.

    Yes.

    Please refer to the OpenVX specs and the documentation on TIOVX for specific questions:

    The OpenVX™ Specification (khronos.org)

    TIOVX User Guide: Overview

    Regards,

    Brijesh


  • Hi,

    You need to map both images, the input as well as the output, and then you can copy from the input image to the output image.

    Regards,

    Brijesh

  • Dear Sir,

    Thanks for the reply!

    I am planning to pass the vx_image buffer[4] directly to the pipeline and make changes in scalar_module.c and the other node source code to handle this type of input object.

    I have a few gaps in my understanding of the multi-cam application; please answer the following for my understanding.

    Sample: app_multi_cam

    1. In app_ldc_module.c, under the function definition "app_create_graph_ldc", the vx_image is referenced for index 0 only:

    vx_image input_img = (vx_image)vxGetObjectArrayItem(input_arr, 0);

    So when we run a single camera, it will consider index 0 (cam 0). But if we enable more than one camera, how is that handled?

    Thanks and Regards,

    Vyom Mishra

  • Because the scaler node just requires a reference input image to understand its parameters, one input image from the object array is sufficient. But this image is also used to tell the framework the number of input buffers in the object array; because this node is going to be replicated, index 0 is required.

    If you are not replicating the node, any one of the images is sufficient.
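    For illustration, the replication pattern looks roughly like this. It is a sketch using the standard vxScaleImageNode (the TI modules do the same with their own nodes); `input_arr` and `output_arr` are placeholder object arrays:

    ```c
    /* Sketch: create the node with item 0 of each object array, then
     * replicate it so one instance runs per array item (per camera). */
    vx_image in0  = (vx_image)vxGetObjectArrayItem(input_arr, 0);
    vx_image out0 = (vx_image)vxGetObjectArrayItem(output_arr, 0);

    vx_node node = vxScaleImageNode(graph, in0, out0,
            VX_INTERPOLATION_BILINEAR);

    /* replicate the image parameters (they come from object arrays);
     * the interpolation-type parameter is shared, not replicated */
    vx_bool replicate[] = { vx_true_e, vx_true_e, vx_false_e };
    vxReplicateNode(graph, node, replicate, 3);

    vxReleaseImage(&in0);
    vxReleaseImage(&out0);
    ```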

  • Dear Sir,

    Thanks for the detailed answer.

     but this also is used to tell the framework the number of input buffers in the object array because this node is going to be replicated, index 0 is required.

     Is the replication of the node for multiple instances of the graph, or for multiple inputs (cameras) to the graph?

    In my case, I am pushing the vx_image buffer[4] instead of a vx_object_array, so is passing the reference buffer[0] of vx_image type to the scaler node enough for a 4-camera application?

    How do I inform the application context/graph whether I am creating an application for 1 camera or for multiple cameras, if I am not using a vx_object_array and the capture node is not part of the graph?

    My application pipeline is 

    Camera dump (not part of the graph) producing 30 fps input data for four cameras (vx_image buffer[4]) -> Scaler Node -> Pre-processing Node -> TIDL Node -> Post-processing.

    Thanks and Regards,

    Vyom Mishra

  • In my case, I am pushing the vx_image buffer[4] instead of a vx_object_array, so is passing the reference buffer[0] of vx_image type to the scaler node enough for a 4-camera application?

    In this case, you need to create the scaler node 4 times, because you cannot replicate a node if there is no object array as input, assuming you want the scaler to be used for all 4 cameras.

  • Dear Sir,

    Thanks for the response, it is quite helpful for fast understanding.

    In continuation of your response about creating four scaler nodes, does that mean I need to create 4 nodes for each subsequent module (TIDL, post-processing, etc.)?

    If I create four separate nodes, each for a separate input, will it be possible to get synchronized data from all the post-processing nodes for the mosaic?

    Let me know your suggestion on this kind of design, where replication of nodes is disabled, given the real-time performance expectations.

    Thanks and Regards,

    Vyom Mishra

  • No, you may not get the data in sync, because 4 independent node instances are running; another reason to make use of an object array.

    Regards,

    Brijesh

  • Dear Sir,

    The issue is resolved; I am able to copy the vx_image buffer to the vx_object_array for four cameras.

    Thanks for your kind co-operation and support.

    Regards,

    Vyom Mishra

  • Dear Sir,

    I have one more query and need your suggestions on it.

    In the multi-cam app, "capture_input_image" is a vx_object_array, used under "if(obj->pipeline >= 0)" in the function "app_run_graph_for_one_frame_pipeline":

    /* Enqueue input - start execution */

    status = vxGraphParameterEnqueueReadyRef(obj->graph, obj->ldcObj.graph_parameter_index, (vx_reference*)&capture_input_image, 1);

    In the AVP4 application, "raw_image" is of vx_reference type:

    vxGraphParameterEnqueueReadyRef(obj->graph, captureObj->graph_parameter_index, &raw_image, 1); 

    In AVP3, "scaler_input_image" is of vx_image type:

    status = vxGraphParameterEnqueueReadyRef(obj->graph, scalerObj->graph_parameter_index, (vx_reference*)&scaler_input_image, 1);

    In my case, where I do not have a camera node, the filling of the input data for the 4 cameras happens in "app_run_graph_for_one_frame_pipeline" by mapping vx_image buffer[4] to a vx_object_array.

    Which reference should I consider for the case "if(obj->pipeline >= 0)"?

    Thanks and Regards,

    Vyom Mishra

  • Since you don't have the input image coming from a camera, you can use AVP3 as a reference.
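    Under that approach, the enqueue could look like the AVP3 one, but with the object array filled from vx_image buffer[4] as the graph parameter. This is a sketch: `input_arr` is a placeholder field, and `copy_image_to_array_item` stands for a hypothetical helper that maps both images and copies one into the other:

    ```c
    /* Sketch: fill the object array from the externally supplied images,
     * then enqueue it as the graph parameter (AVP3-style). */
    for (i = 0; i < 4; i++)
    {
        /* hypothetical helper: map src and dst images, copy contents */
        copy_image_to_array_item(buffer[i], obj->input_arr, i);
    }
    status = vxGraphParameterEnqueueReadyRef(obj->graph,
            obj->scalerObj.graph_parameter_index,
            (vx_reference *)&obj->input_arr, 1);
    ```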

    I think your original issue is resolved, so I am closing this ticket. Please start a new ticket for any new questions.

    Thank you. 

    Regards,

    Brijesh