This thread has been locked.

TDA4VM: tda4vm

Part Number: TDA4VM

Hi Lucas,

I'd looked at the tutorial, but a review helped.  

After a vxGraphDequeue I have a pointer to the vx_image buffer (out_capture_frames) available.  Can I then use tivxMapRawImagePatch or tivxCopyRawImagePatch (what's the difference?) to copy the buffer and provide it to another application?  I did not see where this buffer is unmapped in the single-cam example.

Also, for vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num, (vx_reference*)&out_capture_frames1, &num_refs_capture); is this the output of the camera (VISS) without any processing, or the input to the display?

The multi-cam use case pipelines one frame at a time, so is it using an array to handle multiple camera inputs within one frame cycle?

Please advise.

  • Hello Mufaddal,

    Yes, once you do a dequeue, you can access the pointer using one of the APIs you listed.  The difference is that tivxCopyRawImagePatch does a copy of the data to a pointer, whereas tivxMapRawImagePatch provides access to the pointer.  I'm not sure specifically about the single camera app, but if you do a tivxMapRawImagePatch, you will need to do a corresponding tivxUnmapRawImagePatch.

    Please note, if you are using this to copy to another application that isn't using OpenVX, you may need to use an API such as the one listed below for passing the reference across OpenVX boundaries.  There is an example linked to below as well that shows how this is done.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/group__group__tivx__ext__host.html#ga06d77653ac12b4d30d62379a07629734

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/vision_apps/docs/user_guide/group_apps_basic_demos_app_arm_fd_exchange.html

    The call "vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num, (vx_reference*)&out_capture_frames1, &num_refs_capture);" is the output from the capture node, which is provided to the VISS node.

    And yes, the multi-cam is making use of the vxReplicateNode API, which allows multiple instances of the node to operate on an object array of references.

    https://www.khronos.org/registry/OpenVX/specs/1.1/html/d7/d61/group__group__node.html#ga873198f07077015c0f60a66399b1cdf9
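    A minimal sketch of the replicate mechanism (names are placeholders, and vxColorConvertNode merely stands in for whatever node is being replicated):

```c
/* Sketch: create the node from element 0 of each object array, then
 * replicate it across all elements (in_arr/out_arr are placeholders). */
vx_image in0  = (vx_image)vxGetObjectArrayItem(in_arr, 0);
vx_image out0 = (vx_image)vxGetObjectArrayItem(out_arr, 0);
vx_node  node = vxColorConvertNode(graph, in0, out0);

/* Both parameters vary per replicated instance. */
vx_bool replicate[] = { vx_true_e, vx_true_e };
vxReplicateNode(graph, node, replicate, 2);

vxReleaseImage(&in0);
vxReleaseImage(&out0);
```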

    Regards,

    Lucas

  • Hi Lucas,

    Does the pointer returned by tivxCopyRawImagePatch allocate memory as well, or does that need to be done elsewhere?  Would malloc work for memory allocation, or is there an OpenVX call for this?  For a YUV420, NV12, or RGB image format, does the image type matter, since I intend to typecast the image to the tivx_raw_image format?

    The fd exchange example opens a socket and exchanges files, which might or might not be fully applicable, since we want to allocate application memory and then pass a pointer to it for accessing.  Also, I meant to reference this dequeue, since it returns the image sent to the display:

    vxGraphParameterDequeueDoneRef(obj->graph1, (vx_reference*)&test_image1, &num_refs_capture);

    Please advise.

  • Hello Mufaddal,

    Yes, you will need to allocate memory if using the tivxCopyRawImagePatch.  It is recommended to use the tiovx API for allocating memory, shown below:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/group__group__tivx__mem.html#gafbd611b7cd7c93c92cb158dcfda08e5e
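    A minimal sketch of that allocation pattern (the NV12-style size calculation and heap region are illustrative only; check the linked tivx_mem documentation for the exact options):

```c
/* Sketch: allocate/free a destination buffer via the TIOVX memory API.
 * Size shown is for an illustrative NV12 frame; heap region may differ. */
vx_uint32 size = width * height * 3u / 2u;
void *dst = tivxMemAlloc(size, TIVX_MEM_EXTERNAL);
if (dst != NULL)
{
    /* ... copy into dst (e.g. via tivxCopyRawImagePatch) and hand off ... */
    tivxMemFree(dst, size, TIVX_MEM_EXTERNAL);
}
```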

    I'm not sure that I follow the question about image type; can you elaborate?  The tivx_raw_image is not used for YUV420, RGB, etc.  It is only for raw sensor images, such as Bayer.  More details can be found below.  If you want RGB, YUV, etc., you can use vx_image.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/group__group__raw__image.html#gab1318a66f6be4fbfc204013348e51dd8

    In that case, the fd exchange may not be applicable; I just wanted to pass it along in case it is helpful.

    And regarding the dequeue, yes, that is the output from the VISS, although the single cam optionally has LDC enabled, so if it is enabled then it would be the output from LDC.  However, I believe by default, that parameter is not enabled because this is used for test purposes to validate the output with test images.

    Regards,

    Lucas

  • Hi Lucas,

    Is there an equivalent of tivxCopyRawImagePatch that I can use for copying vx_images, since I cannot use vxCreateImage (it'll create a virtual reference?).

    Does this alternative work for RGB/YUV images: use vxMapImagePatch to get a pointer to the buffer, allocate memory using tivxMemBufferAlloc, memcpy the buffer into the allocated memory, and release the memory after usage?

    For the output, if this is the output of the LDC: vxGraphParameterDequeueDoneRef(obj->graph, 1, (vx_reference*)&test_image, 1, &num_refs_capture); then how can I access the output of the MSC, which gets sent to the display module (DSS)?  Please advise.

  • Hi Mufaddal,

    The vx_images and tivx_raw_images are two separate types of OpenVX data objects.  You won't be able to copy from one to another.  The VISS node is what processes the tivx_raw_image buffer to vx_image buffer.

    Do you mind providing more clarity on exactly how you are using the buffers that you are copying from the tivx_raw_image?  The reason why I had originally pointed to the tivxReferenceImportHandle/tivxReferenceExportHandle API's is because it allows for zero copy between processes.  You don't necessarily have to exchange it via a socket, it is just one way of demonstrating the functionality.

    Just to check, this is the single cam application correct?  If so, sorry, I forgot that there was an MSC node there as well, so this would be the output of MSC in all cases.  The sequence is VISS -> LDC -> MSC, with the LDC being optional.

    Regards,

    Lucas

  • Hi Lucas,

    The usage is not to copy the raw image but to make the vx_image that is generated as an output of the MSC available to an external non-OpenVX app.  Hence the question on what's the best way to do that.  Can I provide a pointer to a buffer from vxMapImagePatch, or do I need to allocate memory?  Any suggestions you have here would be helpful.

    If the tivxReferenceExportHandle would be helpful here, could you expand on how I could apply it given my usage?  Please advise.

  • Hello Mufaddal,

    If you use vxMapImagePatch, you will not need to allocate memory.  This API allows you to get access to the vx_image data buffer pointer directly which has already been allocated by OpenVX.  Therefore, you do not need to copy.

    This pointer can then be used by the application.  The reason why I referenced the application for exchanging FDs is because it shows how to use tivxMemTranslateVirtAddr and tivxMemTranslateFd to exchange the file descriptor across processes, since the virtual address will be different in the other process.

    The tivxReferenceExportHandle is similar to the vxMapImagePatch in that it provides access to the data buffer, only in a generic way for any OpenVX data object.
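    As a sketch of that access pattern (standard OpenVX 1.1 API; `img`, `width`, and `height` are placeholders):

```c
/* Sketch: direct read access to plane 0 of a vx_image, with the
 * matching unmap. img, width, height are placeholders. */
vx_rectangle_t rect = { 0u, 0u, width, height };
vx_map_id map_id;
vx_imagepatch_addressing_t addr;
void *ptr = NULL;

vx_status status = vxMapImagePatch(img, &rect, 0u, &map_id, &addr, &ptr,
                                   VX_READ_ONLY, VX_MEMORY_TYPE_HOST,
                                   VX_NOGAP_X);
if (status == VX_SUCCESS)
{
    /* ptr is only valid until the unmap; rows are addr.stride_y bytes
     * apart, which can be larger than the row's payload. */
    vxUnmapImagePatch(img, map_id);
}
```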

    Regards,

    Lucas

  • Hi Lucas,

    By using vxMapImagePatch I get access to the data buffer, but I cannot hold the processor until the other application finishes.  Even implementing the pipelining as one_frame_pipeline, I'll get the buffer, but I also have to release it with vxUnmapImagePatch.  To get around this I can either copy the image into an external buffer or only unmap on the next call if the buffer pointer is not null.  Is there a preferred approach?  Also, is there a way to find out how fast the image application is running, or what the rate of camera input is?  Is this directly tied to the fps of the camera?

    For vxGraphParameterEnqueueReadyRef(obj->graph, 0, (vx_reference*)&(obj->cap_frames[buf_id]), 1);  in addition to enqueueing more input frames, can I change the number of references to enqueue?  It is set to 1 for both the single- and multi-cam examples.  Will the graph pipeline behave like a queue for the single-cam case if I increase the number of references to enqueue?

    Please advise.

  • Another thing I needed to address: if I use the MSC to generate more than one output, how does that affect the pipeline, and the entire graph for that matter?

  • Hello Mufaddal,

    A few points here:

    • For performance reasons, as you would imagine, it is preferable not to copy the buffer.  One way to potentially avoid this is by adding more buffers at the output of MSC so that the previous node can operate on the other buffers while the downstream application is processing.  However, if the downstream application does not achieve the same frame rate, you may need to implement another mechanism, such as a copy.  You can perhaps open a new ticket on the design of this in order to get input from some of our engineers that have more experience with the design of these applications.
    • Regarding the performance queries, you can press "p" while the application is running and get a description of how long each particular node is taking and the loading on each of the nodes.  And yes, it will be tied to the frame rate of the camera.  Also, if the downstream nodes are operating at a slower rate than the frame rate, it will starve the capture node of buffers and result in frame drops.
    • Yes, you can use the vxGraphParameterEnqueueReadyRef to enqueue more than one buffer at a time and it is intended to work as a queue.  There is not much testing of this feature though, so it is possible you may face issues with this.
    • Regarding using multiple outputs of the MSC, you can add the separate output and create it as a graph parameter if you need to pass it to a downstream application

    Regards,

    Lucas

    • Regarding using multiple outputs of the MSC, you can add the separate output and create it as a graph parameter if you need to pass it to a downstream application

    The part with multiple inputs or outputs is not clear to me.  I have one input and I generate an additional output.  This becomes an additional node on the graph.  Do I need to add this node to the graph pipeline (it'll have the same input) so I can access the image for consumption?

    Extending this to the multi-cam case, is this how I can access the individual camera inputs instead of the mosaic output generated in the sample code?  Please advise.

  • Hello Mufaddal,

    Could you provide a block diagram of the nodes that you want beyond the existing multi cam graph configuration?  Sorry, the question is not clear to me.

    Regards,

    Lucas

  • Hi Lucas,

    It would be something like this (currently for single cam, will send a separate request for multi-cam):

    VISS -> MSC -> ScaledImage (currently this is added to the graph pipeline)

                    |-----> CroppedImage (How do I add this and does this need to get added to the graph pipeline?)

    Also, I'm not generating a mosaic output from this, so I can display either image or not send the image to the display (where in the app do I disable this?).

  • Hello Mufaddal,

    You can add a separate MSC node to the graph which takes in the output from VISS as its input and create a new "CroppedImage" which will serve as the output.

    I'm not sure exactly what you mean by adding to the graph pipeline.  You don't necessarily have to add it as a graph parameter if that's what you mean.  You can simply modify the number of buffers that have been set via the tivxSetNodeParameterNumBufByIndex call if you need additional buffers.

    Regarding what to do with the output, you can create this as a graph parameter and dequeue it to process in the graph, or you could display the cropped image.  It will depend on your use case what you would like to do.

    Regards,

    Lucas

  • Hi Lucas, tivxVpacMscScaleNode allows me to generate 5 outputs.  Could I just use this to generate an additional output?

    Then I'd update the buffers here to 2: tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 1u, obj->num_cap_buf);

    Do I need to do something like this to add it to the graph:

        if(obj->cropped_image)
        {
            add_graph_parameter_by_node_index(obj->graph, obj->whichNode??, 0);  /* <-- create a new node? */

            /* set graph schedule config such that graph parameter @ index 0 is enqueuable */
            graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
            graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = 1;
            graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference*)&(obj->cropped_image);

            graph_parameter_num++;
        }

    Is graph_parameter_num still 0, or does it need to increment?

    Lastly, I have the display_image which I need, but how do I prevent it from physically being sent to the display?  Please advise.

  • Separately, in the graph pipeline, which step of the pipeline do these images reference:

    vxGraphParameterDequeueDoneRef(obj->graph, graph_parameter_num, (vx_reference*)&out_capture_frames, 1, &num_refs_capture);

    vxGraphParameterDequeueDoneRef(obj->graph, 1, (vx_reference*)&test_image, 1, &num_refs_capture); 

    out_capture_frames --> image after MSC or after VISS?

    test_image --> same as the image sent to the display?

    Please advise.

  • Hi Mufaddal,

    You are correct, you can use the tivxVpacMscScaleNode to generate the new output.  Sorry, I didn't realize the app was already using this node, I thought it was using the single output node.  You can simply add a new output to this node.

    Regarding how to add everything to the graph, let me provide a few definitions and explanations:

    When adding multiple buffers to a parameter, you can either make it a graph parameter or add buffers to the parameter using the API tivxSetNodeParameterNumBufByIndex.

    If you make the parameter a graph parameter, you will need to add the calls as described in your earlier comment using the function "add_graph_parameter_by_node_index".  This allows you to create all of the buffers as an array in the application, and requires you to enqueue and dequeue these directly from the application.

    If you use the tivxSetNodeParameterNumBufByIndex API, you will only create a single data object in the graph, then the framework will create additional buffers as specified by the tivxSetNodeParameterNumBufByIndex API.  These buffers do not need to be enqueued/dequeued by the application because this is done by the framework.

    I would recommend making this a graph parameter since this will be the output parameter that needs to be accessed by the application.  So yes, you will need to add the add_graph_parameter_by_node_index calls and enqueue/dequeue from the graph.

    The "test_image" you referenced earlier is normally just added when in test_mode, but if you do not want the image to be sent to the display, you can remove the constraint of enabling this only when obj->test_mode = 1.  Then you will need to add a new graph parameter for the new output image from the MSC.

    In order to prevent the image from going to the display, you can simply exclude the display node altogether from the application, by commenting out the calls to tivxDisplayNode, the creation of the display_param_obj object, and the corresponding release of these objects.

    For more information on pipelining, please reference the below:

    http://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/TIOVX_PIPELINING.html#NODE_GRAPH_PARAMETER_DEFINITION
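    Put together, exposing a node output as a graph parameter follows the same pattern already used in the app (a sketch; the node parameter index, queue depth NUM_BUFS, and buffer name are illustrative):

```c
/* Sketch: expose an MSC output (node parameter index 1 here) as a graph
 * parameter; index, queue depth, and buffer names are illustrative. */
add_graph_parameter_by_node_index(obj->graph, obj->scalerNode, 1);

graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = NUM_BUFS;
graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference *)&obj->scaler_out_img[0];
graph_parameter_num++;

/* After vxSetGraphScheduleConfig() and vxVerifyGraph(), the application
 * enqueues and dequeues this parameter explicitly. */
```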

    Regards,

    Lucas

  • Hi Lucas,

    Thanks much for clarifying.  I have gone through the pipelining tutorial, and although this might be clear, something is not adding up.  I realize that the tivxVpacMscScaleNode can be added to the graph, and that would enable me to capture the same image as the application is using, i.e. obj->displayImage.  However, by getting multiple outputs from tivxVpacMscScaleNode, can I just add each output to the graph separately to access it from my application?  Please advise.  I tried this but got an error about an invalid image reference.  Am I missing anything?

  • Hi Lucas,

    I'm able to add the MSC output image to the graph pipeline by doing the following:

    This was already set: tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 1u, obj->num_cap_buf);  <-- the app doesn't work if I remove this line.

    The graph node and parameters needed to be added: add_graph_parameter_by_node_index(obj->graph, obj->scalerNode, 1);  <-- however, the final value had to be set to 1 even though it is set to 0 for the test display image.  Could you please help clarify?  Also, how would I expand this to get multiple scaler outputs from the graph pipeline?

    Separately, how do I configure the sensor value ISS_SENSOR_FEATURE_CFG_UC1 to support different frame rates?  What else can be configured this way?  Please advise.

  • Hello Mufaddal,

    The third argument of add_graph_parameter_by_node_index is the "node parameter".  Node parameter refers to the index of the argument of the node that was passed as the second argument to this function.  However, this index excludes the graph argument to the node, since this is a required argument for every node.  Therefore, in the sample node shown below, index "1" is the parameter "output":

    tivxSampleNode (graph, input, output)

    Therefore, for the tivxVpacMscScaleNode, you are correct: if you want to take the first listed output of this node, you will use the add_graph_parameter_by_node_index(obj->graph, obj->scalerNode, 1);  call you mentioned.

    If you want to take the next output of the node, you will use "add_graph_parameter_by_node_index(obj->graph, obj->scalerNode, 2);".

    Regarding the frame rate change, I would recommend opening a new thread for this.  Generally, you will just need to update the ISS_SENSOR_FEATURE_CFG_UC1 as you mentioned, but I am not an expert on this.

    Regards,

    Lucas

  • Hi Lucas,

    I did manage to add the 2nd output to the MSC and then also added it to the graph in the following manner:

    obj->scalerNode = tivxVpacMscScaleNode(obj->graph, ldc_in_image, obj->scaler_out_img, obj->scaler_out_img1, NULL, NULL, NULL);

    status = tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 2u, obj->num_cap_buf);

    And then added it to the graph as the 2nd node:

    add_graph_parameter_by_node_index(obj->graph, obj->scalerNode, 2);

    However, my output has a lot of noise on it, and the cropped image is essentially all noise.  The colors are also not correct.  Is there anything in the buffers or choice of target processor that would improve the image?

    Cropping is also not centered, even though I'm using the following coordinates:

            // mid-point centered crop
            rect.start_x = (width - crop_width)/2;
            rect.start_y = (height - crop_height)/2;
            rect.end_x   = (width + crop_width)/2;
            rect.end_y   = (height + crop_height)/2;

    Please advise.

  • Hello Mufaddal,

    Could you try without the cropping and see if you are still having issues?

    Regards,

    Lucas

  • Hi Lucas,

    Turning crop off does not change anything on the main output.

    For crop I also tried using the following sample code, available in the msc_test sample, yet I do not see the image cropped.  It only scales to the output image dimensions.

            vx_user_data_object crop_obj;
            tivx_vpac_msc_crop_params_t crop;
            //vx_reference refs[2] = {0};
    
            vx_uint32 width, height;
            vx_uint32 crop_width = 512, crop_height = 256;
            
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_WIDTH, &width, sizeof(vx_uint32));
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_HEIGHT, &height, sizeof(vx_uint32));
    
    
            crop_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_crop_params_t",
                sizeof(tivx_vpac_msc_crop_params_t), NULL);
    
            // mid-point centered crop
            crop.crop_start_x = width / 4; //(width - crop_width)/2;
            crop.crop_start_y = height / 4; //(height - crop_height)/2;
            crop.crop_width   = width / 2; //(width + crop_width)/2;
            crop.crop_height   = height / 2; //(height + crop_height)/2;
    
            /* Center crop of input 
            crop.crop_start_x = w / 4;
            crop.crop_start_y = h / 4;
            crop.crop_width   = w / 2;
            crop.crop_height  = h / 2; */
    
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(crop_obj, 0, sizeof(tivx_vpac_msc_crop_params_t), &crop, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
    
            refs[1] = (vx_reference)crop_obj;
            
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, 1u, TIVX_VPAC_MSC_CMD_SET_CROP_PARAMS, refs, 2u);
            }
    
            vxReleaseUserDataObject(&crop_obj);

    Please advise.

  • Hello Mufaddal,

    Let me check with our team to see if there is any limitation in outputting different sizes with this node.  In the meantime, you can try making this two separate nodes assigned to MSC1 and MSC2; that should work.

    Regards,

    Lucas

  • Hello Mufaddal,

    I confirmed that two outputs with different resolutions should work.  Could you send me the code that you are using without the cropping so that I can review?

    Regards,

    Lucas

  • Hi Lucas,

    The two outputs with different resolutions do work.  It is the image crop that isn't working.  Please see the code below, with the first image scaled and the second expected to be cropped:

        vx_reference refs[2] = {0};
        if(vx_true_e == obj->scaler_enable)
        {
            tivx_vpac_msc_coefficients_t sc_coeffs;
            //vx_reference refs[1];
    
            printf("Scaler is enabled\n");
    
            tivx_vpac_msc_coefficients_params_init(&sc_coeffs, VX_INTERPOLATION_BILINEAR);
    
            obj->sc_coeff_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_coefficients_t", sizeof(tivx_vpac_msc_coefficients_t), NULL);
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(obj->sc_coeff_obj, 0, sizeof(tivx_vpac_msc_coefficients_t), &sc_coeffs, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
            refs[0] = (vx_reference)obj->sc_coeff_obj;
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, 0, TIVX_VPAC_MSC_CMD_SET_COEFF, refs, 2u);
            }
        }
        else
        {
            printf("Scaler is disabled\n");
        }
    
        if(obj->output_crop == 1)
        {
            vx_user_data_object crop_obj;
            tivx_vpac_msc_crop_params_t crop;
            //vx_reference refs[2] = {0};
    
            vx_uint32 width, height;
            vx_uint32 crop_width = 512, crop_height = 256;
            
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_WIDTH, &width, sizeof(vx_uint32));
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_HEIGHT, &height, sizeof(vx_uint32));
    
    
            crop_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_crop_params_t",
                sizeof(tivx_vpac_msc_crop_params_t), NULL);
    
            // mid-point centered crop
            crop.crop_start_x = width / 4;
            crop.crop_start_y = height / 4;
            crop.crop_width   = width / 2;
            crop.crop_height  = height / 2;
    
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(crop_obj, 0, sizeof(tivx_vpac_msc_crop_params_t), &crop, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
    
            refs[1] = (vx_reference)crop_obj;
            
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, 2u, TIVX_VPAC_MSC_CMD_SET_CROP_PARAMS, refs, 2u);
            }
    
            vxReleaseUserDataObject(&crop_obj);
        }

  • Hello Mufaddal,

    Sorry, I didn't realize it was working without cropping.  So to confirm: without cropping, no noise is seen and the outputs look correct and at the expected sizes?

    And with cropping, both outputs are incorrect, with the first one having incorrect colors and the second being all noise?

    From reviewing the thread, there are a couple of things that are incorrect.  First, you should not need the line below if you are adding this as a graph parameter:

    status = tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 2u, obj->num_cap_buf);

    Second, the second argument of tivxNodeSendCommand for the cropping should be 0, not 2.  This argument only applies if the node is a replicated node, which this one is not.

    Could you let me know the results of these updates?  If you are facing issues, please send the updated file and I will see if there is anything else going wrong.

    Regards,

    Lucas

  • Hi Lucas,

    I've made some progress.  I'm able to get image output 1 to crop and the second to scale.  The colors on the second scaled output are inverted.  The tivxNodeSendCommand 2nd argument had to be set to 1, though.

    How do I change the code to get the second MSC output to crop, and both images to have the correct colors?  See the code below:

        vx_reference refs[2] = {0};
        if(vx_true_e == obj->scaler_enable)
        {
            tivx_vpac_msc_coefficients_t sc_coeffs;
            //vx_reference refs[1];
    
            printf("Scaler is enabled\n");
    
            tivx_vpac_msc_coefficients_params_init(&sc_coeffs, VX_INTERPOLATION_BILINEAR);
    
            obj->sc_coeff_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_coefficients_t", sizeof(tivx_vpac_msc_coefficients_t), NULL);
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(obj->sc_coeff_obj, 0, sizeof(tivx_vpac_msc_coefficients_t), &sc_coeffs, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
            refs[0] = (vx_reference)obj->sc_coeff_obj;
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, TIVX_CONTROL_CMD_SEND_TO_ALL_REPLICATED_NODES, TIVX_VPAC_MSC_CMD_SET_COEFF, refs, 2u);
            }
        }
        else
        {
            printf("Scaler is disabled\n");
        }
    
        if(obj->output_crop == 1)
        {
            vx_user_data_object crop_obj;
            tivx_vpac_msc_crop_params_t crop;
            //vx_reference refs[2] = {0};
    
            vx_uint32 width, height;
            vx_uint32 crop_width = 512, crop_height = 256;
            
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_WIDTH, &width, sizeof(vx_uint32));
            vxQueryImage(ldc_in_image, (vx_enum)VX_IMAGE_HEIGHT, &height, sizeof(vx_uint32));
    
    
            crop_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_crop_params_t",
                sizeof(tivx_vpac_msc_crop_params_t), NULL);
    
            // mid-point centered crop
            crop.crop_start_x = width / 4;
            crop.crop_start_y = height / 4;
            crop.crop_width   = width / 2;
            crop.crop_height  = height / 2;
    
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(crop_obj, 0, sizeof(tivx_vpac_msc_crop_params_t), &crop, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
    
            refs[0] = (vx_reference)crop_obj;
            
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, 1u, TIVX_VPAC_MSC_CMD_SET_CROP_PARAMS, refs, 2u);
            }
    
            vxReleaseUserDataObject(&crop_obj);
        }

  • Hi Lucas,

    For the multi-cam example, where are the multiple inputs received for display?  I do not see a reference to VPAC or tivxVpacMscScaleNode in the code.  Please advise.

  • Hello Mufaddal,

    Sorry, I missed your earlier response.  Regarding it: is the scalerNode a replicated node?  I thought there were two separate nodes, one for the original image and another for the cropped image.  Can you send the code where you are creating both nodes?

    And regarding your second question, the multi cam app is not using the tivxVpacMscScaleNode.  It is sending the LDC output to a mosaic node called tivxImgMosaicNode which takes the object array of LDC outputs and scales them into a mosaic of the camera inputs.

    Regards,

    Lucas

  • Hi Lucas,

    Is there a way to use the tivxVpacMscScaleNode instead, since I'd want to create multiple outputs from the same image in addition to cropping some of the input images as well?  As before, I'd like to use these to create image captures that would be used by the application.  Can this be done using the tivxImgMosaicNode as well?  Do you have any further references on both features that would help in understanding their usage?  Please advise.

  • Hi,

    In app_create_graph, this is where the LDC output is allocated to the mosaic node:

        vx_int32 idx = 0;
        if(obj->sensorObj.enable_ldc == 1)
        {
            vx_object_array ldc_in_arr;
            if(1 == obj->enable_viss)
            {
                ldc_in_arr = obj->vissObj.output_arr;
            }
            else
            {
                ldc_in_arr = obj->captureObj.raw_image_arr[0];
            }
            if (status == VX_SUCCESS)
            {
                status = app_create_graph_ldc(obj->graph, &obj->ldcObj, ldc_in_arr);
                APP_PRINTF("LDC graph done!\n");
            }
            obj->imgMosaicObj.input_arr[idx++] = obj->ldcObj.output_arr;
        }
        else
        {
            vx_object_array mosaic_in_arr;
            if(1 == obj->enable_viss)
            {
                mosaic_in_arr = obj->vissObj.output_arr;
            }
            else
            {
                mosaic_in_arr = obj->captureObj.raw_image_arr[0];
            }
    
            obj->imgMosaicObj.input_arr[idx++] = mosaic_in_arr;
        }

    Could you please highlight how this is being done for multiple cameras (say 2), because all I see here is one camera output being allocated.  Please advise.

  • Hi,

    I've seen the following: multiple inputs combined into a single output (mosaic), and a single input generating multiple outputs (mscScaleNode).  How can I take multiple inputs and generate multiple outputs that the application (or a data buffer) can access, since using replicateNode does not give access to the image data?  Please advise.

  • Hi Mufaddal,

    Could you please send a block diagram of what you would like to do in your use case?  This will help me in making my suggestions.

    Regarding your question about the mosaic node inputs, it is taking in an object array "obj->imgMosaicObj.input_arr" which contains an array of all the camera inputs that are used in the mosaic.  Since the multiple camera images are contained within an object array, this object array output from either VISS or LDC (depending on the "enable_ldc" option) is provided to the mosaic node.
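    For completeness, individual elements of such an object array can be retrieved with the standard OpenVX call (a sketch; `output_arr` is a placeholder for e.g. obj->vissObj.output_arr):

```c
/* Sketch: pull per-camera images out of an object array; each item
 * reference must be released after use. output_arr is a placeholder. */
vx_image cam0 = (vx_image)vxGetObjectArrayItem(output_arr, 0);
vx_image cam1 = (vx_image)vxGetObjectArrayItem(output_arr, 1);

/* ... use cam0/cam1, e.g. as inputs to separate MSC nodes ... */

vxReleaseImage(&cam0);
vxReleaseImage(&cam1);
```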

    Regards,

    Lucas

  • Hi Lucas,

    We'd like to read the output of the VISS separately and do the following for say two cameras: 

    Not this: (VISS x 2) --> (MSC mosaic x 1)

    This:

    (VISS x 1) --> (MSC scale x1)

    (VISS x 1) --> (MSC scale x1)

    How could I go about doing this?

  • Hi Mufaddal,

    Where will you be passing the output from MSC to?  The reason why I am asking this first is to understand what kind of other use case modifications would be required.  For instance, in this use case, the single mosaic output is sent to the display.  However, if you have multiple MSC outputs, you can only send a single image to the display.  Therefore, you could have one of them sent to the display, or you could remove display altogether.

    Please let me know what you are trying to do with the MSC output.

    Regards,

    Lucas

  • Hi Lucas,

    The multiple MSC outputs will be sent to another application across the OpenVX boundary.  The display output, and therefore the mosaic output that feeds the display, is not needed.

  • Hi Mufaddal,

    Thanks for the explanation. 

    In that case, you will need to do the following steps:

    1. Remove the app_create_graph_img_mosaic and app_create_graph_display and other associated img_mosaic and display (app_init, app_deinit, app_delete etc.) calls from the app. 

    2. You can then call the app_create_graph_scaler call in their place and make the "input_img_arr" the obj->vissObj.output_arr in addition to calling the init, deinit, and delete calls for the scaler module.

    3. You can then make the output of this node as a graph parameter so that you can dequeue this from the graph and pass across the OpenVX boundary.
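    A minimal sketch of step 3, assuming the scaler node's output image sits at node-parameter index 1 and that the graph uses the OpenVX pipelining queue APIs; the index and names here are assumptions, so check the node's actual parameter list:

```c
#include <TI/tivx.h>

/* Sketch: register the scaler output image as a graph parameter so it
 * can be enqueued/dequeued at runtime (step 3). */
void add_scaler_output_param(vx_graph graph, vx_node scaler_node,
                             vx_image scaler_out)
{
    /* Node-parameter index 1 is an assumption for the output image */
    vx_parameter param = vxGetParameterByIndex(scaler_node, 1);
    vxAddParameterToGraph(graph, param);
    vxReleaseParameter(&param);

    /* Tell the graph this parameter is queued at runtime */
    vx_graph_parameter_queue_params_t q;
    q.graph_parameter_index = 0; /* index of the parameter added above */
    q.refs_list_size        = 1;
    q.refs_list             = (vx_reference *)&scaler_out;

    vxSetGraphScheduleConfig(graph,
                             VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO, 1, &q);
}
```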

    Regards,

    Lucas

  • Hi Lucas,

    Thanks for clarifying.  After I've assigned input_img_arr = obj->vissObj.output_arr, how do I access each individual output?

    Is it like so?

    output1.arr[0] for camera 1

    output1.arr[1] for camera 2

    output1.arr[2] for camera 3

    Here I'm using the app_create_graph_scaler call from the OD module.  Please advise.

  • Hi Mufaddal,

    You should be able to reference the app_create_graph_scaler call from the vision_apps/modules/src/app_scaler_module.c.  I'm not sure if these are the same or not, but I would reference the vision_apps/modules call.

    Regarding accessing the VISS output elements, do you need to access these or just the scaler output elements?

    Regards,

    Lucas

  • Hi Lucas,

    Under vision_apps there isn't a folder called modules.  I'm using SDK 7.1.  

    Regarding the outputs, I need to access the scaler output elements, not the inputs.  Does this still apply?

    output1.arr[0] for camera 1

    output1.arr[1] for camera 2

    output1.arr[2] for camera 3

    Please also let me know which of the app_scaler_module I should be using.

  • Hello Mufaddal,

    The modules directory was introduced in SDK 7.2, but the one you referenced should be fine as well.

    Regarding accessing the outputs, the way this will work is that the output image of the scaler node will need to be the object that is registered as a graph parameter to be dequeued.  Therefore, in order to access every camera image, you will need to obtain the object array.  The way you can do this is by using the tivxGetReferenceParent API to get the object array.  From the object array, you can use the vxGetObjectArrayItem API to get each element of the array.

    Regards,

    Lucas

    Thanks Lucas.  Is there any reference code for this?  It seems like the dequeue from the graph would be a

    vx_object_array capture_output_array;

    vx_reference output_array = tivxGetReferenceParent(capture_output_array);

    vx_image cameraX = vxGetObjectArrayItem(output_array[X]);

    Please advise if I'm somewhat on the right track here.

  • Hello Mufaddal,

    The dequeued object will be a vx_image.  The way to access the individual images will be similar to the below:

    vx_image dequeued_image;

    vx_object_array camera_array = (vx_object_array)tivxGetReferenceParent((vx_reference)dequeued_image);

    vx_image cameraX = (vx_image)vxGetObjectArrayItem(camera_array, X);

    Regards,

    Lucas

  • Hi Lucas,

    Thanks for the clarification.  Can I use this approach to customize individual camera crop ROIs and sizes as well?  Please advise.

  • Hi Mufaddal,

    Just so that I understand, are you wanting to add an additional output from the scaler node here as well that would be a resized and cropped image?

    If so, then yes, you will need to add a graph parameter at this output and dequeue the image.  Then you can access the individual camera images in the same way.

    By the way, is the cropping now working for you?  I recall this was not working earlier, but perhaps I am mistaken.

    Regards,

    Lucas

  • Hi Lucas,

    Yes, this could be applicable to a second image, but my question is more about the main output.  Will all the camera outputs have the same configuration, or can different camera images be configured differently?

    And yes, single-camera cropping is certainly working.  Please advise.

  • Hello Mufaddal,

    Which configuration parameters are you wanting to change across cameras?

    Regards,

    Lucas

    This would be the crop ROI and image size.  Each camera image frame would then be customized to its own crop ROI and final size.