This thread has been locked.



Part Number: TDA4VM

TDA4VMXEVM: VPAC VISS Node

Hi,

My questions are regarding the single camera VPAC applications.

 - What does VPAC stand for?

 - What does VISS stand for?

 - Can I take the camera sensor's raw data, pass it through VISS, and receive it in RGB format?

 - Which input formats does the VISS node support: 2MP camera, 8MP camera?

 - Is there a performance impact if I'm using a 2MP, 8MP, or another camera?

 - If VISS cannot output RGB, which kernel module could I use for the image conversion: vx_image_preprocessing_target on the C66?

  • VPAC = Vision Pre-processing Accelerator

    VISS = Vision Imaging Subsystem

    VISS supports RGB output; however, in the Processor SDK software the default output format is YUV420 semi-planar.

    You can process any number of 2MP or 8MP cameras as long as the total throughput fits within the compute and I/O budget. For more details, please refer to the TRM and/or your TI representative.

  • Hi Mayank,

    Thanks much for the clarification. 

    Are either VPAC or VISS tied to a particular processor, and if so, which ones?

    How or where can I modify the app_single_cam_main sample code to support RGB?

    The graph seems to be configured only for raw_output and yuv_cam.

    Please advise on any other example, reference or links that I can use for this purpose as well.

  • Please contact your TI rep to know which TI parts have VPAC. VISS is a submodule inside VPAC.

    Currently the SW supports only YUV output. RGB output will require changes to the application and the VISS OpenVX node.

  • Hi Mayank,

    Could you please advise who our allocated TI rep is? I'm not aware of one.

  • Mufaddal,

    Which organization do you work for? Some of the information is protected by NDA, so please make sure that an NDA exists between TI and your organization.

  • Hi Mayank,

    I'm reaching out to my organization to identify the contact. In the meantime, I was wondering if you're aware of what the alternative image output format is.

    The VPAC single-cam app example has a YUV output from VISS and a raw output. Could you please advise what the "raw output" format is, and can it be configured? Greatly appreciate your help.

  • The RAW format is whatever the sensor outputs; usually it is Bayer.

  • Hi Mayank,

    It looks like this is where in OpenVX the camera output is being configured:

    /*!
     * \brief The configuration data structure used by the TIVX_KERNEL_VISS kernel.
     *
     * \details The table below lists the output format supported on the
     * different outputs for the corresponding mux value in this structure.
     * Note that the mux value is used only if the corresponding output
     * image is set to non-null; otherwise the mux value is ignored.
     *
     * |val| mux_output0    | mux_output1   | mux_output2 | mux_output3  | mux_output4    |
     * |:-:|:--------------:|:-------------:|:-----------:|:------------:|:--------------:|
     * | 0 | Y12(P12/U16)   | UV12(P12/U16) | Y8(U8)      | UV8(P12/U16) | Invalid        |
     * | 1 | Invalid        | Invalid       | R8(U8)      | G8(U8)       | B8(U8)         |
     * | 2 | Invalid        | C1(P12/U16)   | C2(P12/U16) | C3(P12/U16)  | C4(P12/U16)    |
     * | 3 | Value(P12/U16) | Invalid       | Value(U8)   | Invalid      | Saturation(U8) |
     * | 4 | NV12_P12       | Invalid       | NV12        | Invalid      | Invalid        |
     * | 5 | Invalid        | Invalid       | YUV422      | Invalid      | Invalid        |
     *
     * \ingroup group_vision_function_vpac_viss
     */

    Can you please confirm which mux setting will generate RGB output? Thanks much in advance.

  • Hello Mufaddal,

    R, G and B can be generated in separate planes by setting:

    mux_output2 = 1

    mux_output3 = 1

    mux_output4 = 1
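    A minimal sketch of how these settings go into the VISS configuration structure, assuming the `tivx_vpac_viss_params_t` field names from the TIOVX VISS kernel header (verify against your SDK version):

    ```c
    tivx_vpac_viss_params_t viss_params;

    tivx_vpac_viss_params_init(&viss_params);
    viss_params.mux_output2 = 1;  /* R8 (U8) plane */
    viss_params.mux_output3 = 1;  /* G8 (U8) plane */
    viss_params.mux_output4 = 1;  /* B8 (U8) plane */
    ```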

    Regards,

    Mayank

  • Hi Mayank,

    Thanks for the clarification. I did already try this, but how do I take the 3 mux outputs and generate a single image to be used by other blocks such as LDC, MSC, or display? The problem is that LDC, MSC, and display each expect a single input, while I have the 3 color planes as outputs. Please advise.

  • RGB888 planar format is not supported by LDC or MSC. You will need to process each plane independently.

  • Hi Mayank,

    Could you please help identify where in the kernel module I can access each plane to combine the bytes together? My understanding is that if I interleave R_byte, G_byte and B_byte it will give me one image in memory to access, but it'll become triple the size. Does this approach work? Would downscaling this image help?

  • I am not sure what your application requirement is, so I cannot comment on whether downscaling is the right option.

    You can modify TIOVX kernels for LDC, MSC etc. to handle these images.

  • Could you please identify how to modify these kernels to generate interleaved RGB or interleaved BGR outputs? The issue is how to combine the three generated outputs into one.

    VISS (3) -> LDC (1) -> MSC (1)

    Separately, is there an API manual for the different kernel modules that are available as part of the SDK, so I can understand the system's capabilities?

  • [Graph] Creates a VPAC_LDC Node. Valid input/output format combinations:

        Input Format           | Output Format
        -----------------------|------------------------
        VX_DF_IMAGE_U8         | VX_DF_IMAGE_U8
        ^                      | TIVX_DF_IMAGE_P12
        VX_DF_IMAGE_U16        | VX_DF_IMAGE_U16
        TIVX_DF_IMAGE_P12      | TIVX_DF_IMAGE_P12
        ^                      | VX_DF_IMAGE_U8
        VX_DF_IMAGE_NV12       | VX_DF_IMAGE_NV12
        ^                      | TIVX_DF_IMAGE_NV12_P12
        TIVX_DF_IMAGE_NV12_P12 | TIVX_DF_IMAGE_NV12_P12
        ^                      | VX_DF_IMAGE_NV12
        VX_DF_IMAGE_UYVY       | VX_DF_IMAGE_UYVY
        ^                      | VX_DF_IMAGE_YUYV
        ^                      | VX_DF_IMAGE_NV12
        ^                      | TIVX_DF_IMAGE_NV12_P12

    This is the configuration of the LDC node, and it is not clear to me which output or combination here gives me RGB.

  • >>The issue is how can I combine three generated outputs into one.

    >>VISS (3) -> LDC (1) -> MSC (1)

    This is not possible. This is why we recommend using YUV format.

  • Hi Mayank,

    I'm trying to use the C66 pre_proc_kernel to convert the YUV420 image to RGB format.  However the kernel generates the output in tensor format.  Is there a way to convert vx_tensor to vx_image?

    OpenVX has a function called vxCreateImageObjectArrayFromTensor (documentation link: www.khronos.org/.../group__group__object__tensor.html), but I can't find it in the tiovx library.

    Alternatively do you know of a way to generate an output in vx_image format directly from the kernel.  Please advise.

     

  • Hello,

    If you are trying to convert YUV420 to RGB using the C66x, I would recommend using the vxColorConvertNode from OpenVX referenced below.  You can provide the YUV420 image as an image and the RGB image as the output.

    Regards,

    Lucas

  • Hi Lucas,

    Thanks for the info.  It was really helpful.  I've modified the single_cam_capture example to include this conversion after the LDC step.

    With LDC enabled, the output stops after 5 frames:

    frm_loop_cnt 0... frm_loop_cnt 1... frm_loop_cnt 2... frm_loop_cnt 3... frm_loop_cnt 4...

    With LDC disabled, I get the same result, which also stops after 5 frames:

    frm_loop_cnt 0... frm_loop_cnt 1... frm_loop_cnt 2... frm_loop_cnt 3... frm_loop_cnt 4...

    Is there something in addition to the context and graph that I need to configure, such as vxSetNodeTarget or tivxSetNodeParameterNumBufByIndex, and if so, what should the configuration be?

    Tried the following without any change:

    vxSetNodeTarget(obj->rgb_node, VX_TARGET_STRING, TIVX_TARGET_HOST);

    tivxSetNodeParameterNumBufByIndex(obj->rgb_node, 8u, NUM_BUFS);

    Please advise.

    Thanks in advance.

  • Hello,

    Yes, you will need to add the following for this node.

    vxSetNodeTarget(obj->rgb_node, VX_TARGET_STRING, TIVX_TARGET_DSP1); // This line is setting it to the C66 which is TIVX_TARGET_DSP1

    tivxSetNodeParameterNumBufByIndex(obj->rgb_node, 1u, NUM_BUFS); // This line sets the number of buffers for the node output. The second argument is the index of the node parameter.

    By the way, what does the node configuration of the updated graph look like? I just want to be sure that the node configuration will not cause you any issues either.

    Regards,

    Lucas

  • Hi Lucas,

    Thanks for clarifying this.

    Does the node configuration refer to: 

    /* input @ node index 1, becomes graph parameter 0 */

    add_graph_parameter_by_node_index(obj->graph, obj->capture_node, 1);

    Alternatively, is it possible for me to combine 3 different U8 images into a single image using image addition? Would the generated output be a U8 image, or can it be configured to be RGB? Please advise. Thanks in advance.

  • Hello,

    The node configuration I am referring to is the structure of the nodes within the graph.  For instance, the original single cam graph is below.  What does this list of nodes look like now?

    Capture -> VISS -> LDC -> MSC -> Display

    Regarding the combining of different nodes, am I correct in understanding that you want to combine 3 U8 images to produce an RGB image?  If so, I would recommend the vxChannelCombineNode as linked to below.  You can provide each of the U8 images as inputs and an RGB image as the output.

    Regards,

    Lucas

  • Hi Lucas,

    The nodes look like the following:

    Capture -> VISS -> LDC -> YUV_RGB -> Display

    or

    Capture -> VISS -> YUV_RGB -> MSC -> Display

    Does this look correct?

    Regarding the node combination I set the following:

    tivxSetNodeParameterNumBufByIndex(obj->viss_rgb_node, 1u, NUM_BUFS);

    and set the target to DSP. Does anything else need to be configured? Essentially it should be:

    Capture -> VISS -> RGB -> Display 

    Looks like I'm missing something.

  • Hello,

    I am a bit confused by the response.  Which set of nodes are you using in your app?  I ask because the below configuration is invalid because the MSC node does not accept an RGB input.  Please confirm if you are using MSC in your app.

    Capture -> VISS -> YUV_RGB -> MSC -> Display

    Otherwise, the parameters you set are correct.  What is the result of running this app?  Could you send the full log?

    Regards,

    Lucas

  • Hi Lucas,

    Yes, you're correct.  With LDC disabled the scaling happens first and then the conversion.

  • Hi Lucas,

    For image combination I see the following logs. I did have to bump the number of buffers to 4, but I don't see any output displayed. Can you please have a look and advise what I can try? I'm guessing that RGB can be displayed on the screen directly by the DIS module without requiring any modification. Is there also a way to save the RGB images for viewing?

    VISS Set Reference done

    AEWB Set Reference done

    vxVISS_RGB!

    VX_TYPE_IMAGE: image_109, 1936 x 1096, 1 plane(s), 6383104 B, VX_DF_IMAGE_RGB VX_COLOR_SPACE_BT709 VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs

    VX_TYPE_GRAPH: graph_85, 4 nodes, VX_GRAPH_STATE_UNVERIFIED, avg perf 0.000000s, 0 parameters, 1 refs

    VX_TYPE_NODE: VISS_RGB_Conversion, 5 params, avg perf 0.000000s, VX_SUCCESS, 1 refs

    vxVISS_RGB DONE!

    Scaler is set to enabled

    Disabling scaler for RGB

    VISS_RGB is enabled

    Display Set Target done

    vxSetGraphScheduleConfig done

    Scaler is disabled

    app_create_graph exiting

    app_create_graph done

    Best,

    Mufaddal

  • Hello Mufaddal,

    Is this the full log that you get on the console?  If not could you send the full log that you see?

    Yes, the display node can accept RGB images as inputs, so that should not be an issue.  You could try increasing the buffer depth as well as the pipeline depth being set by the tivxSetGraphPipelineDepth API to see if this fixes it.

    And yes, you can write the output to a file using an API such as the below.  

    However, it is suggested that you make the parameter that you are trying to write to a file as a graph parameter.  This allows you to access each buffer individually in order to write it to a file.  I have linked to documentation explaining this and pipelining in TI OpenVX more generally below.
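    A sketch of that dequeue/re-enqueue loop (the `graph` handle and parameter index 0 are placeholders; the APIs are from the OpenVX graph pipelining extension):

    ```c
    vx_reference ref;
    vx_uint32 num_refs;

    /* Wait for a completed output buffer at graph parameter 0. */
    vxGraphParameterDequeueDoneRef(graph, 0, &ref, 1, &num_refs);

    /* ... write the dequeued vx_image to a file here ... */

    /* Return the buffer to the graph so execution can continue. */
    vxGraphParameterEnqueueReadyRef(graph, 0, &ref, 1);
    ```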

    Regards,

    Lucas

  • Hi Lucas,

    This is not the full log, but it covers the pre- and post-VISS create-graph output, where there aren't any errors. After this it essentially shows the interface dialog but doesn't display anything or provide any error logs. If you prefer, I can capture the full logs. I will go through the suggestions above too. Thanks much for your help.

  • Yes, please send over the full log as well.  It may not show anything but I just want to review in case it does.

    Regards,

    Lucas

  • Hi Lucas,

    Please find the full logs below and attached.

    I also bumped tivxSetNodeParameterNumBufByIndex(obj->viss_rgb_node, 4u, MAX_NUM_BUF) and tivxSetGraphPipelineDepth(obj->graph, MAX_NUM_BUF) to the maximum of 8. It still doesn't produce any output. Please advise.

    APP: Init ... !!! MEM: Init ... !!! MEM: Initialized DMA HEAP (fd=4) !!! MEM: Init ... Done !!! IPC: Init ... !!! IPC: Init ... Done !!! REMOTE_SERVICE: Init ... !!! REMOTE_SERVICE: Init ... Done !!! APP: Init ... Done !!! 32.640608 s: VX_ZONE_INIT:Enabled 32.640633 s: VX_ZONE_ERROR:Enabled 32.640638 s: VX_ZONE_WARNING:Enabled 32.641307 s: VX_ZONE_INIT:[tivxInit:71] Initialization Done !!! 32.641473 s: VX_ZONE_INIT:[tivxHostInit:48] Initialization Done for HOST !!! Invalid token [ ] sensor_selection = [0] Invalid token [ ] ldc_enable = [0] Invalid token [ ] num_frames_to_run = [1000000000] Invalid token [ ] is_interactive = [1] IttCtrl_registerHandler: command echo registered at location 0 IttCtrl_registerHandler: command iss_read_2a_params registered at location 1 IttCtrl_registerHandler: command iss_write_2a_params registered at location 2 IttCtrl_registerHandler: command iss_raw_save registered at location 3 IttCtrl_registerHandler: command iss_yuv_save registered at location 4 IttCtrl_registerHandler: command iss_read_sensor_reg registered at location 5 IttCtrl_registerHandler: command iss_write_sensor_reg registered at location 6 IttCtrl_registerHandler: command dev_ctrl registered at location 7 IttCtrl_registerHandler: command iss_send_dcc_file registered at location 8 tivxImagingLoadKernels done 32.643814 s: ISS: Enumerating sensors ... !!! NETWORK: Opened at IP Addr = 192.168.1.143, socket port=5000!!! 32.644218 s: ISS: Enumerating sensors ... found 0 : IMX390-UB953_D3 32.644225 s: ISS: Enumerating sensors ... found 1 : AR0233-UB953_MARS 32.644230 s: ISS: Enumerating sensors ... found 2 : AR0820-UB953_LI 32.644234 s: ISS: Enumerating sensors ... found 3 : UB9xxx_RAW12_TESTPATTERN 32.644238 s: ISS: Enumerating sensors ... found 4 : UB96x_UYVY_TESTPATTERN 32.644243 s: ISS: Enumerating sensors ... 
found 5 : GW_AR0233_UYVY 6 sensor(s) found Supported sensor list: a : IMX390-UB953_D3 b : AR0233-UB953_MARS c : AR0820-UB953_LI d : UB9xxx_RAW12_TESTPATTERN e : UB96x_UYVY_TESTPATTERN f : GW_AR0233_UYVY Select a sensor a[ 29.362432] Initializing XFRM netlink socket LDC Selection Yes(1)/No(0) LDC Selection Yes(1)/No(0) [ 31.842209] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. [ 31.858250] Bridge firewalling registered 0 Sensor selected : IMX390-UB953_D3 app_init done app_create_graph with RGB output Querying IMX390-UB953_D3 42.166283 s: ISS: Querying sensor [IMX390-UB953_D3] ... !!! 42.166590 s: ISS: Querying sensor [IMX390-UB953_D3] ... Done !!! WDR mode is supported Expsoure control is supported Gain control is supported CMS Usecase is supported obj->aewb_cfg.ae_mode = 0 obj->aewb_cfg.awb_mode = 0 Sensor DCC is enabled Sensor width = 1936 Sensor height = 1096 Sensor DCC ID = 390 Sensor Supported Features = 0x378 Sensor Enabled Features = 0x358 42.166628 s: ISS: Initializing sensor [IMX390-UB953_D3], doing IM_SENSOR_CMD_PWRON ... !!! 42.366913 s: ISS: Initializing sensor [IMX390-UB953_D3], doing IM_SENSOR_CMD_CONFIG ... !!! 45.119755 s: ISS: Initializing sensor [IMX390-UB953_D3] ... Done !!! Creating graph Initializing params for capture node Initializing params for capture node local_capture_config.numDataLanes = 4 local_capture_config.dataLanesMap[0] = 1 local_capture_config.dataLanesMap[1] = 2 local_capture_config.dataLanesMap[2] = 3 local_capture_config.dataLanesMap[3] = 4 capture_config = 0x0x6e6380 Creating capture node obj->capture_node = 0x0x6a0c80 Test data path is NULL. Defaulting to current folder reading test RAW image .//img_test.raw read_test_image_raw : Unable to open file .//img_test.raw VISS Set Reference done AEWB Set Reference done vxVISS_RGB! 
VX_TYPE_IMAGE: image_109, 1936 x 1096, 1 plane(s), 6383104 B, VX_DF_IMAGE_RGB VX_COLOR_SPACE_BT709 VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs VX_TYPE_GRAPH: graph_85, 4 nodes, VX_GRAPH_STATE_UNVERIFIED, avg perf 0.000000s, 0 parameters, 1 refs VX_TYPE_NODE: VISS_RGB_Conversion, 5 params, avg perf 0.000000s, VX_SUCCESS, 1 refs vxVISS_RGB DONE! Scaler is set to enabled Disabling scaler for RGB VISS_RGB is enabled Display Set Target done vxSetGraphScheduleConfig done Scaler is disabled app_create_graph exiting app_create_graph done ========================== Demo : Single Camera w/ 2A ========================== p: Print performance statistics s: Save Sensor RAW, VISS Output and H3A output images to File System e: Export performance statistics u: Update DCC from File System x: Exit Enter Choice: Unsupported command ========================== Demo : Single Camera w/ 2A ========================== p: Print performance statistics s: Save Sensor RAW, VISS Output and H3A output images to File System e: Export performance statistics u: Update DCC from File System x: Exit Enter Choice: 45.170429 s: ISS: Starting sensor [IMX390-UB953_D3] ... !!! 45.724751 s: ISS: Starting sensor [IMX390-UB953_D3] ... !!!RGB_output_logs.rtf

  • Hello,

    A few questions and requests:

    1. Did you run "source ./vision_apps_init.sh" prior to running the app?  If not, could you please do so and resend the logs?

    2. Could you press "p" to print additional logs and send these as well?

    3. Is it possible to share the changes you have made to the app?  And which SDK version are you using?

    Regards,

    Lucas

  • Hi Lucas,

    Here are the logs with the steps you'd requested.  The guts of the change to the code using SDK 7.1:

    obj->viss_rgb_node = vxChannelCombineNode(obj->graph, obj->y8_r8_c2, obj->uv8_g8_c3, obj->s8_b8_c4, NULL, obj->viss_rgb_out);

    The output image is set to VX_DF_IMAGE_RGB, the node parameter buffer count is set to 8, and the target is DSP1.

    Please have a look.

    1425.RGB_output_logs.rtf

  • Hello,

    Is it possible to send the full app over?  It is difficult to see if there are other issues just from the snippet.

    One potential issue could be how the buffer is being set. Initially, when I provided instructions for the values of the below API, you had indicated you would use the vxColorConvertNode. Now that you are using the vxChannelCombineNode, you will need to set the second argument differently. This value is the index of the node parameter whose buffers you are setting; in this case, the output is node parameter 4. I have listed the parameters with their node parameter indices below for reference.

    tivxSetNodeParameterNumBufByIndex(obj->viss_rgb_node, 4u, NUM_BUFS);

    Node parameter 0: obj->y8_r8_c2

    Node parameter 1: obj->uv8_g8_c3

    Node parameter 2: obj->s8_b8_c4

    Node parameter 3: NULL

    Node parameter 4: obj->viss_rgb_out

    Another thing to check is whether all of the parameters 0-2 are being created with the U8 format.

    Regards,

    Lucas

  • Hi Lucas,

    By the full app, do you mean just the .c file? I'd already changed the tivxSetNodeParameterNumBufByIndex index value to 4u, since it generated a VX_ZONE_ERROR. NUM_BUFS, however, is set to 8.

    Below are the image attributes for all the images after going through a channel combine:

    VX_TYPE_IMAGE: image_97, 1936 x 1096, 1 plane(s), 2174464 B, VX_DF_IMAGE_U8 VX_COLOR_SPACE_NONE VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs

    VX_TYPE_IMAGE: image_98, 1936 x 1096, 1 plane(s), 2174464 B, VX_DF_IMAGE_U8 VX_COLOR_SPACE_NONE VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs

    VX_TYPE_IMAGE: image_99, 1936 x 1096, 1 plane(s), 2174464 B, VX_DF_IMAGE_U8 VX_COLOR_SPACE_NONE VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs

    VX_TYPE_IMAGE: image_109, 1936 x 1096, 1 plane(s), 6383104 B, VX_DF_IMAGE_RGB VX_COLOR_SPACE_BT709 VX_CHANNEL_RANGE_FULL VX_MEMORY_TYPE_NONE, 1 refs

    Please have a look.

  • Hello,

    These values all look correct. 

    Have you also updated the VISS node output buffers to set multiple buffers at all the outputs?  In the original app, only the y8_r8_c2 output was being used and the output buffers were being set as per the line below.  Now that you are using more outputs (uv8_g8_c3, s8_b8_c4), you will need to update these parameters to use multiple buffers as well.

    tivxSetNodeParameterNumBufByIndex(obj->node_viss, 6u, obj->num_cap_buf);

    Regards,

    Lucas

  • Hi,

    That value is already set to 6 in the app.  The source file is attached for review:

    5684.app_single_cam_main.c

  • Hello,

    My point in my previous statement was that the following two lines need to be added in addition to the one that is already there.  Please let me know if this fixes it.  I will review the app for other possible issues as well.

            tivxSetNodeParameterNumBufByIndex(obj->node_viss, 7u, obj->num_cap_buf);
            tivxSetNodeParameterNumBufByIndex(obj->node_viss, 8u, obj->num_cap_buf);


    Regards,

    Lucas

  • Hi Lucas,

    By your enumeration, shouldn't it be parameter indices 7, 8 and 9? Although I tried 6, 7 and 8 as well without any result.

  • Hi Lucas,

    Did you have a chance to look at the app and see if anything else requires modifying?

    Separately, is there a way to generate an interleaved BGR output or convert between formats? Please advise.

  • Hi Lucas,

    It seems I can use vxuChannelCombine to generate RGB in reverse order as well: BGR. My question is whether this is interleaved, and if not, how can I make it so? The image type I specify would still be RGB, though? Is there any information available on what the immediate-mode conversion does, other than not requiring a node on a graph? Please advise.

  • Hello,

    Sorry for the delay, I was out of office last week.

    Regarding your initial issues with the application, there are a couple things I saw that are causing issues.

    The first is that the VISS node call duplicates the s8_b8_c4 output.  The second issue is a known bug in the VISS node which requires you to pass in the Y12 and UV12 outputs as well.  Therefore, you will need to uncomment the calls to create these images.  The final VISS node creation call is below.

            obj->node_viss = tivxVpacVissNode(
                                        obj->graph,
                                        obj->configuration,
                                        NULL,
                                        NULL,
                                        obj->raw, obj->y12, obj->uv12_c1,
                                        obj->y8_r8_c2, obj->uv8_g8_c3, obj->s8_b8_c4,
                                        obj->h3a_aew_af, NULL
                    );

    Regarding your other questions, yes, the RGB is interleaved.  However, I would not recommend using the vxu calls given that this will not provide optimal usage of the SoC.  I would recommend adding this to the OpenVX graph and using the vx version of the API.

    Regards,

    Lucas

  • Hi Lucas,

    It is good to have you back and thanks for the suggested changes.

    Now I do have an output, but it has a lot of static and is not recognizable. Could this have to do with the image size, which is 1936 x 1096? I'm passing the output from the combine node directly to the display.

    Separately, is there any documentation or reference on how to set up the TDA4x board to be accessible over the network? This could be statically or dynamically via a DHCP-enabled router.

    Lastly, I wanted to ask about the multi-camera application. I can't tell where in the app the graph and VISS node enable multiple inputs. Is this functionality abstracted, or is it accessible to us as well? Please advise.

  • Hello,

    Regarding the output issue, this particular combination of outputs has not been thoroughly tested and this could be a bug in the driver causing this issue.  Could you provide some background into your use case and why you need to generate RGB from VISS?  Depending on your use case, there could be alternatives to your current node configuration which would include more thoroughly tested outputs from VISS.

    Regarding your second question about network access, we have an NFS boot option as described in the documentation below.  Let me know if this solves your issue.  Otherwise, I would recommend opening a separate E2E thread for further questions on this topic to get support from experts on this.

    software-dl.ti.com/.../RUN_INSTRUCTIONS.html

    Regarding your final question, are you referring to the outputs of VISS produced via the mux parameters?  If so, then yes, this is abstracted within the VISS node itself.  The VISS node can be found at tiovx/kernels_j7/hwa/vpac_viss.  You are welcome to modify the source code.  However, it may require more involved knowledge of the TIOVX framework as well as the VISS driver.

    Regards,

    Lucas

  • Hi Lucas,

    The request to grab RGB directly from VISS is to prevent the 'downscaling' and loss of image data that occur as a result of conversion to YUV420. However, scaling is a concern here, since the MSC doesn't support RGB scaling. Could we perform the scaling on the A72 instead, using vx_image_scale?

    I'm more interested in the ability to scp to the board, but might have to create a separate ticket regarding this.

    Regarding the multiple inputs to VISS, I'm more interested in understanding how the sequence of camera inputs is recognized and displayed. Can they be connected in any order and size, with a 16Mb ceiling? Trying to understand this more fully, since the code isn't clear on how this is/would be handled. Please advise.

  • Hi Mufaddal,

    I am not familiar with vx_image_scale.  There is a scale image node referenced below which takes in U8 images and outputs U8 images.  This can be run on each of R, G and B.  This runs on both the MSC and the C66x DSP.

    www.khronos.org/.../group__group__vision__function__scale__image.html

    We also have the YUV420 to RGB vxColorConvertNode that runs on the C66X DSP in case this is helpful.

    Regarding the inputs to VISS, since you are referencing the single cam app, there is only the raw image input that comes in per graph execution.  By utilizing the pipelining extension of OpenVX, the framework maintains multiple instances of the OpenVX graph that enables each node of the graph to execute in parallel.  All of this is abstracted by the framework so it is not visible within the application.

    For more information on this topic, you can refer to the information below on pipelining in OpenVX.

    software-dl.ti.com/.../TIOVX_PIPELINING.html

    Regards,

    Lucas

  • Hi Lucas,

    This is helpful, and I'd like to clarify further. The MSC can generate multiple outputs with different configurations. Can it also be used to accept multiple inputs?

    For the inputs to VISS, I'm referring to both the case where there's a single raw image input and the case of multiple inputs. How does the handling at VISS change when we go from one input to more? Also, instead of pipelining, can I produce one complete frame at a time? Would I use vxProcessGraph or another way to accomplish this? Please advise.

  • Hi Mufaddal,

    In order to provide multiple inputs to MSC, you would need to use multiple nodes to accomplish this.

    Regarding the multiple inputs to VISS, I would recommend referencing the multi-cam app listed below. In this app, there is a run-time option of selecting the number of cameras that are used. The downstream nodes from the capture node then use this parameter to create replicas of the individual nodes by using the vxReplicateNode feature from OpenVX. This feature allows you to create N replicas of a given node and cycle through all of the parameters as defined in an object array or pyramid data object. I'd recommend reviewing the OpenVX 1.1 spec if you need more clarity on these data objects or APIs.

    Regarding the handling of VISS when cycling through different inputs, we currently support only homogeneous cameras in VISS, but will add support for heterogeneous cameras in a future release.  You are welcome to use vxProcessGraph if you do just want to execute a single frame at a time.  However, you can access each individual frame even with pipelining by making the output a graph parameter and dequeueing this buffer to the application.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/vision_apps/docs/user_guide/group_apps_basic_demos_app_multi_cam.html
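    A sketch of the replication call (the flag layout and parameter count here are illustrative, not the multi-cam app's actual wiring):

    ```c
    /* vx_true_e marks parameters that cycle through per-camera object
     * array elements; vx_false_e marks parameters shared by all replicas. */
    vx_bool replicate[] = { vx_false_e,   /* shared configuration */
                            vx_true_e,    /* per-camera raw input */
                            vx_true_e };  /* per-camera output image */

    vxReplicateNode(graph, viss_node, replicate, 3);
    ```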

    Regards,

    Lucas

  • Hi Lucas,

    Thanks for the information; I'm still going through it. One thing that has come up and that we'd find useful is the hardware timestamp of the image capture. Is there a way to get this, and in what format is the clock information available (real time or relative)? Please advise.

  • Hi Mufaddal,

    To get the timestamp, you can use the API vxQueryReference with the captured image as the input reference and with the enum attribute TIVX_REFERENCE_TIMESTAMP which returns the relative timestamp in ms.
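    A sketch of that query (`capture_image` is a placeholder for the dequeued capture output reference):

    ```c
    vx_uint64 timestamp = 0;

    vxQueryReference((vx_reference)capture_image,
                     TIVX_REFERENCE_TIMESTAMP,
                     &timestamp, sizeof(timestamp));
    ```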

    Regards,

    Lucas