This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

PROCESSOR-SDK-TDAX: Custom application in OpenVX using PyTIOVX

Part Number: PROCESSOR-SDK-TDAX

Hello all,

I have a few queries.

1) Please explain the input_target_ptr and input_desc parameters in the target files generated by the PyTIOVX tool.

2) Below is the block diagram for my application. I have divided it into 4 nodes.

For image reading and passing, should I create a separate node?

I am confused. Please provide some clarity on the same.

Regards,

Padmasree N.

  • Hello Padmasree,

    The input_target_ptr is simply the pointer to the input image data buffer.  The input_desc is an object descriptor of type tivx_obj_desc_image_t, a data structure that is passed between nodes in shared memory; it contains details about the image object, including the data buffer pointer.

    For reading and passing images to a parameter of the graph, you do not necessarily need to make a node.  You can simply map the image using the vxMapImagePatch API, access the pointer to the image data, and write into it prior to processing the graph.
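
    As a rough sketch (here "input", "graph", "width", "height", and the decoded file buffer "src_data" are placeholders from your application, and a U8 image is assumed):

        vx_rectangle_t rect = { 0, 0, width, height };
        vx_imagepatch_addressing_t addr;
        vx_map_id map_id;
        void *ptr;

        if (vxMapImagePatch(input, &rect, 0, &map_id, &addr, &ptr,
                (vx_enum)VX_WRITE_ONLY, (vx_enum)VX_MEMORY_TYPE_HOST, VX_NOGAP_X) == (vx_status)VX_SUCCESS)
        {
            for (vx_uint32 y = 0; y < addr.dim_y; y++)
            {
                /* copy one row; src_data is a hypothetical buffer holding the decoded image */
                memcpy((vx_uint8 *)ptr + y * addr.stride_y, src_data + y * width, width);
            }
            vxUnmapImagePatch(input, map_id);
        }

        vxProcessGraph(graph);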

    If you are using pipelining, you will need to make this input a graph parameter as discussed in the pipelining documentation here.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    I did not understand the mapping of image using the API.

    For example, say I have 3 (.jpg/.png) images; how do I loop over the images for processing as shown in the block diagram above?

    I have already referred to the tutorials in TIOVX/tutorials, but there are examples only for .bmp images.

    Also, when I looked through the TIOVX/utils/source folder, I found the tivx_utils_png_file_read() function. Will this work for my application?

    Please provide me some use-case examples of reading a .jpg/.png image and extracting the pixel values to store in an image object.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    We do not have any utility APIs for reading png or jpg images.  You could either create a custom function to perform the conversion or convert these images offline and use the included BMP APIs.

    The existing tivx_utils_png_file_read does not work properly, so you would need to use a different API for PNG.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    I will try reading .bmp images in my application and report any issues.

    Regards,

    Padmasree N.

  • Hello Lucas,

    I have converted my image from .png to .bmp format.

    In my application, I have a separate image file read node whose output goes as an input to the other node.

    Below is the line of code that reads images and stores the pixel values in the "image" parameter.

    cv::Mat image = cv::imread(imageFileNames[nImage], cv::IMREAD_GRAYSCALE);

    and the parameter "image" is used in further operations.

    Please advise me on

    1) How do I rewrite the same in OpenVX?

    2) How do I pass the input image pixel values from the image node to the other node in the graph?

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Sorry, I am confused.  You said that you have an "image file read node", then asked how to rewrite the same in OpenVX.  Could you clarify whether this has already been integrated into OpenVX, or whether you just have the code to populate an image and it has not yet been integrated?

    Regarding the second question, vx_images in OpenVX are passed from one node to the next via data object connections.  If the first node produces an image as an output and the second node consumes that image as an input, then the same image object is simply provided to each node's API, and the framework delivers the image object to the second node after the first node has written to it.
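
    For illustration, here is a minimal sketch using two standard OpenVX nodes sharing an intermediate image (the image sizes and node choices are placeholders; the same pattern applies to custom nodes):

        vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_image inter  = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

        /* node 1 writes 'inter'; node 2 consumes it after node 1 completes */
        vxGaussian3x3Node(graph, input, inter);
        vxMedian3x3Node(graph, inter, output);

    The image object connected to the first node's output and the second node's input is what carries the data between them.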

    Regards,

    Lucas

  • Hello Lucas,

    Sorry for the confusion!

    I have the code in OpenCV to read and store pixel values:

    for (ut_Size nImage = 0; nImage < imageFileNames.size(); ++nImage)
    {
        cv::Mat image = cv::imread(imageFileNames[nImage], cv::IMREAD_GRAYSCALE);

        // ... other camera calibration operations ...
    }

    But please let me know how to rewrite this in OpenVX, including the looping over many images.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    You do not necessarily need to make the image file read operation a node.  You could simply read the image in the application using the same OpenCV code, then create the image from the handle using vxCreateImageFromHandle.
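
    A minimal sketch of that, assuming a grayscale cv::Mat as in your snippet (note that the cv::Mat buffer must stay valid for as long as the vx_image is in use):

        cv::Mat mat = cv::imread(imageFileNames[nImage], cv::IMREAD_GRAYSCALE);

        vx_imagepatch_addressing_t addr;
        addr.dim_x    = (vx_uint32)mat.cols;
        addr.dim_y    = (vx_uint32)mat.rows;
        addr.stride_x = 1;                  /* one byte per U8 pixel */
        addr.stride_y = (vx_int32)mat.step; /* OpenCV row stride in bytes */

        void *ptrs[1] = { mat.data };
        vx_image image = vxCreateImageFromHandle(context, VX_DF_IMAGE_U8,
                                                 &addr, ptrs, VX_MEMORY_TYPE_HOST);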

    Alternatively, you can read in the image as described in the tutorial below using the provided helper APIs:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/vx__tutorial__image__load__save_8c.html

    Finally, if you only have a single node in the graph, you may not need to enable pipelining in your application.  In that case, you can read in a new image after calling vxProcessGraph, then continue calling vxProcessGraph in a loop for the number of images you are processing.
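
    In other words, something along these lines, where read_next_image is a hypothetical helper that fills the input vx_image (e.g. via vxMapImagePatch):

        for (uint32_t n = 0; n < num_images; n++)
        {
            read_next_image(input, n);  /* hypothetical: populate 'input' for image n */
            vxProcessGraph(graph);      /* blocks until the whole graph has executed */
            /* consume the graph output for image n here */
        }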

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    I need image file reading as a separate node because this application will be integrated with another application that has the same structure as shown in the block diagram above.

    Also, please tell me whether the .py file is correct for graph generation.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    We do not support the Python file for graph generation.  We recommend using existing applications as a reference for your new application development.

    Regarding the custom user kernel for image file read, are you having issues with this?  If so, what are they?  I will try to review it when I get a chance.

    Regards,

    Lucas

  • Hello Lucas,

    I am not clear on passing input to user data objects.

    For example, the user data object has 4 parameters, like image directory path, file number, etc.

    How do I pass the input for all the above-mentioned parameters in the graph?

    Please provide me with an example.

    Once I am clear with passing of input to user data object, I will check the graph and report issues if any.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    We have a number of examples of this.  For instance, in vision_apps/apps/basic_demos/app_single_cam/app_single_cam_main.c, we create a user data object called "capture_config" using the vxCreateUserDataObject API.  When we call this API, we provide "local_capture_config", which is a local instance of the user data object's data structure type and is used to initialize the values of the user data object.

    If this config has to be remapped for every execution of the graph, vxMapUserDataObject can be called prior to executing the graph.  An example of this can be found in vision_apps/apps/basic_demos/app_single_cam/app_single_cam_common.c.
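
    A minimal sketch of this pattern, with a hypothetical config structure (tivx_app_config_t and its fields are placeholders for your own type):

        typedef struct
        {
            char     img_dir[256];  /* hypothetical fields */
            uint32_t file_num;
        } tivx_app_config_t;

        tivx_app_config_t local_config = { "/data/images", 0 };

        /* create and initialize the user data object from the local struct */
        vx_user_data_object config_obj = vxCreateUserDataObject(context,
            "tivx_app_config_t", sizeof(tivx_app_config_t), &local_config);

        /* update the config before a graph execution */
        vx_map_id map_id;
        tivx_app_config_t *cfg = NULL;
        vxMapUserDataObject(config_obj, 0, sizeof(tivx_app_config_t), &map_id,
            (void **)&cfg, (vx_enum)VX_WRITE_ONLY, (vx_enum)VX_MEMORY_TYPE_HOST, 0);
        cfg->file_num++;
        vxUnmapUserDataObject(config_obj, map_id);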

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    Yes, I have referred to the mentioned examples and created the graph now.

    But I am unable to run the kernel for a number of images. Is there anything I am missing?

    Please verify the attachment and advise me on the following:

    1) How do I loop over a number of images?

    2) When I try to save the image object as a bmp file, the output is a blank grayscale image. Please provide me a solution for this.

    Regards,

    Padmasree N.

  • Hello Lucas,

    Please find my error log below.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    How many iterations can you run before you get that error?  Is that on the first iteration or after multiple?

    Are you sure that the image was generated properly?  It could be that the image was not generated correctly and that is why the bmp image is a blank grayscale image.

    Also, can you remind me which SDK version you are using?

    Regards,

    Lucas

  • Hello Lucas,

    Only one iteration runs before the error pops up!

    I have verified the height and width of the generated image using vxQueryImage() and they are correct. But when I try to check whether the image was generated properly by saving it to another folder, I get only the blank grayscale image shown below.

    I am using PSDK j7_07_00_00_11.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    It looks like you are using an RGB image in your use case, which has only a single plane.  However, in the kernel wrapper, the output image maps two planes.  Therefore, I suspect the error you are getting occurs when trying to map the second plane.  Could you confirm via print statements that the error occurs when calling the below:

            tivxMemBufferMap(image_target_ptr[1],
               image_desc->mem_size[1], (vx_enum)VX_MEMORY_TYPE_HOST,
               (vx_enum)VX_WRITE_ONLY);

    If so, you can remove the references to the second plane in the kernel wrapper.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    I have modified my kernel to use a single plane and the error is now resolved.

    But the image is still not being saved properly.

    Is it a problem with the way I am using tivx_utils_save_vximage_to_bmpfile(), or is it some other issue?

    Regards,
    Padmasree N.

  • Hello Padmasree,

    The call to tivx_utils_save_vximage_to_bmpfile looks fine.  Could you possibly print out the first few lines of data from the image you are reading and provide the latest log?  I'm just wondering if there is some issue with this file being read in the kernel.

    Regards,

    Lucas

  • Hello Lucas,

    Please find the image data.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Is this the full log?  It looks like there are other logging statements in the code, such as printing the width and height.

    Regards,

    Lucas

  • Hello Lucas,

    Sorry, the previous log was for another RGB image, which is not in the attachment.

    The full log for the actual image is attached below.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    This data does not look correct to me.  Is this what you are expecting to be read in from your RGB image?  There may be a bug in your file reading code.

    Regards,

    Lucas

  • Hello Lucas,

    1) Please do provide me some examples of reading an RGB BMP image using VxLib.

        I will verify whether my understanding and implementation are correct.

    2) Also, the documentation I referred to (https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/vx__tutorial__image__load__save_8c.html) has only the graph and not the user kernel.

    So, please suggest a way to implement image read as a separate node (with a separate user kernel).

    Regards,

    Padmasree N.

  • Hello Padmasree,

    For #1, you should just be able to use the default BMP utils as shown below for reading from BMP:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/group__group__tivx__ext__host__utils.html#ga83fe7115b7012a3d7411b90fc9db0979

    Regarding #2, the way that you are currently implementing a new node via PyTIOVX looks fine.  The issue just appears to be with the BMP reading, so once this is resolved, everything should be functional.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    The BMP load function tivx_utils_create_vximage_from_bmpfile() cannot be used inside the kernel, as it takes "context" as one of its parameters.

    How do I use it in the user kernel I am working with right now?

    Please do throw some light on this.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    The function I linked to is tivx_utils_bmp_file_read(), which does not take a context as an argument, just a pointer.  Can you see if this can be used?

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for the clarification!

    I tried the function tivx_utils_bmp_file_read(), and the image loaded without any errors.

    But only a blank image is stored in the out folder.

    Please advise me where I am going wrong. I guess it's with the saving of the output image from the kernel.

    The output console is shown below.

    I have always had little clarity on reading output from a graph with different data objects. Please provide me some documentation that explains reading output from a graph.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Regarding documentation for reading output from a graph with different data objects, I would recommend referencing the tutorial mentioned before.  This is basically the extent of the documentation that we have for this, but you can also reference the OpenVX specification, as this mainly requires the use of the spec APIs.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tiovx/docs/user_guide/CH02_IMAGE.html

    You can also reference the test cases at tiovx/conformance_tests/test_tiovx/test_bmp_rd_wr.c

    Regarding your current issue, you could try writing the output to file from the kernel itself to see if there is an issue with passing the image back to the application.

    Regards,

    Lucas

  • Hello Lucas,

    I tried as you mentioned, by including the write in the kernel itself.

    But I get "undefined reference to tivx_utils_bmp_file_write()" in my console, as shown below.

    Then I changed the function to tivx_utils_bmp_write() and the graph compiled successfully.

    But when I try to run the graph, a segmentation fault occurs, as shown below.

    Below are my queries.

    1) Which is the correct function for BMP write: tivx_utils_bmp_file_write() or tivx_utils_bmp_write()?

    2) Please provide me a clear explanation of the below parameters in the kernel and their impact on the application graph.

    a) image_addr

    b) image_target_ptr

    c) image_desc

    d) vxlib_image

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Please use tivx_utils_bmp_write.  The reason for the segfault appears to be that you are using image_desc instead of image_target_ptr.  Please try using image_target_ptr as the input to this API.

    Here are explanations for each of the items you described.  Please let me know if you need further clarification:

    • image_desc: an object descriptor of type tivx_obj_desc_image_t, a data structure that is passed between nodes in shared memory; it contains details about the image object, including the data buffer pointer.
    • image_addr: the pointer contained within the image_desc structure that points to the image data.
    • image_target_ptr: In order to handle upstream filtering operations, we have a function called tivxSetPointerLocation which sets image_target_ptr to the first valid pixel of image_addr.  The reason for this is that, according to the OpenVX spec, a previous node in the graph may have used a filter operation, causing invalid pixels at the border of the image.  tivxSetPointerLocation queries the image_desc to see if there are any invalid pixels and sets image_target_ptr to the first valid pixel location.  Therefore, please use image_target_ptr when accessing the image data.
    • vxlib_image: This is only intended to give easy access to image properties such as width/height/format from the image_desc.  The tivxInitBufParams function returns these properties in vxlib_image, a structure of type VXLIB_bufParams2D_t.
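
    Putting those together, the typical access pattern inside a target-kernel process callback looks roughly like the following.  This is a sketch only: the descriptor index and the single-plane assumption are taken from your use case, and the exact tivxMemShared2TargetPtr signature can vary slightly across SDK versions.

        tivx_obj_desc_image_t *image_desc = (tivx_obj_desc_image_t *)obj_desc[0]; /* assumed index */
        void *image_target_ptr;
        uint8_t *image_addr;
        VXLIB_bufParams2D_t vxlib_image;

        image_target_ptr = tivxMemShared2TargetPtr(&image_desc->mem_ptr[0]);
        tivxMemBufferMap(image_target_ptr, image_desc->mem_size[0],
            (vx_enum)VX_MEMORY_TYPE_HOST, (vx_enum)VX_WRITE_ONLY);

        tivxInitBufParams(image_desc, &vxlib_image);  /* width/height/stride into vxlib_image */
        tivxSetPointerLocation(image_desc, &image_target_ptr, &image_addr);

        /* image_addr now points at the first valid pixel; process the buffer here */

        tivxMemBufferUnmap(image_target_ptr, image_desc->mem_size[0],
            (vx_enum)VX_MEMORY_TYPE_HOST, (vx_enum)VX_WRITE_ONLY);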

    Regards,

    Lucas

  • Hello Lucas,

    Thank you for your clear explanation! I got a good insight.

    Also, I have modified the kernel code and now I am able to load and save a BMP image successfully from the kernel.

    But I am still getting only a blank image from my application graph.

    1) What changes should be made to the above code to make it work?

    2) My requirement is that the output of this ImageRead node (a set of images) should be passed to another node (camera calibration). How do I get multiple images (a set of images) out of this node?

    Any suggestions would be really helpful!

    Regards,

    Padmasree N.

  • Hello Padmasree,

    For #1, I will need to take a closer look.  Overall, the code looks fine, so I will need to see what the issue could be.

    For #2, are you asking if a set of images can be passed all at once to another node?  In a nutshell, there are two different modes of operation for OpenVX, non-pipelining and pipelining, briefly explained below:

    1. In non-pipelining mode, you will simply call vxProcessGraph and the entire graph of nodes will execute one after another until every node's process callback has been called once.  You can call vxProcessGraph N times, thereby processing N images in the process callbacks.

    2. In pipelining mode, there will be multiple graph executions in flight so that each node can process simultaneously.  This can also be run for N images, and will have better performance; a rough sketch follows below.
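
    For reference, the pipelining enqueue/dequeue loop could look roughly like this, assuming the input image is graph parameter 0 and num_buf input buffers (input_img) were created; see the pipelining documentation for the full setup (vxSetGraphScheduleConfig, graph parameters, etc.):

        /* prime the pipeline with all available input buffers */
        for (uint32_t i = 0; i < num_buf; i++)
        {
            vxGraphParameterEnqueueReadyRef(graph, 0, (vx_reference *)&input_img[i], 1);
        }

        for (uint32_t n = num_buf; n < num_images; n++)
        {
            vx_image done;
            vx_uint32 num_refs;

            /* blocks until one graph execution completes */
            vxGraphParameterDequeueDoneRef(graph, 0, (vx_reference *)&done, 1, &num_refs);

            /* refill 'done' with the next image, then recycle the buffer */
            vxGraphParameterEnqueueReadyRef(graph, 0, (vx_reference *)&done, 1);
        }

        vxWaitGraph(graph);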

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    Regarding the issue of writing images back in my application, please do verify and provide me a solution as early as possible.

    #2: My application is camera calibration, which takes N images and some intrinsic parameters as input and outputs the median roll, yaw, pitch and some other extrinsic parameters. The median and other parameters are calculated once for the N images.

    So, I cannot use non-pipelining mode, as my application should have only two nodes: ImageReadNode and CameraCalibrationNode. If I follow (1), then I would have to make the median calculation a separate node.

    Regarding (2), I am not very clear about pipelining mode. Please do throw some more light on this.

    Also, can I use an object array to output N images at once to my next node?

    Below is my exact flowchart.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    How many images are required for the object array?  We have a max value of 32 that can be contained in an object array.

    Regarding pipelining, you can review the below links for more detail.  Let me know if you have further questions:

    Regards,

    Lucas

  • Hello Lucas,

    There would be a minimum of 300 images in the application.

    Please tell me a way out for this.

    For pipelining, I will refer to the links provided.

    Regards,

    Padmasree N.

  • Hello Lucas,

    In that case, should I create 3 nodes (ImageFileReadNode, CameraCalibrationNode, MedianNode) and pass image by image in a pipelined fashion?

    If so, please advise me how to implement the same using PyTIOVX. Will the tool support it?

    Also, I have read about batch processing. Will that work for my use case, and will the tool support it?

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Yes, this is an approach that will work.

    We do not have support for pipelining in PyTIOVX.  You will need to write this application manually.  You can reference the tutorial for how to add pipelining to an application.

    You can use batch processing.  However, there is more testing and validation done with the pipelining mode of operation.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    I will learn about pipelining and will try implementing my application.

    Meanwhile, please check my code to see why the image is not being read back in the application graph.

    Also, please confirm whether a single-node approach for the entire application (only one node) would work on the target hardware with a camera as the input sensor.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    The single node approach can work, but it will depend on the required latencies of your application.  With the single node approach, you will not need to do pipelining given that pipelining is only an advantage when using multiple cores.  Therefore, if you have multiple nodes, these nodes could potentially be processing on multiple cores, giving you better system-level performance.

    Regards,

    Lucas

  • Hello Lucas,

    Thanks for your reply!

    My application with a single node takes the image file path as the input parameter, not the image.

    The computation for images is done inside the C code, compiled as a static library and added to the graph. The PC simulation is working fine as of now.

    But now we are moving to the target. Will this work, given that we pass only the image file path as input and not the image itself?

    Please confirm whether this would work on hardware when using a camera, or whether we should make an image the input parameter.

    Also, in parallel, we are trying a multi-node approach for the same application, wherein pipelining can be used.

    We have only one A72 core and two cameras.  Do you recommend pipelining for this use case?

    Please do provide some insight.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Yes, this node should work on A72 as well as in PC emulation mode.  And regarding the file read, you should be able to do this in the node as well since it is on the A72.  However, most of our applications that do file read do this outside the node.

    Also, where are you using the camera in this application?  From what I understand, you are using file read and not camera to provide the image.  Could you please confirm?

    Regards,

    Lucas

  • Hello Lucas,

    As of now, we have replaced the CaptureNode with an ImageFileReadNode (in the multi-node approach only). But in the future, we have to move to two cameras.

    But the single node approach takes only the image path as the input. It does not have any ImageFileReadNode. All the processing, like reading the image, operations on the image, computation, etc., is done inside the C code itself (compiled and added as a static lib to the graph). That's why I am wondering whether this could work on hardware when the two cameras come into the picture.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    Once you move to the target, the capture node should be able to replace the ImageFileReadNode as you said.  The reading of image/operations on image, computation, etc can be done in a separate node downstream of the capture node.

    Regarding two cameras, the output of the capture node is an object array, with the depth of the array being the number of images being used.  Therefore, the interface will be the same regardless of the number of cameras, allowing it to be flexible from 1 to N cameras.  You can reference app_multi_cam for details.
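
    For reference, an object array is created from an exemplar reference, and individual frames are retrieved by index.  A minimal sketch, assuming a UYVY capture output (the format, dimensions, and num_cameras are placeholders):

        vx_image exemplar = vxCreateImage(context, width, height, VX_DF_IMAGE_UYVY);
        vx_object_array cam_frames = vxCreateObjectArray(context,
            (vx_reference)exemplar, num_cameras);
        vxReleaseImage(&exemplar); /* the array keeps its own copies */

        vx_image cam0 = (vx_image)vxGetObjectArrayItem(cam_frames, 0);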

    Regards,

    Lucas

  • Hello Lucas,

    We are following two approaches for the same application, camera calibration:

    1) Single Node approach (Verified on PC simulation, working on hardware)

    2) Multi-Node approach (PC simulation: in progress)

    So, my questions are..

    1) Approach (1) takes only the image path as input. It does not have any ImageFileReadNode. All the processing, like reading the image, operations on the image, computation, etc., is done inside the C code itself (compiled and added as a static lib to the graph). We do not have any image as an input parameter; we pass only the image path. So, when cameras come into the picture and we have to use the CaptureNode later, will that work? Or should we change the code now to take an "image" as the input parameter?

    2) In approach (2), we have only one core (A72) and three nodes: ImageFileReadNode/CaptureNode, CameraCalibrationNode, MedianNode. So, do you recommend pipelining for this?

    Regards,

    Padmasree N.

  • Hello Padmasree,

    On #1, yes, I would recommend changing the code to take in the image from the application rather than reading the file in the node.  This will allow for simpler porting once you introduce the capture node.  In order to comment on the interface to the node, could you provide what image format you need this to be?

    On #2, yes, if you are wanting optimal performance on the SoC, I would recommend pipelining because the capture node will be running on R5F while the other two nodes will be running on different cores.

    Regards,

    Lucas

  • Hello Lucas,

    Thank you so much for your reply!

    Approach (1) takes a PNG RGB image file path as input and, inside the C code, converts it to grayscale and performs further operations. But in the future, we need to take the YUV422 format.

    For (2), as you said, I will perform pipelining, and one more clarification is needed here.

    According to my application flow, (ImageFileReadNode---->CameraCalibrationNode) runs for N images and passes the results to the final MedianNode (this MedianNode calculates the median over all N images), and everything works on the A72 core only.

    a) Will this work with pipelining?

    b) Do you recommend pipelining for this usecase, as only one core is involved.

    c) Are there any other things need to be taken care?

    d) Also, why is the CaptureNode specific to the R5F core?

    Please explain the same to me.

    Regards,

    Padmasree N.

  • Hello Padmasree,

    A few points below:

    a) This type of flow is not typical in OpenVX, even without pipelining.  Is it possible to modify the median node to track the median value via a context variable and update it upon successive images?  Another possibility: if this algorithm is running on the A72, you don't necessarily need to wrap it in a node.  You could simply write this as a function within the application code, dequeue the images from the graph, and pass them to this function (see the sketch after this list).

    b) If you are using capture, then there would be two cores involved, given that capture runs on the R5F and the remaining algorithms run on the A72.

    c) Which sensor are you using for capture here?  Depending on the type of sensor, you may also need to use the VISS node to convert to YUV422.

    d) The capture node runs exclusively on the R5F, as we have written the drivers for the R5F.  This was done as part of the overall architecture decisions for J7 devices.
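
    For (a), a minimal sketch of keeping the median in the application rather than in a node: collect one value per processed image as results are dequeued, then compute the median once at the end (plain C, no OpenVX involvement):

        #include <stdlib.h>

        static int cmp_float(const void *a, const void *b)
        {
            float d = *(const float *)a - *(const float *)b;
            return (d > 0.0f) - (d < 0.0f);
        }

        /* vals holds one result (e.g. roll) per image, filled as images are processed */
        static float median_of(float *vals, int n)
        {
            qsort(vals, n, sizeof(float), cmp_float);
            return (n & 1) ? vals[n / 2] : 0.5f * (vals[n / 2 - 1] + vals[n / 2]);
        }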

    Regards,

    Lucas

  • Hello Lucas,

    Thank you so much for your clear explanation!

    I will try the approach for median as you mentioned.

    We have planned for 2 Cameras.

    But for now, we have an ImageFileReadNode, which reads the image path and passes the image to the next node.

    Please advise me on how to pass the image data to the next node using OpenVX.

    Also, are there any updates on the previous issue of being unable to write the image back in the application?

    Any insight would be really helpful.

    Thanks in advance!

    Regards,

    Padmasree N.

  • Hello Padmasree,

    You can pass the image from one node to the next by using data objects at the node interface.  In this case, the ImageFileReadNode needs to output an image, while the next node needs to consume an image as its input.

    Regarding the debug of writing the image, I am not sure that I will be able to do so.  We have thoroughly tested these APIs, so they should be functional.  I suggest using an IDE such as Eclipse to debug this issue and see where the error occurs.  If you have specific questions based on this debugging, I can advise.

    Regards,

    Lucas