
TDA4VM: Inference picture problem

Part Number: TDA4VM

Hi TI,

        The chip we use is the TDA4VM, the SDK version is RTOS SDK 8.4, and the models we use are BisenetV2.onnx and YOLOV3-10.ONNX.

        Specific problem description:

        When I ran inference on 4 images at the same time using the batch processing feature, the Resize, Unsqueeze, Cast, and NonMaxSuppression operators were reported as unsupported during model import, but there was no problem when inferring single images with the same model. When I change only numBatches to 1, with the rest of the parameters and the input images unchanged, the model import proceeds normally with no unsupported operators.

        Therefore, I hope you can answer the following two questions: 1. Can you check the configuration file to see whether additional parameters need to be added, or whether the input image format is incorrect, causing the batch processing function to fail? 2. Could you tell me how to run inference on four pictures at the same time? Thanks very much!

        Attached are the imported configuration file, spliced pictures, and run logs.

LFJ_0215.zip

  Regards,

  Kong

  • Hi Kong,

         The batch processing feature is mainly intended to improve performance for smaller networks. Currently, not all layers support batch processing, and beyond a certain number of batches enabling it can actually hurt performance.

    1. Can you check the configuration file to see if additional parameters need to be added or if the input image format is incorrect, resulting in the failure of batch processing function?

     No specific configuration parameter is required for batch processing except numBatches. Use inFileFormat = 1 for batch processing, which provides raw binary files as input.
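    For reference, a minimal import configuration along these lines might look like the fragment below. This is an illustrative sketch, not a verified configuration: the file paths are placeholders, and only numBatches and inFileFormat are the parameters discussed in this thread; check the TIDL importer documentation for your SDK version before relying on the other keys.

    ```
    modelType     = 2                 # assumed: 2 selects an ONNX model
    inputNetFile  = "model.onnx"      # placeholder path
    outputNetFile = "model_net.bin"   # placeholder path
    inData        = "batch_input.bin" # raw binary input for batch mode
    inFileFormat  = 1                 # 1 = raw binary input files
    numBatches    = 4                 # number of images per inference
    ```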

    2. Could you tell me how to realize the reasoning of four pictures at the same time?

    We support the Unsqueeze, Cast, and NonMaxSuppression operators only as part of object detection post-processing. Are these layers part of the object detection post-processing in your network?


    Regards,

    Anshu

  • Hi,

         Thank you very much for your reply and guidance.

         In response to your reply, I have one request and one answer:

         1. Can you provide a list of layers that support batch processing?

         2. Operators such as Unsqueeze, Cast, and NonMaxSuppression are used in the post-processing part of YOLOv3, and Resize is used in the segmentation model.

    Regards,

    Kong

  • Hi,

        We have not received any reply to this question for four days, which is seriously affecting the progress of this part of our work. Please help resolve it so that we can move forward. Thank you very much.

        Regards,

        Kong

  • Hi Kong, 
        If you use the latest SDK 8.5, the import tool will throw an error if you enable batch processing for unsupported layers. Only the following layers support batch processing; for the rest, import will exit with an error in SDK 8.5:

    TIDL_ConvolutionLayer
    TIDL_PoolingLayer
    TIDL_BatchNormLayer
    TIDL_EltWiseLayer
    TIDL_ConcatLayer
    TIDL_InnerProductLayer
    TIDL_SoftMaxLayer
    TIDL_DataLayer

    Regards,

    Anshu

  • Hi,

     Actually, we have four cameras on the board, and what we want to do is infer the four images from these cameras at the same time. How could we achieve this goal if batch processing is not available for our model? I wonder whether it would work to set the batch dimension to be dynamic when we export the model to .onnx?

    Thank you!

     Regards,

     Kong

  • Hi Kong,

        You can loop over the TIDL_invoke function to run inference across multiple images.
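    The loop pattern can be sketched as follows. This is a minimal illustration, not TIDL code: `run_inference` is a hypothetical stand-in for whatever per-image invoke call your runtime exposes (e.g. TIDL_invoke through its C API, or an ONNX Runtime session with the TIDL execution provider), and the frame shapes are assumptions.

    ```python
    import numpy as np

    def run_inference(frame):
        # Placeholder for the real per-image invoke call (e.g. TIDL_invoke).
        # Here it just returns a dummy result derived from the frame.
        return {"mean": float(frame.mean())}

    def infer_four_cameras(frames):
        # Instead of batching four images into one tensor (numBatches = 4),
        # run the single-image model once per camera frame, sequentially.
        results = []
        for cam_id, frame in enumerate(frames):
            results.append((cam_id, run_inference(frame)))
        return results

    # Four dummy 8-bit camera frames (the 8x8x3 shape is an assumption).
    frames = [np.full((8, 8, 3), v, dtype=np.uint8) for v in (10, 20, 30, 40)]
    for cam_id, out in infer_four_cameras(frames):
        print(cam_id, out["mean"])
    ```

    Running the single-image model four times avoids the unsupported-operator errors entirely, at the cost of losing whatever throughput benefit batching would have provided.
    
    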

    Regards,
    Anshu