
TDA4VM: How to configure batch processing using edgeai-tidl-tools

Part Number: TDA4VM
Other Parts Discussed in Thread: AM69A

Hi all,

We are trying to configure batch processing on a single-core architecture (TDA4VM) using edgeai-tidl-tools, but it seems that only parallel batch processing can be configured, via advanced_options:inference_mode and advanced_options:num_cores (a sketch of what we tried is shown below). However, these options apply only to multi-core platforms. How should we proceed with converting a network in batch mode? The edgeai-tidl-tools version we use is 09_01_08_00.
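
The compile options we tried look like the following (a minimal sketch along the lines of the edgeai-tidl-tools ONNX runtime examples; the paths are placeholders for our setup, and the inference_mode value reflects our reading of the multi-core docs):

import onnxruntime as rt

# Placeholder paths for our setup
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",
    "artifacts_folder": "/path/to/artifacts",
    # 1 = high-throughput (parallel batch) mode per the multi-core docs;
    # both options appear to be meaningful only on multi-core SoCs.
    "advanced_options:inference_mode": 1,
    "advanced_options:num_cores": 4,
}

so = rt.SessionOptions()
sess = rt.InferenceSession(
    "simple_model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=so,
)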

NOTE:

We also tried the older approach (the import tool from PSDK version 09.01.00.06 in a Linux environment), setting the numBatches parameter to the desired batch size. With that, the network compilation was successful, but when executed on the target we get correct results only for the first batch; the other batches are garbage. The results are correct on PC. After checking the results layer by layer, the DataConvert layer appears to be the problem: it converts the input tensor correctly only for the first batch, while the other batches are all 0. We also checked with PSDK version 10.00.00.05 and observed the same behavior. Can you confirm that the DataConvert layer works correctly in batch mode on target, or point us to what we might be doing wrong?

  • Hi,

    Yes, parallel batch processing is only supported on multi-core devices. The recommendation is to use a compatible SoC such as AM69A to utilize this feature.

    More on batch processing: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_multi_c7x.md

    Changes in 10_01_00_02 release: https://github.com/TexasInstruments/edgeai-tidl-tools/releases/tag/10_01_00_02

    Thank you,

    Fabiana

  • Hi Fabiana,

    Thank you for your answer. I am aware that parallel batch processing is only supported on multi-core devices. However, my question was whether batch processing (not necessarily parallel) is supported on single-core devices. In older PSDK versions it was supported even on single-core platforms, and the model was adjusted accordingly during the import process. So is it still supported? I assume so, since I was able to import the network. The only issue I am facing is that the DataConvert layer does not seem to work correctly for multiple batches. Can you confirm this is the issue, and suggest how to address it?

    Thank you,

    Mladen

  • Hi Mladen,

    Yes, batch processing is supported on single DSP devices. Could you provide any logs along with your configuration? 

    Thank you,

    Fabiana

  • Hi Fabiana,

    Unfortunately, I cannot provide full logs for the actual network because they contain confidential information that cannot be shared on a public forum. However, in the next few days I'll try to reproduce the issue with a simpler network and share the results.

    Regards,

    Mladen

  • Hi Mladen,

    Sounds good.

    Thank you,

    Fabiana

  • Hi Fabiana,

    Here is a zip file with a simple network that reproduces the issue. It contains the network ONNX file, the log file generated by the tool during the import process, the SVG generated by the tool after conversion, and the dumped outputs of the DataConvert layer for the target (tda4_*) and PC simulation (pc_*). Looking at the dumped outputs, you can clearly see that on PC all batches are correct, but on target only the first batch is correct and the others are all 0 (the check we ran is sketched below).

    1273.models.zip
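
    For reference, this is the kind of per-batch check we ran on the dumped DataConvert outputs (a minimal sketch; the file names are hypothetical stand-ins for the pc_*/tda4_* dumps in the zip, and the shape/dtype follow the import configuration below):

    import numpy as np

    # 5 batches of 1x192x192, float32 (outElementType = 6)
    BATCH, C, H, W = 5, 1, 192, 192

    # Hypothetical file names standing in for the pc_*/tda4_* dumps
    pc = np.fromfile("pc_dataconvert_out.bin", dtype=np.float32).reshape(BATCH, C, H, W)
    tda4 = np.fromfile("tda4_dataconvert_out.bin", dtype=np.float32).reshape(BATCH, C, H, W)

    for b in range(BATCH):
        max_diff = np.abs(pc[b] - tda4[b]).max()
        print(f"batch {b}: max |pc - tda4| = {max_diff}, target all zero: {not tda4[b].any()}")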

    The configuration file I pass to the import tool is provided below.

    modelType = 2
    outputParamsFile = simple_tidl_io_
    outputNetFile = simple_tidl_net.bin
    inData = ./testvecs/input/simple/batch_000001.bin
    inHeight = 192
    inWidth = 192
    inNumChannels = 1
    numFrames = 1
    numBatches = 5
    inputNetFile = ../../models/simple_model.onnx
    inFileFormat = 1
    quantizationStyle = 2
    biasCalibrationIterations = 5
    calibrationOption = 7
    outElementType = 6
    rawDataInElementType = 6
    addDataConvertToNet = 3
    debugTraceLevel = 1
    inElementType = 6
    perfSimTool = ./ti_cnnperfsim.out
    graphVizTool = ./tidl_graphVisualiser.out
    perfSimConfig = ./device_config.cfg
    tidlStatsTool = ./PC_dsp_test_dl_algo.out
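
    One detail worth noting: with numBatches = 5 we packed the raw input file as 5 frames stored back to back (our assumption based on the older import-tool documentation), each 1x192x192 float32 (inFileFormat = 1, rawDataInElementType = 6). A minimal sketch of how we generated it:

    import numpy as np

    # Assumption: batches are stored contiguously in NCHW order as raw float32
    frames = np.random.rand(5, 1, 192, 192).astype(np.float32)  # placeholder data
    frames.tofile("batch_000001.bin")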
    

    Regards,

    Mladen

  • Hi Mladen,

    Thank you for sharing the data. Can you clarify which edgeai-tidl-tools and Processor SDK versions you are using? Can you verify that they are compatible using the version compatibility table: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/version_compatibility_table.md

    It is recommended that you use the latest versions: edgeai-tidl-tools tag 10_01_00_02 and Processor SDK version 10.01.00.04.

    Thank you,

    Fabiana

  • Hi Fabiana,

    We are currently working with PSDK v09.01.00.06 (the version used by our client) and tools from edgeai-tidl-tools v09_01_08_00, but the same behavior is observed with PSDK v10.00.00.05 and tools from edgeai-tidl-tools v10_01_03_00, which was the latest version available when we investigated this issue. As far as I can see, the latest edgeai-tidl-tools version is now 10_01_04_00, compatible with PSDK v10.01.00.04, but we have not tried that one.

    Nevertheless, can someone confirm whether this is a known issue that has been fixed in newer versions, and whether it can be fixed (e.g., by applying a patch) in earlier versions?

    Regards,

    Mladen

  • Hi Mladen,

    You can see the issues that are fixed in each release as well as the known issues that are still present here: https://github.com/TexasInstruments/edgeai-tidl-tools/releases

    I have received confirmation that batch processing should be possible on this SoC, but I have yet to replicate the issue internally. Please allow me some time to discuss this with the team. Thank you for your patience.

    Regards,

    Fabiana