
AM69A: Building TIDL Tool for Target

Part Number: AM69A


I'm currently working on implementing custom operators for TIDL using SDK version 10_01_00_04. I've successfully created a custom operator and built the TIDL tools for the PC using the following command:

make tidl_pc_tools -j

This generates the tools at SDK/tidl_tools.tar.gz. Everything appears to be functioning correctly on the PC: I replaced the TIDL tools in EdgeAI with my custom-built version, compiled an ONNX model that includes the custom operator, and was able to run inference successfully.

The issue arises when trying to build the TIDL tools for the target board. I used the following command to build the runtime tools:

make tidl_rt

I then replaced the following libraries on the board:

  • SDK/c7x-mma-tidl/arm-tidl/onnxrt_ep/out/J784S4/A72/LINUX/release/libtidl_onnxrt_EP.so.1.0

  • SDK/c7x-mma-tidl/arm-tidl/rt/out/J784S4/A72/LINUX/release/libvx_tidl_rt.so.1.0

However, the libvx_tidl_rt.so built for the target does not contain any of the symbols related to my custom operator, unlike the version built for the PC.

So my questions are:

  1. Is make tidl_rt the correct way to build TIDL tools for the target?

  2. How can I ensure that the custom operator implementation is included when building for the target?

  3. Exactly which libraries need to be copied to the board to support custom operator modifications?

  • Hi,

    Thanks for the question. 

    The "make tidl_rt" target builds the source code against the pre-built libraries provided in the SDK installer. If you are building TIDL-related apps for your target board, you usually also need to rebuild the vision apps with "make vision_apps".

    I'm not sure exactly what you have added or modified, but in order to update the OSRT components and the C7x firmware, you will have to run the build/make tasks on the target board. Make sure you set the SOC parameter to match your hardware when building. The following link has the details for these tasks:

    https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/update_target.md

    Regards

    Wen Li

  • I'd like to provide additional context to my previous question regarding the custom operator.

    I've implemented a custom Ceil operator by following the MaxPool example, and made modifications in the following files:

    • utils/tidlModelImport/tidl_onnxRtImport_core.cpp
      → Added the new operator to individualSupportedOnnxOps

    • custom/tidl_custom_import.c
      → Updated TIDL_MapCustomParamsOnnx(), TIDL_getCustomLayerOutputTensorScale(), and TIDL_tfOutReshapeCustomLayer()

    • custom/tidl_custom.c
      → Modified TIDL_customLayerProcess()

    • custom/makefile
      → Added new source and header files

    • custom/ceil/tidl_custom_ceil.c
      → Implemented TIDL_customCeilProcess()

    • custom/tidsp/ceil/ (C7x implementation source and header files):
      tidl_custom_ceil_ixX_oxX.c
      tidl_custom_ceil_ixX_oxX.h
      tidl_custom_ceil_ixX_oxX_c7x.c
      tidl_custom_ceil_ixX_oxX_priv.h

    Inference using this custom operator works correctly on the PC. I then copied the compiled libraries (libtidl_onnxrt_EP.so.1.0 and libvx_tidl_rt.so.1.0) and the compiled ONNX model artifacts to the target board. However, when running inference on the target, although the model loads and a subgraph is created, the output is entirely zeros.

    libtidl_onnxrt_EP loaded 0x10a91a30 
    Final number of subgraphs created are : 1, - Offloaded Nodes - 1, Total Nodes - 1 
    APP: Init ... !!!
      6879.102931 s: MEM: Init ... !!!
      6879.102973 s: MEM: Initialized DMA HEAP (fd=5) !!!
      6879.103103 s: MEM: Init ... Done !!!
      6879.103121 s: IPC: Init ... !!!
      6879.128557 s: IPC: Init ... Done !!!
    REMOTE_SERVICE: Init ... !!!
    REMOTE_SERVICE: Init ... Done !!!
      6879.136605 s: GTC Frequency = 200 MHz
    APP: Init ... Done !!!
      6879.136697 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
      6879.136709 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
      6879.136718 s:  VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
      6879.137400 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0 
      6879.137534 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1 
      6879.137636 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2 
      6879.137720 s:  VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3 
      6879.137731 s:  VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
      6879.137743 s:  VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO
    INPUT: = [[ -1.1   2.2  -3.3  -4.4   5.5  -6.6   7.7  -8.8  -9.9 -10. ]]
    Model Output:  [array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)]
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
      6879.188237 s: IPC: Deinit ... !!!
      6879.189250 s: IPC: DeInit ... Done !!!
      6879.189281 s: MEM: Deinit ... !!!
      6879.189294 s: DDR_SHARED_MEM: Alloc's: 7 alloc's of 1887976 bytes 
      6879.189302 s: DDR_SHARED_MEM: Free's : 7 free's  of 1887976 bytes 
      6879.189309 s: DDR_SHARED_MEM: Open's : 0 allocs  of 0 bytes 
      6879.189320 s: MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!
    

    My questions are:

    1. What is the correct process to ensure that custom operator changes are fully reflected on the target?

    2. Where does the TIDL runtime expect to find the implementation of custom operators on the target, and how does it locate and invoke the corresponding logic for already supported operators?

    3. What is the correct process to build the TIDL runtime and custom operator support for the target without relying on the prebuilt libraries provided in the SDK? Specifically, how can we ensure a full rebuild of all relevant components from source?