TDA4VM: How can I use a reshape layer without shape restrictions in TIDL?

Part Number: TDA4VM

Hi

I'm building a segmentation model in PyTorch and exporting it to ONNX.

I'm using SDK 8.1.

I have one problem.

The model I'm building needs a reshape layer without shape restrictions, but as I understand it, the reshape layer in TIDL only supports [1,1,1,N].

Because of this, I'm having a hard time building the segmentation model.

How can I use a reshape layer without shape restrictions in TIDL?

Example: [1, 4, 32, 1792] -> [1, 128, 32, 56]
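
For reference, here is a minimal sketch of the kind of reshape I mean (a hypothetical standalone module, not my real segmentation model):

import torch

class ReshapeBlock(torch.nn.Module):
    def forward(self, x):
        # [1, 4, 32, 1792] -> [1, 128, 32, 56]; element counts match (229376)
        return x.reshape(1, 128, 32, 56)

# Exported to ONNX the same way as the real model.
dummy = torch.randn(1, 4, 32, 1792)
torch.onnx.export(ReshapeBlock(), dummy, "reshape_block.onnx", opset_version=11)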

  • Hi,

    Have you checked out our open source runtime offering? https://github.com/TexasInstruments/edgeai-tidl-tools

    This lets you run layers that the C7x/MMA does not support on the ARM cores, while the supported parts of the model run on the hardware accelerator.
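
    For example, a minimal sketch of the compile step in the ONNX runtime flow (option names follow the examples/osrt_python examples in that repository; the model name and paths here are placeholders):

    import os
    import onnxruntime as rt

    # Compile-time options; see examples/osrt_python in edgeai-tidl-tools
    # for the full list. Values below are placeholders.
    compile_options = {
        "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
        "artifacts_folder": "model-artifacts/my_model",
        "tensor_bits": 8,
    }

    so = rt.SessionOptions()
    # TIDLCompilationProvider partitions the graph: supported layers are
    # compiled for the accelerator, the rest fall back to CPUExecutionProvider.
    sess = rt.InferenceSession(
        "my_model.onnx",
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[compile_options, {}],
        sess_options=so,
    )
    # Running a few representative inputs through sess.run() performs
    # calibration and writes out the artifacts folder.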

    Regards,

    Anand

  • Hi,

    Thank you for your answer.

    My background isn't in embedded systems or boards, so I don't know the hardware side in detail.

    I saw the word "unsupported" when compiling on a PC with edgeai-tidl-tools.
    Is there any way to check on the PC which reshape shapes are unsupported?

    Do I have to force the reshape into a supported shape?
    If there is a way to replace it, what should I do?

    Also, what exactly does a "supported model" mean for the hardware accelerator?

    Thanks

    Ok Lee, thanks for stating your background here so I can explain better. TIDL has optimized implementations of individual layers so that they can run on the hardware accelerator. However, this requires code specific to each layer's properties, so the optimized implementation covers a specific set of layers and attributes, defined here: software-dl.ti.com/.../md_tidl_layers_info.html

    If the network has layers that are not optimized for the accelerator, we split the network into subgraphs: those that can run on the TI accelerator, and those that run on the ARM cores using the open source TensorFlow/ONNX runtimes. This is covered by the open source runtimes offering I pointed to above: https://github.com/TexasInstruments/edgeai-tidl-tools. So you can still run the entire network seamlessly, even when some layers have no supported accelerator implementation.

    So the "unsupported" prints you see mean "not supported on the accelerator"; those layers will still execute on the ARM cores. You can check the "runtimes_visualization.svg" file generated in the model_artifacts folder; it shows the subgraphs created for your model.
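
    If you want to keep a particular layer type off the accelerator explicitly (for example the Reshape in question), the compile options in the examples also accept a deny list. A minimal sketch, assuming the option spelling used in the osrt_python examples (please verify against your SDK version):

    import os

    # Sketch only: force all Reshape layers to run on the ARM cores.
    # "deny_list" takes comma-separated ONNX operator types.
    compile_options = {
        "tidl_tools_path": os.environ["TIDL_TOOLS_PATH"],
        "artifacts_folder": "model-artifacts/my_model",
        "deny_list": "Reshape",
    }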

    Regards,

    Anand

  • Hi,

    Thank you very much for the detailed reply.

    I've seen the compilation process split the model into subgraphs when unsupported layers are included.
    In fact, in the example configuration for importing custom models in the edgeai documentation, the bin files are created with a single key number (264).
    However, as soon as subgraphs are created, several bin files appear, for example 265_tidl_io_1.bin, 266_tidl_io_1.bin, etc.
    In this case, how do I actually run inference on the board?
    Should I put all the bin files in one folder?
    Can I use the subgraph bin files to run inference on a PC?
    I'd like to understand how the subgraphs are used.

    Thanks

  • You can refer to this section for running on EVM: https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python#model-inference-on-evm

    The model-artifacts folder has all the required bin and io files generated during model compilation, and our backend implementation takes care of using them correctly to run inference on the board.

    "Can I use the subgraph bin files to run inference on a PC?"

    --> See point 3 in https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python#model-compilation-on-pc
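
    For reference, a minimal sketch of the inference side (model name and paths are placeholders; the same pattern is used on the PC and on the EVM, pointing at the artifacts folder produced during compilation):

    import numpy as np
    import onnxruntime as rt

    # TIDLExecutionProvider loads the compiled artifacts; subgraphs without
    # accelerator support fall back to CPUExecutionProvider (ARM) automatically.
    # On PC you may also need "tidl_tools_path" here, as in the repo examples.
    runtime_options = {"artifacts_folder": "model-artifacts/my_model"}
    sess = rt.InferenceSession(
        "my_model.onnx",
        providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
        provider_options=[runtime_options, {}],
    )
    inp = sess.get_inputs()[0]
    x = np.random.rand(1, 3, 512, 512).astype(np.float32)  # dummy input; match your model's shape
    out = sess.run(None, {inp.name: x})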

    Regards,

    Anand