My ONNX model has a TopK operation, but when I try to convert it to TIDL format I get the following error: "TopK_193 -- ONNX operator TopK is not suported now.. By passing". What are the alternate solutions to make this work? Please suggest.
Hi Abhilash,
Same comment as on your other thread: you can refer to the following repo for running this model: https://github.com/TexasInstruments/edgeai-tidl-tools
TIDL offers open-source runtime support that implements supported layers on the DSP and unsupported layers on the ARM core using native runtimes. This should help you get the model working.
Given that you are trying to get TopK working, I assume this is an object detection network. If that is the case, TIDL provides optimized implementations for OD networks with specific meta-architectures, as outlined here: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_od_meta_arch.md
Regards,
Anand
Thanks for the response. My architecture is a standard detection architecture, but it is not related to YOLO or SSD; it is a new architecture. I wanted to know if there is a way to have an alternate implementation of TopK that can run on the DSP instead of the ARM core?
Hi Abhilash,
Running a single layer such as TopK out of all the post-processing layers on the DSP instead of the ARM core would not give much of a performance improvement, and we do not have plans to support this operator standalone on the DSP. We provide the meta-architecture support to help optimize all of the post-processing layers on the DSP.
Regards,
Anand
Thanks for the response. We used the following method to run the model on the Arm processor: TVM uses the standard Arm backend and the LLVM compiler to generate Arm code in deploy_lib.so. TVM compilation does not save any intermediaries.
On the C7x processor, you can turn on "c7x_codegen=1" in your compilation script: https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python#advanced-miscellaneous-options, https://software-dl.ti.com/codegen/docs/tvm/tvm_tidl_users_guide/index.html
The generated code is in <artifacts_folder>/tempDir/*.c. We have not optimized the matmul operator for the C7x. You can try writing your own TVM schedule or implement it as an external library. See TI's TVM user guide: https://software-dl.ti.com/codegen/docs/tvm/tvm_tidl_users_guide/extending.html
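As an illustration of the "external library" route mentioned above, an unoptimized reference matmul in plain C++ could look like the sketch below. This is only a generic example under assumed conventions (row-major float layout, hypothetical function name `ref_matmul`), not TI's code; a real C7x implementation would use optimized kernels.

```cpp
#include <cstddef>

// Illustrative reference only: C = A (m x k) * B (k x n), row-major floats.
// The name `ref_matmul` and the layout are assumptions for this sketch.
void ref_matmul(const float* A, const float* B, float* C,
                std::size_t m, std::size_t k, std::size_t n) {
    for (std::size_t i = 0; i < m; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            // Accumulate the dot product of row i of A and column j of B.
            for (std::size_t p = 0; p < k; ++p)
                acc += A[i * k + p] * B[p * n + j];
            C[i * n + j] = acc;
        }
    }
}
```

Such a function can then be compiled separately and linked in, with a TVM schedule or external-library hook dispatching to it, as described in the extending guide linked above.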
I get the following error:
Hi Abhilash,
We will have full "topk" support in the next PSDK/TVM release.
In the meantime, you can follow this example: compile a separate "cpp" file for the "float" 1-D "topk" implementation and link it into the final C7x deployable module:
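For reference, a minimal generic float 1-D top-k in plain C++ might look like the following. This is only an illustrative sketch of the idea (returning the indices of the k largest values); the function name `topk_1d_float` is hypothetical and this is not the exact example file referenced above.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Illustrative sketch only: returns the indices of the k largest
// elements of `data`, ordered by descending value.
// The name `topk_1d_float` is hypothetical, not from TI's example.
std::vector<int> topk_1d_float(const float* data, std::size_t n, std::size_t k) {
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);  // idx = 0, 1, ..., n-1
    if (k > n) k = n;
    // Partially sort so the first k indices point at the k largest values.
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [data](int a, int b) { return data[a] > data[b]; });
    idx.resize(k);
    return idx;
}
```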
-Yuan