
PROCESSOR-SDK-AM62A: Running AI model on the chip.

Part Number: PROCESSOR-SDK-AM62A
Other Parts Discussed in Thread: SK-AM62A-LP

I bought an AM62A Low-Power SK EVM board to run an AI model.

Link of the chip that I bought: www.ti.com/.../spruj66a.pdf

The chip runs Aragon OS and supports Max 2 TOPS.

I have some questions about the following:

  1. I don't understand what kind of processor is inside the AM62A SK EVM board, and which SDK it uses: Processor SDK RTOS (PSDK RTOS) or Processor SDK Linux (PSDK Linux)?
  2. I made a model that converts a 2D image to a 3D image. I converted the model to an ONNX (16-bit) file and ran it on the AM62A board; however, the per-image processing time on the TI board (9 seconds) is much slower than on another board (0.4 seconds, Nano Jetson Xavier). My questions: Do I need to convert the ONNX file to a *.bin file? And do you have any documentation on converting an ONNX file to a bin file to run on the TI AM62A SK EVM board?
  3. Can I use this guideline for the AM62A board (click here)?
  4. In case I can use the above guideline, can I also use it for other complex AI models (LSTM, Transformer, ...)? If so, please give me some more detailed documents.

Thank you so much. 

  • Hi Tommy,

    Glad to hear you are using the AM62A, and it is our pleasure to help you use it. See my comments/answers below:

    The chip runs Aragon OS and supports Max 2 TOPS.

    We do not support Aragon OS for the AM62A. More about our SDK offerings in my answers below.

     I don't understand what kind of processor is inside the AM62A SK EVM board, and which SDK it uses: Processor SDK RTOS (PSDK RTOS) or Processor SDK Linux (PSDK Linux)?

    The SK-AM62A-LP EVM is built around the AM62A processor, an SoC containing up to four Arm Cortex-A53 cores and several MCUs, along with an AI accelerator capable of up to 2 TOPS. The SoC also contains several other hardware accelerators that are critical for vision and AI applications, such as an internal ISP and a video codec. For more details about the SoC, see the datasheet: https://www.ti.com/lit/ds/symlink/am62a7.pdf

    We support several SDKs for the AM62A, including Linux, RTOS, and QNX. Our list of SDKs is available for download here: https://www.ti.com/tool/PROCESSOR-SDK-AM62A

    I suggest that you start with the Linux SDK for vision and AI applications. Follow the steps in this quick start guide: https://dev.ti.com/tirex/explore/node?node=A__AXXfkyQhgTbg9xe.BzlxIA__PROCESSORS-DEVTOOLS__FUz-xrs__LATEST

    I made a model that converts a 2D image to a 3D image. I converted the model to an ONNX (16-bit) file and ran it on the AM62A board; however, the per-image processing time on the TI board (9 seconds) is much slower than on another board (0.4 seconds, Nano Jetson Xavier). My questions: Do I need to convert the ONNX file to a *.bin file? And do you have any documentation on converting an ONNX file to a bin file to run on the TI AM62A SK EVM board?

    Yes, in order to use the hardware AI accelerator, the model has to be compiled using our tidl-tools. This tool produces the necessary binary files to offload the model to the AI accelerator. The tool is available in this GitHub repo: https://github.com/TexasInstruments/edgeai-tidl-tools. First, follow the steps in the main README to set up the tool on your host PC; then I suggest you start with the examples provided here: https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python
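    As a rough sketch of what the osrt_python ONNX compilation step looks like (paths here are placeholders, and the full option set is documented in the repo, so treat this as illustrative rather than authoritative):

```python
# Sketch of compiling an ONNX model with the TIDL ONNX Runtime
# compilation provider, following the edgeai-tidl-tools osrt_python
# examples. Both paths below are placeholders for your own setup.
delegate_options = {
    "tidl_tools_path": "/path/to/edgeai-tidl-tools/tidl_tools",  # placeholder path
    "artifacts_folder": "./model-artifacts",  # compiled binary files land here
    "tensor_bits": 16,   # 16 matches your 16-bit model; 8 is typically faster
    "debug_level": 0,
}

def compile_model(model_path: str):
    # Requires the onnxruntime build with TIDL support from the TI tools
    # setup, so the import is kept local to this function.
    import onnxruntime as rt
    so = rt.SessionOptions()
    # TIDL compiles the supported subgraphs for the accelerator; any
    # unsupported layers fall back to the CPU provider.
    ep_list = ["TIDLCompilationProvider", "CPUExecutionProvider"]
    return rt.InferenceSession(
        model_path,
        providers=ep_list,
        provider_options=[delegate_options, {}],
        sess_options=so,
    )
```

    Running a few calibration inferences through this session on the host PC is what produces the artifacts; the exact calibration flow is shown in the osrt_python example scripts.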

    If you use the ONNX file directly, it runs on the A53 Arm cores, hence the 9-second latency.
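    On the board, the same flow switches to the TIDL execution provider and points at the compiled artifacts copied over from the host. Again, this is only a sketch with placeholder names:

```python
# Sketch of running the compiled artifacts on the AM62A target with
# the TIDL execution provider. The artifacts_folder must contain the
# binaries produced by the compilation step on the host PC.
runtime_options = {
    "artifacts_folder": "./model-artifacts",  # copied from the host after compilation
    "debug_level": 0,
}

def run_on_target(model_path: str, input_name: str, input_data):
    # The onnxruntime build with TIDL support is part of the AM62A
    # Linux SDK target filesystem; the import stays local here.
    import onnxruntime as rt
    ep_list = ["TIDLExecutionProvider", "CPUExecutionProvider"]
    sess = rt.InferenceSession(
        model_path,
        providers=ep_list,
        provider_options=[runtime_options, {}],
    )
    # Layers compiled for TIDL execute on the AI accelerator; anything
    # left uncompiled falls back to the A53 cores.
    return sess.run(None, {input_name: input_data})
```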

    Can I use this guideline for the AM62A board (click here)?

    This is an older version of our documentation. Instead, I suggest following the steps in the tidl-tools repo which I shared in my answer above. Also, for more advanced topics on compiling the model, you can refer to the detailed documentation here: https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/docs

    In case I can use the above guideline, can I also use it for other complex AI models (LSTM, Transformer, ...)? If so, please give me some more detailed documents.

    You can use the tidl-tools and documentation I provided above. We support some Transformer layers. For a list of the supported layers, see this document: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md

    Best regards,

    Qutaiba