
AM69: Issue with ONNX model on TI AM69 board: Kernel stops during compilation

Part Number: AM69


Hello everyone,

I’m new to development with the Texas Instruments AM69 board and I’m trying to run an ONNX model (SAM — Segment Anything Model) on it.

I have followed the steps to export and quantize the model to ONNX format and loaded it onto the board. However, when I try to compile or run the kernel that uses this model, the execution stops without clear error messages.

I don’t have much experience with this, so I would really appreciate any guidance on:

  • How to verify that the ONNX model is compatible with the TI AM69 board.

  • Whether there are extra steps needed to prepare the kernel to work with a quantized ONNX model.

  • Tools or logs that can help me understand why the kernel stops.

  • Simple examples or beginner-friendly documentation for this environment.

Thank you in advance for any help or suggestions. Any simple explanation will be very welcome.

Best regards.

  • Hi Aday,

    Setup instructions here: https://github.com/TexasInstruments/edgeai-tidl-tools

    First of all, if you are new to this, please try a simple model, like a resnet model.  For example, after setting up edgeai-tidl-tools, cd to edgeai-tidl-tools/examples/osrt_python/ort and run:

    python3 ./onnxrt_ep.py -c -m cl-ort-resnet18-v1

    This will download and compile a resnet18 model.  Then run the model by:

    python3 ./onnxrt_ep.py -m cl-ort-resnet18-v1

    If both of the above work, your setup is good to go.  I would recommend taking the OSRT approach first.  Basically, from the same directory, open ../model_configs.py (vi ../model_configs.py), copy one of the configurations (say, cl-ort-resnet18-v1), and give it a name for your model.  The only thing you should need to set for a crude test is the model_path.  Explicitly set it to your model's path.  Then compile your model.
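    As a rough, hypothetical sketch of such an entry (the exact layout of model_configs.py varies between edgeai-tidl-tools releases, so copy an existing entry rather than typing this verbatim; the name my-sam-model and the path below are placeholders to replace with your own):

    "my-sam-model": {                                        # placeholder name; pick your own
        # ... keep all the other fields copied from the cl-ort-resnet18-v1 entry ...
        "model_path": "/path/to/your/quantized_sam.onnx",    # point this at your ONNX file
    },

    Compiling then uses the same -c flag as the resnet18 example, with whatever name you chose:

    python3 ./onnxrt_ep.py -c -m my-sam-model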

    If everything is good, your model should compile.  This will create artifacts in edgeai-tidl-tools/model-artifacts and copy a version of your ONNX file to edgeai-tidl-tools/models/public.

    You can then run your model by:

    python3 ./onnxrt_ep.py -m <the_name_you_used_in_model_configs.py>

    This is all on the host, in emulation.  To run your model on the EVM, copy the artifacts folder from edgeai-tidl-tools/model-artifacts/<model_name> to the EVM.  For example:

    scp -r edgeai-tidl-tools/model-artifacts/cl-ort-resnet18-v1 root@<your_device_IP>:/opt

    To run on the device:

    ssh to your device and cd to /opt/tidl_test
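    For example (assuming the same device IP used in the scp step above):

    ssh root@<your_device_IP>
    cd /opt/tidl_test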

    Create a file called infer_evm.txt (you can name it whatever you like; for this example, I use infer_evm.txt):

    debugTraceLevel = 0
    writeTraceLevel = 0
    netBinFile = "/opt/cl-ort-resnet18-v1/tidl_net.bin"
    ioConfigFile = "/opt/cl-ort-resnet18-v1/tidl_io_buff1.bin"
    outData = "/opt/cl-ort-resnet18-v1/test_model_out_evm.bin"
    inFileFormat = 0
    inData = "/opt/test_model/jet.bmp"
    numFrames = 1

    Finally, from the /opt/tidl_test directory, run on the EVM by:

    ./TI_DEVICE_armv8_test_dl_algo_host_rt.out s:infer_evm.txt

    This is just a starting point.  Please review the docs at https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/docs

    Regards,

    Chris