This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

PROCESSOR-SDK-J722S: ONNX compilation fails

Part Number: PROCESSOR-SDK-J722S
Other Parts Discussed in Thread: AM67A

Hi team,

I am trying to convert a model with the example notebook at https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/jupyter_notebooks/custom-model-onnx.ipynb

but I get the error below:

*************** EP Error ***************
EP Error Unknown Provider Type: TIDLCompilationProvider when using ['TIDLCompilationProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
****************************************

I checked that TIDL_TOOLS_PATH is set and confirmed that the directory contains "tidl_model_import_onnx.so".

Please help me fix this issue.

Thank you for your help.

[backup]

code below:

import os
import tqdm
import cv2
import numpy as np
import onnxruntime as rt
import shutil
from scripts.utils import imagenet_class_to_name, download_model
import matplotlib.pyplot as plt
from pathlib import Path
from IPython.display import Markdown as md
from scripts.utils import loggerWritter
from scripts.utils import get_svg_path
import onnx

def preprocess(image_path):
    # read the image using openCV
    img = cv2.imread(image_path)
    # convert to RGB
    img = img[:,:,::-1]

  • Hi, please check two things.

    1. Make sure your <edgeai-tidl-tools>/tidl_tools/ directory exists and contains some version of the following files:

    tidl_tools$ ls -1
    device_config.cfg
    ignore.log
    itidl_rt.h
    itvm_rt.h
    libtidl_onnxrt_EP.so
    libtidl_tfl_delegate.so
    libvx_tidl_rt.so
    libvx_tidl_rt.so.1.0
    nc_infofile.txt
    netbufinfo.bin
    osrt_deps
    PC_dsp_test_dl_algo.out
    ti_cnnperfsim.out
    tidl_graphVisualiser.out
    tidl_graphVisualiser_runtimes.out
    tidl_model_import_onnx.so
    tidl_model_import.out
    tidl_model_import_relay.so
    tidl_model_import.so
    tidl_model_import_tflite.so
    tidl_model_secure.out
    version.txt

    If not, set SOC to your device and run "source ./setup.sh" from the edgeai-tidl-tools/ directory (this will take some time).

    2. If edgeai-tidl-tools/tidl_tools exists and it has the above files, then set TIDL_TOOLS_PATH=/<FULL_PATH_TO_EDGEAI_TIDL_TOOLS>/edgeai-tidl-tools/tidl_tools.
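    Both checks above can be automated with a small stdlib-only sketch. The required file names are taken from the listing above; the helper function and the choice of which libraries to require are illustrative, not part of TI's tooling:

```python
import os
from pathlib import Path

# Libraries the ONNX import flow needs; names taken from the tidl_tools listing above.
REQUIRED = ["tidl_model_import_onnx.so", "libtidl_onnxrt_EP.so", "libvx_tidl_rt.so"]

def check_tidl_tools(path=None):
    """Return a list of problems found with the tidl_tools setup (empty list = OK)."""
    problems = []
    path = path or os.environ.get("TIDL_TOOLS_PATH")
    if not path:
        return ["TIDL_TOOLS_PATH is not set"]
    root = Path(path)
    if not root.is_dir():
        return [f"{root} is not a directory"]
    for name in REQUIRED:
        if not (root / name).is_file():
            problems.append(f"missing {name}")
    return problems
```

    Running this before launching the notebook catches both an unset variable and an incomplete setup.sh run.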

  • Hi Chris,

    I checked the tidl_tools directory below; the files are correct (SOC = am67a),

    and I have already set TIDL_TOOLS_PATH to the full path of tidl_tools.

    My TIDL version is 10.00.04.00, running on a PC (Ubuntu 22.04).

    Please help me check this.

    Thank you for your help.

  • Please try this:

    Change:

    from scripts.utils import loggerWritter
    
    log_dir = Path("logs").mkdir(parents=True, exist_ok=True)
    
    # stdout and stderr saved to a *.log file.  
    with loggerWritter("logs/custon-model-onnx"):
        
        # model compilation options


    Add the print(os.environ['TIDL_TOOLS_PATH']) line:

    from scripts.utils import loggerWritter
    
    log_dir = Path("logs").mkdir(parents=True, exist_ok=True)
    print(os.environ['TIDL_TOOLS_PATH'])
    # stdout and stderr saved to a *.log file.  
    with loggerWritter("logs/custon-model-onnx"):

        # model compilation options


    Make sure the printed value of os.environ['TIDL_TOOLS_PATH'] matches your
    tidl_tools/ directory; it should point to your edgeai-tidl-tools/tidl_tools/
    directory. If it does not, you may have started 'jupyter notebook' before the
    variable was set.

    Or, if you want more control than the Jupyter notebook gives you, you can run
    your model from the command line in edgeai-tidl-tools/examples/osrt_python/ort/
    with:

    python3 ./onnxrt_ep.py -m cl-ort-resnet18-v1 -c    (first, compile the model)
    python3 ./onnxrt_ep.py -m cl-ort-resnet18-v1 -d    (run the model on the x86 processor)
    python3 ./onnxrt_ep.py -m cl-ort-resnet18-v1       (run the model on the PC with C7x emulation)
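    The environment-inheritance point above can be demonstrated with a short sketch: a child process (like 'jupyter notebook') snapshots its environment at launch, so exporting TIDL_TOOLS_PATH afterwards in another shell never reaches it. The path used here is a placeholder, not a real install location:

```python
import os
import subprocess
import sys

# The child only sees variables that existed when it was spawned.
# Placeholder path; substitute your actual edgeai-tidl-tools/tidl_tools directory.
env = dict(os.environ, TIDL_TOOLS_PATH="/opt/edgeai-tidl-tools/tidl_tools")
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('TIDL_TOOLS_PATH'))"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # the child reports the value it was launched with
```

    Exporting the variable in the parent shell *after* the child starts would leave the child's copy unchanged, which is exactly the failure mode described above.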
     
     
  • Hi Chris,

    I printed os.environ['TIDL_TOOLS_PATH'] below; the path appears to be correct.

    Thank you for your help.

  • Hi Chris,

    I tried running the command python3 ./onnxrt_ep.py -m cl-ort-resnet18-v1 -c

    and encountered the following error: AttributeError: 'InferenceSession' object has no attribute 'get_TI_benchmark_data'.

    Could this be because the TIDLCompilationProvider is not being used?

    Thank you for your help.

  • What version of TIDL are you running? I would recommend a 10.0 variant. It also looks like you are running this from a Docker container; is that correct? Can you run it outside the container to eliminate some variability?

    I am certain this works, so the only other thing I can think of is to re-install tidl_tools. To do this, move or remove tidl_tools (that directory must not exist for the reinstall). Ensure SOC is set and run 'source ./setup.sh' from the edgeai-tidl-tools directory.
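    The reinstall preparation can be sketched as follows. The helper function and the "tidl_tools.bak" name are illustrative, not part of TI's tooling; setup.sh itself must still be sourced from a shell afterwards:

```python
import os
import shutil
from pathlib import Path

def prepare_reinstall(repo_dir, soc="am67a"):
    """Move an existing tidl_tools/ aside and set SOC, ahead of re-running setup.sh."""
    repo = Path(repo_dir)
    tools = repo / "tidl_tools"
    if tools.exists():
        # setup.sh expects this directory to be absent for a clean reinstall.
        shutil.move(str(tools), str(repo / "tidl_tools.bak"))
    os.environ["SOC"] = soc
    # Next step (in a shell): cd <repo_dir> && source ./setup.sh
    return not tools.exists()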

  • Hi Chris,

    I am running TIDL version 10.00.04.00.

    Yes, I am running this from a Docker container.

    Thank you very much for your suggestion. I will try running it directly on the Ubuntu system without a container. Additionally, may I confirm that SOC should be set to "am67a"?

    Thank you for your help.

  • Hi Ken,

    am67a is correct.   

  • Hi Chris:

    I would like to double-check the steps I followed to ensure they are correct:

    [env]

    without container

    Ubuntu : 22.04

    TIDL : 10.00.04.00

    [steps]

    1. export SOC=am67a
    2. source ./setup.sh
    3. export TIDL_TOOLS_PATH=$(pwd)/tidl_tools
      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TIDL_TOOLS_PATH
      export ARM64_GCC_PATH=$(pwd)/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu
    4. cd examples/jupyter_notebooks
    5. source ./launch_notebook.sh
    6. exec custom-model-onnx.ipynb

    Could you kindly confirm if these steps are correct?

    Additionally, I wanted to ask if the output file will be located in the custom-artifacts/onnx/ directory as specified in the notebook?

    Thank you for your assistance!

    Best regards,

    Ken

  • Hi Ken,

    I ran all your steps, but I am confused about how you run the Jupyter .ipynb file. I am using Ubuntu 22.04 and 'exec' does not work. How are you running it? I have heard of using papermill to run a notebook from the command line, but 'exec' is not doing anything.

    Chris

  • Hi Chris:

    I copied the code from the Jupyter Notebook file into a .py file and executed it using Python 3.

    Thank you for your assistance!

    Best regards,

    Ken
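    For reference, copying a notebook's code into a .py file can be done mechanically, since .ipynb is just JSON. A minimal stdlib-only sketch (the function name is illustrative; 'jupyter nbconvert --to script' does the same job):

```python
import json

def notebook_to_script(ipynb_path, py_path):
    """Concatenate a notebook's code cells into a plain .py file (markdown cells are skipped)."""
    with open(ipynb_path) as f:
        nb = json.load(f)
    with open(py_path, "w") as f:
        for cell in nb.get("cells", []):
            if cell.get("cell_type") == "code":
                # Each cell's source is a list of line strings.
                f.write("".join(cell["source"]) + "\n\n")
```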

  • Please use the other threads for this.  I will close this thread.