This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

PROCESSOR-SDK-AM69A: Issue Deploying Custom iResNet101 Model Self-Trained on AM69A with TIDL

Part Number: PROCESSOR-SDK-AM69A
Other Parts Discussed in Thread: AM69A

I have compiled and run inference for a self-trained IResNet101 model with TIDL, but the inference time looks wrong. Please help me check it.

  • I have tried to fix the directory error, but without success.

  • Hi Le,

    Did you try running the inference step under osrt_python/ort? Please let me know if you see the same issue there. Also, could you share the artifacts that you created? I will try to run them on my end on PC as well. The high inference time is because the model runs completely on the ARM cores.

    Warm regards,

    Christina

  • Hi Christina,
    I have sent you the log file from the folder /opt/edge-tidl-tool/model-artifact.
    Please help me check it. Thank you
    drive.google.com/.../view

  • Hi Le

    What is the inference time that you get when you run this on PC inference? Is it similar?

    Warm regards,

    Christina

  • Hi,

    This is the log from running IResNet and checking the inference time inside Docker.
    Please help me check it. Thank you
    drive.google.com/.../view

  • Hi Le Tung, 

    I double-checked, and the time that you have for the inference is in microseconds [μs], not milliseconds [ms] (as I wrongly informed you in a past E2E thread). Apologies for that oversight; I was remembering the model run-time units for benchmark testing, which do not apply in this case.

    Everything looks fine to me. What is your target run time? There are some warnings that your model triggers when imported into TIDL, and addressing them could make it run better on TIDL.

    Warm regards,

    Christina

  • Hi,

    When running our self-trained ResNet101 model, the inference time result is 538 microseconds, but when running our FaceNet model, the reported inference time looks too good compared to other models.

    Can you give us a link to documentation about the unit of this inference time?

  • Hi Le,

    There is no documentation that states the unit of the inference time; however, I was able to check it by looking at onnxrt_ep.py. Around line 405, it has

    total_proc_time = total_proc_time / 1000000
    sub_graphs_time = sub_graphs_time / 1000000

    which is what I used to confirm.
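    As a reference point for that divide-by-1,000,000 step: a nanosecond counter divided by 1,000,000 gives milliseconds, and a microsecond counter divided by 1,000,000 gives seconds. A minimal standalone timing sketch along those lines (this is not code from onnxrt_ep.py, just an illustration of the conversion):

```python
import time

def timed_call(fn, *args):
    """Time fn(*args) with a nanosecond counter and report milliseconds.

    Mirrors the divide-by-1_000_000 step quoted above: ns / 1_000_000 == ms.
    """
    t0 = time.perf_counter_ns()
    result = fn(*args)
    elapsed_ms = (time.perf_counter_ns() - t0) / 1_000_000
    return result, elapsed_ms

# For example, a raw reading of 538_190_000 ns corresponds to 538.19 ms.
```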

    Hope this helps.

    Warm regards,

    Christina

  • Hello ,

    I was previously running the model on the ARM cores due to an incorrect path to the model artifacts. After updating the artifact path, I now encounter the following error during execution:
    TIDL_RT_OVX: ERROR: Verifying TIDL graph ... Failed !!!

    Could you please re-run the model on your end and share the complete inference log with me? Having your log output will help us diagnose the issue more effectively.

    Thank you in advance for your assistance.

    Best regards,
    An Dao

  • Hi An, 

    I ran these under advanced_examples in OSRT on PC. advanced_examples generates random data to test the model. I have added both the compilation and inference logs for your reference. It will look a bit different on the device, but please send over your input and, if possible, the way you are running it on the device.

    Warm regards,
    Christina

    Available execution providers :  ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']
    
    Running 1 Models - ['arrow']
    
    ========================= [Model Compilation Started] =========================
    
    Model compilation will perform the following stages:
    1. Parsing
    2. Graph Optimization
    3. Quantization & Calibration
    4. Memory Planning
    
    ============================== [Version Summary] ==============================
    
    -------------------------------------------------------------------------------
    |          TIDL Tools Version          |              10_01_04_00             |
    -------------------------------------------------------------------------------
    |         C7x Firmware Version         |              10_01_00_01             |
    -------------------------------------------------------------------------------
    |            Runtime Version           |                1.15.0                |
    -------------------------------------------------------------------------------
    |          Model Opset Version         |                  11                  |
    -------------------------------------------------------------------------------
    
    ============================== [Parsing Started] ==============================
    
    [TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options
    
    ------------------------- Subgraph Information Summary -------------------------
    -------------------------------------------------------------------------------
    |          Core           |      No. of Nodes       |   Number of Subgraphs   |
    -------------------------------------------------------------------------------
    | C7x                     |                     255 |                       1 |
    | CPU                     |                       0 |                       x |
    -------------------------------------------------------------------------------
    ============================= [Parsing Completed] =============================
    
    ==================== [Optimization for subgraph_0 Started] ====================
    
    [TIDL Import] [PARSER] WARNING: PReLU Layer PRelu_1's bias cannot be found(or not match) in coeff file, Random bias will be generated only for evaluation usage. Results are all random
    [... the same warning repeats for 49 more PReLU layers, PRelu_4 through PRelu_248 ...]
    [TIDL Import] [PARSER] WARNING: Batch Norm Layer BatchNormalization_254's coeff cannot be found(or not match) in coeff file, Random bias will be generated only for evaluation usage. Results are all random
    
    ----------------------------- Optimization Summary -----------------------------
    --------------------------------------------------------------------------------
    Running_Model :  arrow  
    
    [WARNING] The run might fail for transformer/convnext networks, user will have to modify the strict to set             graph_optimization_level to DISABLE_ALL in session.
    
    
    Running shape inference on model ../../../../../models/arrow/resnet101.onnx 
    
    Completed model -  resnet101.onnx
    
     
    Name : arrow                                             , Total time :   17644.79, Offload Time :   13276.17 , DDR RW MBs : 0
     
     
    
    Available execution providers :  ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']
    
    Running 1 Models - ['arrow']
    
    
    Running_Model :  arrow  
    
    [WARNING] The run might fail for transformer/convnext networks, user will have to modify the strict to set             graph_optimization_level to DISABLE_ALL in session.
    
    EP Error /root/onnxruntime/onnxruntime/core/providers/tidl/tidl_execution_provider.cc:94 onnxruntime::TidlExecutionProvider::TidlExecutionProvider(const onnxruntime::TidlExecutionProviderInfo&) status == true was false. 
     when using ['TIDLExecutionProvider', 'CPUExecutionProvider']
    Falling back to ['CPUExecutionProvider'] and retrying.
    Completed model -  resnet101.onnx
    
     
    Name : arrow                                             , Total time :     277.51, Offload Time :       0.00 , DDR RW MBs : 0
     
     
    

  • Hi Christina,

    Thank you for sharing the advanced_examples for OSRT on PC—I really appreciate having both the compilation and inference logs for reference.

    My colleague Mr. Tung and I are working on the AM69A evaluation kit. When I execute the same workflow as Mr. Tung, the log confirms that the model is running entirely on the ARM core, just as shown in the screenshot above.

    To adapt the example to our setup, I only modified the onnxrt_ep.py file, updating:

    delegate_options['artifacts_folder'] = "<correct path to your artifacts folder>"

    so that the runtime can locate the allowedNode.txt file. However, once the TIDL provider is invoked, the run fails with:

    TIDL_RT_OVX: ERROR: Verifying TIDL graph ... Failed !!!

    Could you please try running the advanced_examples on an AM69A device and share your logs or any insights? Your help would be invaluable in diagnosing this TIDL verification failure.
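    A small guard along these lines can catch the artifacts-folder class of mistakes before the session is created. The check for allowedNode.txt mirrors the file mentioned above; this is a sketch under that assumption, not code from edgeai-tidl-tools:

```python
import os

def validate_artifacts_folder(path):
    """Fail fast if the TIDL artifacts folder is missing or incomplete.

    Assumes compiled artifacts include allowedNode.txt (as referenced in
    this thread); adjust the expected file list for your setup.
    """
    if not os.path.isdir(path):
        # Mirrors the "artifacts_folder not a directory" runtime error
        raise NotADirectoryError(f"artifacts_folder not a directory: {path}")
    expected = "allowedNode.txt"
    if expected not in os.listdir(path):
        raise FileNotFoundError(
            f"{expected} missing in {path}; was compilation run for this model?")
    return True
```

    Calling this right before constructing the ONNX Runtime session makes the failure mode obvious instead of silently falling back to the CPU execution provider.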

  • Hi Christina,
    Here is an additional log from running Mr. An's resnet101 model, below:

    drive.google.com/.../view

  • Hi An and Le,

    Did you set the model details under common_utils found in /edgeai-tidl-tools/examples/osrt_python/advanced_examples/unit_tests_validation/ ? 

    Also, do you have an input file for your model that you are using? If you have an input file, I recommend using  /edgeai-tidl-tools/examples/osrt_python/ort

    for the best data, since advanced_examples is only for testing compatibility, not accuracy. If you can send the input file, I can test it on the AM69A device to check where you may be having the issue.

    Warm regards,

    Christina 

  • Thanks Le,

    Please give me some time to set this up and run on my end.

    Warm regards,

    Christina

  • Hi Christina ,

    Any progress? Do you need any support?
    Thanks and best regards.

  • Hi Le,

    No support from your end is currently needed. I am setting up the device, but due to bandwidth it may take me some time to get back to you. Please expect a response on my progress by next Wednesday at the latest.

    I appreciate your patience.

    Warm regards,

    Christina

  • Hi Christina,

    Have there been any results? I'm looking forward to a positive outcome from you.

    Thanks and best regards.

  • Hi Le, 

    I think I came across the artifacts_folder issue that you mentioned. Here is the output I received

    Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

    Running 1 Models - ['arrow']


    Running_Model : arrow

    [WARNING] The run might fail for transformer/convnext networks, user will have to modify the strict to set graph_optimization_level to DISABLE_ALL in session.

    libtidl_onnxrt_EP loaded 0xad959a0
    EP Error /root/onnxruntime/onnxruntime/core/providers/tidl/tidl_execution_provider.cc:94 onnxruntime::TidlExecutionProvider::TidlExecutionProvider(const onnxruntime::TidlExecutionProviderInfo&) status == true was false.
    when using ['TIDLExecutionProvider', 'CPUExecutionProvider']
    Falling back to ['CPUExecutionProvider'] and retrying.
    Completed model - resnet101.onnx


    Name : arrow , Total time : 538.19, Offload Time : 0.00 , DDR RW MBs : 0


    ERROR : artifacts_folder not a directoryroot

    When trying to run your model under examples, it takes a long time to compile. I am still working through determining the cause of this, as well as the artifacts_folder issue. I may try to run your model directly under tidl_tools through TIDLRT, since this will then use the TIDL execution provider instead of the CPU. If you would like to work on it in parallel, the documentation can be followed via the Jupyter notebooks:

     https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/jupyter_notebooks/colab/tidlrt_tools.ipynb

    I will continue to update you on the progress. 

    Warm regards,

    Christina

  • Hello,

    Could you try the following changes on your board?

    under /run/media/BOOT-mmcblk1p1/uEnv.txt,

    if the name_overlays line is set to 

    name_overlays=ti/k3-j784s4-edgeai-apps.dtbo

    please change it to 

    name_overlays=ti/k3-j784s4-edgeai-apps.dtbo ti/k3-am68-sk-v3link-fusion.dtbo ti/k3-v3link-imx219-0-0.dtbo

    If you are continuing to have trouble running under OSRT, TIDLRT is a second option to run these under. 

    Let me know what error, if any, you see after this change.

    Warm regards,

    Christina

  • Hi Christina,

    After applying your instructions, the error is still unchanged. I tried the TIDLRT route, but it is also hitting an error; it seems the documentation on the GitHub page has not been updated. Can you send me the detailed results from testing my self-trained resnet101.onnx model with the TIDLRT tool?
    Thanks and best regards.

  • Hi all,

    I have attached the import and inference files needed for successful import and inference of the resnet101 model that I presented in the debug session. This assumes you have edgeai-tidl-tools set up and have run 'source setup.sh' successfully.

    The first step is to make an arrow/E2E-1491952/ directory under edgeai-tidl-tools/ and copy the resnet101.onnx model to arrow/E2E-1491952/. Then cd into edgeai-tidl-tools/tidl_tools, copy import_resnet101_host to tidl_tools, run mkdir out/, and run "./tidl_model_import.out import_resnet101". This will compile the model and place the artifacts in out/.

    Once you have the artifacts, you can then run:

    tidl_tools# ./PC_dsp_test_dl_algo.out s:inference_resnet101_host

    This will execute the model in emulation mode on the PC.

    On the device, make an /opt/arrow directory.

    From tidl_tools/, copy the out folder to /opt/arrow/ with scp -r out root@<your_device_ip_address>:/opt/arrow

    Then copy jet.bmp and inference_resnet101 to /opt/arrow/ with scp inference_resnet101 root@<your_device_ip_address>:/opt/arrow and scp jet.bmp root@<your_device_ip_address>:/opt/arrow

    Finally, SSH to the device and cd to /opt/arrow.  Then on the device, run:

    root@am69-sk:/opt/arrow# ../tidl_test/TI_DEVICE_armv8_test_dl_algo_host_rt.out s:inference_resnet101

    The .zip file contains the generated binaries I used in the debug session. It is not needed except as a reference.

    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/inference_5F00_resnet101_5F00_host
    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/import_5F00_resnet101
    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/inference_5F00_resnet101
    device.zip

    regards,

    Chris

  • Hi Christina,

    I have followed your steps one by one, but I hit the same issue you saw before the reboot.

    Here is the log from following your guide. I tried rebooting the device, but the environment did not update. Please help me check it.
    drive.google.com/.../view

  • Hi,

    I suspect there is something wrong with the SD card image. This will work with the .wic file for 10.01 from https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM69A/10.01.00.05. Is that the .wic file that was used to create the SD card? One other thing to try is to initialize the OpenVX environment by:

    root@ j7-evm:~# cd /opt/vision_apps
    root@ j7-evm:~# source ./vision_apps_init.sh
    root@ j7-evm:/opt/tidl_test# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib

    before running inference.

    regards,
    Chris

  • Hi Christina,

    I followed the same procedure as in the resnet101.onnx example to import our custom-trained yolov8s.onnx model. However, I encountered the following error logs during the import process using tidl_model_import.out:

    [TIDL Import] WARNING: Cannot read dims for variable input of Add/Mul operator... Unable to find initializer at index -1 for node /model.22/Slice_1
    [TIDL Import] UNSUPPORTED: Slice layer : Unable to find float initializer...
    [TIDL Import] FATAL ERROR: ONNX model import failed...

    This is the link to the YOLOv8s model that runs successfully on the kit, but it currently runs 100% on the CPU.
    drive.google.com/.../view
    Could you help us understand the cause of these errors and how to resolve them?


    2. Requirement to build from source code for evaluation:

    I noticed that your guides often focus on using pre-built executables along with input files for inference. However, our goal is to build the toolchain from source so that we can integrate additional evaluations beyond inference time, such as accuracy and custom metrics.

    Could you please guide us on how to properly build the necessary components from source to support these needs?
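    While waiting for source-build guidance, accuracy-style metrics can already be scripted on top of the OSRT Python examples by collecting the raw session outputs and scoring them separately. A minimal top-1 accuracy sketch (a hypothetical helper, not part of edgeai-tidl-tools):

```python
def top1_accuracy(logits_batches, labels):
    """Fraction of samples whose arg-max logit matches the ground-truth label.

    logits_batches: list of per-sample score lists (e.g. raw model outputs)
    labels: list of ground-truth class indices, same length
    """
    assert len(logits_batches) == len(labels)
    correct = sum(
        1 for scores, label in zip(logits_batches, labels)
        # arg-max over the class scores for this sample
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)
```

    The same pattern extends to top-5 accuracy or any custom metric: run inference over a labeled dataset, collect outputs, and score them in a separate pass, independent of the inference-time reporting.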

  • Hi Le,

    Please open a new E2E thread for a new question. We keep each E2E thread on a single topic for search purposes. It sounds like you are able to import the resnet101 model and are moving on to a new model. This is good; please close this thread and start a new one for the new model.

    I am also not able to open your log file.

    Regards,

    Chris

  • Hi Christina ,

    I sincerely appreciate your support in helping me successfully run the resnet101.onnx model on the AM69A board.

    I have created a new case regarding the YOLOv8s model; please help me check it.
    e2e.ti.com/.../processor-sdk-am69a-processor-sdk-am69a-issue-deploying-custom-yolov8s-model-self-trained-on-am69a-with-tidl

    Thanks and best regards,