
TDA4VM: can't compile custom model using edge-AI studio model analyzer

Part Number: TDA4VM

Hi, I'm using the Model Analyzer from Edge AI Studio to test ONNX model inference time.

The model compiles without error using edge-AI tools (version = master).

However, some models can't be compiled successfully using the Model Analyzer; the kernel dies while executing `sess.run`.

Is it possible to run model inference using both Edge AI Studio and edge-AI tools? Thanks!

Here's my modified ipynb:

#!/usr/bin/env python
# coding: utf-8
# # Custom Model Compilation and Inference using Onnx runtime
#
# In this example notebook, we describe how to take a pre-trained classification model and compile it using ***Onnx runtime*** to generate artifacts that can be deployed on the target via the ***Onnx runtime*** interface.
#
# - Pre-trained model: `resnet18v2` model trained on ***ImageNet*** dataset using ***Onnx***
#
# In particular, we will show how to
# - compile the model (during heterogeneous model compilation, supported layers are offloaded to the `TI-DSP`, and the artifacts needed for inference are generated)
# - use the generated artifacts for inference
# - perform input preprocessing and output postprocessing
# - enable debug logs
# - use deny-layer compilation option to isolate possible problematic layers and create additional model subgraphs
# - use the generated subgraphs artifacts for inference
# - perform input preprocessing and output postprocessing
#
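# As a rough illustration of the compilation step and the deny-layer option
# listed above: this is a hedged sketch only. The option names follow TI's
# public edge-ai-tidl-tools ONNX examples; the paths, values, and the helper
# name `make_compile_options` are placeholders, not taken from this notebook.

def make_compile_options(tidl_tools_path, artifacts_folder, deny_list=""):
    """Build the provider_options dict for the TIDL compilation provider."""
    return {
        "tidl_tools_path": tidl_tools_path,    # location of the TIDL import tools
        "artifacts_folder": artifacts_folder,  # where compiled artifacts are written
        "debug_level": 1,                      # >0 enables verbose per-layer logs
        "deny_list": deny_list,                # comma-separated ONNX op types to
                                               # keep on the Arm core (extra subgraphs)
    }

# Compilation itself would then look roughly like this (requires the TI SDK,
# so it is commented out here):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
#     provider_options=[make_compile_options("/path/to/tidl_tools",
#                                            "custom-artifacts/resnet18v2",
#                                            deny_list="MaxPool"), {}],
# )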
# ## Onnx Runtime based work flow
#
# The diagram below describes the steps for Onnx Runtime based work flow.
# [Figure: Onnx Runtime based work flow diagram]
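# The input preprocessing and output postprocessing steps listed above can be
# sketched in plain NumPy. This is an illustrative sketch: the mean/scale
# values below are typical ImageNet defaults for resnet-style models, an
# assumption rather than values taken from this notebook.

import numpy as np

def preprocess(img, mean=(123.675, 116.28, 103.53),
               scale=(0.017125, 0.017507, 0.017429)):
    """HWC uint8 RGB image -> NCHW float32 tensor (ImageNet-style)."""
    x = img.astype(np.float32)
    x = (x - np.array(mean, dtype=np.float32)) * np.array(scale, dtype=np.float32)
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension -> NCHW

def postprocess(logits, top_k=5):
    """Return indices of the top-k scores from a (1, num_classes) output."""
    scores = np.asarray(logits).reshape(-1)
    return np.argsort(scores)[::-1][:top_k]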