PROCESSOR-SDK-J722S: How to check the execution time of each layer during inference

Part Number: PROCESSOR-SDK-J722S


Hi team:

I am using the example code from TIDL 10.00.08.00 (edgeai-tidl-tools/examples) to run inference with an ONNX file.

I would like to know how to check the execution time of each layer during inference.

[run inference Step]

  • cd edgeai-tidl-tools
  • mkdir build && cd build
  • cmake -DFLAG2=2 -DTARGET_CPU=arm ../examples
  • make -j
  • ./bin/Release/ort_main -f /opt/model_zoo/<my_custom_model>/artifacts/  -i /opt/edgeai-test-data/images/0003.jpg --count 10

Thanks for your kind help.

Best regards,

Ken

  • Hi Ken,

    Firstly, set the debug level to 4. This will give you layer-by-layer execution times and generate int/float bin files in /tmp for comparison. If you are testing, I would also recommend using osrt_python/ort, as it is easier to use and you do not have to build anything (building may also add a delta to the results).
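
    For reference, here is a minimal sketch of how the debug level can be passed to the TIDL execution provider through onnxruntime. The option keys (artifacts_folder, debug_level) follow the osrt_python examples, and the model path is a placeholder; treat this as an illustration of the idea, not the exact API for your SDK version:

        import onnxruntime as rt

        # Delegate options for the TIDL execution provider. "debug_level": 4
        # enables the layer-level execution-time report and writes int/float
        # trace files to /tmp.
        tidl_options = {
            "artifacts_folder": "/opt/model_zoo/<my_custom_model>/artifacts/",
            "debug_level": 4,
        }

        sess = rt.InferenceSession(
            "model.onnx",  # placeholder: your compiled ONNX model
            providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
            provider_options=[tidl_options, {}],
        )

    In the osrt_python examples the debug level is set through the common runtime options instead, so you may only need to change that one value rather than build a session by hand.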

    To use osrt_python:

    1. cd to examples/osrt_python/ort

    2. vi ../model_configs.py

    3. Copy an existing model section to a new section for your model (not needed if you are using the standard models); see the sketch after these steps

    4. python3 ./onnxrt_ep.py -c -m <model name> (to compile), then python3 ./onnxrt_ep.py -m <model name> (to run inference)
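
    To illustrate step 3, here is a hypothetical model_configs.py entry. The exact field names vary between SDK versions, so copy an existing entry from the file and adjust it; every name, path, and preprocessing value below is a placeholder:

        # Hypothetical entry inside the model configuration dictionary.
        "my_custom_model": {
            "model_path": os.path.join(models_base_path, "my_custom_model.onnx"),
            "session_name": "onnxrt",        # run through the ONNX runtime flow
            "model_type": "classification",  # placeholder task type
            "mean": [123.675, 116.28, 103.53],        # placeholder preprocessing
            "scale": [0.017125, 0.017507, 0.017429],  # placeholder preprocessing
            "num_images": 10,
            "num_classes": 1000,
        },

    The key you add ("my_custom_model" here) is the <model name> you pass with -m in step 4.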

    Regards,

    Chris