TDA4VM: TIDL Questions about Implementation

Part Number: TDA4VM

Team,

A customer of mine is currently running tests on an SK-TDA4VM and they have some questions:

Question a) 

We have some cases (on big models) where inference on the J7 (through TensorFlow Lite) hangs just after the TensorFlow "load delegate" call, with no easy way to return to a safe state, so it ultimately requires a reboot of the J7. We have one model where this happens every time, even though import/compilation runs fine without warnings.

If we type CTRL-C, the messages displayed are:
    Clean up and exit while handling signal 2
   Application did not close some rpmsg_char devices

But trying to run the app again, or trying to run another model, blocks in the same way until the J7 is rebooted. We have found no locked resources on J7 Linux (no suspicious process or file lock) and no way to return to a safe state short of a reboot.

It is not a critical problem for us, just annoying. Is there a workaround?
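For what it's worth, one mitigation (not a fix) we could imagine is isolating each inference in a child process with a timeout, so the host application survives the hang and can at least log and attempt a controlled recovery; it will not release whatever firmware/rpmsg_char state is stuck on the J7. A minimal sketch, where a hanging child stands in for the real TFLite-delegate script (the script name would be our own, e.g. something like `my_tflite_infer.py`, which is hypothetical here):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_s=5.0):
    """Run one inference as a child process; kill it if it hangs."""
    try:
        done = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_s, check=False)
        return done.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run() kills the child before raising, so the
        # parent stays alive and can log / recover instead of hanging.
        return None

# Stand-in for the TFLite script that blocks after load_delegate():
hang = [sys.executable, "-c", "import time; time.sleep(60)"]
print(run_with_timeout(hang, timeout_s=1.0))
```

This keeps the host side responsive; whether the TIDL firmware side can be recovered without a reboot is a separate question.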

 

Question b)

We don't understand the difference between import/calibration through the TFLite/ONNX delegates (what we use), as described (with all parameters) here: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md

and the tool "tidl_model_import.out" (part of the edgeai-tidl-tools package) described here:

https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/08_04_00_06/exports/docs/tidl_j721e_08_04_00_16/ti_dl/docs/user_guide_html/md_tidl_model_import.html

The two have different parameter sets but are supposed to do the same thing!?
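For reference, this is roughly how the runtimes path exposes calibration: a small dictionary of delegate options instead of the importer's full parameter list. A sketch of the options we pass in Python (option names follow the edgeai-tidl-tools osrt_python examples; the paths are placeholders for our setup):

```python
# Sketch of TIDL delegate compile options for the TFLite runtimes path.
# Option names follow the edgeai-tidl-tools osrt_python examples;
# the paths are placeholders, not real locations.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",    # placeholder
    "artifacts_folder": "/path/to/artifacts",    # placeholder
    "accuracy_level": 1,   # 1 = advanced calibration, 0 = simple
    "advanced_options:calibration_frames": 20,
    "advanced_options:calibration_iterations": 50,
}

# During compilation these are handed to the import delegate, e.g.:
# import tflite_runtime.interpreter as tflite
# delegate = tflite.load_delegate("tidl_model_import_tflite.so",
#                                 compile_options)
print(sorted(compile_options))
```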

 

Question c) 

We are using the board SK-TDA4VM  with  “Processor SDK Linux for Edge AI” Software.

I downloaded (and installed on Ubuntu Linux) "Processor SDK RTOS J721E" version 08_04_00, which contains a lot of source code and documentation.

I would like to write a small test program that runs simple CNN operations like conv2D directly through the OpenVX/TIOVX layer, with no import/calibration step involved. The only tutorials I have found in the SDK so far use the io.bin and net.bin files generated by import.

Do you have any example that does simple CNN operations on OpenVX?
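In the meantime, a tiny NumPy reference for conv2D is useful as ground truth to compare any future TIOVX output against. A minimal sketch (single channel, valid padding, stride 1; these simplifications are my own, this is not SDK code):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2-D "convolution" (cross-correlation, as CNNs use)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow), dtype=image.dtype)
    for y in range(oh):
        for x in range(ow):
            # Multiply the kernel against the window at (y, x) and sum.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
k = np.ones((2, 2))
print(conv2d(img, k))
```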

 

Question d)

We would like some information on the way low-level tasks are executed on Jacinto, particularly on the C7x/MMA.

The MMALIB functions are described here 

https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/08_04_00_06/exports/docs/mmalib_02_04_00_06/docs/user_guide/group__MMALIB__CNN.html

But we couldn't find where these functions are called in the "Processor SDK RTOS J721E". One hypothesis is that we don't have all the source code, and that some binary library running on the C7x calls these functions.

Generally speaking, we would like to understand the full workflow for CNN inference, from the OpenVX graph generated during import to the execution of the elementary layers on the C7x.

Thanks,
  Robert

    1. I understand, but am not aware of any workaround for this problem.
2. Yes, both are supposed to do the same thing. The runtimes interface just takes "accuracy_level": 1 or 0 instead of requiring you to understand the backend and specify the actual calibration options. This is an attempt to simplify usage for customers using the runtimes, but it essentially does the same thing as the standalone importer. In most cases, specifying accuracy_level = 1 along with the number of calibration frames and iterations should suffice, and there shouldn't be a need to understand the advanced calibration options.
    3. I've asked my colleagues to check if we have some demo examples available here. Will revert back.
4. Your hypothesis is correct: MMALIB, as well as the TIDL code that calls it, ships in the SDK as binaries only, and the source code is not available.