SK-TDA4VM: SK-TDA4VM missing dynamic libraries file: tidl_model_import_tflite.so (TIDL Tools missing)

Part Number: SK-TDA4VM
Other Parts Discussed in Thread: TDA4VM

When trying to use the SK-TDA4VM to compile a model with the following code, provided in the "Edge AI Academy":

    tidl_delegate = [tflite.load_delegate(os.path.join(os.environ['TIDL_TOOLS_PATH'], 'tidl_model_import_tflite.so'), compile_options)]            

I get the error: tidl_model_import_tflite.so: cannot open shared object file: No such file or directory

It seems the TIDL tools are not installed in the downloaded image file ti-processor-sdk-linux-sk-tda4vm-etcher-image.zip, version 08.00.01.10. I searched the complete filesystem and the file is not there.

I checked some documentation for the installation, but it is not clear at all, because it assumes a lot of things instead of providing step-by-step instructions on how to download and install it. I would like to be able to compile a custom model using the SK-TDA4VM (Linux Ubuntu) and/or Windows 10; I do not want to depend on the TI Edge AI Cloud. How can I do it?

To repeat: I checked some documentation and it is not clear, or assumes too much, as follows:

Setting up the environment

TIDL is installed by SDK installation and can be found inside ${PSDKRA_PATH}/tidl_j7_01_00_00_00. Run the following steps to export required variables and install tools for ease of use.

For Linux Users

Export the path of TIDL root directory as TIDL_INSTALL_PATH. All examples in this guide will use paths and filenames relative to this directory.

user@ubuntu-pc$ export TIDL_INSTALL_PATH=${PSDKRA_PATH}/tidl_j7_01_00_00_00


There is no variable PSDKRA_PATH or TIDL_INSTALL_PATH, and there is no directory tidl_j7_01_00_00_00 anywhere in the filesystem.


Any help is welcome. I am trying to compare this product with the Nvidia Jetson Nano.
  • Hello,

    Thank you for your question.

    The models are compiled on a Linux host PC and NOT on the SK EVM. This is described in the SDK documentation in this section:

    http://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_00_01_10/exports/docs/inference_models.html#pub-edgeai-compile-artifacts

    This points to the EdgeAI TIDL tools on:

    https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/README.md#setup

    Once you generate the artifacts, you can use those for inference on the SK EVM.

    Could you try this and let us know if you are able to compile the models?

    Thanks

    Srik

  • Hi

    I think the author of the question is looking for a step-by-step guide for converting a TensorFlow model into a compiled TFLite delegate model. I can't find such a guide amongst the list of documentation provided.

    That is, a guide starting from the point where a TensorFlow model is converted into a quantised TFLite model, and then compiled into a TFLite offload model plus artifacts?

    That is correct, Ben; that is the information I am looking for (converting a TensorFlow model into a compiled TFLite delegate model). I tried the steps provided by Srik above, and the error changed:

         New error --> OSError: tidl_model_import_tflite.so: cannot open shared object file

         Previous error --> tidl_model_import_tflite.so: cannot open shared object file: No such file or directory

    It looks like it found the library file but could not load it; maybe there is a missing dependency.
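    One quick way to tell those two failures apart is to load the library directly with ctypes; this is a rough sketch, assuming TIDL_TOOLS_PATH is set as in the snippet above:

        import ctypes, os

        # A wrong path raises "No such file or directory"; a missing dependency
        # (e.g. an absent loader or shared library) raises a different OSError
        # naming the library that could not be resolved.
        lib = os.path.join(os.environ['TIDL_TOOLS_PATH'], 'tidl_model_import_tflite.so')
        ctypes.CDLL(lib)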

    Hi Francisco,

    Just to make sure: you are using a Linux PC for the model compilation, right? I ran the same steps and it works fine for me.


    When you run the below, is everything setup fine?

    git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
    cd edgeai-tidl-tools
    source ./setup.sh

    Thanks Srik

    Srik, yes, those steps (download from GitHub) and execution of the setup script ran without problems. Also, yes, I am running the model compilation under Linux, but this is the Linux on the SK-TDA4VM. I was checking the library tidl_model_import_tflite.so and saw it has a dependency on the library ld-linux-x86-64.so.2, and that library is not in the Linux image for the SK-TDA4VM; I think that is what is causing the error. In my environment I have Windows 10 with AMD and Nvidia GPUs (running TensorFlow and TensorFlow Lite) and the SK-TDA4VM with the Linux image from TI.
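    A minimal way to confirm the architecture mismatch from Python (a sketch):

        import platform

        # The compilation tools are x86-64 binaries, whose loader
        # ld-linux-x86-64.so.2 does not exist on an ARM image; on the
        # SK-TDA4VM this prints 'aarch64' instead of 'x86_64'.
        print(platform.machine())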

    Thank you, Francisco. That's what I was suspecting. :-) Let me clarify a couple of things. Hi Ben, this will address your comments too.

    1. Model compilation is supported at this point only on a Linux PC with x86 architecture, not on ARM-based platforms. This is because of several libraries and optimizations for faster compilation time. Typically, model compilation is a one-time effort and is done in conjunction with training and model development on a host PC with multiple processor cores.
      • The SK EVM and TDA4VM platform, with 8 TOPS of energy-efficient deep learning, is targeted more towards the edge inferencing function.
    2. We don't document the TensorFlow-to-TFLite conversion, as this is already documented by the TensorFlow community at https://www.tensorflow.org/lite/convert (a minimal conversion sketch follows the code below).
      • The good thing about TI’s edge AI support is that we use industry standard open source runtime for Tensorflow and also for ONNX and TVM.
      • This includes both model compilation and inferencing and the code examples are in: https://github.com/TexasInstruments/edgeai-tidl-tools/
      • If you look at the code snippet below from examples/osrt_python/tfl/util.py, we fetch the .tflite models directly from tfhub and then compile them using the tflite Interpreter APIs.

    'mobilenet_v1_1.0_224.tflite' : 'tfhub.dev/.../1',
    'deeplabv3_mnv2_ade20k_float.tflite' : 'github.com/.../deeplabv3_mnv2_ade20k_float.tflite',
    'ssd_mobilenet_v2_300_float.tflite' : 'github.com/.../ssd_mobilenet_v2_300_float.tflite',

    • For model compilation, we use 'tidl_model_import_tflite.so', and for model inference, we use 'libtidl_tfl_delegate.so'.
    • You can see this code in examples/osrt_python/tfl/tflrt_delegate.py

        elif args.compile:
            interpreter = tflite.Interpreter(model_path=config['model_path'], \
                            experimental_delegates=[tflite.load_delegate(os.path.join(tidl_tools_path, 'tidl_model_import_tflite.so'), delegate_options)])
        else:
            interpreter = tflite.Interpreter(model_path=config['model_path'], \
                            experimental_delegates=[tflite.load_delegate('libtidl_tfl_delegate.so', delegate_options)])
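    And for the TensorFlow-to-TFLite step mentioned in point 2, the conversion looks roughly like this (a minimal sketch following https://www.tensorflow.org/lite/convert; 'saved_model_dir' and 'model.tflite' are placeholders):

        import tensorflow as tf

        # Convert a trained SavedModel to a .tflite flatbuffer; the DEFAULT
        # optimization enables post-training quantization.
        converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_model = converter.convert()
        with open('model.tflite', 'wb') as f:
            f.write(tflite_model)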

    So, to do any performance evaluation and benchmarking, you don't have to compile the models; TI did that work already. All the models in the Modelzoo are already compiled, and you can run any of the model demos using the out-of-the-box demo application we provide in the SDK.

                  http://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/08_00_01_10/exports/docs/running_advance_demos.html

    You already have the SK EVM, so you can run any of the models in the Modelzoo in a matter of minutes using the link above. Were you able to run this? When you run these demos, you will also get inference time and FPS numbers that you can compare with other architectures.

    If you have a custom model that is not in our Modelzoo, then you do have to do the compilation. Do you have such a model for this experiment?

    Thanks

    Srik

  • Hi Srik

    Thanks for this clarification.

    When I load the "Custom model compilation and Inferencing using Tensorflow Lite runtime" example in the TI Edge AI Cloud, it has the snippet of code listed below.

    tidl_delegate = [tflite.load_delegate(os.path.join(os.environ['TIDL_TOOLS_PATH'], 'tidl_model_import_tflite.so'), compile_options)]
    interpreter = tflite.Interpreter(model_path=tflite_model_path, experimental_delegates=tidl_delegate)
    interpreter.allocate_tensors()
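    For reference, compile_options in that example is a dictionary of delegate options; a rough sketch of typical contents, based on the edgeai-tidl-tools examples (the exact keys are an assumption and may vary between SDK versions):

        import os

        compile_options = {
            'tidl_tools_path': os.environ['TIDL_TOOLS_PATH'],
            'artifacts_folder': './custom-artifacts',  # should exist and be empty before compiling
            'tensor_bits': 8,
            'advanced_options:calibration_frames': 4,
            'advanced_options:calibration_iterations': 4,
        }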

    When I change the tflite model to my custom model (and create an artifacts folder), the Jupyter Notebook kernel crashes with the following message:

    "The kernel appears to have died. It will restart automatically"

    Does this occur, because the cloud tool is connected to a Jacinto EVM and not a Linux x86 PC (as you have mentioned in your reply)?

    I will try and compile a tflite model with the workflow you have provided in your message, above.

    Cheers.

    Thank you Srik for the clarification. That means I need to set up VirtualBox with Linux to test the complete flow. I will first try the compilation tools for Windows, which are run from the command prompt using tidl_model_import.out.exe; this process will be easier for me than a VirtualBox setup.

    I already tested some of the models and demos TI provides, and the performance was excellent with the SK-TDA4VM; this is why I now want to test the complete flow with a custom model. I did not play around with the Edge AI Cloud because, when I generate the artifacts with the TI Edge AI Cloud (compilation), I did not see an option to download the artifacts and run them on the SK-TDA4VM; that is why I was looking for a process independent of the TI Edge AI Cloud to test my custom model.

    I will now try the Windows compilation, as mentioned above, and let you know how it goes. Thank you so much.

    Hi Francisco, it's great that you already tried the demos and saw the performance.

    Looks like you are already aware of the TIDL import flow from this link. This should work on Windows/PC.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/07_03_00_07/exports/docs/tidl_j7_02_00_00_07/ti_dl/docs/user_guide_html/md_tidl_fsg_import_tool_design.html

    This flow is what we use for a specific set of customers. For broad market customers, we promote the open source run-time APIs for faster and easier development.

    It's pretty straightforward to take the artifacts from the EdgeAI Cloud to the SK EVM. The artifacts are essentially in 2 files.

    What model are you trying to compile? Is this outside our Modelzoo?

    Thanks

    Srik

  • I believe on the cloud tool, the compilation is run on host PC and the inference on the actual Jacinto EVM. 

    We have example notebooks that demonstrate the whole flow here. Can you try this and make sure it runs fine?

    One example notebook you can try is found under here. From the menu: Evaluation -> Custom -> Tflite.

    This will open up the custom-model-tfl notebook, which has both compilation and artifacts sections. You can run this and make sure it's working.

    You can see modules 3 and 4 of the EdgeAI Academy to understand what files to check for proper compilation, etc.

    BTW, what custom model are you trying to compile? 

    Thanks

    Srik

  • Hi Srik

    All I have done is the following:

    1) (works) On a Mac Mini M1 run the TensorFlow Fashion MNIST example in a Jupyter Notebook. OK. (https://www.tensorflow.org/tutorials/keras/classification)

    2) (works) Convert the model into a TensorFlow Lite Model and test in a TFLite runtime interpreter. OK.

    3) (works) Import the model into an EdgeAI Cloud Jupyter Notebook and test in a TFLite runtime interpreter. OK.

    4) (Fails) Replace the TFLite model in the "Custom model compilation and Inferencing using Tensorflow Lite runtime" example. The kernel dies, as described in my previous reply. Not OK.

    I have since tried to compile the TFLite model in Google Colab to run on an EdgeTPU and this has been successful. So I am not sure why step 4) fails, using the EdgeAI Cloud tool.

    Regards

    Ben

    Got it, Ben. For step 3, it's just running on the ARM. For step 4, you are now compiling that model; if the TIDL compiler does not understand all the layers, it can sometimes crash. If you can share the tflite model that you generated in step 2, I can check with our TIDL experts and give suggestions. Sometimes minor tweaks are needed to get these models to run on TIDL. That is the purpose of our Modelzoo.

    We also have a webinar coming up in December to dig deeper into model compilation specifically for Yolo but the ideas would be similar.

    Hi Francisco,

    One other comment: taking artifacts from the Cloud tool or the Linux PC tool is straightforward. You just need to take 3 files: *net.bin, *io_1.bin, and allowedNode.txt.

    These will be in your artifacts directory that you specify in the delegate compile options.
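    If it helps, the three files can be bundled on the PC side and unpacked on the SK EVM; a hypothetical helper (the directory and archive names are placeholders):

        import glob, os, tarfile

        # Collect the three artifact files from the compile output directory
        # and bundle them for transfer to the SK EVM.
        files = []
        for pattern in ('*net.bin', '*io_1.bin', 'allowedNode.txt'):
            files += glob.glob(os.path.join('custom-artifacts', '**', pattern), recursive=True)
        with tarfile.open('artifacts.tar.gz', 'w:gz') as tar:
            for f in files:
                tar.add(f)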

    Thanks

    Srik

  • Hi Srik

    The file upload function does not seem to work in the Reply -> Insert -> Image/Video/File dialogue.

    Thank you Srik for the information. I was able to generate the artifacts using the Windows tool (tidl_model_import.out.exe); they were generated without error (see the final log lines below):

         ------------------ Network Compiler Traces -----------------------------
         successful Memory allocation
         ****************************************************
         ** ALL MODEL CHECK PASSED **
         ****************************************************

    But it did not generate allowedNode.txt, so I took the one provided in the tutorial for the model ssd_mobilenet_v1_coco_2018_01_28.tflite (this is the model I am playing with for now to test the whole flow, but I want to generate all the files instead of using the provided ones).

    After moving the artifacts to the SK-TDA4VM and running them with the Jupyter notebook from the tutorial (4_Infer_SK_EVM_DL_acceleration.ipynb), the delegate loads without error (see the Jupyter notebook log below):

         [I 15:35:26.782 NotebookApp] Replaying 6 buffered messages

         Number of subgraphs:1 , 64 nodes delegated out of 64 nodes

         Calling appInit() in TIDL-RT!
         APP: Init ... !!!
         MEM: Init ... !!!
         MEM: Initialized DMA HEAP (fd=45) !!!
         MEM: Init ... Done !!!
         IPC: Init ... !!!
         IPC: Init ... Done !!!
         REMOTE_SERVICE: Init ... !!!
         REMOTE_SERVICE: Init ... Done !!!
         3024.579142 s: GTC Frequency = 200 MHz
         APP: Init ... Done !!!
         3024.579384 s: VX_ZONE_INIT:Enabled
         3024.579481 s: VX_ZONE_ERROR:Enabled
         3024.579566 s: VX_ZONE_WARNING:Enabled
         3024.580204 s: VX_ZONE_INIT:[tivxInitLocal:111] Initialization Done !!!
         3024.580561 s: VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!

    But when running the code line interpreter.invoke() to do the inference, the Python notebook kernel crashes. I think something is wrong with the files generated by the Windows tool, though I am not sure what or why, because if I use the ones (170_tidl_net.bin, 170_tidl_io_1.bin) provided by the tutorial it works. But, as I mentioned above, I want to generate those files with the provided tools before experimenting with custom models. Below are the parameters passed to the Windows tool to compile with delegation (the import/compilation completed without error on Windows, as mentioned above):

    # 0: Caffe(.caffemodel and .prototxt files)
    # 1: TensorFlow (.pb files)
    # 2: ONNX (.onnx files)
    # 3: tfLite (.tflite files)
    modelType = 3

    # Bit-depth for model parameters (weights and bias): default (8), minimum (0), maximum (16)
    numParamBits = 8

    # Bit-depth for layer activation: default (8), minimum (0), maximum (16)
    numFeatureBits = 8

    # Quantization method
    # 0 : Caffe-jacinto specific
    # 1 : Dynamic
    # 2 : Fixed (default)
    # 3 : Power-of-2 Dynamic
    quantizationStyle = 2  # Tried with 3 also and same result

    # Path to input model network definition file (path-param)
    inputNetFile = "C:\Wrkdir\Machine_Learning_and_AI\TI\code\HelloWorld_PC_Cloud_TDA4VM_SK_EVM\TFL-OD-200-ssd-mobV1-coco-mlperf-300x300\ssd_mobilenet_v1_coco_2018_01_28.tflite"

    # Path to output TIDL model file (path-param)
    outputNetFile = "C:\Wrkdir\Machine_Learning_and_AI\TI\code\HelloWorld_PC_Cloud_TDA4VM_SK_EVM\artifacts-mzoo\170_tidl_net_.bin"

    # Path to output TIDL buffer descriptor file (path-param)
    outputParamsFile = "C:\Wrkdir\Machine_Learning_and_AI\TI\code\HelloWorld_PC_Cloud_TDA4VM_SK_EVM\artifacts-mzoo\170_tidl_io_"

    # Data normalization flag for each input feature (multi-param): 1 = true; 0 = false (default)
    inDataNorm = 1

    # Input features parameters *** TBD
    inMean = 128 128 128
    inScale = 0.0078125 0.0078125 0.0078125
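    # Note: inScale = 1/128 = 0.0078125, so with inMean = 128 the 0..255 input
    # range maps to approximately -1..1, matching the model's training normalization.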
    inWidth = 300
    inHeight = 300

    # Number of channel for each input feature (multi-param). RGB (3) for colors images
    inNumChannels = 3

    # *** TBD
    inDataNamesList = "normalized_input_image_tensor"
    outDataNamesList = "BoxPredictor_0/BoxEncodingPredictor/BiasAdd,BoxPredictor_0/ClassPredictor/BiasAdd,BoxPredictor_1/BoxEncodingPredictor/BiasAdd,BoxPredictor_1/ClassPredictor/BiasAdd,BoxPredictor_2/BoxEncodingPredictor/BiasAdd,BoxPredictor_2/ClassPredictor/BiasAdd,BoxPredictor_3/BoxEncodingPredictor/BiasAdd,BoxPredictor_3/ClassPredictor/BiasAdd,BoxPredictor_4/BoxEncodingPredictor/BiasAdd,BoxPredictor_4/ClassPredictor/BiasAdd,BoxPredictor_5/BoxEncodingPredictor/BiasAdd,BoxPredictor_5/ClassPredictor/BiasAdd"
    inData = "C:\Wrkdir\Machine_Learning_and_AI\TI\code\HelloWorld_PC_Cloud_TDA4VM_SK_EVM\artifacts-mzoo\detection_list.txt"

    # Post-processing on output tensor. The following types are currently supported:
    # 0 : No post-processing
    # 1 : Classification (TOP-1 and TOP-5 accuracy)
    # 2 : Object detection (Draw bounding-boxes around detected objects)
    # 3 : Semantic segmentation (Per-pixel color blending)
    postProcType = 2

    # Flag to execute network-compiler after import: 1 = true (default) / 0 = false
    executeNetworkCompiler = 1

    # Id for the device
    # TDA4VMID = 0
    # TIDL_TDA4AEP = 1
    # TIDL_TDA4AM = 2
    # TIDL_TDA4AMPlus = 3
    deviceName = 0

    # Path to configuration file for network compiler tool (path-param)
    perfSimConfig = "C:\Wrkdir\Machine_Learning_and_AI\TI\code\HelloWorld_PC_Cloud_TDA4VM_SK_EVM\artifacts-mzoo\device_config.cfg"

    # Path to statistics and range collection tool (path-param)
    tidlStatsTool = "c:\wrkdir\Tools\PC_dsp_test_dl_algo.out.exe"

    # Path to network graph compiler tool (path-param)
    perfSimTool = "c:\wrkdir\Tools\ti_cnnperfsim.out.exe"
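    For reference, the import tool takes this configuration file as its argument; a hedged sketch of the invocation (the tool location and config file name are assumptions):

        import subprocess

        # Run the TIDL import tool on the config file above; 'import_config.txt'
        # is a placeholder for wherever these parameters were saved.
        subprocess.run([r'c:\wrkdir\Tools\tidl_model_import.out.exe', 'import_config.txt'], check=True)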

    Thank you so much for the support and guidance. I really would like to use the TI Edge AI product, because the performance I saw with the Nvidia Jetson Nano was not what I was expecting. I also did Arduino edge AI using TensorFlow Lite Micro (with the Arduino and Harvard libraries) and it worked well, but due to the hardware limitations, the models are also limited in accuracy.

  • Srik,

    After reading a lot of documentation, I saw that I needed to include 2 more parameters (metaArchType, metaLayersNamesList) in the config file for the Windows import program when doing object detection. After including those 2 parameters, the file generated with the Windows import tool did not crash the Python kernel on the SK-TDA4VM, and it seems to delegate properly according to the Jupyter log, with good performance (253 fps). I also saw that adjusting some parameters in device_config.cfg changes the performance.

    I also generated the artifacts using the TI Edge AI Cloud, which worked smoothly, and moved (downloaded) the 3 files generated by the TI Edge AI Cloud to my SK-TDA4VM.

    In both cases, generating the artifacts with the TI Edge AI Cloud and generating them with the Windows import tool, when doing the inference for object detection on the SK-TDA4VM using the tutorial file 4_Infer_SK_EVM_DL_acceleration.ipynb, the box around the object (dog_cat.jpg) is not displayed or visible (there is no error when running the complete Jupyter Notebook). I think something is still missing somewhere in the configuration files for the artifact generation. Again, if I use the 3 artifact files provided in the tutorial HelloWorld_PC_Cloud_TDA4VM_SK_EVM.zip, the SK-TDA4VM draws the box around the object properly. So it looks like the files in the tutorial were generated with different parameters than the ones generated by the TI Edge AI Cloud and the ones I am passing to the Windows import tool. Do you know what might cause this problem? This is the only thing pending on my side to have the complete process flow clear and working properly using both the TI Edge AI Cloud and the Windows import tool.

    Thank you so much Srik

  • Hi Francisco,

    I think something might have changed with the latest version of the tools. I re-ran the compilation today and I don't see the inference working with ssd-mobv1.

    While investigating, I quickly tried another model, ssd_mobv2, and that works fine. Please see the attached code directory.

    Please run this file and see if it works fine on the Cloud tool. You can take the same artifacts onto the SK EVM too. Please note that, for this model, I had to increase the calib_iterations to 16 to get good results.
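    In the notebook, this corresponds to the calibration entry in the delegate compile options; the key name below follows the edgeai-tidl-tools examples and may vary by version:

        compile_options['advanced_options:calibration_iterations'] = 16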

    This uses the model from https://github.com/mlcommons/mobile_models/blob/main/v0_7/tflite/ssd_mobilenet_v2_300_float.tflite?raw=true. It is also the COCO dataset with a 300x300 image, so the same script works as-is.

    3_EdgeAI_Model_Compile_Run_v2.ipynb is what you need to run on the cloud tool.

    Please let me know if this works. I'll check what needs to be changed for the ssd-mobv1 model compilation.

    BTW, you may already know this, but a quick upload of these files onto the Cloud can be done using ZipFile as below.

    from zipfile import ZipFile

    # Extract the uploaded archive into the current notebook directory.
    with ZipFile('HelloWorld_PC_Cloud_TDA4VM_SK_EVM_v2.zip', 'r') as zipFile:
        zipFile.extractall(path='.')

    HelloWorld_PC_Cloud_TDA4VM_SK_EVM_v2.tar.gz

    Thanks

    Srik

  • Srik,

    I just finished testing the files you sent and they work perfectly. The artifacts were generated in the TI Edge AI Cloud and transferred to my SK-TDA4VM, and I get the same result as the Cloud when doing inference on the SK-TDA4VM. Everything seems to be working with that model. Thank you so much for all your support and guidance. I will continue playing with the SK-TDA4VM.