
Using TIDL with Tensorflow v2 for BeagleBone AI target

Other Parts Discussed in Thread: AM5729, GRACE

Hi there

I have a neural network design that I have created using Tensorflow v2 and Keras. I have converted the network to a Tensorflow Lite version using the convert function within Tensorflow:

# Convert the model (assumes `model` is the trained Keras model; TF 2.x).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
  f.write(tflite_model)
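As a quick sanity check before running the import tool (my own suggestion, not part of the TIDL flow): a valid .tflite file is a FlatBuffer carrying the file identifier "TFL3" at byte offset 4, whereas a TensorFlow .pb is a binary protobuf. A minimal sketch, assuming the file path above:

```python
import os

# Minimal sanity check: a TFLite model is a FlatBuffer whose 4-byte
# file identifier "TFL3" sits at byte offset 4 (after the root offset).
# A TensorFlow .pb (binary protobuf) will fail this check.
def looks_like_tflite(path):
    with open(path, 'rb') as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b'TFL3'

if os.path.exists('model.tflite') and not looks_like_tflite('model.tflite'):
    print('model.tflite is not a TFLite FlatBuffer - the TIDL import '
          'tool will not be able to parse it')
```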

I wish to run my network on the AM5729 found on the BeagleBone AI board. I understand that the ti-processor-sdk-linux-am57xx-06.03.00.106 does not yet support the BeagleBone AI board. However, the Debian image distributed for the BeagleBone AI has all the TIDL libraries packaged with it.

Therefore, the only thing I need to be able to do is convert the TensorFlow Lite network into the TIDL format using the TIDL import tool. I assume I can do this using the following flow:

1. download the ti-processor-sdk-linux-am57xx-06.03.00.106-linux-x86-Install.bin from TI 

2. run the installer on a linux (Ubuntu) host 

3. navigate to the tidl_model_import.out tool 

4. run the tidl_model_import.out tool on an appropriate configuration file (below) 

I run the following command to invoke the tidl_model_import tool:

"/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/linux-devkit/sysroots/x86_64-arago-linux/usr/bin/tidl_model_import.out" model_config.txt

With the contents of my configuration being the following: 

randParams = 0 

modelType = 1 

quantizationStyle = 1

quantRoundAdd = 50 

numParamBits = 8 

inputNetFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model.tflite.pb"
inputParamsFile = "NA"
outputNetFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model_Net.bin"
outputParamsFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model_params.bin"

sampleInData = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/img_0.png"

tidlStatsTool = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/linux-devkit/sysroots/x86_64-arago-linux/usr/bin/eve_test_dl_algo_ref.out"

I get the following output from the import tool: 

alex@alex-lapamatop:~/ti-processor-sdk-linux-am57xx-evm-06.03.00.106$ ./convert.sh 

=============================== TIDL import - parsing ===============================

TF Model (Proto) File  : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model.tflite.pb  
TIDL Network File      : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model_Net.bin  
TIDL Params File       : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/model_params.bin  

ERROR: Reading binary proto file

Num of Layer Detected :   0 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Num|TIDL Layer Name               |Out Data Name                                     |Group |#Ins  |#Outs |Inbuf Ids                       |Outbuf Id |In NCHW                             |Out NCHW                            |MACS       |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total Giga Macs : 0.0000
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

=============================== TIDL import - calibration ===============================


Processing config file ./tempDir/qunat_stats_config.txt !

Running TIDL simulation for calibration. 


Processing Frame Number : 0 

End of config list found !

Is anyone able to tell me where I have gone wrong? 

Thanks

  • Hi,

    The error is because TIDL does not support importing TensorFlow Lite models. Please try with Caffe/TensorFlow/ONNX models; for more details, refer to the TIDL datasheet/user guide.

    Thanks,

    Praveen

  • Hi there, 

    Thanks for your speedy response. 

    Please note the documentation for TIDL reads "The supported operators/layers for Tensorflow/TensorFlow Lite/ONNX/Caffe are listed below."

    I am currently building the arago-base-tisdk-image version of the SDK from source so that I can try to build the tidl_utils found at https://git.ti.com/cgit/tidl/tidl-utils/. Looking through the repo suggests there is support for TensorFlow Lite (hopefully this means it has been updated to work with the newest version of TensorFlow, since TensorFlow indicates that the old method of freezing and optimising graphs is now deprecated).

    I am having issues installing the tisdk:

    After installing the tisdk I do not have the protobuf-native/usr/include and protobuf-native/usr/lib directories that are required by tidl_utils, as per the instructions found in tidl_utils/src/importTool (below).

     

    This folder contains the source files of TIDL import tool. To build the import tool in x86 Linux, follow these steps:
    1. Setup environment variables:
       - to build for x86 executable: 
          export PLATFORM_BUILD=x86
          export PROTOBUF_LIB_DIR=<protobuf lib folder>, e.g. in yocto, ~/yocto-plsdk/tisdk/build/arago-tmp-external-linaro-toolchain/sysroots-components/x86_64/protobuf-native/usr/lib
          export PROTOBUF_INC_DIR=<protobuf inc folder>, e.g. in yocto, ~/yocto-plsdk/tisdk/build/arago-tmp-external-linaro-toolchain/sysroots-components/x86_64/protobuf-native/usr/include
       - to build for ARM executable:
          export LINUXENV=oearm
          export LINUX_BUILD_TOOLS=<ARM toolchain>/bin/arm-linux-gnueabihf-
          export PROTOBUF_LIB_DIR=<protobuf lib folder>, e.g. ~/yocto-plsdk/tisdk/build/arago-tmp-external-linaro-toolchain/sysroots-components/armv7ahf-neon/protobuf/usr/lib
          export PROTOBUF_INC_DIR=<protobuf inc folder>, e.g. ~/yocto-plsdk/tisdk/build/arago-tmp-external-linaro-toolchain/sysroots-components/armv7ahf-neon/protobuf/usr/include
       - add path of protoc to environment variable PATH, e.g., export PATH=$PATH:~/yocto-plsdk/tisdk/build/arago-tmp-external-linaro-toolchain/sysroots-components/x86_64/protobuf-native/usr/bin
       - Note that when building for x86, "LINUXENV" must be unset if it's been set, and same for "PLATFORM_BUILD" when building for ARM. 
    
    2. Go to folder modules/ti_dl/utils/caffeImport and run protoc to generate .cc and .h files:
       protoc --proto_path=. --cpp_out=. caffe.proto
    
    3. Go to folder modules/ti_dl/utils/tfImport and run protoc to generate .cc and .h files:
       source genProtoC.sh
    
    4. Go to folder modules/ti_dl/utils/tidlModelImport and run makefile:
       make 
    

    How do I get these libraries?

     


    Thanks 

    Alex 

  • Hi Alex, yes, TFLite is supported. If you only want to import your model to TIDL, I don't think you need to bother getting https://git.ti.com/cgit/tidl/tidl-utils/ built in your setup.

    With respect to the TIDL import tool, I think you are using the wrong model type; it should be "3".

    I just noticed that the table showing TIDL import params in this link is not updated: https://software-dl.ti.com/processor-sdk-linux/esd/docs/06_03_00_106/linux/Foundational_Components/Machine_Learning/tidl.html#introduction-to-programming-model

    The link and configuration file below are for our Jacinto processors, but I think you can use them as guidance:

    tidl_import_mobileNetv1.txt
    modelType          = 3
    numParamBits      = 12
    quantizationStyle  = 2
    inputNetFile      = "/home/paula/tidl_nn_models/public/tflite/mobilenet_v1_1.0_224.tflite"
    outputNetFile      = "../../test/testvecs/config/tidl_models/tflite/tidl_net_tflite_mobilenet_v1_1.0_224.bin"
    outputParamsFile   = "../../test/testvecs/config/tidl_models/tflite/tidl_io_tflite_mobilenet_v1_1.0_224_"
    inDataNorm  = 1
    inMean = 128 128 128
    inScale =  0.0078125 0.0078125 0.0078125
    resizeWidth = 256
    resizeHeight = 256
    inWidth  = 224
    inHeight = 224 
    inNumChannels = 3
    inData = ../../test/testvecs/config/imageNet_sample_val_bg.txt
    postProcType = 1
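    As a side note on the normalisation above (my reading of the parameters, worth double-checking against the TIDL docs): inScale = 0.0078125 is exactly 1/128, so together with inMean = 128 each 8-bit pixel is mapped from [0, 255] into roughly [-1, 1):

```python
# (pixel - inMean) * inScale, with inMean = 128 and
# inScale = 0.0078125 = 1/128, maps [0, 255] -> [-1.0, 0.9921875].
def normalise(pixel, mean=128, scale=0.0078125):
    return (pixel - mean) * scale

print(normalise(0))    # -> -1.0
print(normalise(255))  # -> 0.9921875
```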
    

    hope this helps,

    Paula

  • Hi there,

    Yes I managed to get my tflite model to convert using the following configuration file: 

    randParams = 0 
    
    modelType = 3 
    
    quantizationStyle = 1
    
    quantRoundAdd = 50 
    
    numParamBits = 8 
    
    
    
    inputNetFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier.tflite"
    inputParamsFile = "NA"
    outputNetFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier_net.bin"
    outputParamsFile = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier_param.bin"
    
    inElementType = 0 
    rawSampleInData = 1
    inNumChannels = 1
    inWidth = 100 
    inHeight = 100
    preProcType = 2
    sampleInData = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/img_0.png"
    
    tidlStatsTool = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/linux-devkit/sysroots/x86_64-arago-linux/usr/bin/eve_test_dl_algo_ref.out"
    

    And I got the following output from the tidl conversion tool: 

    alex@alex-lapamatop:~/ti-processor-sdk-linux-am57xx-evm-06.03.00.106$ ./convert.sh 
    
    =============================== TIDL import - parsing ===============================
    
    TFLite Model (Flatbuf) File  : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier.tflite  
    TIDL Network File      : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier_net.bin  
    TIDL IO Info File      : /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier_param.bin  
    TFLite node size: 10
    
    Num of Layer Detected :  11 
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      Num|TIDL Layer Name               |Out Data Name                                     |Group |#Ins  |#Outs |Inbuf Ids                       |Outbuf Id |In NCHW                             |Out NCHW                            |MACS       |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        0|TIDL_DataLayer                |sequential_input                                  |     0|    -1|     1|  x   x   x   x   x   x   x   x |  0       |       0        0        0        0 |       1        1      100      100 |         0 |
        1|TIDL_ConvolutionLayer         |uential_1/sequential/conv2d/BiasAdd/ReadVariableOp|     1|     1|     1|  0   x   x   x   x   x   x   x |  1       |       1        1      100      100 |       1       32       98       98 |   2765952 |
        2|TIDL_PoolingLayer             |sequential_1/sequential/max_pooling2d/MaxPool     |     1|     1|     1|  1   x   x   x   x   x   x   x |  2       |       1       32       98       98 |       1       32       49       49 |    307328 |
        3|TIDL_ConvolutionLayer         |ntial_1/sequential/conv2d_1/BiasAdd/ReadVariableOp|     1|     1|     1|  2   x   x   x   x   x   x   x |  3       |       1       32       49       49 |       1       16       47       47 |  10179072 |
        4|TIDL_PoolingLayer             |sequential_1/sequential/average_pooling2d/AvgPool |     1|     1|     1|  3   x   x   x   x   x   x   x |  4       |       1       16       47       47 |       1       16       23       23 |     33856 |
        5|TIDL_ConvolutionLayer         |ntial_1/sequential/conv2d_2/BiasAdd/ReadVariableOp|     1|     1|     1|  4   x   x   x   x   x   x   x |  5       |       1       16       23       23 |       1        8       21       21 |    508032 |
        6|TIDL_PoolingLayer             |sequential_1/sequential/flatten/Reshape           |     1|     1|     1|  5   x   x   x   x   x   x   x |  6       |       1        8       21       21 |       1        1        1      800 |      4000 |
        7|TIDL_InnerProductLayer        |l/dense/Relu;sequential_1/sequential/dense/BiasAdd|     1|     1|     1|  6   x   x   x   x   x   x   x |  7       |       1        1        1      800 |       1        1        1       64 |     51264 |
        8|TIDL_InnerProductLayer        |sequential_1/sequential/dense_1/BiasAdd           |     1|     1|     1|  7   x   x   x   x   x   x   x |  8       |       1        1        1       64 |       1        1        1        2 |       130 |
        9|TIDL_SoftMaxLayer             |Identity                                          |     1|     1|     1|  8   x   x   x   x   x   x   x |  9       |       1        1        1        2 |       1        1        1        2 |         2 |
       10|TIDL_DataLayer                |Identity                                          |     0|     1|    -1|  9   x   x   x   x   x   x   x |  0       |       1        1        1        2 |       0        0        0        0 |         0 |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total Giga Macs : 0.0138
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Writing the TIDL converted TFLite Model (FlatBuffers) to File: /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/probability_colour_cnn_classifier_tidl_am5.tflite  
    Create Tensor 0, sequential_input 
    Create Tensor 1, Identity 
    
    =============================== TIDL import - calibration ===============================
    
    
    Processing config file ./tempDir/qunat_stats_config.txt !
    
    Running TIDL simulation for calibration. 
    
      0, TIDL_DataLayer                ,  0,  -1 ,  1 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  0 ,    0 ,    0 ,    0 ,    0 ,    1 ,    1 ,  100 ,  100 ,
      1, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  0 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  1 ,    1 ,    1 ,  100 ,  100 ,    1 ,   32 ,   98 ,   98 ,
      2, TIDL_PoolingLayer             ,  1,   1 ,  1 ,  1 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  2 ,    1 ,   32 ,   98 ,   98 ,    1 ,   32 ,   49 ,   49 ,
      3, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  2 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  3 ,    1 ,   32 ,   49 ,   49 ,    1 ,   16 ,   47 ,   47 ,
      4, TIDL_PoolingLayer             ,  1,   1 ,  1 ,  3 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  4 ,    1 ,   16 ,   47 ,   47 ,    1 ,   16 ,   23 ,   23 ,
      5, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  4 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  5 ,    1 ,   16 ,   23 ,   23 ,    1 ,    8 ,   21 ,   21 ,
      6, TIDL_PoolingLayer             ,  1,   1 ,  1 ,  5 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  6 ,    1 ,    8 ,   21 ,   21 ,    1 ,    1 ,    1 ,  800 ,
      7, TIDL_InnerProductLayer        ,  1,   1 ,  1 ,  6 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  7 ,    1 ,    1 ,    1 ,  800 ,    1 ,    1 ,    1 ,   64 ,
      8, TIDL_InnerProductLayer        ,  1,   1 ,  1 ,  7 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  8 ,    1 ,    1 ,    1 ,   64 ,    1 ,    1 ,    1 ,    2 ,
      9, TIDL_SoftMaxLayer             ,  1,   1 ,  1 ,  8 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  9 ,    1 ,    1 ,    1 ,    2 ,    1 ,    1 ,    1 ,    2 ,
     10, TIDL_DataLayer                ,  0,   1 , -1 ,  9 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  0 ,    1 ,    1 ,    1 ,    2 ,    0 ,    0 ,    0 ,    0 ,
    Layer ID    ,inBlkWidth  ,inBlkHeight ,inBlkPitch  ,outBlkWidth ,outBlkHeight,outBlkPitch ,numInChs    ,numOutChs   ,numProcInChs,numLclInChs ,numLclOutChs,numProcItrs ,numAccItrs  ,numHorBlock ,numVerBlock ,inBlkChPitch,outBlkChPitc,alignOrNot 
          1           40           16           40           32           14           32            1           32            1            1            8            1            1            4            7          640          448            1    
          3           49            7           49           47            5           47           32           16           32           16            8            1            2            1           10          343          235            1    
          5           23           23           23           21           21           21           16            8           16            8            8            1            2            1            1          529          441            1    
    
    Processing Frame Number : 0 
    
     Layer    1 : Out Q :    94757 , TIDL_ConvolutionLayer, PASSED  #MMACs =     2.77,     3.65, Sparsity : -31.94
     Layer    2 :TIDL_PoolingLayer,     PASSED  #MMACs =     0.08,     0.08, Sparsity :   0.00
     Layer    3 : Out Q :    31670 , TIDL_ConvolutionLayer, PASSED  #MMACs =    10.18,    10.18, Sparsity :   0.00
     Layer    4 : Out Q :    49469 , TIDL_PoolingLayer,     PASSED  #MMACs =     0.01,     0.01, Sparsity :   0.00
     Layer    5 : Out Q :    41944 , TIDL_ConvolutionLayer, PASSED  #MMACs =     0.51,     0.51, Sparsity :   0.00
     Layer    6 : Out Q :   104043 , TIDL_PoolingLayer,     PASSED  #MMACs =     0.00,     0.00, Sparsity :   0.00
     Layer    7 : Out Q :    87067 , TIDL_InnerProductLayer,     PASSED  #MMACs =     0.00,     0.00, Sparsity :   0.00
     Layer    8 : Out Q :   115654 , TIDL_InnerProductLayer,     PASSED  #MMACs =     0.00,     0.00, Sparsity :   0.00
     Layer    9 :-------Max Index    1 : 137 ------- #MMACs =     0.00,     0.00, Sparsity :   0.00
    End of config list found !

    I now need to know how to use this information to determine the output of the network as it runs on the AM57xx processor.

  • Hi Alex, do you know if the new TFLite model (probability_colour_cnn_classifier_tidl_am5.tflite) has a complete offload or a partial offload? Could we check with Netron (or similar)? Some details here: 3.15.4.5. Tensorflow Lite heterogeneous execution with TIDL compute offload

    I believe you are familiar with section "3.15.4.5.3. Helper scripts for out of box experience", but if not, please take a look and try to run our OOB demos to get familiar with them, as you can use them as a baseline or starting point for testing your model.

    Our TFLite implementation of custom-op is a basic one and, frankly, only lightly tested. The custom-op approach only creates two subgraphs: mainly, we create a TIDL subgraph until we find an unsupported layer, at which point we fall back to the ARM.

    However, if this simple approach works OK for your model, then great =)

    But just FYI, in case the above approach doesn't work for your model/use case: we recently implemented a heterogeneous subgraph implementation with NEO-AI-DLR/TVM. It has been tested on several models, supports multiple subgraphs, and has been rolled out as part of the AWS SageMaker service.

    You could use an AM57x board to import the models and create the artifacts, or you can use AWS SageMaker to build those artifacts (by selecting platform Sitara_am57x as a target device - example snapshot below) and use those artifacts directly on our board. If you want to explore this approach, let me know and I can point you to some demos.

    Thank you,

    Paula

     

      

  • Hi there 

    It is a full offload model: 

    I have noticed in this case that the output buffer size when running the network is 2 Bytes.

    int output_buff_size = eop->GetOutputBufferSizeInBytes();

    I have tried running my network with a number of different inputs. Below is an example of the output of the network for each input image.

    Input images are 100x100 pixels, 3 colour channels.


    int top_candidates = 3; 
    const int k = top_candidates;
    unsigned char *out = (unsigned char *) eop->GetOutputBufferPtr();
    
    // NB: GetOutputBufferSizeInBytes() reports only 2 bytes for this
    // network, so reading k = 3 bytes runs past the end of the output
    // buffer and byte 2 is undefined.
    for (int i = 0; i < k; i++) {
      if(configuration.enableApiTrace) {
          std::bitset<8> y(out[i]);
          std::cout << "push(" << i << "):"  << y << std::endl;
      }
    }

    Output: 

     

    img_0 (square)
    push(0):11001000 
    push(1):01110101
    push(2):00011001
    
    img_1 (no square)
    push(0):11001000
    push(1):00000101
    push(2):10001011
    
    img_2 (square)
    push(0):11001000
    push(1):00000101
    push(2):01101111
    
    img_3 (square)
    push(0):11001000
    push(1):10100101
    push(2):11010010
    
    img_4 (square)
    push(0):11001000
    push(1):00100101
    push(2):11101111
    
    img_5 (no square)
    push(0):11001000
    push(1):11000101
    push(2):01110001
    
    img_6 (square)
    push(0):11001000
    push(1):10000101
    push(2):00111010
    
    img_7 (no square)
    push(0):11001000
    push(1):00000101
    push(2):01100110

    It is odd that byte 0 appears to always contain the same information, 11001000. Is there something "special" about byte 0?

    Since the output layer is softmax, I expect the sum of the probabilities to be 1 for the 2 potential classes.
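    A sketch of what I would expect, assuming (and this is an assumption on my part) that the 8-bit outputs are linearly quantised probabilities with scale 1/255; the Out Q values printed by the import tool may imply a different scaling:

```python
# Hypothetical dequantisation of the 2-byte softmax output, assuming a
# linear 1/255 scale (NOT confirmed for TIDL - the Out Q values may
# imply a different Q-format).
def dequantise(raw_bytes, scale=1.0 / 255.0):
    return [b * scale for b in raw_bytes]

# Made-up example bytes (not the data above); under this assumption the
# two class probabilities should sum to roughly 1.
probs = dequantise([200, 55])
print(sum(probs))  # ~1.0
```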

     

  • Hi Alex, which type of input data are you using? BMP? PNG? Raw? One option is that the input format is not what is expected. Not sure how you are testing it, but the TFLite OOB demos work with BMP images, and inside the demo there is a conversion from NHWC (pixel interleaved) to NCHW (plane interleaved), which is the format used by TIDL. Maybe worth double-checking that the input format is as expected.
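    The NHWC-to-NCHW conversion described above can be sketched in a few lines (a pure-Python illustration of the index arithmetic; the OOB demos do the equivalent in C++ during preprocessing):

```python
# Convert a flat NHWC (pixel-interleaved) buffer into NCHW
# (plane-interleaved) order, which is the layout TIDL expects.
def nhwc_to_nchw(flat, height, width, channels):
    planar = [0] * (channels * height * width)
    for y in range(height):
        for x in range(width):
            for c in range(channels):
                planar[c * height * width + y * width + x] = \
                    flat[(y * width + x) * channels + c]
    return planar

# 2x2 image, 3 channels, each pixel stored as [c0, c1, c2] interleaved:
print(nhwc_to_nchw([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 2, 2, 3))
# -> [1, 4, 7, 10, 2, 5, 8, 11, 3, 6, 9, 12]
```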

    thank you,

    Paula

  • Hi there, 

    I have attached some sample images that are being used. They are in .png format. Is this okay?

    This is "img_0.png" as used for sampleInData in the model config file passed to the tidl conversion tool. 

    A sample image being passed to the NN on the AM57xx is given below. It's another PNG generated in the same way as the first image, just larger. The AM57xx first resizes the image using OpenCV before passing it to the network.

  • Just to clarify, images are opened using the following: 

    // open the test input image using OpenCV
    Mat image;
    image = imread("./testimages/big_img_0.png", CV_LOAD_IMAGE_COLOR);

    // resize image for the purposes of passing to the NN
    Mat image_col_resized;
    Size size(100,100);
    resize(image, image_col_resized, size);

    // process image using our NN filter
    status = filter_image(eops, num_eops, current_eop, image_col_resized, configuration, ...);

  • Alex, which demo or script are you using? Or which image pre-processing steps? I just want to understand what type of image pre-processing you are doing before the data is ingested by TIDL.

    thank you,

    Paula

  • Alex, if using our OOB TFLite demo, could you try a *.bmp image as input, in a similar format to one of the ones we use (ex: grace_hopper.bmp)? Also, for calibration maybe we could use a raw file, but first just try changing the input; then we can worry about recalibration (if needed).

    cd /usr/share/tensorflow-lite/demos
    ./tflite_classification -m /usr/share/ti/tidl/utils/test/testvecs/config/tflite_models/mobilenet_v1_1.0_224_tidl_am5.tflite \
                            -i ../examples/grace_hopper.bmp -l ../examples/labels.txt -p 

  • Hi 

    Pre-processing is as follows: 

    status = filter_image(eops, num_eops, current_eop, image_col_resized, configuration, ... ); 
    
    
    
    
    STATUS_t filter_image(std::vector<ExecutionObjectPipeline*>& eops, int & num_eops, int & current_eop, Mat& src, Configuration & configuration, ... ) {
    
        try
        {
            // Process frames with available EOPs in a pipelined manner
            // additional num_eops iterations to flush the pipeline (epilogue)
    
            ExecutionObjectPipeline* eop = eops[current_eop];
    
            // Wait for previous frame on the same eo to finish processing
            if(eop->ProcessFrameWait()) {
    
                if(configuration.enableApiTrace){
                    std::cout << "Preprocessing Image" << std::endl;
                }
                imgutil::PreprocessImage(src, eop->GetInputBufferPtr(), configuration);
                eop->ProcessFrameStartAsync();
            }
    
            if(configuration.enableApiTrace)
                std::cout << "postprocess()" << std::endl;
            int is_object = tf_postprocess(eop, selected_items_size, selected_items, configuration, label_count, labels_classes);
            

  • I tried regenerating the input images as *.bmp and using them, but I get the same problem. Byte 0 still comes up as 11001000, regardless of input.

    I loaded the network as per the following, which I assume is supported for a tflite network:

        std::cout << "loading configuration" << std::endl;
        configuration.numFrames = 0;
        configuration.inData = "/var/lib/cloud9/img_0.png";
        configuration.netBinFile = "/var/lib/cloud9/Ziath/models/BBAI_model_test/probability_colour_cnn_classifier_net.bin";
        configuration.paramsBinFile = "/var/lib/cloud9/Ziath/models/BBAI_model_test/probability_colour_cnn_classifier_param.bin";    
        configuration.preProcType = 2;
        configuration.inWidth = 100;
        configuration.inHeight = 100;
        configuration.inNumChannels = 3;
        configuration.enableApiTrace = true;
        configuration.runFullNet = true;
    
        std::cout << "Attempting to initialise execution pipelines" << std::endl;
    
        try
        {
            std::cout << "allocating execution object pipelines (EOP)" << std::endl;
            
            // Create ExecutionObjectPipelines
            status = CreateExecutionObjectPipelines(eops, e_eve, e_dsp, configuration);
            if (status != SUCCESS)
                return (CREATE_EXECUTION_OBJECT_PIPELINES_FAILED);
    
            // Allocate input/output memory for each EOP
            std::cout << "allocating I/O memory for each EOP" << std::endl;
            AllocateMemory(eops);
            num_eops = eops.size();
            std::cout << "num_eops = " << num_eops << std::endl;
            std::cout << "Ready to process image!" << std::endl;
        }
        catch (tidl::Exception &e)
        {
            std::cerr << e.what() << std::endl;
            return (TIDL_EXCEPTION);
        }
    
        return (SUCCESS);
    }
    
    
    STATUS_t CreateExecutionObjectPipelines(std::vector<ExecutionObjectPipeline*>& eops, Executor * &e_eve, Executor * &e_dsp, Configuration & configuration)
    {
        const uint32_t num_eves = 4;
        const uint32_t num_dsps = 0;
        const uint32_t buffer_factor = 1;
    
        DeviceIds ids_eve, ids_dsp;
        for (uint32_t i = 0; i < num_eves; i++)
            ids_eve.insert(static_cast<DeviceId>(i));
        for (uint32_t i = 0; i < num_dsps; i++)
            ids_dsp.insert(static_cast<DeviceId>(i));
    
    #if 0
    // Create Executors with the appropriate core type, number of cores
        // and configuration specified
        // EVE will run layersGroupId 1 in the network, while
        // DSP will run layersGroupId 2 in the network
        std::cout << "allocating executors" << std::endl;
        e_eve = num_eves == 0 ? nullptr :
                new Executor(DeviceType::EVE, ids_eve, configuration, 1);
        e_dsp = num_dsps == 0 ? nullptr :
                new Executor(DeviceType::DSP, ids_dsp, configuration, 2);
    
        // Construct ExecutionObjectPipeline that utilizes multiple
        // ExecutionObjects to process a single frame, each ExecutionObject
        // processes one layerGroup of the network
        // If buffer_factor == 2, duplicating EOPs for pipelining at
        // EO level rather than at EOP level, in addition to double buffering
        // and overlapping host pre/post-processing with device processing
        std::cout << "allocating individual EOPs" << std::endl;
        for (uint32_t j = 0; j < buffer_factor; j++)
        {
            for (uint32_t i = 0; i < std::max(num_eves, num_dsps); i++)
                eops.push_back(new ExecutionObjectPipeline(
                                {(*e_eve)[i%num_eves], (*e_dsp)[i%num_dsps]}));
        }
    #else
        e_eve = num_eves == 0 ? nullptr :
                new Executor(DeviceType::EVE, ids_eve, configuration);
        e_dsp = num_dsps == 0 ? nullptr :
                new Executor(DeviceType::DSP, ids_dsp, configuration);
    
        // Construct ExecutionObjectPipeline with single Execution Object to
        // process each frame. This is parallel processing of frames with
        // as many DSP and EVE cores that we have on hand.
        // If buffer_factor == 2, duplicating EOPs for double buffering
        // and overlapping host pre/post-processing with device processing
        for (uint32_t j = 0; j < buffer_factor; j++)
        {
            for (uint32_t i = 0; i < num_eves; i++)
                eops.push_back(new ExecutionObjectPipeline({(*e_eve)[i]}));
            for (uint32_t i = 0; i < num_dsps; i++)
                eops.push_back(new ExecutionObjectPipeline({(*e_dsp)[i]}));
        }
    #endif
    
        return(SUCCESS);
    }
    
    STATUS_t AllocateMemory(const std::vector<ExecutionObjectPipeline*>& eops)
    {
        for (auto eop : eops)
        {
            size_t in_size  = eop->GetInputBufferSizeInBytes();
            size_t out_size = eop->GetOutputBufferSizeInBytes();
            std::cout << "Allocating input and output buffers" << std::endl;
            void*  in_ptr   = malloc(in_size);
            void*  out_ptr  = malloc(out_size);
            assert(in_ptr != nullptr && out_ptr != nullptr);
            
            ArgInfo in(in_ptr,   in_size);
            ArgInfo out(out_ptr, out_size);
            eop->SetInputOutputBuffer(in, out);
        }
        
        return (SUCCESS);
    }

  • Hi 

    I have been looking at the tensorflow-lite OOB demos. However, I have a few issues here, mostly in finding the correct versions of the code, Processor SDK, etc.

    I am running on a BeagleBone AI, I do not have a TI dev kit for this processor. 

    The version of the SDK I have is PROCESSOR-SDK-LINUX-AM57X 06_03_00_106, in which there is a version of the tidl_model_import tool. However, I suspect this is very out of date.

    On this page: there is an example of importing a TFLite model that uses parameters that aren't supported by the version of the TIDL import tool that I have.

    I am trying to use the TI Deep Learning Library User Guide to get the latest version of the TIDL toolkit, but it is not clear what I need to download! Do I have to install it on Linux? Can I install it on Windows and have the same level of support?

    Thanks 

    Alex 

  • Hi Paula 

    I have been doing some digging and have run into the following problem.

    I upgraded my SDK to ti-processor-sdk-linux-automotive-j7-evm-07_00_01 and installed the psdk_rtos_auto_j7_07_00_00_11 (this required rebuilding my PC and re-installing Ubuntu 18.04.5 LTS).

    Now I believe I have the same set of TIDL source files as described in the TIDL Deep Learning Library User Guide.

    I need to convert my models using the tidl_model_import.out tool on a PC targeting C66 DSP cores. So I have read through the information on building the TIDL tools from the TIDL Deep Learning Library User Guide. 

    After editing a few makefiles I can build using make TARGET_PLATFORM=PC TARGET_CPU=C66 from ~/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/

    This yields the following: 

    ======== MAKING IMPORT TOOL PROTOS LIB =================
    make -C ./ti_dl/utils/tidlModelImport -f makefile_lib
    make[1]: Entering directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/utils/tidlModelImport'
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/device_attributes.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/types.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/node_def.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/op_def.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/cost_graph.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/tensor_shape.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/allocation_description.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/attr_value.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/kernel_def.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/graph.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/function.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/log_memory.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/versions.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/tensor_description.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/tensor_slice.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/variable.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/step_stats.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/summary.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/resource_handle.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/proto_cc/tensorflow/core/framework/tensor.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../onnxImport/onnx_cc/onnx/onnx-operators-ml.proto3.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../onnxImport/onnx_cc/onnx/onnx-ml.proto3.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/faster_rcnn.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/square_box_coder.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/matcher.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/graph_rewriter.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/faster_rcnn_box_coder.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/bipartite_matcher.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/ssd_anchor_generator.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/anchor_generator.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/box_coder.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/argmax_matcher.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/grid_anchor_generator.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/image_resizer.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/model.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/ssd.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/input_reader.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/optimizer.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/box_predictor.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/post_processing.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/multiscale_anchor_generator.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/train.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/preprocessor.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/mean_stddev_box_coder.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/losses.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/region_similarity_calculator.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/eval.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/pipeline.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/string_int_label_map.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/keypoint_box_coder.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tfImport/models_research_cc/object_detection/protos/hyperparams.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../caffeImport/caffe.pb.obj
    r - /home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/out/PC/dsp/algo/release/ti_dl/utils/tidlModelImport/../tidlMetaArch/tidl_meta_arch.pb.obj
    make[1]: Leaving directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/utils/tidlModelImport'
    .
    .
    ======== MAKING IMPORT TOOL =================
    make -C ./ti_dl/utils/tidlModelImport -f makefile_bin
    make[1]: Entering directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/utils/tidlModelImport'
    make[1]: Leaving directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/utils/tidlModelImport'
    .
    .
    ======== MAKING TIDL TEST =================
    make -C ./ti_dl/test -f makefile
    make[1]: Entering directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/test'
    make[1]: Leaving directory '/home/alex/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/test'
    

    So I have rebuilt:

     /ti_dl/utils/tidlModelImport/out/tidl_model_import.out

    /ti_dl/test/out/PC_dsp_test_dl_algo.out 


    It has failed to rebuild: 


    /ti_dl/utils/perfsim/ti_cnnperfsim.out 

    There is NO MAKEFILE in /ti_dl/utils/perfsim 


    THEREFORE: 

    When I run the tidl_model_import.out tool I get the following output: 

    alex@alex-Inspiron-15-7000-Gaming:~/psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09/ti_dl/utils/tidlModelImport/out$ ./tidl_model_import.out ~/BBAI_model_test/model_config.txt 
    TFLite Model (Flatbuf) File  : /home/alex/BBAI_model_test/probability_colour_cnn_classifier.tflite  
    TIDL Network File      : /home/alex/BBAI_model_test/probability_colour_cnn_classifier_j7_net.bin  
    TIDL IO Info File      : /home/alex/BBAI_model_test/probability_colour_cnn_classifier_j7_param.bin  
    10
    WARNING: [TIDL_E_DATAFLOW_INFO_NULL] ti_cnnperfsim.out fails to allocate memory in MSMC. Please look into perfsim log. This model can only be used on PC emulation, it will get fault on target.
    ****************************************************
    **          1 WARNINGS          0 ERRORS          **
    ****************************************************
    

    This does not yield the correct output binary files or .tflite model file for use on the AM57xx device. 

    I suspect the first step is to try remaking the TIDL tools in /psdk_rtos_auto_j7_07_00_00_11/tidl_j7_01_02_00_09 with all the correct makefiles.

  • Alex, my bad, I didn't have a chance to reply yesterday. Actually, please don't use J7's TIDL for AM57x; they are different.

    To give you some background, our automotive version of Sitara AM57x is J6. The J7 platform has a different TIDL HWA architecture, and the software differs.

    As you are using the BBAI, Linux PSDK 6.3 is the latest version to use, and the TIDL import tool in the 6.3 filesystem is the latest for this platform. Just FYI, we have a patch to work with AWS Neo/TVM on top of 6.3, but that is not relevant for you for now.

    Just to organize your effort a little and help me understand:

    - Could you give me a brief summary of how your demo works? Is there any place (GitHub or similar) I can take a look? Or is this demo based on another OOB demo?

    - Could you run Linux PSDK OOB TFLite demos on your BBAI?

    Again, sorry for not replying yesterday; I admit it could be a bit confusing. We need to think about how we could avoid this confusion in the future.

    In any case, if you are interested in a more powerful TIDL platform, J7 has much more horsepower, and my understanding is that it is price-competitive compared to AM57x.

    Thank you,

    Paula  

  • Hi Paula, 

    I can certainly provide you with some background for this project, and code examples, if you have somewhere secure I can send them. This is simply a demonstration project to cement my understanding of the full design flow for the Texas Instruments product.

    I have created a basic object classification network using TensorFlow v2 which has only two possible outputs, 0 or 1. The network reports the probability of each output class [0, 1]. I can provide the Python/TensorFlow file and trained network files if they would help.

    Since I am using TensorFlow v2, I export models for mobile platforms (AM57x) as TensorFlow Lite models, since this is the new supported format for TensorFlow. (My understanding is that the old methods using the run_optimisation.py and freeze_graph.py scripts are deprecated and should not be used.)

    The model accepts images of 100 x 100 pixels over 3 channels (R, G, B) and works with floating-point numbers.
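
    As a side note on feeding such a model, the host-side preprocessing can be sketched as below. This is a minimal NumPy example; the function name and the assumption that training used pixel values scaled to [0, 1] are mine, not from the SDK or this thread:

```python
import numpy as np

def preprocess(image_u8):
    # Convert a 100x100 RGB uint8 image into the float32, batched tensor
    # the model expects. Assumes training used pixel values scaled to
    # [0, 1]; adjust if your training pipeline normalised differently.
    assert image_u8.shape == (100, 100, 3), "expected a 100x100 RGB image"
    x = image_u8.astype(np.float32) / 255.0   # scale to [0, 1]
    return np.expand_dims(x, axis=0)          # add batch dim -> (1, 100, 100, 3)

# Quick check with a dummy all-white image:
dummy = np.full((100, 100, 3), 255, dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape, batch.dtype)  # (1, 100, 100, 3) float32
```

    The same shape and dtype should match whatever the import tool was told about the network's input tensor.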

    I notice that numParamBits can only be set to a value between 4 and 12 for the import tool in the v6.3 SDK.
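
    As a sanity check for that constraint, here is a small hypothetical helper (not part of the SDK) that parses an import config of the key = value form shown earlier in the thread and verifies that numParamBits falls in the supported 4 to 12 range:

```python
# Hypothetical helper, not part of the TIDL SDK: parse a key = value
# import config and check numParamBits against the 4..12 range that the
# v6.3 import tool supports.
def parse_import_config(text):
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition('=')
        cfg[key.strip()] = value.strip().strip('"')
    return cfg

def check_num_param_bits(cfg):
    bits = int(cfg.get('numParamBits', 8))
    if not 4 <= bits <= 12:
        raise ValueError('numParamBits = %d is outside the supported 4..12 range' % bits)
    return bits

sample = """
randParams = 0
modelType = 1
quantizationStyle = 1
quantRoundAdd = 50
numParamBits = 8
inputNetFile = "model.tflite"
"""
cfg = parse_import_config(sample)
print(check_num_param_bits(cfg))  # -> 8
```

    The same parsing approach extends to eyeballing the other parameters (modelType, quantizationStyle, file paths) before invoking tidl_model_import.out.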

    I am not sure if I can run the OOB demos on the BBAI, but if you can point me to the folder that must be copied onto the BBAI and the scripts I need to execute, then I can find out!

    Thanks 

    Alex  

  • Hi Alex, let me suggest you first try our Linux PSDK OOB TFLite classification demo (./tflite_classification). When the OOB demo is working on your BBAI, you can then replace the MobileNet model binaries, input image, and labels file with yours. I think this could be a faster way to test your model on TIDL. After you have this baseline working, you can move to your Python/Debian demo (my understanding of your final goal). Some links that you might already be aware of, but in any case, below:

    You can download Linux PSDK from:

    https://software-dl.ti.com/processor-sdk-linux/esd/AM57X/latest/index_FDS.html

    This link also has some other links to the User Guide documentation and to how to create the SD card.

    From User Guide documentation, TFLite section:

    Thank you,

    Paula

  • Hi Paula 

    I have tried creating an SD card image for the BeagleBone AI. However, no luck here: the BeagleBone AI does not boot from the image. Below are the steps I used to try creating the image (in case I did something incorrectly!). However, I note that the Processor SDK does not actually support the BeagleBone AI.

    I have also checked the default BeagleBone AI image under /usr/share for the TI OOB TensorFlow Lite demonstrations; however, they are not included.

    1. Download and install ti-processor-sdk-linux-am57xx-evm-06.03.00.106

    2. Download tensorflow_lite_examples to ti-processor-sdk-linux-am57xx-evm-06.03.00.106/linux-devkit/

    3. Run source environment-setup

    4. Set the following environment variables:

    • export SYSROOT_INCDIR="$SDK_PATH_TARGET/usr/include"
    • export SYSROOT_LIBDIR="$SDK_PATH_TARGET/usr/lib"
    • export TARGET_TOOLCHAIN_PREFIX=arm-linux-gnueabihf-

    5. Navigate to tensorflow-lite-examples and run make

    6. Navigate to ti-processor-sdk-linux-am57xx-evm-06.03.00.106 and run ./setup.sh 

    Note: this step fails with the following message: 

    Board could not be detected. Please connect the board to the PC.
    Press any key to try checking again.

    I ignored this message and carried on to see what would happen. Full output from ./setup.sh is given below 


    7. Run sudo bin/create-sdcard.sh

    I am using a pre-built TI image as a test case to check that the board will boot using this method. 

    I have put the full output below 


    When I now insert this SD card into the BBAI and power it up, there is no output over the HDMI port and no activity on any of the LEDs. I strongly suspect this means there is no suitable board support package for the BBAI in the SDK, so the u-boot files are using the wrong board information, and therefore the BBAI will not boot using this method.

    Output from ./setup.sh 

    alex@alex-Inspiron-15-7000-Gaming:~/ti-processor-sdk-linux-am57xx-evm-06.03.00.106$ ./setup.sh 
    -------------------------------------------------------------------------------
    TISDK setup script
    This script will set up your development host for SDK development.
    Parts of this script require administrator priviliges (sudo access).
    -------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    Verifying Linux host distribution
    Ubuntu 12.04 LTS, Ubuntu 14.04, or Ubuntu 14.04 LTS is being used, continuing..
    --------------------------------------------------------------------------------
    
    Starting with Ubuntu 12.04 serial devices are only accessible by members of the 'dialout' group.
    A user must be apart of this group to have the proper permissions to access a serial device.
    
    User 'alex' is already apart of the 'dialout' group
    
    -------------------------------------------------------------------------------
    setup package script
    This script will make sure you have the proper host support packages installed
    This script requires administrator priviliges (sudo access) if packages are to be installed.
    -------------------------------------------------------------------------------
    System has required packages!
    --------------------------------------------------------------------------------
    Package verification and installation successfully completed
    --------------------------------------------------------------------------------
    --------------------------------------------------------------------------------
    In which directory do you want to install the target filesystem?(if this directory does not exist it will be created)
    [ /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/targetNFS ] 
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    This step will extract the target filesystem to /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/targetNFS
    
    Note! This command requires you to have administrator priviliges (sudo access) 
    on your host.
    Press return to continue
    
    Multiple filesystems found.
            1:tisdk-rootfs-image-am57xx-evm.tar.xz
            2:tisdk-docker-rootfs-image-am57xx-evm.tar.xz
    
    Enter Number of rootfs Tarball: [1] 
    
    
    Successfully extracted tisdk-rootfs-image-am57xx-evm.tar.xz to /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/targetNFS
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    This step will set up the SDK to install binaries in to:
        /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/targetNFS/home/root/am57xx-evm
    
    The files will be available from /home/root/am57xx-evm on the target.
    
    This setting can be changed later by editing Rules.make and changing the
    EXEC_DIR or DESTDIR variable (depending on your SDK).
    
    Press return to continue
    Rules.make edited successfully..
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    This step will export your target filesystem for NFS access.
    
    Note! This command requires you to have administrator priviliges (sudo access) 
    on your host.
    Press return to continue
    /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/targetNFS already NFS exported, skipping..
    
    [ ok ] Stopping nfs-kernel-server (via systemctl): nfs-kernel-server.service.
    [ ok ] Starting nfs-kernel-server (via systemctl): nfs-kernel-server.service.
    --------------------------------------------------------------------------------
    --------------------------------------------------------------------------------
    Which directory do you want to be your tftp root directory?(if this directory does not exist it will be created for you)
    [ /tftpboot ] 
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    This step will set up the tftp server in the /tftpboot directory.
    
    Note! This command requires you to have administrator priviliges (sudo access) 
    on your host.
    Press return to continue
    
    Successfully copied *Image-am57xx-evm.bin to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-lcd-osd101t2045-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-lcd-osd101t2045.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-lcd-osd101t2587-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-lcd-osd101t2587.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-pps-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-pps.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-pru-excl-uio-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am571x-idk-pru-excl-uio.dtb to tftp root directory /tftpboot
    
    Successfully copied am5729-beagleboneai-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am5729-beagleboneai.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-jailhouse.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2045-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2045.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2045-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2045-jailhouse.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2587-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2587.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2587-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-lcd-osd101t2587-jailhouse.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-pps-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-pps.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-pru-excl-uio-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am572x-idk-pru-excl-uio.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-jailhouse.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-lcd-osd101t2587-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-lcd-osd101t2587.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-pps-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-pps.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-pru-excl-uio-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am574x-idk-pru-excl-uio.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15-revb1-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15-revb1.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15-revc-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-beagle-x15-revc.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-cam-mt9t111-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-cam-mt9t111.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-cam-ov10635-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-cam-ov10635.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-jailhouse.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-cam-mt9t111-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-cam-mt9t111.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-cam-ov10635-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-cam-ov10635.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-jailhouse-am57xx-evm.dtb to tftp root directory /tftpboot
    
    Successfully copied am57xx-evm-reva3-jailhouse.dtb to tftp root directory /tftpboot
    ls: cannot access '*.dtbo': No such file or directory
    
    /etc/xinetd.d/tftp already exists..
    /tftpboot already exported for TFTP, skipping..
    
    Restarting tftp server
    [ ok ] Stopping xinetd (via systemctl): xinetd.service.
    [ ok ] Starting xinetd (via systemctl): xinetd.service.
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------"
    This step will set up minicom (serial communication application) for
    SDK development
    
    
    For boards that contain a USB-to-Serial converter on the board such as:
    	* BeagleBone
    	* Beaglebone Black
    	* AM335x EVM-SK
    	* AM57xx EVM
    	* K2H, K2L, and K2E EVMs
    
    the port used for minicom will be automatically detected. By default Ubuntu
    will not recognize this device. Setup will add a udev rule to
    /etc/udev/ so that from now on it will be recognized as soon as the board is
    plugged in.
    
    For other boards, the serial will default to /dev/ttyS0. Please update based
    on your setup.
    
    --------------------------------------------------------------------------------
    
    
    NOTE: If you're using any of the above boards, simply hit enter
    and the correct port will be determined automatically at a
    later step.  For all other boards select the serial port
    that the board is connected to.
    Which serial port do you want to use with minicom?
    [ /dev/ttyS0 ] 
    
    Copied existing /home/alex/.minirc.dfl to /home/alex/.minirc.dfl.old
    
    Configuration saved to /home/alex/.minirc.dfl. You can change it further from inside
    minicom, see the Software Development Guide for more information.
    --------------------------------------------------------------------------------
    
    --------------------------------------------------------------------------------
    This step will set up the u-boot variables for booting the EVM.
    --------------------------------------------------------------------------------
    Autodetected the following ip address of your host, correct it if necessary
    [ 10.9.8.69 ] 
    
    Select Linux kernel location:
     1: TFTP
     2: SD card
    
    [ 1 ] 2
    
    Select root file system location:
     1: NFS
     2: SD card
    
    [ 1 ] 2
    --------------------------------------------------------------------------------
    Would you like to create a minicom script with the above parameters (y/n)?
    [ y ] y
    
    Successfully wrote /home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/bin/setupBoard.minicom
    
    Board could not be detected. Please connect the board to the PC.
    Press any key to try checking again.
    

    Output from sudo bin/create-sdcard.sh

    alex@alex-Inspiron-15-7000-Gaming:~/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/bin$ sudo ./create-sdcard.sh 
    [sudo] password for alex: 
    
    
    ################################################################################
    
    This script will create a bootable SD card from custom or pre-built binaries.
    
    The script must be run with root permissions and from the bin directory of
    the SDK
    
    Example:
     $ sudo ./create-sdcard.sh
    
    Formatting can be skipped if the SD card is already formatted and
    partitioned properly.
    
    ################################################################################
    
    
    Available Drives to write images to: 
    
    #  major   minor    size   name 
    1:   8       16  976762584 sdb
    2:   8       32   15558144 sdc
     
    Enter Device Number or n to exit: 2
     
    sdc was selected
    
    /dev/sdc is an sdx device
    Unmounting the sdc drives
    Current size of sdc1 71680 bytes
    Current size of sdc2 15469568 bytes
    
    ################################################################################
    
    	Select 2 partitions if only need boot and rootfs (most users).
    	Select 3 partitions if need SDK & other content on SD card.  This is
            usually used by device manufacturers with access to partition tarballs.
    
    	****WARNING**** continuing will erase all data on sdc
    
    ################################################################################
    
    Number of partitions needed [2/3] : 2
    
     
    Now partitioning sdc with 2 partitions...
     
    
    ################################################################################
    
    		Now making 2 partitions
    
    ################################################################################
    
    1024+0 records in
    1024+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223487 s, 4.7 MB/s
    DISK SIZE - 15931539456 bytes
    Error: Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
    Error: Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
    Error: Partition(s) 2 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
    
    ################################################################################
    
    		Partitioning Boot
    
    ################################################################################
    mkfs.fat 4.1 (2017-01-24)
    mkfs.fat: warning - lowercase labels might not work properly with DOS or Windows
    
    ################################################################################
    
    		Partitioning rootfs
    
    ################################################################################
    mke2fs 1.44.1 (24-Mar-2018)
    /dev/sdc2 contains a ext3 file system labelled 'rootfs'
    	last mounted on /media/alex/rootfs on Mon Sep 21 13:37:39 2020
    Proceed anyway? (y,N) y
    /dev/sdc2 is mounted; will not make a filesystem here!
    
    
    ################################################################################
    
       Partitioning is now done
       Continue to install filesystem or select 'n' to safe exit
    
       **Warning** Continuing will erase any files in the partitions
    
    ################################################################################
    
    
    Would you like to continue? [y/n] : y
    
     
     
    Mount the partitions 
     
    Emptying partitions 
     
    
    Syncing....
    
    ################################################################################
    
    	Choose file path to install from
    
    	1 ) Install pre-built images from SDK
    	2 ) Enter in custom boot and rootfs file paths
    
    ################################################################################
    
    Choose now [1/2] : 1 
    
     
    Will now install from SDK pre-built images
    now installing:  ti-processor-sdk-linux-am57xx-evm-06.03.00.106
    
    ################################################################################
    
       Multiple rootfs Tarballs found
    
    ################################################################################
    
    	 1:tisdk-rootfs-image-am57xx-evm.tar.xz
    	 2:tisdk-docker-rootfs-image-am57xx-evm.tar.xz
    
    Enter Number of rootfs Tarball: 1
     
    ################################################################################
    
    	Copying files now... will take minutes
    
    ################################################################################
    
    Copying boot partition
    
    
    
    MLO copied
    
    
    u-boot.img copied
    
    uEnv.txt copied
    
    Copying rootfs System partition                                                                                                                                                                                         
    
     
    Syncing...
     
    Un-mount the partitions 
     
    Remove created temp directories 
     
    Operation Finished
    

     


  • Hi Paula,

    Let's start with the good news: there is some progress (finally!)

    I have managed to run the OOB demonstrations on my BeagleBone AI; even better, I managed to run them in the Debian environment. Since the SDK does not support the BBAI, I had to manually build the examples on the BeagleBone itself (I can provide details of how I did this if helpful).

    Now I can run the ./tidl_classification example with the image grace_hopper.bmp and receive the result of "military uniform" 

    I then wanted to replace the TI default inputs with custom inputs. Changing the input image (.png) format and labels file was straightforward. Changing the model file has not worked.

    For my model I have the following:

    model file name as exported from tensorflow in tensorflow lite format "probability_colour_cnn_classifier.tflite" 

    model conversion parameters (for tidl_model_import.out) 

    # Default - 0
    randParams         = 0
    
    # 0: Caffe, 1: TensorFlow, 2: ONNX, 3: TensorFlow Lite, Default - 0
    modelType          = 3
    
    # 0: Fixed quantization by training framework, 1: Dynamic quantization by TIDL, Default - 1
    quantizationStyle  = 1
    
    # quantRoundAdd/100 will be added while rounding to integer, Default - 50
    quantRoundAdd      = 50
    
    numParamBits       = 12
    
    
    inputNetFile = "/home/alex/BBAI_model_test/probability_colour_cnn_classifier.tflite"
    outputNetFile = "/home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_net.bin"
    outputParamsFile = "/home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_param.bin"
    inNumChannels = 3
    inWidth = 100
    inHeight = 100
    
    
    inElementType = 1
    rawSampleInData = 1
    sampleInData = "/home/alex/BBAI_model_test/img_0.png"
    tidlStatusTool = "/home/alex/ti-processor-sdk-linux-am57xx-evm-06.03.00.106/linux-devkit/sysroots/x86_64-arago-linux/usr/bin/eve_test_dl_algo_ref.out"
    

    These parameters are based on those found in the conversion file for the mobileNetV1 example.

    The output I get from running tidl_model_import.out is: 

    =============================== TIDL import - parsing ===============================
    
    TFLite Model (Flatbuf) File  : /home/alex/BBAI_model_test/probability_colour_cnn_classifier.tflite  
    TIDL Network File      : /home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_net.bin  
    TIDL IO Info File      : /home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_param.bin  
    TFLite node size: 10
    
    Num of Layer Detected :  11 
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      Num|TIDL Layer Name               |Out Data Name                                     |Group |#Ins  |#Outs |Inbuf Ids                       |Outbuf Id |In NCHW                             |Out NCHW                            |MACS       |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        0|TIDL_DataLayer                |sequential_input                                  |     0|    -1|     1|  x   x   x   x   x   x   x   x |  0       |       0        0        0        0 |       1        3      100      100 |         0 |
        1|TIDL_ConvolutionLayer         |uential_1/sequential/conv2d/BiasAdd/ReadVariableOp|     1|     1|     1|  0   x   x   x   x   x   x   x |  1       |       1        3      100      100 |       1       32       98       98 |   8297856 |
        2|TIDL_PoolingLayer             |sequential_1/sequential/max_pooling2d/MaxPool     |     1|     1|     1|  1   x   x   x   x   x   x   x |  2       |       1       32       98       98 |       1       32       49       49 |    307328 |
        3|TIDL_ConvolutionLayer         |ntial_1/sequential/conv2d_1/BiasAdd/ReadVariableOp|     1|     1|     1|  2   x   x   x   x   x   x   x |  3       |       1       32       49       49 |       1       16       47       47 |  10179072 |
        4|TIDL_PoolingLayer             |sequential_1/sequential/average_pooling2d/AvgPool |     1|     1|     1|  3   x   x   x   x   x   x   x |  4       |       1       16       47       47 |       1       16       23       23 |     33856 |
        5|TIDL_ConvolutionLayer         |ntial_1/sequential/conv2d_2/BiasAdd/ReadVariableOp|     1|     1|     1|  4   x   x   x   x   x   x   x |  5       |       1       16       23       23 |       1        8       21       21 |    508032 |
        6|TIDL_PoolingLayer             |sequential_1/sequential/flatten/Reshape           |     1|     1|     1|  5   x   x   x   x   x   x   x |  6       |       1        8       21       21 |       1        1        1      800 |      4000 |
        7|TIDL_InnerProductLayer        |l/dense/Relu;sequential_1/sequential/dense/BiasAdd|     1|     1|     1|  6   x   x   x   x   x   x   x |  7       |       1        1        1      800 |       1        1        1       64 |     51264 |
        8|TIDL_InnerProductLayer        |sequential_1/sequential/dense_1/BiasAdd           |     1|     1|     1|  7   x   x   x   x   x   x   x |  8       |       1        1        1       64 |       1        1        1        2 |       130 |
        9|TIDL_SoftMaxLayer             |Identity                                          |     1|     1|     1|  8   x   x   x   x   x   x   x |  9       |       1        1        1        2 |       1        1        1        2 |         2 |
       10|TIDL_DataLayer                |Identity                                          |     0|     1|    -1|  9   x   x   x   x   x   x   x |  0       |       1        1        1        2 |       0        0        0        0 |         0 |
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total Giga Macs : 0.0194
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Writing the TIDL converted TFLite Model (FlatBuffers) to File: /home/alex/BBAI_model_test/probability_colour_cnn_classifier_tidl_am5.tflite  
    Create Tensor 0, sequential_input 
    Create Tensor 1, Identity 
    
    =============================== TIDL import - calibration ===============================
    
    Couldn't open tidlStatsTool file:

    First problem: I notice the TIDL stats tool fails with an error saying it can't be opened, but no further information is given as to why.

    Nonetheless, the file probability_colour_cnn_classifier_tidl_am5.tflite  has been generated, so I thought I would try it on the BeagleBone AI anyway. I get the following error message: 


    Loading model...
    ERROR: Encountered unresolved custom op: tidl-am5-custom-op
    ERROR: Node number 0 (tidl-am5-custom-op) failed to prepare. 
    
    Failed to allocate tensors.
    Image input: testimages/big_img_0.png
    Running inference...
    Segmentation fault

    I have tried running the tidl_model_import.out tool on both a Linux host PC with the SDK and the BeagleBone AI, but I get exactly the same result.

    As a test I have attempted to run the .tflite file output by TensorFlow (probability_colour_cnn_classifier.tflite) and this is successful!

    I feel we are almost there now! 

    Thanks 

    Alex 

  • Hi Alex, I am glad to see all the progress you have made. A few comments and questions below.

    • "I have managed to run the OOB demonstrations on my BeagleBone AI, even better I managed to run them in the Debian environment. Since the SDK does not support the BBAI (if helpful I can provide details of how I did this?)." --> I am glad you were able to build LPSDK demos in Debian, and yes please, if you don't mind, provide some details here, as other users would probably find it very beneficial. However, just to clarify, the Linux PSDK for AM57x does indeed support the BBAI.

    PROCESSOR-SDK-LINUX-AM57X 06_03_00_106: supported EVMs include the BeagleBone AI.

    • "I then wanted to replace the TI default inputs with custom inputs. Changing the input image (.png) format and labels file was straight forward. Changing the model file has not worked." --> can you share the failing error? or the console output?

    • "first problem - I notice the tidl status tool fails with the error that it can't open. But there is no further information given as to why?" --> you can ignore this.

    • I believe the console print below is from running your demo (not our ./tflite_classification). Can you confirm? One file that you would need to update and keep in the same folder as the binary, I believe, is "subgraph0.cfg". Just want to confirm you do.

    Loading model...
    ERROR: Encountered unresolved custom op: tidl-am5-custom-op
    ERROR: Node number 0 (tidl-am5-custom-op) failed to prepare.

    Failed to allocate tensors.
    Image input: testimages/big_img_0.png
    Running inference...
    Segmentation fault

    From the documentation, below is a subgraph0.cfg example for partially offloading “mobilenet_v1_1.0_224.tflite” with “MobilenetV1/MobilenetV1/Conv2d_13_pointwise/Relu6” as the subgraph output.

    netBinFile      = /usr/share/ti/tidl/utils/test/testvecs/config/tidl_models/tflite/tidl_net_tflite_mobilenet_v1_1.0_224.bin
    paramsBinFile   = /usr/share/ti/tidl/utils/test/testvecs/config/tidl_models/tflite/tidl_param_tflite_mobilenet_v1_1.0_224.bin
    # The input is in NHWC format and ranges [-1,1]
    inConvType = 0
    inIsSigned = 1
    inScaleF2Q = 128
    inIsNCHW = 0
    # The output is in NHWC format and ranges [0,6]
    outConvType = 0
    outIsSigned = 0
    outScaleF2Q = 42.5
    outIsNCHW = 0
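    If helpful, the two scale values in this example appear to follow scale = quantized_max / float_range_max (an observation from the numbers themselves, not a statement from the docs): 128 = 2^7 for a signed 8-bit input over [-1, 1], and 42.5 = 255 / 6 for an unsigned 8-bit output over [0, 6]. A quick check:

    ```shell
    # Observation (assumption, not from the TIDL docs): scale = quantized_max / float_max
    # signed 8-bit input over [-1, 1]:   inScaleF2Q  = 128  (2^7)
    # unsigned 8-bit output over [0, 6]: outScaleF2Q = 255 / 6
    awk 'BEGIN { printf "outScaleF2Q = %.1f\n", 255 / 6 }'
    ```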

    thank you,

    Paula

  • Hi Paula 

    I shall respond to your points in the same order for clarity! 

    • I must have found some out-of-date documentation that said the BeagleBone AI is not yet supported by the SDK. However, when I tried to set up the SDK by running ./setup.sh, it was unable to connect to the BBAI attached to my PC.

    • The error that it failed with when trying to load probability_colour_cnn_classifier_tflite_am5.tflite is
    Loading model...
    ERROR: Encountered unresolved custom op: tidl-am5-custom-op
    ERROR: Node number 0 (tidl-am5-custom-op) failed to prepare.
    
    Failed to allocate tensors.
    Image input: testimages/big_img_0.png
    Running inference...
    Segmentation fault
    
    
    • Ok, I shall ignore this 

    • Yes, this error is produced when trying to load the output of tidl_model_import.out using my network 
      • input network file "probability_colour_cnn_classifier.tflite" - this works 
      • output network file (generated by tidl_model_import.out) "probability_colour_cnn_classifier_tidl_am5.tflite" - this doesn't work 

    The conversion tool does not generate a subgraph0.cfg file. 

    Indeed, when running the TI example (./tidl_classification) there is no subgraph file in the directory of the executable 

    I am looking for full offload

    Thanks 

    Alex 

     

  • Just wanted to add a little more information 

    If I open the probability_colour_cnn_classifier_tidl_am5.tflite using Netron, I get the following output:

    I can also share with you the example network file (probability_colour_cnn_classifier.tflite) - there is nothing special about it, it's just a test case for working out the TI design flow to get things working on the AM57xx devices! 

    example_network.zip

  • Hi Alex, I got a setup ready with an AM57x board (no BBAI, but it should be OK) and PSDKL 6.3.

    Could you share your output tidl_*.bin files, along with an example input image and your modified classification list?

    Also, did you change/update "netBinFile" and "paramsBinFile" inside  subgraph0.cfg?

    Thank you,

    Paula

  • Hi Paula, 

    Here are some example files for you. I have included the model config file used for converting the model, plus some test input images and labels.

    tidl_example.zip

    You mention subgraph0.cfg. Looking through the example found in ./tensorflow-lite-examples, there is no subgraph0.cfg file used. Is this example not using TIDL_OFFLOAD on mobilenet_v1?

    I have not, therefore, made any edits to a subgraph file, nor do I have a subgraph file in the directory.

    Thanks 

    Alex  

  • Hi Paula

    I have been looking through the source code supplied in  and have set the #define TIDL_OFFLOAD symbol inside model_utils.h. I have also set TIDL_ACC to yes in the makefile and ensured all the paths are set correctly to target the libraries.

    When I try to build I get the following errors: 

    /usr/bin/ld: /usr/lib/libtidl_api.so: undefined reference to `__is_in_malloced_region'
    /usr/bin/ld: /usr/lib/libtidl_api.so: undefined reference to `__free_ddr'
    /usr/bin/ld: /usr/lib/libtidl_api.so: undefined reference to `__malloc_ddr'
    collect2: error: ld returned 1 exit status

    Why does the compiled shared object library have unresolved references to these symbols? Where should they be defined?

    Thanks 

    Alex 

  • Hi Alex, to answer some of your questions.

    1) TIDL offload is enabled by default, so there is no need to rebuild. Not sure why you are getting those build errors, though. But if you don't mind, please use the OOB binaries for now.

    2) subgraph0.cfg is needed if a CustomOp is created, even if the CustomOp includes the whole NN model.
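    Building on that point, a minimal subgraph0.cfg for this model might look like the sketch below. The netBinFile/paramsBinFile paths are taken from the import config earlier in the thread; all conversion and scale values are placeholders that would need to match the model's actual input/output ranges.

    ```
    # Hypothetical subgraph0.cfg sketch - paths from the earlier import config,
    # scale/format values are placeholders to be set for the real model
    netBinFile      = /home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_net.bin
    paramsBinFile   = /home/alex/BBAI_model_test/probability_colour_cnn_classifier_am57_param.bin
    inConvType = 0
    inIsSigned = 0
    inScaleF2Q = 1
    inIsNCHW = 0
    outConvType = 0
    outIsSigned = 0
    outScaleF2Q = 255
    outIsNCHW = 0
    ```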

    On the other hand, yesterday I tried your model with the ./tflite_classification demo and it runs OK, but the output is not correct. Not sure why; maybe your model cannot be quantized to 12 bits correctly, or there is still a mismatch between the expected input image format (NHWC) and TIDL. For now, let me share what I did.

    Attached are the modified scripts.

    - TIDL import config file (model_config_pc.txt) for importing the model into TIDL, which creates *_tidl_am5.tflite (run_tidl_compiler_pc.sh)

    • "run_tidl_compiler_pc.sh" creates a "subgraph0.cfg", which uses the paths for the TIDL *net.bin and *params.bin, plus sets up some input/output flags and variables.

    - Run the /tflite_classification demo scripts (run_classification_*.sh).

    - Also attached are the *.png images converted to *.bmp and *.jpg, just in case they are useful for you.

    Input_images_conversion.zip

    PSDKL_Tensorflow-lite_demosScripts_modif.zip

    For running the demo what I did, on target, was:

    $ /etc/init.d/weston start

    $ run_classification_*.sh

    It first runs on ARM only, and then on TIDL. Performance and output are printed to the console.

    Thank you,

    Paula

  • Hi Paula, 

    1) This error occurs when I try to use the OOB demos as well. For instance if I do the following:

    cd /usr/share/tensorflow-lite/demos/

    ./tflite_classification ...

    I get an error of the following: 

    ./tflite_classification: symbol lookup error: /usr/lib/libtidl_api.so.1: undefined symbol: __free_ddr
    ./run_classification_blk.sh Kill: No such process

    This is why I have had to rebuild the source code found on the git repo  

    When I compile with TIDL_ACC ?= no and no TIDL_OFFLOAD symbol set, the demos build okay.

    2) I have tried to run your example but I get the same problem as in 1) (errors in libtidl_api.so) 

    Please note, since I am running on the BBAI, my file system is not the same as yours 

    For instance I do not have /usr/share/ti/tidl/utils or /usr/share/tensorflow-lite/demos/ 

    Thanks 

    Alex  

  • Hi Paula 

    I have spent the day trying to rebuild my BeagleBone image such that I remove the unresolved reference to __free_ddr (etc.) errors. 

    I have the following observations/comments: 

    1) I resized the stock BBAI images to accommodate more files from

    <TI PROCESSOR SDK INSTALL DIR>/targetNFS/usr/lib/*

    <TI PROCESSOR SDK INSTALL DIR>/targetNFS/usr/share/ti/* 

    <TI PROCESSOR SDK INSTALL DIR>/targetNFS/usr/share/tensorflow-lite/*

    2) When I replace the contents of the default /usr/lib directory from the BBAI image with the contents of <TI PROCESSOR SDK INSTALL DIR>/targetNFS/usr/lib/, I get the following behaviour:

    "basic" operations on the BBAI no longer work (e.g. copy, apt-get, etc.) - but this is to be expected, since the shared libraries for them are now missing

    running /usr/share/tensorflow-lite/demos/run_tidl_compiler.sh calls the tidl_model_import function, but fails to copy files (as per the point above, the cp command is now missing)

    running /usr/share/tensorflow-lite/demos/run_classification.sh with elevated privileges gives:

    a) ARM-only demo - cannot open the display: "failed to create wl_display (no such file or directory)". But if this is just linked to Qt not being installed, then we can skip over this error, as it's neither here nor there for the issue we are interested in.

    b) TIDL ACC demo: 

    loading model... 
    TransportRpmsg_create: socket failed: 97 (Address family not supported by protocol)
    TransportRpmsg_create: socket failed: 97 (Address family not supported by protocol) 
    TransportRpmsg_create: socket failed: 97 (Address family not supported by protocol) 
    TIOCL FATAL: Internal Error: Number of message queues (0) does not match number of compute units (2) 
    Segmentation fault  

    To me this looks like the demonstrations as packaged in targetNFS or the SDK are not correctly compiled for the BBAI when run in this manner? 

    3) Restoring the "basic" functionality of the BBAI (i.e. restoring /usr/lib/arm-linux-gnueabihf/ to its default state from the BBAI image) brings back the unresolved-reference errors for __free_ddr (etc.) when trying to use the libtidl_api.so shared object library.

    Therefore I need to understand where this shared object library looks to resolve these DDR-related functions.
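    To see where a shared object expects its symbols to come from, nm and ldd are the usual tools. Below is a generic, self-contained sketch (a stand-in stub, not the real libtidl_api.so): build a .so with an unresolved reference and list its undefined symbols; the same two commands can be pointed at /usr/lib/libtidl_api.so on the target.

    ```shell
    # Generic demo: a shared object with an unresolved reference, inspected with nm/ldd.
    # (Stand-in code; on the BBAI you would point nm/ldd at /usr/lib/libtidl_api.so.)
    cat > api_stub.c <<'EOF'
    void __free_ddr(void *p);                 /* expected to be provided elsewhere */
    void release(void *p) { __free_ddr(p); }
    EOF
    gcc -shared -fPIC api_stub.c -o libapi_stub.so
    nm -D libapi_stub.so | grep ' U '         # 'U': undefined, resolved at load time
    ldd libapi_stub.so                        # libraries the dynamic loader will search
    ```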

    Thanks 

    Alex 

  • Hi Alex, let me ask our TIDL AM57x experts and come back to you.

    thank you,

    Paula

  • Hi Alex,

        tidl_api is using OpenCL to offload TIDL computation to DSP/EVE cores.  __free_ddr() is provided by the OpenCL library.

        I do not fully comprehend what you are trying to do, as I am not familiar with the BBAI debian image.  If you just need to resolve the "__free_ddr()" reference, you can add the "-lOpenCL" linking flag.

        If you can import your tflite network fully into TIDL format, maybe you can look into the TIDL-API examples for running TIDL networks.  They are in "/usr/share/ti/tidl/examples" in the Processor SDK.  Not sure if the BBAI Debian image includes those.
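    Yuan's -lOpenCL suggestion can be illustrated with a generic, self-contained sketch (stand-in names, not the real TI libraries): an executable linked against a .so that has unresolved symbols fails to link unless the providing library is also named on the link line, which is the role -lOpenCL would play for libtidl_api.

    ```shell
    # Stand-in demo of the missing-provider link error and its fix.
    cat > provider.c <<'EOF'
    void __free_ddr(void *p) { (void)p; }      /* stands in for the OpenCL library */
    EOF
    cat > consumer.c <<'EOF'
    void __free_ddr(void *p);
    void release(void *p) { __free_ddr(p); }   /* stands in for libtidl_api */
    EOF
    cat > main.c <<'EOF'
    void release(void *p);
    int main(void) { release((void *)0); return 0; }
    EOF
    gcc -shared -fPIC provider.c -o libprovider.so
    gcc -shared -fPIC consumer.c -o libconsumer.so
    gcc main.c -L. -lconsumer -o app 2>/dev/null || echo "link fails without the provider"
    gcc main.c -L. -lconsumer -lprovider -Wl,-rpath,'$ORIGIN' -o app && echo "link ok with the provider"
    ```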

    -Yuan

  • Hi Yuan

    Thanks for getting back to me so quickly! 

    I have some progress on the matter. As you suggested, I looked at the OpenCL libraries on my BeagleBone image. I had two sets of libOpenCL.so* files. If I remove the libOpenCL.so* libraries that are part of the BeagleBone image, I no longer get the "unable to resolve..." errors (yay!)

    However, I do now have a new error: 

    TIOCL FATAL: Internal Error: Number of message queues (0) does not match number of compute units (2) 

    I am assuming this is because the libOpenCL.so* files I am using have been built with a different AM57xx device in mind? 

    (the same problem occurs when linking the OpenCL libraries into the build)

    Thanks 

    Alex 

  • Hi Alex,

        On AM57x, TIDL API is built on top of OpenCL for functionality.  See the enclosed software architecture link.  While you are modifying the debian image, please make sure that you still have a functional OpenCL first.  For example, "/usr/share/ti/examples/opencl/platforms/platforms" should run and provide OpenCL information.

        The default Debian image from beagleboard.org should have both the OpenCL and TIDL-API examples running properly, since you can run the out-of-the-box demo application.  Since it seems you are able to import your full tflite network into TIDL format, you may want to look at the TIDL-API examples that deal with running TIDL networks.  (You don't need to use the TI-modified tflite runtime, which is meant for subgraph execution.)  The TIDL-API examples should be on your EVM filesystem under "/usr/share/ti/tidl/examples", and you can go from the enclosed software architecture link to the TIDL-API user guide for AM57x.

    -Yuan

  • Hi Yuan, 

    Sorry, I think there has been some miscommunication here:

    The TI OOB demonstrations (/usr/share/tensorflow-lite/demos) do not run. They fail with the error message:

    TIOCL FATAL: Internal Error: Number of message queues (0) does not match number of compute units (2) 

    The only way I got the tensorflow-lite examples to run was to download the source code from the git repository and rebuild for ARM-only execution.

    I have tried running "/usr/share/ti/examples/opencl/platforms/platforms"  and I get the same error as above 

    TIOCL FATAL: Internal Error: Number of message queues (0) does not match number of compute units (2) 

    So it would seem there is something wrong with OpenCL in my image?

    Thanks 

    Alex 

  • Do the OpenCL/TIDL-API examples run in the default Debian image from beagleboard.org?  -Yuan

  • Hi Yuan 

    I have flashed a brand new SD card with the stock image from beagleboard.org: AM5729 Debian 10.3 2020-04-06 8GB SD IoT TIDL

    The stock image does not contain the OpenCL/TIDL-API examples we have discussed. 

    For the sake of completeness I performed the following: 

    copy <PROCESSOR SDK ROOT>/targetNFS/usr/share/ti onto the BeagleBone

    copy <PROCESSOR SDK ROOT>/targetNFS/usr/share/tensorflow-lite onto the BeagleBone

    run /usr/share/ti/examples/opencl/platforms/platforms

    PLATFORM: TI AM57x
        Version: OpenCL 1.2 TI product version 01.02.00.02 ()
        Vendor : Texas Instruments, Inc.
        Profile: FULL_PROFILE
    Segmentation fault

    run /usr/share/ti/tidl/examples/classification/tidl_classification 

    ./tidl_classification: error while loading shared libraries: libopencv_highgui.so.3.1: cannot open shared object file: No such file or directory 
    

    - I expected this since I had to copy over additional dependencies before

    The same error occurs if trying to run /usr/share/tensorflow-lite/demos/run_classification.sh

     Thanks, 
    Alex 

  • That's not good.  The stock image from beagleboard.org should have OpenCL and TIDL-API included and working.  Can you check with beagleboard support?

    I don't know if you can create an SD card with AM57xx PSDK 6.3 and boot your BeagleBone up. If you can, then we have the full filesystem for PSDK 6.3. The BeagleBoard Debian image is another story, as TI does not maintain it.

    Paula, can you verify whether the stock Debian image works on the BBAI (OpenCL/TIDL), and whether an SD card with the PSDK 6.3 filesystem can work on the BBAI?

    -Yuan

  • Hi Yuan, 

    I have raised a ticket with Beagleboard.org to see if they have any experience of these issues. 

    I am attempting to create an SD card image using ti-processor-sdk-linux-am57xx-evm-06.03.00.106 with the following steps:

    1) download the .bin installer from ti.com 

    2) install the sdk with the wizard 

    3) run ./setup.sh 

    4) run <PROCESSOR SDK ROOT>/bin/create-sdcard.sh

    Number of partitions needed [2/3] : 2

    ################################################################################
    
    	Choose file path to install from
    
    	1 ) Install pre-built images from SDK
    	2 ) Enter in custom boot and rootfs file paths
    
    ################################################################################
    
    Choose now [1/2] : 1
    
     
    Will now install from SDK pre-built images
    now installing:  ti-processor-sdk-linux-am57xx-evm-06.03.00.106
    
    ################################################################################
    
       Multiple rootfs Tarballs found
    
    ################################################################################
    
    	 1:tisdk-rootfs-image-am57xx-evm.tar.xz
    	 2:tisdk-docker-rootfs-image-am57xx-evm.tar.xz
    
    Enter Number of rootfs Tarball: 1
     
    ################################################################################
    
    	Copying files now... will take minutes
    
    ################################################################################
    
    Copying boot partition

    MLO copied
    
    
    u-boot.img copied
    
    uEnv.txt copied
    
    Copying rootfs System partition
                                                                                                                                                                                                                
    
     
    Syncing...
     
    Un-mount the partitions 
     
    Remove created temp directories 
     
    Operation Finished
    

    However, this creates an SD card with the TI Matrix App Launcher v2 on it, not a Linux environment!

    Thanks 

    Alex 

  • Hi Alex, I am not too familiar with the BBAI, but I believe you can get a Linux console via the serial port (same USB as power):

    https://www.element14.com/community/community/designcenter/single-board-computers/next-genbeaglebone/blog/2019/10/06/beaglebone-ai-bb-ai-getting-started  

    My AM57x serial port Tera Term configuration is below, just FYI:

    Baud rate:115200, Data: 8bit, Parity: none, Stop: 1bit, Flow control: none

    thanks,

    Paula

  • Hi Alex,

        Were you able to boot up your BBAI with the SD card created from PSDK 6.3?

        If you can see the TI Matrix App Launcher v2, it means that Linux has booted up.  The Matrix GUI is just an auto-started application.  It is running Arago Embedded Linux, not Debian Linux.  If you can click around in the Matrix GUI, go to Network settings, find the IP address, then you can ssh into it with username "root" and no password.

    -Yuan

  • Hi Yuan 

    I have made some progress.

    I managed to boot my BBAI from the SD image produced by PSDK 6.3.

    I was not able to SSH into it (connection refused); however, I went to Settings -> Terminal to access the Weston terminal.

    Running /usr/share/ti/examples/opencl/platforms/platforms gives the following output: 

    Version: OpenCL 1.2 TI product version 01.02.00.02 (bidfed9)
    Vendor: Texas Instruments, Inc.
    Profile: FULL_PROFILE
        DEVICE: TI Multicore C66 DSP
             Type       : ACCELERATOR | CUSTOM
             CompUnits  : 2
             Frequency  : 0.75 GHz
             Glb Mem    :  360448 KB
             GlbExt1 Mem:       0 KB
             GlbExt2 Mem:       0 KB
             Msmc Mem   :    1024 KB
             Loc Mem    :     128 KB
             Max Alloc  :  344064 KB
        DEVICE : TI Embedded Vision Engine (EVE)
             Type       : CUSTOM
             CompUnits  : 1
             Frequency  : 0.65 GHz
             Glb Mem    :  360448 KB
             Loc Mem    :       0 KB
             Max Alloc  :  344064 KB
        DEVICE : TI Embedded Vision Engine (EVE)
             Type       : CUSTOM
             CompUnits  : 1
             Frequency  : 0.65 GHz
             Glb Mem    :  360448 KB
             Loc Mem    :       0 KB
             Max Alloc  :  344064 KB
        DEVICE : TI Embedded Vision Engine (EVE)
             Type       : CUSTOM
             CompUnits  : 1
             Frequency  : 0.65 GHz
             Glb Mem    :  360448 KB
             Loc Mem    :       0 KB
             Max Alloc  :  344064 KB
        DEVICE : TI Embedded Vision Engine (EVE)
             Type       : CUSTOM
             CompUnits  : 1
             Frequency  : 0.65 GHz
             Glb Mem    :  360448 KB
             Loc Mem    :       0 KB
             Max Alloc  :  344064 KB

    I have also tried the tensorflow-lite-examples, but these do not work:

    cd /usr/share/tensorflow-lite-examples/demos 
    
    ./run_tidl_compiler.sh
    
    ./run_classification.sh
    
    --------------------------------------------------------------------------------------------
    Running classification using the TIDL compiled model mobilenet_v1_1.0_224_tidl_am5.tflite...
    --------------------------------------------------------------------------------------------
    loading model...
    tflite_classification: inc/executor.h:172: T* tidl::malloc_ddr(size_t) [with T = char; size_t = unsigned int]: Assertion `val != nullptr' failed.

    -Alex 

     

     


  • Hi Yuan,

    I think the error (below) is being generated by an unsupported layer in the network model:

    loading model...
    tflite_classification: inc/executor.h:172: T* tidl::malloc_ddr(size_t) [with T = char; size_t = unsigned int]: Assertion `val != nullptr' failed.
    

    I have tried to work out from the documentation (below) which layers from my model are not supported by the device.

    My test model is described in Tensorflow as: 

    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(dim_x, dim_y, 3)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(32, (3, 3), activation='relu'))
    model.add(layers.AveragePooling2D((2, 2)))
    model.add(layers.Conv2D(16, (3, 3), activation='relu'))
    model.add(layers.AveragePooling2D((2, 2)))
    model.add(layers.Conv2D(8, (3, 3), activation='relu'))
    model.add(layers.AveragePooling2D((2, 2)))
    # Add dense layers to the model
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(2))
    
    probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
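
    As a sanity check on this topology, the layer output shapes can be walked through by hand in plain Python (assuming dim_x = dim_y = 100, which is a guess on my part; Conv2D uses Keras' default 'valid' padding and the pools are non-overlapping 2x2):

```python
# Hand-computed output shapes for the Sequential model above.
# Assumption: dim_x = dim_y = 100 (hypothetical input size).

def conv(shape, c_out, k=3):
    # 'valid' k x k convolution shrinks each spatial dim by k - 1
    h, w, _ = shape
    return (h - k + 1, w - k + 1, c_out)

def pool(shape, p=2):
    # non-overlapping p x p pooling floors odd sizes
    h, w, c = shape
    return (h // p, w // p, c)

shape = (100, 100, 3)     # input_shape=(dim_x, dim_y, 3)
shape = conv(shape, 32)   # -> (98, 98, 32)
shape = pool(shape)       # -> (49, 49, 32)
shape = conv(shape, 64)   # -> (47, 47, 64)
shape = pool(shape)       # -> (23, 23, 64)
shape = conv(shape, 32)   # -> (21, 21, 32)
shape = pool(shape)       # -> (10, 10, 32)
shape = conv(shape, 16)   # -> (8, 8, 16)
shape = pool(shape)       # -> (4, 4, 16)
shape = conv(shape, 8)    # -> (2, 2, 8)
shape = pool(shape)       # -> (1, 1, 8)

print("Flatten size:", shape[0] * shape[1] * shape[2])  # -> 8
```

    So Flatten feeds just 8 values into Dense(64), and no spatial dimension ever goes non-positive, i.e. the topology itself is well-formed.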

    From what I can deduce from the documentation, all of the above should be supported?

    The tidl_model_import.out tool manages to convert the model successfully; however, I do get the error "cannot open tidlStatsTool".

    I am using the same tidlStatsTool as given by the example in /usr/share/tensorflow-lite-examples/demos

    Thanks 

    Alex 

     

  • Hi Alex,

        You don't have enough CMEM (shared) memory for running your model.  You can try two things:

    1) Use fewer EVEs: set the env var "TIDL_SUBGRAPH_NUM_EVES=1" and see if you can run.

    2) You can try increasing the size of CMEM.
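
    For option 1, the variable just needs to be exported in the shell that launches the demo; a minimal sketch:

```shell
# Limit TIDL offload to a single EVE so the subgraph needs less CMEM.
export TIDL_SUBGRAPH_NUM_EVES=1
echo "TIDL_SUBGRAPH_NUM_EVES=$TIDL_SUBGRAPH_NUM_EVES"
```

    Then launch ./run_classification.sh from that same shell so the demo inherits the variable.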

    -Yuan

    Brilliant, the program ran without throwing errors after setting TIDL_SUBGRAPH_NUM_EVES=1.

    HOWEVER, the converted model's behaviour is nowhere near that of the model before conversion. I guess this is down to the settings in the model_config.txt file I pass to tidl_model_import.out. Are you an expert in selecting the right parameters?

    My config file is: 

    # Default - 0
    randParams         = 0
    
    # 0: Caffe, 1: TensorFlow, 2: ONNX, 3: TensorFlow Lite, Default - 0
    modelType          = 3
    
    # 0: Fixed quantization by training framework, 1: Dynamic quantization by TIDL, Default - 1
    quantizationStyle  = 1
    
    # quantRoundAdd/100 will be added while rounding to integer, Default - 50
    quantRoundAdd      = 50
    
    numParamBits       = 12
    
    
    inputNetFile = "./probability_colour_cnn_classifier_simple.tflite"
    outputNetFile = "./probability_colour_cnn_classifier_simple_am57_net.bin"
    outputParamsFile = "./probability_colour_cnn_classifier_simple_am57_param.bin"
    inNumChannels = 3
    inWidth = 100
    inHeight = 100
    
    
    inElementType = 1
    rawSampleInData = 1
    sampleInData = "./img_0.png"
    tidlStatsTool = "eve_test_dl_algo_ref.out"
    

    I am suspicious of numParamBits, since this model was trained on floating-point data?
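
    For intuition on what numParamBits controls, here is a toy sketch of symmetric fixed-point quantization (this is not TIDL's exact scheme, just an illustration of why more parameter bits means less rounding error on floating-point weights):

```python
def quantize(weights, bits):
    # Map [-max|w|, +max|w|] onto signed integers of the given bit width,
    # then back to floats; illustrative only, not TIDL's actual scheme.
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in weights]

weights = [0.013, -0.42, 0.9987, -0.731]  # made-up example weights
for bits in (8, 12):
    q = quantize(weights, bits)
    err = max(abs(w - v) for w, v in zip(weights, q))
    print(f"{bits:2d} bits: worst-case error {err:.6f}")
```

    Doubling into 12 bits shrinks the representable step by a factor of 16, so quantization error alone should not explain a model that always outputs the same label.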

    Thanks! 

    Alex 

  • If you supply a "png" file as sample input, "rawSampleInData" should be 0.  
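
    Alternatively, if rawSampleInData = 1 is kept, the PNG would first have to be decoded into raw bytes in the layout the importer expects; my understanding (an assumption, please check the import tool documentation) is channel-planar uint8. A hypothetical pure-Python sketch of the interleaved-to-planar step, with dummy pixel data standing in for a decoded 100x100 RGB PNG:

```python
# Hypothetical interleaved (HWC) -> planar (CHW) conversion for a raw
# sample-input file. The planar layout is an assumption about what the
# TIDL importer expects; the pixel values here are dummy data.
W, H, C = 100, 100, 3

# Dummy interleaved pixels: R, G, B, R, G, B, ...
interleaved = bytes(i % 256 for i in range(W * H * C))

# Gather each channel's bytes contiguously: all R, then all G, then all B.
planar = bytes(
    interleaved[p * C + c]
    for c in range(C)
    for p in range(W * H)
)

assert len(planar) == W * H * C
with open("sample_in.raw", "wb") as f:
    f.write(planar)
print("wrote", len(planar), "bytes")
```

    With a real image you would decode the PNG first and feed its pixel buffer through the same reordering.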

    -Yuan

  • I tried changing "rawSampleInData" to 0 - still didn't work. 

    It also won't run the tidlStatsTool during model conversion.

    Alex 

  • Hi 

    I am still having trouble performing the conversion on my networks to yield something that works. 

    I have trained a completely new network this morning, based on the MNIST clothing dataset (Fashion-MNIST), with 28x28 images.

    I have saved the first 12 images from the verification set as png format to test my network on using the BBAI. 

    When I run the network on the ARM core only, it works fine. When I run the converted network, it always returns the same label, in this case label #0, with a probability of correctness no higher than 0.2.

    I have included a .zip of the models, label file and conversion scripts. 

    tidl_test_examples.zip

    I have tried a number of different network topologies and model_configs but no model works after being converted for TIDL acceleration. 

    Thanks 

    Alex