PROCESSOR-SDK-TDAX: Problems with the TIDL Keras/Tensorflow model conversion

Part Number: PROCESSOR-SDK-TDAX

Hello TI team,

I am using the TI model conversion tool (tidl_model_import.out.exe) to convert a Keras model, saved as a TensorFlow *.pb model, to the TI-compatible format.

The conversion from Keras to TensorFlow is done using a script similar to the one at this link: https://github.com/amir-abdi/keras_to_tensorflow/blob/master/keras_to_tensorflow.py
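For reference, the core of that conversion can be sketched as below. This is a hedged sketch assuming the TF 1.x graph/session API used in this thread; the function name and arguments are my own illustration, not part of the linked script or the TI tooling.

```python
# Sketch: freeze a graph holding trained weights into a binary
# TensorFlow *.pb, in the spirit of the keras_to_tensorflow script
# linked above. Assumes TF 1.x-style sessions; names are illustrative.
import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io


def freeze_session_graph(sess, output_node_names, out_dir, out_name):
    """Replace all variables reachable from the given output nodes with
    constants and serialize the result as a binary GraphDef (*.pb)."""
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names)
    graph_io.write_graph(frozen, out_dir, out_name, as_text=False)
    return frozen
```

With a Keras model, `sess` would be the Keras backend session and `output_node_names` would be `[out.op.name for out in model.outputs]`.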

I have the following problems:

1) I'm getting this error message when running tidl_model_import.out.exe:

[libprotobuf FATAL D:\work\vision\CNN\protobuf\protobuf-cpp-3.2.0rc2\protobuf-3.2.0rc2\src\google/protobuf/repeated_field.h:1418] CHECK failed: (index) < (current_size_):

which was also reported in this post:

https://e2e.ti.com/support/arm/automotive_processors/f/1021/t/638055?tisearch=e2e-sitesearch&keymatch=libprotobuf

In my case, however, the difference in the model definition should be related to the TensorFlow version (which defines the saved fields). I am using TensorFlow 1.1, which seems to have its own compiled protobuf readers/writers. Which version of TensorFlow did TI use to validate the model conversion tool? Or do you think the error's cause lies somewhere else?

2) The input to the net is usually an image, but it seems that you are using *.y files as input. How is the .y file generated?

3) As I understand it, the quantStatsTool performs an emulation on the PC in order to test the output of the model conversion tool. There is eve_test_dl_algo.out.exe for EVE, but I did not find the corresponding DSP tool in the Vision SDK.

Thank you very much in advance,
Safwan

  • Hi Safwan,

    I have forwarded your question to TIDL experts.

    Regards,
    Yordan
  • Hi Safwan,
    1) We used TensorFlow 1.0 for the TIDL model import. We have validated InceptionNetV1 and mobilenet_1.0 from the GitHub repo below (refer to the user guide and datasheet for more information): github.com/.../slim
    The source code of the import tool is available as part of the TIDL release. Please re-compile the import tool with updated protobuf readers/writers (generate the C files from the *.proto files) from TensorFlow if your model is stored in a newer format.

    2) The *.y file used in our example is a raw 8-bit RGB file (a separate plane for each channel); ignore the extension. Our import tool also supports JPEG and PNG files; please refer to the other import config files.
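To make the planar layout concrete, here is a minimal sketch (not TI's tool; the function names and byte-level handling are my own illustration) that converts an interleaved 8-bit RGB image into the "one plane per channel" raw layout described above:

```python
# Minimal sketch: write interleaved 8-bit RGB pixels as a planar raw
# file (all R bytes, then all G bytes, then all B bytes), matching the
# "separate plane per channel" *.y layout described above.
# Function names and I/O handling are illustrative, not TI's tool.

def rgb_interleaved_to_planar(pixels, width, height):
    """pixels: bytes of length width*height*3 in RGBRGB... order.
    Returns bytes with the three channel planes concatenated."""
    assert len(pixels) == width * height * 3
    planes = [bytearray(), bytearray(), bytearray()]
    for i in range(0, len(pixels), 3):
        for c in range(3):          # 0 = R, 1 = G, 2 = B
            planes[c].append(pixels[i + c])
    return bytes(planes[0] + planes[1] + planes[2])


def write_y_file(path, pixels, width, height):
    """Dump the planar bytes to a raw file (the extension is arbitrary)."""
    with open(path, "wb") as f:
        f.write(rgb_interleaved_to_planar(pixels, width, height))
```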

    3) The PC emulation for EVE and DSP produces identical results, so eve_test_dl_algo.out.exe can be used for both EVE and DSP.

    Thanks and Regards,
    Kumar.D
  • Hi Kumar,

    Thank you very much for your detailed reply.

    I removed my TensorFlow version and installed TensorFlow 1.0 in order to be compatible with what TI has already validated. Unfortunately, the same error message appeared.

    Thanks and Regards

    Safwan

  • Hi Kumar,

    Unfortunately, I did not get the tool running.

    I think it would be helpful if you could tell me how you converted the checkpoint file (for instance the InceptionNetV1 checkpoint on which you validated the tool) to a binary protobuf file.

    thanks and regards
    Safwan
  • Hi Safwan,

    Please refer to the file below to convert a ckpt to a frozen graph:

    https://gist.github.com/StanislawAntol/656e3afe2d43864bb410d71e1c5789c1

    Then run "tensorflow\python\tools\optimize_for_inference.py" on the output of the above step.

    Example: python "tensorflow\python\tools\optimize_for_inference.py" --input=mobilenet_v1_1.0_224.pb --output=mobilenet_v1_1.0_224_final.pb --input_names=input --output_names="softmax/Softmax"
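For completeness, the same optimization pass can also be invoked programmatically through the library behind that script; this is a sketch assuming the TF 1.x-era module layout, with the node names taken from the example command above and the wrapper function being my own illustration.

```python
# Sketch: apply the optimize_for_inference pass programmatically instead
# of via the command-line script. The wrapper name is illustrative; node
# names follow the mobilenet example above.
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib


def optimize_graph(graph_def, input_names, output_names):
    """Run the inference-time graph cleanups (e.g. stripping
    training-only nodes) that optimize_for_inference.py performs."""
    return optimize_for_inference_lib.optimize_for_inference(
        graph_def,
        input_names,
        output_names,
        tf.float32.as_datatype_enum)  # dtype of the input placeholder
```

For the mobilenet example above, this would be called as `optimize_graph(graph_def, ["input"], ["softmax/Softmax"])`.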

    Regards,

    Kumar.D

  • Hello Kumar,

    Your response was helpful. Without the optimize_for_inference.py script, the import tool generates the protobuf error; when I used it, the error disappeared, which is a positive sign. I think this should be mentioned in the tool's user guide.

    Nevertheless, I got other error messages in the log file about unsupported operators. In order to simplify things, I built the simplest possible neural net:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    conv2d_1 (Conv2D)            (None, 225, 225, 16)      448       
    _________________________________________________________________
    activation_1 (Activation)    (None, 225, 225, 16)      0         
    _________________________________________________________________
    flatten_1 (Flatten)          (None, 810000)            0         
    _________________________________________________________________
    dense_1 (Dense)              (None, 1)                 810001    
    _________________________________________________________________
    activation_2 (Activation)    (None, 1)                 0         
    =================================================================
    Total params: 810,449
    Trainable params: 810,449
    Non-trainable params: 0

    This is a Keras model which was converted to a TensorFlow graph.
    The graph was then frozen and saved (in a similar way to what is done in the link you posted for Inception).
    Afterwards, optimize_for_inference.py was applied (without this step, one gets the protobuf exception).
    The TIDL import tool now outputs the following log file:

    TF Model File : my.pb
    Op Type Shape is Not suported will be By passed
    Op Type StridedSlice is Not suported will be By passed
    Op Type Prod is Not suported will be By passed
    Op Type Pack is Not suported will be By passed
    Could not find the requested input Data : sequential_1/flatten_1/stack/0 !!

    Regards and Thanks

    Safwan


  • Thanks Safwan,

    We will mention optimize_for_inference in the user's guide.

    Regarding Keras-generated TF models: we have not validated such models. We recommend using TF-slim based models for TIDL import. Please refer to InceptionNetV1 and mobilenet_1.0 from the link below as examples for building your models:

    github.com/.../slim.

    Thanks and Regards,

    Kumar.D