This thread has been locked.

TDA4VM: TIDL: data format

Part Number: TDA4VM

Hi, 

According to this documentation, the only supported image formats are RGB, BGR and RGBmax.

Our camera and images are grayscale.

Which format should I use?

  • Hi,

       You can convert your input data to raw binary and use inFileFormat = 1 to feed it to TIDL. Please refer to the inFileFormat and inData variables in the following documentation:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/08_02_00_05/exports/docs/tidl_j721e_08_02_00_11/ti_dl/docs/user_guide_html/md_tidl_sample_test.html#tidl_inference_2
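    For a grayscale camera this just means writing the pixel buffer to disk with no header. A minimal sketch (assuming uint8 pixels and a hypothetical 128x80 frame; the real data must already match the width/height/channels the network expects, since TIDL reads the file as-is):

```python
import numpy as np

# Stand-in for a 128x80 grayscale camera frame (uint8, no header).
frame = np.random.randint(0, 256, size=(128, 80), dtype=np.uint8)

# Write raw binary exactly as TIDL will read it with inFileFormat = 1.
frame.tofile("frame_128x80_gray.bin")

# Sanity check: the file holds width * height bytes.
assert np.fromfile("frame_128x80_gray.bin", dtype=np.uint8).size == 128 * 80
```

    The resulting .bin path is then what the list file referenced by inData would point at.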

    Regards,

    Anshu 

  • Hi,

    And should I still use inDataFormat = 1?

    I am asking now about the TIDL importer.

  • Hi,

        inDataFormat becomes irrelevant when you use inFileFormat = 1: in this case no pre-processing is done on the input, and it is fed as-is to the TIDL network.

    Regards,

    Anshu

  • Hi,

    According to this, the TIDL library supports the following versions:

    • Caffe - 0.17 (caffe-jacinto in gitHub)
    • Tensorflow - 1.12
    • ONNX - 1.3.0 (opset 8/9)
    • TFLite - Tensorflow 2.0-Alpha

    Is this correct? Do you have a more up-to-date version that supports TensorFlow 1.14?

    How do I know which versions are supported by the library?

    Thanks,

    Oren

  • Hi Oren,

         As mentioned in the documentation, in most cases models from newer versions will also work, since basic operations like convolution, pooling, etc. don't change. In case something has changed, the import tool parser needs to be updated accordingly. The design for this is available here:

    software-dl.ti.com/.../md_tidl_fsg_import_tool_design.html

    Regards,

    Anshu

  • Hi Anshu,

    Thanks for the reply.

    tidl_model_import.out always returns a DIM error for all converted models.

    Here are examples of the configuration files we use:

    for pb
    ------
    modelType = 1
    inputNetFile = /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    outputNetFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    outputParamsFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    inDataNorm = 0
    inFileFormat = 1
    inData = seatbelt_images_list.txt

    tidl_model_import.out models/seatbelt/seatbelt_config_tf.txt
    TF Model (Proto) File : /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    TIDL Network File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    TIDL IO Info File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    ****************************************************
    ** All the Input Tensor Dimensions has to be greater then Zero
    ** DIM Error - For Tensor 0, Dim 0 is 0
    ****************************************************

    for onnx
    --------
    modelType = 2
    inputNetFile = /dms_app/models/seatbelt/onnyx/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314-onnx
    outputNetFile = /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl
    outputParamsFile = /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl.params
    inDataNorm = 0
    inFileFormat = 1

    ONNX Model (Proto) File : /dms_app/models/seatbelt/onnyx/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314-onnx
    TIDL Network File : /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl
    TIDL IO Info File : /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl.params
    Current ONNX OpSet Version : 15
    ****************************************************
    ** All the Input Tensor Dimensions has to be greater then Zero
    ** DIM Error - For Tensor 0, Dim 0 is 0
    ****************************************************

    We are converting from Keras to pb and ONNX.

    Do you know of a proven converter that works with your tidl_model_import.out?

    BTW, the link you sent is for running tests; we need to import the models first.

    Thanks

    Oren

  • I have also attached an image that describes all the pb files used in the above example.

  • Hi Oren,

       Can you share the import configuration file? It looks like you are not setting the input tensor dimensions correctly.

    Regards,

    Anshu

  • Oren,

        I just noticed that you probably shared the import configuration in your previous reply. Can you set the following parameters to inform the import tool of the input resolution:

    inWidth = ?
    inHeight = ?
    inNumChannels = ?

    Regards,

    Anshu

  • Sure, attaching two files, one for ONNX and one for TensorFlow.

    I will set the values, but according to the document they are not mandatory.

    modelType           = 1
    inputNetFile        = /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    outputNetFile       = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    outputParamsFile    = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    inDataNorm          = 0
    inFileFormat        = 1
    inData              = seatbelt_images_list.txt 
    modelType           = 2
    inputNetFile        = /dms_app/models/seatbelt/onnyx/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314-onnx
    outputNetFile       = /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl
    outputParamsFile    = /dms_app/models/seatbelt/tidl/seatbelt_onnyx.tidl.params
    inDataNorm          = 0
    inFileFormat        = 1
    inData              = seatbelt_images_list.txt 

  • Hi,

    I have used the below configuration:

    modelType = 1
    inputNetFile = /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    outputNetFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    outputParamsFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    inWidth = 80
    inHeight = 128
    inNumChannels = 1
    inDataNorm = 0
    inFileFormat = 1
    inData = /dms_app/models/seatbelt/seatbelt_images_list.txt

    Now I get the below error:

    TIDL ALLOWLISTING LAYER CHECK: [TIDL_InnerProductLayer] y/MatMul input shape of inner product must be 1x1x1xN.
    TIDL ALLOWLISTING LAYER CHECK: [TIDL_InnerProductLayer] width/MatMul input shape of inner product must be 1x1x1xN.
    TIDL ALLOWLISTING LAYER CHECK: [TIDL_InnerProductLayer] height/MatMul input shape of inner product must be 1x1x1xN.
    TIDL ALLOWLISTING LAYER CHECK: TIDL_E_QUANT_STATS_NOT_AVAILABLE] tidl_quant_stats_tool.out fails to collect dynamic range. Please look into quant stats log. This model will get fault on ta

    Should I set outDataNamesList? How do I set the sizes? Can it be done automatically?

  • Another question:

    The importer is in /dms_app/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/out and it is looking for:

    Couldn't open tidlStatsTool file: ../../test/PC_dsp_test_dl_algo.out
    Couldn't open tidlStatsTool file: ../../test/PC_dsp_test_dl_algo.out

    which exists somewhere else. How can I set the path correctly?

  • Oren,

    I will set the values, but according to the document they are not mandatory.

       They are not mandatory if they are already present in the model, but it looks like in this case the model doesn't contain resolution information.

    TIDL ALLOWLISTING LAYER CHECK: [TIDL_InnerProductLayer] y/MatMul input shape of inner product must be 1x1x1xN.

        Can you tell what kind of reshape operator is present before the inner product? 

    Couldn't open tidlStatsTool file: ../../test/PC_dsp_test_dl_algo.out
    Couldn't open tidlStatsTool file: ../../test/PC_dsp_test_dl_algo.out

    which exists somewhere else. How can I set the path correctly?

       This file is present at tidl_j721e_08_02_00_11/ti_dl/test/PC_dsp_test_dl_algo.out (can you check if this file is present in your installation?). By default, paths are relative to the import tool location:

    tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/

    Regards,

    Anshu

  • Yes, it exists in 3 locations:
    /dms_app/models_builder/edgeai-tidl-tools/tidl_tools/PC_dsp_test_dl_algo.out
    /dms_app/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/tidl_tools/PC_dsp_test_dl_algo.out
    /dms_app/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/test/PC_dsp_test_dl_algo.out

    but not in the relative path.

  • Oren,

        The ../../test/PC_dsp_test_dl_algo.out path looks right relative to tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/. From that location you should find PC_dsp_test_dl_algo.out. Can you confirm from which directory you are running the import tool? The expectation is to run it from the tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/ directory.


    Regards,

    Anshu

  • Hi,

    This is the directory structure:

    ./ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/out/tidl_model_import.out
    ./ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/test/PC_dsp_test_dl_algo.out

    This will not work: ../../test/PC_dsp_test_dl_algo.out

    This will work: ../../../test/PC_dsp_test_dl_algo.out

    Can I change it myself?

    I am using ti-processor-sdk-rtos-j721e-evm-08_02_00_05.tar.gz.

  • Oren,

         You need to run the import tool from tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/ as below:

    ./out/tidl_model_import.out <import config file>

    Regards,
    Anshu
  • I have copied the folder.

    About 

    TIDL ALLOWLISTING LAYER CHECK: [TIDL_InnerProductLayer] y/MatMul input shape of inner product must be 1x1x1xN.

        Can you tell what kind of reshape operator is present before the inner product? 

    The algo team responded:

    We don't have any reshape explicitly defined in the model.

  • Oren,

        As per the snapshot of the model which you shared earlier, there is a reshape layer before the inner product layer.


    Regards,

    Anshu

  • Hi,

    It was added during the freeze/pb.

    I am attaching the details of the reshape.

  • Oren,

       Can you share the generated model visualization output of this network? You can refer to the following documentation in case you are not familiar with it:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/08_02_00_05/exports/docs/tidl_j721e_08_02_00_11/ti_dl/docs/user_guide_html/md_tidl_fsg_model_visualization.html

    Regards,

    Anshu

  • Hi,

    I used an old model. 

  • Above is the new model.

    Now I get different errors:
    TF Model (Proto) File : /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    TIDL Network File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    TIDL IO Info File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    ****************************************************
    ** All the Input Tensor Dimensions has to be greater then Zero
    ** DIM Error - For Tensor 26, Dim 1 is -1
    ****************************************************

    This is the configuration:
    modelType = 1
    inputNetFile = /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.pb
    outputNetFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    outputParamsFile = /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    inWidth = 80
    inHeight = 128
    inNumChannels = 1
    inDataNorm = 0
    inFileFormat = 1
    inData = /dms_app/models/seatbelt/seatbelt_images_list.txt

    Do you have a recommended tool for conversion between Keras (h5) and ONNX or pb? I think a better conversion tool will solve our problems.

    Thanks,
    Oren

  • Hi Anshu,

    The optimize_for_inference.py file is not part of the ti-processor-sdk-rtos-j721e-evm-08_02_00_05.tar.gz.

    Is this the right file? According to the doc you sent, it should be part of the delivery and should be at this location: ti_dl/test/testvecs/models/public/tensorflow/mobilenet_v2

    Thanks,

    Oren

  • Oren,

      optimize_for_inference.py is not part of the TIDL software. It comes as part of TensorFlow; the same is mentioned in the docs, as can be seen from the below snapshot of the documentation:

    Regards,

    Anshu

  • Hi Anshu,

    I have taken the optimize_for_inference.py from the above location and ran the below command:

    python3 optimize_for_inference.py --input=models/seatbelt/seatbelt_ambarella.ckpt.pb --output=models/seatbelt/seatbelt_ambarella.ckpt.frozen.pb --input_names="input" --output_names="segMaskReg/Sigmoid"

    Then I reran tidl_model_import.out with the frozen pb.
    I got the below results:
    TF Model (Proto) File : /dms_app/models/seatbelt/seatbelt_ambarella.ckpt.frozen.pb
    TIDL Network File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl
    TIDL IO Info File : /dms_app/models/seatbelt/tidl/seatbelt_ambarella.ckpt.pb.tidl.params
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    TF operator Conv2DBackpropInput is not suported now.. By passing
    TF operator Pack is not suported now.. By passing
    TF operator StridedSlice is not suported now.. By passing
    ****************************************************
    ** All the Input Tensor Dimensions has to be greater then Zero
    ** DIM Error - For Tensor 26, Dim 1 is -1
    ****************************************************
    BTW, I have spoken with our algo team, and they are using freeze_model.
    Would using it be better? Do you have a recommended tool to convert h5 to ONNX that you know works?
    Thanks,
    Oren
  • Oren,

       Not sure why the error you were getting earlier is different from what you are sharing now. Either way, can you try exporting the model to TFLite and see if it helps? Also, would it be possible for you to share a model/representative model where the issue is reproducible?

         Have you considered using the below option for import? It will improve ease of use for these scenarios.

    https://github.com/TexasInstruments/edgeai-tidl-tools

    Regards,

    Anshu

  • In case you want to try exporting to a TFLite model, you can refer to the following link:

    https://www.tensorflow.org/api_docs/python/tf/compat/v1/lite/TFLiteConverter#from_frozen_graph

    Regards,

    Anshu

  • Hi Jain,

    I have already tried it.

    But all the examples refer to the model zoo... they download it and then run it.

    Which does not help us.

    I didn't see any documentation that explains how to take our own model and convert it to TIDL.

    I have tried running edgeai-tidl-tools/examples/osrt_python/ort/onnxrt_ep.py, but as I said, it uses models from the zoo which are already converted (as far as I understand).

    The README.md does not give a bring-your-own-model example.

    Unless onnxrt_ep.py does just that, but I didn't understand that from the documentation.

  • Hi Anshu,

    I have converted it using the example on the site:

    import tensorflow as tf

    model = tf.keras.models.load_model('keras/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5')
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    open("SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite", "wb").write(tflite_model)

    This is the configuration file i use:

    modelType = 3
    inputNetFile = /dms_app/models/seatbelt/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite
    outputNetFile = /dms_app/models/seatbelt/tidl/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite.tidl
    outputParamsFile = /dms_app/models/seatbelt/tidl/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite.tidl.params
    inWidth = 80
    inHeight = 128
    inNumChannels = 1
    inDataNorm = 0
    inFileFormat = 1
    inData = /dms_app/models/seatbelt/seatbelt_images_list.txt

    This is how I run the converter:

    cd /dms_app/ti-processor-sdk-rtos-j721e-evm-08_02_00_05/tidl_j721e_08_02_00_11/ti_dl/utils/tidlModelImport/out ; ./tidl_model_import.out /dms_app/models/seatbelt/seatbelt_config_tf.txt ; cd

    I get a segmentation fault:

    TFLite Model (Flatbuf) File : /dms_app/models/seatbelt/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite
    TIDL Network File : /dms_app/models/seatbelt/tidl/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite.tidl
    TIDL IO Info File : /dms_app/models/seatbelt/tidl/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314.h5.tflite.tidl.params
    41
    Segmentation fault (core dumped)

  • Hi Oren,

    I see there is "Shape- Strided Slice- Pack" combination of layers in the model. It is caused due to dynamic batch size in the tflite model. In case it is not intentional, can you refer to below link and see if you can remove this combination?

    https://github.com/tensorflow/tensorflow/issues/43882#issuecomment-731636562
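    For reference, a minimal sketch of the fix (using a toy Keras model as a stand-in, not the actual seatbelt network): pinning the batch dimension to 1 at export time keeps dynamic (None/-1) dimensions, and hence the Shape/StridedSlice/Pack pattern, out of the TFLite graph.

```python
import tensorflow as tf

# Toy stand-in model; the key detail is batch_size=1 on the input layer,
# so no dimension is left dynamic in the exported graph.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(128, 80, 1), batch_size=1),
    tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("fixed_batch_model.tflite", "wb").write(tflite_model)
```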

    As for running your model in edgeai_tidl_tools (https://github.com/TexasInstruments/edgeai-tidl-tools), you can set the model you want to run here: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/a72b0b1b28de663a2fcbe8d40cd09b4bf4e7e3d3/examples/osrt_python/ort/onnxrt_ep.py#L224

    And specify the corresponding config here: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/a72b0b1b28de663a2fcbe8d40cd09b4bf4e7e3d3/examples/osrt_python/common_utils.py#L347

    Please refer to the available configs to add new one. 

    You can specify the "model_path" (path where your tflite model is placed) as part of this config, and then run the onnxrt_ep.py script to compile the model.

    Regards,

    Anand

  • Btw, since the model is converted to TFLite now, use tflrt_delegate.py as above, not onnxrt_ep.py.

  • Hi Anand,

    Does onnxrt_ep.py do the same as tidl_model_import.out? According to the code, it downloads the model from the zoo.

    Thanks,

    Oren

  • Hi Oren,

    "python3 onnxrt_ep.py -c" is equivalent to running tidl_model_import.out, but using the ONNX Runtime interface. If you have the model available at the specified 'model_path', as I explained above, it will use the existing model.

    Regards,
    Anand

  • Hi,

    I ran LD_LIBRARY_PATH="/dms_app/tidl_tools/" python3 onnxrt_ep.py --compile with our ONNX models.

    I got the below error:

    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "onnxrt_ep.py", line 219, in run_model
        sess = rt.InferenceSession(config['model_path'] ,providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
      File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
        self._create_inference_session(providers, provider_options)
      File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 310, in _create_inference_session
        sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /dms_app/cipia/seatbelt/SeatbeltSegmentationUNet_128x80_6ch_v9_20210609T195314-onnx failed:/home/a0230315/workarea/onnxrt/onnxruntime/onnxruntime/core/graph/model.cc:111 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&) Unknown model file format version.

    My script is able to run with your models.

  • Hi Oren,

    Can you please share your model? I can give a try on my end and check for the issue.

    Regards,

    Anand

  • Hi Oren,

    Please try exporting your model with opset version 11 and then running compilation and inference.

    Regards,

    Anand