WEBENCH® Tools/PROCESSOR-SDK-DRA8X-TDA4X: TIDL import tool and depthwise separable convolution layers

Part Number: PROCESSOR-SDK-DRA8X-TDA4X

Tool/software: WEBENCH® Design Tools

Hi,

I am trying to convert our TensorFlow model, but the actual .bin TIDL model is not created.

I get the following error: "DW Convolution with Depth multiplier > 1 is not suported now"

We tried to import the original TensorFlow model and found the following discrepancies:

- We downloaded the original model from your official site: ssd_mobilenet_v1_coco_2018_01_28, with a depth multiplier of 1.

- Under the path ti_dl/testvecs/config/import/public/tensorflow/ we cannot find any configuration file for this model.

When will depthwise convolution layers with a depth multiplier greater than or equal to 1 be supported by the TIDL import tool?

  • Hi,

    You can add a 1x1 convolution layer after the depthwise convolution layer to produce results equivalent to a single depthwise convolution with multiplier > 1.
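
    A minimal tf.keras sketch of that rewrite (the channel count, multiplier, and input size below are illustrative, not taken from the actual model; the replacement matches output shapes but is a different parameterization, so it generally needs retraining or a weight conversion rather than a direct copy):

    import tensorflow as tf

    C, M = 32, 2  # illustrative channel count and desired depth multiplier

    # Rejected by the importer when M > 1:
    dw_m = tf.keras.layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=M, padding='same')

    # Importable replacement: depthwise with multiplier 1, then a 1x1 pointwise
    # convolution that expands the channel count to C * M.
    dw_1 = tf.keras.layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=1, padding='same')
    pw = tf.keras.layers.Conv2D(filters=C * M, kernel_size=1, padding='same')

    x = tf.keras.layers.Input(shape=(300, 300, C))
    y = pw(dw_1(x))  # same output shape as dw_m(x): (None, 300, 300, C * M)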

    Regards,

    Victor

    The import config file "tidl_import_mobileNetv1_0.75_ssd.txt" can be used for all three MobileNet SSD models. Just change the line below in the file for the corresponding model:

    #inputNetFile = "../../test/testvecs/models/public/tensorflow/mobilenet_v1_0.75_ssd/frozen_inference_graph.pb"
    #inputNetFile = "../../test/testvecs/models/public/tensorflow/mobilenet_v1_1.0_ssd/frozen_inference_graph_opt_1.pb"
    inputNetFile = "../../test/testvecs/models/public/tensorflow/mobilenet_v1_2.0_ssd/frozen_inference_graph_opt_1.pb"

    Regarding the depth multiplier in DWS convolution: in the next release we will support a depth multiplier when the number of input channels is 1. For other cases the TIDL importer would need to be updated, and this is not planned. We recommend using multiple DWS convolutions and concatenating the outputs for better runtime, as in the sketch below.
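
    A minimal sketch of that decomposition (a hypothetical helper, not TIDL code): a depthwise kernel of shape [kh, kw, C, M] is split into M multiplier-1 depthwise convolutions whose outputs are concatenated. Note the resulting channel order groups by multiplier slice rather than interleaving per input channel, so downstream weights must be permuted to match.

    import tensorflow as tf

    def split_depthwise(x, kernel, strides=(1, 1, 1, 1), padding='SAME'):
        # x: NHWC input; kernel: [kh, kw, C, M] depthwise filter with multiplier M.
        m = int(kernel.shape[-1])
        outs = [tf.nn.depthwise_conv2d(x, kernel[:, :, :, i:i + 1],
                                       strides=strides, padding=padding)
                for i in range(m)]
        return tf.concat(outs, axis=-1)  # [N, H, W, C * M], grouped by slice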

  • tidl_import_mobileNetv1_0.75_ssd.txt
    modelType          = 1
    numParamBits       = 8
    numFeatureBits     = 8
    quantizationStyle  = 2
    inputNetFile      = "/opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb"
    #inputNetFile      = "../../test/testvecs/models/public/tensorflow/mobilenet_v1_1.0_ssd/frozen_inference_graph_opt_1.pb"
    #inputNetFile      = "../../test/testvecs/models/public/tensorflow/mobilenet_v1_2.0_ssd/frozen_inference_graph_opt_1.pb"
    outputNetFile      = "../../test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v1_0.75_224_ssd.bin"
    outputParamsFile   = "../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v1_0.75_224_ssd_"
    inDataNorm  = 1
    inMean = 128 128 128
    inScale =  0.0078125 0.0078125 0.0078125
    inWidth  = 300
    inHeight = 300 
    inNumChannels = 3
    inDataNamesList = "Preprocessor/sub"
    outDataNamesList = "BoxPredictor_0/BoxEncodingPredictor/BiasAdd,BoxPredictor_0/ClassPredictor/BiasAdd,BoxPredictor_1/BoxEncodingPredictor/BiasAdd,BoxPredictor_1/ClassPredictor/BiasAdd,BoxPredictor_2/BoxEncodingPredictor/BiasAdd,BoxPredictor_2/ClassPredictor/BiasAdd,BoxPredictor_3/BoxEncodingPredictor/BiasAdd,BoxPredictor_3/ClassPredictor/BiasAdd,BoxPredictor_4/BoxEncodingPredictor/BiasAdd,BoxPredictor_4/ClassPredictor/BiasAdd,BoxPredictor_5/BoxEncodingPredictor/BiasAdd,BoxPredictor_5/ClassPredictor/BiasAdd"
    metaArchType = 1
    metaLayersNamesList = "/opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config"
    inData  =   "../../test/testvecs/config/detection_list.txt"
    postProcType = 2
    perfSimConfig = ../../test/testvecs/config/import/perfsim_base.cfg
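
    For reference: with inDataNorm = 1 the values above normalize each input pixel as (p - inMean) * inScale, i.e. (p - 128) / 128, which maps [0, 255] roughly onto [-1, 1], matching the usual MobileNet SSD preprocessing. This is my reading of the parameters (the TIDL user guide is the authority); a quick check of the arithmetic:

    import numpy as np

    pixels = np.array([0, 128, 255], dtype=np.float32)
    print((pixels - 128.0) * 0.0078125)  # [-1.  0.  0.9921875]; 0.0078125 == 1/128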

    For ssd_mobilenet_v1_1.0 SSD I get the following.

    After I updated the tidl_import_mobileNetv1_0.75_ssd.txt file, I get the following error and the models were not created (the application aborted):

    TIDL IO Info File      : ../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v1_0.75_224_ssd_  
     TF operator Const is not suported now..  By passing
    [libprotobuf FATAL /data/adasuser_bangvideoapps02/kumar/tidl_tools/protobuf-3.5.1/src/google/protobuf/repeated_field.h:1522] CHECK failed: (index) < (current_size_):
    terminate called after throwing an instance of 'google::protobuf::FatalException'
      what():  CHECK failed: (index) < (current_size_):
    Aborted (core dumped)

    Full log file:

    TF Meta PipeLine (Proto) File  : /opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config  
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 159:3: text format contains deprecated field "from_detection_checkpoint"
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 169:3: text format contains deprecated field "num_examples"
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 170:3: text format contains deprecated field "max_evals"
    num_classes : 90
    y_scale : 10.000000
    x_scale : 10.000000
    w_scale : 5.000000
    h_scale : 5.000000
    num_keypoints : 5.000000
    score_threshold : 0.300000
    iou_threshold : 0.600000
    max_detections_per_class : 100
    max_total_detections : 100
          scales, height_stride, width_stride, height_offset, width_offset
       0.2000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.3500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.5000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.6500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.8000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.9500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
    aspect_ratios
       1.0000000
       2.0000000
       0.5000000
       3.0000000
       0.3333000
    TF Model (Proto) File  : /opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb  
    TIDL Network File      : ../../test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v1_0.75_224_ssd.bin  
    TIDL IO Info File      : ../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v1_0.75_224_ssd_  
     TF operator Const is not suported now..  By passing
    [libprotobuf FATAL /data/adasuser_bangvideoapps02/kumar/tidl_tools/protobuf-3.5.1/src/google/protobuf/repeated_field.h:1522] CHECK failed: (index) < (current_size_):
    terminate called after throwing an instance of 'google::protobuf::FatalException'
      what():  CHECK failed: (index) < (current_size_):
    Aborted (core dumped)

    For ssd_mobilenet_v2 SSD I get the following error:

    alex-linux@LinuxV18:/opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/utils/tidlModelImport$ ./out/tidl_model_import.out /opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/config/import/public/tensorflow/tidl_import_mobileNetv1_0.75_ssd.txt
    TF Meta PipeLine (Proto) File  : /opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config  
    [libprotobuf ERROR google/protobuf/text_format.cc:288] Error parsing text-format object_detection.protos.TrainEvalPipelineConfig: 35:27: Message type "object_detection.protos.SsdFeatureExtractor" has no field named "batch_norm_trainable".
    ERROR: google::protobuf::TextFormat::Parse proto file(/opt/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/test/testvecs/models/public/tensorflow/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config) FAILED !!!


  • Hi Alex,

    Can you try with the "pipeline.config" used for the 0.75 MobileNet SSD?

    I also noticed that you are following up on the 0.75 SSD in the thread below.

    Can we track both of these together?

    Regards,

    Kumar.D

    Yes, I first tried with the "pipeline.config" used for the 0.75 MobileNet SSD and got the same error.

  • Could you please confirm whether you could get the default configuration working?

    If not, did you follow Note 6 in the link below?

    http://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tidl_j7_01_00_01_00/ti_dl/docs/user_guide_html/md_tidl_models_info.html

  • Hello,

    I am having the same problem with SSD-MobileNetV1. I got the 0.75 version to work, but not the 1.0 version (with both the 0.75 and 1.0 pipeline.config files); I get this error: "TF operator Const is not suported now..  By passing".

    I also tried an optimized version with optimize_for_inference, but I get the same error.

    I am using the latest SDK on Jacinto 7 (TDA4VM).

    When I visualize both frozen graphs (0.75 and 1.0) in TensorBoard, I do indeed see that the 1.0 version uses 6 Const operators that I don't see in the 0.75 version (all the stack nodes at the right of the screenshot below).

    Is there a way to bypass this limitation (support of Const layer)?

  • Ideally, the optimize_for_inference script is expected to remove these const operators from the frozen graph.

    Can you specify only the required output names for this script with the argument below? After that, can you try opening the resulting .pb file in Netron to view it?

    --output_names="BoxPredictor_0/BoxEncodingPredictor/BiasAdd,BoxPredictor_0/ClassPredictor/BiasAdd,BoxPredictor_1/BoxEncodingPredictor/BiasAdd,BoxPredictor_1/ClassPredictor/BiasAdd,BoxPredictor_2/BoxEncodingPredictor/BiasAdd,BoxPredictor_2/ClassPredictor/BiasAdd,BoxPredictor_3/BoxEncodingPredictor/BiasAdd,BoxPredictor_3/ClassPredictor/BiasAdd,BoxPredictor_4/BoxEncodingPredictor/BiasAdd,BoxPredictor_4/ClassPredictor/BiasAdd,BoxPredictor_5/BoxEncodingPredictor/BiasAdd,BoxPredictor_5/ClassPredictor/BiasAdd"

    I did what you asked, and I don't see Const nodes in Netron or TensorBoard, but I still get the same error. I don't know where these consts come from.

    This is my optimize_for_inference command:

    python -m tensorflow.python.tools.optimize_for_inference \                 
               --input="/opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/frozen_inference_graph.pb"  \
               --output="/opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/frozen_inference_graph_final.pb"  \
               --input_names="Preprocessor/sub" \
               --output_names="BoxPredictor_0/BoxEncodingPredictor/BiasAdd,BoxPredictor_0/ClassPredictor/BiasAdd,BoxPredictor_1/BoxEncodingPredictor/BiasAdd,BoxPredictor_1/ClassPredictor/BiasAdd,BoxPredictor_2/BoxEncodingPredictor/BiasAdd,BoxPredictor_2/ClassPredictor/BiasAdd,BoxPredictor_3/BoxEncodingPredictor/BiasAdd,BoxPredictor_3/ClassPredictor/BiasAdd,BoxPredictor_4/BoxEncodingPredictor/BiasAdd,BoxPredictor_4/ClassPredictor/BiasAdd,BoxPredictor_5/BoxEncodingPredictor/BiasAdd,BoxPredictor_5/ClassPredictor/BiasAdd"

    And I am using TensorFlow 1.12.0, as reported in the TensorFlow GitHub mentioned in Note 6.

    This is the full output:

    /opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/utils/tidlModelImport$ ./out/tidl_model_import.out ../../test/testvecs/config/import/public/tensorflow/tidl_import_mobileNetv1_1.0_ssd.txt
    
    TF Meta PipeLine (Proto) File  : /opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/pipeline.config  
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 159:3: text format contains deprecated field "from_detection_checkpoint"
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 169:3: text format contains deprecated field "num_examples"
    [libprotobuf WARNING google/protobuf/text_format.cc:305] Warning parsing text-format object_detection.protos.TrainEvalPipelineConfig: 170:3: text format contains deprecated field "max_evals"
    num_classes : 90
    y_scale : 10.000000
    x_scale : 10.000000
    w_scale : 5.000000
    h_scale : 5.000000
    num_keypoints : 5.000000
    score_threshold : 0.300000
    iou_threshold : 0.600000
    max_detections_per_class : 100
    max_total_detections : 100
          scales, height_stride, width_stride, height_offset, width_offset
       0.2000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.3500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.5000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.6500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.8000000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
       0.9500000,   -1.0000000,   -1.0000000,   -1.0000000,   -1.0000000
    aspect_ratios
       1.0000000
       2.0000000
       0.5000000
       3.0000000
       0.3333000
    TF Model (Proto) File  : /opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/frozen_inference_graph_final.pb  
    TIDL Network File      : ../../test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v1_1.0_224_ssd_final.bin  
    TIDL IO Info File      : ../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v1_1.0_224_ssd_final_  
    
     TF operator Const is not suported now..  By passing
    [libprotobuf FATAL /data/adasuser_bangvideoapps02/kumar/tidl_tools/protobuf-3.5.1/src/google/protobuf/repeated_field.h:1522] CHECK failed: (index) < (current_size_): 
    terminate called after throwing an instance of 'google::protobuf::FatalException'
      what():  CHECK failed: (index) < (current_size_): 
    Aborted (core dumped)

    And this is the import config file:

    modelType          = 1
    numParamBits       = 8
    numFeatureBits     = 8
    quantizationStyle  = 2
    inputNetFile      = "/opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/frozen_inference_graph_final.pb"
    outputNetFile      = "../../test/testvecs/config/tidl_models/tensorflow/tidl_net_mobilenet_v1_1.0_224_ssd_final.bin"
    outputParamsFile   = "../../test/testvecs/config/tidl_models/tensorflow/tidl_io_mobilenet_v1_1.0_224_ssd_final_"
    inDataNorm  = 1
    inMean = 128 128 128
    inScale =  0.0078125 0.0078125 0.0078125
    inWidth  = 300
    inHeight = 300 
    inNumChannels = 3
    inDataNamesList = "Preprocessor/sub"
    outDataNamesList = "BoxPredictor_0/BoxEncodingPredictor/BiasAdd,BoxPredictor_0/ClassPredictor/BiasAdd,BoxPredictor_1/BoxEncodingPredictor/BiasAdd,BoxPredictor_1/ClassPredictor/BiasAdd,BoxPredictor_2/BoxEncodingPredictor/BiasAdd,BoxPredictor_2/ClassPredictor/BiasAdd,BoxPredictor_3/BoxEncodingPredictor/BiasAdd,BoxPredictor_3/ClassPredictor/BiasAdd,BoxPredictor_4/BoxEncodingPredictor/BiasAdd,BoxPredictor_4/ClassPredictor/BiasAdd,BoxPredictor_5/BoxEncodingPredictor/BiasAdd,BoxPredictor_5/ClassPredictor/BiasAdd"
    metaArchType = 1
    metaLayersNamesList = "/opt/TI/psdk_rtos_auto_j7_06_01_01_12/tidl_j7_01_00_01_00/ti_dl/test/testvecs/models/public/tensorflow/ssd-mobilenetv1_1.0/pipeline.config"
    inData  =   "../../test/testvecs/config/detection_list.txt"
    postProcType = 2
    perfSimConfig = ../../test/testvecs/config/import/perfsim_base.cfg
    

    Going back to this issue:

    When I check the network closely after optimize_for_inference, I see that there are Const operators in the last blocks (BoxPredictor/BoxEncodingPredictor and BoxPredictor/ClassPredictor), as shown in the screenshot below (_0__cf__3, _1__cf__4, _2__cf__5 and _3__cf__6).

    The problem is that these Const operators are present in both the 0.75 and 1.0 optimized networks, though for some reason the 0.75 import does not complain about them while the 1.0 import does.
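
    One way to hunt for the offending nodes is to walk the GraphDef and print Const nodes feeding anything other than an ordinary conv/bias op (a sketch against the TF 1.x API; the consumer filter and the file name are assumptions on my part, since the weights in a frozen graph are Const nodes too):

    import tensorflow as tf
    from collections import defaultdict

    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph_final.pb', 'rb') as f:  # path is illustrative
        graph_def.ParseFromString(f.read())

    # Map each node name to the (name, op) pairs that consume it.
    consumers = defaultdict(list)
    for node in graph_def.node:
        for inp in node.input:
            consumers[inp.split(':')[0].lstrip('^')].append((node.name, node.op))

    WEIGHT_USERS = {'Conv2D', 'DepthwiseConv2dNative', 'BiasAdd'}
    for node in graph_def.node:
        if node.op == 'Const':
            users = consumers.get(node.name, [])
            if any(op not in WEIGHT_USERS for _, op in users):
                print(node.name, '->', users)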

    Can you share the frozen graph used for this import?

    We will try to reproduce it at our end.

    We would like to test with the final .pb file that you are trying.

  • Here is the model I get from optimize_for_inference (couldn't attach it here):

    drive.google.com/open

    It looks like the model migration step mentioned in Note 6 of the link below is not successful in your setup.

    http://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/tidl_j7_01_00_01_00/ti_dl/docs/user_guide_html/md_tidl_models_info.html

    You are expected to get a fused BatchNorm instead of Mul and Add if the model migration is successful; please refer to the screenshot below.

    What is interesting is that my setup is able to generate a FusedBatchNorm for the 0.75 frozen graph, but not for the 1.0 version.

    Can you please send me the frozen graph you generated in the screenshot you sent, so I can work around this problem? Thank you.

    Youcef.

    The frozen graph is shared at the link below. Alex has permission to access this.

    https://cdds.ext.ti.com/ematrix/common/emxNavigator.jsp?objectId=28670.42872.30074.59788

  • Apparently I do not have access to this file!

    Error

    If you are seeing this message then you do not have a CDDS user account.

    Please contact your design team manager, program manager or reporting manager to submit a new user request on your behalf by navigating to CDDS My Desk -> User Management -> User Requests. Please request them to provide you access to needed data after your account has been created.

    Hope you have received the required information. Let us know if additional information is required.

    Yes, I did. I was also able to fix the problem by redoing the export and freezing of the graph from the original checkpoints, and I got the same inference output (the same bounding boxes for both graphs).

    What I really need now is a validation reference to compare against in a more rigorous way, such as the mAP metric. Do you have any validation numbers for object detection networks, and SSD in particular? And is the SDK capable of computing this metric, or an equivalent one, during post-processing?

    Thanks Kumar.

    No, we do NOT have code for the mAP metric. We write out the detections to a text file; the same can be used for metric computation.
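
    For what it's worth, a basic single-class average precision can be computed on top of such a dump. Below is a sketch assuming simple in-memory detection and ground-truth structures; parsing the TIDL text file into them is left out, since its exact format is not shown here.

    import numpy as np

    def iou(a, b):
        # a, b: [x1, y1, x2, y2] boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def average_precision(dets, gts, iou_thr=0.5):
        # dets: list of (image_id, score, box); gts: {image_id: [box, ...]}.
        dets = sorted(dets, key=lambda d: -d[1])
        matched = {img: [False] * len(boxes) for img, boxes in gts.items()}
        n_gt = sum(len(boxes) for boxes in gts.values())
        tp = np.zeros(len(dets))
        fp = np.zeros(len(dets))
        for i, (img, _, box) in enumerate(dets):
            best, best_j = 0.0, -1
            for j, g in enumerate(gts.get(img, [])):
                o = iou(box, g)
                if o > best:
                    best, best_j = o, j
            if best >= iou_thr and not matched[img][best_j]:
                tp[i] = 1
                matched[img][best_j] = True
            else:
                fp[i] = 1
        rec = np.cumsum(tp) / max(n_gt, 1)
        prec = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
        return float(np.trapz(prec, rec))  # trapezoidal AP approximation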

    Alright, thank you for your support.

    Regards,

    Youcef.