PROCESSOR-SDK-TDAX: TIDL model import issue: Image reading is Not Supported. OpenCV not Enabled

Part Number: PROCESSOR-SDK-TDAX

Hello,

I used an image (JPEG) file instead of a *.y file when importing the model with the TIDL tool, but I got this error: Image reading is Not Supported. OpenCV not Enabled

So, how can I convert the image file to *.y?
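In case it helps others: one way to produce a raw *.y file is to resize the image to the network's input size and dump the pixels as channel-planar 8-bit bytes. This is only a sketch under the assumption that the raw input is planar uint8 in BGR order with the dimensions from the prototxt's input_shape (3 x 112 x 96); the decode/resize step (Pillow) is an assumption too, not part of the TIDL tooling.

```python
import numpy as np

def image_to_raw_planar(img_hwc: np.ndarray, out_path: str) -> None:
    """Write an H x W x C uint8 image as C consecutive planes of H*W bytes."""
    assert img_hwc.dtype == np.uint8 and img_hwc.ndim == 3
    planar = np.ascontiguousarray(img_hwc.transpose(2, 0, 1))  # HWC -> CHW
    planar.tofile(out_path)

# Usage sketch (decode + resize to W=96, H=112, then RGB -> BGR):
# from PIL import Image
# img = np.asarray(Image.open("Aaron_Eckhart_0001.jpeg").resize((96, 112)))[:, :, ::-1]
# image_to_raw_planar(np.ascontiguousarray(img, dtype=np.uint8), "Aaron_Eckhart_0001.y")
```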

This is my import file:

# Default - 0
randParams = 0

# 0: Caffe, 1: TensorFlow, Default - 0
modelType = 0

# 0: Fixed quantization by training framework, 1: Dynamic quantization by TIDL, Default - 1
quantizationStyle = 1

# quantRoundAdd/100 will be added while rounding to integer, Default - 50
quantRoundAdd = 25

numParamBits = 8
# 0 : 8bit Unsigned, 1 : 8bit Signed, Default - 1
inElementType = 0

inputNetFile = "./nomeanfile_deploy .prototxt"
inputParamsFile = "./nomeanfile_sparse_iter_300000.caffemodel"
outputNetFile = "./tidl_net_resnet_jacintonet11v2.bin"
outputParamsFile = "./tidl_param_resnet_jacintonet11v2.bin"

#preProcType = 0
sampleInData = "./Aaron_Eckhart_0001.jpeg"
tidlStatsTool = "/usr/bin/eve_test_dl_algo_ref.out"

thanks

best regards

  • Hi,
    Please use the existing "eve_test_dl_algo.out.exe" executable from the "ti_dl\utils\quantStatsTool" folder for importing.
    Change "tidlStatsTool" in the import config file to point to it.
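    For reference, the changed line in the import config would look something like this (the path is an assumption; point it at wherever the "ti_dl\utils\quantStatsTool" folder lives in your install):

    ```
    tidlStatsTool = "ti_dl\utils\quantStatsTool\eve_test_dl_algo.out.exe"
    ```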

    Thanks,
    Praveen
  • Hi Praveen,

    Thanks for your reply. I changed the "tidlStatsTool" in the import file and it works; the image is read successfully during the test.

    But there is some error with the output of layer_18. I don't know what went wrong, please help me.

    This is the conversion log:

    PS E:\tidl_model_import> .\tidl_model_import.out.exe .\tidl_import_resnet11.txt
    Caffe Network File : .\nomeanfile_deploy(noscale).prototxt
    Caffe Model File   : .\nomean_noscale_sparse_iter_300000.caffemodel
    TIDL Network File  : .\tidl_net_resnet_jacintonet11v2.bin
    TIDL Model File    : .\tidl_param_resnet_jacintonet11v2.bin
    Name of the Network :       ResNet-11
    Num Inputs :               1
     Num of Layer Detected :  19
      0, TIDL_DataLayer                , data                                      0,  -1 ,  1 ,   x ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  0 ,       0 ,       0 ,       0 ,       0 ,       1 ,       3 ,     112 ,      96 ,         0 ,
      1, TIDL_BatchNormLayer           , data/bias                                 1,   1 ,  1 ,   0 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  1 ,       1 ,       3 ,     112 ,      96 ,       1 ,       3 ,     112 ,      96 ,     32256 ,
      2, TIDL_ConvolutionLayer         , conv1                                     1,   1 ,  1 ,   1 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  2 ,       1 ,       3 ,     112 ,      96 ,       1 ,     128 ,      56 ,      48 ,   9289728 ,
      3, TIDL_PoolingLayer             , pool1                                     1,   1 ,  1 ,   2 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  3 ,       1 ,     128 ,      56 ,      48 ,       1 ,     128 ,      28 ,      24 ,    774144 ,
      4, TIDL_ConvolutionLayer         , res2a_branch1                             1,   1 ,  1 ,   3 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  4 ,       1 ,     128 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,   5505024 ,
      5, TIDL_ConvolutionLayer         , res2a_branch2a                            1,   1 ,  1 ,   3 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  5 ,       1 ,     128 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,  49545216 ,
      6, TIDL_ConvolutionLayer         , res2a_branch2b                            1,   1 ,  1 ,   5 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  6 ,       1 ,      64 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,  24772608 ,
      7, TIDL_EltWiseLayer             , res2a                                     1,   2 ,  1 ,   4 ,  6 ,  x ,  x ,  x ,  x ,  x ,  x ,  7 ,       1 ,      64 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,     43008 ,
      8, TIDL_ConvolutionLayer         , res2b_branch2a                            1,   1 ,  1 ,   7 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  8 ,       1 ,      64 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,  24772608 ,
      9, TIDL_ConvolutionLayer         , res2b_branch2b                            1,   1 ,  1 ,   8 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  9 ,       1 ,      64 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,  24772608 ,
     10, TIDL_EltWiseLayer             , res2b                                     1,   2 ,  1 ,   7 ,  9 ,  x ,  x ,  x ,  x ,  x ,  x , 10 ,       1 ,      64 ,      28 ,      24 ,       1 ,      64 ,      28 ,      24 ,     43008 ,
     11, TIDL_ConvolutionLayer         , res3a_branch1                             1,   1 ,  1 ,  10 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 11 ,       1 ,      64 ,      28 ,      24 ,       1 ,     128 ,      14 ,      12 ,   1376256 ,
     12, TIDL_ConvolutionLayer         , res3a_branch2a                            1,   1 ,  1 ,  10 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 12 ,       1 ,      64 ,      28 ,      24 ,       1 ,     128 ,      14 ,      12 ,  12386304 ,
     13, TIDL_ConvolutionLayer         , res3a_branch2b                            1,   1 ,  1 ,  12 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 13 ,       1 ,     128 ,      14 ,      12 ,       1 ,     128 ,      14 ,      12 ,  24772608 ,
     14, TIDL_EltWiseLayer             , res3a                                     1,   2 ,  1 ,  11 , 13 ,  x ,  x ,  x ,  x ,  x ,  x , 14 ,       1 ,     128 ,      14 ,      12 ,       1 ,     128 ,      14 ,      12 ,     21504 ,
     15, TIDL_ConvolutionLayer         , res3b_branch2a                            1,   1 ,  1 ,  14 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 15 ,       1 ,     128 ,      14 ,      12 ,       1 ,     128 ,      14 ,      12 ,  24772608 ,
     16, TIDL_ConvolutionLayer         , res3b_branch2b                            1,   1 ,  1 ,  15 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 16 ,       1 ,     128 ,      14 ,      12 ,       1 ,     128 ,      14 ,      12 ,  24772608 ,
     17, TIDL_EltWiseLayer             , res3b                                     1,   2 ,  1 ,  14 , 16 ,  x ,  x ,  x ,  x ,  x ,  x , 17 ,       1 ,     128 ,      14 ,      12 ,       1 ,     128 ,      14 ,      12 ,     21504 ,
     18, TIDL_InnerProductLayer        , fc1                                       1,   1 ,  1 ,  17 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 18 ,       1 ,     128 ,      14 ,      12 ,       1 ,       1 ,       1 ,     512 ,  11010048 ,
    Total Giga Macs : 0.2387
    1 file(s) copied.
    
    Processing config file .\tempDir\qunat_stats_config.txt !
      0, TIDL_DataLayer                ,  0,  -1 ,  1 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  0 ,    0 ,    0 ,    0 ,    0 ,    1 ,    3 ,  112 ,   96 ,
      1, TIDL_BatchNormLayer           ,  1,   1 ,  1 ,  0 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  1 ,    1 ,    3 ,  112 ,   96 ,    1 ,    3 ,  112 ,   96 ,
      2, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  1 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  2 ,    1 ,    3 ,  112 ,   96 ,    1 ,  128 ,   56 ,   48 ,
      3, TIDL_PoolingLayer             ,  1,   1 ,  1 ,  2 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  3 ,    1 ,  128 ,   56 ,   48 ,    1 ,  128 ,   28 ,   24 ,
      4, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  3 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  4 ,    1 ,  128 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
      5, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  3 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  5 ,    1 ,  128 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
      6, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  5 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  6 ,    1 ,   64 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
      7, TIDL_EltWiseLayer             ,  1,   2 ,  1 ,  4 ,  6 ,  x ,  x ,  x ,  x ,  x ,  x ,  7 ,    1 ,   64 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
      8, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  7 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  8 ,    1 ,   64 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
      9, TIDL_ConvolutionLayer         ,  1,   1 ,  1 ,  8 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  9 ,    1 ,   64 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
     10, TIDL_EltWiseLayer             ,  1,   2 ,  1 ,  7 ,  9 ,  x ,  x ,  x ,  x ,  x ,  x , 10 ,    1 ,   64 ,   28 ,   24 ,    1 ,   64 ,   28 ,   24 ,
     11, TIDL_ConvolutionLayer         ,  1,   1 ,  1 , 10 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 11 ,    1 ,   64 ,   28 ,   24 ,    1 ,  128 ,   14 ,   12 ,
     12, TIDL_ConvolutionLayer         ,  1,   1 ,  1 , 10 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 12 ,    1 ,   64 ,   28 ,   24 ,    1 ,  128 ,   14 ,   12 ,
     13, TIDL_ConvolutionLayer         ,  1,   1 ,  1 , 12 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 13 ,    1 ,  128 ,   14 ,   12 ,    1 ,  128 ,   14 ,   12 ,
     14, TIDL_EltWiseLayer             ,  1,   2 ,  1 , 11 , 13 ,  x ,  x ,  x ,  x ,  x ,  x , 14 ,    1 ,  128 ,   14 ,   12 ,    1 ,  128 ,   14 ,   12 ,
     15, TIDL_ConvolutionLayer         ,  1,   1 ,  1 , 14 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 15 ,    1 ,  128 ,   14 ,   12 ,    1 ,  128 ,   14 ,   12 ,
     16, TIDL_ConvolutionLayer         ,  1,   1 ,  1 , 15 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 16 ,    1 ,  128 ,   14 ,   12 ,    1 ,  128 ,   14 ,   12 ,
     17, TIDL_EltWiseLayer             ,  1,   2 ,  1 , 14 , 16 ,  x ,  x ,  x ,  x ,  x ,  x , 17 ,    1 ,  128 ,   14 ,   12 ,    1 ,  128 ,   14 ,   12 ,
     18, TIDL_InnerProductLayer        ,  1,   1 ,  1 , 17 ,  x ,  x ,  x ,  x ,  x ,  x ,  x , 18 ,    1 ,  128 ,   14 ,   12 ,    1 ,    1 ,    1 ,  512 ,
     19, TIDL_DataLayer                ,  0,   1 , -1 , 18 ,  x ,  x ,  x ,  x ,  x ,  x ,  x ,  0 ,    1 ,    1 ,    1 ,  512 ,    0 ,    0 ,    0 ,    0 ,
    Layer ID    ,inBlkWidth  ,inBlkHeight ,inBlkPitch  ,outBlkWidth ,outBlkHeight,outBlkPitch ,numInChs    ,numOutChs   ,numProcInChs,numLclInChs ,numLclOutChs,numProcItrs ,numAccItrs  ,numHorBlock ,numVerBlock ,inBlkChPitch,outBlkChPitc,alignOrNot
          2          104           32          104           48           14           48            3          128            3            1            8            1            3            1            4         3328          672            1
          4           32           28           32           32           28           32          128           64          128            8            8            1           16            1            1          896          896            1
          5           40           30           40           32           28           32          128           64          128            7            8            1           19            1            1         1200          896            1
          6           40           30           40           32           28           32           64           64           64            7            8            1           10            1            1         1200          896            1
          8           40           30           40           32           28           32           64           64           64            7            8            1           10            1            1         1200          896            1
          9           40           30           40           32           28           32           64           64           64            7            8            1           10            1            1         1200          896            1
         11           32           28           32           16           14           16           64          128           64            8            8            1            8            1            1          896          224            1
         12           40           32           40           16           14           16           64          128           64            6            8            1           11            1            1         1280          224            1
         13           24           16           24           16           14           16          128          128          128            8            8            1           16            1            1          384          224            1
         15           24           16           24           16           14           16          128          128          128            8            8            1           16            1            1          384          224            1
         16           24           16           24           16           14           16          128          128          128            8            8            1           16            1            1          384          224            1
    
    Processing Frame Number : 0
    
     Layer    1 : Out Q :      280 , TIDL_BatchNormLayer  , PASSED  #MMACs =     0.03,     0.03, Sparsity :   0.00
     Layer    2 : Out Q :    11217 , TIDL_ConvolutionLayer, PASSED  #MMACs =     9.29,     9.66, Sparsity :  -3.94
     Layer    3 :TIDL_PoolingLayer,     PASSED  #MMACs =     0.09,     0.09, Sparsity :   0.00
     Layer    4 : Out Q :     9079 , TIDL_ConvolutionLayer, PASSED  #MMACs =     5.51,     5.45, Sparsity :   0.93
     Layer    5 : Out Q :    24741 , TIDL_ConvolutionLayer, PASSED  #MMACs =    49.55,    33.68, Sparsity :  32.03
     Layer    6 : Out Q :     8177 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,    21.39, Sparsity :  13.64
     Layer    7 : Out Q :     4304 , TIDL_EltWiseLayer,     PASSED  #MMACs =     0.09,     0.09, Sparsity :   0.00
     Layer    8 : Out Q :     1959 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,    23.64, Sparsity :   4.56
     Layer    9 : Out Q :      660 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,    21.50, Sparsity :  13.22
     Layer   10 : Out Q :      573 , TIDL_EltWiseLayer,     PASSED  #MMACs =     0.09,     0.09, Sparsity :   0.00
     Layer   11 : Out Q :      181 , TIDL_ConvolutionLayer, PASSED  #MMACs =     1.38,     1.38, Sparsity :   0.00
     Layer   12 : Out Q :      384 , TIDL_ConvolutionLayer, PASSED  #MMACs =    12.39,     5.63, Sparsity :  54.56
     Layer   13 : Out Q :      145 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,     4.28, Sparsity :  82.71
     Layer   14 : Out Q :       81 , TIDL_EltWiseLayer,     PASSED  #MMACs =     0.04,     0.04, Sparsity :   0.00
     Layer   15 : Out Q :       50 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,     3.86, Sparsity :  84.43
     Layer   16 : Out Q :       16 , TIDL_ConvolutionLayer, PASSED  #MMACs =    24.77,     3.14, Sparsity :  87.31
     Layer   17 : Out Q :       13 , TIDL_EltWiseLayer,     PASSED  #MMACs =     0.04,     0.04, Sparsity :   0.00
     Layer   18 :
    PS E:\tidl_model_import>

    These are the tidl_net and tidl_param bin files:

    tidl_net_param.rar

    My caffemodel, which was trained with caffe-jacinto, exceeds the maximum upload file size allowed. I can send it to you if you need it; my e-mail is 18200391445@sina.cn

    thanks

    best regards

  • Okay. Can you please check the TIDL limitations in the user guide, section 3.9 (TIDL Limitations), to see whether your model can be imported with TIDL or not. If not, you can re-train your model to make it work.

    Thanks,
    Praveen
  • Hi, Praveen

    Thanks for your reply. I have checked all the layers, and they are all supported by TIDL.

    This is my deploy.prototxt:

    #quantize: true
    name: "ResNet-11"
    #layer {
    #    name: "data"
    #    type: "Input"
    #    top: "data"
    #    input_param { shape: { dim: 50 dim: 3 dim: 112 dim: 96 } }
    #}
    
    input: "data"
    input_shape {
      dim: 1
      dim: 3
      dim: 112
      dim: 96
    }
    
    layer {
      name: "data/bias"
      type: "Bias"
      bottom: "data"
      top: "data/bias"
      param {
        lr_mult: 0
        decay_mult: 0
      }
      bias_param {
        filler {
          type: "constant"
          value: -128
        }
      }
    }
    
    layer {
        bottom: "data/bias"
        top: "conv1"
        name: "conv1"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 3
            pad: 1
            stride: 2
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "conv1"
        top: "conv1"
        name: "bn_conv1"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "conv1"
        top: "conv1"
        name: "conv1_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "conv1"
        top: "pool1"
        name: "pool1"
        type: "Pooling"
        pooling_param {
            kernel_size: 3
            stride: 2
            pool: MAX
        }
    }
    
    layer {
        bottom: "pool1"
        top: "res2a_branch1"
        name: "res2a_branch1"
        type: "Convolution"
        convolution_param {
            num_output: 64
            kernel_size: 1
            pad: 0
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res2a_branch1"
        top: "res2a_branch1"
        name: "bn2a_branch1"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "pool1"
        top: "res2a_branch2a"
        name: "res2a_branch2a"
        type: "Convolution"
        convolution_param {
            num_output: 64
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res2a_branch2a"
        top: "res2a_branch2a"
        name: "bn2a_branch2a"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res2a_branch2a"
        top: "res2a_branch2a"
        name: "res2a_branch2a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res2a_branch2a"
        top: "res2a_branch2b"
        name: "res2a_branch2b"
        type: "Convolution"
        convolution_param {
            num_output: 64
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res2a_branch2b"
        top: "res2a_branch2b"
        name: "bn2a_branch2b"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res2a_branch1"
        bottom: "res2a_branch2b"
        top: "res2a"
        name: "res2a"
        type: "Eltwise"
        eltwise_param {
            operation: SUM
        }
    }
    
    layer {
        bottom: "res2a"
        top: "res2a"
        name: "res2a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res2a"
        top: "res2b_branch2a"
        name: "res2b_branch2a"
        type: "Convolution"
        convolution_param {
            num_output: 64
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res2b_branch2a"
        top: "res2b_branch2a"
        name: "bn2b_branch2a"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res2b_branch2a"
        top: "res2b_branch2a"
        name: "res2b_branch2a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res2b_branch2a"
        top: "res2b_branch2b"
        name: "res2b_branch2b"
        type: "Convolution"
        convolution_param {
            num_output: 64
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res2b_branch2b"
        top: "res2b_branch2b"
        name: "bn2b_branch2b"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res2a"
        bottom: "res2b_branch2b"
        top: "res2b"
        name: "res2b"
        type: "Eltwise"
        eltwise_param {
            operation: SUM
        }
    }
    
    layer {
        bottom: "res2b"
        top: "res2b"
        name: "res2b_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res2b"
        top: "res3a_branch1"
        name: "res3a_branch1"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 1
            pad: 0
            stride: 2
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res3a_branch1"
        top: "res3a_branch1"
        name: "bn3a_branch1"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res2b"
        top: "res3a_branch2a"
        name: "res3a_branch2a"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 3
            pad: 1
            stride: 2
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res3a_branch2a"
        top: "res3a_branch2a"
        name: "bn3a_branch2a"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    layer {
        bottom: "res3a_branch2a"
        top: "res3a_branch2a"
        name: "res3a_branch2a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res3a_branch2a"
        top: "res3a_branch2b"
        name: "res3a_branch2b"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res3a_branch2b"
        top: "res3a_branch2b"
        name: "bn3a_branch2b"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res3a_branch1"
        bottom: "res3a_branch2b"
        top: "res3a"
        name: "res3a"
        type: "Eltwise"
        eltwise_param {
            operation: SUM
        }
    }
    
    layer {
        bottom: "res3a"
        top: "res3a"
        name: "res3a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res3a"
        top: "res3b_branch2a"
        name: "res3b_branch2a"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res3b_branch2a"
        top: "res3b_branch2a"
        name: "bn3b_branch2a"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    
    layer {
        bottom: "res3b_branch2a"
        top: "res3b_branch2a"
        name: "res3b_branch2a_relu"
        type: "ReLU"
    }
    
    layer {
        bottom: "res3b_branch2a"
        top: "res3b_branch2b"
        name: "res3b_branch2b"
        type: "Convolution"
        convolution_param {
            num_output: 128
            kernel_size: 3
            pad: 1
            stride: 1
            weight_filler {
                type: "xavier"
            }
            bias_term: false
    
        }
    }
    
    layer {
        bottom: "res3b_branch2b"
        top: "res3b_branch2b"
        name: "bn3b_branch2b"
        type: "BatchNorm"
        batch_norm_param {
            use_global_stats: false
        }
    }
    
    
    layer {
        bottom: "res3a"
        bottom: "res3b_branch2b"
        top: "res3b"
        name: "res3b"
        type: "Eltwise"
        eltwise_param {
            operation: SUM
        }
    }
    
    #layer {
    #    bottom: "res3b"
    #    top: "res3b"
    #    name: "res3b_relu"
    #    type: "ReLU"
    #}
    
    #layer {
    #    bottom: "res3b"
    #    top: "pool5"
    #    name: "pool5"
    #    type: "Pooling"
    #    pooling_param {
    #        kernel_size: 3
    #        stride: 2
    #        pool: MAX
    #    }
    #}
    
    layer {
        bottom: "res3b"
        top: "fc1"
        name: "fc1"
        type: "InnerProduct"
        param {
            lr_mult: 1
            decay_mult: 1
        }
        param {
            lr_mult: 2
            decay_mult: 1
        }
        inner_product_param {
            num_output: 512
            weight_filler {
                type: "xavier"
            }
            bias_filler {
                type: "constant"
                value: 0
            }
        }
    }
    
    

    thanks

    best regards

  • I could not find any error in "2402.log.txt". The log looks fine, and execution completes. Can you compare the output ("stats_tool_out.bin") available in the temp dir with your expected result?
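A minimal way to do that comparison (file names taken from the reply; this assumes both files are raw byte dumps of the same layout, and is a sketch rather than part of the TI tooling):

```python
import numpy as np

def raw_outputs_match(path_a: str, path_b: str) -> bool:
    """Byte-for-byte comparison of two raw output dumps."""
    a = np.fromfile(path_a, dtype=np.uint8)
    b = np.fromfile(path_b, dtype=np.uint8)
    return a.size == b.size and bool(np.array_equal(a, b))

# e.g. raw_outputs_match("tempDir/stats_tool_out.bin", "expected_out.bin")
```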
  • Thanks for your help, I have solved my problem. The reason is that the number of weights in the fully connected layer exceeds the limit of TIDL.
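For anyone who hits the same truncated layer-18 output: the fc1 weight count can be estimated directly from the dimensions in the import log above, which makes this limit easy to check before importing.

```python
# fc1 takes layer 17's output (1 x 128 x 14 x 12) flattened, and produces 512 outputs
in_features = 128 * 14 * 12        # flattened input size = 21504
out_features = 512
num_weights = in_features * out_features
print(num_weights)                 # 11010048, matching fc1's MACs column in the log
```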