
TDA2HF: Difference in middle-layer results between import tool tempDir .y files and .bin files generated on EVE

Part Number: TDA2HF

Hello,

I used the import tool tidl_model_import.out to generate the middle-layer results (.y files in the tempDir folder), and I used the same input data to generate the middle-layer results (.bin files) on the EVE. Comparing each pair of corresponding trace files, I found differences in some middle layers. Here are the results (one way such difference statistics can be computed is sketched after the list):

trace_dump_1_320x320.y vs trace_dump_1_320x320.bin min: 128 max: 128 avg: 128.0
trace_dump_2_160x160.y vs trace_dump_2_160x160.bin min: 0 max: 0 avg: 0.0
trace_dump_3_160x160.y vs trace_dump_3_160x160.bin min: 0 max: 0 avg: 0.0
trace_dump_4_160x160.y vs trace_dump_4_160x160.bin min: 128 max: 128 avg: 128.0
trace_dump_5_160x160.y vs trace_dump_5_160x160.bin min: 0 max: 0 avg: 0.0
trace_dump_6_80x80.y vs trace_dump_6_80x80.bin min: 0 max: 0 avg: 0.0
trace_dump_7_80x80.y vs trace_dump_7_80x80.bin min: 128 max: 128 avg: 128.0
trace_dump_8_80x80.y vs trace_dump_8_80x80.bin min: 0 max: 0 avg: 0.0
trace_dump_9_80x80.y vs trace_dump_9_80x80.bin min: 0 max: 0 avg: 0.0
trace_dump_10_80x80.y vs trace_dump_10_80x80.bin min: 128 max: 128 avg: 128.0
trace_dump_11_80x80.y vs trace_dump_11_80x80.bin min: 128 max: 128 avg: 128.0
trace_dump_12_80x80.y vs trace_dump_12_80x80.bin min: 0 max: 0 avg: 0.0
trace_dump_13_40x40.y vs trace_dump_13_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_14_40x40.y vs trace_dump_14_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_15_40x40.y vs trace_dump_15_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_16_40x40.y vs trace_dump_16_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_17_40x40.y vs trace_dump_17_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_18_40x40.y vs trace_dump_18_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_19_40x40.y vs trace_dump_19_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_20_40x40.y vs trace_dump_20_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_21_40x40.y vs trace_dump_21_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_22_40x40.y vs trace_dump_22_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_23_40x40.y vs trace_dump_23_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_24_20x20.y vs trace_dump_24_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_25_20x20.y vs trace_dump_25_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_26_20x20.y vs trace_dump_26_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_27_20x20.y vs trace_dump_27_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_28_20x20.y vs trace_dump_28_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_29_20x20.y vs trace_dump_29_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_30_20x20.y vs trace_dump_30_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_31_20x20.y vs trace_dump_31_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_32_20x20.y vs trace_dump_32_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_33_20x20.y vs trace_dump_33_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_34_20x20.y vs trace_dump_34_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_35_20x20.y vs trace_dump_35_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_36_20x20.y vs trace_dump_36_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_37_20x20.y vs trace_dump_37_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_38_20x20.y vs trace_dump_38_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_39_20x20.y vs trace_dump_39_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_40_20x20.y vs trace_dump_40_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_41_20x20.y vs trace_dump_41_20x20.bin min: 128 max: 128 avg: 128.0
trace_dump_42_20x20.y vs trace_dump_42_20x20.bin min: 0 max: 0 avg: 0.0
trace_dump_43_40x40.y vs trace_dump_43_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_44_40x40.y vs trace_dump_44_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_45_40x40.y vs trace_dump_45_40x40.bin min: 128 max: 128 avg: 128.0
trace_dump_46_40x40.y vs trace_dump_46_40x40.bin min: 0 max: 0 avg: 0.0
trace_dump_47_40x40.y vs trace_dump_47_40x40.bin min: 0 max: 220 avg: 55.7348828125
trace_dump_48_80x80.y vs trace_dump_48_80x80.bin min: 84 max: 246 avg: 139.531689453125
trace_dump_49_80x80.y vs trace_dump_49_80x80.bin min: 123 max: 129 avg: 127.6588427734375
trace_dump_50_80x80.y vs trace_dump_50_80x80.bin min: 124 max: 130 avg: 127.7884716796875
trace_dump_51_80x80.y vs trace_dump_51_80x80.bin min: 0 max: 2 avg: 0.000390625
trace_dump_52_80x80.y vs trace_dump_52_80x80.bin min: 0 max: 1 avg: 1.220703125e-05
trace_dump_53_160x160.y vs trace_dump_53_160x160.bin min: 128 max: 129 avg: 128.00000061035155
trace_dump_54_160x160.y vs trace_dump_54_160x160.bin min: 128 max: 128 avg: 128.0
trace_dump_55_160x160.y vs trace_dump_55_160x160.bin min: 128 max: 128 avg: 128.0
trace_dump_56_160x160.y vs trace_dump_56_160x160.bin min: 0 max: 0 avg: 0.0
trace_dump_57_160x160.y vs trace_dump_57_160x160.bin min: 0 max: 0 avg: 0.0
trace_dump_58_320x320.y vs trace_dump_58_320x320.bin min: 128 max: 128 avg: 128.0
trace_dump_59_320x320.y vs trace_dump_59_320x320.bin min: 128 max: 128 avg: 128.0
trace_dump_60_320x320.y vs trace_dump_60_320x320.bin min: 128 max: 128 avg: 128.0
trace_dump_61_320x320.y vs trace_dump_61_320x320.bin min: 0 max: 0 avg: 0.0
trace_dump_62_320x320.y vs trace_dump_62_320x320.bin min: 128 max: 128 avg: 128.0
trace_dump_63_20x20.y vs trace_dump_63_20x20.bin min: 0 max: 214 avg: 78.5892578125
trace_dump_64_10x10.y vs trace_dump_64_10x10.bin min: 0 max: 206 avg: 32.630703125
trace_dump_65_10x10.y vs trace_dump_65_10x10.bin min: 0 max: 254 avg: 44.3418359375
trace_dump_66_10x10.y vs trace_dump_66_10x10.bin min: 18 max: 201 avg: 111.76666666666667
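
The statistics above are the min/max/avg of the difference between each .y/.bin pair. For reference, a minimal sketch of one way such statistics can be computed, assuming both dumps are raw unsigned 8-bit buffers of equal length (illustrative only, not necessarily the exact script used):

    import numpy as np

    def diff_stats(y_path, bin_path):
        # Load both trace dumps as raw unsigned 8-bit buffers (assumption:
        # both files hold the same number of elements in the same layout).
        a = np.fromfile(y_path, dtype=np.uint8).astype(np.int16)
        b = np.fromfile(bin_path, dtype=np.uint8).astype(np.int16)
        assert a.size == b.size, "dump sizes differ"
        d = np.abs(a - b)  # per-element absolute difference
        return d.min(), d.max(), d.mean()

    lo, hi, avg = diff_stats("trace_dump_47_40x40.y", "trace_dump_47_40x40.bin")
    print(f"trace_dump_47_40x40.y vs trace_dump_47_40x40.bin min: {lo} max: {hi} avg: {avg}")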

The layers that show differences are:

trace_dump_47_40x40.y vs trace_dump_47_40x40.bin min: 0 max: 220 avg: 55.7348828125
trace_dump_48_80x80.y vs trace_dump_48_80x80.bin min: 84 max: 246 avg: 139.531689453125
trace_dump_49_80x80.y vs trace_dump_49_80x80.bin min: 123 max: 129 avg: 127.6588427734375
trace_dump_50_80x80.y vs trace_dump_50_80x80.bin min: 124 max: 130 avg: 127.7884716796875
trace_dump_51_80x80.y vs trace_dump_51_80x80.bin min: 0 max: 2 avg: 0.000390625
trace_dump_52_80x80.y vs trace_dump_52_80x80.bin min: 0 max: 1 avg: 1.220703125e-05
trace_dump_53_160x160.y vs trace_dump_53_160x160.bin min: 128 max: 129 avg: 128.00000061035155


trace_dump_63_20x20.y vs trace_dump_63_20x20.bin min: 0 max: 214 avg: 78.5892578125
trace_dump_64_10x10.y vs trace_dump_64_10x10.bin min: 0 max: 206 avg: 32.630703125
trace_dump_65_10x10.y vs trace_dump_65_10x10.bin min: 0 max: 254 avg: 44.3418359375
trace_dump_66_10x10.y vs trace_dump_66_10x10.bin min: 18 max: 201 avg: 111.76666666666667

From the import tool debug information, we know that these indices correspond to the following layers (a size sanity check follows the list):

47, TIDL_ConvolutionLayer , conv35
48, TIDL_Deconv2DLayer , conv_transpose2
49, TIDL_BatchNormLayer , batch_norm37
50, TIDL_BatchNormLayer , bn_scale37
51, TIDL_BatchNormLayer , relu24
52, TIDL_ConvolutionLayer , conv36
53, TIDL_Deconv2DLayer , conv_transpose3

54, TIDL_BatchNormLayer , batch_norm39
55, TIDL_BatchNormLayer , bn_scale39
56, TIDL_BatchNormLayer , relu26
57, TIDL_ConvolutionLayer , conv37
58, TIDL_Deconv2DLayer , conv_transpose_seg
59, TIDL_BatchNormLayer , batch_norm_seg
60, TIDL_BatchNormLayer , bn_scale_seg
61, TIDL_BatchNormLayer , relu_seg
62, TIDL_ConvolutionLayer , conv_seg

63, TIDL_ConvolutionLayer , conv40
64, TIDL_ConvolutionLayer , conv41
65, TIDL_ConvolutionLayer , conv42
66, TIDL_ConvolutionLayer , conv_pairs
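
The trace sizes are consistent with these layer types: for example, index 47 (conv35) is 40x40 while index 48 (conv_transpose2) is 80x80, because a Deconvolution with kernel_size 4, stride 2, pad 1 doubles the spatial size. A quick sanity check using the standard Caffe deconvolution output-size formula (a sketch, not output from the import tool):

    def deconv_out(in_size, kernel=4, stride=2, pad=1):
        # Caffe Deconvolution output size: stride * (in - 1) + kernel - 2 * pad
        return stride * (in_size - 1) + kernel - 2 * pad

    print(deconv_out(40))  # 80  -> matches trace_dump_48_80x80 (conv_transpose2)
    print(deconv_out(80))  # 160 -> matches trace_dump_53_160x160 (conv_transpose3)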

The part of my model where this happens is shown below.

Why does this happen?

Thanks,

chen poca

  • This is the Caffe prototxt where this happens:

    layer {
      name: "relu21"
      type: "ReLU"
      bottom: "batch_norm_blob34"
      top: "relu_blob21"
    }
    layer {
      name: "conv_transpose1"
      type: "Deconvolution"
      bottom: "relu_blob21"
      top: "conv_transpose_blob1"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 1
        kernel_size: 4
        stride: 2
        group: 64
        weight_filler {
          type: "xavier"
        }
        dilation: 1
        engine: CAFFE
      }
    }
    layer {
      name: "batch_norm35"
      type: "BatchNorm"
      bottom: "conv_transpose_blob1"
      top: "batch_norm_blob35"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale35"
      type: "Scale"
      bottom: "batch_norm_blob35"
      top: "batch_norm_blob35"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu22"
      type: "ReLU"
      bottom: "batch_norm_blob35"
      top: "relu_blob22"
    }
    layer {
      name: "conv35"
      type: "Convolution"
      bottom: "relu_blob22"
      top: "conv_blob35"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 2
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        dilation: 2
      }
    }
    layer {
      name: "batch_norm36"
      type: "BatchNorm"
      bottom: "conv_blob35"
      top: "batch_norm_blob36"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale36"
      type: "Scale"
      bottom: "batch_norm_blob36"
      top: "batch_norm_blob36"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu23"
      type: "ReLU"
      bottom: "batch_norm_blob36"
      top: "relu_blob23"
    }
    layer {
      name: "conv_transpose2"
      type: "Deconvolution"
      bottom: "relu_blob23"
      top: "conv_transpose_blob2"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 1
        kernel_size: 4
        stride: 2
        group: 64
        weight_filler {
          type: "xavier"
        }
        dilation: 1
        engine: CAFFE
      }
    }
    layer {
      name: "batch_norm37"
      type: "BatchNorm"
      bottom: "conv_transpose_blob2"
      top: "batch_norm_blob37"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale37"
      type: "Scale"
      bottom: "batch_norm_blob37"
      top: "batch_norm_blob37"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu24"
      type: "ReLU"
      bottom: "batch_norm_blob37"
      top: "relu_blob24"
    }
    layer {
      name: "conv36"
      type: "Convolution"
      bottom: "relu_blob24"
      top: "conv_blob36"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 4
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        dilation: 4
      }
    }
    layer {
      name: "batch_norm38"
      type: "BatchNorm"
      bottom: "conv_blob36"
      top: "batch_norm_blob38"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale38"
      type: "Scale"
      bottom: "batch_norm_blob38"
      top: "batch_norm_blob38"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu25"
      type: "ReLU"
      bottom: "batch_norm_blob38"
      top: "relu_blob25"
    }
    layer {
      name: "conv_transpose3"
      type: "Deconvolution"
      bottom: "relu_blob25"
      top: "conv_transpose_blob3"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 1
        kernel_size: 4
        stride: 2
        group: 64
        weight_filler {
          type: "xavier"
        }
        dilation: 1
        engine: CAFFE
      }
    }
    layer {
      name: "batch_norm39"
      type: "BatchNorm"
      bottom: "conv_transpose_blob3"
      top: "batch_norm_blob39"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale39"
      type: "Scale"
      bottom: "batch_norm_blob39"
      top: "batch_norm_blob39"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu26"
      type: "ReLU"
      bottom: "batch_norm_blob39"
      top: "relu_blob26"
    }
    layer {
      name: "conv37"
      type: "Convolution"
      bottom: "relu_blob26"
      top: "conv_blob37"
      convolution_param {
        num_output: 32
        bias_term: false
        pad: 4
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        dilation: 4
      }
    }
    layer {
      name: "batch_norm40"
      type: "BatchNorm"
      bottom: "conv_blob37"
      top: "batch_norm_blob40"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale40"
      type: "Scale"
      bottom: "batch_norm_blob40"
      top: "batch_norm_blob40"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu27"
      type: "ReLU"
      bottom: "batch_norm_blob40"
      top: "relu_blob27"
    }

    layer {
      name: "conv_transpose_seg"
      type: "Deconvolution"
      bottom: "relu_blob27"
      top: "conv_transpose_seg"
      convolution_param {
        num_output: 32
        bias_term: false
        pad: 1
        kernel_size: 4
        stride: 2
        group: 32
        weight_filler {
          type: "xavier"
        }
        dilation: 1
        engine: CAFFE
      }
    }
    layer {
      name: "batch_norm_seg"
      type: "BatchNorm"
      bottom: "conv_transpose_seg"
      top: "batch_norm_seg"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: false
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale_seg"
      type: "Scale"
      bottom: "batch_norm_seg"
      top: "batch_norm_seg"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu_seg"
      type: "ReLU"
      bottom: "batch_norm_seg"
      top: "relu_blob_seg"
    }

    layer {
      name: "conv_seg"
      type: "Convolution"
      bottom: "relu_blob_seg"
      top: "conv_seg"
      convolution_param {
        num_output: 2
        bias_term: true
        pad: 1
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "constant"
          value: 0
        }
        bias_filler {
          type: "constant"
          value: -1
        }
        dilation: 1
      }
    }
    layer {
      name: "conv40"
      type: "Convolution"
      bottom: "relu_blob21"
      top: "conv_blob40"
      convolution_param {
        num_output: 64
        bias_term: false
        pad: 1
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        dilation: 1
      }
    }
    layer {
      name: "batch_norm43"
      type: "BatchNorm"
      bottom: "conv_blob40"
      top: "batch_norm_blob43"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale43"
      type: "Scale"
      bottom: "batch_norm_blob43"
      top: "batch_norm_blob43"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu30"
      type: "ReLU"
      bottom: "batch_norm_blob43"
      top: "relu_blob30"
    }
    layer {
      name: "conv41"
      type: "Convolution"
      bottom: "relu_blob30"
      top: "conv_blob41"
      convolution_param {
        num_output: 256
        bias_term: false
        pad: 1
        kernel_size: 3
        group: 1
        stride: 2
        weight_filler {
          type: "xavier"
        }
        dilation: 1
      }
    }
    layer {
      name: "batch_norm44"
      type: "BatchNorm"
      bottom: "conv_blob41"
      top: "batch_norm_blob44"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale44"
      type: "Scale"
      bottom: "batch_norm_blob44"
      top: "batch_norm_blob44"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu31"
      type: "ReLU"
      bottom: "batch_norm_blob44"
      top: "relu_blob31"
    }
    layer {
      name: "conv42"
      type: "Convolution"
      bottom: "relu_blob31"
      top: "conv_blob42"
      convolution_param {
        num_output: 256
        bias_term: false
        pad: 1
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        dilation: 1
      }
    }
    layer {
      name: "batch_norm45"
      type: "BatchNorm"
      bottom: "conv_blob42"
      top: "batch_norm_blob45"
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      param {
        lr_mult: 0
      }
      batch_norm_param {
        use_global_stats: true
        eps: 9.9999997e-06
      }
    }
    layer {
      name: "bn_scale45"
      type: "Scale"
      bottom: "batch_norm_blob45"
      top: "batch_norm_blob45"
      scale_param {
        bias_term: true
      }
    }
    layer {
      name: "relu32"
      type: "ReLU"
      bottom: "batch_norm_blob45"
      top: "relu_blob32"
    }

    layer {
      name: "conv_pairs"
      type: "Convolution"
      bottom: "relu_blob32"
      top: "conv_pairs"
      convolution_param {
        num_output: 6
        bias_term: true
        pad: 1
        kernel_size: 3
        group: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
        }
        dilation: 1
      }
    }

  • Hi,

    How did you generate the .bin files on the EVE? Kindly generate the .y files on the EVE as well and compare them.

    Thanks,

    Praveen 

  • Hello,

    Thank you for your reply. I have found the problem: TIDL does not support the Scale layer. A sketch of a possible workaround follows.
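
    In case it helps others: BatchNorm (with use_global_stats: true) followed by Scale is just a per-channel affine transform, so the Scale parameters can be folded away offline before import. A minimal sketch of the math in NumPy (the parameter names are hypothetical; mean/var come from the BatchNorm blobs and gamma/beta from the Scale blobs of your own caffemodel):

        import numpy as np

        def fold_bn_scale(mean, var, gamma, beta, eps=9.9999997e-06):
            # BatchNorm (use_global_stats): y = (x - mean) / sqrt(var + eps)
            # Scale (with bias):            z = gamma * y + beta
            # Folded per-channel affine:    z = a * x + b
            a = gamma / np.sqrt(var + eps)
            b = beta - a * mean
            return a, b

    The resulting a and b can then typically be merged into the weights and bias of the preceding convolution or deconvolution, so no standalone Scale layer remains in the imported model.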

    Thanks.

    Poca

  • Thanks for the update.