
TDA2EVM5777: Issue with the InnerProduct layer in TIDL?

Part Number: TDA2EVM5777

Hi,

I am referring to the classification model given in the caffe-jacinto repo.

Model name: imagenet_jacintonet11v2

layer {
  name: "res5a_branch2b/bn"
  type: "BatchNorm"
  bottom: "res5a_branch2b"
  top: "res5a_branch2b"
  batch_norm_param {
    moving_average_fraction: 0.99
    eps: 0.0001
    scale_bias: true
  }
}
layer {
  name: "res5a_branch2b/relu"
  type: "ReLU"
  bottom: "res5a_branch2b"
  top: "res5a_branch2b"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "res5a_branch2b"
  top: "pool5"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}
layer {
  name: "fc1000"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc1000"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

We have observed that in every classification network in caffe-jacinto, you have used a global average pooling layer between the Convolution layer and the InnerProduct layer.

Is this mandatory?

We have written a small network, shown below, in which the InnerProduct layer directly follows a Convolution layer.

Up to the Convolution layer (conv4), we are able to match the TIDL output with the Caffe output. But at the InnerProduct layer, the TIDL and Caffe outputs do not match.
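For reference, this is how we compare the two layer outputs; a minimal sketch in Python, where the data values, the dequantization formula (float ≈ fixed / outQ), and the outQ value are stand-ins, not taken from the actual tool traces:

```python
# Sketch: compare a Caffe float blob with a dequantized TIDL fixed-point trace.
# All values below are hypothetical; real data would come from pycaffe and the
# TIDL layer-level trace files.

def dequantize(fixed, out_q):
    """Convert fixed-point values back to float, assuming float ~= fixed / outQ."""
    return [v / out_q for v in fixed]

def max_abs_diff(ref, test):
    """Maximum absolute element-wise difference between two flat lists."""
    return max(abs(r - t) for r, t in zip(ref, test))

caffe_out = [0.50, -1.25, 2.00]          # hypothetical Caffe float output
tidl_fixed = [150, -375, 600]            # hypothetical TIDL fixed-point output
tidl_out = dequantize(tidl_fixed, 300)   # outQ = 300, as observed

print(max_abs_diff(caffe_out, tidl_out))  # 0.0 for this made-up data
```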

For the InnerProduct layer, outQ is very small, in the range of 300.

Should we add another layer, such as Flatten, between the Convolution and InnerProduct layers?
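Our understanding is that Caffe's InnerProduct layer already flattens its bottom blob internally (from `axis: 1` by default), so no explicit Flatten layer is needed in Caffe itself; whether TIDL needs one is the open question. A pure-Python sketch of that implicit flatten (toy sizes and all-ones weights are hypothetical):

```python
# Sketch: Caffe's InnerProduct treats an (N, C, H, W) bottom blob as (N, C*H*W)
# by flattening from axis 1 by default, so no explicit Flatten layer is required.

def flatten_from_axis1(blob):
    """Flatten each (C, H, W) sample into one feature vector per batch item."""
    return [[v for ch in sample for row in ch for v in row] for sample in blob]

def inner_product(blob, weights, bias):
    """Dense layer on the implicitly flattened input: y = W.x + b."""
    flat = flatten_from_axis1(blob)
    return [[sum(w * x for w, x in zip(wrow, xvec)) + b
             for wrow, b in zip(weights, bias)]
            for xvec in flat]

# Toy blob: N=1, C=2, H=2, W=2  ->  flattened feature length 8
blob = [[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]]
weights = [[1] * 8]   # one output neuron, all-ones weights (hypothetical)
bias = [0]

print(inner_product(blob, weights, bias))  # [[36]] = sum of 1..8
```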

layer {
  name: "conv4"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 1
  }
  convolution_param {
    num_output: 128
    kernel_size: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv5"
  type: "InnerProduct"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 1
  }
  inner_product_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

Regards,

Sagar