This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

TDA4VM: Preprocess padding question

Part Number: TDA4VM


Dear Support,

We have tested our network's accuracy with the PC emulation alongside our TIDL application.
The PC emulation produced different, and more accurate, detections on the same frames.

I've found that the raw input data on the PC emulation has the same size as the original image (480 x 640). On the TIDL side, however, we have to convert BGR from YUV, so the converted image gets some padding. The preprocess output in BGR format has the size 481 x 643, which seems a bit strange. Shouldn't this value be 482 x 642?

We have 'SAME_UPPER' padding on our input node, as shown below:

>>> nn.graph.node[0]
input: "external1"
input: "variable1"
input: "variable2"
output: "conv1"
op_type: "Conv"
attribute {
  name: "auto_pad"
  s: "SAME_UPPER"
  type: STRING
}
attribute {
  name: "strides"
  ints: 2
  ints: 2
  type: INTS
}
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "group"
  i: 1
  type: INT
}
attribute {
  name: "kernel_shape"
  ints: 3
  ints: 3
  type: INTS
}
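For reference, here is how I would expect the pads to be computed for this node per my reading of the ONNX spec for auto_pad = SAME_UPPER (this is my own sketch, not TIDL's actual code):

```python
import math

def same_upper_pads(in_size, kernel, stride, dilation=1):
    """Return (pad_begin, pad_end) for one spatial dim under ONNX SAME_UPPER.

    ONNX defines: out = ceil(in / stride),
    pad_total = max((out - 1) * stride + effective_kernel - in, 0),
    and for SAME_UPPER the extra pixel goes at the end (pad_end).
    """
    eff_kernel = (kernel - 1) * dilation + 1
    out_size = math.ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + eff_kernel - in_size, 0)
    pad_begin = pad_total // 2
    return pad_begin, pad_total - pad_begin

# Our node: kernel 3x3, stride 2, dilation 1, input 480 x 640
for dim, size in (("H", 480), ("W", 640)):
    print(dim, same_upper_pads(size, kernel=3, stride=2))
```

With these attributes I get (pad_begin, pad_end) = (0, 1) for both dimensions, i.e. [L, T, R, B] = [0, 0, 1, 1] and a padded input of 481 x 641, which matches neither the 481 x 643 preprocess output nor the ioBufDesc values below.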

We currently have [1, 1, 0, 2] padding in the ioBufDesc:

ioBufDesc->inPadL[0]
ioBufDesc->inPadT[0]
ioBufDesc->inPadR[0]
ioBufDesc->inPadB[0]

How can we fix it?