PROCESSOR-SDK-DRA8X-TDA4X: TIDL model calibration with INT16 input

Part Number: PROCESSOR-SDK-DRA8X-TDA4X

Dear TI experts,

I have a related question about using N-dimensional tensors from a .bin file with inFileFormat=1 in the TIDLmodelImport tool for calibration.

I wrote a .bin file in NCHW format with N=9. The data type of the elements is INT16.
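For reference, this is roughly how I generate that file (a simplified sketch of my export code; writeCalibrationBin, the dimensions, and the data source are placeholders):

#include <stdio.h>
#include <stdint.h>

/* Sketch: dump N frames of C x H x W int16 values packed back to back,
 * with no header and no padding (plain NCHW layout). */
static int writeCalibrationBin(const char *path, const int16_t *data,
                               int n, int c, int h, int w)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL)
    {
        return -1;
    }
    size_t numElems = (size_t)n * c * h * w;
    size_t written = fwrite(data, sizeof(int16_t), numElems, fp);
    fclose(fp);
    return (written == numElems) ? 0 : -1;
}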

I changed the code in test/src/tidl_tb_utils.c, in the function "tidl_ReadNetInput", to read the .bin file for calibration as shown below. However, I have a few questions:

1. This code reads all the N x C x H x W tensors from the .bin file and passes them to "BufDescList[numBuffs].bufPlanes[0].buf" for calibration. Does this mean that all the tensors in the .bin file are considered for calibration?

if (params->inFileFormat == 1)
{
    /* Determine the number of frames N from the file size:
       total bytes / (C * W * H * bytes per int16 element). */
    fseek(fp1, 0, SEEK_END);
    int32_t fileLen = ftell(fp1);
    fseek(fp1, 0, SEEK_SET);
    int32_t N = fileLen / (gIOParams.inNumChannels[numBuffs] *
                           gIOParams.inWidth[numBuffs] *
                           gIOParams.inHeight[numBuffs] * sizeof(int16_t));
    printf("value of N =%d\n", N);

    /* Read all N frames of signed 16-bit data into the input buffer plane. */
    if (gIOParams.inElementType[numBuffs] == TIDL_SignedShort)
    {
        readDataS16(fp1, ((int16_t *)BufDescList[numBuffs].bufPlanes[0].buf), N,
                    gIOParams.inNumChannels[numBuffs],
                    BufDescList[numBuffs].bufPlanes[0].width,
                    channelHeight,
                    BufDescList[numBuffs].bufPlanes[0].width,
                    channelOffset);
    }
}
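To double-check what actually ends up in the buffer, I am considering a small sanity check right after the readDataS16 call, still inside the inFileFormat == 1 block (a sketch; it re-reads the raw file independently of the TIDL buffer and assumes a packed int16 layout with no header or padding):

{
    /* Debug-only check (my addition): report the overall INT16 min/max of the
     * raw file, to compare against the ranges printed during calibration.
     * INT16_MIN/INT16_MAX come from <stdint.h>. */
    int16_t v, minVal = INT16_MAX, maxVal = INT16_MIN;
    int32_t numElems = fileLen / (int32_t)sizeof(int16_t);
    int32_t i;
    fseek(fp1, 0, SEEK_SET);
    for (i = 0; i < numElems; i++)
    {
        if (fread(&v, sizeof(int16_t), 1, fp1) != 1)
        {
            break;
        }
        if (v < minVal) minVal = v;
        if (v > maxVal) maxVal = v;
    }
    fseek(fp1, 0, SEEK_SET);
    printf("Raw .bin range check: %d elements, min = %d, max = %d\n",
           (int)i, (int)minVal, (int)maxVal);
}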

2. For the starting layer, the calibration procedure prints the min and max values of the tensor as 0 and 255.0 respectively. Why is this the UINT8 range rather than the INT16 range?

   119275200,    113.750 0x7f6b97e2c010
     1118205,      1.066 0x7f6b97d1a010
      223641,      0.213 0x7f6b97ce3010
      372735,      0.355 0x7f6b97c87010
 ----------------------- TIDL Process with REF_ONLY FLOW ------------------------

#    0values of N=9
 . ..Starting Layer # -    1
   0    1.00000    0.00000  255.00000 1
Processing Layer # -    1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1
Reducing bit depth for Tensor in layer -  1

3. I ran inference using the model generated by this calibration procedure, but the inference results are not satisfactory. I suspect the calibration procedure above may not handle the input range correctly.
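For what it is worth, this is the kind of comparison I plan to run on the output tensors to quantify the degradation (a sketch; the file names and the assumption that both runs dump float32 values to raw files are placeholders for my own output dumps, not anything TIDL-specific):

#include <stdio.h>
#include <math.h>

/* Sketch: compare two raw float32 output dumps (e.g. reference/float flow vs.
 * quantized flow) and report the maximum absolute difference. */
int main(void)
{
    FILE *fa = fopen("out_reference.bin", "rb");
    FILE *fb = fopen("out_quantized.bin", "rb");
    if (fa == NULL || fb == NULL)
    {
        printf("failed to open output dumps\n");
        return 1;
    }
    float a, b, maxDiff = 0.0f;
    long count = 0;
    while (fread(&a, sizeof(float), 1, fa) == 1 &&
           fread(&b, sizeof(float), 1, fb) == 1)
    {
        float d = fabsf(a - b);
        if (d > maxDiff) maxDiff = d;
        count++;
    }
    printf("Compared %ld values, max abs diff = %f\n", count, maxDiff);
    fclose(fa);
    fclose(fb);
    return 0;
}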

Please let me know how I can debug this calibration.

Best Regards,

Adit