Part Number: PROCESSOR-SDK-DRA8X-TDA4X
Dear TI experts,
I have a related question about using N-dimensional tensors from a .bin file with inFileFormat=1 in the TIDLmodelImport tool for calibration.
I wrote a .bin file in NCHW format with N=9. The element data type is INT16.
I changed the code in test/src/tidl_tb_utils.c, in the function "tidl_ReadNetInput", to read the .bin file for calibration as below. However, I have a few questions:
1. This code reads all N x C x H x W tensors from the .bin file and passes them to "BufDescList[numBuffs].bufPlanes[0].buf" for calibration. Does this mean all N tensors in the .bin file are used for calibration?
2. For the starting layer, the calibration procedure prints the min and max values of the tensors as 0 and 255.0 respectively. Why is this the UINT8 range rather than the INT16 range?
119275200, 113.750 0x7f6b97e2c010
1118205, 1.066 0x7f6b97d1a010
223641, 0.213 0x7f6b97ce3010
372735, 0.355 0x7f6b97c87010
----------------------- TIDL Process with REF_ONLY FLOW ------------------------
# 0values of N=9
. ..Starting Layer # - 1
0 1.00000 0.00000 255.00000 1
Processing Layer # - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
Reducing bit depth for Tensor in layer - 1
3. I ran inference with the model generated by this calibration procedure, but the results are not satisfactory. I suspect the calibration procedure above may not be handling the input range correctly.
Please let me know how I can debug this calibration.
Best Regards,
Adit