Dear TI Team,
For our custom model, we are able to generate inference results in both the PC-emulation and target environments.
Ideally, both inference results should match. In our case, however, the outputs differ even though we use the same configuration for PC emulation and the target.
During debugging, we identified that the error is first injected at layer Conv3; this causes the difference in the final-layer output (see Fig 1 for layer details).
I have shared below the import and inference configurations used in our testing.
Kindly let us know the cause of the mismatch and a possible solution to fix it.
Fig 1:
Import Configuration -
modelType = 2
numParamBits = 16
numFeatureBits = 16
quantizationStyle = 3
inputNetFile = "XX.onnx"
outputNetFile = "XX.bin"
outputParamsFile = "XX_"
inWidth = 2048
inHeight = 1024
inNumChannels = 3
inFileFormat = 2
inDataFormat = 1
inElementType = 1
inDataNorm = 1
inMean = 123.675 116.28 103.53
inScale = 0.017124754 0.017507003 0.017429194
inData = "input.txt"
postProcType = 3
writeOutput = 2
debugTraceLevel = 1
writeTraceLevel = 3
Inference Configuration -
inFileFormat = 2
postProcType = 3
numFrames = 1
netBinFile = XX.bin
ioConfigFile = XX.bin
inData = tinput.txt
outData = yy.bin
Versions used -
SDK: ti-processor-sdk-rtos-j721e-evm-07_01_00_11
TVM version: REL.TIDL.J7.01.03.00.11
OS : Ubuntu 18.04
Thanks and regards,
VasanthKumar.V.M