Hello,
I'm trying to convert a .tflite model to .bin, following the instructions in:
But the conversion fails with an error I can't make sense of:
TFLite Model (Flatbuf) File : ../../test/testvecs/models/public/tflite/model.tflite
TIDL Network File : ../../test/testvecs/config/tidl_models/tflite/model.bin
TIDL IO Info File : ../../test/testvecs/config/tidl_models/tflite/model
323
tidl_model_import.out: /home/a0132012/work/releases/draft/08_00_01_03/flatbuffers-1.12.0/include/flatbuffers/flatbuffers.h:257: flatbuffers::Vector<T>::return_type flatbuffers::Vector<T>::Get(flatbuffers::uoffset_t) const [with T = flatbuffers::Offset<tflite::Tensor>; flatbuffers::Vector<T>::return_type = const tflite::Tensor*; flatbuffers::uoffset_t = unsigned int]: Assertion `i < size()' failed.
Aborted (core dumped)
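In case it is useful for debugging, this is a quick sanity check I can run on the host (just a sketch, assuming TensorFlow is installed there; the path is simply the inputNetFile from my config) to confirm that the .tflite flatbuffer itself loads cleanly:

import tensorflow as tf

# Same model file as inputNetFile in my import config.
MODEL = "../../test/testvecs/models/public/tflite/model.tflite"

# If the TFLite interpreter can load and allocate this model, the flatbuffer
# is readable on its own.
interpreter = tf.lite.Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
print("tensors reported:", len(interpreter.get_tensor_details()))

If this check already fails, the model file itself would be the problem; if it passes, I assume the out-of-range tensor index comes from how the import tool walks the model rather than from a corrupt file.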
The import configuration file I am using:
modelType = 3
numParamBits = 32
quantizationStyle = 2
inputNetFile = ../../test/testvecs/models/public/tflite/model.tflite
outputNetFile = "../../test/testvecs/config/tidl_models/tflite/model.bin"
outputParamsFile = "../../test/testvecs/config/tidl_models/tflite/model"
inDataNorm = 0
inWidth = 224
inHeight = 224
inNumChannels = 3
postProcType = 1
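Since the config uses relative paths, one thing I also want to rule out is that they don't resolve from wherever tidl_model_import.out is invoked. A minimal check (plain Python, paths copied from the config above, only meaningful when run from the same directory as the import tool):

import os

paths = [
    "../../test/testvecs/models/public/tflite/model.tflite",  # inputNetFile
    "../../test/testvecs/config/tidl_models/tflite",          # output directory
]
for p in paths:
    print(os.path.abspath(p), "exists:", os.path.exists(p))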
And model info:
inputs > type: float32[1,224,224,3]
output > type: float32[1,1]
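To double-check that this matches the inWidth / inHeight / inNumChannels values in my config, I can extend the interpreter check from above (again just a sketch, assuming TensorFlow on the host):

import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="../../test/testvecs/models/public/tflite/model.tflite")
interpreter.allocate_tensors()

# Expecting float32 [1, 224, 224, 3] in and float32 [1, 1] out, as above.
for d in interpreter.get_input_details():
    print("input :", d["shape"], d["dtype"])
for d in interpreter.get_output_details():
    print("output:", d["shape"], d["dtype"])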
Just in case, I have also tried every combination of numParamBits = 8, 15, 16, and 32 with quantizationStyle = 0, 1, 2, and 3, and I get the same error in all cases.
Any idea what is causing this error and how to fix it? Any suggestions are welcome.