
AM5749: TIDL - Cannot quantize layer Reshape

Part Number: AM5749

Hi,

I'm trying to convert and quantize a model trained with Caffe-Jacinto into a TIDL model, using the import tool provided in the TI Processor SDK.
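The import is driven by a config file passed to tidl_model_import.out. Mine looks roughly like this (a sketch: the file paths are taken from the log output below, while the remaining field names and values follow the SDK's sample import configs and should be treated as illustrative):

```
# Sketch of my TIDL import config. Paths match the log below;
# the other fields are illustrative, based on the SDK samples.
randParams         = 0
modelType          = 0        # 0 = Caffe
quantizationStyle  = 1
inputNetFile       = "models/cj/deploy.prototxt"
inputParamsFile    = "models/cj/snapshot.caffemodel"
outputNetFile      = "models/cj/tidl_net.bin"
outputParamsFile   = "models/cj/tidl_params.bin"
```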

If the architecture contains a Reshape layer, tidl_model_import.out produces the following output:

Attachment: with_reshape.zip (deploy.prototxt and snapshot.caffemodel)

Caffe Network File : models/cj/deploy.prototxt
Caffe Model File : models/cj/snapshot.caffemodel
TIDL Network File : models/cj/tidl_net.bin
TIDL Model File : models/cj/tidl_params.bin
Name of the Network : graph_deploy
Num Inputs : 1
Num of Layer Detected : 6
0, TIDL_DataLayer , data 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 32 , 32 , 0 ,
1, TIDL_BatchNormLayer , data/bias 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 , 3072 ,
2, TIDL_ConvolutionLayer , conv1 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 , 451584 ,
3, (null) , reshape1 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 3 , 1 , 3 , 32 , 32 , 1 , 3 , 3 , 341 , 3069 ,
4, TIDL_FlattenLayer , flatten1 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 4 , 1 , 3 , 3 , 341 , 1 , 1 , 1 , 3069 , 1 ,
5, TIDL_InnerProductLayer , fc10 1, 1 , 1 , 4 , x , x , x , x , x , x , x , 5 , 1 , 1 , 1 , 3069 , 1 , 1 , 1 , 10 , 30690 ,
Total Giga Macs : 0.0005

Processing config file ./tempDir/qunat_stats_config.txt !
0, TIDL_DataLayer , 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 32 , 32 ,
1, TIDL_BatchNormLayer , 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 ,
2, TIDL_ConvolutionLayer , 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 ,

Process finished with exit code 0

First of all, the type of the Reshape layer is reported as (null), which doesn't look right, and the quantization process ends abruptly right before that layer: only layers 0 through 2 appear in the second listing, yet the tool still exits with code 0.
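For reference, the Reshape layer in the failing deploy.prototxt looks roughly like this (layer and blob names are taken from the import log above; the exact shape values are illustrative, using Caffe's conventions that dim: 0 copies the corresponding input axis and dim: -1 is inferred):

```
# Sketch of the problematic layer; names from the import log above,
# shape values illustrative (dim: 0 copies the batch axis, dim: -1 is inferred).
layer {
  name: "reshape1"
  type: "Reshape"
  bottom: "conv1"
  top: "reshape1"
  reshape_param {
    shape { dim: 0 dim: 3 dim: 3 dim: -1 }
  }
}
```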

--------------------------------------

If I instead quantize a model without the Reshape layer, I get the following result:

Attachment: without_reshape.zip (deploy.prototxt and snapshot.caffemodel)

Caffe Network File : models/cj/deploy.prototxt
Caffe Model File : models/cj/snapshot.caffemodel
TIDL Network File : models/cj/tidl_net.bin
TIDL Model File : models/cj/tidl_params.bin
Name of the Network : graph_deploy
Num Inputs : 1
Num of Layer Detected : 5
0, TIDL_DataLayer , data 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 32 , 32 , 0 ,
1, TIDL_BatchNormLayer , data/bias 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 , 3072 ,
2, TIDL_ConvolutionLayer , conv1 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 , 451584 ,
3, TIDL_FlattenLayer , flatten1 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 3 , 1 , 3 , 32 , 32 , 1 , 1 , 1 , 3072 , 1 ,
4, TIDL_InnerProductLayer , fc10 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 4 , 1 , 1 , 1 , 3072 , 1 , 1 , 1 , 10 , 30720 ,
Total Giga Macs : 0.0005

Processing config file ./tempDir/qunat_stats_config.txt !
0, TIDL_DataLayer , 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 32 , 32 ,
1, TIDL_BatchNormLayer , 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 ,
2, TIDL_ConvolutionLayer , 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 3 , 32 , 32 , 1 , 3 , 32 , 32 ,
3, TIDL_FlattenLayer , 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 3 , 1 , 3 , 32 , 32 , 1 , 1 , 1 , 3072 ,
4, TIDL_InnerProductLayer , 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 4 , 1 , 1 , 1 , 3072 , 1 , 1 , 1 , 10 ,
5, TIDL_DataLayer , 0, 1 , -1 , 4 , x , x , x , x , x , x , x , 0 , 1 , 1 , 1 , 10 , 0 , 0 , 0 , 0 ,
Layer ID ,inBlkWidth ,inBlkHeight ,inBlkPitch ,outBlkWidth ,outBlkHeight,outBlkPitch ,numInChs ,numOutChs ,numProcInChs,numLclInChs ,numLclOutChs,numProcItrs ,numAccItrs ,numHorBlock ,numVerBlock ,inBlkChPitch,outBlkChPitc,alignOrNot
2 40 38 40 32 32 32 3 3 3 1 3 1 3 1 1 1520 1024 1

Processing Frame Number : 0

Image reading is Not Supported. OpenCV not Enabled
Layer 1 : Out Q : 1 , TIDL_BatchNormLayer , PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 2 : Out Q : 1 , TIDL_ConvolutionLayer, PASSED #MMACs = 0.45, 0.48, Sparsity : -6.12
Layer 3 :TIDL_FlattenLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
Layer 4 : Out Q : 1 , TIDL_InnerProductLayer, PASSED #MMACs = 0.00, 0.00, Sparsity : 0.00
End of config list found !

Process finished with exit code 0

The model is fully processed and I am able to run inference on the AM5749.
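For comparison, in the working variant the Reshape layer is simply dropped and conv1 feeds the Flatten layer directly. Sketched below (layer names are from the log; the bottom/top wiring is my assumption):

```
# Working variant: no Reshape; conv1 feeds Flatten directly.
# Layer names match the log; bottom/top wiring is assumed.
layer {
  name: "flatten1"
  type: "Flatten"
  bottom: "conv1"
  top: "flatten1"
}
layer {
  name: "fc10"
  type: "InnerProduct"
  bottom: "flatten1"
  top: "fc10"
  inner_product_param { num_output: 10 }
}
```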

--------------------------------------------------------

Another odd thing I've noticed: if I specify a 4-dimensional output shape in the Reshape layer (e.g. [batch_size, channels, h*w, 1]), the first dimension is ignored during quantization, whereas during training it behaves as expected.
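Concretely, the 4-D case I mean is something like this (a sketch; for the 1x3x32x32 input above, channels = 3 and h*w = 1024):

```
# Hypothetical 4-D Reshape: [batch_size, channels, h*w, 1].
layer {
  name: "reshape1"
  type: "Reshape"
  bottom: "conv1"
  top: "reshape1"
  reshape_param {
    # dim: 0 copies the input's batch axis in Caffe
    shape { dim: 0 dim: 3 dim: 1024 dim: 1 }
  }
}
```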

According to this page (http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html), the Reshape layer should be supported. What do we need to do to make it work? We really need the Reshape layer for our model to run.

Thanks in advance,

Gabriele