Other Parts Discussed in Thread: TDA2
Hello,
As the title says, does caffe-jacinto offer quantization-aware training for SSD?
I've tried SSD with the two provided base networks, JacintoNetV2 and MobileNet.
As a TI engineer previously noted, depth-wise convolutions suffer more from runtime weight quantization, and that matches the results I got:
JacintoNetV2 SSD:
After sparse training: mAP = 0.88
Quantization test: mAP = 0.74
MobileNet SSD:
After sparse training: mAP = 0.89
Quantization test: mAP = 0.62
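For reference, the kind of weight quantization I mean can be approximated offline with a small NumPy sketch. This is only a rough model (hypothetical helper name, symmetric per-tensor 8-bit quantization), not what the TDA2 runtime actually does internally:

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Symmetric per-tensor quantization: round weights onto a signed
    num_bits integer grid, then dequantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1                  # 127 for 8 bits
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-12)
    q = np.clip(np.round(w / scale), -qmax, qmax)   # integer levels
    return q * scale                                # dequantized weights

# Example: quantize one conv layer's weights and measure the error
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(32, 3, 3, 3)).astype(np.float32)
wq = fake_quantize(w)
print("max abs quantization error:", float(np.max(np.abs(w - wq))))
```

Running every layer's weights through a check like this makes it easy to see which layers (e.g. depth-wise ones with wide dynamic range) lose the most precision.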
Although the quantization loss is smaller for JacintoNetV2, I would still like to reduce it further, so:
(1.) Is quantization-aware training available in caffe-jacinto?
(2.) Are there any other suggestions to minimize the quantization loss?
(3.) Is there a way to turn off runtime quantization (for accuracy-experiment purposes) when running the model on the TDA2 dev board?
Thank you,
Wei Chih