Hello,
I am trying to quantize an object detection model. Since my inputs are grayscale, I have to feed them as raw binary files, and as I understood from previous threads, I can then only provide a single image in the import config file (inFileFormat=1).
I found that the 8-bit model's accuracy degrades heavily compared to the full-precision model. I suspect the main cause is the restriction to a single calibration image at the import stage, since that image alone is used to compute the quantization parameters.
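One idea I was considering is packing several grayscale frames into a single raw binary file, in case the importer can treat it as a multi-frame calibration input. A minimal sketch of what I mean (the frame shape, dtype, and filename are just assumptions on my side, not something I found in the docs):

```python
import numpy as np

# Hypothetical sketch: stack several grayscale calibration frames and dump
# them as one contiguous raw binary file, frame after frame.
# uint8 HxW frames are an assumption about what the importer expects.
frames = [np.random.randint(0, 256, (224, 224), dtype=np.uint8) for _ in range(8)]
stacked = np.stack(frames)               # shape: (8, 224, 224)
stacked.tofile("calibration_multi.bin")  # raw bytes, no header
```

I am not sure whether the import tool would actually consume such a file as multiple calibration samples, though.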
Can you suggest any solution to this issue?
Thanks,
Adam