Hi,

I am running int8 inference with my model, but the output is incorrect. While debugging, I found a deviation at the model's input layer compared to the fp32 run, where I do get the correct output. Could you please let me know whether I need to apply any additional preprocessing steps for int8 inference?





