Hello, I have followed the usual setup links and used the semantic segmentation model fpn_aspp_regnetx1p6gf_edgeailite to train on the Cityscapes dataset, obtaining the model_best.pth checkpoint.
Then, for Quantization Aware Training (QAT), I applied the following configuration according to the QAT.md tutorial:
is_cuda = next(model.parameters()).is_cuda
example_inputs = create_rand_inputs(args, is_cuda=is_cuda)
if 'training' in args.phase:
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, example_inputs=example_inputs, total_epochs=args.epochs)
elif 'calibration' in args.phase:
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model)
elif 'validation' in args.phase:
    # Note: bias_calibration is not enabled
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, total_epochs=args.epochs)
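For context, create_rand_inputs is what builds example_inputs above. My understanding (an assumption on my part; the helper name make_example_inputs and the shapes below are illustrative, not from the repo) is that prepare_qat_fx expects example_inputs as a tuple of the model's positional forward arguments, and since this pixel2pixel model asserts that its input is a list or tuple, the example input should itself be a list wrapped inside that tuple:

import torch

def make_example_inputs(batch_size=1, channels=3, height=512, width=1024, is_cuda=False):
    # prepare_qat_fx takes example_inputs as a tuple of positional args;
    # the model's forward expects one argument that is a list of tensors,
    # so the tuple contains a single element: a list holding the image tensor.
    img = torch.rand(batch_size, channels, height, width)
    img = img.cuda() if is_cuda else img
    return ([img],)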
However, I got the following error:
Traceback (most recent call last):
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 290, in <module>
    run(args)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 285, in run
    main(arguments)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/./references/edgeailite/main/pixel2pixel/train_segmentation_main.py", line 148, in main
    train_pixel2pixel.main(arguemnts)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xengine/train_pixel2pixel.py", line 450, in main
    model = edgeai_torchmodelopt.xmodelopt.quantization.v2.QATFxModule(model, example_inputs=example_inputs, total_epochs=args.epochs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xmodelopt/quantization/v2/quant_fx.py", line 37, in __init__
    super().__init__(*args, is_qat=True, backend=backend, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xmodelopt/quantization/v2/quant_fx_base.py", line 79, in __init__
    model = quantize_fx.prepare_qat_fx(model, qconfig_mapping, example_inputs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py", line 515, in prepare_qat_fx
    return _prepare_fx(
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/quantize_fx.py", line 162, in _prepare_fx
    graph_module = GraphModule(model, tracer.trace(model))
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/edgeai_torchmodelopt/xnn/utils/amp.py", line 45, in conditional_fp16
    return func(self, *args, **kwargs)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xvision/models/pixel2pixel/pixel2pixelnet.py", line 122, in forward
    d_out = decoder(x_inp, x_feat, x_list)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 717, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/ao/quantization/fx/tracer.py", line 103, in call_module
    return super().call_module(m, forward, args, kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 434, in call_module
    return forward(*args, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 710, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/home/xcb/anaconda3/envs/edgeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/xcb/zjb/algorithm/edgeai-torchvision/references/edgeailite/edgeai_xvision/xvision/models/pixel2pixel/fpn_edgeailite.py", line 240, in forward
    assert isinstance(x_input, (list,tuple)) and len(x_input)<=2, 'incorrect input'
AssertionError: incorrect input
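As far as I can tell, the assertion fires because torch.fx symbolic tracing replaces the forward arguments with Proxy objects, so isinstance(x_input, (list, tuple)) evaluates to False during tracing even when the real runtime input is a list. Below is a minimal standalone sketch (my own reproduction, not the actual edgeai code) showing the same failure mode:

import torch
import torch.fx

class Decoder(torch.nn.Module):
    def forward(self, x_input):
        # Same style of guard as fpn_edgeailite.py line 240: under FX tracing,
        # x_input is a torch.fx.Proxy, so isinstance(...) returns False.
        assert isinstance(x_input, (list, tuple)) and len(x_input) <= 2, 'incorrect input'
        return x_input[0]

try:
    torch.fx.symbolic_trace(Decoder())
except AssertionError as e:
    print('tracing failed:', e)  # reproduces "AssertionError: incorrect input"

# One possible workaround (an assumption, not an official fix): skip the
# runtime check when the argument is an FX Proxy so that tracing can proceed:
#     if not isinstance(x_input, torch.fx.Proxy):
#         assert isinstance(x_input, (list, tuple)) and len(x_input) <= 2, 'incorrect input'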
I went back through the tutorial and also tried removing example_inputs=example_inputs, but I still get the same error. Do I need to add some other configuration?
Looking forward to your reply! Thanks!
Since there have been no further actions, we will close this thread. Please submit a new ticket if there are still issues.
Br, Tommy