Tool/software:
Hi experts,
I am using the ARC fault model in Model Composer. After training the model, I tried to compile it, but the compilation fails. Below is the error log:
The current compilation run has failed.
No log file was produced.

/opt/tinyml/.pyenv/versions/py310/lib/python3.10/site-packages/edgeai_torchmodelopt/xmodelopt/quantization/v2/qconfig_types.py:48: UserWarning: could not find _get_default_qconfig_mapping_with_default_qconfig in torch.ao.quantization.qconfig_mapping
  warnings.warn("could not find _get_default_qconfig_mapping_with_default_qconfig in torch.ao.quantization.qconfig_mapping")
Traceback (most recent call last):
  File "/opt/tinyml/code/tinyml-modelmaker/tinyml_modelmaker/utils/config_dict.py", line 77, in __getattr__
    return self[key]
KeyError: 'model_export_path_quantization'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/tinyml/code/tinyml-mlbackend/./run.py", line 138, in <module>
    main(vars(args))
  File "/opt/tinyml/code/tinyml-mlbackend/./run.py", line 129, in main
    run_modelmaker.main(config)
  File "/opt/tinyml/code/tinyml-modelmaker/scripts/run_tinyml_modelmaker.py", line 97, in main
    model_runner.run()
  File "/opt/tinyml/code/tinyml-modelmaker/tinyml_modelmaker/ai_modules/timeseries/runner.py", line 208, in run
    self.model_compilation.run()
  File "/opt/tinyml/code/tinyml-modelmaker/tinyml_modelmaker/ai_modules/common/compilation/tinyml_benchmark.py", line 117, in run
    model_file = self.params.training.model_export_path_quantization if self.params.training.quantization != TinyMLQuantizationVersion.NO_QUANTIZATION else self.params.training.model_export_path
  File "/opt/tinyml/code/tinyml-modelmaker/tinyml_modelmaker/utils/config_dict.py", line 79, in __getattr__
    raise AttributeError(key)
AttributeError: model_export_path_quantization
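For context, my reading of the trace is that the compilation step expects a quantized model export path in the training config, but that key was never set. A minimal sketch of the mechanism (this is a simplified, hypothetical stand-in for the project's ConfigDict, based only on the traceback above; the real class may differ):

```python
# Hypothetical simplification of tinyml_modelmaker's config_dict pattern:
# attribute access falls back to dict lookup, and a missing key is
# re-raised as AttributeError (hence the chained KeyError/AttributeError
# in the log above).
class ConfigDict(dict):
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

# Assumed example config: quantization is enabled, but the quantized
# export path was never written into the config.
training = ConfigDict(quantization="QAT", model_export_path="model.onnx")

# Mirrors the failing line in tinyml_benchmark.py: with quantization
# enabled, the compiler reads model_export_path_quantization.
try:
    model_file = (training.model_export_path_quantization
                  if training.quantization != "NO_QUANTIZATION"
                  else training.model_export_path)
except AttributeError as exc:
    print(f"AttributeError: {exc}")  # AttributeError: model_export_path_quantization
```

So the failure seems to happen before compilation proper: the training step apparently did not record `model_export_path_quantization` even though quantization was requested.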
May I know how to resolve this?
Regards,
Hang.