Tool/software:
Hi experts,
I would like to use edgeai-mmpose to test 2D face keypoint models such as RTM-based ones. When I try to run a demo, I hit an environment issue related to edgeai-torchmodelopt. My Python environment:
Package                  Version       Editable project location
------------------------ ------------- ----------------------------------------------
addict                   2.4.0
aliyun-python-sdk-core   2.16.0
aliyun-python-sdk-kms    2.16.5
attrs                    25.1.0
certifi                  2025.1.31
cffi                     1.17.1
charset-normalizer       3.4.1
chumpy                   0.70
click                    8.1.8
cmake                    3.25.0
colorama                 0.4.6
coloredlogs              15.0.1
contourpy                1.3.1
coverage                 7.6.12
crcmod                   1.7
cryptography             44.0.2
cycler                   0.12.1
Cython                   3.0.12
edgeai-torchmodelopt     10.0.0+532969
exceptiongroup           1.2.2
filelock                 3.14.0
flake8                   7.1.2
flatbuffers              25.2.10
fonttools                4.56.0
fsspec                   2025.3.0
humanfriendly            10.0
idna                     3.10
iniconfig                2.0.0
interrogate              1.7.0
isort                    4.3.21
Jinja2                   3.1.6
jmespath                 0.10.0
json-tricks              3.17.3
kiwisolver               1.4.8
lit                      15.0.7
Markdown                 3.7
markdown-it-py           3.0.0
MarkupSafe               3.0.2
matplotlib               3.10.1
mccabe                   0.7.0
mdurl                    0.1.2
mmcv                     2.1.0
mmdet                    3.2.0
mmengine                 0.10.7
mmpose                   1.3.1         /home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose
model-index              0.1.11
mpmath                   1.3.0
munkres                  1.1.4
networkx                 3.4.2
numpy                    1.26.4
nvidia-cublas-cu11       11.10.3.66
nvidia-cublas-cu12       12.1.3.1
nvidia-cuda-cupti-cu11   11.7.101
nvidia-cuda-cupti-cu12   12.1.105
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-nvrtc-cu12   12.1.105
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu11        8.5.0.96
nvidia-cudnn-cu12        8.9.2.26
nvidia-cufft-cu11        10.9.0.58
nvidia-cufft-cu12        11.0.2.54
nvidia-curand-cu11       10.2.10.91
nvidia-curand-cu12       10.3.2.106
nvidia-cusolver-cu11     11.4.0.1
nvidia-cusolver-cu12     11.4.5.107
nvidia-cusparse-cu11     11.7.4.91
nvidia-cusparse-cu12     12.1.0.106
nvidia-cusparselt-cu12   0.6.2
nvidia-nccl-cu11         2.14.3
nvidia-nccl-cu12         2.19.3
nvidia-nvjitlink-cu12    12.4.127
nvidia-nvtx-cu11         11.7.91
nvidia-nvtx-cu12         12.1.105
onnx                     1.17.0
onnxruntime              1.21.0
onnxsim                  0.4.36
opencv-python            4.11.0.86
opendatalab              0.0.10
openmim                  0.3.9
openxlab                 0.1.2
ordered-set              4.1.0
oss2                     2.17.0
packaging                24.2
pandas                   2.2.3
parameterized            0.9.0
pillow                   11.1.0
pip                      23.0.1
platformdirs             4.3.6
pluggy                   1.5.0
progressbar              2.5
protobuf                 6.30.0
py                       1.11.0
pycocotools              2.0.8
pycodestyle              2.12.1
pycparser                2.22
pycryptodome             3.21.0
pydot                    3.0.4
pyflakes                 3.2.0
Pygments                 2.19.1
pyparsing                3.2.1
pytest                   8.3.5
pytest-runner            6.0.1
python-dateutil          2.9.0.post0
pytz                     2023.4
PyYAML                   6.0.2
requests                 2.28.2
rich                     13.4.2
scipy                    1.15.2
setuptools               60.2.0
shapely                  2.0.7
six                      1.17.0
sympy                    1.13.1
tabulate                 0.9.0
termcolor                2.5.0
terminaltables           3.1.10
tomli                    2.2.1
torch                    2.0.1
torchaudio               2.0.2+cu118
torchinfo                1.8.0
torchvision              0.15.2+cu118
tqdm                     4.65.2
triton                   2.0.0
typing_extensions        4.12.2
tzdata                   2025.1
urllib3                  1.26.20
wheel                    0.45.1
xdoctest                 1.2.0
xtcocotools              1.14.3
yapf                     0.43.0
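For quick reference, the packages most relevant to the failing import chain below can be confirmed directly in this environment. This is just a small check script I ran, not part of any edgeai tooling:

# Sanity check of the installed versions involved in the import chain
# (package names taken from the pip list above).
import importlib.metadata as md

for pkg in ("torch", "edgeai-torchmodelopt", "mmpose", "mmcv", "mmengine"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")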
error log:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/torch/serialization.py", line 1165, in find_class
    return super().find_class(mod_name, name)
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/models/__init__.py", line 2, in <module>
    from .backbones import *  # noqa
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/models/backbones/__init__.py", line 4, in <module>
    from .csp_darknet import CSPDarknet
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/models/backbones/csp_darknet.py", line 11, in <module>
    from ..utils import CSPLayer
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/models/utils/__init__.py", line 4, in <module>
    from .csp_layer import CSPLayer
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/models/utils/csp_layer.py", line 9, in <module>
    from mmpose.utils.typing import ConfigType, OptConfigType, OptMultiConfig
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/utils/__init__.py", line 9, in <module>
    from .model_surgery import convert_to_lite_model
  File "/home/ht/edgeai/edgeai-tensorlab/edgeai-mmpose/mmpose/utils/model_surgery.py", line 33, in <module>
    import edgeai_torchmodelopt
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/edgeai_torchmodelopt/__init__.py", line 32, in <module>
    from . import xnn
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/edgeai_torchmodelopt/xnn/__init__.py", line 32, in <module>
    from . import utils
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/edgeai_torchmodelopt/xnn/utils/__init__.py", line 55, in <module>
    from .graph_drawer_utils import *
  File "/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/edgeai_torchmodelopt/xnn/utils/graph_drawer_utils.py", line 6, in <module>
    from torch.fx.graph import _parse_stack_trace
ImportError: cannot import name '_parse_stack_trace' from 'torch.fx.graph' (/home/ht/.pyenv/versions/mmpose/lib/python3.10/site-packages/torch/fx/graph.py)
>>>
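The failing import can be reproduced on its own, without going through mmpose at all, so it looks like a mismatch between the installed torch 2.0.1 and edgeai-torchmodelopt 10.0.0+532969 (I believe the private helper it imports only exists in newer torch releases). A minimal check in the same environment:

# Reproduces the failing line of edgeai_torchmodelopt/xnn/utils/graph_drawer_utils.py
# in isolation; with torch 2.0.1 this prints the same ImportError as above.
import torch
print(torch.__version__)

try:
    from torch.fx.graph import _parse_stack_trace  # private helper imported by graph_drawer_utils.py
    print("_parse_stack_trace is available")
except ImportError as err:
    print("ImportError:", err)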
This problem is related to another E2E thread: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1455952/sk-am62a-lp-complie-onnx-split-and-add-node-can-t-pass/5659403#5659403
The customer used an RTM-based model for 2D face keypoint detection, so we are trying to extend the current edgeai-mmpose to test that model.
Regards,
Adam