Running inference.py on the fine-tuned model always fails with: OSError: /tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000 does not appear to have a file named THUDM/glm-4-9b-chat--configuration_chatglm.py. Checkout 'https://huggingface.co//tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000/None' for available files. #612

Open
LolerPanda opened this issue Oct 27, 2024 · 1 comment
System Info

(chatglm_env) (base) root@di-20240511161733-mz478:/tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo# pip list
Package Version

accelerate 1.0.1
aiofiles 23.2.1
aiohappyeyeballs 2.4.3
aiohttp 3.10.10
aiosignal 1.3.1
annotated-types 0.7.0
anyio 4.6.2.post1
async-timeout 4.0.3
attrs 24.2.0
certifi 2024.8.30
charset-normalizer 3.4.0
click 8.1.7
contourpy 1.3.0
cpm-kernels 1.0.11
cycler 0.12.1
datasets 2.20.0
deepspeed 0.14.4
dill 0.3.8
exceptiongroup 1.2.2
fastapi 0.115.3
ffmpy 0.4.0
filelock 3.16.1
fonttools 4.54.1
frozenlist 1.5.0
fsspec 2024.5.0
gradio 4.44.1
gradio_client 1.3.0
h11 0.14.0
hjson 3.1.0
httpcore 1.0.6
httpx 0.27.2
huggingface-hub 0.26.1
idna 3.10
importlib_metadata 8.5.0
importlib_resources 6.4.5
jieba 0.42.1
Jinja2 3.1.4
joblib 1.4.2
kiwisolver 1.4.7
latex2mathml 3.77.0
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
mdtex2html 1.3.0
mdurl 0.1.2
mpmath 1.3.0
multidict 6.1.0
multiprocess 0.70.16
networkx 3.2.1
ninja 1.11.1.1
nltk 3.8.1
numpy 2.0.2
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
orjson 3.10.10
packaging 24.1
pandas 2.2.3
peft 0.12.0
pillow 10.4.0
pip 24.2
propcache 0.2.0
protobuf 5.28.3
psutil 6.1.0
py-cpuinfo 9.0.0
pyarrow 17.0.0
pyarrow-hotfix 0.6
pydantic 2.9.2
pydantic_core 2.23.4
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.2.0
python-dateutil 2.9.0.post0
python-multipart 0.0.12
pytz 2024.2
PyYAML 6.0.2
regex 2024.9.11
requests 2.32.3
rich 13.9.3
rouge-chinese 1.0.3
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.12
ruff 0.7.1
safetensors 0.4.5
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 75.2.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
starlette 0.41.0
sympy 1.13.1
tokenizers 0.13.3
tomlkit 0.12.0
torch 2.5.0
tqdm 4.66.5
transformers 4.27.1
triton 3.1.0
typer 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
uvicorn 0.32.0
websockets 12.0
wheel 0.44.0
xxhash 3.5.0
yarl 1.16.0
zipp 3.20.2

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Reproduction

(chatglm_env) (base) root@di-20240511161733-mz478:/tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo# python inference_ori.py /tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Could not locate the THUDM/glm-4-9b-chat--configuration_chatglm.py inside /tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000.
╭───────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────╮
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/inference_ori.py:116 in main │
│ │
│ 113 │ # } │
│ 114 │ # ] │
│ 115 │ │
│ ❱ 116 │ model, tokenizer = load_model_and_tokenizer(model_dir) │
│ 117 │ inputs = tokenizer.apply_chat_template( │
│ 118 │ │ messages, │
│ 119 │ │ add_generation_prompt=True, │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/inference_ori.py:36 in load_model_and_tokenizer │
│ │
│ 33 │ │ ) │
│ 34 │ │ tokenizer_dir = model.peft_config['default'].base_model_name_or_path │
│ 35 │ else: │
│ ❱ 36 │ │ model = AutoModel.from_pretrained( │
│ 37 │ │ │ model_dir, │
│ 38 │ │ │ trust_remote_code=trust_remote_code, │
│ 39 │ │ │ device_map='auto', │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/chatglm_env/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py:441 in from_pretrained │
│ │
│ 438 │ │ │ if kwargs_copy.get("torch_dtype", None) == "auto": │
│ 439 │ │ │ │ _ = kwargs_copy.pop("torch_dtype") │
│ 440 │ │ │ │
│ ❱ 441 │ │ │ config, kwargs = AutoConfig.from_pretrained( │
│ 442 │ │ │ │ pretrained_model_name_or_path, │
│ 443 │ │ │ │ return_unused_kwargs=True, │
│ 444 │ │ │ │ trust_remote_code=trust_remote_code, │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/chatglm_env/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:911 in │
│ from_pretrained │
│ │
│ 908 │ │ │ │ ) │
│ 909 │ │ │ class_ref = config_dict["auto_map"]["AutoConfig"] │
│ 910 │ │ │ module_file, class_name = class_ref.split(".") │
│ ❱ 911 │ │ │ config_class = get_class_from_dynamic_module( │
│ 912 │ │ │ │ pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs │
│ 913 │ │ │ ) │
│ 914 │ │ │ config_class.register_for_auto_class() │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/chatglm_env/lib/python3.9/site-packages/transformers/dynamic_module_utils.py:388 in │
│ get_class_from_dynamic_module │
│ │
│ 385 │ cls = get_class_from_dynamic_module("sgugger/my-bert-model", "modeling.py", "MyBertM │
│ 386 │ ```""" │
│ 387 │ # And lastly we get the class inside our newly created module │
│ ❱ 388 │ final_module = get_cached_module_file( │
│ 389 │ │ pretrained_model_name_or_path, │
│ 390 │ │ module_file, │
│ 391 │ │ cache_dir=cache_dir, │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/chatglm_env/lib/python3.9/site-packages/transformers/dynamic_module_utils.py:252 in get_cached_module_file │
│ │
│ 249 │ │
│ 250 │ try: │
│ 251 │ │ # Load from URL or cache if already cached │
│ ❱ 252 │ │ resolved_module_file = cached_file( │
│ 253 │ │ │ pretrained_model_name_or_path, │
│ 254 │ │ │ module_file, │
│ 255 │ │ │ cache_dir=cache_dir, │
│ │
│ /tiamat-vePFS/share_data/boyang/llms/GLM-4/finetune_demo/chatglm_env/lib/python3.9/site-packages/transformers/utils/hub.py:380 in cached_file │
│ │
│ 377 │ │ resolved_file = os.path.join(os.path.join(path_or_repo_id, subfolder), filename) │
│ 378 │ │ if not os.path.isfile(resolved_file): │
│ 379 │ │ │ if _raise_exceptions_for_missing_entries: │
│ ❱ 380 │ │ │ │ raise EnvironmentError( │
│ 381 │ │ │ │ │ f"{path_or_repo_id} does not appear to have a file named {full_filen │
│ 382 │ │ │ │ │ f"'https://huggingface.co/{path_or_repo_id}/{revision}' for availabl │
│ 383 │ │ │ │ ) │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
OSError: /tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000 does not appear to have a file named THUDM/glm-4-9b-chat--configuration_chatglm.py. Checkout
'https://huggingface.co//tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000/None' for available files.
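For what it's worth, the failing lookup can be checked directly: per the traceback, AutoConfig reads config_dict["auto_map"]["AutoConfig"] and hands the whole string to get_class_from_dynamic_module as a filename, so an auto_map entry saved with a Hub-repo prefix (e.g. THUDM/glm-4-9b-chat--configuration_chatglm.ChatGLMConfig) is looked up verbatim inside the checkpoint directory. A minimal inspection sketch (checkpoint path copied from above; the reading of auto_map is inferred from the traceback, not a confirmed diagnosis):

```python
import json
from pathlib import Path

ckpt = Path("/tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000")

# Print the auto_map entries; these are the exact strings the loader
# tries to resolve as local files, producing the OSError above.
cfg = json.loads((ckpt / "config.json").read_text())
print(json.dumps(cfg.get("auto_map"), indent=2))
```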

Expected behavior

Fix the bug

zRzRzRzRzRzRzR self-assigned this Oct 28, 2024

@zRzRzRzRzRzRzR (Member)

Use an absolute path; it looks like the loader is reaching out to the Hugging Face Hub instead of your local checkpoint.
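A sketch of one possible workaround, assuming a local copy of the base model is available (the base-model path below is hypothetical, and rewriting auto_map is a common remedy rather than an official fix; note also that the pip list above shows transformers 4.27.1, which is far older than what the GLM-4 demos pin, so upgrading transformers may resolve this on its own):

```python
import json
import shutil
from pathlib import Path

base = Path("/path/to/local/THUDM/glm-4-9b-chat")  # hypothetical local base-model copy
ckpt = Path("/tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000")

# Copy the remote-code modules (configuration/modeling/tokenization .py files)
# from the base model into the checkpoint so loading never touches the Hub.
for py in base.glob("*.py"):
    shutil.copy(py, ckpt / py.name)

# Strip the "THUDM/glm-4-9b-chat--" repo prefix so auto_map points at the
# local files just copied in.
cfg_path = ckpt / "config.json"
cfg = json.loads(cfg_path.read_text())
if "auto_map" in cfg:
    cfg["auto_map"] = {k: v.split("--")[-1] for k, v in cfg["auto_map"].items()}
    cfg_path.write_text(json.dumps(cfg, indent=2, ensure_ascii=False))
```

After this, `python inference.py /tiamat-NAS/boyang/GLM4/gjm/1024/checkpoint-12000` should be able to resolve configuration_chatglm.py locally, though with transformers this old that is not guaranteed.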
