
Plan to support model loaded directly from huggingface/diffusers? #113

Open
nguyendothanhtruc opened this issue Jun 27, 2024 · 1 comment

Comments

@nguyendothanhtruc

Hi, thank you for your amazing work!
May I ask how to load a pretrained model directly from huggingface/diffusers? I also used the diffusers conversion script to convert from the diffusers format to safetensors or ckpt. However, I encounter the key error below:

key cond_stage_model.transformer.text_model.embeddings.position_ids not found!

Can your work only load checkpoints downloaded from civitai?
I hope you can reply and give me some advice. Thank you a lot!
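For context, a minimal sketch of how the missing key can be confirmed (the file name is a placeholder for the output of the diffusers conversion script):

```python
# Minimal sketch: inspect the converted checkpoint to confirm whether the key
# the loader complains about is actually missing from the state dict.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")  # placeholder path for the converted file
prefix = "cond_stage_model.transformer.text_model.embeddings"
for k in state_dict:
    if k.startswith(prefix):
        print(k)
# Newer transformers releases register position_ids as a non-persistent buffer,
# so "...embeddings.position_ids" will typically be absent here, matching the error.
```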

@tyxsspa
Owner

tyxsspa commented Jun 27, 2024

Maybe that error is caused by a mismatched transformers version? (Try transformers==4.30.2.)
Yes, in demo.py you can load any SD1.5 ckpt or LoRA model from civitai without problems.
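If pinning transformers==4.30.2 is not convenient, here is a minimal sketch of a possible workaround, not the author's recommended fix: re-insert the missing position_ids buffer into the converted checkpoint. The paths are placeholders, and this assumes the loader only needs the key to exist with its standard value.

```python
# Sketch of a possible workaround: add the position_ids entry back into the
# converted checkpoint. Paths are placeholders.
import torch
from safetensors.torch import load_file, save_file

src, dst = "model.safetensors", "model_patched.safetensors"
state_dict = load_file(src)

key = "cond_stage_model.transformer.text_model.embeddings.position_ids"
if key not in state_dict:
    # SD1.5's CLIP text encoder uses a fixed context length of 77 tokens,
    # so position_ids is just the range [0, 76] with shape (1, 77).
    state_dict[key] = torch.arange(77, dtype=torch.int64).unsqueeze(0)

save_file(state_dict, dst)
```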
