When I'm training qwen2-vl with lm_head and embed_tokens in target_modules for continued pretraining, I get this user warning:
UserWarning: Model with tie_word_embeddings=True and the tied_target_modules=['lm_head'] are part of the adapter. This can lead to complications, for example when merging the adapter or converting your model to formats other than safetensors. See for example huggingface/peft#2018
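For reference, a minimal config roughly along these lines reproduces the warning for me (the checkpoint name and the extra attention target modules here are just placeholders, not my exact training setup):

```python
from transformers import Qwen2VLForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; the 2B variant is one that ships with tie_word_embeddings=True.
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

# Putting both embed_tokens and lm_head in target_modules on a model with tied
# word embeddings is what triggers the tied_target_modules=['lm_head'] warning.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj", "embed_tokens", "lm_head"],
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```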
I read the peft issue, but the thread is fairly stale now and it doesn't seem like a quick fix is on the way anytime soon.
Does anyone else have experience with tied embedding training and know how "bad" this issue actually is? And are there any manual fixes to save the embedding adapters correctly (assuming the saving is wrong in the first place and not perfectly fine already)?
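In case it's relevant, this is roughly how I've been checking what actually lands in the saved adapter (assuming the default adapter_model.safetensors file name that PEFT writes; the output directory is just a placeholder):

```python
from safetensors.torch import load_file

# Load the saved adapter weights and list anything touching the tied modules.
state = load_file("qwen2vl-adapter/adapter_model.safetensors")
for name, tensor in state.items():
    if "lm_head" in name or "embed_tokens" in name:
        print(name, tuple(tensor.shape))
```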