What if we use 1.5 to finetune XL? #1653
Replies: 2 comments
-
Not sure if I understand what you mean, but I would argue it is not possible, and I don't see a benefit. What would be the purpose anyway? Fine-tuning weights that are based on much lower resolutions are probably just not compatible with XL. And how would that even work in the first place? As far as I can tell it is not even possible to merge XL models; at least the last time I tried, it wanted to use 64 TB of RAM for that operation. Maybe I am wrong, but so far I have decided not to do much with XL since I am limited to 12 GB of VRAM, and there are more than enough very well pre-trained 1.5 models that serve as a good base in the range of image sizes for training. I would assume that training on those resolutions with an XL model would just drag down the overall quality.
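
A quick way to convince yourself of the incompatibility is to diff the checkpoint state dicts directly. This is just a minimal sketch, assuming you have both a 1.5 and an XL base checkpoint saved locally as safetensors (the file names below are placeholders):

```python
# Compare key sets and tensor shapes between a SD 1.5 and an SDXL checkpoint
# to show why 1.5 fine-tuning weights have no direct mapping onto XL.
from safetensors.torch import load_file

sd15 = load_file("v1-5-pruned-emaonly.safetensors")  # assumed local SD 1.5 checkpoint
sdxl = load_file("sd_xl_base_1.0.safetensors")       # assumed local SDXL base checkpoint

shared = set(sd15) & set(sdxl)
mismatched = [k for k in shared if sd15[k].shape != sdxl[k].shape]

print(f"keys only in 1.5: {len(set(sd15) - set(sdxl))}")
print(f"keys only in XL:  {len(set(sdxl) - set(sd15))}")
print(f"shared keys with different shapes: {len(mismatched)}")
# The UNet and text-encoder key sets diverge heavily and many shared keys
# differ in shape, so there is no one-to-one weight transfer from 1.5 to XL.
```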
-
I already explained why it is not possible a few months ago on some issue.
-
What if we could use a 1.5 model to fine-tune an XL model?
My question is focused on anime models, since 1.5 is the only one with a robust anime base model, thanks to the NovelAI leak.
@KohakuBlueleaf tagging you because there is no discussion tab on your repo 🤭