Replies: 11 comments 29 replies
-
I saw this in the "patch notes" discussion, but I still don't understand what's changed. Could someone give me a primer?
-
The same argument can be made any time a fix changes behavior: "there should be a switch to preserve the old behavior for compatibility reasons." I'm strongly against that unless the old behavior is proven to be better. Yes, I have a different philosophy about this than A1111, as I'm against having too many switches; I've already removed 50+ of them from settings. What has changed? Here's an example if someone wants to dig in:
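The example itself did not survive the copy here. As a rough, unofficial sketch of the A1111-style attention syntax under discussion (where `(word:1.2)` sets an explicit weight, `(word)` boosts by 1.1, and `[word]` reduces by 1/1.1), a toy parser might look like this; it is not the repo's actual code:

```python
import re

# Toy sketch of A1111-style attention parsing (NOT the repo's parser):
#   "(word:1.2)" -> explicit weight 1.2
#   "(word)"     -> weight 1.1
#   "[word]"     -> weight 1/1.1
#   plain text   -> weight 1.0
def parse_attention(prompt: str) -> list[tuple[str, float]]:
    tokens = []
    pattern = r"\(([^():]+):([\d.]+)\)|\(([^()]+)\)|\[([^\[\]]+)\]|([^()\[\]]+)"
    for m in re.finditer(pattern, prompt):
        explicit, weight, boosted, reduced, plain = m.groups()
        if explicit is not None:
            tokens.append((explicit.strip(), float(weight)))
        elif boosted is not None:
            tokens.append((boosted.strip(), 1.1))
        elif reduced is not None:
            tokens.append((reduced.strip(), round(1 / 1.1, 4)))
        elif plain and plain.strip():
            tokens.append((plain.strip(), 1.0))
    return tokens
```

Subtle differences in how a parser like this splits tokens and rounds weights are exactly the kind of thing that shifts outputs between versions.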
-
I went over the prompts provided by @afbagwell (I cannot post them here, as requested), and as a result I've introduced a compatibility option in UI -> Settings -> Stable Diffusion -> Prompt attention parser. I did find one bug in my new parser (now fixed), so please give it a try one more time. By the way, comparing the output of the new vs. the old parser, the new parser used 123 tokens while the old one used 133, a saving of 10 tokens (~7.5%).
-
Maybe we need some kind of "parser versioning" that is saved with the prompt metadata, so improvements can continue to be made without creating issues with previous prompts...
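A sketch of how that idea could look; the metadata keys and version string below are hypothetical, not anything the repo actually writes:

```python
# Hypothetical sketch of "parser versioning": record which parser
# produced an image in its generation metadata, so old prompts can be
# routed through the matching parser later. All key names are made up.
PARSER_VERSION = "full-parser/2"

def build_metadata(prompt: str, seed: int) -> dict:
    return {
        "prompt": prompt,
        "seed": seed,
        "parser": PARSER_VERSION,  # recorded for later reproducibility
    }

def select_parser(meta: dict) -> str:
    # Images saved before versioning existed fall back to the original parser.
    return meta.get("parser", "original")
```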
-
@vladmandic So, correct me if I'm wrong: the most advanced version is "Fixed attention", then comes the "Full parser" with the bug you mentioned, and then the compatibility "Original parser".
-
Aaaand, I've just added a Compel parser, as it's used in ComfyUI and InvokeAI (although they use an older version and call it "Temporal Weighting"). It's not selected by default as it's a bit different, but I like it.
Input:
Output:
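The input/output example above did not survive the copy. As a rough, simplified illustration of Compel-style weighting (the real grammar in the Compel library is much richer), trailing `+` and `-` characters scale a word's weight; the 1.1 and 0.9 factors here are assumptions for illustration:

```python
# Simplified sketch of Compel-style per-word weighting (not the real
# Compel grammar): each trailing "+" multiplies the weight by 1.1 and
# each trailing "-" by 0.9 (factors assumed for illustration).
def compel_weight(word: str) -> tuple[str, float]:
    base = word.rstrip("+-")
    weight = 1.0
    for ch in word[len(base):]:
        weight *= 1.1 if ch == "+" else 0.9
    return base, round(weight, 4)
```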
-
I think it would be great if we could compare the parsers using the XYZ plot script.
-
Is this the expected behavior regarding nested weighted parentheses? My prompt:

Full parser:
A1111 parser:

This isn't how I normally prompt, but at a glance it seems the full parser broke at ... Not sure if this is a bug or a design choice not to support the syntax I used. And neither parser noticed the ...
Edit: the ...
Edit: this was on commit ...
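For reference, the conventional expectation in A1111-style syntax is that nested weights multiply, so `((word:1.2):1.5)` gives `word` an effective weight of 1.2 * 1.5 = 1.8. A toy helper illustrating just that rule (not the repo's parser):

```python
# Toy illustration of the conventional rule for nested attention:
# each enclosing weighted group multiplies the inner weight.
# e.g. ((word:1.2):1.5) -> effective weight 1.2 * 1.5 = 1.8
def nested_weight(*weights: float) -> float:
    w = 1.0
    for factor in weights:
        w *= factor
    return round(w, 4)
```

Whether a given parser actually follows this rule for deeply nested or mixed syntax is exactly what the comment above is probing.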
-
OK, I just wrote a proper unit test for that method, and that forced me to fix some annoying border cases.
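Such a test might look roughly like the following; the tiny stand-in parser exists only to keep the example self-contained, and the repo's real tests exercise its actual parser:

```python
# Sketch of unit-testing border cases for an attention parser.
# The stand-in parse() below is NOT the repo's code; it only makes
# the test runnable: "(x)" -> weight 1.1, plain text -> weight 1.0.
def parse(prompt: str) -> list[tuple[str, float]]:
    if prompt.startswith("(") and prompt.endswith(")"):
        return [(prompt[1:-1], 1.1)]
    return [(prompt, 1.0)] if prompt else []

def test_border_cases():
    assert parse("") == []                   # empty prompt
    assert parse("cat") == [("cat", 1.0)]    # no markup at all
    assert parse("(cat)") == [("cat", 1.1)]  # simple emphasis
```

Border cases like empty input, unbalanced brackets, and deep nesting are the usual suspects such a test flushes out.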
-
Update from me: the parser patch is a masterpiece; I'm almost back to my original images. Thanks very much, Vlad!

Additionally, I have tested reverting to a known-working commit (several states) before the parser update, and that still would not give me the original images, so the only thing I can imagine is that it comes down to a change in one of the associated dependent libraries (`git reset --hard commitID` does not roll back the associated libraries to what they were on that date, I guess). In fact, I'm now closer to my original images with Vlad's latest update than with the rollback to the previous commit.

I guess we have to get used to our images probably not being fully reproducible over time; too many associated components are involved that change independently of the repo. Additionally, keeping a legacy burden too long just makes any program very complex, as all historic states of modules need to be carried along forever (including error testing, fixing, dependencies, etc.).

The best option, if reproducibility is key for your business, would IMHO be to version the repo folder of a working setup as a backup before "git pull", and then keep that frozen version away from updates for the sake of reproducibility. If such versioning is your thing, you should probably consider keeping your model safetensor files outside the repo folder structure so all repo versions can access a common model library.
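One way to detect the dependency drift described above is to snapshot installed package versions alongside your outputs; a minimal standard-library sketch (the function name is made up):

```python
# Sketch: snapshot {package: version} for every installed distribution,
# so snapshots taken before and after an update can be diffed to see
# which dependencies actually changed. Function name is hypothetical.
from importlib import metadata

def snapshot_environment() -> dict[str, str]:
    return {d.metadata["Name"]: d.version for d in metadata.distributions()}
```

Comparing a snapshot saved with an old image against the current one would show whether a library (rather than the repo itself) moved underneath you.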
-
Yes, it's true that some underlying libs have a direct impact on outputs; for example, I still believe that
-
It's great that you were able to track down all the issues you found with the prompt parser. However, I have a ton of finely tuned prompts that now produce different results, sometimes significantly altering the general subject matter.
Automatic1111 did a similar fix to his code in the past, and he put a checkbox in the settings that allowed people to use the old, error-prone parser so they could re-capture the way their old prompts worked. Would you consider doing something like this?
Rolling back to the last commit on May 14 is a workaround, but then there's no way to take advantage of new fixes and features unrelated to prompt parsing.