Unprompted has stopped being deterministic #212
Further observations suggest that the more variables you change (from the checkpoint to enabling Hires Fix), the more likely it is to happen. When you click Generate without Hires Fix on the same configuration it is much rarer than with Hires Fix or across different models.
Update: my template used the [chance] shortcode in only two places across 107 files. Removing them makes it much more stable. Yet changing the model or altering other unrelated variables still tends to change the RNG, so it remains non-deterministic. Even without changing anything else it is not deterministic (just hitting Generate without changing anything can still output different images, although this is rarer after removing the [chance] shortcode). I tried switching the RNG source in Automatic between CPU and GPU; neither seems to matter at all.
Hi @entusiastaIA, thank you for bringing this to my attention. So far, I haven't been able to reproduce the problem in my tests; I've tried running a few of my own templates at a [...]. I will experiment with [...]
So far, without using Hires Fix and without changing the model, it barely ever happens. I use a batch count of 16, and I'd say only one in every 8-10 runs has one or two images end up with different prompts. It's very weird behaviour. I have noticed that most of the time (but not always) it is just one specific part of the prompt (the first "half") that stops being deterministic, while the end stays the same as before; yet for the life of me I can't see anything special or different in my code in those parts. And sometimes the whole prompt changes anyway.

The same goes for the images themselves: sometimes in a 16-image grid only one or two images in the middle change, and the images generated after them are back to what they should be. Other times, once one image differs, all the images after it change as well. It's really weird behaviour.

These days I'm trying to narrow down the issue while I clean up my code (I admit it was full of workarounds and dirty patches), but so far the only RNG-dependent things left in my code are [choose] and [choose _weighted], so I doubt there's much more I can do on my end. If you think any specific test would be worth running, tell me.

I think, though this is really hard to prove, that at some point some [choose] (or other RNG element) loses, changes, or has a problem with the seed it should be using, while everything after it retains its numbers; that's why only certain parts of the prompts and images are affected while the rest keep their expected values.
However, in my template some "choose" calls CAN lead to other "choose" calls, but not always; if the seed changes in a way that alters one of these branching elements (be it cutting a previously existing branch or adding a new one), then all the following RNG draws get thrown off because they now happen at different places. If, as I suspect, this is a low-chance event for each independent RNG decision, then depending on how many of those your templates have, you may need more or fewer images to make it happen. I could later try to build a simple template filled with RNG calls to try to force it. Thank you very much for your time; this is a big issue for me and hinders my workflow a lot, because I need deterministic image grids to compare the specific prompt and model changes I'm making, and it definitely wasn't happening in older versions of Unprompted and Automatic.
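The cascade described above is easy to reproduce with Python's `random` module. This is a toy illustration of the mechanism, not Unprompted's actual code; the function and option names are made up:

```python
import random

def render(seed, young_branch):
    """Toy template: one branch decision gates an extra RNG draw."""
    rng = random.Random(seed)
    parts = []
    if young_branch:
        # This branch consumes an extra draw from the stream...
        parts.append(rng.choice(["girl", "teen"]))
    else:
        # ...while this one consumes none.
        parts.append("lady")
    # Every later [choose]-style call reads from the same stream,
    # so its position depends on which branch was taken above:
    parts.append(rng.choice(["red", "blue", "green"]))
    parts.append(rng.choice(["park", "beach", "forest"]))
    return " ".join(parts)
```

With the same seed the output is stable, but flipping the branch shifts every subsequent draw, so the whole tail of the prompt can change too, which matches the "once one image is different, everything after it changes" pattern.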
I have updated, and it is still happening on the latest release. I have been experimenting a bit with SDXL Turbo, and changing parameters such as Steps, model, CFG, and almost anything else breaks determinism. Sometimes it breaks on its own, but any time I change parameters it is likely to change the prompts of some images. If you need the template, reach out to me at the same email; it shouldn't have anything very special, just many files (181).
OK, I've managed to get non-deterministic behaviour on the first branch. This is important because it means the divergence happens very early in the code, so it's unlikely to be caused by any quirk of my template. Reaching the first branch means sharing quite a few lines, though; I'll share them in case they help you identify the problem. First there's the initializing file for the wizard; I'll skip all the commentary and paste the code part:
Now let's go to that "zmainlogic" file:
The code keeps going, but I got it to diverge in zsubjectlogic for the same seed with a batch count of 16; it diverged there on the 9th image. The only non-default variable I used was subject -> "female". Here is what's inside zsubjectlogic:
All the final calls there are just files with [choose]listofwords[/choose] inside. When making the image with 3 steps it generated "lady", which belongs to "xsubjectfemalenormal", but with 4 steps it generated "girl", which belongs to "xsubjectfemaleyoung". From there on, of course, many other choices differed, since the RNG had changed, and that had consequences across the whole template, for example through the modified "elderused" variable. This means this very first branching line was evaluated differently from one run to the other:
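One mechanism that would explain a Steps change flipping a branch like this is if anything else in the pipeline draws from the same shared random stream before the shortcode runs: the number of prior draws, and therefore the stream position the shortcode sees, then depends on an unrelated parameter. A toy illustration of that interference (purely hypothetical, not Unprompted's actual code):

```python
import random

def build_prompt(seed, steps):
    random.seed(seed)  # the template seeds the shared global stream once
    # Hypothetical unrelated code that also consumes the global stream,
    # a number of times that scales with an unrelated setting:
    for _ in range(steps):
        random.random()
    # The branching [choose]-style call now sees a shifted stream position:
    return random.choice(
        ["xsubjectfemalenormal", "xsubjectfemaleyoung", "xsubjectfemaleold"]
    )
```

Same seed, different Steps, potentially a different branch. Giving shortcode RNG its own isolated `random.Random` instance, rather than the module-level stream, removes this whole class of interference.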
Please let me know if there is any better way I can help narrow down this issue; it is becoming a killer problem, as I cannot run solid tests while the prompts keep changing.
I "did" it. Kind of. Kind-ish of. Adding this in choose.py makes the prompts random but deterministic again. I don't fully know what I'm doing; I've been chasing many, many theories so far, and I don't know whether the fault lies with Unprompted or A1111. Countless hours of ChatGPT help went into that little change. It introduces two new problems:

1) For a given seed there is always the same prompt if the template isn't changed. That means it doesn't matter whether you arrive at seed 23 by starting at seed 19 with a batch count of 5 or at seed 10 with a batch count of 14: seed 23 will always produce the same prompt in both cases. This is half a problem, half a new feature; I can't decide yet, but it is definitely different from the usual behaviour.

2) The randomness seems a bit... simplistic? Some images from close seeds end up with pretty similar prompts, where almost everything is the same except a few things; though not always, as depending on the seeds there is more or less variation.

Anyway, both problems are orders of magnitude smaller than the one we had; they belong to different universes entirely. Now I can work with Unprompted again, so until you or somebody more experienced makes a proper fix, that floats my boat.

EDIT: I've done some tests, and nothing I change around random.seed in unprompted.py, or other uses of random in that file, seems to affect the determinism leak. So I'd say whatever is going on is not in that file? I really can't understand what's happening anymore. OK, back to working on prompting again; this thing has melted my brain.
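The patch itself isn't shown above, but the spirit of the change can be sketched as follows. This is an illustration of per-image reseeding under my own assumptions, not the actual choose.py modification; the function name and signature are made up:

```python
import random

def choose(options, image_seed, weights=None):
    """Pick an option from a stream derived only from the image seed.

    Because the stream is rebuilt from image_seed on every call, the pick
    no longer depends on batch position or on anything that ran earlier,
    which yields exactly the trade-off described above: a given seed
    always produces the same prompt, however you arrive at it.
    """
    rng = random.Random(image_seed)  # fresh, isolated stream per image
    if weights:
        return rng.choices(options, weights=weights, k=1)[0]
    return rng.choice(options)
```

This sketch also hints at why the randomness could feel "simplistic": if every shortcode reseeds from the bare image seed, repeated calls and nearby seeds draw from highly correlated streams. Mixing a per-call counter into the seed would decorrelate them while staying deterministic.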
Due diligence
Describe the bug
Half of the time, the same seeds generate the same images and prompts; however, at random, clicking Generate again on a controlled batch results in some of the prompts changing mid-processing, generating different images. Sometimes it is just a few random images within the batch; sometimes, from a certain point on, everything changes drastically.
I have checked both ticking "synchronize seed" and unticking it while setting the initial seed to the same value as the original seed. The problem persists. Example grids after Hires Fix with NSFW removed; they should be identical but they aren't (batch count is 16):
If I keep pressing Generate I keep getting variations. Most of the time the first images are fine, and the divergence tends to happen towards the middle/end of the generation.
Prompt
No response
Log output
No response
Unprompted version
10.2.3
WebUI version
1.6.0
Other comments
P.S.: I know the images are uncanny; I'm running some very specific tests right now, and having them be non-deterministic is making things much more complex.