v0.9.0-rc1
Pre-release
Release Candidate
This release has been tested in a variety of situations, and two models are currently training with it.
It is most likely safe for production use, and from this point the v0.9.0 code is frozen: only bugfixes will land before the final release.
This release includes a massive number of breaking changes since v0.8. See the TUTORIAL for more information.
What's Changed
- VAECache: fix for .jpg files that were not detected or processed, causing errors later, by @bghira in #247
- Multi-dataset sampler by @bghira in #235 (a sample configuration sketch follows this list)
- v0.9.0-alpha by @bghira in #248
- Feature/multi dataset sampler by @bghira in #253
- allow disabling backends
- default noise scheduler is now Euler
- fix state tracker IDs by @bghira in #254
- CogVLM: 4bit inference by default
- Diffusers: bump to 0.26.0
- MultiDataBackend: better support for epoch tracking across datasets.
- MultiDataBackend: throw error and end training when global epoch != dataset epoch.
- Logging: major reduction in debug noise
- SDXL: fix num update steps per epoch calculations
- SDXL: fix display of the number of batches
- SDXL: Correctness fixes for global_step handling by @bghira in #255
- v0.9.0-alpha3 fixes for logging and probability config / epochs not continuing by @bghira in #256
- multidatabackend fix for non-square image training, data bucket config override by @bghira in #257
- LoRA trainer via --model_type by @bghira in #259
- Remove unnecessary code, simplify commandline args by @bghira in #260
- VAE cache rebuild and dataset repeats by @bghira in #261
- torch compile fixes | DeepSpeed save state fixes by @bghira in #263
- updates for next release by @bghira in #264
- collate_fn: multi-threaded retrieval of SDXL text embeds by @bghira in #265
- text embedding cache should write embeds in parallel by @bghira in #266
- text embedding cache should stop writing and kill the thread when we finish by @bghira in #267
- text embedding cache: optimise the generation of embeds by @bghira in #268
- multiple text embed caches | cache the text embed lists and only process meaningful prompts by @bghira in #269
- text embedding cache speed-up for slow backends (e.g. S3 or spinning disks) by @bghira in #271 (a sketch of the pattern follows this list)
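
The multi-dataset sampler is driven by a JSON list of data backend definitions, which also covers the dataset repeats (#261) and backend disabling changes above. Here is a minimal sketch, written as a Python script that emits the file; the key names (`id`, `type`, `instance_data_dir`, `repeats`, `disabled`) are assumptions based on the feature descriptions in this list, so check the TUTORIAL for the exact schema in your version.

```python
# Minimal sketch of a multi-dataset backend config, assuming a JSON-list
# layout. Key names are illustrative, not a definitive schema.
import json

backends = [
    {
        "id": "portraits",            # unique identifier used for per-dataset state/epoch tracking
        "type": "local",              # storage backend; remote (e.g. S3-style) backends also exist
        "instance_data_dir": "/data/portraits",
        "repeats": 2,                 # sample this dataset twice per epoch (cf. dataset repeats, #261)
    },
    {
        "id": "landscapes",
        "type": "local",
        "instance_data_dir": "/data/landscapes",
        "disabled": True,             # skip this backend without deleting its config (cf. "allow disabling backends")
    },
]

with open("multidatabackend.json", "w") as f:
    json.dump(backends, f, indent=2)
```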
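The text embedding cache changes (#265 through #268, #271) revolve around one pattern: compute embeds serially on the GPU, but hand the writes to a small thread pool so slow storage cannot stall the encode loop, and join the writer threads cleanly once the queue drains (cf. #267). Below is a minimal sketch of that pattern, not SimpleTuner's actual code; `write_embed`, `cache_embeds`, and `encode_fn` are hypothetical names.

```python
# Sketch of parallel embed-cache writes: GPU encoding stays in the main
# thread, disk/network writes go to a thread pool. Hypothetical names.
from concurrent.futures import ThreadPoolExecutor

import torch


def write_embed(path: str, embed: torch.Tensor) -> None:
    # Writes block on disk or network I/O, so they benefit from
    # threading even under the GIL.
    torch.save(embed.cpu(), path)


def cache_embeds(prompts, encode_fn, cache_dir, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        for idx, prompt in enumerate(prompts):
            embed = encode_fn(prompt)  # GPU-bound; runs in the main thread
            futures.append(pool.submit(write_embed, f"{cache_dir}/{idx}.pt", embed))
        # Surface any I/O errors; exiting the "with" block then joins the
        # pool, so no writer thread outlives the cache build.
        for fut in futures:
            fut.result()
```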
Full Changelog: v0.8.2...v0.9.0-rc1