
# ruDALL-E PaddlePaddle

ruDALL-E in PaddlePaddle.

Install:

```bash
pip install rudalle_paddle==0.0.1rc1
```

Run with a free V100 GPU on AI Studio.
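
As a quick sanity check after installing, the helpers used throughout the examples below should import without errors (a minimal sketch; it only exercises the imports shown later in this README):

```python
# Sanity check: these are the helpers used in the examples below.
from rudalle_paddle import get_rudalle_model, get_tokenizer, get_vae, get_realesrgan, get_ruclip
from rudalle_paddle.pipelines import generate_images, show, super_resolution, cherry_pick_by_clip
from rudalle_paddle.utils import seed_everything

print('rudalle_paddle imports OK')
```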

## Original PyTorch version README

### ruDALL-E

Generate images from texts


### 🤗 HF Models:

ruDALL-E Malevich (XL)

### Minimal Example:


### generation by ruDALL-E:

```python
from rudalle_paddle.pipelines import generate_images, show, super_resolution, cherry_pick_by_clip
from rudalle_paddle import get_rudalle_model, get_tokenizer, get_vae, get_realesrgan, get_ruclip
from rudalle_paddle.utils import seed_everything

# prepare models
device = 'cuda'
dalle = get_rudalle_model('Malevich', pretrained=True, fp16=True, device=device)
realesrgan = get_realesrgan('x4', device=device)
tokenizer = get_tokenizer()
vae = get_vae().to(device)
ruclip, ruclip_processor = get_ruclip('ruclip-vit-base-patch32-v5')
ruclip = ruclip.to(device)

text = 'изображение радуги на фоне ночного города'  # 'an image of a rainbow against a night city'

seed_everything(42)
pil_images = []
scores = []
# sample with several (top_k, top_p) settings and pool all candidates
for top_k, top_p, images_num in [
    (2048, 0.995, 3),
    (1536, 0.99, 3),
    (1024, 0.99, 3),
    (1024, 0.98, 3),
    (512, 0.97, 3),
    (384, 0.96, 3),
    (256, 0.95, 3),
    (128, 0.95, 3),
]:
    _pil_images, _scores = generate_images(text, tokenizer, dalle, vae, top_k=top_k, images_num=images_num, top_p=top_p)
    pil_images += _pil_images
    scores += _scores

show(pil_images, 6)
```
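
`generate_images` also returns a score per candidate, so the pooled results can be pre-ranked before the ruCLIP cherry-pick below (a small sketch; it assumes a higher score indicates a better candidate):

```python
# Pre-rank the pooled candidates by the score returned by generate_images
# (assuming higher is better); cherry_pick_by_clip below re-ranks with ruCLIP.
ranked = [img for _, img in sorted(zip(scores, pil_images), key=lambda pair: pair[0], reverse=True)]
show(ranked[:6], 6)
```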

### auto cherry-pick by ruCLIP:

```python
top_images, clip_scores = cherry_pick_by_clip(pil_images, text, ruclip, ruclip_processor, device=device, count=6)
show(top_images, 3)
```

### super resolution:

```python
sr_images = super_resolution(top_images, realesrgan)
show(sr_images, 3)
```
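
Assuming, as in the PyTorch version, that `super_resolution` returns ordinary PIL images, the upscaled results can be written to disk with the standard PIL API (the filename pattern below is just an illustration):

```python
# Save the upscaled results; sr_images is a list of PIL images.
for i, img in enumerate(sr_images):
    img.save(f'rudalle_sr_{i:02d}.png')
```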

```python
# Prompt and seed used for the example gallery (images not shown here):
text, seed = 'красивая тян из аниме', 6955  # 'a beautiful anime girl'
```
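
To reproduce a sample like this one, the models loaded in the minimal example above can be reused with the listed seed; the `top_k`/`top_p` values below are an assumption, not necessarily the settings used for the original gallery:

```python
# Sketch: regenerate the sample above with the listed seed, reusing the
# models prepared in the minimal example; sampling settings are assumed.
seed_everything(seed)
pil_images, scores = generate_images(text, tokenizer, dalle, vae, top_k=1024, top_p=0.99, images_num=6)
top_images, clip_scores = cherry_pick_by_clip(pil_images, text, ruclip, ruclip_processor, device=device, count=3)
sr_images = super_resolution(top_images, realesrgan)
show(sr_images, 3)
```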

### Image Prompt

See `jupyters/ruDALLE-image-prompts-A100.ipynb`.

```python
# Prompt, seed and sky crops used for the image-prompt examples;
# red_sky, sunny_sky, cloudy_sky and night_sky are presumably defined in the notebook.
text, seed = 'Храм Василия Блаженного', 42  # "Saint Basil's Cathedral"
skyes = [red_sky, sunny_sky, cloudy_sky, night_sky]
```
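
For reference, a rough sketch of what the notebook does, assuming the PaddlePaddle port mirrors the PyTorch rudalle image-prompt API; the `ImagePrompts` helper, its module path, the `borders` argument, the `image_prompts` keyword and the source-image path are all assumptions, and the notebook above is the authoritative version:

```python
# Sketch of image-prompt generation, assuming the port mirrors the PyTorch
# rudalle API; see the notebook above for the exact, working version.
from PIL import Image
from rudalle_paddle.image_prompts import ImagePrompts  # assumed module path

sky_image = Image.open('pics/sky.png')  # hypothetical source image providing the sky

seed_everything(seed)
borders = {'up': 4, 'left': 0, 'right': 0, 'down': 0}  # keep the top rows of image tokens fixed to the prompt
image_prompt = ImagePrompts(sky_image, borders, vae, device, crop_first=True)
pil_images, _scores = generate_images(
    text, tokenizer, dalle, vae,
    top_k=1024, top_p=0.99, images_num=4,
    image_prompts=image_prompt,  # assumed keyword, as in the PyTorch version
)
show(pil_images, 4)
```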

### 🚀 Contributors 🚀

* @neverix, thanks a lot for contributing the inference speed-up
* @oriBetelgeuse, thanks a lot for the easy image-prompt generation API