1. 26 Oct, 2022 1 commit
  2. 25 Oct, 2022 3 commits
  3. 24 Oct, 2022 1 commit
  4. 21 Oct, 2022 1 commit
      Wildcard stable diffusion pipeline (#900) · 2fdd094c
      Shyam Sudhakaran authored
      
      
      * Initial Wildcard Stable Diffusion Pipeline
      
      * Added some additional example usage
      
      * style
      
      * Added links in README and additional documentation
      
      * cleanup readme again
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  5. 20 Oct, 2022 5 commits
  6. 19 Oct, 2022 2 commits
  7. 18 Oct, 2022 1 commit
  8. 17 Oct, 2022 11 commits
  9. 14 Oct, 2022 2 commits
  10. 13 Oct, 2022 1 commit
  11. 12 Oct, 2022 2 commits
  12. 11 Oct, 2022 2 commits
      Eventually preserve this typo? :) (#804) · e8959528
      spezialspezial authored
      stable diffusion fine-tuning (#356) · 66a5279a
      Suraj Patil authored
      
      
      * begin text2image script
      
      * loading the datasets, preprocessing & transforms
      
      * handle input features correctly
      
      * add gradient checkpointing support
      
      * fix output names
      
      * run unet in train mode, not the text encoder
      
      * use no_grad instead of freezing params
      
      * default max steps None
      
      * pad to longest
      
      * don't pad when tokenizing
      
      * fix encode on multi gpu
      
      * fix stupid bug
      
      * add random flip
      
      * add ema
      
      * fix ema
      
      * put ema on cpu
      
      * improve EMA model
      
      * contiguous_format
      
      * don't wrap vae and text encoder in accelerate
      
      * remove no_grad
      
      * use randn_like
      
      * fix resize
      
      * improve few things
      
      * log epoch loss
      
      * set log level
      
      * don't log each step
      
      * remove max_length from collate
      
      * style
      
      * add report_to option
      
      * make scale_lr false by default
      
      * add grad clipping
      
      * add an option to use 8bit adam
      
      * fix logging in multi-gpu, log every step
      
      * more comments
      
      * remove eval for now
      
      * address review comments
      
      * add requirements file
      
      * begin readme
      
      * begin readme
      
      * fix typo
      
      * fix push to hub
      
      * populate readme
      
      * update readme
      
      * remove use_auth_token from the script
      
      * address some review comments
      
      * better mixed precision support
      
      * remove redundant to
      
      * create ema model early
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * better description for train_data_dir
      
      * add diffusers in requirements
      
      * update dataset_name_mapping
      
      * update readme
      
      * add inference example
      Co-authored-by: anton-l <anton@huggingface.co>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  13. 10 Oct, 2022 1 commit
  14. 07 Oct, 2022 1 commit
  15. 06 Oct, 2022 4 commits
  16. 05 Oct, 2022 2 commits