1. 05 Jun, 2024 1 commit
    • Errata (#8322) · 98730c5d
      Tolga Cangöz authored
      * Fix typos
      
      * Trim trailing whitespaces
      
      * Remove a trailing whitespace
      
      * chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0
      
      * Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0"
      
      This reverts commit fd742b30b4258106008a6af4d0dd4664904f8595.
      
      * pokemon -> naruto
      
      * `DPMSolverMultistep` -> `DPMSolverMultistepScheduler`
      
      * Improve Markdown stylization
      
      * Improve style
      
      * Improve style
      
      * Refactor pipeline variable names for consistency
      
      * up style
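      The `DPMSolverMultistep` -> `DPMSolverMultistepScheduler` bullet fixes the docs to use the real class name. A minimal sketch of swapping that scheduler into a pipeline; the checkpoint id is illustrative, not taken from this commit:

      ```python
      import torch
      from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

      # Illustrative checkpoint; any Stable Diffusion pipeline works the same way.
      pipe = DiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      # Replace the pipeline's default scheduler with multistep DPM-Solver,
      # reusing the existing scheduler's config.
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
      ```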
  2. 07 May, 2024 1 commit
  3. 27 Apr, 2024 1 commit
  4. 22 Feb, 2024 1 commit
  5. 18 Dec, 2023 1 commit
  6. 11 Dec, 2023 1 commit
  7. 07 Dec, 2023 1 commit
    • [`PEFT`] Adapt example scripts to use PEFT (#5388) · c2717317
      Younes Belkada authored
      * adapt example scripts to use PEFT
      
      * Update examples/text_to_image/train_text_to_image_lora.py
      
      * fix
      
      * add for SDXL
      
      * oops
      
      * make sure to install peft
      
      * fix
      
      * fix
      
      * fix dreambooth and lora
      
      * more fixes
      
      * add peft to requirements.txt
      
      * fix
      
      * final fix
      
      * add peft version in requirements
      
      * remove comment
      
      * change variable names
      
      * add few lines in readme
      
      * add to reqs
      
      * style
      
      * fix issues
      
      * fix lora dreambooth xl tests
      
      * init_lora_weights to gaussian and add out proj where missing
      
      * amend requirements.
      
      * amend requirements.txt
      
      * add correct peft versions
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
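      This migration replaces the hand-rolled LoRA layers in the example scripts with `peft` adapters. A minimal sketch of the pattern, with rank/alpha values chosen for illustration; the gaussian init and the `to_out.0` output projection mirror the bullets above:

      ```python
      from diffusers import UNet2DConditionModel
      from peft import LoraConfig

      # Illustrative base checkpoint; the scripts take this as
      # --pretrained_model_name_or_path.
      unet = UNet2DConditionModel.from_pretrained(
          "runwayml/stable-diffusion-v1-5", subfolder="unet"
      )
      unet_lora_config = LoraConfig(
          r=4,                           # LoRA rank (illustrative)
          lora_alpha=4,
          init_lora_weights="gaussian",  # "init_lora_weights to gaussian" above
          target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # includes the out projection
      )
      unet.add_adapter(unet_lora_config)  # only the LoRA parameters remain trainable
      ```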
  8. 16 Aug, 2023 1 commit
  9. 06 Aug, 2023 1 commit
  10. 20 Jun, 2023 1 commit
  11. 16 Jun, 2023 1 commit
  12. 28 Apr, 2023 1 commit
  13. 18 Apr, 2023 1 commit
  14. 06 Apr, 2023 1 commit
  15. 23 Mar, 2023 1 commit
  16. 07 Mar, 2023 1 commit
  17. 06 Feb, 2023 1 commit
  18. 31 Jan, 2023 1 commit
  19. 25 Jan, 2023 1 commit
  20. 23 Jan, 2023 1 commit
  21. 27 Dec, 2022 1 commit
    • Make xformers optional even if it is available (#1753) · 8874027e
      Katsuya authored
      * Make xformers optional even if it is available
      
      * Raise exception if xformers is used but not available
      
      * Rename use_xformers to enable_xformers_memory_efficient_attention
      
      * Add a note about xformers in README
      
      * Reformat code style
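      After this change, xformers attention is opt-in rather than enabled automatically whenever the library is installed. A minimal usage sketch; the checkpoint id is illustrative:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      # Opt in explicitly; per this commit, diffusers raises an exception
      # if xformers is requested but not installed.
      pipe.enable_xformers_memory_efficient_attention()
      ```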
  22. 06 Dec, 2022 1 commit
  23. 02 Dec, 2022 1 commit
  24. 28 Nov, 2022 1 commit
    • v-prediction training support (#1455) · 6c56f050
      Suraj Patil authored
      * add get_velocity
      
      * add v prediction for training
      
      * fix saving
      
      * add revision arg
      
      * fix saving
      
      * save checkpoints dreambooth
      
      * fix saving embeds
      
      * add instruction in readme
      
      * quality
      
      * noise_pred -> model_pred
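      The new `get_velocity` helper lets the training scripts regress against a velocity target when the scheduler is configured for v-prediction. A sketch of the target selection, using the variable names from the scripts (`model_pred` replaced `noise_pred` in this commit); the tensors are stand-ins:

      ```python
      import torch
      import torch.nn.functional as F
      from diffusers import DDPMScheduler

      noise_scheduler = DDPMScheduler(
          num_train_timesteps=1000, prediction_type="v_prediction"
      )

      latents = torch.randn(2, 4, 64, 64)    # stand-in for VAE latents
      noise = torch.randn_like(latents)
      timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (2,))
      model_pred = torch.randn_like(latents)  # stand-in for the UNet output

      # Choose the regression target based on the scheduler's prediction type.
      if noise_scheduler.config.prediction_type == "epsilon":
          target = noise
      elif noise_scheduler.config.prediction_type == "v_prediction":
          target = noise_scheduler.get_velocity(latents, noise, timesteps)

      loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
      ```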
  25. 22 Nov, 2022 1 commit
  26. 28 Oct, 2022 1 commit
  27. 27 Oct, 2022 2 commits
  28. 11 Oct, 2022 1 commit
    • stable diffusion fine-tuning (#356) · 66a5279a
      Suraj Patil authored
      * begin text2image script
      
      * loading the datasets, preprocessing & transforms
      
      * handle input features correctly
      
      * add gradient checkpointing support
      
      * fix output names
      
      * run unet in train mode not text encoder
      
      * use no_grad instead of freezing params
      
      * default max steps None
      
      * pad to longest
      
      * don't pad when tokenizing
      
      * fix encode on multi gpu
      
      * fix stupid bug
      
      * add random flip
      
      * add ema
      
      * fix ema
      
      * put ema on cpu
      
      * improve EMA model
      
      * contiguous_format
      
      * don't wrap vae and text encoder in accelerate
      
      * remove no_grad
      
      * use randn_like
      
      * fix resize
      
      * improve few things
      
      * log epoch loss
      
      * set log level
      
      * don't log each step
      
      * remove max_length from collate
      
      * style
      
      * add report_to option
      
      * make scale_lr false by default
      
      * add grad clipping
      
      * add an option to use 8bit adam
      
      * fix logging in multi-gpu, log every step
      
      * more comments
      
      * remove eval for now
      
      * address review comments
      
      * add requirements file
      
      * begin readme
      
      * begin readme
      
      * fix typo
      
      * fix push to hub
      
      * populate readme
      
      * update readme
      
      * remove use_auth_token from the script
      
      * address some review comments
      
      * better mixed precision support
      
      * remove redundant to
      
      * create ema model early
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * better description for train_data_dir
      
      * add diffusers in requirements
      
      * update dataset_name_mapping
      
      * update readme
      
      * add inference example
      Co-authored-by: anton-l <anton@huggingface.co>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
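      The "add inference example" bullet refers to the README snippet that loads the fine-tuned pipeline from the training output directory and samples from it. A minimal sketch; the path and prompt are illustrative:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # "sd-model-finetuned" stands in for the script's --output_dir.
      pipe = StableDiffusionPipeline.from_pretrained(
          "sd-model-finetuned", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(prompt="a drawing of a green pokemon with red eyes").images[0]
      image.save("pokemon.png")
      ```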