1. 16 Sep, 2024 1 commit
  2. 19 Aug, 2024 1 commit
  3. 09 Jan, 2024 1 commit
    • enable stable-xl textual inversion (#6421) · aa1797e1
      jiqing-feng authored
      
      
      * enable stable-xl textual inversion
      
      * check if optimizer_2 exists
      
      * check text_encoder_2 before using
      
      * add textual inversion for sdxl in a single file
      
      * fix style
      
      * fix example style
      
      * reset for error changes
      
      * add readme for sdxl
      
      * fix style
      
      * disable autocast as it will cause a cast error when weight_dtype=bf16
      
      * fix spelling error
      
      * fix style and readme and 8bit optimizer
      
      * add README_sdxl.md link
      
      * add tracker key on log_validation
      
      * run style
      
      * rm the second center crop
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
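The SDXL script follows the same core recipe as the original textual inversion trainer: a new placeholder token is appended to the embedding table, and only that row is trained while every pre-existing embedding stays frozen. A minimal, illustrative sketch of that idea in plain Python (toy names and data, not the script's actual code):

```python
# Illustrative sketch of textual inversion's core mechanic: extend an
# embedding table with one placeholder token and update only that row.

def add_placeholder_token(embeddings, init_token_id):
    """Append a new row, initialised from an existing token's embedding."""
    new_id = len(embeddings)
    embeddings.append(list(embeddings[init_token_id]))  # copy, don't alias
    return new_id

def training_step(embeddings, placeholder_id, grad, lr=0.5):
    """Apply a gradient update to the placeholder row only; all other
    rows are frozen, mirroring how textual inversion trains one vector."""
    row = embeddings[placeholder_id]
    embeddings[placeholder_id] = [w - lr * g for w, g in zip(row, grad)]

vocab = [[1.0, 0.0], [0.0, 1.0]]            # two frozen token embeddings
pid = add_placeholder_token(vocab, init_token_id=0)
training_step(vocab, pid, grad=[0.5, -0.5])
print(vocab[pid])   # → [0.75, 0.25]
print(vocab[0])     # init token unchanged → [1.0, 0.0]
```

For SDXL the same update has to be applied per text encoder, which is why the commits above repeatedly check whether text_encoder_2 and its optimizer exist.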
  4. 11 Dec, 2023 1 commit
  5. 31 Oct, 2023 1 commit
    • [Docs] Fix typos (#5583) · 442017cc
      M. Tolga Cangöz authored
      * Add Copyright info
      
      * Fix typos, improve, update
      
      * Update deepfloyd_if.md
      
      * Update ldm3d_diffusion.md
      
      * Update opt_overview.md
  6. 24 Jul, 2023 1 commit
  7. 21 Apr, 2023 1 commit
  8. 18 Apr, 2023 1 commit
  9. 27 Dec, 2022 1 commit
    • Make xformers optional even if it is available (#1753) · 8874027e
      Katsuya authored
      * Make xformers optional even if it is available
      
      * Raise exception if xformers is used but not available
      
      * Rename use_xformers to enable_xformers_memory_efficient_attention
      
      * Add a note about xformers in README
      
      * Reformat code style
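The change above makes memory-efficient attention strictly opt-in, and turns "enabled but not installed" into a hard error instead of a silent no-op. A simplified sketch of that pattern (class and flag names are illustrative, not diffusers' actual internals):

```python
# Sketch of the opt-in pattern: even when the backend is installed,
# nothing changes unless the user explicitly enables it; enabling it
# without the backend installed raises immediately.
XFORMERS_AVAILABLE = False  # stand-in for a real importability check

class AttentionModule:
    def __init__(self):
        # Default stays off regardless of whether xformers is importable.
        self.use_memory_efficient_attention = False

    def enable_xformers_memory_efficient_attention(self):
        if not XFORMERS_AVAILABLE:
            raise ModuleNotFoundError(
                "xformers is not installed; refer to "
                "https://github.com/facebookresearch/xformers for installation."
            )
        self.use_memory_efficient_attention = True
```

The method name matches the one the commit introduces; the surrounding class is a toy stand-in.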
  10. 06 Dec, 2022 1 commit
  11. 28 Nov, 2022 1 commit
    • v-prediction training support (#1455) · 6c56f050
      Suraj Patil authored
      * add get_velocity
      
      * add v prediction for training
      
      * fix saving
      
      * add revision arg
      
      * fix saving
      
      * save checkpoints dreambooth
      
      * fix saving embeds
      
      * add instruction in readme
      
      * quality
      
      * noise_pred -> model_pred
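The `get_velocity` helper computes the v-prediction training target from Salimans & Ho's progressive distillation formulation, v = √ᾱ·ε − √(1−ᾱ)·x. A scalar sketch of that formula (diffusers applies the same expression element-wise over tensors):

```python
import math

def get_velocity(sample, noise, alpha_bar):
    """Scalar sketch of the v-prediction target:
    v = sqrt(alpha_bar) * noise - sqrt(1 - alpha_bar) * sample,
    where alpha_bar is the cumulative noise-schedule product at the step."""
    return math.sqrt(alpha_bar) * noise - math.sqrt(1.0 - alpha_bar) * sample

# At alpha_bar = 1 (no noise added) the target is the noise itself;
# at alpha_bar = 0 it is the negated clean sample.
print(get_velocity(sample=2.0, noise=0.5, alpha_bar=1.0))  # → 0.5
print(get_velocity(sample=2.0, noise=0.5, alpha_bar=0.0))  # → -2.0
```

With this target, the `noise_pred -> model_pred` rename in the last commit makes sense: the model's output is compared against either noise or velocity depending on the scheduler's prediction type.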
  12. 02 Nov, 2022 1 commit
  13. 27 Oct, 2022 1 commit
  14. 26 Oct, 2022 1 commit
  15. 24 Oct, 2022 1 commit
  16. 05 Oct, 2022 2 commits
  17. 29 Sep, 2022 1 commit
  18. 16 Sep, 2022 1 commit
  19. 07 Sep, 2022 1 commit
  20. 06 Sep, 2022 1 commit
  21. 05 Sep, 2022 1 commit
  22. 02 Sep, 2022 2 commits
    • Update README.md · 30e7c78a
      Suraj Patil authored
    • Textual inversion (#266) · d0d3e24e
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
      * improve comments and style
      
      * document args
      
      * more cleanup
      
      * fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
      * don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
      * move the script up
      
      * add requirements file
      
      * more cleanup
      
      * fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * removed unused args
      
      * add train command in readme
      
      * update readme
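The "scale lr" and "scale lr using num gpus" commits refer to linear learning-rate scaling: the effective batch grows with gradient accumulation, per-device batch size, and GPU count, so the base rate is multiplied by the same factor. A sketch of that convention, assuming the usual diffusers-style `--scale_lr` behaviour (function name is illustrative):

```python
def scale_learning_rate(base_lr, grad_accum_steps, batch_size, num_processes):
    """Linear lr scaling: multiply the base rate by the effective-batch
    multiplier (gradient accumulation x per-device batch x process count),
    as the `scale lr using num gpus` commit describes."""
    return base_lr * grad_accum_steps * batch_size * num_processes

# Base rate 5e-4, 4 accumulation steps, batch size 1, 2 GPUs:
print(scale_learning_rate(5e-4, grad_accum_steps=4, batch_size=1, num_processes=2))  # → 0.004
```

Without this scaling, adding GPUs or accumulation steps would silently shrink the per-example learning rate.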