1. 11 Dec, 2023 1 commit
  2. 21 Apr, 2023 1 commit
  3. 07 Mar, 2023 1 commit
    • Added multitoken training for textual inversion. Issue 369 (#661) · 8552fd7e (see the sketch after this entry)
      Isamu Isozaki authored
      * Added multitoken training for textual inversion
      
      * Updated assertion
      
      * Removed duplicate save code
      
      * Fixed undefined bug
      
      * Fixed save
      
      * Added multitoken CLIP model + util helper
      
      * Removed code splitting
      
      * Removed class
      
      * Fixed errors
      
      * Fixed errors
      
      * Added loading functionality
      
      * Loading via dict instead
      
      * Fixed bug of invalid index being loaded
      
      * Fixed adding placeholder token only adding 1 token
      
      * Fixed bug when initializing tokens
      
      * Fixed bug when initializing tokens
      
      * Removed flawed logic
      
      * Fixed vector shuffle
      
      * Fixed tokenizer's inconsistent __call__ method
      
      * Fixed tokenizer's inconsistent __call__ method
      
      * Handling list input
      
      * Added exception for adding invalid tokens to token map
      
      * Removed unnecessary files and started working on progressive tokens
      
      * Ensure at least one token is loaded
      
      * Changed to global step
      
      * Added method to load automatic1111 tokens
      
      * Fixed bug in load
      
      * Quality+style fixes
      
      * Update quality/style fixes
      
      * Cast embeddings to fp16 when loading
      
      * Fixed quality
      
      * Started moving things over
      
      * Clearing diffs
      
      * Clearing diffs
      
      * Moved everything
      
      * Requested changes
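
      The multitoken change above represents one concept with several learnable
      embedding vectors instead of one. A minimal sketch of the idea using the
      standard transformers APIs; the token names, vector count, and initializer
      word are illustrative, not the exact helpers from this commit:

        from transformers import CLIPTextModel, CLIPTokenizer

        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
        text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

        placeholder = "<cat-toy>"  # hypothetical concept token
        num_vectors = 4            # assumption: 4 learnable vectors for this concept

        # One tokenizer entry per vector: <cat-toy>, <cat-toy>_1, ...
        placeholder_tokens = [placeholder] + [
            f"{placeholder}_{i}" for i in range(1, num_vectors)
        ]
        num_added = tokenizer.add_tokens(placeholder_tokens)
        assert num_added == num_vectors, "some placeholder tokens already exist"

        # Grow the embedding matrix, then copy an initializer embedding
        # into each new row.
        text_encoder.resize_token_embeddings(len(tokenizer))
        init_ids = tokenizer.encode("toy", add_special_tokens=False)
        assert len(init_ids) == 1, "initializer word must map to exactly one token"
        embeds = text_encoder.get_input_embeddings().weight.data
        for token_id in tokenizer.convert_tokens_to_ids(placeholder_tokens):
            embeds[token_id] = embeds[init_ids[0]].clone()

      At prompt time the single placeholder a user types is expanded to the full
      token sequence, which is what the __call__ and list-input fixes in the
      list above deal with.
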
  4. 27 Dec, 2022 1 commit
    • Make xformers optional even if it is available (#1753) · 8874027e (see the usage sketch after this entry)
      Katsuya authored
      * Make xformers optional even if it is available
      
      * Raise exception if xformers is used but not available
      
      * Rename use_xformers to enable_xformers_memory_efficient_attention
      
      * Add a note about xformers in README
      
      * Reformat code style
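
      After this change xformers attention is strictly opt-in. A short usage
      sketch against the renamed public API (the model id and dtype are just
      examples):

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # Opt in explicitly; per this PR the call raises an exception
        # if xformers is requested but not actually installed/usable.
        pipe.enable_xformers_memory_efficient_attention()
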
  5. 06 Dec, 2022 1 commit
  6. 28 Nov, 2022 1 commit
    • v-prediction training support (#1455) · 6c56f050 (see the sketch after this entry)
      Suraj Patil authored
      * add get_velocity
      
      * add v prediction for training
      
      * fix saving
      
      * add revision arg
      
      * fix saving
      
      * save checkpoints dreambooth
      
      * fix saving embeds
      
      * add instructions in readme
      
      * quality
      
      * noise_pred -> model_pred
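
      The core of this commit is get_velocity: when the scheduler is configured
      with prediction_type == "v_prediction", the training target becomes the
      velocity v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * sample
      instead of the raw noise. A hedged sketch of the target selection in a
      training step (shapes and the model id are placeholders):

        import torch
        from diffusers import DDPMScheduler

        noise_scheduler = DDPMScheduler.from_pretrained(
            "stabilityai/stable-diffusion-2", subfolder="scheduler"  # a v-prediction model
        )

        latents = torch.randn(4, 4, 64, 64)  # placeholder batch of VAE latents
        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (4,))
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

        if noise_scheduler.config.prediction_type == "v_prediction":
            target = noise_scheduler.get_velocity(latents, noise, timesteps)
        else:
            target = noise  # default epsilon prediction

        # model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
        # loss = torch.nn.functional.mse_loss(model_pred.float(), target.float())
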
  7. 02 Nov, 2022 1 commit
  8. 27 Oct, 2022 1 commit
  9. 26 Oct, 2022 1 commit
  10. 24 Oct, 2022 1 commit
  11. 05 Oct, 2022 2 commits
  12. 29 Sep, 2022 1 commit
  13. 16 Sep, 2022 1 commit
  14. 07 Sep, 2022 1 commit
  15. 06 Sep, 2022 1 commit
  16. 05 Sep, 2022 1 commit
  17. 02 Sep, 2022 2 commits
    • Update README.md · 30e7c78a
      Suraj Patil authored
    • Textual inversion (#266) · d0d3e24e (see the sketch after this entry)
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
      * improve comments and style
      
      * document args
      
      * more cleanup
      
      * fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
      * don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
      * move the script up
      
      * add requirements file
      
      * more cleanup
      
      * fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * removed unused args
      
      * add train command in readme
      
      * update readme
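
      The heart of this script is that only the new placeholder embedding is
      trained: vae and unet stay frozen in eval mode, and after backward the
      gradient rows of every other token are zeroed (the multi-GPU grad fixes
      in the list above concern exactly this access). A simplified sketch,
      reusing tokenizer/text_encoder as set up in the sketch under the
      07 Mar, 2023 entry; the learning rate is illustrative:

        import torch

        placeholder_token_id = tokenizer.convert_tokens_to_ids("<cat-toy>")

        # Freeze everything, then re-enable only the token embedding table.
        text_encoder.requires_grad_(False)
        token_embeds = text_encoder.get_input_embeddings()
        token_embeds.weight.requires_grad_(True)

        optimizer = torch.optim.AdamW(token_embeds.parameters(), lr=5e-4)

        # ... forward pass through the frozen vae/unet and loss.backward() ...

        # Zero every gradient row except the placeholder's so the optimizer
        # step moves only the new embedding.
        grads = token_embeds.weight.grad
        keep_frozen = torch.arange(grads.shape[0]) != placeholder_token_id
        grads[keep_frozen, :] = 0.0
        optimizer.step()
        optimizer.zero_grad()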