  1. 21 Jan, 2025 1 commit
  2. 20 Jan, 2025 1 commit
  3. 19 Jan, 2025 1 commit
  4. 16 Jan, 2025 1 commit
  5. 15 Jan, 2025 1 commit
  6. 13 Jan, 2025 1 commit
  7. 10 Jan, 2025 3 commits
  8. 09 Jan, 2025 2 commits
  9. 08 Jan, 2025 2 commits
  10. 07 Jan, 2025 6 commits
  11. 06 Jan, 2025 1 commit
    • Ameer Azam
      Regarding the RunwayML path for V1.5 did change to... · 4f5e3e35
      Ameer Azam authored
      Regarding the RunwayML path for V1.5 did change to stable-diffusion-v1-5/[stable-diffusion-v1-5/ stable-diffusion-inpainting] (#10476)
      
      * Update pipeline_controlnet.py
      
      * Update pipeline_controlnet_img2img.py
      
      runwayml Take-down so change all from to this
      stable-diffusion-v1-5/stable-diffusion-v1-5
      
      * Update pipeline_controlnet_inpaint.py
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * Update convert_blipdiffusion_to_diffusers.py
      
      style change
      4f5e3e35
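The commit above swaps the taken-down `runwayml` Hub repos for their new homes under the `stable-diffusion-v1-5` (sd-legacy) organization. A minimal sketch of that remapping, with a hypothetical helper name (`remap_repo_id` is not a diffusers API, just an illustration of the id change the commit applies across the pipelines):

```python
# Hypothetical helper illustrating the repo-id change from the commit:
# the taken-down runwayml checkpoints now live under the
# stable-diffusion-v1-5 organization on the Hugging Face Hub.
LEGACY_REPO_MAP = {
    "runwayml/stable-diffusion-v1-5": "stable-diffusion-v1-5/stable-diffusion-v1-5",
    "runwayml/stable-diffusion-inpainting": "stable-diffusion-v1-5/stable-diffusion-inpainting",
}

def remap_repo_id(repo_id: str) -> str:
    """Return the post-takedown repo id; other ids pass through unchanged."""
    return LEGACY_REPO_MAP.get(repo_id, repo_id)
```

In practice this just means calling `from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")` where the docs and example pipelines previously referenced the `runwayml` path.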
  12. 04 Jan, 2025 1 commit
  13. 03 Jan, 2025 1 commit
  14. 02 Jan, 2025 2 commits
  15. 30 Dec, 2024 1 commit
  16. 24 Dec, 2024 1 commit
  17. 23 Dec, 2024 2 commits
  18. 19 Dec, 2024 1 commit
  19. 18 Dec, 2024 3 commits
  20. 17 Dec, 2024 1 commit
  21. 15 Dec, 2024 1 commit
  22. 13 Dec, 2024 1 commit
  23. 12 Dec, 2024 3 commits
    • hlky
      Remove `negative_*` from SDXL callback (#10203) · f2d348d9
      hlky authored
      * Remove `negative_*` from SDXL callback
      
      * Change example and add XL version
      f2d348d9
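The commit above drops the `negative_*` tensors from the SDXL step-end callback inputs. A hedged sketch of the resulting validation, assuming a post-change tensor list of `latents`, `prompt_embeds`, `add_text_embeds`, and `add_time_ids` (the exact list is an assumption based on the commit title, and `validate_callback_inputs` is a hypothetical helper, not a diffusers function):

```python
# Assumed set of tensors an SDXL callback_on_step_end can request
# after the negative_* entries were removed (per commit #10203).
ALLOWED_SDXL_CALLBACK_TENSORS = [
    "latents",
    "prompt_embeds",
    "add_text_embeds",
    "add_time_ids",
]

def validate_callback_inputs(requested):
    """Reject callback tensor names no longer exposed by the pipeline."""
    bad = [name for name in requested if name not in ALLOWED_SDXL_CALLBACK_TENSORS]
    if bad:
        raise ValueError(f"unsupported callback tensors: {bad}")
    return requested
```

Callbacks that previously mutated `negative_prompt_embeds` would need to operate on the positive embeddings only.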
    • Sayak Paul
      [WIP][Training] Flux Control LoRA training script (#10130) · 8170dc36
      Sayak Paul authored
      
      
      * update
      
      * add
      
      * update
      
      * add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption
      
      * control lora updates
      
      * remove copied-from
      
      * create separate pipelines for flux control
      
      * make fix-copies
      
      * update docs
      
      * add tests
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * remove control lora changes
      
      * apply suggestions from review
      
      * Revert "remove control lora changes"
      
      This reverts commit 73cfc519c9b99b7dc3251cc6a90a5db3056c4819.
      
      * update
      
      * update
      
      * improve log messages
      
      * updates.
      
      * updates
      
      * support register_config.
      
      * fix
      
      * fix
      
      * fix
      
      * updates
      
      * updates
      
      * updates
      
      * fix-copies
      
      * fix
      
      * apply suggestions from review
      
      * add tests
      
      * remove conversion script; enable on-the-fly conversion
      
      * bias -> lora_bias.
      
      * fix-copies
      
      * peft.py
      
      * fix lora conversion
      
      * changes
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
      
      * fix-copies
      
      * updates for tests
      
      * fix
      
      * alpha_pattern.
      
      * add a test for varied lora ranks and alphas.
      
      * revert changes in num_channels_latents = self.transformer.config.in_channels // 8
      
      * revert moe
      
      * add a sanity check on unexpected keys when loading norm layers.
      
      * contro lora.
      
      * fixes
      
      * fixes
      
      * fixes
      
      * tests
      
      * reviewer feedback
      
      * fix
      
      * proper peft version for lora_bias
      
      * fix-copies
      
      * updates
      
      * updates
      
      * updates
      
      * remove debug code
      
      * update docs
      
      * integration tests
      
      * nis
      
      * fuse and unload.
      
      * fix
      
      * add slices.
      
      * more updates.
      
      * button up readme
      
      * train()
      
      * add full fine-tuning version.
      
      * fixes
      
      * Apply suggestions from code review
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * set_grads_to_none remove.
      
      * readme
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
      8170dc36
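Among the changes listed above, the commit fixes a "rank calculation assumption" and adds `alpha_pattern` support so LoRA rank and alpha can vary per layer. A sketch of reading per-layer rank from the `lora_A` weight shapes instead of assuming one global rank (the helper name and flat shape dict are illustrative assumptions, not the script's actual code):

```python
# Hedged sketch: LoRA rank is the output dimension of each lora_A weight,
# so per-layer ranks can be inferred from the state dict shapes rather
# than assumed uniform across the model.
def infer_rank_pattern(state_dict_shapes):
    """state_dict_shapes maps parameter names to (out_dim, in_dim) tuples."""
    ranks = {}
    for name, shape in state_dict_shapes.items():
        if name.endswith(".lora_A.weight"):
            # lora_A projects from in_dim down to the rank.
            ranks[name.removesuffix(".lora_A.weight")] = shape[0]
    return ranks
```

A `rank_pattern`/`alpha_pattern` built this way can then be passed to a PEFT `LoraConfig` so layers with different ranks load correctly.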
    • Ethan Smith
      fix min-snr implementation (#8466) · 26e80e01
      Ethan Smith authored
      * fix min-snr implementation
      
      https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L66
      
      
      
      * Update train_dreambooth.py
      
      fix variable name mse_loss_weights
      
      * fix divisor
      
      * make style
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      26e80e01
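The min-SNR fix above concerns the loss-weight divisor. Following the kohya-ss reference linked in the commit, the min-SNR-gamma weight for epsilon-prediction divides the clamped SNR by the SNR itself; a minimal scalar sketch (the training scripts operate on tensors per timestep, this is just the formula):

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    """Min-SNR-gamma loss weight for epsilon-prediction:
    min(SNR, gamma) / SNR, so high-SNR (low-noise) timesteps are
    down-weighted and low-SNR timesteps keep weight 1."""
    return min(snr, gamma) / snr
```

For v-prediction the reference implementation uses a different divisor (SNR + 1), which is part of what the "fix divisor" change addresses.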
  24. 10 Dec, 2024 2 commits