1. 23 Jan, 2024 3 commits
  2. 22 Jan, 2024 2 commits
  3. 20 Jan, 2024 1 commit
  4. 19 Jan, 2024 1 commit
  5. 17 Jan, 2024 1 commit
  6. 16 Jan, 2024 3 commits
    • [Fix] Multiple image conditionings in a single batch for `StableDiffusionControlNetPipeline` (#6334) · 1040dfd9
      Celestial Phineas authored
      
      * [Fix] Multiple image conditionings in a single batch for `StableDiffusionControlNetPipeline`.
      
      * Refactor `check_inputs` in `StableDiffusionControlNetPipeline` to avoid redundant code.
      
      * Make the behavior of MultiControlNetModel the same as that of the original ControlNetModel
      
      * Keep the code change minimal for nested list support
      
      * Add fast test `test_inference_nested_image_input`
      
      * Remove redundant check for nested image condition in `check_inputs`
      
      Remove the `len(image) == len(prompt)` check from `check_image()`
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Better `ValueError` message for incompatible nested image list size
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Fix syntax error in `check_inputs`
      
      * Remove warning message for multi-ControlNets with multiple prompts
      
      * Fix a typo in test_controlnet.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Add test case for multiple prompts, single image conditioning in `StableDiffusionMultiControlNetPipelineFastTests`
      
      * Improved `ValueError` message for nested `controlnet_conditioning_scale`
      
      * Documenting the behavior of image list as `StableDiffusionControlNetPipeline` input
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
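
      For context, a minimal sketch of the behavior this fix enables: passing a nested image list to a pipeline that wraps several ControlNets. Everything below is illustrative rather than taken from the PR — the checkpoints are the common public ones, the image paths are placeholders, and the assumed convention is that the outer list has one entry per ControlNet while each inner list has one conditioning image per prompt in the batch.

      ```python
      # Illustrative sketch; checkpoints and image paths are placeholders.
      import torch
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
      from diffusers.utils import load_image

      canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
      pose = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)

      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          controlnet=[canny, pose],  # a list is wrapped into a MultiControlNetModel internally
          torch_dtype=torch.float16,
      ).to("cuda")

      # Placeholder conditioning images, one pair per prompt in the batch.
      canny_bird, canny_cat = load_image("canny_bird.png"), load_image("canny_cat.png")
      pose_bird, pose_cat = load_image("pose_bird.png"), load_image("pose_cat.png")

      images = pipe(
          prompt=["a bird", "a cat"],
          # Nested list: the outer level indexes the ControlNets,
          # the inner level supplies one conditioning image per prompt.
          image=[[canny_bird, canny_cat], [pose_bird, pose_cat]],
          num_inference_steps=20,
      ).images
      ```
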
    • [SVD] Return np.ndarray when output_type="np" (#6507) · 8842bcad
      Yondon Fu authored
      [SVD] Fix output_type="np"
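
      For context, a hedged sketch of what the fix touches: with `output_type="np"` the pipeline is expected to return the generated frames as a `np.ndarray` rather than a list of PIL images. The checkpoint id is the public SVD one; the input image path is a placeholder.

      ```python
      # Illustrative sketch; the input image path is a placeholder.
      import numpy as np
      import torch
      from diffusers import StableVideoDiffusionPipeline
      from diffusers.utils import load_image

      pipe = StableVideoDiffusionPipeline.from_pretrained(
          "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
      ).to("cuda")

      image = load_image("input.png").resize((1024, 576))
      frames = pipe(image, num_frames=14, output_type="np").frames

      # The point of #6507: the np output really is an ndarray, not a list of PIL images.
      assert isinstance(frames, np.ndarray)
      ```
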
    • update slow test for SDXL k-diffusion pipeline (#6588) · fefed445
      YiYi Xu authored
      update expected slice
  7. 11 Jan, 2024 1 commit
  8. 10 Jan, 2024 2 commits
  9. 09 Jan, 2024 1 commit
  10. 05 Jan, 2024 2 commits
  11. 04 Jan, 2024 5 commits
  12. 03 Jan, 2024 2 commits
    • [LoRA deprecation] handle rest of the stuff related to deprecated lora stuff. (#6426) · d7001400
      Sayak Paul authored
      * handle rest of the stuff related to deprecated lora stuff.
      
      * fix: copies
      
      * don't modify the UNet in-place.
      
      * fix: temporal autoencoder.
      
      * manually remove lora layers.
      
      * don't copy unet.
      
      * alright
      
      * remove lora attn processors from unet3d
      
      * fix: unet3d.
      
      * style
      
      * Empty-Commit
    • [LoRA] add: test to check if peft loras are loadable in non-peft envs. (#6400) · 2e4dc3e2
      Sayak Paul authored
      * add: test to check if peft loras are loadable in non-peft envs.
      
      * add torch_device appropriately.
      
      * fix: get_dummy_inputs().
      
      * test logits.
      
      * rename
      
      * debug
      
      * debug
      
      * fix: generator
      
      * new assertion values after fixing the seed.
      
      * shape
      
      * remove print statements and settle this.
      
      * to update values.
      
      * change values when lora config is initialized under a fixed seed.
      
      * update colab link
      
      * update notebook link
      
      * sanity restored by getting the exact same values without peft.
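
      The user-facing path this test guards is loading a PEFT-serialized LoRA through the regular loader even when `peft` is not installed. A minimal sketch follows; the LoRA repository id and weight file name are hypothetical.

      ```python
      # Illustrative sketch; the LoRA repo id and weight name are hypothetical.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Loading should work regardless of whether `peft` is available in the
      # environment, which is what the new test is meant to verify.
      pipe.load_lora_weights("some-user/some-sd15-lora", weight_name="pytorch_lora_weights.safetensors")
      image = pipe("a pixel-art castle", num_inference_steps=25).images[0]
      ```
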
  13. 02 Jan, 2024 2 commits
    • Add unload_ip_adapter method (#6192) · 86714b72
      Fabio Rigano authored
      
      
      * Add unload_ip_adapter method
      
      * Update attn_processors with original layers
      
      * Add test
      
      * Use set_default_attn_processor
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
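
      A short sketch of the new method in use. The IP-Adapter checkpoint below is the commonly used public one; the reference image path is a placeholder.

      ```python
      # Illustrative sketch; the reference image path is a placeholder.
      import torch
      from diffusers import StableDiffusionPipeline
      from diffusers.utils import load_image

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
      out = pipe(
          "best quality", ip_adapter_image=load_image("reference.png"), num_inference_steps=25
      ).images[0]

      # Added in #6192: drops the IP-Adapter weights and, per the commits above,
      # falls back to the default attention processors.
      pipe.unload_ip_adapter()
      ```
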
    • [LoRA] Remove the use of deprecated LoRA functionalities such as `LoRAAttnProcessor` (#6369) · 61f6c547
      Sayak Paul authored
      * start deprecating loraattn.
      
      * fix
      
      * wrap into unet_lora_state_dict
      
      * utilize text_encoder_lora_params
      
      * utilize text_encoder_attn_modules
      
      * debug
      
      * debug
      
      * remove print
      
      * don't use text encoder for test_stable_diffusion_lora
      
      * load the procs.
      
      * set_default_attn_processor
      
      * fix: set_default_attn_processor call.
      
      * fix: lora_components[unet_lora_params]
      
      * checking for 3d.
      
      * 3d.
      
      * more fixes.
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * hack.
      
      * remove comments and prep for a PR.
      
      * appropriate set_lora_weights()
      
      * fix
      
      * fix: test_unload_lora_sd
      
      * fix: test_unload_lora_sd
      
      * use default attention processors.
      
      * debug
      
      * debug nan
      
      * debug nan
      
      * debug nan
      
      * use NaN instead of inf
      
      * remove comments.
      
      * fix: test_text_encoder_lora_state_dict_unchanged
      
      * attention processor default
      
      * default attention processors.
      
      * default
      
      * style
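
      Several commits above swap hand-rolled LoRA attention processors for the defaults via `set_default_attn_processor`. A minimal sketch of that helper applied to a UNet; the checkpoint id is only illustrative.

      ```python
      # Illustrative sketch of resetting a UNet's attention processors.
      import torch
      from diffusers import UNet2DConditionModel

      unet = UNet2DConditionModel.from_pretrained(
          "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
      )

      # After stripping deprecated LoRAAttnProcessor layers, fall back to the
      # stock processors instead of re-creating LoRA-specific ones.
      unet.set_default_attn_processor()
      print({type(p).__name__ for p in unet.attn_processors.values()})
      ```
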
  14. 30 Dec, 2023 1 commit
  15. 28 Dec, 2023 1 commit
  16. 27 Dec, 2023 1 commit
  17. 26 Dec, 2023 5 commits
  18. 25 Dec, 2023 1 commit
  19. 24 Dec, 2023 2 commits
  20. 22 Dec, 2023 2 commits
  21. 21 Dec, 2023 1 commit
    • open muse (#5437) · 40398152
      Will Berman authored
      
      
      amused
      
      rename
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      AdaLayerNormContinuous default values
      
      custom micro conditioning
      
      micro conditioning docs
      
      put lookup from codebook in constructor
      
      fix conversion script
      
      remove manual fused flash attn kernel
      
      add training script
      
      temp remove training script
      
      add dummy gradient checkpointing func
      
      clarify temperatures is an instance variable by setting it
      
      remove additional SkipFF block args
      
      hardcode norm args
      
      rename tests folder
      
      fix paths and samples
      
      fix tests
      
      add training script
      
      training readme
      
      lora saving and loading
      
      non-lora saving/loading
      
      some readme fixes
      
      guards
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/train_amused.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      vae upcasting
      
      add fp16 integration tests
      
      use tuple for micro cond
      
      copyrights
      
      remove casts
      
      delegate to torch.nn.LayerNorm
      
      move temperature to pipeline call
      
      upsampling/downsampling changes
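
      The pipeline added here ships as `AmusedPipeline`. A minimal text-to-image sketch, assuming the `amused/amused-256` checkpoint published alongside the PR:

      ```python
      # Illustrative sketch; checkpoint id assumed from the aMUSEd release.
      import torch
      from diffusers import AmusedPipeline

      pipe = AmusedPipeline.from_pretrained("amused/amused-256", torch_dtype=torch.float16).to("cuda")

      # aMUSEd is a MUSE-style masked-image-model pipeline, so it samples in far
      # fewer steps than a typical diffusion pipeline.
      image = pipe("a cowboy riding a horse on mars", num_inference_steps=12).images[0]
      image.save("amused.png")
      ```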