1. 20 Apr, 2023 1 commit
    • adding custom diffusion training to diffusers examples (#3031) · 3979aac9
      nupurkmr9 authored
      
      
      * diffusers==0.14.0 update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * apply formatting and get rid of bare except.
      
      * refactor readme and other minor changes.
      
      * misc refactor.
      
      * fix: repo_id issue and loaders logging bug.
      
      * fix: save_model_card.
      
      * fix: save_model_card.
      
      * fix: save_model_card.
      
      * add: doc entry.
      
      * refactor doc.
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * apply style.
      
      * remove trailing whitespace.
      
      * fix: toctree entry.
      
      * remove unnecessary print.
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion test
      
      * custom diffusion xformer update
      
      * custom diffusion xformer update
      
      * custom diffusion xformer update
      
      ---------
      Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
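
      For reference, a hedged sketch of loading the resulting Custom Diffusion weights at inference time; the output directory and the weight file names ("pytorch_custom_diffusion_weights.bin", "<new1>.bin") are assumptions about what the training script saves, not taken from this commit.

        import torch
        from diffusers import DiffusionPipeline

        # Load the base model the example fine-tunes on top of.
        pipe = DiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

        # Assumed output layout: the training script writes the custom-diffusion
        # attention weights and the learned modifier-token embedding to the output dir.
        pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
        pipe.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

        image = pipe("<new1> cat sitting in a bucket", num_inference_steps=50).images[0]
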
  2. 19 Apr, 2023 5 commits
    • Modified altdiffusion pipeline to support altdiffusion-m18 (#2993) · a4c91be7
      superhero-7 authored
      
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      ---------
      Co-authored-by: root <fulong_ye@163.com>
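
      A minimal, hedged usage sketch for the multilingual checkpoint this change targets; the Hub id "BAAI/AltDiffusion-m18" and the prompt are illustrative assumptions.

        import torch
        from diffusers import AltDiffusionPipeline

        # Load the 18-language AltDiffusion checkpoint with the updated pipeline.
        pipe = AltDiffusionPipeline.from_pretrained(
            "BAAI/AltDiffusion-m18", torch_dtype=torch.float16
        ).to("cuda")

        image = pipe("a portrait of a dark elf princess, highly detailed").images[0]
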
    • Update pipeline_stable_diffusion_inpaint_legacy.py (#2903) · 3becd368
      hwuebben authored
      
      
      * Update pipeline_stable_diffusion_inpaint_legacy.py
      
      * fix preprocessing of PIL images with adequate batch size
      
      * revert map
      
      * add tests
      
      * reformat
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * next try to fix the style
      
      * wth is this
      
      * Update testing_utils.py
      
      * Update testing_utils.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Correct `Transformer2DModel.forward` docstring (#3074) · c8fdfe45
      Chanchana Sornsoontorn authored
      chore(transformer_2d) update function signature for encoder_hidden_states
    • add from_ckpt method as Mixin (#2318) · 86ecd4b7
      1lint authored
      
      
      * add mixin class for pipeline from original sd ckpt
      
      * Improve
      
      * make style
      
      * merge main into
      
      * Improve more
      
      * fix more
      
      * up
      
      * Apply suggestions from code review
      
      * finish docs
      
      * rename
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
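
      A hedged sketch of the new from_ckpt entry point added here; the local checkpoint path is a placeholder.

        from diffusers import StableDiffusionPipeline

        # Build a pipeline directly from an original Stable Diffusion .ckpt file
        # instead of a diffusers-format model repository.
        pipe = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")
        image = pipe("an astronaut riding a horse on the moon").images[0]
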
    • [ckpt loader] Allow loading the Inpaint and Img2Img pipelines while loading a ckpt model (#2705) · bdeff4d6
      cmdr2 authored
      * [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model
      
      * Address review comment from PR
      
      * PyLint formatting
      
      * Some more pylint fixes, unrelated to our change
      
      * Another pylint fix
      
      * Styling fix
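
      Building on the from_ckpt mixin above, a hedged sketch of loading the same kind of checkpoint into the Img2Img pipeline class; the checkpoint path, image URL, and strength value are placeholders.

        from diffusers import StableDiffusionImg2ImgPipeline
        from diffusers.utils import load_image

        # The Img2Img (and Inpaint) pipeline classes can now be constructed from an
        # original .ckpt file as well.
        pipe = StableDiffusionImg2ImgPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")
        init_image = load_image("https://example.com/sketch.png").resize((512, 512))
        image = pipe("a fantasy landscape", image=init_image, strength=0.75).images[0]
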
  3. 18 Apr, 2023 2 commits
  4. 17 Apr, 2023 4 commits
  5. 16 Apr, 2023 2 commits
  6. 14 Apr, 2023 3 commits
  7. 13 Apr, 2023 3 commits
  8. 12 Apr, 2023 14 commits
  9. 11 Apr, 2023 6 commits
    • Attn added kv processor torch 2.0 block (#3023) · ea39cd7e
      Will Berman authored
      add AttnAddedKVProcessor2_0 block
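
      A hedged sketch of opting into the new processor; the unCLIP decoder repo id and subfolder are assumptions, chosen because that UNet uses added-KV attention blocks, and torch >= 2.0 is required.

        from diffusers import UNet2DConditionModel
        from diffusers.models.attention_processor import AttnAddedKVProcessor2_0

        # Swap every attention processor in an added-KV style UNet for the variant
        # that uses torch 2.0's scaled_dot_product_attention.
        unet = UNet2DConditionModel.from_pretrained("kakaobrain/karlo-v1-alpha", subfolder="decoder")
        unet.set_attn_processor(AttnAddedKVProcessor2_0())
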
    • Attention processor cross attention norm group norm (#3021) · 98c5e5da
      Will Berman authored
      add group norm type to attention processor cross attention norm
      
      This lets the cross attention norm use either a group norm block or a
      layer norm block.
      
      The group norm operates along the channels dimension
      and requires input shape (batch size, channels, *), whereas the layer norm with a single
      `normalized_shape` dimension only operates over the last
      dimension, i.e. (*, channels).
      
      The dimension we want to normalize is the hidden dimension of the encoder hidden states.
      
      By convention, the encoder hidden states are always passed as (batch size, sequence
      length, hidden size).
      
      This means the layer norm can operate on the tensor without modification, but the group
      norm requires flipping the last two dimensions to operate on (batch size, hidden size, sequence length).
      
      All existing attention processors will have the same logic and we can
      consolidate it in a helper function `prepare_encoder_hidden_states`
      
      prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
      
      move norm_cross defined check to outside norm_encoder_hidden_states
      
      add missing attn.norm_cross check
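
      The shape handling described above can be illustrated with plain PyTorch; this is a hedged sketch, not the diffusers helper itself.

        import torch
        from torch import nn

        batch, seq_len, hidden = 2, 77, 320
        encoder_hidden_states = torch.randn(batch, seq_len, hidden)

        # LayerNorm normalizes the last dimension, so (batch, seq, hidden) works directly.
        ln = nn.LayerNorm(hidden)
        ln_out = ln(encoder_hidden_states)

        # GroupNorm normalizes dim 1 (channels), so flip the last two dimensions to
        # (batch, hidden, seq), normalize, then flip back.
        gn = nn.GroupNorm(num_groups=32, num_channels=hidden)
        gn_out = gn(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)

        assert ln_out.shape == gn_out.shape == encoder_hidden_states.shape
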
    • unet time embedding activation function (#3048) · 2d52e81c
      Will Berman authored
      * unet time embedding activation function
      
      * typo act_fn -> time_embedding_act_fn
      
      * flatten conditional
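
      A hedged sketch of the new option on a deliberately tiny UNet; every value except time_embedding_act_fn is an arbitrary test-sized setting.

        from diffusers import UNet2DConditionModel

        # Apply an extra activation ("silu" here) to the time embedding before it is
        # passed to the blocks; leaving the new argument unset skips this step.
        unet = UNet2DConditionModel(
            sample_size=32,
            block_out_channels=(32, 64),
            down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
            up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
            cross_attention_dim=32,
            layers_per_block=1,
            norm_num_groups=8,
            time_embedding_act_fn="silu",
        )
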
    • Fix typo and format BasicTransformerBlock attributes (#2953) · 52c4d32d
      Chanchana Sornsoontorn authored
      * chore(train_controlnet) fix typo in logger message
      
      * chore(models) refactor module order; make it match the calling order
      
      When printing the BasicTransformerBlock to stdout, I think it's important that the attributes are shown in their calling order. Also, the "3. Feed Forward" comment previously made no sense: it should sit next to self.ff, but it was instead next to self.norm3.
      
      * correct many tests
      
      * remove bogus file
      
      * make style
      
      * correct more tests
      
      * finish tests
      
      * fix one more
      
      * make style
      
      * make unclip deterministic
      
      * chore(models/attention) reorganize comments in BasicTransformerBlock class
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
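
      A hedged, standalone illustration of the defaulting rule described above; it mirrors the behaviour rather than calling into diffusers itself.

        def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
            # An unset mid-block value inherits a single-boolean only_cross_attention.
            if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
                return only_cross_attention
            return bool(mid_block_only_cross_attention)

        assert resolve_mid_block_only_cross_attention(True) is True          # inherited from only_cross_attention
        assert resolve_mid_block_only_cross_attention(True, False) is False  # an explicit value wins
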
    • Fix scheduler type mismatch (#3041) · 526827c3
      Pedro Cuenca authored
      The mismatch occurred when doing generation manually and using guidance_scale as a static argument.