"docs/vscode:/vscode.git/clone" did not exist on "3a630c589dd4ab779d28067b17c87712c9f1a674"
  1. 26 Apr, 2023 1 commit
  2. 25 Apr, 2023 1 commit
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase the linked-to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
      e51f19ae
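      A minimal sketch of the pt_to_pil helper these doc snippets rely on; the random tensor below is purely illustrative and stands in for the stage-1 output that the later stages consume:

      ```python
      import torch
      from diffusers.utils import pt_to_pil

      # pt_to_pil expects a float tensor in [-1, 1] of shape (batch, channels, h, w)
      # and returns a list of PIL images.
      images = torch.rand(1, 3, 64, 64) * 2 - 1
      pt_to_pil(images)[0].save("stage_1_sample.png")
      ```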
  3. 24 Apr, 2023 1 commit
  4. 22 Apr, 2023 1 commit
  5. 21 Apr, 2023 1 commit
  6. 20 Apr, 2023 1 commit
    • adding custom diffusion training to diffusers examples (#3031) · 3979aac9
      nupurkmr9 authored
      
      
      * diffusers==0.14.0 update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion update
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * apply formatting and get rid of bare except.
      
      * refactor readme and other minor changes.
      
      * misc refactor.
      
      * fix: repo_id issue and loaders logging bug.
      
      * fix: save_model_card.
      
      * fix: save_model_card.
      
      * fix: save_model_card.
      
      * add: doc entry.
      
      * refactor doc.
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion
      
      * apply style.
      
      * remove trailing whitespace.
      
      * fix: toctree entry.
      
      * remove unnecessary print.
      
      * custom diffusion
      
      * custom diffusion
      
      * custom diffusion test
      
      * custom diffusion xformer update
      
      * custom diffusion xformer update
      
      * custom diffusion xformer update
      
      ---------
      Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
      3979aac9
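      As a rough usage sketch, the refactored README loads the trained deltas back through the attention-processor and textual-inversion loaders; the paths and the <new1> modifier token below are placeholders:

      ```python
      import torch
      from diffusers import DiffusionPipeline

      pipe = DiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      ).to("cuda")

      # Load the cross-attention deltas and the learned modifier-token embedding
      # written out by examples/custom_diffusion/train_custom_diffusion.py.
      pipe.unet.load_attn_procs(
          "path-to-saved-model", weight_name="pytorch_custom_diffusion_weights.bin"
      )
      pipe.load_textual_inversion("path-to-saved-model", weight_name="<new1>.bin")

      image = pipe("<new1> cat swimming in a pool", num_inference_steps=50).images[0]
      ```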
  7. 19 Apr, 2023 1 commit
  8. 18 Apr, 2023 2 commits
  9. 17 Apr, 2023 1 commit
  10. 16 Apr, 2023 1 commit
  11. 14 Apr, 2023 1 commit
  12. 12 Apr, 2023 2 commits
  13. 11 Apr, 2023 8 commits
    • Attn added kv processor torch 2.0 block (#3023) · ea39cd7e
      Will Berman authored
      add AttnAddedKVProcessor2_0 block
      ea39cd7e
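      The 2.0 block swaps the manual attention math for PyTorch 2.0's fused kernel; a minimal sketch of the underlying primitive, with illustrative shapes:

      ```python
      import torch
      import torch.nn.functional as F

      # (batch, heads, seq_len, head_dim); 77 is a typical text-encoder sequence length
      query = torch.randn(2, 8, 64, 40)
      key = torch.randn(2, 8, 77, 40)
      value = torch.randn(2, 8, 77, 40)

      # One fused call replaces softmax(Q K^T / sqrt(d)) @ V from the pre-2.0 processor.
      out = F.scaled_dot_product_attention(query, key, value)
      print(out.shape)  # torch.Size([2, 8, 64, 40])
      ```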
    • Attention processor cross attention norm group norm (#3021) · 98c5e5da
      Will Berman authored
      add group norm type to attention processor cross attention norm
      
      This lets the cross attention norm use both a group norm block and a
      layer norm block.
      
      The group norm operates along the channels dimension
      and requires input shape (batch size, channels, *), whereas the layer norm with a single
      `normalized_shape` dimension only operates over the least significant
      dimension i.e. (*, channels).
      
      The channels we want to normalize are the hidden dimension of the encoder hidden states.
      
      By convention, the encoder hidden states are always passed as (batch size, sequence
      length, hidden states).
      
      This means the layer norm can operate on the tensor without modification, but the group
      norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
      
      All existing attention processors will have the same logic and we can
      consolidate it in a helper function `prepare_encoder_hidden_states`
      
      prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
      
      move norm_cross defined check to outside norm_encoder_hidden_states
      
      add missing attn.norm_cross check
      98c5e5da
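      A minimal sketch of the shape handling described above, using plain torch modules with illustrative dimensions:

      ```python
      import torch
      from torch import nn

      batch, seq_len, channels = 2, 77, 768
      encoder_hidden_states = torch.randn(batch, seq_len, channels)

      # Layer norm acts on the trailing dimension, so (batch, seq, channels) works as-is.
      out_ln = nn.LayerNorm(channels)(encoder_hidden_states)

      # Group norm wants channels second, so flip the last two dims and flip back.
      group_norm = nn.GroupNorm(num_groups=32, num_channels=channels)
      out_gn = group_norm(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)

      assert out_ln.shape == out_gn.shape == encoder_hidden_states.shape
      ```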
    • unet time embedding activation function (#3048) · 2d52e81c
      Will Berman authored
      * unet time embedding activation function
      
      * typo act_fn -> time_embedding_act_fn
      
      * flatten conditional
      2d52e81c
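      A hedged sketch of the new option on a deliberately tiny UNet config; every argument except time_embedding_act_fn is only there to keep the model small:

      ```python
      from diffusers import UNet2DConditionModel

      unet = UNet2DConditionModel(
          sample_size=32,
          block_out_channels=(32, 64),
          down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
          up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
          cross_attention_dim=32,
          time_embedding_act_fn="silu",  # the activation this commit adds
      )
      ```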
    • Fix typo and format BasicTransformerBlock attributes (#2953) · 52c4d32d
      Chanchana Sornsoontorn authored
      * chore(train_controlnet) fix typo in logger message
      
      * chore(models) refactor module order to match the calling order
      
      When printing the BasicTransformerBlock to stdout, the attribute order should match the calling order. The "3. Feed Forward" comment was also misplaced: it belongs next to self.ff but previously sat next to self.norm3.
      
      * correct many tests
      
      * remove bogus file
      
      * make style
      
      * correct more tests
      
      * finish tests
      
      * fix one more
      
      * make style
      
      * make unclip deterministic
      
      * chore(models/attention) reorganize comments in BasicTransformerBlock class
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      52c4d32d
    • add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
      c6180a31
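      A sketch of the defaulting rule; the helper name is hypothetical and only illustrates the check, which in the source sits inline in the UNet setup:

      ```python
      def resolve_mid_block_only_cross_attention(
          only_cross_attention, mid_block_only_cross_attention=None
      ):
          # When only_cross_attention is a single bool and no explicit mid-block
          # value is given, the mid block inherits that bool.
          if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
              mid_block_only_cross_attention = only_cross_attention
          return bool(mid_block_only_cross_attention)

      assert resolve_mid_block_only_cross_attention(True) is True
      assert resolve_mid_block_only_cross_attention(False) is False
      assert resolve_mid_block_only_cross_attention(True, mid_block_only_cross_attention=False) is False
      ```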
    • Update documentation (#2996) · cb63febf
      George Ogden authored
      * Update documentation
      
      Based on sampling, the width and height must be powers of 2 as the samples halve in size each time
      
      * make style
      cb63febf
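      A tiny sketch of the constraint this doc change records: every downsampling block halves the sample, so the input size must survive repeated halving:

      ```python
      def survives_halving(size: int, num_halvings: int) -> bool:
          # True when the size stays a whole number through every halving step.
          return size % (2 ** num_halvings) == 0

      print(survives_halving(64, 3))   # True: 64 -> 32 -> 16 -> 8
      print(survives_halving(100, 3))  # False: 100 / 8 is not an integer
      ```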
    • `AttentionProcessor.group_norm` num_channels should be `query_dim` (#3046) · 8c6b47cf
      Will Berman authored
      * `AttentionProcessor.group_norm` num_channels should be `query_dim`
      
      The group_norm on the attention processor should really norm the number
      of channels in the query _not_ the inner dim. This wasn't caught before
      because the group_norm is only used by the added kv attention processors
      and the added kv attention processors are only used by the karlo models
      which are configured such that the inner dim is the same as the query
      dim.
      
      * add_{k,v}_proj should be projecting to inner_dim
      8c6b47cf
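      A sketch of the ordering the message describes, with illustrative shapes and plain torch modules rather than the diffusers source: the norm runs before the q/k/v projections, so its channel count must match the incoming query_dim:

      ```python
      import torch
      from torch import nn

      batch, seq_len = 2, 16
      query_dim, inner_dim = 320, 640  # equal only in the karlo configs, different in general

      hidden_states = torch.randn(batch, query_dim, seq_len)
      group_norm = nn.GroupNorm(num_groups=32, num_channels=query_dim)  # not inner_dim
      normed = group_norm(hidden_states)

      to_q = nn.Linear(query_dim, inner_dim)  # projection to inner_dim happens after the norm
      query = to_q(normed.transpose(1, 2))
      print(query.shape)  # torch.Size([2, 16, 640])
      ```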
    • Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * uP
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
      8b451eb6
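      A minimal sketch of the access pattern this PR enforces: configuration values such as in_channels are read via .config instead of as direct model attributes:

      ```python
      from diffusers import UNet2DConditionModel

      unet = UNet2DConditionModel()   # default configuration
      print(unet.config.in_channels)  # preferred: explicit config access (4 by default)
      # Direct attribute access like `unet.in_channels` is what this PR moves away from.
      ```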
  14. 10 Apr, 2023 5 commits
  15. 30 Mar, 2023 1 commit
  16. 28 Mar, 2023 2 commits
  17. 27 Mar, 2023 2 commits
  18. 23 Mar, 2023 4 commits
    • Add AudioLDM (#2232) · b94880e5
      Sanchit Gandhi authored
      
      
      * Add AudioLDM
      
      * up
      
      * add vocoder
      
      * start unet
      
      * unconditional unet
      
      * clap, vocoder and vae
      
      * clean-up: conversion scripts
      
      * fix: conversion script token_type_ids
      
      * clean-up: pipeline docstring
      
      * tests: from SD
      
      * clean-up: cpu offload vocoder instead of safety checker
      
      * feat: adapt tests to audioldm
      
      * feat: add docs
      
      * clean-up: amend pipeline docstrings
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * fix: add doc path to toctree
      
      * clean-up: args for conversion script
      
      * clean-up: paths to checkpoints
      
      * fix: use conditional unet
      
      * clean-up: make style
      
      * fix: type hints for UNet
      
      * clean-up: docstring for UNet
      
      * clean-up: make style
      
      * clean-up: remove duplicate in docstring
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * clean-up: move imports to start in code snippet
      
      * fix: pass cross_attention_dim as a list/tuple to unet
      
      * clean-up: make fix-copies
      
      * fix: update checkpoint path
      
      * fix: unet cross_attention_dim in tests
      
      * film embeddings -> class embeddings
      
      * Apply suggestions from code review
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      * fix: unet film embed to use existing args
      
      * fix: unet tests to use existing args
      
      * fix: make style
      
      * fix: transformers import and version in init
      
      * clean-up: make style
      
      * Revert "clean-up: make style"
      
      This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66.
      
      * clean-up: make style
      
      * clean-up: use pipeline tester mixin tests where possible
      
      * clean-up: skip attn slicing test
      
      * fix: add torch dtype to docs
      
      * fix: remove conversion script out of src
      
      * fix: remove .detach from 1d waveform
      
      * fix: reduce default num inf steps
      
      * fix: swap height/width -> audio_length_in_s
      
      * clean-up: make style
      
      * fix: remove nightly tests
      
      * fix: imports in conversion script
      
      * clean-up: slim-down to two slow tests
      
      * clean-up: slim-down fast tests
      
      * fix: batch consistent tests
      
      * clean-up: make style
      
      * clean-up: remove vae slicing fast test
      
      * clean-up: propagate changes to doc
      
      * fix: increase test tol to 1e-2
      
      * clean-up: finish docs
      
      * clean-up: make style
      
      * feat: vocoder / VAE compatibility check
      
      * feat: possibly expand / cut audio waveform
      
      * fix: pipeline call signature test
      
      * fix: slow tests output len
      
      * clean-up: make style
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
      b94880e5
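      A hedged usage sketch of the new pipeline, following the docs added in this PR (the checkpoint name is assumed from the AudioLDM release):

      ```python
      import torch
      from diffusers import AudioLDMPipeline

      pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16)
      pipe = pipe.to("cuda")

      # audio_length_in_s replaces the height/width arguments inherited from SD.
      audio = pipe(
          "Techno music with a strong, upbeat tempo and high melodic riffs",
          num_inference_steps=10,
          audio_length_in_s=5.0,
      ).audios[0]  # a 1-D numpy waveform
      ```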
    • Flax controlnet (#2727) · df91c447
      YiYi Xu authored
      
      
      * add controlnet flax
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      df91c447
    • Music Spectrogram diffusion pipeline (#1044) · 2ef9bdd7
      Kashif Rasul authored
      
      
      * initial TokenEncoder and ContinuousEncoder
      
      * initial modules
      
      * added ContinuousContextTransformer
      
      * fix copy paste error
      
      * use numpy for get_sequence_length
      
      * initial terminal relative positional encodings
      
      * fix weights keys
      
      * fix assert
      
      * cross attend style: concat encodings
      
      * make style
      
      * concat once
      
      * fix formatting
      
      * Initial SpectrogramPipeline
      
      * fix input_tokens
      
      * make style
      
      * added mel output
      
      * ignore weights for config
      
      * move mel to numpy
      
      * import pipeline
      
      * fix class names and import
      
      * moved models to models folder
      
      * import ContinuousContextTransformer and SpectrogramDiffusionPipeline
      
      * initial spec diffusion conversion script
      
      * renamed config to t5config
      
      * added weight loading
      
      * use arguments instead of t5config
      
      * broadcast noise time to batch dim
      
      * fix call
      
      * added scale_to_features
      
      * fix weights
      
      * transpose layernorm weight
      
      * scale is a vector
      
      * scale the query outputs
      
      * added comment
      
      * undo scaling
      
      * undo depth_scaling
      
      * initial get_extended_attention_mask
      
      * attention_mask is none in self-attention
      
      * cleanup
      
      * manually invert attention
      
      * nn.Linear needs bias=False
      
      * added T5LayerFFCond
      
      * remove to fix conflict
      
      * make style and dummy
      
      * remove unused variables
      
      * remove predict_epsilon
      
      * Move accelerate to a soft-dependency (#1134)
      
      * finish
      
      * finish
      
      * Update src/diffusers/modeling_utils.py
      
      * Update src/diffusers/pipeline_utils.py
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      
      * more fixes
      
      * fix
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      
      * fix order
      
      * added initial midi to note token data pipeline
      
      * added int to int tokenizer
      
      * remove duplicate
      
      * added logic for segments
      
      * add melgan to pipeline
      
      * move autoregressive gen into pipeline
      
      * added note_representation_processor_chain
      
      * fix dtypes
      
      * remove immutabledict req
      
      * initial doc
      
      * use np.where
      
      * require note_seq
      
      * fix typo
      
      * update dependency
      
      * added note-seq to test
      
      * added is_note_seq_available
      
      * fix import
      
      * added toc
      
      * added example usage
      
      * undo for now
      
      * moved docs
      
      * fix merge
      
      * fix imports
      
      * predict first segment
      
      * avoid un-needed copy to and from cpu
      
      * make style
      
      * Copyright
      
      * fix style
      
      * add test and fix inference steps
      
      * remove bogus files
      
      * reorder models
      
      * up
      
      * remove transformers dependency
      
      * make work with diffusers cross attention
      
      * clean more
      
      * remove @
      
      * improve further
      
      * up
      
      * uP
      
      * Apply suggestions from code review
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      
      * loop over all tokens
      
      * make style
      
      * Added a section on the model
      
      * fix formatting
      
      * grammar
      
      * formatting
      
      * make fix-copies
      
      * Update src/diffusers/pipelines/__init__.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * added callback and optional onnx
      
      * do not squeeze batch dim
      
      * clean up more
      
      * upload
      
      * convert jax to numpy
      
      * make style
      
      * fix warning
      
      * make fix-copies
      
      * fix warning
      
      * add initial fast tests
      
      * add initial pipeline_params
      
      * eval mode due to dropout
      
      * skip batch tests as pipeline runs on a single file
      
      * make style
      
      * fix relative path
      
      * fix doc tests
      
      * Update src/diffusers/models/t5_film_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/t5_film_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/spectrogram_diffusion.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add MidiProcessor
      
      * format
      
      * fix org
      
      * Apply suggestions from code review
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      
      * make style
      
      * pin protobuf to <4
      
      * fix formatting
      
      * white space
      
      * tensorboard needs protobuf
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      2ef9bdd7
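      A hedged usage sketch combining the pipeline with the MidiProcessor added near the end of this PR; it needs the note_seq dependency, and the MIDI path is a placeholder:

      ```python
      from diffusers import MidiProcessor, SpectrogramDiffusionPipeline

      pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion")
      pipe = pipe.to("cuda")
      processor = MidiProcessor()

      # The processor splits a MIDI file into note-token segments; the pipeline
      # generates the spectrogram segment by segment and vocodes it with MELGAN.
      output = pipe(processor("your_file.mid"))
      audio = output.audios[0]
      ```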
    • [UNet3DModel] Fix with attn processor (#2790) · a8315ce1
      Patrick von Platen authored
      * [UNet3DModel] Fix attn processor
      
      * make style
      a8315ce1
  19. 22 Mar, 2023 1 commit
    • [MS Text To Video] Add first text to video (#2738) · ca1a2229
      Patrick von Platen authored
      
      
      * [MS Text To Video] Add first text to video
      
      * upload
      
      * make first model example
      
      * match unet3d params
      
      * make sure weights are correctly converted
      
      * improve
      
      * forward pass works, but diff result
      
      * make forward work
      
      * fix more
      
      * finish
      
      * refactor video output class.
      
      * feat: add support for a video export utility.
      
      * fix: opencv availability check.
      
      * run make fix-copies.
      
      * add: docs for the model components.
      
      * add: standalone pipeline doc.
      
      * edit docstring of the pipeline.
      
      * add: right path to TransformerTempModel
      
      * add: first set of tests.
      
      * complete fast tests for text to video.
      
      * fix bug
      
      * up
      
      * three fast tests failing.
      
      * add: note on slow tests
      
      * make work with all schedulers
      
      * apply styling.
      
      * add slow tests
      
      * change file name
      
      * update
      
      * more correction
      
      * more fixes
      
      * finish
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      
      * make copies
      
      * fix pipeline tests
      
      * fix more tests
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * apply suggestions
      
      * up
      
      * revert
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      ca1a2229
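      A hedged usage sketch of the new pipeline together with the video export utility this PR introduces (the ModelScope checkpoint name is an assumption):

      ```python
      import torch
      from diffusers import DiffusionPipeline
      from diffusers.utils import export_to_video

      pipe = DiffusionPipeline.from_pretrained(
          "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
      )
      pipe.enable_model_cpu_offload()  # keeps peak GPU memory manageable

      video_frames = pipe("Spiderman is surfing", num_inference_steps=25).frames
      video_path = export_to_video(video_frames)  # writes an .mp4 and returns its path
      ```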
  20. 21 Mar, 2023 1 commit
  21. 17 Mar, 2023 1 commit
  22. 16 Mar, 2023 1 commit