1. 09 Nov, 2022 1 commit
  2. 08 Nov, 2022 3 commits
  3. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
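      A minimal usage sketch for the scheduler this PR adds (not part of the commit itself): swapping the multistep DPM-Solver into a Stable Diffusion pipeline. The checkpoint id and the 20-step setting are illustrative assumptions.

```python
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

# Illustrative checkpoint; any Stable Diffusion checkpoint should work.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Swap in the multistep DPM-Solver; it typically reaches good quality in
# ~20 steps instead of the usual 50.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```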
  4. 04 Nov, 2022 2 commits
    • Add CycleDiffusion pipeline using Stable Diffusion (#888) · 9d8943b7
      Chen Wu (吴尘) authored
      * Add CycleDiffusion pipeline for Stable Diffusion
      
      * Add the option of passing noise to DDIMScheduler
      
      Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator.
      
      * Update README.md
      
      * Update README.md
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update scheduling_ddim.py
      
      * Update import format
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update scheduling_ddim.py
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update scheduling_ddim.py
      
      * Update scheduling_ddim.py
      
      * Update scheduling_ddim.py
      
      * add two tests
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update README.md
      
      * Rename the pipeline as suggested in the latest review comment
      
      * Update test_pipelines.py
      
      * Update test_pipelines.py
      
      * Update test_pipelines.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Remove the generator
      
      This generator does not control all randomness during sampling, which can be misleading.
      
      * Update optimal hyperparameters
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Apply suggestions from code review
      
      * up
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * up
      
      * up
      
      * Replace assert with ValueError
      
      * finish docs
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
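      A hedged usage sketch for the CycleDiffusion pipeline added above. The prompts, strength, and guidance values are illustrative, not the tuned hyperparameters this PR mentions; the pipeline requires a `DDIMScheduler`, and early releases named the image argument `init_image` rather than `image`.

```python
from PIL import Image
from diffusers import CycleDiffusionPipeline, DDIMScheduler

scheduler = DDIMScheduler.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)
pipe = CycleDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=scheduler
).to("cuda")

init_image = Image.open("horse.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="An astronaut riding an elephant",
    source_prompt="An astronaut riding a horse",
    image=init_image,            # the source image to be edited
    strength=0.8,                # how much noise to add to the source
    guidance_scale=2.0,
    source_guidance_scale=1.0,   # guidance for reconstructing the source
).images[0]
```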
    • Bump to 0.8.0.dev0 (#1131) · 2fcae69f
      Anton Lozhkov authored
      * Bump to 0.8.0.dev0
      
      * deprecate int timesteps
      
      * style
  5. 03 Nov, 2022 3 commits
    • handle device for randn in euler step (#1124) · 7b030a7d
      Suraj Patil authored
      * handle device for randn in euler step
      
      * convert device to str
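      The idea behind this fix, sketched as an assumed illustration (not the actual diff): create the per-step noise directly on the sample's device instead of on the CPU default and copying it over afterwards.

```python
import torch

def step_noise(sample, generator=None):
    # An explicit `device=` keeps the noise on the same device as the
    # sample, avoiding a device mismatch (or an extra host-to-device copy)
    # inside each Euler step.
    return torch.randn(
        sample.shape,
        generator=generator,
        device=sample.device,
        dtype=sample.dtype,
    )
```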
    • VQ-diffusion (#658) · ef2ea33c
      Will Berman authored
      * Changes for VQ-diffusion VQVAE
      
      Allow specifying the embedding dimension in VQModel:
      `VQModel` by default sets the embedding dimension to the number of
      latent channels. The VQ-diffusion VQVAE uses a smaller embedding
      dimension (128) than its number of latent channels (256).
      
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the down and up
      UNet block helpers; VQ-diffusion's VQVAE uses these two block types.
      
      * Changes for VQ-diffusion transformer
      
      Modify attention.py so SpatialTransformer can be used for
      VQ-diffusion's transformer.
      
      SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones.
      - `in_channels` was made optional in the constructor, so the two call sites that passed it positionally now pass it as a keyword.
      - modified forward pass to take optional timestep embeddings
      
      ImagePositionalEmbeddings:
      - added to provide positional embeddings for discrete latent-pixel inputs
      
      BasicTransformerBlock:
      - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
      - modified forward pass to take optional timestep embeddings
      
      CrossAttention:
      - may now optionally take a bias parameter for its query, key, and value linear layers
      
      FeedForward:
      - Internal layers are now configurable
      
      ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer
      
      AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings
      
      * Add VQ-diffusion scheduler
      
      * Add VQ-diffusion pipeline
      
      * Add VQ-diffusion convert script to diffusers
      
      * Add VQ-diffusion dummy objects
      
      * Add VQ-diffusion markdown docs
      
      * Add VQ-diffusion tests
      
      * some renaming
      
      * some fixes
      
      * more renaming
      
      * correct
      
      * fix typo
      
      * correct weights
      
      * finalize
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * finish
      
      * finish
      
      * up
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
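      A hedged usage sketch for the new pipeline; the checkpoint id below is the Microsoft ITHQ release commonly paired with it and is an assumption here, not something this PR pins down.

```python
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")

# VQ-diffusion samples discrete latent tokens, then decodes them with the VQVAE.
image = pipe("a teddy bear playing in the pool", num_inference_steps=100).images[0]
```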
    • feat: add repaint (#974) · d38c8043
      Revist authored
      * feat: add repaint
      
      * fix: fix quality check with `make fix-copies`
      
      * fix: remove old unnecessary arg
      
      * chore: change default to DDPM (looks better in experiments)
      
      * ".to(device)" changed to "device="
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * make generator device-specific
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * make generator device-specific and change shape
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * fix: add preprocessing for image and mask
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * fix: update test
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Update src/diffusers/pipelines/repaint/pipeline_repaint.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add docs and examples
      
      * Fix toctree
      Co-authored-by: fja <fja@zurich.ibm.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
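      A hedged usage sketch for the RePaint pipeline. The checkpoint id is illustrative, and the RePaint-specific knobs (`eta`, `jump_length`, `jump_n_sample`) follow the paper's naming; exact defaults may differ in the merged code.

```python
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained(
    "google/ddpm-ema-celebahq-256", scheduler=scheduler
).to("cuda")

original = Image.open("face.png").convert("RGB").resize((256, 256))
mask = Image.open("mask.png").convert("RGB").resize((256, 256))

output = pipe(
    image=original,
    mask_image=mask,       # binary mask; check the docstring for which
                           # value marks the region to repaint
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,        # RePaint resampling: how far to jump back...
    jump_n_sample=10,      # ...and how many times to resample per jump
)
image = output.images[0]
```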
  6. 02 Nov, 2022 1 commit
  7. 31 Oct, 2022 2 commits
  8. 29 Oct, 2022 1 commit
  9. 27 Oct, 2022 1 commit
    • Continuation of #942: additional float64 failure (#996) · 1d04e1b4
      Pedro Cuenca authored
      * Add failing test for #940.
      
      * Do not use torch.float64 in mps.
      
      * style
      
      * Temporarily skip add_noise for IPNDMScheduler.
      
      Until #990 is addressed.
      
      * Fix additional float64 error in mps.
      
      * Improve add_noise test
      
      * Slight edit – I think it's clearer this way.
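      A sketch of the pattern behind this fix, inferred from the title: the mps backend does not support `torch.float64`, so scheduler math that prefers double precision has to fall back to `float32` there. The function below is illustrative, not the patched code.

```python
import torch

def make_alphas_cumprod(device):
    # float64 keeps the long cumulative product numerically stable on
    # CPU/CUDA, but float64 tensors raise on Apple's mps backend.
    dtype = torch.float32 if torch.device(device).type == "mps" else torch.float64
    betas = torch.linspace(1e-4, 0.02, 1000, dtype=dtype, device=device)
    return torch.cumprod(1.0 - betas, dim=0)
```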
  10. 26 Oct, 2022 1 commit
  11. 25 Oct, 2022 2 commits
  12. 20 Oct, 2022 3 commits
  13. 18 Oct, 2022 1 commit
  14. 14 Oct, 2022 1 commit
  15. 13 Oct, 2022 1 commit
    • update flax scheduler API (#822) · 0a09af2f
      Suraj Patil authored
      * update flax scheduler API
      
      * remove set format
      
      * fix call to scale_model_input
      
      * update flax pndm
      
      * use int32
      
      * update docstr
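      A hedged sketch of the functional API this commit moves the Flax schedulers toward: scheduler state is explicit and every mutating call returns a new state. Exact output field names may differ from the merged code.

```python
import jax.numpy as jnp
from diffusers import FlaxPNDMScheduler

scheduler = FlaxPNDMScheduler()
state = scheduler.create_state()
# Timesteps are int32 arrays; the latents shape is passed up front
# (the `shape` argument added in #690 below).
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=(1, 4, 64, 64))

sample = jnp.zeros((1, 4, 64, 64))  # placeholder latents
for t in state.timesteps:
    model_output = sample  # placeholder for the UNet prediction
    output = scheduler.step(state, model_output, t, sample)
    sample, state = output.prev_sample, output.state
```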
  16. 10 Oct, 2022 1 commit
  17. 07 Oct, 2022 3 commits
  18. 06 Oct, 2022 4 commits
  19. 05 Oct, 2022 2 commits
  20. 04 Oct, 2022 1 commit
  21. 03 Oct, 2022 3 commits
    • [Utils] Add deprecate function and move testing_utils under utils (#659) · f1484b81
      Patrick von Platen authored
      * [Utils] Add deprecate function
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * fix
      
      * up
      
      * move to deprecation utils file
      
      * fix
      
      * fix
      
      * fix more
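      A hedged sketch of what a `deprecate` helper like the one added here can look like; the real function in `diffusers.utils` has a different, more flexible signature, so treat this as illustrative only.

```python
import warnings

def deprecate(name, remove_in, message, standard_warn=True):
    # Point the warning at the caller (stacklevel=2) and name the version
    # in which the deprecated argument disappears.
    prefix = (
        f"`{name}` is deprecated and will be removed in version {remove_in}."
        if standard_warn
        else ""
    )
    warnings.warn(f"{prefix} {message}".strip(), FutureWarning, stacklevel=2)

# Example (names illustrative): warn while a renamed kwarg still works.
deprecate("predict_epsilon", "0.10.0", "Use the new argument instead.")
```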
    • Fix import with Flax but without PyTorch (#688) · 688031c5
      Pedro Cuenca authored
      * Don't use `load_state_dict` if torch is not installed.
      
      * Define `SchedulerOutput` to use torch or flax arrays.
      
      * Don't import LMSDiscreteScheduler without torch.
      
      * Create distinct FlaxSchedulerOutput.
      
      * Additional changes required for FlaxSchedulerMixin
      
      * Do not import torch pipelines in Flax.
      
      * Revert "Define `SchedulerOutput` to use torch or flax arrays."
      
      This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
      
      * Prefix Flax scheduler outputs for consistency.
      
      * make style
      
      * FlaxSchedulerOutput is now a dataclass.
      
      * Don't use f-string without placeholders.
      
      * Add blank line.
      
      * Style (docstrings)
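      A hedged sketch of the guarded-import pattern this fix relies on: torch-backed modules are imported only when torch is actually installed, so a Flax-only environment never touches them. The module paths are illustrative.

```python
from diffusers.utils import is_flax_available, is_torch_available

if is_torch_available():
    # Torch-only schedulers must stay behind the guard, otherwise a
    # Flax-only install fails at import time.
    from diffusers.schedulers.scheduling_lms_discrete import LMSDiscreteScheduler

if is_flax_available():
    from diffusers.schedulers.scheduling_pndm_flax import FlaxPNDMScheduler
```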
    • Flax: add shape argument to `set_timesteps` (#690) · 249b36cc
      Pedro Cuenca authored
      * Flax: add shape argument to set_timesteps
      
      * style
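      A short sketch of the new argument. The reasoning (assumed from the PR) is that the functional Flax scheduler state pre-allocates arrays whose size depends on the sample, so the shape must be known when the schedule is built rather than discovered lazily as in the PyTorch schedulers.

```python
from diffusers import FlaxPNDMScheduler

scheduler = FlaxPNDMScheduler()
state = scheduler.create_state()
# Latents shape: (batch, channels, height // 8, width // 8).
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=(1, 4, 64, 64))
```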
  22. 30 Sep, 2022 1 commit
    • Optimize Stable Diffusion (#371) · 9ebaea54
      Nouamane Tazi authored
      * initial commit
      
      * make UNet stream capturable
      
      * try to fix noise_pred value
      
      * remove cuda graph and keep NB
      
      * non-blocking UNet with PNDMScheduler
      
      * make timesteps np arrays for pndm scheduler
      because lists don't get formatted to tensors in `self.set_format`
      
      * make max async in pndm
      
      * use channels-last format in the UNet
      
      * avoid moving timesteps device in each unet call
      
      * avoid memcpy op in `get_timestep_embedding`
      
      * add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
      
      * update TODO
      
      * replace `channels_last` kwarg with `memory_format` for more generality
      
      * revert the channels_last changes to leave it for another PR
      
      * remove non_blocking when moving input ids to device
      
      * remove blocking from all .to() operations at beginning of pipeline
      
      * fix merging
      
      * fix merging
      
      * model can run in other precisions without autocast
      
      * attn refactoring
      
      * Revert "attn refactoring"
      
      This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
      
      * remove restriction to run conv_norm in fp32
      
      * use `baddbmm` instead of `matmul` in attention for better perf
      
      * removing all reshapes to test perf
      
      * Revert "removing all reshapes to test perf"
      
      This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
      
      * add shapes comments
      
      * hardcode what's needed for jitting
      
      * Revert "hardcode what's needed for jitting"
      
      This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
      
      * Revert "remove restriction to run conv_norm in fp32"
      
      This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
      
      * revert using baddbmm in attention's forward
      
      * cleanup comment
      
      * remove restriction to run conv_norm in fp32; no quality loss was noticed
      
      This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
      
      * add more optimizations techniques to docs
      
      * Revert "add shapes comments"
      
      This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
      
      * apply suggestions
      
      * make quality
      
      * apply suggestions
      
      * styling
      
      * `scheduler.timesteps` are now arrays, so we don't need `.to()`
      
      * remove useless .type()
      
      * use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
      
      * move scheduler timesteps to the correct device if they are tensors
      
      * add device to `set_timesteps` in LMSD scheduler
      
      * `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
      
      * quick fix
      
      * styling
      
      * remove kwargs from schedulers `set_timesteps`
      
      * revert to using max in K-LMS inpaint pipeline test
      
      * Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
      
      This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
      
      * move timesteps to correct device before loop in SD pipeline
      
      * apply previous fix to other SD pipelines
      
      * UNet now accepts tensor timesteps even on the wrong device, to avoid errors
      - it shouldn't affect performance if the timesteps are already on the correct device
      - it does slow down performance if they're on the wrong device
      
      * fix pipeline when timesteps are arrays with strides
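      One of the optimizations named above, sketched under assumptions: attention scores computed with a fused `torch.baddbmm` (the 1/sqrt(dim) scale applied via `alpha`) instead of a separate scale plus `matmul`. This is an illustration of the technique, not the merged code.

```python
import torch

def attention_probs(query, key, scale):
    # query: (batch*heads, q_len, dim), key: (batch*heads, k_len, dim)
    empty = torch.empty(
        query.shape[0], query.shape[1], key.shape[1],
        dtype=query.dtype, device=query.device,
    )
    # beta=0 tells baddbmm to ignore `empty`'s (uninitialized) contents;
    # alpha folds the scaling into the same fused batched-matmul kernel.
    scores = torch.baddbmm(empty, query, key.transpose(-1, -2), beta=0, alpha=scale)
    return scores.softmax(dim=-1)

# Example shapes: 16 batched heads, 4096 image tokens, 77 text tokens, head dim 40.
q = torch.randn(16, 4096, 40)
k = torch.randn(16, 77, 40)
probs = attention_probs(q, k, scale=40 ** -0.5)
```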
  23. 29 Sep, 2022 1 commit