1. 09 Nov, 2022 1 commit
  2. 04 Nov, 2022 1 commit
    • Add CycleDiffusion pipeline using Stable Diffusion (#888) · 9d8943b7
      Chen Wu (吴尘) authored
      
      
      * Add CycleDiffusion pipeline for Stable Diffusion
      
      * Add the option of passing noise to DDIMScheduler
      
      Add the option of providing the noise tensor itself to DDIMScheduler, instead of a random number generator.
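      
      A minimal sketch of how caller-supplied noise could be passed to the scheduler, assuming the option is exposed as a `variance_noise` keyword on `DDIMScheduler.step` (the argument name here is an assumption based on this description):
      
      ```python
      import torch
      from diffusers import DDIMScheduler
      
      scheduler = DDIMScheduler()
      scheduler.set_timesteps(50)
      
      sample = torch.randn(1, 4, 64, 64)        # current latents
      model_output = torch.randn(1, 4, 64, 64)  # placeholder for the UNet's noise prediction
      noise = torch.randn(1, 4, 64, 64)         # noise supplied by the caller
      
      t = scheduler.timesteps[0]
      # eta > 0 makes the DDIM step stochastic; the assumed `variance_noise` keyword
      # is used instead of drawing fresh noise from a generator inside the scheduler.
      prev_sample = scheduler.step(model_output, t, sample, eta=0.1, variance_noise=noise).prev_sample
      ```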
      
      * Update README.md
      
      * Update README.md
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update scheduling_ddim.py
      
      * Update import format
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update scheduling_ddim.py
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_ddim.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update scheduling_ddim.py
      
      * Update scheduling_ddim.py
      
      * Update scheduling_ddim.py
      
      * add two tests
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Update README.md
      
      * Rename the pipeline as suggested in the latest reviewer comment
      
      * Update test_pipelines.py
      
      * Update test_pipelines.py
      
      * Update test_pipelines.py
      
      * Update pipeline_stable_diffusion_cycle_diffusion.py
      
      * Remove the generator
      
      This generator does not control all randomness during sampling, which can be misleading.
      
      * Update optimal hyperparameters
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Apply suggestions from code review
      
      * uP
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * up
      
      * up
      
      * Replace assert with ValueError
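      
      A tiny illustrative sketch of this change; the helper and the `strength` parameter are hypothetical, not taken from the actual diff:
      
      ```python
      def check_strength(strength: float) -> None:
          # Before: an assert, which can be stripped with `python -O` and raises a terse AssertionError.
          # assert 0 <= strength <= 1, "strength must be in [0, 1]"
          # After: an explicit, always-on check with a descriptive message.
          if not 0 <= strength <= 1:
              raise ValueError(f"The value of `strength` should be in [0.0, 1.0] but is {strength}")
      ```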
      
      * finish docs
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
  3. 03 Nov, 2022 2 commits
    • VQ-diffusion (#658) · ef2ea33c
      Will Berman authored
      
      
      * Changes for VQ-diffusion VQVAE
      
      Allow specifying the embedding dimension in `VQModel`:
      by default, `VQModel` sets the embedding dimension to the number
      of latent channels, but the VQ-diffusion VQVAE uses a smaller
      embedding dimension (128) than its number of latent channels (256).
      
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the down and up
      UNet block helpers; VQ-diffusion's VQVAE uses these two block types.
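      
      A configuration sketch tying the two changes above together: the embedding dimension (128) is decoupled from the number of latent channels (256), and the attention encoder/decoder blocks are selected by name. The keyword `vq_embed_dim` and the codebook size are assumptions for illustration:
      
      ```python
      from diffusers import VQModel
      
      vqvae = VQModel(
          in_channels=3,
          out_channels=3,
          down_block_types=("DownEncoderBlock2D", "AttnDownEncoderBlock2D"),
          up_block_types=("AttnUpDecoderBlock2D", "UpDecoderBlock2D"),
          block_out_channels=(128, 256),
          latent_channels=256,      # channels produced by the encoder
          num_vq_embeddings=1024,   # illustrative codebook size
          vq_embed_dim=128,         # assumed name for the new, smaller embedding dimension
      )
      ```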
      
      * Changes for VQ-diffusion transformer
      
      Modify attention.py so SpatialTransformer can be used for
      VQ-diffusion's transformer.
      
      SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
      - `in_channels` was made optional in the constructor, so the two call sites that passed it as a positional argument now pass it as a keyword argument
      - modified forward pass to take optional timestep embeddings
      
      ImagePositionalEmbeddings:
      - added to provide positional embeddings to discrete inputs for latent pixels
      
      BasicTransformerBlock:
      - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
      - modified forward pass to take optional timestep embeddings
      
      CrossAttention:
      - now may optionally take a bias parameter for its query, key, and value linear layers
      
      FeedForward:
      - Internal layers are now configurable
      
      ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer
      
      AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings
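      
      A minimal sketch of the AdaLayerNorm idea from the last item (not the exact class added in this PR): a LayerNorm whose scale and shift are predicted from a learned embedding of the discrete timestep.
      
      ```python
      import torch
      from torch import nn
      
      class AdaLayerNormSketch(nn.Module):
          """LayerNorm modulated by a learned embedding of the (discrete) timestep."""
      
          def __init__(self, embedding_dim: int, num_embeddings: int):
              super().__init__()
              self.emb = nn.Embedding(num_embeddings, embedding_dim)
              self.silu = nn.SiLU()
              self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
              self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)
      
          def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
              # x: (batch, seq_len, dim); timestep: (batch,) integer timesteps.
              emb = self.linear(self.silu(self.emb(timestep)))  # (batch, 2 * dim)
              scale, shift = torch.chunk(emb, 2, dim=-1)        # (batch, dim) each
              return self.norm(x) * (1 + scale[:, None, :]) + shift[:, None, :]
      
      # Example: normalize a (batch=2, seq=16, dim=64) tensor conditioned on timesteps 3 and 7.
      out = AdaLayerNormSketch(embedding_dim=64, num_embeddings=100)(torch.randn(2, 16, 64), torch.tensor([3, 7]))
      ```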
      
      * Add VQ-diffusion scheduler
      
      * Add VQ-diffusion pipeline
      
      * Add VQ-diffusion convert script to diffusers
      
      * Add VQ-diffusion dummy objects
      
      * Add VQ-diffusion markdown docs
      
      * Add VQ-diffusion tests
      
      * some renaming
      
      * some fixes
      
      * more renaming
      
      * correct
      
      * fix typo
      
      * correct weights
      
      * finalize
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * finish
      
      * finish
      
      * up
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
    • feat: add repaint (#974) · d38c8043
      Revist authored
      
      
      * feat: add repaint
      
      * fix: fix quality check with `make fix-copies`
      
      * fix: remove old unnecessary arg
      
      * chore: change default to DDPM (looks better in experiments)
      
      * ".to(device)" changed to "device="
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
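      
      A small sketch, in plain PyTorch, of the pattern this and the following items describe (not the pipeline's exact code):
      
      ```python
      import torch
      
      shape = (1, 3, 256, 256)
      device = "cuda" if torch.cuda.is_available() else "cpu"
      
      # Before: noise sampled on the CPU with a CPU generator, then copied to the device.
      cpu_generator = torch.Generator().manual_seed(0)
      noise = torch.randn(shape, generator=cpu_generator).to(device)
      
      # After: a device-specific generator plus `device=` draws the noise directly
      # on the target device and avoids the extra copy.
      generator = torch.Generator(device=device).manual_seed(0)
      noise = torch.randn(shape, generator=generator, device=device)
      ```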
      
      * make generator device-specific
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * make generator device-specific and change shape
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * fix: add preprocessing for image and mask
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
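      
      A generic sketch of the kind of preprocessing this item refers to (not the pipeline's exact code): convert a PIL image to an NCHW tensor in [-1, 1] and a PIL mask to a binary NCHW tensor; the mask convention (1 = known pixels) is assumed here.
      
      ```python
      import numpy as np
      import torch
      from PIL import Image
      
      def preprocess_image(image: Image.Image) -> torch.Tensor:
          # HWC uint8 in [0, 255] -> NCHW float in [-1, 1]
          arr = np.array(image.convert("RGB")).astype(np.float32) / 255.0
          tensor = torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)
          return 2.0 * tensor - 1.0
      
      def preprocess_mask(mask: Image.Image) -> torch.Tensor:
          # Grayscale mask -> NCHW float in {0, 1}; 1 marks known pixels (convention assumed).
          arr = np.array(mask.convert("L")).astype(np.float32) / 255.0
          return torch.from_numpy((arr > 0.5).astype(np.float32))[None, None]
      ```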
      
      * fix: update test
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Update src/diffusers/pipelines/repaint/pipeline_repaint.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add docs and examples
      
      * Fix toctree
      Co-authored-by: fja <fja@zurich.ibm.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
  4. 25 Oct, 2022 1 commit
  5. 19 Oct, 2022 1 commit
  6. 18 Oct, 2022 2 commits
  7. 06 Oct, 2022 2 commits
  8. 03 Oct, 2022 1 commit
    • Fix import with Flax but without PyTorch (#688) · 688031c5
      Pedro Cuenca authored
      * Don't use `load_state_dict` if torch is not installed.
      
      * Define `SchedulerOutput` to use torch or flax arrays.
      
      * Don't import LMSDiscreteScheduler without torch.
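      
      A minimal sketch of the availability-guard pattern several of these commits describe, assuming the `is_torch_available` / `is_flax_available` helpers from `diffusers.utils`:
      
      ```python
      from diffusers.utils import is_flax_available, is_torch_available
      
      if is_torch_available():
          # Torch-only schedulers such as LMSDiscreteScheduler are imported only
          # when PyTorch is actually installed.
          from diffusers import LMSDiscreteScheduler  # noqa: F401
      
      if is_flax_available():
          # Flax counterparts carry a Flax prefix and return their own FlaxSchedulerOutput.
          from diffusers import FlaxDDIMScheduler  # noqa: F401
      ```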
      
      * Create distinct FlaxSchedulerOutput.
      
      * Additional changes required for FlaxSchedulerMixin
      
      * Do not import torch pipelines in Flax.
      
      * Revert "Define `SchedulerOutput` to use torch or flax arrays."
      
      This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
      
      * Prefix Flax scheduler outputs for consistency.
      
      * make style
      
      * FlaxSchedulerOutput is now a dataclass.
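      
      A hedged sketch of what a dataclass-based Flax scheduler output might look like (the field name `prev_sample` is assumed from the torch counterpart):
      
      ```python
      from dataclasses import dataclass
      
      import jax.numpy as jnp
      
      @dataclass
      class FlaxSchedulerOutputSketch:
          """Result of a scheduler step; `prev_sample` is the sample to feed into the next step."""
      
          prev_sample: jnp.ndarray
      ```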
      
      * Don't use f-string without placeholders.
      
      * Add blank line.
      
      * Style (docstrings)
  9. 20 Sep, 2022 1 commit
  10. 08 Sep, 2022 1 commit
  11. 07 Sep, 2022 1 commit
  12. 01 Sep, 2022 1 commit
  13. 30 Aug, 2022 1 commit
  14. 17 Aug, 2022 1 commit
  15. 14 Aug, 2022 1 commit
  16. 09 Aug, 2022 1 commit
  17. 20 Jul, 2022 1 commit
  18. 19 Jul, 2022 1 commit
  19. 13 Jul, 2022 1 commit
  20. 28 Jun, 2022 2 commits
  21. 26 Jun, 2022 1 commit
  22. 25 Jun, 2022 2 commits
  23. 22 Jun, 2022 2 commits
  24. 17 Jun, 2022 2 commits
  25. 16 Jun, 2022 1 commit
  26. 15 Jun, 2022 5 commits
  27. 13 Jun, 2022 3 commits