1. 07 Aug, 2024 1 commit
  2. 01 Aug, 2024 1 commit
  3. 30 Jul, 2024 1 commit
    • Yoach Lacombe's avatar
      Stable Audio integration (#8716) · 69e72b1d
      Yoach Lacombe authored
      
      
      * WIP modeling code and pipeline
      
      * add custom attention processor + custom activation + add to init
      
      * correct ProjectionModel forward
      
       * add stable audio to __init__
      
      * add autoencoder and update pipeline and modeling code
      
      * add half Rope
      
      * add partial rotary v2
      
       * add temporary modifications to scheduler
      
      * add EDM DPM Solver
      
      * remove TODOs
      
      * clean GLU
      
      * remove att.group_norm to attn processor
      
      * revert back src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      
      * refactor GLU -> SwiGLU
      
      * remove redundant args
      
      * add channel multiples in autoencoder docstrings
      
       * changes in docstrings and copyright headers
      
      * clean pipeline
      
      * further cleaning
      
      * remove peft and lora and fromoriginalmodel
      
      * Delete src/diffusers/pipelines/stable_audio/diffusers.code-workspace
      
      * make style
      
      * dummy models
      
      * fix copied from
      
      * add fast oobleck tests
      
      * add brownian tree
      
      * oobleck autoencoder slow tests
      
      * remove TODO
      
      * fast stable audio pipeline tests
      
      * add slow tests
      
      * make style
      
      * add first version of docs
      
      * wrap is_torchsde_available to the scheduler
      
      * fix slow test
      
      * test with input waveform
      
      * add input waveform
      
      * remove some todos
      
      * create stableaudio gaussian projection + make style
      
      * add pipeline to toctree
      
      * fix copied from
      
      * make quality
      
      * refactor timestep_features->time_proj
      
      * refactor joint_attention_kwargs->cross_attention_kwargs
      
      * remove forward_chunk
      
      * move StableAudioDitModel to transformers folder
      
      * correct convert + remove partial rotary embed
      
      * apply suggestions from yiyixuxu -> removing attn.kv_heads
      
      * remove temb
      
      * remove cross_attention_kwargs
      
      * further removal of cross_attention_kwargs
      
      * remove text encoder autocast to fp16
      
      * continue removing autocast
      
      * make style
      
      * refactor how text and audio are embedded
      
      * add paper
      
      * update example code
      
      * make style
      
      * unify projection model forward + fix device placement
      
      * make style
      
      * remove fuse qkv
      
      * apply suggestions from review
      
      * Update src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py
      Co-authored-by: default avatarYiYi Xu <yixu310@gmail.com>
      
      * make style
      
      * smaller models in fast tests
      
      * pass sequential offloading fast tests
      
      * add docs for vae and autoencoder
      
      * make style and update example
      
      * remove useless import
      
      * add cosine scheduler
      
      * dummy classes
      
      * cosine scheduler docs
      
      * better description of scheduler
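For reference, the GLU -> SwiGLU refactor in this commit list follows the standard SwiGLU formulation. This is a hedged sketch with hypothetical dimensions and class name, not the PR's exact module: project to twice the hidden size, then gate one half with SiLU.

```python
import torch
import torch.nn as nn


class SwiGLU(nn.Module):
    # Sketch of a SwiGLU feed-forward gate (hypothetical, for illustration):
    # one linear layer produces both the value and the gate, and the gate
    # half is passed through SiLU before the elementwise product.
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, hidden_dim * 2)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        value, gate = self.proj(x).chunk(2, dim=-1)
        return value * self.act(gate)
```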
      
      ---------
      Co-authored-by: default avatarYiYi Xu <yixu310@gmail.com>
      69e72b1d
  4. 20 Jul, 2024 1 commit
  5. 18 Jul, 2024 1 commit
  6. 17 Jul, 2024 1 commit
  7. 11 Jul, 2024 1 commit
  8. 08 Jul, 2024 2 commits
  9. 27 Jun, 2024 1 commit
  10. 12 Jun, 2024 2 commits
  11. 05 Jun, 2024 1 commit
  12. 29 May, 2024 1 commit
  13. 24 May, 2024 1 commit
  14. 16 May, 2024 1 commit
  15. 10 May, 2024 1 commit
    • Mark Van Aken's avatar
      #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
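The change is purely in the annotations: `torch.FloatTensor` is a legacy class that only names float32 CPU tensors, while `torch.Tensor` covers every dtype and device, matching what these functions actually accept at runtime. A minimal sketch (hypothetical function name, not from the PR):

```python
import torch

# Before: def scale_noise(sample: torch.FloatTensor) -> torch.FloatTensor: ...
# After: torch.Tensor matches runtime behavior for fp32/fp16/bf16 alike.
def scale_noise(sample: torch.Tensor, factor: float = 0.5) -> torch.Tensor:
    # Works unchanged for any floating dtype or device.
    return sample * factor
```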
      be4afa0b
  16. 09 May, 2024 2 commits
  17. 08 May, 2024 1 commit
    • Philip Pham's avatar
      Check shape and remove deprecated APIs in scheduling_ddpm_flax.py (#7703) · f29b9348
      Philip Pham authored
      `model_output.shape` may only have rank 1.
      
      There are warnings related to use of random keys.
      
      ```
      tests/schedulers/test_scheduler_flax.py: 13 warnings
        /Users/phillypham/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py:268: FutureWarning: normal accepts a single key, but was given a key array of shape (1, 2) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
          noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)
      
      tests/schedulers/test_scheduler_flax.py::FlaxDDPMSchedulerTest::test_betas
        /Users/phillypham/virtualenv/diffusers/lib/python3.9/site-packages/jax/_src/random.py:731: FutureWarning: uniform accepts a single key, but was given a key array of shape (1,) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
          u = uniform(key, shape, dtype, lo, hi)  # type: ignore[arg-type]
      ```
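The key-handling fix these warnings point at can be sketched as follows (standard JAX APIs, not the PR's exact code): unpack the array returned by `jax.random.split` so the sampler receives a single key rather than a key array.

```python
import jax

key = jax.random.PRNGKey(0)
# jax.random.split returns an *array* of keys; passing that array whole to
# a sampler triggers the FutureWarning quoted above. Unpack it instead.
key, split_key = jax.random.split(key)
noise = jax.random.normal(split_key, shape=(4, 3))  # single key: no warning
```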
      f29b9348
  18. 03 May, 2024 1 commit
  19. 29 Apr, 2024 1 commit
  20. 27 Apr, 2024 1 commit
  21. 03 Apr, 2024 2 commits
    • Beinsezii's avatar
      UniPC Multistep add `rescale_betas_zero_snr` (#7531) · aa190259
      Beinsezii authored
      * UniPC Multistep add `rescale_betas_zero_snr`
      
      Same patch as DPM and Euler with the patched final alpha cumprod
      
       BF16 doesn't seem to break down, likely because UniPC already upcasts
       during some phases. We could still force an upcast, since it only
       loses ≈ 0.005 it/s for me, but the difference in output is very small. A
       better endeavor might be upcasting in step() and removing all the other
       upcasts elsewhere.
      
      * UniPC ZSNR UT
      
      * Re-add `rescale_betas_zsnr` doc oops
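For context, the zero-terminal-SNR rescaling this option enables (from Lin et al., "Common Diffusion Noise Schedules and Sample Steps Are Flawed") can be sketched as below. This mirrors the helper shared by the DPM/Euler schedulers rather than this PR's exact diff: shift and scale the square roots of the cumulative alphas so the final one is exactly zero.

```python
import torch


def rescale_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    # Rescale the beta schedule so that the terminal SNR is zero, i.e.
    # the last cumulative alpha becomes exactly 0.
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)
    alphas_bar_sqrt = alphas_cumprod.sqrt()

    alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone()
    alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone()

    # Shift so the last value is 0, then scale so the first is preserved.
    alphas_bar_sqrt -= alphas_bar_sqrt_T
    alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T)

    # Convert back from cumulative products to per-step betas.
    alphas_bar = alphas_bar_sqrt**2
    alphas = alphas_bar[1:] / alphas_bar[:-1]
    alphas = torch.cat([alphas_bar[0:1], alphas])
    return 1.0 - alphas
```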
      aa190259
    • Beinsezii's avatar
      UniPC Multistep fix tensor dtype/device on order=3 (#7532) · 19ab04ff
      Beinsezii authored
      * UniPC UTs iterate solvers on FP16
      
       It wasn't catching errors on order==3. Might be excessive?
      
      * UniPC Multistep fix tensor dtype/device on order=3
      
      * UniPC UTs Add v_pred to fp16 test iter
      
       For completeness' sake. Probably overkill?
      19ab04ff
  22. 02 Apr, 2024 1 commit
    • Sayak Paul's avatar
      add: utility to format our docs too 📜 (#7314) · 4a343077
      Sayak Paul authored
      * add: utility to format our docs too 📜
      
      * debugging saga
      
      * fix: message
      
      * checking
      
      * should be fixed.
      
      * revert pipeline_fixture
      
      * remove empty line
      
      * make style
      
      * fix: setup.py
      
      * style.
      4a343077
  23. 30 Mar, 2024 1 commit
    • Beinsezii's avatar
      Add `final_sigma_zero` to UniPCMultistep (#7517) · f0c81562
      Beinsezii authored
      * Add `final_sigma_zero` to UniPCMultistep
      
      Effectively the same trick as DDIM's `set_alpha_to_one` and
      DPM's `final_sigma_type='zero'`.
      Currently False by default but maybe this should be True?
      
      * `final_sigma_zero: bool` -> `final_sigmas_type: str`
      
      Should 1:1 match DPM Multistep now.
      
      * Set `final_sigmas_type='sigma_min'` in UniPC UTs
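What the option controls can be illustrated like this (a simplified sketch, not the scheduler's actual code): the last entry appended to the sigma schedule is either the smallest training sigma or exactly zero.

```python
import numpy as np


def append_final_sigma(sigmas: np.ndarray, final_sigmas_type: str = "zero") -> np.ndarray:
    # "sigma_min": stop at the smallest noise level from training;
    # "zero": denoise all the way to sigma = 0 on the last step.
    if final_sigmas_type == "sigma_min":
        final = sigmas[-1]
    elif final_sigmas_type == "zero":
        final = 0.0
    else:
        raise ValueError(f"Unknown final_sigmas_type: {final_sigmas_type}")
    return np.append(sigmas, final)
```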
      f0c81562
  24. 21 Mar, 2024 1 commit
  25. 19 Mar, 2024 2 commits
  26. 18 Mar, 2024 2 commits
    • M. Tolga Cangöz's avatar
      e97a633b
    • M. Tolga Cangöz's avatar
      Fix Typos (#7325) · 6a05b274
      M. Tolga Cangöz authored
      * Fix PyTorch's convention for inplace functions
      
      * Fix import structure in __init__.py and update config loading logic in test_config.py
      
      * Update configuration access
      
      * Fix typos
      
      * Trim trailing white spaces
      
      * Fix typo in logger name
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d.
      
      * Fix typo in step_index property description
      
      * Revert "Update configuration access"
      
      This reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8.
      
      * Revert "Fix import structure in __init__.py and update config loading logic in test_config.py"
      
      This reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171.
      
      * Fix typos
      
      * Fix typos
      
      * Fix typos
      
      * Fix a typo: tranform -> transform
      6a05b274
  27. 14 Mar, 2024 2 commits
  28. 13 Mar, 2024 2 commits
    • Manuel Brack's avatar
      [Pipeline] Add LEDITS++ pipelines (#6074) · 00eca4b8
      Manuel Brack authored
      
      
      * Setup LEdits++ file structure
      
      * Fix import
      
      * LEditsPP Stable Diffusion pipeline
      
      * Include variable image aspect ratios
      
      * Implement LEDITS++ for SDXL
      
      * clean up LEditsPPPipelineStableDiffusion
      
      * Adjust inversion output
      
      * Added docu, more cleanup for LEditsPPPipelineStableDiffusion
      
      * clean up LEditsPPPipelineStableDiffusionXL
      
      * Update documentation
      
      * Fix documentation import
      
      * Add skeleton IF implementation
      
      * Fix documentation typo
      
       * Add LEDITS docs to toctree
      
      * Add missing title
      
      * Finalize SD documentation
      
      * Finalize SD-XL documentation
      
      * Fix code style and quality
      
      * Fix typo
      
      * Fix return types
      
      * added LEditsPPPipelineIF; minor changes for LEditsPPPipelineStableDiffusion and LEditsPPPipelineStableDiffusionXL
      
      * Fix copy reference
      
      * add documentation for IF
      
      * Add first tests
      
      * Fix batching for SD-XL
      
      * Fix text encoding and perfect reconstruction for SD-XL
      
      * Add tests for SD-XL, minor changes
      
      * move user_mask to correct device, use cross_attention_kwargs also for inversion
      
      * Example docstring
      
      * Fix attention resolution for non-square images
      
      * Refactoring for PR review
      
      * Safely remove ledits_utils.py
      
      * Style fixes
      
      * Replace assertions with ValueError
      
      * Remove LEditsPPPipelineIF
      
       * Remove unnecessary input checks
      
      * Refactoring of CrossAttnProcessor
      
       * Revert unnecessary changes to scheduler
      
      * Remove first progress-bar in inversion
      
      * Refactor scheduler usage and reset
      
      * Use imageprocessor instead of custom logic
      
      * Fix scheduler init warning
      
      * Fix error when running the pipeline in fp16
      
      * Update documentation wrt perfect inversion
      
      * Update tests
      
      * Fix code quality and copy consistency
      
      * Update LEditsPP import
      
      * Remove enable/disable methods that are now in StableDiffusionMixin
      
      * Change import in docs
      
      * Revert import structure change
      
      * Fix ledits imports
      
      ---------
      Co-authored-by: default avatarKatharina Kornmeier <katharina.kornmeier@stud.tu-darmstadt.de>
      00eca4b8
    • Sayak Paul's avatar
      [Chore] switch to `logger.warning` (#7289) · 4fbd310f
      Sayak Paul authored
      switch to logger.warning
      4fbd310f
  29. 08 Mar, 2024 1 commit
    • Chi's avatar
      Solve missing clip_sample implementation in FlaxDDIMScheduler. (#7017) · 46fac824
      Chi authored
      
      
       * Added a new docstring to the class, so it's easier for other developers to understand what it does and where it is used.
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
       These changes were suggested by the maintainer.
      Co-authored-by: default avatarSayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      Add suggested text
      Co-authored-by: default avatarSayak Paul <spsayakpaul@gmail.com>
      
      * Update unet_2d_blocks.py
      
       Changed the Parameter text to Args.
      
      * Update unet_2d_blocks.py
      
       Set proper indentation in this file.
      
      * Update unet_2d_blocks.py
      
       Small change to the act_fun argument line.
      
       * Ran the black command to reformat the code style
      
      * Update unet_2d_blocks.py
      
       Add a docstring similar to the one in the original diffusion repository.
      
       * Fix bug mentioned in issue #6901
      
      * Update src/diffusers/schedulers/scheduling_ddim_flax.py
      Co-authored-by: default avatarPedro Cuenca <pedro@huggingface.co>
      
      * Fix linter
      
      * Restore empty line
      
      ---------
      Co-authored-by: default avatarSayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: default avatarPedro Cuenca <pedro@huggingface.co>
      46fac824
  30. 05 Mar, 2024 1 commit
  31. 04 Mar, 2024 1 commit
    • Thiago Crepaldi's avatar
      Enable PyTorch's FakeTensorMode for EulerDiscreteScheduler scheduler (#7151) · ca6cdc77
      Thiago Crepaldi authored
      * Enable FakeTensorMode for EulerDiscreteScheduler scheduler
      
      PyTorch's FakeTensorMode does not support `.numpy()` or `numpy.array()`
      calls.
      
      This PR replaces `sigmas` numpy tensor by a PyTorch tensor equivalent
      
      Repro
      
      ```python
      with torch._subclasses.FakeTensorMode() as fake_mode, ONNXTorchPatcher():
          fake_model = DiffusionPipeline.from_pretrained(model_name, low_cpu_mem_usage=False)
      ```
      
      that otherwise would fail with
      `RuntimeError: .numpy() is not supported for tensor subclasses.`
      
      * Address comments
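The torch-native replacement can be sketched like this (a minimal illustration with made-up constants, not the scheduler's exact code): build `sigmas` entirely with torch ops, so no numpy round-trip that FakeTensorMode would reject is needed.

```python
import torch

num_train_timesteps = 10
betas = torch.linspace(0.0001, 0.02, num_train_timesteps)
alphas_cumprod = (1.0 - betas).cumprod(dim=0)
# Torch-native sigmas; the old code built these via numpy and later
# called .numpy(), which FakeTensorMode rejects.
sigmas = ((1.0 - alphas_cumprod) / alphas_cumprod).sqrt()
```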
      ca6cdc77
  32. 27 Feb, 2024 1 commit
    • Beinsezii's avatar
      DPMSolverMultistep add `rescale_betas_zero_snr` (#7097) · 2e31a759
      Beinsezii authored
      * DPMMultistep rescale_betas_zero_snr
      
      * DPM upcast samples in step()
      
      * DPM rescale_betas_zero_snr UT
      
      * DPMSolverMulti move sample upcast after model convert
      
      Avoids having to re-use the dtype.
      
      * Add a newline for Ruff
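The upcast-in-step pattern from this commit can be sketched as follows (hypothetical function name, and the update rule is a placeholder rather than the real DPM-Solver math): do the solver arithmetic in fp32, then cast back to the caller's dtype.

```python
import torch


def step_sketch(model_output: torch.Tensor, sample: torch.Tensor) -> torch.Tensor:
    # Upcast to fp32 for the solver arithmetic to avoid fp16/bf16 error
    # accumulation, then cast the result back to the original dtype.
    orig_dtype = sample.dtype
    sample32 = sample.float()
    out32 = model_output.float()
    prev = sample32 - 0.1 * out32  # placeholder update, not the real solver step
    return prev.to(orig_dtype)
```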
      2e31a759