1. 12 Jun, 2024 1 commit
  2. 05 Jun, 2024 1 commit
  3. 29 May, 2024 1 commit
  4. 24 May, 2024 1 commit
  5. 16 May, 2024 1 commit
  6. 10 May, 2024 1 commit
    • #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
      be4afa0b
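The change above was largely mechanical. A minimal sketch of the kind of find & replace involved (the file contents here are a hypothetical example, not from the PR):

```python
import re

source = "def forward(self, x: torch.FloatTensor) -> torch.FloatTensor:"

# Replace the overly specific FloatTensor hint with the general Tensor type.
# Word boundaries keep identifiers that merely contain "FloatTensor" intact.
updated = re.sub(r"\btorch\.FloatTensor\b", "torch.Tensor", source)
print(updated)  # def forward(self, x: torch.Tensor) -> torch.Tensor:
```

The motivation is that `torch.FloatTensor` hints reject valid inputs such as half-precision or CUDA tensors, while `torch.Tensor` covers all dtypes and devices.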
  7. 09 May, 2024 2 commits
  8. 08 May, 2024 1 commit
    • Check shape and remove deprecated APIs in scheduling_ddpm_flax.py (#7703) · f29b9348
      Philip Pham authored
      `model_output.shape` may only have rank 1.
      
      There are warnings related to use of random keys.
      
      ```
      tests/schedulers/test_scheduler_flax.py: 13 warnings
        /Users/phillypham/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py:268: FutureWarning: normal accepts a single key, but was given a key array of shape (1, 2) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
          noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)
      
      tests/schedulers/test_scheduler_flax.py::FlaxDDPMSchedulerTest::test_betas
        /Users/phillypham/virtualenv/diffusers/lib/python3.9/site-packages/jax/_src/random.py:731: FutureWarning: uniform accepts a single key, but was given a key array of shape (1,) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
          u = uniform(key, shape, dtype, lo, hi)  # type: ignore[arg-type]
      ```
      f29b9348
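The warning above comes from passing the key *array* returned by `jax.random.split` where a single key is expected; indexing out one key silences it. A minimal sketch, assuming a recent JAX:

```python
import jax

key = jax.random.PRNGKey(0)
# split(key, 1) returns a (1, 2)-shaped key array; random.normal wants a
# single key, so index one out instead of passing the array directly.
split_key = jax.random.split(key, 1)[0]
noise = jax.random.normal(split_key, shape=(4,))
assert noise.shape == (4,)
```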
  9. 03 May, 2024 1 commit
  10. 29 Apr, 2024 1 commit
  11. 27 Apr, 2024 1 commit
  12. 03 Apr, 2024 2 commits
    • UniPC Multistep add `rescale_betas_zero_snr` (#7531) · aa190259
      Beinsezii authored
      * UniPC Multistep add `rescale_betas_zero_snr`
      
      Same patch as DPM and Euler with the patched final alpha cumprod
      
       BF16 doesn't seem to break down, I think because UniPC already upcasts
       during some phases. We could still force an upcast since it only
       costs ≈ 0.005 it/s for me, but the difference in output is very small. A
       better approach might be upcasting in step() and removing all the other
       upcasts elsewhere?
      
      * UniPC ZSNR UT
      
      * Re-add `rescale_betas_zsnr` doc oops
      aa190259
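`rescale_betas_zero_snr` applies the zero-terminal-SNR fix already used by the DPM and Euler schedulers: shift and rescale sqrt(alpha_bar) so the final timestep carries zero signal. A numpy sketch of the core rescaling (the beta schedule here is illustrative, not the scheduler's actual config):

```python
import numpy as np

betas = np.linspace(0.0001, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

# Shift and scale sqrt(alpha_bar) so the first value is preserved and the
# last value becomes exactly zero (zero terminal SNR).
ab_sqrt = np.sqrt(alphas_cumprod)
first, last = ab_sqrt[0], ab_sqrt[-1]
ab_sqrt = (ab_sqrt - last) * first / (first - last)
alphas_cumprod = ab_sqrt**2

print(alphas_cumprod[-1])  # 0.0 -> terminal SNR is zero
```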
    • UniPC Multistep fix tensor dtype/device on order=3 (#7532) · 19ab04ff
      Beinsezii authored
      * UniPC UTs iterate solvers on FP16
      
       It wasn't catching errors on order==3. Might be excessive?
      
      * UniPC Multistep fix tensor dtype/device on order=3
      
      * UniPC UTs Add v_pred to fp16 test iter
      
       For completeness' sake. Probably overkill?
      19ab04ff
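The order==3 failure is the classic pattern of intermediate tensors being created with the default dtype and device instead of inheriting them from the sample, which breaks fp16 and CUDA runs. A hedged sketch of the pattern (the helper and its constants are hypothetical, not the scheduler's actual code):

```python
import torch

def third_order_coeffs(sample: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: build constants on the sample's device/dtype so
    # fp16 (or CUDA) samples don't silently mix with fp32 CPU tensors.
    return torch.tensor([1.0, 0.5, 1.0 / 6.0],
                        device=sample.device, dtype=sample.dtype)

x = torch.ones(4, dtype=torch.float16)
coeffs = third_order_coeffs(x)
assert coeffs.dtype == torch.float16
```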
  13. 02 Apr, 2024 1 commit
    • add: utility to format our docs too 📜 (#7314) · 4a343077
      Sayak Paul authored
      * add: utility to format our docs too 📜
      
      * debugging saga
      
      * fix: message
      
      * checking
      
      * should be fixed.
      
      * revert pipeline_fixture
      
      * remove empty line
      
      * make style
      
      * fix: setup.py
      
      * style.
      4a343077
  14. 30 Mar, 2024 1 commit
    • Add `final_sigma_zero` to UniPCMultistep (#7517) · f0c81562
      Beinsezii authored
      * Add `final_sigma_zero` to UniPCMultistep
      
      Effectively the same trick as DDIM's `set_alpha_to_one` and
      DPM's `final_sigma_type='zero'`.
      Currently False by default but maybe this should be True?
      
      * `final_sigma_zero: bool` -> `final_sigmas_type: str`
      
      Should 1:1 match DPM Multistep now.
      
      * Set `final_sigmas_type='sigma_min'` in UniPC UTs
      f0c81562
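`final_sigmas_type` controls the very last sigma appended to the schedule: reuse the smallest trained sigma, or force zero so the last step denoises fully (the same trick as DDIM's `set_alpha_to_one`). A numpy sketch of the two options (schedule values illustrative):

```python
import numpy as np

alphas_cumprod = np.cumprod(1.0 - np.linspace(0.0001, 0.02, 1000))
sigmas = np.sqrt((1 - alphas_cumprod) / alphas_cumprod)

def final_sigma(kind: str) -> float:
    if kind == "sigma_min":
        return float(sigmas[0])  # smallest sigma in the trained schedule
    if kind == "zero":
        return 0.0               # denoise all the way on the last step
    raise ValueError(kind)

assert final_sigma("zero") == 0.0
assert final_sigma("sigma_min") > 0.0
```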
  15. 21 Mar, 2024 1 commit
  16. 19 Mar, 2024 2 commits
  17. 18 Mar, 2024 2 commits
    • e97a633b
    • Fix Typos (#7325) · 6a05b274
      M. Tolga Cangöz authored
      * Fix PyTorch's convention for inplace functions
      
      * Fix import structure in __init__.py and update config loading logic in test_config.py
      
      * Update configuration access
      
      * Fix typos
      
      * Trim trailing white spaces
      
      * Fix typo in logger name
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d.
      
      * Fix typo in step_index property description
      
      * Revert "Update configuration access"
      
      This reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8.
      
      * Revert "Fix import structure in __init__.py and update config loading logic in test_config.py"
      
      This reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171.
      
      * Fix typos
      
      * Fix typos
      
      * Fix typos
      
      * Fix a typo: tranform -> transform
      6a05b274
  18. 14 Mar, 2024 2 commits
  19. 13 Mar, 2024 2 commits
    • [Pipeline] Add LEDITS++ pipelines (#6074) · 00eca4b8
      Manuel Brack authored
      * Setup LEdits++ file structure
      
      * Fix import
      
      * LEditsPP Stable Diffusion pipeline
      
      * Include variable image aspect ratios
      
      * Implement LEDITS++ for SDXL
      
      * clean up LEditsPPPipelineStableDiffusion
      
      * Adjust inversion output
      
       * Added docs, more cleanup for LEditsPPPipelineStableDiffusion
      
      * clean up LEditsPPPipelineStableDiffusionXL
      
      * Update documentation
      
      * Fix documentation import
      
      * Add skeleton IF implementation
      
      * Fix documentation typo
      
       * Add LEDITS++ docs to toctree
      
      * Add missing title
      
      * Finalize SD documentation
      
      * Finalize SD-XL documentation
      
      * Fix code style and quality
      
      * Fix typo
      
      * Fix return types
      
      * added LEditsPPPipelineIF; minor changes for LEditsPPPipelineStableDiffusion and LEditsPPPipelineStableDiffusionXL
      
      * Fix copy reference
      
      * add documentation for IF
      
      * Add first tests
      
      * Fix batching for SD-XL
      
      * Fix text encoding and perfect reconstruction for SD-XL
      
      * Add tests for SD-XL, minor changes
      
      * move user_mask to correct device, use cross_attention_kwargs also for inversion
      
      * Example docstring
      
      * Fix attention resolution for non-square images
      
      * Refactoring for PR review
      
      * Safely remove ledits_utils.py
      
      * Style fixes
      
      * Replace assertions with ValueError
      
      * Remove LEditsPPPipelineIF
      
       * Remove unnecessary input checks
      
      * Refactoring of CrossAttnProcessor
      
       * Revert unnecessary changes to scheduler
      
      * Remove first progress-bar in inversion
      
      * Refactor scheduler usage and reset
      
      * Use imageprocessor instead of custom logic
      
      * Fix scheduler init warning
      
      * Fix error when running the pipeline in fp16
      
      * Update documentation wrt perfect inversion
      
      * Update tests
      
      * Fix code quality and copy consistency
      
      * Update LEditsPP import
      
      * Remove enable/disable methods that are now in StableDiffusionMixin
      
      * Change import in docs
      
      * Revert import structure change
      
      * Fix ledits imports
      
      ---------
       Co-authored-by: Katharina Kornmeier <katharina.kornmeier@stud.tu-darmstadt.de>
      00eca4b8
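One cleanup step above, replacing assertions with ValueError, matters because `assert` statements are stripped under `python -O` and give opaque errors. A generic sketch of the pattern (the check itself is hypothetical, not taken from the pipeline):

```python
def check_dims(height: int, width: int) -> None:
    # Raise instead of assert: survives `python -O` and gives a clear message.
    if height % 8 != 0 or width % 8 != 0:
        raise ValueError(
            f"height and width must be divisible by 8, got {height}x{width}"
        )

check_dims(512, 512)  # passes silently
try:
    check_dims(500, 512)
except ValueError as err:
    print(err)
```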
    • [Chore] switch to `logger.warning` (#7289) · 4fbd310f
      Sayak Paul authored
      switch to logger.warning
      4fbd310f
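For context, `logger.warn` is a long-deprecated alias of `logger.warning` in the standard `logging` module, which is presumably why the codebase standardized on the latter. The supported spelling (logger name here is illustrative):

```python
import logging

logger = logging.getLogger("diffusers.example")
# logger.warn("...") is deprecated; logger.warning is the documented API.
logger.warning("argument %s is deprecated, use %s instead", "old", "new")
```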
  20. 08 Mar, 2024 1 commit
    • Solve missing clip_sample implementation in FlaxDDIMScheduler. (#7017) · 46fac824
      Chi authored
       * I added a new docstring to the class. This makes it clearer to other developers what it does and where it's used.
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
       These changes were suggested by the maintainer.
       Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      Add suggested text
       Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update unet_2d_blocks.py
      
       I changed the Parameters heading to Args.
      
      * Update unet_2d_blocks.py
      
       Set proper indentation in this file.
      
      * Update unet_2d_blocks.py
      
       A small change to the act_fun argument line.
      
       * I ran the black command to reformat the code
      
      * Update unet_2d_blocks.py
      
       Added a docstring similar to the one in the original diffusion repository.
      
       * Fix the bug mentioned in issue #6901
      
      * Update src/diffusers/schedulers/scheduling_ddim_flax.py
       Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Fix linter
      
      * Restore empty line
      
      ---------
       Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
       Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      46fac824
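`clip_sample` amounts to clamping the predicted original sample to [-1, 1] before it is reused in the DDIM update, matching the PyTorch scheduler's behavior. A numpy sketch of the missing branch (values illustrative):

```python
import numpy as np

pred_original_sample = np.array([-1.7, -0.3, 0.2, 2.5])
clip_sample = True
if clip_sample:
    # Same clamp the PyTorch DDIM scheduler applies when clip_sample=True.
    pred_original_sample = np.clip(pred_original_sample, -1.0, 1.0)
print(pred_original_sample)  # [-1.  -0.3  0.2  1. ]
```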
  21. 05 Mar, 2024 1 commit
  22. 04 Mar, 2024 1 commit
    • Enable PyTorch's FakeTensorMode for EulerDiscreteScheduler scheduler (#7151) · ca6cdc77
      Thiago Crepaldi authored
      * Enable FakeTensorMode for EulerDiscreteScheduler scheduler
      
      PyTorch's FakeTensorMode does not support `.numpy()` or `numpy.array()`
      calls.
      
       This PR replaces the `sigmas` numpy array with an equivalent PyTorch tensor
      
      Repro
      
      ```python
      with torch._subclasses.FakeTensorMode() as fake_mode, ONNXTorchPatcher():
          fake_model = DiffusionPipeline.from_pretrained(model_name, low_cpu_mem_usage=False)
      ```
      
      that otherwise would fail with
      `RuntimeError: .numpy() is not supported for tensor subclasses.`
      
      * Address comments
      ca6cdc77
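Keeping `sigmas` as a torch tensor avoids the `.numpy()` round-trip that FakeTensorMode rejects. A hedged sketch of the pattern (the beta schedule is illustrative, not the scheduler's actual config):

```python
import torch

betas = torch.linspace(0.0001, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
# Stay in torch end to end: no .numpy() or numpy.array() calls, so the
# same code also runs under torch._subclasses.FakeTensorMode.
sigmas = torch.sqrt((1 - alphas_cumprod) / alphas_cumprod)
assert isinstance(sigmas, torch.Tensor)
```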
  23. 27 Feb, 2024 3 commits
  24. 13 Feb, 2024 1 commit
  25. 08 Feb, 2024 1 commit
  26. 01 Feb, 2024 1 commit
  27. 31 Jan, 2024 1 commit
  28. 30 Jan, 2024 1 commit
  29. 26 Jan, 2024 1 commit
  30. 22 Jan, 2024 1 commit
  31. 19 Jan, 2024 1 commit
  32. 28 Dec, 2023 1 commit