- 09 May, 2024 (2 commits)
-
YiYi Xu authored
* support custom sigmas and timesteps for the DPM and Euler schedulers

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
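A minimal usage sketch of the feature, assuming the `sigmas` keyword this commit adds to `set_timesteps`; the checkpoint and values are illustrative:

```python
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
custom_sigmas = [14.6, 9.2, 5.8, 3.6, 2.2, 1.3, 0.7, 0.3, 0.1, 0.0]
scheduler.set_timesteps(sigmas=custom_sigmas)
print(scheduler.timesteps)  # timesteps interpolated from the custom sigmas
```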
-
Tolga Cangöz authored
Fix imports
-
- 08 May, 2024 (1 commit)
-
Philip Pham authored
`model_output.shape` may only have rank 1. There are warnings related to the use of random keys:

```
tests/schedulers/test_scheduler_flax.py: 13 warnings
  /Users/phillypham/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py:268: FutureWarning: normal accepts a single key, but was given a key array of shape (1, 2) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)

tests/schedulers/test_scheduler_flax.py::FlaxDDPMSchedulerTest::test_betas
  /Users/phillypham/virtualenv/diffusers/lib/python3.9/site-packages/jax/_src/random.py:731: FutureWarning: uniform accepts a single key, but was given a key array of shape (1,) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    u = uniform(key, shape, dtype, lo, hi)  # type: ignore[arg-type]
```
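A hedged sketch of the key-handling pattern behind the first warning; the fix shown here (consuming a single key instead of a `(1, 2)`-shaped key array) is inferred from the warning text, not copied from the actual diff:

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)

# Problematic pattern: jax.random.split(key, 1) returns a key *array* of shape
# (1, 2); passing it to jax.random.normal triggers the FutureWarning above.
bad_keys = jax.random.split(key, 1)

# Warning-free pattern: split into two single keys and consume one of them.
key, split_key = jax.random.split(key)
noise = jax.random.normal(split_key, shape=(4, 32, 32, 3), dtype=jnp.float32)
```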
-
- 03 May, 2024 (1 commit)
-
Lucain authored
* Deprecate resume_download
* align docstring with transformers
* style

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 29 Apr, 2024 (1 commit)
-
RuiningLi authored
* Added get_velocity function to EulerDiscreteScheduler.
* Fix white space on blank lines
* Added copied from statement
* back to the original.

Co-authored-by: Ruining Li <ruining@robots.ox.ac.uk>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
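For context, a minimal sketch of the v-prediction velocity target as defined elsewhere in diffusers (e.g. `DDPMScheduler.get_velocity`); whether this commit matches it exactly is an assumption:

```python
import torch

def get_velocity_sketch(sample: torch.Tensor, noise: torch.Tensor,
                        alphas_cumprod: torch.Tensor, t: int) -> torch.Tensor:
    # v-prediction target: v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0
    sqrt_alpha_prod = alphas_cumprod[t] ** 0.5
    sqrt_one_minus_alpha_prod = (1.0 - alphas_cumprod[t]) ** 0.5
    return sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
```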
-
- 27 Apr, 2024 (1 commit)
-
Sayak Paul authored
* introduce sigma schedule
* address yiyi
* update docstrings
* implement the schedule for EDMDPMSolverMultistepScheduler

Co-authored-by: Suraj Patil <surajp815@gmail.com>
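As a point of reference for the sigma schedule being introduced, a sketch of the Karras et al. (2022) schedule commonly used by diffusers' EDM-style schedulers; treating this as the schedule in question is an assumption:

```python
import numpy as np

def karras_sigmas(sigma_min: float, sigma_max: float, num_steps: int, rho: float = 7.0) -> np.ndarray:
    # Interpolate in sigma**(1/rho) space, then map back (Karras et al., 2022, eq. 5).
    ramp = np.linspace(0, 1, num_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

sigmas = karras_sigmas(sigma_min=0.002, sigma_max=80.0, num_steps=10)
```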
-
- 03 Apr, 2024 (2 commits)
-
Beinsezii authored
* UniPC Multistep: add `rescale_betas_zero_snr`

  Same patch as DPM and Euler, with the patched final alpha cumprod. BF16 doesn't seem to break down, I think because UniPC already upcasts during some phases. We could still force an upcast, since it only loses ≈ 0.005 it/s for me, but the difference in output is very small. A better endeavor might be upcasting in step() and removing all the other upcasts elsewhere.

* UniPC ZSNR UT
* Re-add `rescale_betas_zero_snr` doc (oops)
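A condensed sketch of the zero-terminal-SNR beta rescaling that the flag enables, following the `rescale_zero_terminal_snr` helper already used by the DPM and Euler schedulers mentioned above:

```python
import torch

def rescale_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    # Shift/scale sqrt(alpha_bar) so the final cumulative alpha is exactly 0,
    # giving the last timestep zero SNR (Lin et al., "Common Diffusion Noise
    # Schedules and Sample Steps are Flawed").
    alphas_bar_sqrt = (1.0 - betas).cumprod(dim=0).sqrt()
    first, last = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()
    alphas_bar_sqrt -= last                    # shift the last value to 0
    alphas_bar_sqrt *= first / (first - last)  # rescale the first value back
    alphas_bar = alphas_bar_sqrt**2
    alphas = torch.cat([alphas_bar[0:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```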
-
Beinsezii authored
* UniPC UTs: iterate solvers on FP16; it wasn't catching errors on order==3. Might be excessive?
* UniPC Multistep: fix tensor dtype/device on order=3
* UniPC UTs: add v_pred to the fp16 test iteration, for completeness' sake. Probably overkill?
-
- 02 Apr, 2024 (1 commit)
-
Sayak Paul authored
* add: utility to format our docs too 📜
* debugging saga
* fix: message
* checking
* should be fixed.
* revert pipeline_fixture
* remove empty line
* make style
* fix: setup.py
* style.
-
- 30 Mar, 2024 (1 commit)
-
Beinsezii authored
* Add `final_sigma_zero` to UniPCMultistep

  Effectively the same trick as DDIM's `set_alpha_to_one` and DPM's `final_sigma_type='zero'`. Currently False by default, but maybe this should be True?

* `final_sigma_zero: bool` -> `final_sigmas_type: str`

  Should match DPM Multistep 1:1 now.

* Set `final_sigmas_type='sigma_min'` in UniPC UTs
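An illustrative sketch of what the two `final_sigmas_type` options mean for the last entry of the sigma schedule (mirroring the DPM Multistep logic this commit aligns with; variable names are assumptions):

```python
import torch

def final_sigma(final_sigmas_type: str, alphas_cumprod: torch.Tensor) -> float:
    # "sigma_min": stop at the smallest sigma seen during training;
    # "zero": take the final step all the way to a noise-free sample.
    if final_sigmas_type == "sigma_min":
        return float(((1 - alphas_cumprod[0]) / alphas_cumprod[0]) ** 0.5)
    if final_sigmas_type == "zero":
        return 0.0
    raise ValueError(f"unknown final_sigmas_type: {final_sigmas_type}")
```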
-
- 21 Mar, 2024 (1 commit)
-
M. Tolga Cangöz authored
* Fix typos
* Fix typo in SVD.md
-
- 19 Mar, 2024 (2 commits)
-
YiYi Xu authored
* fix
* fix
* add tests
* fix

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Aryan authored
* add missing copied from statements in tcd scheduler
* update docstring

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 18 Mar, 2024 (2 commits)
-
M. Tolga Cangöz authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
M. Tolga Cangöz authored
* Fix PyTorch's convention for inplace functions
* Fix import structure in __init__.py and update config loading logic in test_config.py
* Update configuration access
* Fix typos
* Trim trailing white spaces
* Fix typo in logger name
* Revert "Fix PyTorch's convention for inplace functions" (reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d)
* Fix typo in step_index property description
* Revert "Update configuration access" (reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8)
* Revert "Fix import structure in __init__.py and update config loading logic in test_config.py" (reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171)
* Fix typos
* Fix typos
* Fix typos
* Fix a typo: tranform -> transform
-
- 14 Mar, 2024 (2 commits)
-
Kenneth Gerald Hamilton authored
* update get_order_list if statement
* revert
-
Beinsezii authored
* Change step_offset scheduler docstrings
* Mention it may be needed by some models
* More docstrings: these ones failed a literal search-and-replace because I performed it case-sensitively, which is fun.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 13 Mar, 2024 (2 commits)
-
Manuel Brack authored
* Setup LEdits++ file structure
* Fix import
* LEditsPP Stable Diffusion pipeline
* Include variable image aspect ratios
* Implement LEDITS++ for SDXL
* clean up LEditsPPPipelineStableDiffusion
* Adjust inversion output
* Added docs, more cleanup for LEditsPPPipelineStableDiffusion
* clean up LEditsPPPipelineStableDiffusionXL
* Update documentation
* Fix documentation import
* Add skeleton IF implementation
* Fix documentation typo
* Add LEDITS docs to toctree
* Add missing title
* Finalize SD documentation
* Finalize SD-XL documentation
* Fix code style and quality
* Fix typo
* Fix return types
* added LEditsPPPipelineIF; minor changes for LEditsPPPipelineStableDiffusion and LEditsPPPipelineStableDiffusionXL
* Fix copy reference
* add documentation for IF
* Add first tests
* Fix batching for SD-XL
* Fix text encoding and perfect reconstruction for SD-XL
* Add tests for SD-XL, minor changes
* move user_mask to correct device, use cross_attention_kwargs also for inversion
* Example docstring
* Fix attention resolution for non-square images
* Refactoring for PR review
* Safely remove ledits_utils.py
* Style fixes
* Replace assertions with ValueError
* Remove LEditsPPPipelineIF
* Remove unnecessary input checks
* Refactoring of CrossAttnProcessor
* Revert unnecessary changes to scheduler
* Remove first progress-bar in inversion
* Refactor scheduler usage and reset
* Use imageprocessor instead of custom logic
* Fix scheduler init warning
* Fix error when running the pipeline in fp16
* Update documentation wrt perfect inversion
* Update tests
* Fix code quality and copy consistency
* Update LEditsPP import
* Remove enable/disable methods that are now in StableDiffusionMixin
* Change import in docs
* Revert import structure change
* Fix ledits imports

Co-authored-by: Katharina Kornmeier <katharina.kornmeier@stud.tu-darmstadt.de>
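A hedged usage sketch of the new pipeline's two-phase flow (invert, then edit); the checkpoint name and keyword arguments are illustrative assumptions based on the description above:

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusion
from diffusers.utils import load_image

pipe = LEditsPPPipelineStableDiffusion.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = load_image("input.png")
# Phase 1: invert the input image to recover its latent trajectory.
pipe.invert(image=image, num_inversion_steps=50, skip=0.1)
# Phase 2: apply semantic edits on top of the (near-)perfect reconstruction.
edited = pipe(
    editing_prompt=["cherry blossom"],
    edit_guidance_scale=10.0,
    edit_threshold=0.75,
).images[0]
```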
-
Sayak Paul authored
switch to logger.warning
-
- 08 Mar, 2024 (1 commit)
-
Chi authored
* Added a new docstring to the class; this makes it easier for other developers to understand what it does and where it is used.
* Update src/diffusers/models/unet_2d_blocks.py (change suggested by a maintainer)
* Update src/diffusers/models/unet_2d_blocks.py (add suggested text)
* Update unet_2d_blocks.py: changed the "Parameter" heading to "Args"
* Update unet_2d_blocks.py: set proper indentation in this file
* Update unet_2d_blocks.py: small change to the act_fun argument line
* Ran the black command to reformat the code style
* Update unet_2d_blocks.py: add a docstring similar to the one in the original diffusion repository
* Fix bug mentioned in issue #6901
* Update src/diffusers/schedulers/scheduling_ddim_flax.py
* Fix linter
* Restore empty line

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 05 Mar, 2024 (1 commit)
-
Michael authored
* add: support TCD scheduler

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
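A minimal sketch of swapping the new scheduler into a pipeline with the standard `from_config` pattern; the checkpoint is an illustrative assumption, and TCD is typically paired with distilled (TCD-LoRA) weights:

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Few-step sampling without classifier-free guidance, as in consistency-style models.
image = pipe("a lighthouse at dusk", num_inference_steps=4, guidance_scale=0.0).images[0]
```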
-
- 04 Mar, 2024 (1 commit)
-
Thiago Crepaldi authored
* Enable FakeTensorMode for the EulerDiscreteScheduler

  PyTorch's FakeTensorMode does not support `.numpy()` or `numpy.array()` calls. This PR replaces the `sigmas` numpy array with an equivalent PyTorch tensor.

  Repro:

  ```python
  with torch._subclasses.FakeTensorMode() as fake_mode, ONNXTorchPatcher():
      fake_model = DiffusionPipeline.from_pretrained(model_name, low_cpu_mem_usage=False)
  ```

  This would otherwise fail with `RuntimeError: .numpy() is not supported for tensor subclasses.`

* Address comments
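A hedged before/after sketch of the kind of substitution the commit describes; the exact expressions in the scheduler may differ:

```python
import numpy as np
import torch

alphas_cumprod = torch.linspace(0.999, 0.01, 1000)

# Before: a numpy round-trip, which FakeTensorMode cannot trace.
sigmas_np = np.array(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5)

# After: stay in torch end-to-end so fake tensors propagate cleanly.
sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5
```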
-
- 27 Feb, 2024 (3 commits)
-
Beinsezii authored
* DPMMultistep rescale_betas_zero_snr
* DPM upcast samples in step()
* DPM rescale_betas_zero_snr UT
* DPMSolverMulti move sample upcast after model convert (avoids having to re-use the dtype)
* Add a newline for Ruff
-
Suraj Patil authored
* add DPM scheduler with EDM formulation
* set sigmas in init
* add _compute_sigmas
* Apply suggestions from code review
* address some review comments
* up
* add tests

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Suraj Patil authored
* Add EDMEulerScheduler
* address review comments
* fix import
* fix test
* add tests
* add co-author

Co-authored-by: dg845 <dgu8957@gmail.com>
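For reference, a sketch of the EDM preconditioning (Karras et al., 2022) that "EDM formulation" refers to; `sigma_data` is 0.5 in the paper, and mapping this one-to-one onto the scheduler's internals is an assumption:

```python
import math

def edm_precondition(sigma: float, sigma_data: float = 0.5):
    # Denoiser wrapping: D(x; sigma) = c_skip * x + c_out * F(c_in * x, c_noise).
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
    c_in = 1.0 / math.sqrt(sigma**2 + sigma_data**2)
    c_noise = 0.25 * math.log(sigma)
    return c_skip, c_out, c_in, c_noise
```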
-
- 13 Feb, 2024 (1 commit)
-
YiYi Xu authored
[DPMSolverSinglestepScheduler] correct `get_order_list` for `solver_order=2` and `lower_order_final=True` (#6953)

* add
* change default

Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 08 Feb, 2024 (1 commit)
-
Sayak Paul authored
change to 2024
-
- 01 Feb, 2024 (1 commit)
-
YiYi Xu authored
-
- 31 Jan, 2024 (1 commit)
-
Steven Liu authored
add missing param
-
- 30 Jan, 2024 (1 commit)
-
Yunxuan Xiao authored
* load cumprod tensor to device
* fixing ci
* make fix-copies

Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>
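A hedged sketch of the device fix the title describes: keep the cumulative-product lookup on the sample's device rather than indexing a CPU tensor every step (names follow diffusers conventions but are assumptions here):

```python
import torch

def gather_alpha_prod(alphas_cumprod: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
    # Moving the table to the timestep's device avoids a host/device round-trip
    # on every scheduler step.
    return alphas_cumprod.to(timestep.device)[timestep]
```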
-
- 26 Jan, 2024 (1 commit)
-
Patrick von Platen authored
-
- 22 Jan, 2024 (1 commit)
-
Junsong Chen authored
* add Sa-Solver

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: scxue <xueshuchen17@mails.ucas.edu.cn>
Co-authored-by: jschen <chenjunsong4@h-partners.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
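A minimal usage sketch, assuming the solver is exposed as `SASolverScheduler` and supports the standard `from_config` swap; the checkpoint is illustrative:

```python
from diffusers import DiffusionPipeline, SASolverScheduler

pipe = DiffusionPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS")
# SA-Solver is a stochastic Adams-type predictor-corrector solver.
pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config)
```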
-
- 19 Jan, 2024 (1 commit)
-
YiYi Xu authored
* fix

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 28 Dec, 2023 (1 commit)
-
Adrian Punga authored
Fix support for MPS: MPS doesn't support float64.
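A hedged sketch of the usual workaround in diffusers schedulers: fall back to float32 when the target device is MPS, since float64 ops raise there:

```python
import torch

def timestep_dtype(device: torch.device) -> torch.dtype:
    # Apple's MPS backend has no float64 support, so use float32 there.
    return torch.float32 if device.type == "mps" else torch.float64

device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
timesteps = torch.arange(10, device=device, dtype=timestep_dtype(device))
```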
-
- 26 Dec, 2023 (2 commits)
-
Justin Ruan authored
* Remove unused parameters and fix `FutureWarning`
* Fix wrong config instance
* update unittest for `DDIMInverseScheduler`
-
dg845 authored
* Add rescale_betas_zero_snr argument to DDPMScheduler.
* Propagate rescale_betas_zero_snr changes to DDPMParallelScheduler.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Dec, 2023 (1 commit)
-
Will Berman authored
* amused rename
* Update docs/source/en/api/pipelines/amused.md
* AdaLayerNormContinuous default values
* custom micro conditioning
* micro conditioning docs
* put lookup from codebook in constructor
* fix conversion script
* remove manual fused flash attn kernel
* add training script
* temp remove training script
* add dummy gradient checkpointing func
* clarify temperatures is an instance variable by setting it
* remove additional SkipFF block args
* hardcode norm args
* rename tests folder
* fix paths and samples
* fix tests
* add training script
* training readme
* lora saving and loading
* non-lora saving/loading
* some readme fixes
* guards
* Update docs/source/en/api/pipelines/amused.md
* Update examples/amused/README.md
* Update examples/amused/train_amused.py
* vae upcasting
* add fp16 integration tests
* use tuple for micro cond
* copyrights
* remove casts
* delegate to torch.nn.LayerNorm
* move temperature to pipeline call
* upsampling/downsampling changes

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 20 Dec, 2023 (1 commit)
-
Beinsezii authored
* EulerAncestral: add `rescale_betas_zero_snr`

  Uses the same infinite-sigma fix as EulerDiscrete. Interestingly, the ancestral version had the opposite problem: too much contrast instead of too little.

* UT for EulerAncestral `rescale_betas_zero_snr`
* EulerAncestral: upcast samples during step()

  It helps this scheduler too, particularly when the model is using bf16. While the noise dtype is still the model's, it is automatically upcast for the add, so all it affects is determinism.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
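A hedged sketch of the upcast-in-step() pattern referenced here and in the DPM/UniPC commits above; it shows the shape of the idea, not the scheduler's actual update rule:

```python
import torch

def step_math_in_fp32(sample: torch.Tensor, model_output: torch.Tensor,
                      sigma: float, sigma_next: float) -> torch.Tensor:
    # Do the numerically sensitive update in float32...
    x = sample.float()
    d = model_output.float()
    prev = x + (sigma_next - sigma) * d  # simplified Euler-style update
    # ...then hand back the model's dtype so the rest of the pipeline is unaffected.
    return prev.to(model_output.dtype)
```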
-
- 15 Dec, 2023 (1 commit)
-
Patrick von Platen authored
* correct
* Apply suggestions from code review
* make style
-
- 07 Dec, 2023 (1 commit)
-
YiYi Xu authored
* fix
* copies

Co-authored-by: yiyixuxu <yixu310@gmail.com>
-