- 30 Sep, 2022 1 commit
Josh Achiam authored
* Allow resolutions that are not multiples of 64 * ran black * fix bug * add test * more explanation * more comments Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
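For context, a minimal sketch of the kind of size check this change relaxes, assuming the Stable Diffusion VAE's downsampling factor of 8; the function and argument names are illustrative, not the pipeline's actual code:

```python
# Hypothetical illustration: the pipeline only needs height/width divisible by the
# VAE downsampling factor (8), not by 64.
def validate_resolution(height: int, width: int, vae_scale_factor: int = 8) -> None:
    if height % vae_scale_factor != 0 or width % vae_scale_factor != 0:
        raise ValueError(
            f"`height` and `width` must be divisible by {vae_scale_factor}, "
            f"got {height} and {width}."
        )

validate_resolution(512, 448)  # ok: both are multiples of 8, though not of 64
```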
-
- 27 Sep, 2022 1 commit
Kashif Rasul authored
* pytorch only schedulers * fix style * remove match_shape * pytorch only ddpm * remove SchedulerMixin * remove numpy from karras_ve * fix types * remove numpy from lms_discrete * remove numpy from pndm * fix typo * remove mixin and numpy from sde_vp and ve * remove remaining tensor_format * fix style * sigmas has to be torch tensor * removed set_format in readme * remove set format from docs * remove set_format from pipelines * update tests * fix typo * continue to use mixin * fix imports * removed unused imports * match shape instead of assuming image shapes * remove import typo * update call to add_noise * use math instead of numpy * fix t_index * removed commented out numpy tests * timesteps needs to be discrete * cast timesteps to int in flax scheduler too * fix device mismatch issue * small fix * Update src/diffusers/schedulers/scheduling_pndm.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
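A rough sketch of the torch-only pattern these schedulers move toward; the function and variable names are illustrative, not the actual scheduler code:

```python
import torch

# Illustrative: keep everything as torch tensors instead of numpy arrays,
# and keep timesteps discrete (integer) so they can index the sigma table.
def add_noise_sketch(samples: torch.Tensor, noise: torch.Tensor,
                     sigmas: torch.Tensor, timesteps: torch.Tensor) -> torch.Tensor:
    timesteps = timesteps.to(dtype=torch.long)          # timesteps need to be discrete
    step_sigmas = sigmas.to(samples.device)[timesteps]  # avoid device mismatches
    while step_sigmas.ndim < samples.ndim:              # broadcast over batch dims
        step_sigmas = step_sigmas.unsqueeze(-1)
    return samples + noise * step_sigmas
```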
-
- 24 Sep, 2022 1 commit
Grigory Sizov authored
fix formula for noise levels in karras scheduler and tests
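For reference, a small sketch of the noise-level (sigma) schedule proposed in Karras et al. (2022); the parameter values are just common defaults, and the exact variant this particular scheduler implements may differ in detail:

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float = 0.002, sigma_max: float = 80.0,
                  rho: float = 7.0) -> np.ndarray:
    # sigma_i = (sigma_max^(1/rho) + i/(n-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho
    ramp = np.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
```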
-
- 21 Sep, 2022 2 commits
Anton Lozhkov authored
-
Mishig Davaadorj authored
-
- 20 Sep, 2022 1 commit
Anton Lozhkov authored
* Add the K-LMS scheduler to the inpainting pipeline + tests * Remove redundant casts
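A hedged usage sketch of swapping in the K-LMS scheduler; the checkpoint id and constructor arguments are assumptions, not taken from this commit:

```python
from diffusers import LMSDiscreteScheduler, StableDiffusionInpaintPipeline

# Illustrative only: override the default scheduler with K-LMS when loading the pipeline.
lms = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # checkpoint choice is an assumption
    scheduler=lms,
)
```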
-
- 16 Sep, 2022 3 commits
Patrick von Platen authored
* [Download] Smart downloading * add test * finish test * update * make style
-
Anton Lozhkov authored
* Quick fix for the img2img tests * Remove debug lines
-
Anton Lozhkov authored
* Finally fix the image-based SD tests * Remove autocast * Remove autocast in image tests
-
- 12 Sep, 2022 1 commit
Kashif Rasul authored
* update expected results of slow tests * relax sum and mean tests * Print shapes when reporting exception * formatting * fix sentence * relax test_stable_diffusion_fast_ddim for gpu fp16 * relax flaky tests on GPU * added comment on large tolerances * black * format * set scheduler seed * added generator * use np.isclose * set num_inference_steps to 50 * fix dep. warning * update expected_slice * preprocess if image * updated expected results * updated expected from CI * pass generator to VAE * undo change back to orig * use original * revert back the expected on cpu * revert back values for CPU * more undo * update result after using gen * update mean * set generator for mps * update expected on CI server * undo * use new seed every time * cpu manual seed * reduce num_inference_steps * style * use generator for randn Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
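A minimal sketch of the kind of relaxed slice comparison these slow tests move to; the tolerance and indexing are illustrative:

```python
import numpy as np

def assert_slice_close(image: np.ndarray, expected_slice: np.ndarray, atol: float = 1e-2) -> None:
    # Illustrative: compare a 3x3 corner of the last channel against a reference slice
    # with a loose tolerance, since GPU/fp16 kernels introduce small numerical drift.
    image_slice = image[0, -3:, -3:, -1].flatten()
    assert np.allclose(image_slice, expected_slice, atol=atol), (image_slice, expected_slice)
```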
-
- 08 Sep, 2022 4 commits
Patrick von Platen authored
* [Tests] Correct image folder tests * up
-
Anton Lozhkov authored
* initial export and design * update imports * custom provider, import fixes * Update src/diffusers/onnx_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/onnx_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * remove push_to_hub * Update src/diffusers/onnx_utils.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * remove torch_device * numpify the rest of the pipeline * torchify the safety checker * revert tensor * Code review suggestions + quality * fix tests * fix provider, add an end-to-end test * style Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Suraj Patil <surajp815@gmail.com>
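A hedged usage sketch of the ONNX pipeline this PR adds; the checkpoint, revision, and provider values are assumptions based on how the API looked around this release, and names changed in later versions:

```python
from diffusers import StableDiffusionOnnxPipeline

# Illustrative: run Stable Diffusion through ONNX Runtime on CPU.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",                  # assumed: branch containing the exported ONNX weights
    provider="CPUExecutionProvider",  # any ONNX Runtime execution provider
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```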
-
Anton Lozhkov authored
nicer datasets
-
Pedro Cuenca authored
* Initial support for mps in Stable Diffusion pipeline. * Initial "warmup" implementation when using mps. * Make some deterministic tests pass with mps. * Disable training tests when using mps. * SD: generate latents on CPU then move to device. This is especially important when using the mps device, because generators are not supported there. See for example https://github.com/pytorch/pytorch/issues/84288. In addition, the other pipelines seem to use the same approach: generate the random samples then move to the appropriate device. After this change, generating an image on mps produces the same result as on the CPU, if the same seed is used. * Remove prints. * Pass AutoencoderKL test_output_pretrained with mps. Sampling from `posterior` must be done on CPU. * Style * Do not use torch.long for log op on the mps device. * Perform incompatible padding ops on CPU. UNet tests now pass. See https://github.com/pytorch/pytorch/issues/84535 * Style: fix import order. * Remove unused symbols. * Remove MPSWarmupMixin, do not apply automatically. We do apply warmup in the tests, but not during normal use. This adopts some PR suggestions by @patrickvonplaten. * Add comment for mps fallback to CPU step. * Add README_mps.md for mps installation and use. * Apply `black` to modified files. * Restrict README_mps to SD, show measures in table. * Make PNDM indexing compatible with mps. Addresses #239. * Do not use float64 when using LDMScheduler. Fixes #358. * Fix typo identified by @patil-suraj Co-authored-by: Suraj Patil <surajp815@gmail.com> * Adapt example to new output style. * Restore 1:1 results reproducibility with CompVis. However, mps latents need to be generated on CPU because generators don't work in the mps device. * Move PyTorch nightly to requirements. * Adapt `test_scheduler_outputs_equivalence` to mps. * mps: skip training tests instead of ignoring silently. * Make VQModel tests pass on mps. * mps ddim tests: warmup, increase tolerance. * ScoreSdeVeScheduler indexing made mps compatible. * Make ldm pipeline tests pass using warmup. * Style * Simplify casting as suggested in PR. * Add Known Issues to readme. * `isort` import order. * Remove _mps_warmup helpers from ModelMixin. And just make changes to the tests. * Skip tests using unittest decorator for consistency. * Remove temporary var. * Remove spurious blank space. * Remove unused symbol. * Remove README_mps. Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
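A minimal sketch of the "generate latents on CPU, then move to device" pattern described above, assuming an mps-capable PyTorch build; the shapes and seed are illustrative:

```python
import torch

# Illustrative: torch.Generator is not supported on the mps device, so draw the
# initial latents on CPU with a seeded generator, then move them to mps.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator, device="cpu")
latents = latents.to(device)
# Using the same seed now yields the same image on mps as on CPU.
```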
-
- 06 Sep, 2022 2 commits
Patrick von Platen authored
* up * add tests * correct * up * finish * better naming * Update README.md Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Anton Lozhkov authored
move to fp16, update ddim
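Presumably this refers to running the slow pipeline tests in half precision; a hedged sketch, where the revision and dtype arguments are assumptions from that period's API:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative: load the pipeline in fp16 to cut memory use and speed up slow GPU tests.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",              # assumed: branch with half-precision weights
    torch_dtype=torch.float16,
).to("cuda")
```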
-
- 05 Sep, 2022 1 commit
Patrick von Platen authored
* add outputs for models * add for pipelines * finish schedulers * better naming * adapt tests as well * replace dict access with . access * make schedulers work * finish * correct readme * make bcp compatible * up * small fix * finish * more fixes * more fixes * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/models/vae.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Adapt model outputs * Apply more suggestions * finish examples * correct Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
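A rough sketch of the output-class pattern this PR introduces; the class and field names are illustrative, not the library's exact ones:

```python
from dataclasses import dataclass
import numpy as np

# Illustrative: models, schedulers, and pipelines return small output objects,
# so callers use attribute access instead of dict lookups.
@dataclass
class PipelineOutput:
    images: np.ndarray
    nsfw_content_detected: list

output = PipelineOutput(images=np.zeros((1, 512, 512, 3)), nsfw_content_detected=[False])
image = output.images[0]  # attribute access replaces the old dict-style output["sample"][0]
```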
-
- 03 Sep, 2022 1 commit
Patrick von Platen authored
-
- 02 Sep, 2022 1 commit
Anton Lozhkov authored
* Fix tqdm and OOM * tqdm auto * tqdm is still spamming, try to disable it altogether * rather just set the pipe config, to keep the global tqdm clean * style
-
- 01 Sep, 2022 1 commit
Anton Lozhkov authored
* Fix nondeterministic tests for GPU runs * force SD fast tests to the CPU
-
- 31 Aug, 2022 2 commits
Patrick von Platen authored
* add fast tests * Finish
-
Anton Lozhkov authored
-
- 30 Aug, 2022 2 commits
Patrick von Platen authored
* [Examples readme] * Improve * more * save * save * save more * up * up * Apply suggestions from code review Co-authored-by: Nathan Lambert <nathan@huggingface.co> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * up * make deterministic * up * better * up * add generator to img2img pipe * save * make pipelines deterministic * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py Co-authored-by: Anton Lozhkov <anton@huggingface.co> * apply all changes * more corrections * finish * improve table * more fixes * up * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Anton Lozhkov <anton@huggingface.co> * Update src/diffusers/pipelines/README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * add better links * fix more * finish Co-authored-by: Nathan Lambert <nathan@huggingface.co> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Anton Lozhkov <anton@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com>
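A hedged sketch of the determinism pattern mentioned above ("add generator to img2img pipe", "make pipelines deterministic"); the checkpoint, prompt, and argument names are assumptions from that period's API:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Illustrative: a seeded generator makes pipeline outputs reproducible across runs.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
generator = torch.Generator(device="cuda").manual_seed(42)
init_image = Image.new("RGB", (512, 512), (128, 128, 128))  # placeholder input image
result = pipe(prompt="a fantasy landscape", init_image=init_image,
              strength=0.75, generator=generator)
image = result.images[0]  # same seed + same inputs -> same image
```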
-
hysts authored
* Refactor progress bar of pipeline __call__ * Make any tqdm configs available * remove init * add some tests * remove file * finish * make style * improve progress bar test Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
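A usage sketch of the per-pipeline progress-bar configuration this refactor enables; the pipeline and checkpoint are illustrative, and the forwarded kwargs are assumptions:

```python
from diffusers import DDPMPipeline

# Illustrative: configure (or silence) the pipeline's own progress bar instead of
# touching the global tqdm state.
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
pipe.set_progress_bar_config(disable=True)       # e.g. keep test logs clean
# pipe.set_progress_bar_config(desc="sampling")  # or pass through other tqdm kwargs
image = pipe().images[0]
```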
-
- 29 Aug, 2022 1 commit
Patrick von Platen authored
* [Tests] Make sure tests are on GPU * move more models * speed up tests
-
- 24 Aug, 2022 1 commit
Kashif Rasul authored
* split tests_modeling_utils * Fix SD tests .to(device) * fix merge * Fix style Co-authored-by: anton-l <anton@huggingface.co>
-