- 03 Nov, 2022 3 commits
-
-
Revist authored
* feat: add repaint
* fix: fix quality check with `make fix-copies`
* fix: remove old unnecessary arg
* chore: change default to DDPM (looks better in experiments)
* ".to(device)" changed to "device=" (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* make generator device-specific (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* make generator device-specific and change shape (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* fix: add preprocessing for image and mask (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* fix: update test (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Update src/diffusers/pipelines/repaint/pipeline_repaint.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Add docs and examples
* Fix toctree

Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
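A minimal usage sketch of the new RePaint pipeline. This is an assumption-laden illustration rather than the PR's documented example: the checkpoint id follows the unconditional DDPM CelebA-HQ model commonly paired with RePaint, the mask convention should be checked against the pipeline docs, and argument names may differ slightly between diffusers versions.

```python
import torch
import PIL.Image
from diffusers import RePaintPipeline, RePaintScheduler

# Any 256x256 image plus a binary mask marking the region to inpaint
# (check the pipeline docs for the exact mask convention).
original_image = PIL.Image.open("face_256.png").resize((256, 256))
mask_image = PIL.Image.open("mask_256.png").resize((256, 256))

# Default RePaint schedule; the official example loads the scheduler
# configuration from the model repo instead.
scheduler = RePaintScheduler()
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,    # RePaint's resampling ("jump") schedule
    jump_n_sample=10,
    generator=generator,
)
inpainted = output.images[0]
```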
-
Anton Lozhkov authored
* Allow saving `None` pipeline components * support flax as well * style
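A short illustration of what this enables, sketched against the standard pipeline API (the model id is only an example): a pipeline loaded with an optional component set to `None` can now be written back to disk.

```python
from diffusers import StableDiffusionPipeline

# Load without the safety checker; the component slot is simply None.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None
)

# With this change, None components are skipped when the pipeline folder is written.
pipe.save_pretrained("./sd15-no-safety-checker")
```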
-
Anton Lozhkov authored
* Remove the hub token * replace repos * style
-
- 02 Nov, 2022 14 commits
-
-
Patrick von Platen authored
* [Loading] Ignore unneeded files * up
-
Denis authored
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly". This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* fixed code style

Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
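A rough sketch of the idea behind the new option, not the example script's actual code: the only change in the training loop is which tensor the model output is regressed onto. The tiny model config and the `predict_x0` toggle below are made up for illustration.

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

predict_x0 = True  # illustrative toggle; the real script exposes its own flag

# Deliberately tiny UNet so the sketch runs quickly.
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    layers_per_block=1,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_images = torch.randn(4, 3, 32, 32)  # stand-in for a training batch
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (4,))

noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
model_output = model(noisy_images, timesteps).sample

# Predicting x0 regresses onto the clean image; the standard DDPM objective
# regresses onto the added noise (epsilon).
target = clean_images if predict_x0 else noise
loss = F.mse_loss(model_output, target)
```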
-
Kashif Rasul authored
* initial `get_sinusoidal_embeddings` * added asserts * better var name * fix docs
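The embedding itself is the standard transformer-style sinusoid over timesteps. A small PyTorch sketch of the math (the library's Flax helper additionally exposes options such as a frequency shift and the sin/cos ordering):

```python
import math
import torch

def sinusoidal_timestep_embeddings(timesteps: torch.Tensor, embedding_dim: int) -> torch.Tensor:
    """Illustrative re-derivation, not the library implementation."""
    half_dim = embedding_dim // 2
    # Geometrically spaced frequencies from 1 down to 1/10000.
    exponent = -math.log(10000.0) * torch.arange(half_dim, dtype=torch.float32) / half_dim
    freqs = torch.exp(exponent)                         # (half_dim,)
    args = timesteps.float()[:, None] * freqs[None, :]  # (batch, half_dim)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

emb = sinusoidal_timestep_embeddings(torch.tensor([0, 10, 500]), embedding_dim=128)
print(emb.shape)  # torch.Size([3, 128])
```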
-
Yuta Hayashibe authored
-
Grigory Sizov authored
* Fix equality test for DDIM and DDPM
* add docs for `use_clipped_model_output` in DDIM
* fix inline comment
* reorder imports in test_pipelines.py
* Ignore `use_clipped_model_output` if the scheduler doesn't take it
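On the last point, a sketch of the general guard pattern (an assumption about how such a check is typically written, not the pipeline's literal code): the kwarg is only forwarded when the scheduler's `step` signature accepts it.

```python
import inspect

from diffusers import DDIMScheduler, DDPMScheduler

def clipped_output_kwargs(scheduler, use_clipped_model_output: bool) -> dict:
    """Return the extra step kwarg only if this scheduler's step() accepts it."""
    accepts = "use_clipped_model_output" in inspect.signature(scheduler.step).parameters
    return {"use_clipped_model_output": use_clipped_model_output} if accepts else {}

print(clipped_output_kwargs(DDIMScheduler(), True))  # {'use_clipped_model_output': True}
print(clipped_output_kwargs(DDPMScheduler(), True))  # {} -> silently ignored
```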
-
Omiita authored
Fix a small typo in `models/attention.py`: weight -> width
-
Anton Lozhkov authored
* [WIP][CI] Framework and hardware-specific docker images for CI tests
* username
* fix cpu
* try out the image
* push latest
* update workspace
* no root isolation for actions
* add a flax image
* flax and onnx matrix
* fix runners
* add reports
* onnxruntime image
* retry tpu
* fix
* fix
* build onnxruntime
* naming
* onnxruntime-gpu image
* onnxruntime-gpu image, slow tests
* latest jax version
* trigger flax
* run flax tests in one thread
* fast flax tests on cpu
* fast flax tests on cpu
* trigger slow tests
* rebuild torch cuda
* force cuda provider
* fix onnxruntime tests
* trigger slow
* don't specify gpu for tpu
* optimize
* memory limit
* fix flax tests
* disable docker cache
-
Suraj Patil authored
Update README.md
-
Jonathan Rahn authored
Update README.md: fixed typo
-
Patrick von Platen authored
* Rename latent * up
-
rafael authored
* [Community Pipelines] lpw_stable_diffusion: Add `is_cancelled_callback`
* [Community Pipelines] lpw_stable_diffusion_onnx: Add `is_cancelled_callback`
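A hedged usage sketch of the new callback in the `lpw_stable_diffusion` community pipeline: the callable simply returns `True` when generation should stop early. The exact signature and the return value on cancellation should be checked against the community example itself.

```python
import time

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", custom_pipeline="lpw_stable_diffusion"
).to("cuda")

start = time.time()

def over_time_budget() -> bool:
    # Returning True asks the pipeline to cancel the remaining denoising steps.
    return time.time() - start > 30

result = pipe(
    "a highly detailed photograph of a castle at sunset",
    is_cancelled_callback=over_time_budget,
)
```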
-
Lewington-pitsos authored
* improve test precision: get tests passing with greater precision using lewington images
* make old numpy load function a wrapper around a more flexible numpy loading function
* adhere to black formatting
* add more black formatting
* adhere to isort
* loosen precision and replace path

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
* add Euler scheduler in docs * add a section on how to use different schedulers * address Patrick's comments
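The pattern the new section documents, sketched with an illustrative model id: a different scheduler is swapped in by reusing the current scheduler's config.

```python
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the existing scheduler's configuration when instantiating the new one.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```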
-
MatthieuTPHR authored
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one letter variable to appropriate names
* Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the mem eff benchmarks were run
* Removing autocast from optimization markdown
* import_utils: import torch only if it is available

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
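A minimal sketch of enabling the new memory-efficient attention path on a pipeline (model id is illustrative; the optional xformers package must be installed, and per the list above the toggle also exists on the img2img and inpaint pipelines):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Routes attention through xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photograph of an astronaut riding a horse").images[0]
```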
-
- 01 Nov, 2022 1 commit
-
-
MarkRich authored
* initial commit to add imagic to stable diffusion community pipelines
* remove some testing changes
* comments from PR review for imagic stable diffusion
* remove changes from pipeline_stable_diffusion as part of imagic pipeline
* clean up example code and add line back in to pipeline_stable_diffusion for imagic pipeline
* remove unused functions
* small code quality changes for imagic pipeline
* clean up readme
* remove hardcoded logging values for imagic community example
* undo change for DDIMScheduler
-
- 31 Oct, 2022 11 commits
-
-
Laurent Mazare authored
Remove an unused parameter. The `downsample_padding` parameter does not seem to be used in `CrossAttnUpBlock2D` (or by any up block, for that matter), so it is removed.
-
Patrick von Platen authored
* Remove nn sequential * up
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
hlky authored
* k-diffusion-euler
* make style, make quality
* make fix-copies
* fix tests for euler a
* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Update src/diffusers/schedulers/scheduling_euler_discrete.py (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Update src/diffusers/schedulers/scheduling_euler_discrete.py (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* remove unused arg and method
* update doc
* quality
* make flake happy
* use logger instead of warn
* raise error instead of deprecation
* don't require scipy
* pass generator in step
* fix tests
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* Update tests/test_scheduler.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* remove unused generator
* pass generator as extra_step_kwargs
* update tests
* pass generator as kwarg
* pass generator as kwarg
* quality
* fix test for lms
* fix tests

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
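On "pass generator in step": the ancestral sampler draws fresh noise at every step, so reproducibility requires threading a generator through `step`. A stand-alone sketch of driving the scheduler by hand, with a zero tensor standing in for a real UNet prediction:

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(20)

sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma
generator = torch.Generator().manual_seed(0)

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    model_output = torch.zeros_like(model_input)  # stand-in for model(model_input, t)
    # Passing the generator makes the per-step ancestral noise reproducible.
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample
```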
-
Pedro Cuenca authored
Allow None safety_checker when using CPU offload.
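What the fix allows, in a short hedged sketch (model id illustrative): sequential CPU offload no longer fails when an optional component such as the safety checker was loaded as `None`.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
    torch_dtype=torch.float16,
)

# Moves submodules to the GPU one at a time to cut peak memory;
# components that are None are simply skipped.
pipe.enable_sequential_cpu_offload()

image = pipe("a watercolor painting of a lighthouse").images[0]
```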
-
Patrick von Platen authored
* [GitBot] Automatically close issues after inactivity * improve * Add unstale * typo

Co-authored-by: anton-l <anton@huggingface.co>
-
Anton Lozhkov authored
* Fix pipelines user_agent, ignore CI requests * fix circular import * N/A versions * N/A versions
-
-
Patrick von Platen authored
-
- 30 Oct, 2022 1 commit
-
-
Jonatan Kłosko authored
* Move safety detection to model call in Flax safety checker * Update src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
-
- 29 Oct, 2022 5 commits
-
-
Pedro Cuenca authored
* Docs: refer to pre-RC version of PyTorch 1.13.0.
* Remove temporary workaround for unavailable op.
* Update comment to make it less ambiguous.
* Remove use of contiguous in mps. It appears to no longer be necessary.
* Special case: use einsum for much better performance in mps.
* Update mps docs.
* MPS: make pipeline work in half precision.
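Related usage on Apple Silicon, following the updated mps docs (a sketch; the one-step warm-up pass is the docs' recommendation and the model id is an example):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # recommended on mps when memory is tight

prompt = "a photo of an astronaut riding a horse on mars"
_ = pipe(prompt, num_inference_steps=1)  # warm-up pass suggested by the mps docs
image = pipe(prompt).images[0]
```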
-
Pedro Cuenca authored
Tests: upgrade PyTorch CUDA to 11.7. Otherwise the CUDA versions of torch and torchvision mismatch and the examples tests fail. We were requesting CUDA 11.6 for PyTorch while installing the default torchvision build (via setup.py). Another option would be to include torchvision in the same pip install line as torch.
-
MarkRich authored
* add seed resizing to community examples * actually add the file responsible for seed resizing
-
Nathan Lambert authored
-
Minwoo Byeon authored
-
- 28 Oct, 2022 5 commits
-
-
Pedro Cuenca authored
* Update training and fine-tuning docs.
* Update examples README.
* Update README.
* Add Flax fine-tuning section.
* Accept suggestion (Co-authored-by: Anton Lozhkov <anton@huggingface.co>)
* Accept suggestion (Co-authored-by: Anton Lozhkov <anton@huggingface.co>)

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-