- 07 Nov, 2022 1 commit
-
Pedro Cuenca authored
Remove warning about half precision on MPS.
-
- 06 Nov, 2022 1 commit
-
Cheng Lu authored
* add dpmsolver discrete pytorch scheduler
* fix some typos in dpm-solver pytorch
* add dpm-solver pytorch in stable-diffusion pipeline
* add jax/flax version dpm-solver
* change code style
* add docs
* add `add_noise` method for dpmsolver
* add pytorch unit test for dpmsolver
* add dummy object for pytorch dpmsolver
* Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
* Update tests/test_config.py
* resolve the code comments
* rename the file
* change class name
* fix code style
* add auto docs for dpmsolver multistep
* add more explanations for the stabilizing trick (for steps < 15)
* delete the dummy file
* change the API name of predict_epsilon, algorithm_type and solver_type
* add compatible lists
Co-authored-by: Suraj Patil <surajp815@gmail.com>
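As a usage illustration for the DPM-Solver multistep scheduler added above, a minimal sketch; the model id, step count, and the `from_config` loading pattern follow current diffusers conventions and are assumptions rather than part of the commit:

```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

# Sketch: swap the pipeline's default scheduler for DPM-Solver multistep.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# DPM-Solver converges in far fewer steps than DDIM/PNDM (roughly 20 instead of 50).
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]
```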
-
- 05 Nov, 2022 1 commit
-
Pedro Cuenca authored
Flip sin to cos in t embeddings. This was assumed in the previous implementation, but now the default is the opposite. Fixes #1145.
-
- 04 Nov, 2022 5 commits
-
Chen Wu (吴尘) authored
* Add CycleDiffusion pipeline for Stable Diffusion
* Add the option of passing noise to DDIMScheduler: the noise itself can now be provided instead of the random seed generator
* Update README.md
* Update pipeline_stable_diffusion_cycle_diffusion.py
* Update scheduling_ddim.py
* Update import format
* Update src/diffusers/schedulers/scheduling_ddim.py
* add two tests
* Rename pipeline name as suggested in the latest reviewer comment
* Update test_pipelines.py
* Remove the generator: it does not control all randomness during sampling, which can be misleading
* Update optimal hyperparameters
* Update src/diffusers/pipelines/stable_diffusion/README.md
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py
* Replace assert with ValueError
* finish docs
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
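A hedged sketch of how the new CycleDiffusion pipeline is typically driven; the checkpoint, placeholder image URL, and hyperparameter values are illustrative assumptions, not taken from the commit:

```python
import torch
from diffusers import CycleDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image

# CycleDiffusion uses a DDIM scheduler, relying on the DDIM noise option added above.
scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/brown_horse.png").resize((512, 512))  # placeholder URL

image = pipe(
    prompt="A black colored horse",          # what the image should become
    source_prompt="A brown colored horse",   # what the input image shows
    image=init_image,
    strength=0.85,
    guidance_scale=2.0,
    source_guidance_scale=1.0,
    eta=0.1,
).images[0]
```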
-
Pi Esposito authored
* add enable sequential cpu offloading to other stable diffusion pipelines
* trigger ci
* fix styling
* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16
* style again, I need to stop forgetting this thing
* fix inpainting bug that could cause device misalignment
* Apply suggestions from code review
Co-authored-by: Pedro Gengo <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
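For reference, a minimal sketch of sequential CPU offloading on one of the pipelines mentioned above (the model id is an assumption):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Sub-modules stay on the CPU and are moved to the GPU one at a time during the
# forward pass, trading some speed for a much smaller peak VRAM footprint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_sequential_cpu_offload()  # no explicit .to("cuda") afterwards
```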
-
Anton Lozhkov authored
* Bump to 0.8.0.dev0 * deprecate int timesteps * style
-
Chenguo Lin authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Patrick von Platen authored
* finish
* Update src/diffusers/modeling_utils.py
* Update src/diffusers/pipeline_utils.py
* more fixes
* fix
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
- 03 Nov, 2022 11 commits
-
Patrick von Platen authored
-
Patrick von Platen authored
-
anton-l authored
-
Suraj Patil authored
* handle device for randn in euler step * convert device to str
-
Patrick von Platen authored
* correct naming
* finish
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Patrick von Platen authored
-
Suraj Patil authored
* make accelerate a hard dep
* default to fast init
* move params to cpu when device map is None
* handle device_map=None
* handle torch < 1.9
* remove device_map="auto"
* style
* add accelerate to the torch extra
* remove accelerate from extras["test"]
* raise an error if torch is available but not accelerate
* update installation docs
* Apply suggestions from code review
* improve default loading speed even further, allow disabling fast loading
* address review comments
* adapt the tests
* fix test_stable_diffusion_fast_load
* fix test_read_init
* temp fix for dummy checks
* Trigger Build
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
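A sketch of the loading behavior described above; the flag name `low_cpu_mem_usage` reflects the current diffusers API and is an assumption here, as is the model id:

```python
from diffusers import StableDiffusionPipeline

# With accelerate installed, weights are loaded directly into the final tensors
# instead of being randomly initialized first, which is faster and uses less RAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model id
    low_cpu_mem_usage=True,            # default when accelerate is available
)

# The fast path can be disabled to fall back to plain PyTorch initialization:
pipe_slow = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", low_cpu_mem_usage=False
)
```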
-
Will Berman authored
* Changes for VQ-diffusion VQVAE: allow specifying the dimension of embeddings in `VQModel`. `VQModel` by default sets the embedding dimension to the number of latent channels, but the VQ-diffusion VQVAE uses a smaller embedding dimension (128) than its number of latent channels (256). Also add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers; VQ-diffusion's VQVAE uses those two block types.
* Changes for VQ-diffusion transformer: modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer.
  - SpatialTransformer: can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones; `in_channels` was made optional in the constructor, so two call sites where it was passed as a positional arg were moved to kwargs; the forward pass takes optional timestep embeddings.
  - ImagePositionalEmbeddings: added to provide positional embeddings for the discrete latent-pixel inputs.
  - BasicTransformerBlock: norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings; the forward pass takes optional timestep embeddings.
  - CrossAttention: may now optionally take a bias parameter for its query, key, and value linear layers.
  - FeedForward: internal layers are now configurable.
  - ApproximateGELU: activation function used in VQ-diffusion's feedforward layer.
  - AdaLayerNorm: norm layer modified to incorporate timestep embeddings.
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming and fixes
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review
* finish
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
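An illustrative sketch of the resulting VQ-diffusion pipeline; the checkpoint name and call signature follow the diffusers docs and are assumptions rather than part of this commit:

```python
from diffusers import VQDiffusionPipeline

# VQ-diffusion denoises discrete latent pixel classes rather than continuous latents.
pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")

image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy_bear.png")
```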
-
Pedro Cuenca authored
* remove batch size from repeat
* repeat empty string if uncond_tokens is none
* fix inpaint pipes
* return back whitespace to pass code quality
* Apply suggestions from code review
* Fix typos
Co-authored-by: Had <had-95@yandex.ru>
-
Revist authored
* feat: add repaint
* fix: fix quality check with `make fix-copies`
* fix: remove old unnecessary arg
* chore: change default to DDPM (looks better in experiments)
* ".to(device)" changed to "device="
* make generator device-specific
* make generator device-specific and change shape
* fix: add preprocessing for image and mask
* fix: update test
* Update src/diffusers/pipelines/repaint/pipeline_repaint.py
* Add docs and examples
* Fix toctree
Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
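A hedged sketch of the RePaint pipeline added above; the checkpoint, placeholder image URLs, and hyperparameters are assumptions based on the accompanying docs:

```python
import torch
from diffusers import RePaintPipeline, RePaintScheduler
from diffusers.utils import load_image

original_image = load_image("https://example.com/face_256.png").resize((256, 256))  # placeholder URL
mask_image = load_image("https://example.com/mask_256.png").resize((256, 256))      # placeholder URL

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

output = pipe(
    image=original_image,
    mask_image=mask_image,   # binary mask; see the pipeline docs for which value is repainted
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,          # resampling schedule from the RePaint paper
    jump_n_sample=10,
    generator=torch.Generator(device="cuda").manual_seed(0),
)
inpainted = output.images[0]
```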
-
Anton Lozhkov authored
* Allow saving `None` pipeline components * support flax as well * style
-
- 02 Nov, 2022 9 commits
-
Patrick von Platen authored
* [Loading] Ignore unneeded files * up
-
Denis authored
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly". This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.
* changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
* fixed code style
Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
-
Kashif Rasul authored
* initial get_sinusoidal_embeddings * added asserts * better var name * fix docs
-
Grigory Sizov authored
* Fix equality test for ddim and ddpm * add docs for use_clipped_model_output in DDIM * fix inline comment * reorder imports in test_pipelines.py * Ignore use_clipped_model_output if scheduler doesn't take it
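A standalone sketch of the `use_clipped_model_output` flag documented above; the tensors are placeholders and the scheduler settings are assumptions:

```python
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(num_train_timesteps=1000, clip_sample=True)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 64, 64)        # current noisy sample x_t
model_output = torch.randn(1, 3, 64, 64)  # noise predicted by the model

# With clip_sample=True, use_clipped_model_output re-derives the noise prediction
# from the clipped x_0 estimate before computing the previous sample.
prev_sample = scheduler.step(
    model_output, scheduler.timesteps[0], sample, use_clipped_model_output=True
).prev_sample
```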
-
Omiita authored
Fix a small typo in `models/attention.py`: weight -> width.
-
Anton Lozhkov authored
* [WIP][CI] Framework and hardware-specific docker images for CI tests * username * fix cpu * try out the image * push latest * update workspace * no root isolation for actions * add a flax image * flax and onnx matrix * fix runners * add reports * onnxruntime image * retry tpu * fix * fix * build onnxruntime * naming * onnxruntime-gpu image * onnxruntime-gpu image, slow tests * latest jax version * trigger flax * run flax tests in one thread * fast flax tests on cpu * fast flax tests on cpu * trigger slow tests * rebuild torch cuda * force cuda provider * fix onnxruntime tests * trigger slow * don't specify gpu for tpu * optimize * memory limit * fix flax tests * disable docker cache
-
Patrick von Platen authored
* Rename latent * uP
-
Lewington-pitsos authored
* improve test precision: get tests passing with greater precision using lewington images
* make old numpy load function a wrapper around a more flexible numpy loading function
* adhere to black formatting
* add more black formatting
* adhere to isort
* loosen precision and replace path
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
MatthieuTPHR authored
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one-letter variables to appropriate names
* Remove env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the mem eff benchmarks were run
* Removing autocast from optimization markdown
* import_utils: import torch only if it is available
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
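A minimal sketch of the resulting toggle (the model id is an assumption; xformers must be installed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a cabin in the woods at golden hour").images[0]
```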
-
- 31 Oct, 2022 7 commits
-
Laurent Mazare authored
Remove an unused parameter: `downsample_padding` does not seem to be used in `CrossAttnUpBlock2D` (or by any up block, for that matter), so remove it.
-
Patrick von Platen authored
* Remove nn sequential * up
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
* Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
hlky authored
* k-diffusion-euler
* make style, make quality
* make fix-copies
* fix tests for euler a
* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
* Update src/diffusers/schedulers/scheduling_euler_discrete.py
* remove unused arg and method
* update doc
* quality
* make flake happy
* use logger instead of warn
* raise error instead of deprecation
* don't require scipy
* pass generator in step
* fix tests
* Apply suggestions from code review
* Update tests/test_scheduler.py
* remove unused generator
* pass generator as extra_step_kwargs
* update tests
* pass generator as kwarg
* fix test for lms
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
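As an illustration of the new Euler schedulers, a sketch that swaps them into an existing pipeline; the model id and the `from_config` pattern follow current diffusers usage and are assumptions:

```python
import torch
from diffusers import (
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    StableDiffusionPipeline,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
# or, for the stochastic "euler a" variant:
# pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a watercolor painting of a fox", num_inference_steps=30).images[0]
```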
-
Pedro Cuenca authored
Allow None safety_checker when using CPU offload.
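A sketch of the combination this enables (model id assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

# Loading without a safety checker and offloading to CPU can now be combined.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None
)
pipe.enable_sequential_cpu_offload()
```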
-
Anton Lozhkov authored
* Fix pipelines user_agent, ignore CI requests * fix circular import * N/A versions * N/A versions
-
- 30 Oct, 2022 1 commit
-
Jonatan Kłosko authored
* Move safety detection to model call in Flax safety checker * Update src/diffusers/pipelines/stable_diffusion/safety_checker_flax.py
-
- 29 Oct, 2022 2 commits
-
Pedro Cuenca authored
* Docs: refer to pre-RC version of PyTorch 1.13.0.
* Remove temporary workaround for unavailable op.
* Update comment to make it less ambiguous.
* Remove use of contiguous in mps; it appears to no longer be necessary.
* Special case: use einsum for much better performance in mps.
* Update mps docs.
* MPS: make pipeline work in half precision.
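A sketch of half-precision use on Apple Silicon after this change (model id assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("mps")

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
```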
-
Nathan Lambert authored
-
- 28 Oct, 2022 2 commits
-
Patrick von Platen authored
-
Patrick von Platen authored
-