- 17 Oct, 2022 5 commits
-
-
Patrick von Platen authored
* up
* correct
* make style
* small change
-
Patrick von Platen authored
-
Nathan Raw authored
* ✨ Add Stable Diffusion Interpolation Example
* 💄 style
* Update examples/community/interpolate_stable_diffusion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
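For context, the community example walks between two prompts in text-embedding space rather than pixel space. Below is a minimal sketch of the spherical interpolation (slerp) step that kind of pipeline relies on, independent of the actual examples/community/interpolate_stable_diffusion.py implementation:

```python
# Minimal slerp sketch (illustrative, not the community pipeline's exact code).
import torch


def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherical interpolation between two tensors, e.g. two prompt embeddings."""
    v0_n = v0 / (v0.norm() + eps)
    v1_n = v1 / (v1.norm() + eps)
    dot = (v0_n * v1_n).sum().clamp(-1.0, 1.0)   # cosine of the angle between them
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:                       # nearly parallel: plain lerp is fine
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```

Feeding `slerp(t, emb_a, emb_b)` for a sweep of `t` values into the denoising loop produces the interpolation frames.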
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 16 Oct, 2022 1 commit
-
-
Patrick von Platen authored
[DeviceMap] Make sure stable diffusion can be loaded from older transformers versions
-
- 14 Oct, 2022 5 commits
-
-
camenduru authored
* Fix Flax pipeline: width and height are ignored #838
* Fix Flax pipeline: width and height are ignored
-
Anton Lozhkov authored
-
Anton Lozhkov authored
* Bump to 0.6.0.dev0
* Deprecate tensor_format and .samples
* style
* upd
* upd
* style
* sample -> images
* Update src/diffusers/schedulers/scheduling_ddpm.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_karras_ve.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_lms_discrete.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_pndm.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_ve.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_vp.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
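This is the deprecation that moved pipeline outputs from `.sample` to `.images` and dropped the scheduler `tensor_format` argument. A hedged sketch of the resulting call pattern (the checkpoint name is only illustrative):

```python
# Post-0.6.0 call pattern implied by this commit: pipeline outputs expose `.images`.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
result = pipe("a photo of an astronaut riding a horse")
image = result.images[0]        # previously accessed as result.sample[0]
image.save("astronaut.png")
```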
-
Omar Sanseviero authored
-
Patrick von Platen authored
-
- 13 Oct, 2022 11 commits
-
-
Patrick von Platen authored
Patch Release: 0.5.1
-
Suraj Patil authored
fix nsfw bug
-
Anton Lozhkov authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* up
* finish
* add more tests
* up
* up
* finish
-
Pedro Cuenca authored
* Remove set_format in Flax pipeline.
* Remove DummyChecker.
* Run safety_checker in pipeline.
* Don't pmap on every call. We could have decorated `generate` with `pmap`, but I wanted to keep it in case someone wants to invoke it in non-parallel mode.
* Remove commented line Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Replicate outside __call__, prepare for optional jitting.
* Remove unnecessary clipping. As suggested by @kashif.
* Do not jit unless requested.
* Send all args to generate.
* make style
* Remove unused imports.
* Fix docstring.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
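A rough sketch of the usage pattern these changes target, assuming Flax weights are published for the checkpoint: parameters are replicated once outside `__call__`, inputs are sharded across devices, and jitting is opt-in via `jit=True`. This is an illustration of the intent, not the exact code of the commit:

```python
import jax
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

prompts = ["a photo of an astronaut riding a horse"] * jax.device_count()
prompt_ids = pipeline.prepare_inputs(prompts)

params = replicate(params)                      # replicate once, outside __call__
prompt_ids = shard(prompt_ids)                  # one slice of the batch per device
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

images = pipeline(prompt_ids, params, rng, jit=True).images   # jit only when requested
```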
-
Patrick von Platen authored
* Give more customizable options for safety checker
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* Finish
* make style
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
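In practice, the extra flexibility amounts to being able to pass your own `safety_checker` (or `None`) when loading the pipeline. A hedged sketch, not the exact code touched by this commit:

```python
from diffusers import StableDiffusionPipeline

# Disable the checker entirely (at your own risk); a custom module with the same
# interface could be passed instead of None.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=None,
)
```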
-
Anton Lozhkov authored
-
Anton Lozhkov authored
Fix dreambooth loss type with prior preservation
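For reference, DreamBooth's prior-preservation objective is usually computed roughly like the sketch below (a simplified illustration, not the exact patched code): the batch concatenates instance and class examples, the prediction is split in two, and the two MSE terms are summed with a weighting factor.

```python
import torch
import torch.nn.functional as F


def dreambooth_loss(model_pred: torch.Tensor, target: torch.Tensor, prior_loss_weight: float = 1.0):
    # First half of the batch: instance images; second half: class ("prior") images.
    model_pred, prior_pred = torch.chunk(model_pred, 2, dim=0)
    target, prior_target = torch.chunk(target, 2, dim=0)

    instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
    prior_loss = F.mse_loss(prior_pred.float(), prior_target.float(), reduction="mean")
    return instance_loss + prior_loss_weight * prior_loss
```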
-
Suraj Patil authored
* update flax scheduler API
* remove set format
* fix call to scale_model_input
* update flax pndm
* use int32
* update docstr
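The `scale_model_input` call mentioned here is part of the unified scheduler API. A minimal PyTorch sketch of the per-step pattern the Flax schedulers were being aligned with (the UNet is replaced by a stand-in so the snippet runs on its own; LMSDiscreteScheduler needs scipy installed):

```python
import torch
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
scheduler.set_timesteps(50)

latents = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma


def fake_unet(x, t):
    # Stand-in for the real UNet so the loop is self-contained.
    return torch.randn_like(x)


for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)   # the call this commit fixes for Flax
    noise_pred = fake_unet(model_input, t)
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```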
-
Patrick von Platen authored
-
- 12 Oct, 2022 10 commits
-
-
Anton Lozhkov authored
* Add diffusers version and pipeline class to the Hub UA
* Fallback to class name for pipelines
* Update src/diffusers/modeling_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/modeling_flax_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove autoclass
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
pink-red authored
-
Suraj Patil authored
* fix ema
* style
* add comment about copy
* style
* quality
-
Nathan Lambert authored
* add or fix license formatting
* fix quality
-
anton-l authored
-
anton-l authored
-
Patrick von Platen authored
* [Dummy imports] Better error message
* Test: load pipeline with LMS scheduler. Fails with a cryptic message if scipy is not installed.
* Correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Anton Lozhkov authored
-
Patrick von Platen authored
* [Img2Img] Fix batch size mismatch prompts vs. init images
* Remove bogus folder
* fix
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
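The mismatch being fixed shows up when a batch of prompts is paired with a single init image. A hedged sketch of that call pattern (in diffusers of this vintage the argument was still named `init_image`; the checkpoint and file names are illustrative):

```python
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

# Two prompts, one init image: the pipeline broadcasts the image to the prompt batch.
prompts = ["a fantasy landscape, oil painting", "a fantasy landscape, watercolor"]
images = pipe(prompt=prompts, init_image=init_image, strength=0.75).images
```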
-
- 11 Oct, 2022 7 commits
-
-
Patrick von Platen authored
-
Pedro Cuenca authored
* mps: alt. implementation for repeat_interleave
* style
* Bump mps version of PyTorch in the documentation.
* Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Simplify: do not check for device.
* style
* Fix repeat dimensions:
  - The unconditional embeddings are always created from a single prompt.
  - I was shadowing the batch_size var.
* Split long lines as suggested by Suraj.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
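A hedged sketch of the kind of alternative this commit describes: `torch.repeat_interleave` was problematic on the mps backend at the time, but the same row duplication can be expressed with `expand` + `reshape`. The helper name is made up for illustration:

```python
import torch


def repeat_interleave_rows(x: torch.Tensor, repeats: int) -> torch.Tensor:
    """Equivalent to x.repeat_interleave(repeats, dim=0) without calling repeat_interleave."""
    b = x.shape[0]
    return x.unsqueeze(1).expand(b, repeats, *x.shape[1:]).reshape(b * repeats, *x.shape[1:])


x = torch.arange(6).reshape(2, 3)
assert torch.equal(repeat_interleave_rows(x, 2), x.repeat_interleave(2, dim=0))
```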
-
Omar Sanseviero authored
Update custom_pipelines.mdx
-
spezialspezial authored
-
Akash Pannu authored
* pass norm_num_groups param and add tests
* set resnet_groups for FlaxUNetMidBlock2D
* fixed docstrings
* fixed typo
* using is_flax_available util and created require_flax decorator
-
Suraj Patil authored
* begin text2image script
* loading the datasets, preprocessing & transforms
* handle input features correctly
* add gradient checkpointing support
* fix output names
* run unet in train mode not text encoder
* use no_grad instead of freezing params
* default max steps None
* pad to longest
* don't pad when tokenizing
* fix encode on multi gpu
* fix stupid bug
* add random flip
* add ema
* fix ema
* put ema on cpu
* improve EMA model
* contiguous_format
* don't wrap vae and text encoder in accelerate
* remove no_grad
* use randn_like
* fix resize
* improve few things
* log epoch loss
* set log level
* don't log each step
* remove max_length from collate
* style
* add report_to option
* make scale_lr false by default
* add grad clipping
* add an option to use 8bit adam
* fix logging in multi-gpu, log every step
* more comments
* remove eval for now
* address review comments
* add requirements file
* begin readme
* begin readme
* fix typo
* fix push to hub
* populate readme
* update readme
* remove use_auth_token from the script
* address some review comments
* better mixed precision support
* remove redundant to
* create ema model early
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* better description for train_data_dir
* add diffusers in requirements
* update dataset_name_mapping
* update readme
* add inference example
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
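Among the many items above, the EMA ones are the most reusable idea. A minimal sketch of the bookkeeping involved (heavily simplified; the real EMA handling in the example also covers decay warm-up and device placement):

```python
import copy

import torch


class SimpleEMA:
    """Keep an exponential moving average copy of a model's parameters."""

    def __init__(self, model: torch.nn.Module, decay: float = 0.9999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def step(self, model: torch.nn.Module):
        # ema = decay * ema + (1 - decay) * current
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```

Call `step(unet)` after every optimizer update and export the shadow weights for inference or checkpointing.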
-
Suraj Patil authored
* support bf16 for stable diffusion
* fix typo
* address review comments
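A short, hedged sketch of how bf16 is typically exercised with the Flax pipeline on TPU; the `bf16` revision is an assumption about the published weights, not something this commit guarantees:

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="bf16",        # branch assumed to hold pre-converted bfloat16 weights
    dtype=jnp.bfloat16,     # run the computation in bf16 as well
)
```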
-
- 10 Oct, 2022 1 commit
-
-
Henrik Forstén authored
* Support deepspeed
* Dreambooth DeepSpeed documentation
* Remove unnecessary casts, documentation. Due to recent commits, some casts to half precision are not necessary anymore. Mention that DeepSpeed's version of Adam is about 2x faster.
* Review comments
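The documented route for the DreamBooth example is `accelerate config` followed by `accelerate launch`; the sketch below shows the programmatic equivalent through Accelerate's `DeepSpeedPlugin`. The ZeRO stage and offload settings are illustrative assumptions, not values taken from this commit:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

deepspeed_plugin = DeepSpeedPlugin(
    zero_stage=2,                       # ZeRO-2: shard optimizer state and gradients
    gradient_accumulation_steps=1,
    offload_optimizer_device="cpu",     # push optimizer state to CPU RAM
)
accelerator = Accelerator(mixed_precision="fp16", deepspeed_plugin=deepspeed_plugin)
# unet, optimizer, dataloader, ... are then passed through accelerator.prepare(...)
```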
-