- 27 Oct, 2022 6 commits
-
Duong A. Nguyen authored
* [Flax] Add DreamBooth * fix sample rng * style * not reuse rng * add dtype for mixed precision training * Add Flax example
-
Duong A. Nguyen authored
Set train mode for text encoder
-
Duong A. Nguyen authored
* [Flax] Add finetune Stable Diffusion * temporary fix * drop_last and seed * add dtype for mixed precision training * style * Add Flax example
-
Patrick von Platen authored
* [Accelerate model loading] Fix meta device and super low memory usage * better naming
-
Suraj Patil authored
make input_args optional
-
Pedro Cuenca authored
* Add failing test for #940.
* Do not use torch.float64 in mps.
* style
* Temporarily skip add_noise for IPNDMScheduler until #990 is addressed.
* Fix additional float64 error in mps.
* Improve add_noise test.
* Slight edit – I think it's clearer this way.
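The mps items above come down to one constraint: torch.float64 is not supported on Apple's mps backend. A minimal sketch of the workaround, with illustrative variable names rather than the actual test code from this commit:

```python
import torch

# float64 is not implemented on the mps backend, so fall back to float32 there.
device = "mps" if torch.backends.mps.is_available() else "cpu"
dtype = torch.float32 if device == "mps" else torch.float64

# e.g. build a schedule in the highest precision the device supports
timesteps = torch.linspace(0, 999, 1000, dtype=dtype, device=device)
```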
-
- 26 Oct, 2022 10 commits
-
Duong A. Nguyen authored
* add textual inversion flax
* make style
* make style
* replicate vae and unet params
* make style
* minor
* save after end of training
* style
* Temporary fix
* Add Flax instruction
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Brian Whicheloe authored
* Make training code usable by external scripts: add parameter inputs to the training and argument-parsing functions so the script can be called from external code.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Simon Kirsten authored
-
Hu Ye authored
-
Pi Esposito authored
* add method to enable cuda with minimal gpu usage to stable diffusion
* add test for minimal cuda memory usage
* ensure all models but the unet are on torch.float32
* move to cpu_offload along with minor internal changes to make it work
* make it test against the accelerate master branch
* coming back, it's official: I don't know how to make it test against the master branch of accelerate
* make it install accelerate from master on tests
* go back to accelerate>=0.11
* undo prettier formatting on yml files
* undo prettier formatting on yml files again
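The offloading added here is exposed as a single pipeline call; a minimal usage sketch, assuming the `enable_sequential_cpu_offload` name under which diffusers documents this feature (the model id is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
# Submodules are moved to the GPU only while they run (via accelerate's cpu_offload),
# trading inference speed for a much smaller peak VRAM footprint.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
```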
-
Julien Simon authored
-
Yuta Hayashibe authored
-
Hu Ye authored
remove tensor_format in the new version
-
Patrick von Platen authored
CompVis -> diffusers script: allow converting from a merged checkpoint to either EMA or non-EMA (#991)
* improve script
* up
-
Pedro Cuenca authored
* Add failing test for #940.
* Do not use torch.float64 in mps.
* style
* Temporarily skip add_noise for IPNDMScheduler until #990 is addressed.
-
- 25 Oct, 2022 12 commits
-
Yuta Hayashibe authored
* Add --pretrained_model_name_revision option to train_dreambooth.py * Renamed --pretrained_model_name_revision to --revision
-
Ella Charlaix authored
-
Patrick von Platen authored
uP
-
Patrick von Platen authored
* add in fp16 * up
-
Patrick von Platen authored
* start
* add more logic
* Update src/diffusers/models/unet_2d_condition_flax.py
* match weights
* up
* make model work
* making class more general, fixing missed file rename
* small fix
* make new conversion work
* up
* finalize conversion
* up
* first batch of variable renamings
* remove c and c_prev var names
* add mid and out block structure
* add pipeline
* up
* finish conversion
* finish
* upload
* more fixes
* Apply suggestions from code review
* add attr
* up
* uP
* up
* finish tests
* finish
* uP
* finish
* fix test
* up
* naming consistency in tests
* Apply suggestions from code review
* remove hardcoded 16
* Remove bogus
* fix some stuff
* finish
* improve logging
* docs
* upload
Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
SkyTNT authored
* [Onnx] support half-precision and fix bugs for onnx pipelines
* Update convert_stable_diffusion_checkpoint_to_onnx.py
* style
* fix has_nsfw_concept
* Update convert_stable_diffusion_checkpoint_to_onnx.py
* fix style
-
Pedro Cuenca authored
* Docs: refer to pre-RC version of PyTorch 1.13.0.
* Remove temporary workaround for unavailable op.
* Update comment to make it less ambiguous.
* Remove use of contiguous in mps. It appears to no longer be necessary.
* Special case: use einsum for much better performance in mps.
* Update mps docs.
* Minor doc update.
* Accept suggestion
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
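The einsum special case above concerns attention-score computation; a toy illustration of the idea (shapes and tensor names are invented for the example, not taken from the pipeline code):

```python
import torch

# On mps, torch.einsum was reported to be much faster than the
# equivalent reshape + matmul path for attention scores.
query = torch.randn(2, 8, 64, 40)  # (batch, heads, tokens, head_dim)
key = torch.randn(2, 8, 64, 40)

scores_einsum = torch.einsum("bhid,bhjd->bhij", query, key)

# Reference computation via reshape + batched matmul
q = query.reshape(16, 64, 40)
k = key.reshape(16, 64, 40)
scores_bmm = torch.bmm(q, k.transpose(1, 2)).reshape(2, 8, 64, 64)

assert torch.allclose(scores_einsum, scores_bmm, atol=1e-5)
```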
-
Anton Lozhkov authored
* [WIP] Debugging mps DDIM tests * revert num_steps * check warmup with a generator * more warmup! * remove xdist * just use a single process
-
Kashif Rasul authored
* added broadcast_to_shape_from_left helper
* initial tests
* fixed pndm tests
* shape required for pndm
* added require_flax
* fix style
* fix more imports
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
MarkRich authored
* Initial composable diffusion pipeline
* add composable stable diffusion to readme table
* Update examples/community/README.md
* Apply suggestions from code review
* Update examples/community/README.md
* Update examples/community/README.md
* Update examples/community/README.md
* up
* Update examples/community/README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Tanishq Abraham authored
-
Pedro Cuenca authored
Fix typo: torch_type -> torch_dtype
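The corrected keyword is the one `from_pretrained` actually accepts; a short usage sketch (the model id is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# `torch_dtype` (not `torch_type`) selects the precision the weights are loaded in.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
```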
-
- 24 Oct, 2022 4 commits
-
Nathan Lambert authored
* add community pipeline docs * fix style in code snippets (lol) * clean up loading docs * add license to doc files * fix some weird links
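Community pipelines like the ones documented here are loaded through the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`; a brief sketch, using the lpw_stable_diffusion community pipeline mentioned later in this log as the example:

```python
from diffusers import DiffusionPipeline

# `custom_pipeline` fetches the pipeline implementation from the community
# examples folder of the diffusers repository instead of the library itself.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="lpw_stable_diffusion",
)
```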
-
apolinario authored
* Update README.md: additionally add FLAX so the model card can be slimmer and point to this page
* Find and replace all
* v-1-5 -> v1-5
* revert test changes
* Update README.md
* Update docs/source/quicktour.mdx
* Update README.md
* Update docs/source/quicktour.mdx
* Update README.md
* Revert certain references to v1-5
* Docs changes
* Apply suggestions from code review
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Anton Lozhkov authored
* Reorganize pipeline tests * fix vq
-
Chenguo Lin authored
One small typo in pipeline_ddpm.py (just a small typo in one comment).
-
- 22 Oct, 2022 1 commit
-
Kashif Rasul authored
fix mps failing tests
-
- 21 Oct, 2022 4 commits
-
Shyam Sudhakaran authored
* Initial Wildcard Stable Diffusion Pipeline
* Added some additional example usage
* style
* Added links in README and additional documentation
* Initial Wildcard Stable Diffusion Pipeline
* Added some additional example usage
* style
* Added links in README and additional documentation
* cleanup readme again
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
mkshing authored
* Support LMSDiscreteScheduler in LDMPipeline: a small change to support all schedulers, such as LMSDiscreteScheduler, in LDMPipeline. What's changed: add the `scale_model_input` function before `step` to ensure correct denoising (L77), and scale the initial noise by the standard deviation required by the scheduler.
* run `make style`
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
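Both changes map onto standard scheduler calls; a minimal sketch of a denoising loop that uses them (the zero-output `fake_unet` is a stand-in, not the LDMPipeline's actual model):

```python
import torch
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler()
scheduler.set_timesteps(50)

latents = torch.randn(1, 4, 64, 64)
# Change 1: scale the initial noise by the standard deviation required by the scheduler.
latents = latents * scheduler.init_noise_sigma


def fake_unet(sample, t):
    # Stand-in for the real UNet: predicts zero noise.
    return torch.zeros_like(sample)


for t in scheduler.timesteps:
    # Change 2: scale the model input before predicting, then step.
    model_input = scheduler.scale_model_input(latents, t)
    noise_pred = fake_unet(model_input, t)
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```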
-
Suraj Patil authored
don't warn for bf16 weights
-
Patrick von Platen authored
* [Tests] Move Stable Diffusion tests into their own files * up
-
- 20 Oct, 2022 3 commits
-
Anton Lozhkov authored
* Introduce the copy mechanism * init tests * fix dummy tests * with * update copies tests
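The copy mechanism marks duplicated code with a `# Copied from ...` comment so a checking script can verify the copies stay in sync; a purely illustrative sketch (the module path and helper below are hypothetical, not taken from the commit):

```python
import numpy as np


def rescale(x: np.ndarray) -> np.ndarray:
    """Map values from [-1, 1] to [0, 1]."""
    return (x / 2 + 0.5).clip(0, 1)


# Copied from my_package.postprocessing.rescale
def rescale_copy(x: np.ndarray) -> np.ndarray:
    """Map values from [-1, 1] to [0, 1]."""
    # A check-copies utility parses the marker above and fails CI
    # if this body ever drifts from the referenced original.
    return (x / 2 + 0.5).clip(0, 1)
```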
-
Anton Lozhkov authored
* Bump the version to 0.7.0.dev0 * deprecate offsets * deprecate LMS timesteps * LMS 0.7.0->0.8.0
-
SkyTNT authored
[Community Pipelines] fix pad_tokens_and_weights in lpw_stable_diffusion
-