- 29 Oct, 2022 4 commits
-
-
Pedro Cuenca authored
Tests: upgrade PyTorch CUDA to 11.7. Otherwise the CUDA versions of torch and torchvision mismatch and the examples tests fail: we were requesting the CUDA 11.6 build of PyTorch while torchvision was installed as its default build (via setup.py). Another option would be to include torchvision in the same pip install line as torch.
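For context, a minimal sketch of the kind of sanity check this is about, assuming torchvision exposes its build-time CUDA version via torchvision.version.cuda (the pip command in the comment is only an example of the alternative mentioned above):

```python
# Sanity check: torch and torchvision must be built against the same CUDA toolkit,
# otherwise the CUDA example tests fail at runtime.
# Alternative fix mentioned above: install both wheels from the same CUDA index, e.g.
#   pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
import torch
import torchvision

torch_cuda = torch.version.cuda          # e.g. "11.7"
vision_cuda = torchvision.version.cuda   # CUDA version torchvision was built with

assert torch_cuda == vision_cuda, (
    f"CUDA mismatch: torch built with {torch_cuda}, torchvision with {vision_cuda}"
)
```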
-
MarkRich authored
* add seed resizing to community examples * actually add the file responsible for seed resizing
-
Nathan Lambert authored
-
Minwoo Byeon authored
-
- 28 Oct, 2022 12 commits
-
-
Pedro Cuenca authored
* Update training and fine-tuning docs. * Update examples README. * Update README. * Add Flax fine-tuning section. * Accept suggestions from code review.
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* up * up * up * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * Apply suggestions from code review
-
Patrick von Platen authored
* [Tests] Speed up slow tests * Up * up
-
Patrick von Platen authored
* improve tests * up * finish * upload * add init * up * finish vae * finish * reduce loading time with device_map * remove device_map from CPU * up
-
Nouamane Tazi authored
* fix `upsample_nearest_nhwc` for large bsz
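Background: PyTorch's nearest-neighbour upsampling can produce wrong results on channels_last (NHWC) tensors with large batch sizes. A hedged sketch of one way to work around it, forcing a contiguous layout before interpolating; the batch-size threshold of 64 is illustrative, not necessarily the value used in the fix:

```python
import torch
import torch.nn.functional as F

def safe_nearest_upsample(hidden_states: torch.Tensor) -> torch.Tensor:
    # Workaround: nearest upsampling on channels_last tensors can silently return
    # wrong values for large batches, so fall back to a contiguous (NCHW) layout.
    if hidden_states.shape[0] >= 64:
        hidden_states = hidden_states.contiguous()
    return F.interpolate(hidden_states, scale_factor=2.0, mode="nearest")
```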
-
Duong A. Nguyen authored
fix jnp dtype
-
- 27 Oct, 2022 13 commits
-
-
Anton Lozhkov authored
-
Pi Esposito authored
* document cpu offloading method * address review comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Anton Lozhkov authored
-
Denis authored
Add the tensorboard import to the README; otherwise accelerator.trackers[0] is out of range. Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
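For context, a hedged sketch of the pattern the README relies on: accelerate only creates a tensorboard tracker when the tensorboard package is available, so accelerator.trackers can otherwise be empty. Keyword names follow the accelerate versions of that era; newer releases rename logging_dir to project_dir.

```python
# The explicit import makes a missing tensorboard dependency fail loudly;
# without it, accelerate skips the tracker and trackers[0] raises IndexError.
import tensorboard  # noqa: F401

from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", logging_dir="logs")
if accelerator.is_main_process:
    accelerator.init_trackers("train_example")

run = accelerator.trackers[0]  # safe only when the tracker was actually created
```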
-
Suraj Patil authored
-
Suraj Patil authored
-
Anton Lozhkov authored
Deprecate `init_git_repo` and `push_to_hub`, refactor `train_unconditional.py`
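For scripts that relied on the deprecated helpers, the plain huggingface_hub API is the replacement; a minimal sketch, where the repo id and output directory are placeholders and the exact calls used by the refactored script are not shown here:

```python
# Minimal replacement for the deprecated `init_git_repo` / `push_to_hub` helpers:
# create the repo once, then upload the folder written by pipeline.save_pretrained.
from huggingface_hub import create_repo, upload_folder

repo_id = "your-username/ddpm-butterflies-128"  # placeholder repo id
create_repo(repo_id, exist_ok=True)
upload_folder(
    repo_id=repo_id,
    folder_path="output/ddpm-butterflies-128",  # placeholder output directory
    commit_message="End of training",
)
```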
-
Duong A. Nguyen authored
* [Flax] Add DreamBooth * fix sample rng * style * do not reuse rng * add dtype for mixed precision training * Add Flax example
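Two of the points above (do not reuse the rng, pass a dtype for mixed precision) follow standard JAX practice; a minimal sketch with an illustrative latent shape, not the exact training-loop code:

```python
import jax
import jax.numpy as jnp

weight_dtype = jnp.bfloat16        # dtype handed to the Flax models for mixed precision
latents_shape = (4, 4, 64, 64)     # illustrative batch of Stable Diffusion latents

rng = jax.random.PRNGKey(0)        # seed for the whole run
for _ in range(3):
    # Never reuse a PRNG key: split off a fresh key for every sampling step.
    rng, sample_rng = jax.random.split(rng)
    noise = jax.random.normal(sample_rng, latents_shape, dtype=weight_dtype)
```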
-
Duong A. Nguyen authored
Set train mode for text encoder
-
Duong A. Nguyen authored
* [Flax] Add finetune Stable Diffusion * temporary fix * drop_last and seed * add dtype for mixed precision training * style * Add Flax example
-
Patrick von Platen authored
* [Accelerate model loading] Fix meta device and super low memory usage * better naming
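The fast/low-memory loading path relies on accelerate's meta-device initialization; a hedged usage sketch, assuming the low_cpu_mem_usage flag exposed by diffusers' from_pretrained is the switch for it:

```python
from diffusers import StableDiffusionPipeline

# With accelerate installed, weights are materialized directly into the model
# (meta-device init) instead of being allocated twice, which roughly halves
# peak CPU memory while loading.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    low_cpu_mem_usage=True,
)
```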
-
Suraj Patil authored
make input_args optional
-
Pedro Cuenca authored
* Add failing test for #940. * Do not use torch.float64 in mps. * style * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed. * Fix additional float64 error in mps. * Improve add_noise test * Slight edit – I think it's clearer this way.
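Background for the mps changes: the mps backend has no float64 kernels, so any scheduler math that defaults to double precision has to drop to float32 on that device. A minimal sketch of the pattern, not the exact scheduler code:

```python
import torch

def timestep_dtype(device: torch.device) -> torch.dtype:
    # mps does not support float64, so keep high-precision scheduler math in
    # float32 there and only use float64 elsewhere.
    return torch.float32 if device.type == "mps" else torch.float64

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
timesteps = torch.linspace(0, 999, 50, dtype=timestep_dtype(device), device=device)
```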
-
- 26 Oct, 2022 10 commits
-
-
Duong A. Nguyen authored
* add textual inversion flax * make style * make style * replicate vae and unet params * make style * minor * save after end of training * style * Temporary fix * Add Flax instruction
Co-authored-by: Suraj Patil <surajp815@gmail.com>
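Replicating the frozen vae and unet params across devices uses the standard flax utilities; a hedged sketch with stand-in pytrees rather than the real model parameters:

```python
import numpy as np
import jax
from flax import jax_utils
from flax.training.common_utils import shard

# Stand-ins for a frozen parameter pytree and a data batch.
params = {"kernel": np.zeros((4, 4), dtype=np.float32)}
batch = {"pixel_values": np.zeros((jax.local_device_count() * 2, 3, 64, 64), dtype=np.float32)}

# Parameters are copied to every local device once ...
params = jax_utils.replicate(params)

# ... while each batch is split along its leading axis so every device gets a shard.
batch = shard(batch)
```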
-
Brian Whicheloe authored
* Make training code usable by external scripts. Add parameter inputs to the training and argument-parsing functions so the script can be called externally. * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
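A minimal sketch of that pattern (the same idea as the "make input_args optional" commit above): let the argument parser accept an explicit list so other code can call the script without going through the CLI. Argument names here are illustrative only.

```python
import argparse

def parse_args(input_args=None):
    # With input_args=None argparse reads sys.argv as usual; an external caller
    # can instead pass an explicit list, e.g. parse_args(["--learning_rate", "5e-5"]).
    parser = argparse.ArgumentParser(description="training script")
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    return parser.parse_args(input_args)

def main(args):
    print(f"training with lr={args.learning_rate}")

if __name__ == "__main__":
    main(parse_args())
```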
-
Simon Kirsten authored
-
Hu Ye authored
-
Pi Esposito authored
* add method to enable cuda with minimal gpu usage to stable diffusion * add test for minimal cuda memory usage * ensure all models but unet are on torch.float32 * move to cpu_offload along with minor internal changes to make it work * make it test against accelerate master branch * coming back, it's official: I don't know how to make it test against the master branch from accelerate * make it install accelerate from master on tests * go back to accelerate>=0.11 * undo prettier formatting on yml files * undo prettier formatting on yml files again
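The mechanism behind the minimal-GPU-memory mode is accelerate's cpu_offload, which keeps each submodule in CPU RAM and moves it to the GPU only for its forward pass; a hedged sketch of the idea, not the exact pipeline method added here:

```python
import torch
from accelerate import cpu_offload
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
device = torch.device("cuda")

# Each submodule stays on CPU and is copied to the GPU only while it runs,
# trading speed for a much smaller peak VRAM footprint.
for model in (pipe.unet, pipe.text_encoder, pipe.vae, pipe.safety_checker):
    if model is not None:
        cpu_offload(model, execution_device=device)

image = pipe("a photo of an astronaut riding a horse").images[0]
```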
-
Julien Simon authored
-
Yuta Hayashibe authored
-
Hu Ye authored
remove tensor_format in the new version
-
Patrick von Platen authored
CompVis -> diffusers script - allow converting from merged checkpoint to either EMA or non-EMA (#991) * improve script * up
-
Pedro Cuenca authored
* Add failing test for #940. * Do not use torch.float64 in mps. * style * Temporarily skip add_noise for IPNDMScheduler. Until #990 is addressed.
-
- 25 Oct, 2022 1 commit
-
-
Yuta Hayashibe authored
* Add --pretrained_model_name_revision option to train_dreambooth.py * Renamed --pretrained_model_name_revision to --revision
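A hedged sketch of what the renamed option amounts to: a --revision flag that is forwarded to from_pretrained so a specific branch, tag, or commit of the model repo can be trained against. The argument names mirror the script's but the snippet itself is illustrative.

```python
import argparse
from diffusers import UNet2DConditionModel

parser = argparse.ArgumentParser()
parser.add_argument("--pretrained_model_name_or_path", type=str, required=True)
parser.add_argument(
    "--revision",
    type=str,
    default=None,
    help="Model repo revision (branch, tag, or commit id) to load, e.g. 'fp16'.",
)
args = parser.parse_args()

# The revision is simply forwarded to the from_pretrained calls.
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
```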
-