- 20 Dec, 2022 6 commits
-
-
Pedro Cuenca authored
* Section header for in-painting, inference from checkpoint. * Inference: link to section to perform inference from checkpoint. * Move Dreambooth in-painting instructions to the proper place.
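For readers following the inference-from-checkpoint notes above, a minimal sketch of loading a Dreambooth training output for inference, assuming the training script saved a full pipeline to a hypothetical `path/to/dreambooth-model` directory:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline that the Dreambooth script saved with save_pretrained()
# ("path/to/dreambooth-model" is a placeholder for your own output_dir).
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("dreambooth_sample.png")
```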
-
Patrick von Platen authored
* allow model download when no internet * up * make style
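As a rough illustration of offline use (not necessarily the exact code path this commit touches): diffusers' `from_pretrained` accepts `local_files_only`, so a previously downloaded model can be loaded without network access.

```python
from diffusers import DiffusionPipeline

# Re-use weights already present in the local cache; no Hub connection is made.
# Assumes an earlier online run downloaded the model.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    local_files_only=True,
)
```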
-
Simon Kirsten authored
* [Flax] Stateless schedulers, fixes and refactors * Remove scheduling_common_flax and some renames * Update src/diffusers/schedulers/scheduling_pndm_flax.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Emil Bogomolov authored
* expose polynomial:power and cosine_with_restarts:num_cycles using get_scheduler func, add it to train_dreambooth.py * fix formatting * fix style * Update src/diffusers/optimization.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
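A sketch of the newly exposed arguments on `get_scheduler`; the optimizer and hyperparameter values are illustrative only.

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)  # stand-in for the trainable model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# "cosine_with_restarts" now accepts num_cycles; "polynomial" accepts power.
lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,
)
```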
-
Patrick von Platen authored
-
Ilmari Heikkinen authored
* only check for xformers when xformers are enabled * only test for xformers when enabling them
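For context, the check in question only matters once memory-efficient attention is actually enabled, which happens explicitly on the pipeline; a minimal sketch:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Only this call requires (and now triggers the check for) the xformers package.
pipe.enable_xformers_memory_efficient_attention()
```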
-
- 19 Dec, 2022 19 commits
-
-
Prathik Rao authored
* reflect changes * run make style Co-authored-by: Prathik Rao <prathikrao@microsoft.com> Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
-
Pedro Cuenca authored
* Fail if there are fewer images than the effective batch size. * Remove lr-scheduler arg as it's currently ignored. * Make guidance_scale work for batch_size > 1.
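The "effective batch size" here is roughly `train_batch_size * gradient_accumulation_steps * num_processes`; a self-contained sketch of that kind of guard (the helper is illustrative, not the script's actual code):

```python
def check_dataset_size(num_images: int, train_batch_size: int,
                       gradient_accumulation_steps: int, num_processes: int) -> None:
    """Illustrative guard: fail early if the dataset cannot fill one optimizer step."""
    effective_batch_size = train_batch_size * gradient_accumulation_steps * num_processes
    if num_images < effective_batch_size:
        raise ValueError(
            f"Got {num_images} images, fewer than the effective batch size "
            f"of {effective_batch_size}."
        )

# Passes: 16 images comfortably cover 1 * 4 * 2 = 8 samples per optimizer step.
check_dataset_size(num_images=16, train_batch_size=1,
                   gradient_accumulation_steps=4, num_processes=2)
```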
-
Anton Lozhkov authored
-
Anton Lozhkov authored
-
Nan Liu authored
* update composable diffusion for an updated diffusers library * fix style/quality for code * Revert "fix style/quality for code" This reverts commit 71f23497639fe69de00d93cf91edc31b08dcd7a4. * update style * reduce memory usage by computing score sequentially
-
anton- authored
-
anton- authored
-
Anton Lozhkov authored
* Transformers version req for UnCLIP * add to the list
-
Anish Shah authored
Update train_unconditional.py: add a logger flag to choose between tensorboard and wandb
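Under the hood this kind of switch typically maps to `accelerate`'s `log_with` argument; a minimal sketch (not necessarily the script's exact implementation):

```python
from accelerate import Accelerator

# The flag value ("tensorboard" or "wandb") selects the tracking backend.
accelerator = Accelerator(log_with="wandb")

# Metrics are then reported through the chosen tracker, e.g.:
# accelerator.init_trackers("train_unconditional")
# accelerator.log({"loss": loss.item()}, step=global_step)
```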
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Anton Lozhkov authored
* Add CPU offloading to UnCLIP * use fp32 for testing the offload
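A usage sketch of CPU offloading on the UnCLIP pipeline; the method name follows diffusers' existing offloading API, and the prompt is illustrative.

```python
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha")

# Submodules are moved to the GPU one at a time during the forward pass,
# trading speed for a much smaller peak VRAM footprint.
pipe.enable_sequential_cpu_offload()

image = pipe("a high-resolution photograph of a corgi").images[0]
```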
-
Suraj Patil authored
duplicate mask for num_images_per_prompt
-
Anton Lozhkov authored
-
Patrick von Platen authored
* Remove bogus file * [Unclip] Add efficient attention * [Unclip] Add efficient attention
-
Anton Lozhkov authored
-
Mikołaj Siedlarek authored
-
Will Berman authored
* [unCLIP docs] markdown * [unCLIP docs] UnCLIPPipeline
-
Will Berman authored
* [fix] pipeline_unclip generator pass generator to all schedulers * fix fast tests test data
-
- 18 Dec, 2022 3 commits
-
-
Will Berman authored
* [wip] attention block updates * [wip] unCLIP unet decoder and super res * [wip] unCLIP prior transformer * [wip] scheduler changes * [wip] text proj utility class * [wip] UnCLIPPipeline * [wip] kakaobrain unCLIP convert script * [unCLIP pipeline] fixes re: @patrickvonplaten remove callbacks move denoising loops into call function * UNCLIPScheduler re: @patrickvonplaten Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified DDPM scheduler with changes to support karlo * mask -> attention_mask re: @patrickvonplaten * [DDPMScheduler] remove leftover change * [docs] PriorTransformer * [docs] UNet2DConditionModel and UNet2DModel * [nit] UNCLIPScheduler -> UnCLIPScheduler matches existing unclip naming better * [docs] SchedulingUnCLIP * [docs] UnCLIPTextProjModel * refactor * finish licenses * rename all to attention_mask and prep in models * more renaming * don't expose unused configs * final renaming fixes * remove x attn mask when not necessary * configure kakao script to use new class embedding config * fix copies * [tests] UnCLIPScheduler * finish x attn * finish * remove more * rename condition blocks * clean more * Apply suggestions from code review * up * fix * [tests] UnCLIPPipelineFastTests * remove unused imports * [tests] UnCLIPPipelineIntegrationTests * correct * make style Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
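For reference, the pipeline added here can be used roughly as follows; `kakaobrain/karlo-v1-alpha` is the checkpoint the conversion script targets, and the prompt is illustrative.

```python
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The prior maps the text prompt to CLIP image embeddings; the decoder and
# super-resolution UNets then generate and upscale the image.
image = pipe("a painting of a fox in the style of starry night").images[0]
image.save("unclip_fox.png")
```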
-
Patrick von Platen authored
-
Anton Lozhkov authored
* Fix/update LDM tests * batched generators
-
- 17 Dec, 2022 3 commits
-
-
Anton Lozhkov authored
* unset level
-
Peter authored
Co-authored-by: Peter <peterto@users.noreply.github.com>
-
Patrick von Platen authored
[Batched Generators] This PR adds generators that are useful to make batched generation fully reproducible (#1718) * [Batched Generators] all batched generators * up * up * up * up * up * up * up * up * up * up * up * up * up * up * up * up * hey * up again * fix tests * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * correct tests Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
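The pattern this PR enables: pass one seeded `torch.Generator` per image so every sample in a batch is reproducible on its own. A sketch:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# One generator per prompt: image i depends only on seed i,
# not on the rest of the batch.
generators = [torch.Generator(device="cuda").manual_seed(seed) for seed in (0, 1, 2, 3)]
images = pipe(["an astronaut riding a horse"] * 4, generator=generators).images
```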
-
- 16 Dec, 2022 5 commits
-
-
Anton Lozhkov authored
* [WIP] Nightly integration tests * initial SD tests * update SD slow tests * style * repaint * ImageVariations * style * finish imgvar * img2img tests * debug * inpaint 1.5 * inpaint legacy * torch isn't happy about deterministic ops * allclose -> max diff for shorter logs * add SD2 * debug * Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix refs * Update src/diffusers/utils/testing_utils.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * fix refs * remove debug Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Pedro Cuenca authored
* Fix links to flash attention. * Add xformers installation instructions. * Make link to xformers install more prominent. * Link to xformers install from training docs.
-
Patrick von Platen authored
-
Anton Lozhkov authored
* Fix ONNX img2img preprocessing and add fast tests coverage * revert * disable progressbars
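A rough usage sketch of the ONNX img2img pipeline covered by these tests; the model revision, execution provider, and synthetic init image are illustrative, and argument names follow the current diffusers API.

```python
from PIL import Image
from diffusers import OnnxStableDiffusionImg2ImgPipeline

pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="onnx", provider="CPUExecutionProvider"
)

# A flat-colour placeholder stands in for a real starting image.
init_image = Image.new("RGB", (768, 512), color=(128, 160, 200))
image = pipe("a fantasy landscape", image=init_image, strength=0.75).images[0]
```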
-
Partho authored
* Latent Diffusion pipeline accepts latents * make style * check for mps: randn does not work reproducibly on mps
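The motivation: `torch.randn` is not reproducible on mps, so the caller can draw the initial latents on the CPU and hand them to the pipeline. A sketch using a Stable Diffusion-sized latent space (the prompt and resolution are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")

# Seeded latents generated on CPU, then passed in instead of being sampled on mps.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 512 // 8, 512 // 8), generator=generator
)
image = pipe("a photo of a lighthouse at dusk", latents=latents).images[0]
```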
-
- 15 Dec, 2022 4 commits
-
-
YiYi Xu authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Haihao Shen authored
* Add examples with Intel optimizations (BF16 fine-tuning and inference) * Remove unused package * Add README for intel_opts and refine the description for research projects * Add notes of intel opts for diffusers
-
jiqing-feng authored
* add conf.yaml * enable bf16: enable amp bf16 for unet forward, fix style, fix readme, remove useless file * change amp to full bf16 * align * make style * fix format
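A sketch of the "full bf16" idea mentioned here (as opposed to autocast-style mixed precision): cast the UNet's weights to bfloat16 and run the forward pass directly in that dtype. The model id and tensor shapes are illustrative.

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Full bf16: weights and activations in bfloat16, no autocast context required.
unet = unet.to(dtype=torch.bfloat16)

sample = torch.randn(1, 4, 64, 64, dtype=torch.bfloat16)
timestep = torch.tensor([10])
encoder_hidden_states = torch.randn(1, 77, 768, dtype=torch.bfloat16)
noise_pred = unet(sample, timestep, encoder_hidden_states).sample
```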
-
CyberMeow authored
* update inpaint_legacy to allow the use of predicted noise to construct intermediate diffused images * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-