- 27 Jan, 2023 3 commits
Patrick von Platen authored
Patrick von Platen authored
Patrick von Platen authored
* [LoRA] Allow use in inference with the pipeline
* [LoRA] Allow cross-attention kwargs to be passed to the pipeline
* finish
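Passing `cross_attention_kwargs` through the pipeline call is what lets a LoRA scale be chosen at inference time. A minimal sketch, assuming a Stable Diffusion checkpoint and already-trained LoRA attention weights (the weight path below is hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Hypothetical path: LoRA attention-processor weights produced by a LoRA fine-tune.
pipe.unet.load_attn_procs("path/to/lora_weights")

# `cross_attention_kwargs` is forwarded to the attention processors; `scale`
# blends the LoRA update with the frozen base weights (0 = base only, 1 = full LoRA).
image = pipe(
    "a pokemon with blue eyes",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```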
- 26 Jan, 2023 4 commits
Patrick von Platen authored
Will Berman authored
* fuse attention mask
* lint
* use 0 beta when no attention mask, re: @Birch-san
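The `0 beta` trick folds the additive attention mask into the same `torch.baddbmm` call that computes the attention scores: with `beta=0`, `baddbmm` ignores its `input` tensor entirely, so an uninitialized buffer is safe when there is no mask. A sketch of the idea, not the library's exact code:

```python
import torch

def attention_scores(query, key, attention_mask=None, scale=1.0):
    # out = beta * input + alpha * (query @ key^T): the additive mask rides along in `input`.
    if attention_mask is None:
        # With beta=0, baddbmm ignores `input`, so an empty (uninitialized) tensor is fine.
        mask = torch.empty(
            query.shape[0], query.shape[1], key.shape[1],
            dtype=query.dtype, device=query.device,
        )
        beta = 0
    else:
        mask, beta = attention_mask, 1
    scores = torch.baddbmm(mask, query, key.transpose(-1, -2), beta=beta, alpha=scale)
    return scores.softmax(dim=-1)

# Shapes: (batch * heads, seq_q, head_dim) and (batch * heads, seq_k, head_dim).
q, k = torch.randn(2, 8, 16), torch.randn(2, 8, 16)
probs = attention_scores(q, k, scale=16 ** -0.5)
```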
Suraj Patil authored
* make scaling factor a config arg of the VAE
* fix
* make flake happy
* fix ldm
* fix upscaler
* quality
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* solve conflicts, address some comments
* examples
* examples min version
* doc
* fix type
* typo
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove duplicate line
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
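With the scaling factor promoted to a config arg, code can read it from the VAE instead of hard-coding `0.18215`. A short sketch (the checkpoint id is illustrative; any `AutoencoderKL` whose config carries `scaling_factor` works):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
print(vae.config.scaling_factor)  # 0.18215 for Stable Diffusion v1 checkpoints

images = torch.randn(1, 3, 512, 512)  # placeholder batch in [-1, 1]
# Encode into the scaled latent space the diffusion model is trained in...
latents = vae.encode(images).latent_dist.sample() * vae.config.scaling_factor
# ...and undo the scaling before decoding back to pixels.
decoded = vae.decode(latents / vae.config.scaling_factor).sample
```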
Pedro Cuenca authored
* Allow `UNet2DModel` to use arbitrary class embeddings.

  We can currently use class conditioning in `UNet2DConditionModel`, but not in `UNet2DModel`. However, `UNet2DConditionModel` also requires text conditioning, which is unrelated to other types of conditioning. This commit makes it possible for `UNet2DModel` to be conditioned on entities other than timesteps, which is useful for training / research purposes. We can currently train models to perform unconditional image generation or text-to-image generation, but it's not straightforward to train a model to perform class-conditioned image generation when text conditioning is not required. We could potentially use `UNet2DConditionModel` for class conditioning without text embeddings by using down/up blocks without cross-attention. However:
  - The mid block currently requires cross attention.
  - We are required to provide `encoder_hidden_states` to `forward`.
* Style
* Align class conditioning, add docstring for `num_class_embeds`.
* Copy docstring to versatile_diffusion UNetFlatConditionModel
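A sketch of the new capability: a small class-conditional `UNet2DModel`. All sizes below are illustrative, not a recommended configuration:

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    num_class_embeds=10,  # e.g. one learned embedding per CIFAR-10 class
)

sample = torch.randn(4, 3, 32, 32)
timesteps = torch.randint(0, 1000, (4,))
class_labels = torch.randint(0, 10, (4,))
# The class embedding is added to the timestep embedding inside the model.
noise_pred = model(sample, timesteps, class_labels=class_labels).sample
```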
- 24 Jan, 2023 1 commit
Takuma Mori authored
* allow passing op to xFormers attention (original code by @patil-suraj: huggingface/diffusers@ae0cc0b71f28c0f2c5c27026b18f1bea98b505f1)
* correct style by `make style`
* add attention_op arg documents
* add usage example to docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add usage example to docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* code style correction by `make style`
* Update docstring code to a valid python example
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Update docstring code to a valid python example
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* style correction by `make style`
* Update code example to fully functional
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
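The new `attention_op` argument pins xFormers to one kernel instead of letting it auto-dispatch. A sketch along the lines of the docstring example this PR added (checkpoint id illustrative; requires a CUDA GPU with xformers installed):

```python
import torch
from diffusers import StableDiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Force the Flash-Attention kernel instead of xFormers' automatic dispatch.
pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
# Workaround from the docs: the VAE may not support this op, so let it auto-dispatch.
pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```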
- 19 Jan, 2023 1 commit
Patrick von Platen authored
correct safetensors
- 18 Jan, 2023 1 commit
Patrick von Platen authored
* [Lora] first upload
* add first lora version
* upload
* more
* first training
* up
* correct
* improve
* finish loaders and inference
* up
* up
* fix more
* up
* finish more
* finish more
* up
* up
* change year
* revert year change
* Change lines
* Add cloneofsimo as co-author.
Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
* finish
* fix docs
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* upload
* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
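Under the hood, the loaders swap each attention module's processor for a LoRA-augmented one sized to that module. A sketch of that setup; note the processor class's import path has moved between diffusers releases, so treat the path below as an assumption:

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor  # path varies by release

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

lora_attn_procs = {}
for name in unet.attn_processors:
    # attn1 is self-attention (no cross-attention dim); attn2 attends to the text states.
    cross_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        hidden = list(reversed(unet.config.block_out_channels))[int(name[len("up_blocks.")])]
    else:  # down_blocks
        hidden = unet.config.block_out_channels[int(name[len("down_blocks.")])]
    lora_attn_procs[name] = LoRAAttnProcessor(hidden_size=hidden, cross_attention_dim=cross_dim)

# Base weights stay frozen; only the low-rank processor parameters train.
unet.set_attn_processor(lora_attn_procs)
```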
- 17 Jan, 2023 1 commit
Kashif Rasul authored
* added dit model
* import
* initial pipeline
* initial convert script
* initial pipeline
* make style
* raise ValueError
* single function
* rename classes
* use DDIMScheduler
* timesteps embedder
* samples to cpu
* fix var names
* fix numpy type
* use timesteps class for proj
* fix typo
* fix arg name
* flip_sin_to_cos and better var names
* fix C shape calc
* make style
* remove unused imports
* cleanup
* add back patch_size
* initial dit doc
* typo
* Update docs/source/api/pipelines/dit.mdx
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* added copyright license headers
* added example usage and toc
* fix variable names asserts
* remove comment
* added docs
* fix typo
* upstream changes
* set proper device for drop_ids
* added initial dit pipeline test
* update docs
* fix imports
* make fix-copies
* isort
* fix imports
* get rid of more magic numbers
* fix code when guidance is off
* remove block_kwargs
* cleanup script
* removed to_2tuple
* use FeedForward class instead of another MLP
* style
* work on merging DiTBlock with BasicTransformerBlock
* added missing final_dropout and args to BasicTransformerBlock
* use norm from block
* fix arg
* remove unused arg
* fix call to class_embedder
* use timesteps
* make style
* attn_output gets multiplied
* removed commented code
* use Transformer2D
* use self.is_input_patches
* fix flags
* fixed conversion to use Transformer2DModel
* fixes for pipeline
* remove dit.py
* fix timesteps device
* use randn_tensor and fix fp16 inference
* timesteps_emb already the right dtype
* fix dit test class
* fix test and style
* fix norm2 usage in vq-diffusion
* added author names to pipeline and ImageNet labels link
* fix tests
* use norm_type as string
* rename dit to transformer
* fix name
* fix test
* set norm_type = "layer" by default
* fix tests
* do not skip common tests
* Update src/diffusers/models/attention.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* revert AdaLayerNorm API
* fix norm_type name
* make sure all components are in eval mode
* revert norm2 API
* compact
* finish deprecation
* add slow tests
* remove @
* refactor some stuff
* upload
* Update src/diffusers/pipelines/dit/pipeline_dit.py
* finish more
* finish docs
* improve docs
* finish docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
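The resulting pipeline is class-conditional on ImageNet labels. A usage sketch close to the example added to the docs:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")

# DiT is conditioned on ImageNet class ids; the pipeline can look them up by name.
class_ids = pipe.get_label_ids(["white shark", "golden retriever"])
generator = torch.manual_seed(33)
images = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator).images
```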
- 16 Jan, 2023 2 commits
Will Berman authored
re: https://github.com/huggingface/diffusers/issues/1857

We relax some of the checks to deal with unCLIP reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1).
- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
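A sketch of why the relaxed comparison is more forgiving: an average absolute difference over 8-bit pixel values barely notices a few outlier pixels, while a max difference over [0, 1] fails on a single one (tolerances and shapes illustrative):

```python
import numpy as np

def max_diff(image, expected):
    # Strict check: worst single pixel, images in [0, 1].
    return np.abs(image - expected).max()

def avg_diff_255(image, expected):
    # Relaxed check: mean over all pixels, measured on the 0-255 scale.
    return np.abs(image * 255.0 - expected * 255.0).mean()

image = np.random.rand(64, 64, 3)
expected = image.copy()
expected[0, 0] += 0.5  # one bad pixel
print(max_diff(image, expected))      # 0.5  -> fails a tight max-diff threshold
print(avg_diff_255(image, expected))  # ~0.03 -> passes an average-diff threshold
```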
Patrick von Platen authored
- 12 Jan, 2023 1 commit
camenduru authored
* from_flax
* oops
* oops
* make style with pip install -e ".[dev]"
* oops
* now code quality happy 😋
* allow_patterns += FLAX_WEIGHTS_NAME
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* for test
* bye bye is_flax_available()
* oops
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_pytorch_flax_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* make style
* add test
* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
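After this change, PyTorch pipelines can load Flax `.msgpack` weights directly. A sketch; the repo id is illustrative and must actually contain Flax weights:

```python
from diffusers import StableDiffusionPipeline

# `from_flax=True` converts the Flax (.msgpack) checkpoint to PyTorch on load.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    from_flax=True,
)
```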
- 04 Jan, 2023 1 commit
Patrick von Platen authored
* [Repro] Correct reproducibility
* up
* up
* up
* up
* need better image
* allow conversion from no state dict checkpoints
* up
* up
* up
* up
* check tensors
* check tensors
* check tensors
* check tensors
* next try
* up
* up
* better name
* up
* up
* Apply suggestions from code review
* correct more
* up
* replace all torch randn
* fix
* correct
* correct
* finish
* fix more
* up
- 03 Jan, 2023 1 commit
Patrick von Platen authored
* [Deterministic torch randn] Allow tensors to be generated on CPU
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
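The practical upshot: pass a CPU `torch.Generator` to a pipeline and the initial noise is drawn on CPU before being moved to the device, so a seed reproduces across GPUs. A sketch with an illustrative checkpoint:

```python
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")

# Seeded on CPU: the same seed now yields the same image on any CUDA device,
# because the random latents are created on CPU and then moved to the GPU.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(generator=generator).images[0]
```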
- 02 Jan, 2023 1 commit
YiYi Xu authored
* add a doc page for each pipeline under api/pipelines/stable_diffusion
* add pipeline examples to docstrings
* updated stable_diffusion_2 page
* updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870
* add function decorator

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 01 Jan, 2023 1 commit
Patrick von Platen authored
* [Attention] Finish refactor attention file
* correct more
* fix
* more fixes
* correct
* up
- 30 Dec, 2022 1 commit
Patrick von Platen authored
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 29 Dec, 2022 1 commit
Simon Kirsten authored
* Flax: Add components function
* Flax: Fix img2img and align with other pipelines
* Flax: Fix PRNGKey type
* Refactor strength to start_timestep
* Fix preprocess images
* Fix processed_images dimensions
* latents.shape -> latents_shape
* Fix typo
* Remove "static" comment
* Remove unnecessary optional types in _generate
* Apply doc-builder code style.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
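The `components` property mirrors the PyTorch pipelines: it exposes the loaded sub-models so a second pipeline can reuse them without loading weights twice. A sketch, assuming a repo that ships Flax weights (id illustrative):

```python
from diffusers import FlaxStableDiffusionPipeline, FlaxStableDiffusionImg2ImgPipeline

# Flax `from_pretrained` returns the pipeline plus its parameter tree.
text2img, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
)
# Reuse the already-loaded sub-models in an img2img pipeline; the same
# `params` tree is passed at call time, so nothing is loaded twice.
img2img = FlaxStableDiffusionImg2ImgPipeline(**text2img.components)
```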
- 28 Dec, 2022 1 commit
Patrick von Platen authored
fix versatile
- 27 Dec, 2022 1 commit
William Held authored
* Width was typo'd as weight
* Run Black
- 20 Dec, 2022 3 commits
Patrick von Platen authored
* first proposal
* rename
* up
* Apply suggestions from code review
* better
* up
* finish
* up
* rename
* correct versatile
* up
* up
* up
* up
* fix
* Apply suggestions from code review
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add error message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Patrick von Platen authored
Ilmari Heikkinen authored
* only check for xformers when xformers are enabled
* only test for xformers when enabling them
- 19 Dec, 2022 3 commits
Patrick von Platen authored
Anton Lozhkov authored
Patrick von Platen authored
* Remove bogus file
* [Unclip] Add efficient attention
* [Unclip] Add efficient attention
- 18 Dec, 2022 1 commit
Will Berman authored
* [wip] attention block updates
* [wip] unCLIP unet decoder and super res
* [wip] unCLIP prior transformer
* [wip] scheduler changes
* [wip] text proj utility class
* [wip] UnCLIPPipeline
* [wip] kakaobrain unCLIP convert script
* [unCLIP pipeline] fixes re: @patrickvonplaten (remove callbacks, move denoising loops into call function)
* UNCLIPScheduler re: @patrickvonplaten (revert changes to DDPMScheduler; make UNCLIPScheduler a modified DDPM scheduler with changes to support karlo)
* mask -> attention_mask re: @patrickvonplaten
* [DDPMScheduler] remove leftover change
* [docs] PriorTransformer
* [docs] UNet2DConditionModel and UNet2DModel
* [nit] UNCLIPScheduler -> UnCLIPScheduler (matches existing unclip naming better)
* [docs] SchedulingUnCLIP
* [docs] UnCLIPTextProjModel
* refactor
* finish licenses
* rename all to attention_mask and prep in models
* more renaming
* don't expose unused configs
* final renaming fixes
* remove x attn mask when not necessary
* configure kakao script to use new class embedding config
* fix copies
* [tests] UnCLIPScheduler
* finish x attn
* finish
* remove more
* rename condition blocks
* clean more
* Apply suggestions from code review
* up
* fix
* [tests] UnCLIPPipelineFastTests
* remove unused imports
* [tests] UnCLIPPipelineIntegrationTests
* correct
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
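Usage of the finished pipeline with the Karlo checkpoint the convert script targets, as a short sketch (prompt illustrative):

```python
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
).to("cuda")
image = pipe("a high-resolution photograph of a red frog on a green leaf").images[0]
```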
- 13 Dec, 2022 1 commit
Patrick von Platen authored
* [SD] Make sure batched input works correctly
* up
* up
* up
* up
* up
* up
* fix mask stuff
* up
* up
* more up
* up
* up
* up
* finish
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 09 Dec, 2022 1 commit
Patrick von Platen authored
* do not automatically enable xformers
* up
- 07 Dec, 2022 4 commits
Pedro Cuenca authored
* Make cross-attention check more robust.
* Fix copies.
Suraj Patil authored
* fix upcast in slice attention
* fix dtype
* add test
* fix test
Suraj Patil authored
upcast attention
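What upcasting attention means in practice: the score matmul and softmax run in float32 even when the model weights are float16, avoiding fp16 overflow for some checkpoints. A minimal sketch, not the library's exact code:

```python
import torch

def attention_probs_upcast(query, key, scale):
    # Run q @ k^T and the softmax in float32, then cast back to the model dtype.
    dtype = query.dtype
    scores = torch.matmul(query.float(), key.float().transpose(-1, -2)) * scale
    return scores.softmax(dim=-1).to(dtype)

q = torch.randn(2, 8, 16, dtype=torch.float16)
k = torch.randn(2, 8, 16, dtype=torch.float16)
probs = attention_probs_upcast(q, k, scale=16 ** -0.5)  # float16 out, float32 math
```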
Patrick von Platen authored
* add paint by example
* make loading possible
* up
* Update src/diffusers/models/attention.py
* up
* finalize weight structure
* make example work
* make it work
* up
* up
* fix
* del
* add
* update
* Apply suggestions from code review
* correct transformer 2d
* finish
* up
* up
* up
* up
* fix
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
* up
* finish

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
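Paint-by-Example inpaints a masked region to match an example image instead of a text prompt. A usage sketch; the image URLs are placeholders and all three inputs are same-size PIL images:

```python
from io import BytesIO

import requests
import torch
from PIL import Image
from diffusers import PaintByExamplePipeline

def download_image(url):
    # Placeholder helper; the URLs below are illustrative.
    return Image.open(BytesIO(requests.get(url).content)).convert("RGB")

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
).to("cuda")

# A scene, a mask of the region to replace, and an example image whose
# subject should be painted into the masked region.
init_image = download_image("https://example.com/scene.png").resize((512, 512))
mask_image = download_image("https://example.com/mask.png").resize((512, 512))
example = download_image("https://example.com/reference.png").resize((512, 512))

result = pipe(image=init_image, mask_image=mask_image, example_image=example).images[0]
```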
- 05 Dec, 2022 3 commits
Patrick von Platen authored
Suraj Patil authored
* make attn slice recursive
* remove set_attention_slice from blocks
* fix copies
* make enable_attention_slicing base class method of DiffusionPipeline
* fix set_attention_slice
* fix set_attention_slice
* fix copies
* add tests
* up
* up
* up
* update
* up
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
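With the toggle promoted to `DiffusionPipeline`, every pipeline exposes the same memory/speed switch. A sketch (checkpoint id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compute attention in slices to lower peak memory (slower, but fits smaller GPUs).
pipe.enable_attention_slicing()
image = pipe("a photo of an astronaut riding a horse").images[0]

# Restore full attention when memory is not a concern.
pipe.disable_attention_slicing()
```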
Benjamin Lefaudeux authored
small but mighty
- 03 Dec, 2022 1 commit
Ilmari Heikkinen authored
* Add xformers attention to VAE
* Simplify VAE xformers code
* Update src/diffusers/models/attention.py

Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
- 02 Dec, 2022 1 commit
Patrick von Platen authored