- 12 Apr, 2023 10 commits
-
-
Sayak Paul authored
* fix: norm group test for UNet3D. * chore: speed up the panorama tests (fast). * set default value of _test_inference_batch_single_identical. * fix: batch_sizes default value.
-
Andranik Movsisyan authored
* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward * fix tensor loading in test_text_to_video_zero.py * make style && make quality
-
Pedro Cuenca authored
Update Flax TPU tests. Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Nipun Jindal authored
* [2737]: Add Karras DPMSolverMultistepScheduler * [2737]: Add Karras DPMSolverMultistepScheduler * Add test * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix: repo consistency. * remove Copied from statement from the set_timestep method. * fix: test * Empty commit. Co-authored-by:
njindal <njindal@adobe.com> --------- Co-authored-by:
njindal <njindal@adobe.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
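For context, a minimal usage sketch of the Karras sigma option added here, assuming the `use_karras_sigmas` flag on `DPMSolverMultistepScheduler` and a standard Stable Diffusion checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# use_karras_sigmas=True switches the noise levels to the Karras et al. (2022) spacing
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```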
-
Sean Sube authored
* add support for prompt embeds to SD ONNX pipeline * fix up the pipeline copies * add prompt embeds param to other ONNX pipelines * fix up prompt embeds param for SD upscaling ONNX pipeline * add missing type annotations to ONNX pipes
-
Will Berman authored
* fix pipeline __setattr__ * add test --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Andy authored
* initial commit for lora test cases * help a bit with lora for 3d * fixed lora tests * replaced redundant code --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
Pedro Cuenca authored
* add use_memory_efficient params placeholder * test * add memory efficient attention jax * add memory efficient attention jax * newline * forgot dot * Rename use_memory_efficient * Keep dtype last. * Actually use key_chunk_size * Rename symbol * Apply style * Rename use_memory_efficient * Keep dtype last * Pass `use_memory_efficient_attention` in `from_pretrained` * Move JAX memory efficient attention to attention_flax. * Simple test. * style --------- Co-authored-by:
muhammad_hanif <muhammad_hanif@sofcograha.co.id> Co-authored-by:
MuhHanif <48muhhanif@gmail.com>
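A short sketch of how the new Flax flag might be enabled, assuming `use_memory_efficient_attention` is forwarded through `from_pretrained` as described above (the checkpoint and revision follow the usual Flax examples and are not part of this commit):

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# use_memory_efficient_attention=True routes attention through the chunked
# implementation added to attention_flax
pipe, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="bf16",
    dtype=jnp.bfloat16,
    use_memory_efficient_attention=True,
)
```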
-
Patrick von Platen authored
* fix slow tests * make style
-
Sayak Paul authored
* add: first draft for a better LoRA enabler. * make fix-copies. * feat: backward compatibility. * add: entry to the docs. * add: tests. * fix: docs. * fix: norm group test for UNet3D. * feat: add support for flat dicts. * add deprecation message instead of warning.
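A rough usage sketch, assuming the new enabler is surfaced as `load_lora_weights` on the pipeline; the LoRA path below is a placeholder, not a real checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "path/to/lora" is a placeholder for a local directory or Hub repo id
# containing LoRA attention weights (flat state dicts are also accepted per this PR)
pipe.load_lora_weights("path/to/lora")

image = pipe("a pokemon with blue eyes").images[0]
```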
-
- 11 Apr, 2023 7 commits
-
-
Will Berman authored
add AttnAddedKVProcessor2_0 block
-
Chanchana Sornsoontorn authored
* ⚙️ chore(train_controlnet): fix typo in logger message * ⚙️ chore(models): refactor module order so it matches the calling order. When printing the BasicTransformerBlock to stdout, I think it's crucial that the attributes are shown in the order they are called. Also, the "3. Feed Forward" comment previously made no sense: it should have been next to self.ff, but it was instead next to self.norm3 * correct many tests * remove bogus file * make style * correct more tests * finish tests * fix one more * make style * make unclip deterministic * ⚙️ chore(models/attention): reorganize comments in the BasicTransformerBlock class --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
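An illustrative toy block (not the actual diffusers class) showing the convention the refactor aims for: submodules declared in the order they are called, with the numbered comments next to the layers they describe:

```python
import torch.nn as nn

class TransformerBlockSketch(nn.Module):
    def __init__(self, dim: int, heads: int):
        super().__init__()
        # 1. Self-attention
        self.norm1 = nn.LayerNorm(dim)
        self.attn1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        # 2. Cross-attention
        self.norm2 = nn.LayerNorm(dim)
        self.attn2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        # 3. Feed-forward
        self.norm3 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, context):
        h = self.norm1(x)
        x = x + self.attn1(h, h, h)[0]
        h = self.norm2(x)
        x = x + self.attn2(h, context, context)[0]
        x = x + self.ff(self.norm3(x))
        return x
```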
-
Will Berman authored
* add only cross attention to simple attention blocks * add test for only_cross_attention re: @patrickvonplaten * mid_block_only_cross_attention better default allow mid_block_only_cross_attention to default to `only_cross_attention` when `only_cross_attention` is given as a single boolean
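A tiny sketch of the defaulting rule described in the last bullet, assuming the simplified signature below (the real constructor takes many more arguments):

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    # When the mid-block flag is unset and only_cross_attention is a single bool,
    # inherit that bool; otherwise fall back to an explicit value (default False).
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        return only_cross_attention
    return bool(mid_block_only_cross_attention)

assert resolve_mid_block_only_cross_attention(True) is True
assert resolve_mid_block_only_cross_attention([True, False, False]) is False
assert resolve_mid_block_only_cross_attention(False, mid_block_only_cross_attention=True) is True
```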
-
Pedro Cuenca authored
* Fix invocation of some slow tests. We use __call__ rather than pmapping the generation function ourselves because the number of static arguments is different now. * style
-
Will Berman authored
-
Patrick von Platen authored
* [Config] Fix config prints and save, load * Only use potential nn.Modules for dtype and device * Correct vae image processor * make sure in_channels is not accessed directly * make sure in channels is only accessed via config * Make sure schedulers only access config attributes * Make sure to access config in SAG * Fix vae processor and make style * add tests * uP * make style * Fix more naming issues * Final fix with vae config * change more
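The practical upshot is that hyperparameters should be read via the `config` attribute rather than directly off the module; a short example:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# preferred: read hyperparameters through the frozen config ...
num_channels = unet.config.in_channels
# ... rather than direct attribute access (unet.in_channels), which this change moves away from
print(num_channels)
```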
-
Pedro Cuenca authored
-
- 10 Apr, 2023 4 commits
-
-
Andranik Movsisyan authored
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor * add docs for text-to-video zero * add teaser image for text-to-video zero docs * Fix review changes. Add Documentation. Add test * clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings * make style && make quality * make fix-copies * make requested changes to docs. use huggingface server links for resources, delete res folder * make style && make quality && make fix-copies * make style && make quality * Apply suggestions from code review --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
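A hedged usage sketch for the new pipeline (the base checkpoint and the `video_length` argument follow the docs of this release, so treat them as assumptions):

```python
import torch
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# generate a handful of temporally consistent frames from a single prompt
result = pipe(prompt="a panda surfing a wave", video_length=8)
frames = result.images  # list/array of uint8 frames
```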
-
William Berman authored
-
William Berman authored
-
Will Berman authored
dynamic threshold sampling bug fix and docs
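For reference, dynamic thresholding (Imagen, Saharia et al. 2022) clips each predicted clean sample to a per-sample percentile of its absolute values and rescales; a standalone sketch of the idea, not the scheduler's exact code:

```python
import torch

def dynamic_threshold(x0_pred, ratio=0.995, sample_max_value=1.0):
    # per-sample threshold s = quantile(|x0|, ratio), never below the static range
    batch = x0_pred.shape[0]
    s = torch.quantile(x0_pred.reshape(batch, -1).abs(), ratio, dim=1)
    s = torch.clamp(s, min=sample_max_value)
    s = s.reshape(batch, *([1] * (x0_pred.ndim - 1)))
    # clip to [-s, s] and rescale back into [-1, 1]
    return torch.clamp(x0_pred, -s, s) / s

x = torch.randn(2, 4, 64, 64) * 3
assert dynamic_threshold(x).abs().max() <= 1.0
```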
-
- 06 Apr, 2023 1 commit
-
-
Nipun Jindal authored
* [2905]: Add Karras pattern to discrete Euler * [2905]: Add Karras pattern to discrete Euler * Review comments * Review comments * Review comments * Review comments --------- Co-authored-by: njindal <njindal@adobe.com>
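The Karras sigma spacing referenced in both the Euler and DPM-Solver commits interpolates between sigma_min and sigma_max in 1/rho space; a small sketch of the schedule itself:

```python
import numpy as np

def karras_sigmas(sigma_min, sigma_max, num_steps, rho=7.0):
    # Karras et al. (2022), eq. (5)
    ramp = np.linspace(0, 1, num_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

print(karras_sigmas(0.0292, 14.6146, 10))  # monotonically decreasing noise levels
```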
-
- 05 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* [Pipeline download] Improve pipeline download for index and passed components * correct * add more tests * up
-
- 04 Apr, 2023 1 commit
-
-
Sayak Paul authored
-
- 31 Mar, 2023 3 commits
-
-
Patrick von Platen authored
speed up test
-
Patrick von Platen authored
* fix slow tests * uP
-
Takuma Mori authored
* add use_karras_sigmas option thanks @Stax124 * fix sigma_min/max from scheduler.sigmas * add docstring * revert to use k_diffusion_model.sigma, to(device) * add integration test * make style
-
- 30 Mar, 2023 1 commit
-
-
Pi Esposito authored
* add load textual inversion embeddings draft * fix quality * fix typo * make fix copies * move to textual inversion mixin * make it accept from sd-concept library * accept list of paths to embeddings * fix styling of stable diffusion pipeline * add dummy TextualInversionMixin * add docstring to textualinversionmixin * add load textual inversion embeddings draft * fix quality * fix typo * make fix copies * move to textual inversion mixin * make it accept from sd-concept library * accept list of paths to embeddings * fix styling of stable diffusion pipeline * add dummy TextualInversionMixin * add docstring to textualinversionmixin * add case for parsing embedding from auto1111 UI format Co-authored-by:
Evan Jones <evan.a.jones3@gmail.com> Co-authored-by:
Ana Tamais <aninhamoraestamais@gmail.com> * fix style after rebase * move textual inversion mixin to loaders * move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline * update dummy class name * addressed all comments * fix old dangling import * fix style * proposal * remove bogus * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Will Berman <wlbberman@gmail.com> * finish * make style * up * fix code quality * fix code quality - again * fix code quality - 3 * fix alt diffusion code quality * fix model editing pipeline * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Finish --------- Co-authored-by:
Evan Jones <evan.a.jones3@gmail.com> Co-authored-by:
Ana Tamais <aninhamoraestamais@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Will Berman <wlbberman@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
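A usage sketch of the new mixin, assuming it is exposed as `load_textual_inversion` on the pipeline and using a public sd-concepts-library embedding as the example:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load a textual inversion embedding from the Hub (sd-concepts-library format);
# Automatic1111-style embeddings are also parsed per this PR
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a <cat-toy> sitting on a bench").images[0]
```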
-
- 28 Mar, 2023 4 commits
-
-
Sayak Paul authored
[Tests] Adds a test to check if `image_embeds` None case is handled properly in `StableUnCLIPImg2ImgPipeline` (#2861) * improve stable unclip doc. * add: test to check if image_embeds None case is handled. * apply formatting.
-
Kashif Rasul authored
-
Patrick von Platen authored
Improve init
-
Pedro Cuenca authored
* Workaround for saving dynamo-wrapped models. * Accept suggestion from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Apply workaround when overriding pipeline components. * Ensure the correct config.json is saved to disk. Instead of the dynamo class. * Save correct module (not compiled one) * Add test * style * fix docstrings * Go back to using string comparisons. PyTorch CPU does not have _dynamo. * Simple test for save_pretrained of compiled models. * Helper function to test whether module is compiled. --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
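A minimal sketch of the "is this module compiled?" idea, assuming the torch.compile wrapper exposes the original module as `_orig_mod` (the PR itself falls back to string comparisons because CPU-only PyTorch builds may lack `_dynamo`):

```python
import torch

def is_compiled_module(module) -> bool:
    # torch.compile() wraps modules in an OptimizedModule that keeps the original
    # module under `_orig_mod`
    return hasattr(module, "_orig_mod")

def unwrap_compiled(module):
    # return the underlying module so its real config/weights are what get saved
    return module._orig_mod if is_compiled_module(module) else module

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model)
assert unwrap_compiled(compiled) is model
```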
-
- 27 Mar, 2023 2 commits
-
-
Pedro Cuenca authored
* Helper function to disable custom attention processors. * Restore code deleted by mistake. * Format * Fix modeling_text_unet copy.
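A hedged usage sketch, assuming the helper is surfaced as `set_default_attn_processor` on the UNet:

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# install a custom processor everywhere (AttnProcessor2_0 needs PyTorch >= 2.0) ...
unet.set_attn_processor(AttnProcessor2_0())
# ... then restore the library defaults, e.g. before saving or running reference tests
unet.set_default_attn_processor()
```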
-
Patrick von Platen authored
-
- 24 Mar, 2023 2 commits
-
-
Kashif Rasul authored
* Relax DiT test * relax 2 more tests * fix style * skip test on mac due to older protobuf
-
Bahjat Kawar authored
* TIME first commit * styling. * styling 2. * fixes; tests * apply styling and doc fix. * remove sups. * fixes * remove temp file * move augmentations to const * added doc entry * code quality * customize augmentations * quality * quality --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 23 Mar, 2023 4 commits
-
-
Sanchit Gandhi authored
* Add AudioLDM * up * add vocoder * start unet * unconditional unet * clap, vocoder and vae * clean-up: conversion scripts * fix: conversion script token_type_ids * clean-up: pipeline docstring * tests: from SD * clean-up: cpu offload vocoder instead of safety checker * feat: adapt tests to audioldm * feat: add docs * clean-up: amend pipeline docstrings * clean-up: make style * clean-up: make fix-copies * fix: add doc path to toctree * clean-up: args for conversion script * clean-up: paths to checkpoints * fix: use conditional unet * clean-up: make style * fix: type hints for UNet * clean-up: docstring for UNet * clean-up: make style * clean-up: remove duplicate in docstring * clean-up: make style * clean-up: make fix-copies * clean-up: move imports to start in code snippet * fix: pass cross_attention_dim as a list/tuple to unet * clean-up: make fix-copies * fix: update checkpoint path * fix: unet cross_attention_dim in tests * film embeddings -> class embeddings * Apply suggestions from code review Co-authored-by:
Will Berman <wlbberman@gmail.com> * fix: unet film embed to use existing args * fix: unet tests to use existing args * fix: make style * fix: transformers import and version in init * clean-up: make style * Revert "clean-up: make style" This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66. * clean-up: make style * clean-up: use pipeline tester mixin tests where poss * clean-up: skip attn slicing test * fix: add torch dtype to docs * fix: remove conversion script out of src * fix: remove .detach from 1d waveform * fix: reduce default num inf steps * fix: swap height/width -> audio_length_in_s * clean-up: make style * fix: remove nightly tests * fix: imports in conversion script * clean-up: slim-down to two slow tests * clean-up: slim-down fast tests * fix: batch consistent tests * clean-up: make style * clean-up: remove vae slicing fast test * clean-up: propagate changes to doc * fix: increase test tol to 1e-2 * clean-up: finish docs * clean-up: make style * feat: vocoder / VAE compatibility check * feat: possibly expand / cut audio waveform * fix: pipeline call signature test * fix: slow tests output len * clean-up: make style * make style --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
William Berman <WLBberman@gmail.com>
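A hedged usage sketch for the new pipeline, assuming the `cvssp/audioldm` checkpoint and the `audio_length_in_s` argument mentioned above:

```python
import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16).to("cuda")

prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
# audio_length_in_s replaces the usual height/width arguments for this pipeline
audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]
# `audio` is a 1-D numpy waveform at the vocoder's sampling rate
```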
-
YiYi Xu authored
* add controlnet flax --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Pedro Cuenca authored
* Skip mps in text-to-video tests. * style * Skip UNet3D mps tests.
-
Kashif Rasul authored
* initial TokenEncoder and ContinuousEncoder * initial modules * added ContinuousContextTransformer * fix copy paste error * use numpy for get_sequence_length * initial terminal relative positional encodings * fix weights keys * fix assert * cross attend style: concat encodings * make style * concat once * fix formatting * Initial SpectrogramPipeline * fix input_tokens * make style * added mel output * ignore weights for config * move mel to numpy * import pipeline * fix class names and import * moved models to models folder * import ContinuousContextTransformer and SpectrogramDiffusionPipeline * initial spec diffusion conversion script * renamed config to t5config * added weight loading * use arguments instead of t5config * broadcast noise time to batch dim * fix call * added scale_to_features * fix weights * transpose layernorm weight * scale is a vector * scale the query outputs * added comment * undo scaling * undo depth_scaling * initial get_extended_attention_mask * attention_mask is none in self-attention * cleanup * manually invert attention * nn.Linear needs bias=False * added T5LayerFFCond * remove to fix conflict * make style and dummy * remove unused variables * remove predict_epsilon * Move accelerate to a soft-dependency (#1134) * finish * finish * Update src/diffusers/modeling_utils.py * Update src/diffusers/pipeline_utils.py Co-authored-by:
Anton Lozhkov <anton@huggingface.co> * more fixes * fix Co-authored-by:
Anton Lozhkov <anton@huggingface.co> * fix order * added initial midi to note token data pipeline * added int to int tokenizer * remove duplicate * added logic for segments * add melgan to pipeline * move autoregressive gen into pipeline * added note_representation_processor_chain * fix dtypes * remove immutabledict req * initial doc * use np.where * require note_seq * fix typo * update dependency * added note-seq to test * added is_note_seq_available * fix import * added toc * added example usage * undo for now * moved docs * fix merge * fix imports * predict first segment * avoid un-needed copy to and from cpu * make style * Copyright * fix style * add test and fix inference steps * remove bogus files * reorder models * up * remove transformers dependency * make work with diffusers cross attention * clean more * remove @ * improve further * up * uP * Apply suggestions from code review * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py * loop over all tokens * make style * Added a section on the model * fix formatting * grammar * formatting * make fix-copies * Update src/diffusers/pipelines/__init__.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * added callback and optional onnx * do not squeeze batch dim * clean up more * upload * convert jax to numpy * make style * fix warning * make fix-copies * fix warning * add initial fast tests * add initial pipeline_params * eval mode due to dropout * skip batch tests as pipeline runs on a single file * make style * fix relative path * fix doc tests * Update src/diffusers/models/t5_film_transformer.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/models/t5_film_transformer.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update docs/source/en/api/pipelines/spectrogram_diffusion.mdx Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * add MidiProcessor * format * fix org * Apply suggestions from code review * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py * make style * pin protobuf to <4 * fix formatting * white space * tensorboard needs protobuf --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Anton Lozhkov <anton@huggingface.co>
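A rough usage sketch, assuming the classes are exported as `SpectrogramDiffusionPipeline` and `MidiProcessor` and that note_seq is installed; the MIDI path below is a placeholder:

```python
from diffusers import SpectrogramDiffusionPipeline, MidiProcessor

pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion").to("cuda")
processor = MidiProcessor()  # tokenizes MIDI into note segments (requires note_seq)

# "input.mid" is a placeholder path; segments are rendered one after another by the pipeline
output = pipe(processor("input.mid"))
audio = output.audios[0]
```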
-