"data/data.json" did not exist on "f7c86e68084e7462d13e2acc56c4d22d5d9e2392"
- 05 Jun, 2024 1 commit
-
-
Sayak Paul authored
* remove legacy code from load_attn_procs.
* finish first draft
* fix more.
* fix more
* add test
* add serialization support.
* fix-copies
* require peft backend for lora tests
* style
* fix test
* fix loading.
* empty
* address benjamin's feedback.
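For context, the PEFT-backed LoRA path these tests now require is exercised roughly as follows (a minimal sketch; the LoRA repository id and adapter name are hypothetical placeholders):

```python
# Minimal sketch of LoRA loading through the PEFT backend (requires `peft`).
# The LoRA repo id and adapter name below are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("user/my-lora", adapter_name="style")  # routed via peft
pipe.set_adapters(["style"], adapter_weights=[0.8])
image = pipe("a photo of a cat", num_inference_steps=20).images[0]
```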
- 29 May, 2024 1 commit
Sayak Paul authored
* use IPAdapterPlusImageProjectionBlock in IPAdapterPlusImageProjection
* reposition IPAdapterPlusImageProjection
* refactor complete?
* fix heads param retrieval.
* update test dict creation method.
- 30 Apr, 2024 1 commit
Sayak Paul authored
* introduce _no_split_modules.
* unnecessary spaces.
* remove unnecessary kwargs and style
* fix: accelerate imports.
* change to _determine_device_map
* add the blocks that have residual connections.
* add: CrossAttnUpBlock2D
* add: testing
* style
* line-spaces
* quality
* add disk offload test without safetensors.
* checking disk offloading percentages.
* change model split
* add: utility for checking multi-gpu requirement.
* model parallelism test
* splits.
* splits.
* splits
* splits.
* splits.
* splits.
* offload folder to test_disk_offload_with_safetensors
* add _no_split_modules
* fix-copies
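`_no_split_modules` tells accelerate which blocks (the ones carrying residual connections, e.g. `CrossAttnUpBlock2D`) must stay on a single device when a device map is computed. A sketch of what that enables, assuming an accelerate install and at least one GPU:

```python
# Sketch: once _no_split_modules is defined, accelerate can shard the model
# without cutting through blocks that carry residual connections.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    device_map="auto",        # computed via accelerate, honoring _no_split_modules
    torch_dtype=torch.float16,
)
print(unet.hf_device_map)     # per-module placement, e.g. {"": 0} on one GPU
```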
- 19 Apr, 2024 1 commit
Fabio Rigano authored
* Switch to peft and multi proj layers
* Move Face ID loading and inference to core
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 05 Apr, 2024 1 commit
Sayak Paul authored
* reduce block sizes for unet1d.
* reduce blocks for unet_2d.
* reduce block size for unet_motion
* increase channels.
* correctly increase channels.
* reduce number of layers in unet2dconditionmodel tests.
* reduce block sizes for unet2dconditionmodel tests
* reduce block sizes for unet3dconditionmodel.
* fix: test_feed_forward_chunking
* fix: test_forward_with_norm_groups
* skip spatiotemporal tests on MPS.
* reduce block size in AutoencoderKL.
* reduce block sizes for vqmodel.
* further reduce block size.
* make style.
* Empty-Commit
* reduce sizes for ConsistencyDecoderVAETests
* further reduction.
* further block reductions in AutoencoderKL and AsymmetricAutoencoderKL.
* massively reduce the block size in unet2dconditionmodel.
* reduce sizes for unet3d
* fix tests in unet3d.
* reduce blocks further in motion unet.
* fix: output shape
* add attention_head_dim to the test configuration.
* remove unexpected keyword arg
* up a bit.
* groups.
* up again
* fix
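The spirit of these reductions, as a sketch: test configs shrink channel counts, layers, and attention heads to the smallest values the model still accepts. The values below are illustrative, not the exact test config:

```python
# Illustrative "tiny" test config: small blocks, one layer per block, and
# norm_num_groups chosen to divide every entry of block_out_channels.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=16,
    in_channels=4,
    out_channels=4,
    layers_per_block=1,
    block_out_channels=(8, 16),
    norm_num_groups=8,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=8,
    attention_head_dim=2,
)
print(sum(p.numel() for p in unet.parameters()))  # tiny parameter count, fast to test
```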
- 18 Mar, 2024 1 commit
M. Tolga Cangöz authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 23 Feb, 2024 1 commit
Aryan authored
* begin IPAdapterTesterMixin
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024
- 01 Feb, 2024 1 commit
Sayak Paul authored
* harmonize the module structure for models in tests
* make the folders modules.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 31 Jan, 2024 1 commit
YiYi Xu authored
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Alvaro Somoza <somoza.alvaro@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 28 Dec, 2023 1 commit
YiYi Xu authored
* refactor ip-adapter-imageproj, gligen
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
- 07 Dec, 2023 1 commit
Dhruv Nair authored
update
- 05 Dec, 2023 1 commit
Arsalan authored
* utils and test modifications to enable device agnostic testing
* device for manual seed in unet1d
* fix generator condition in vae test
* consistency changes to testing
* make style
* add device agnostic testing changes to source and one model test
* make dtype check fns private, log cuda fp16 case
* remove dtype checks from import utils, move to testing_utils
* adding tests for most model classes and one pipeline
* fix vae import
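The device-agnostic pattern replaces direct `torch.cuda` calls with small dispatch helpers keyed on the device string; a sketch (the helper name mirrors the idea described, but treat it as illustrative):

```python
# Sketch of a device-agnostic seeding helper: tests call this instead of
# torch.cuda.manual_seed, so the same test runs on cuda, mps, or cpu.
import torch

def backend_manual_seed(device: str, seed: int) -> None:
    torch.manual_seed(seed)           # always seed the CPU generator
    if device.startswith("cuda"):
        torch.cuda.manual_seed_all(seed)
    elif device == "mps":
        torch.mps.manual_seed(seed)

device = "cuda" if torch.cuda.is_available() else "cpu"
backend_manual_seed(device, 0)
```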
- 04 Dec, 2023 1 commit
takuoko authored
* Support IP-Adapter Plus
* fix format
* restore before black format
* restore before black format
* generic
* Refactor PerceiverAttention
* format
* fix test and refactor PerceiverAttention
* generic encode_image
* keep attention implementation
* merge tests
* encode_image backward compatible
* code quality
* fix controlnet inpaint pipeline
* refactor FFN
* refactor FFN
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
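Usage after this change looks roughly like the following (file names follow the published h94/IP-Adapter layout and the reference image URL is a placeholder; treat both as assumptions):

```python
# Sketch: loading IP-Adapter Plus weights into a Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)

ref = load_image("https://example.com/reference.png")  # placeholder URL
image = pipe("best quality", ip_adapter_image=ref).images[0]
```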
- 21 Nov, 2023 1 commit
YiYi Xu authored
* add ip-adapter
---------
Co-authored-by: okotaku <to78314910@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
- 20 Oct, 2023 1 commit
Vishnu V Jaddipal authored
* Added args, kwargs to ```U
* Add UNetMidBlock2D as a supported mid block type
* Fix extra init input for UNetMidBlock2D, change allowed types for Mid-block init
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_blocks.py
* Update unet_2d_blocks.py
* Update unet_2d_blocks.py
* Update unet_2d_condition.py
* Update unet_2d_blocks.py
* Updated docstring, increased check strictness: updated the docstring for ```UNet2DConditionModel``` to include ```reverse_transformer_layers_per_block``` and updated checking for nested list type ```transformer_layers_per_block```
* Add basic shape-check test for asymmetrical unets
* Update src/diffusers/models/unet_2d_blocks.py (removed blank line; Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update unet_2d_condition.py (remove blank space)
* Update unet_2d_condition.py (changed docstring for `mid_block_type`)
* Fixed docstring and wrong default value
* Reformat with black
* Reformat with necessary commands
* Add UNetMidBlockFlat to versatile_diffusion/modeling_text_unet.py to ensure consistency
* Removed args, kwargs, use on mid-block type
* Make fix-copies
* Update src/diffusers/models/unet_2d_condition.py (wrap into single line; Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* make fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
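After this change, `mid_block_type` accepts `"UNetMidBlock2D"` (a mid block without cross-attention) in addition to the default. A sketch with otherwise-illustrative tiny values:

```python
# Sketch: selecting the newly supported mid block type. All sizes are
# illustrative; only mid_block_type is the point here.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=16,
    in_channels=4,
    out_channels=4,
    layers_per_block=1,
    block_out_channels=(8, 16),
    norm_num_groups=8,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    mid_block_type="UNetMidBlock2D",   # default is "UNetMidBlock2DCrossAttn"
    cross_attention_dim=8,
    attention_head_dim=2,
)
```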
- 26 Sep, 2023 1 commit
Dhruv Nair authored
* fix other tests
* fix tests
* fix tests
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
* Update tests/pipelines/shap_e/test_shap_e_img2img.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* fix upstream merge mistake
* fix tests:
* test fix
* Update tests/lora/test_lora_layers_old_backend.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Update tests/lora/test_lora_layers_old_backend.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 18 Sep, 2023 1 commit
Patrick von Platen authored
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
- 12 Sep, 2023 1 commit
Dhruv Nair authored
- 11 Sep, 2023 1 commit
Dhruv Nair authored
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion ddim ddpm
* lazy import deepfloyd
* lazy import kandinsky
* lazy imports
* lazy import semantic diffusion
* lazy imports
* lazy import stable diffusion
* move sd output to its own module
* clean up
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffusion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* lazy import
* clean up
* clean up
* fixes for tests
* fixes for tests
* clean up
* remove import of torch_utils from utils module
* clean up
* clean up
* fix mistake import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* fix check dummies
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
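The lazy-import pattern introduced here, sketched for a hypothetical submodule (the structure mirrors diffusers' real `__init__.py` files; the submodule and class names are placeholders):

```python
# Sketch of a lazily-imported package __init__.py: nothing listed under
# _import_structure is actually imported until the attribute is first accessed.
import sys
from typing import TYPE_CHECKING

from diffusers.utils import _LazyModule

_import_structure = {"pipeline_example": ["ExamplePipeline"]}  # hypothetical names

if TYPE_CHECKING:
    from .pipeline_example import ExamplePipeline  # static type checkers see this
else:
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```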
- 01 Sep, 2023 1 commit
Dhruv Nair authored
* proposal for flaky tests
* more precision fixes
* move more tests to use cosine distance
* more test fixes
* clean up
* use default attn
* clean up
* update expected value
* make style
* make style
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
* make style
* fix failing tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
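The cosine-distance comparison these fixes move toward, in miniature (the threshold is illustrative):

```python
# Sketch: compare flattened outputs by 1 - cosine similarity, which tolerates
# small elementwise noise better than exact-value assertions.
import numpy as np

def cosine_similarity_distance(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.flatten(), b.flatten()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
expected = rng.standard_normal(64).astype(np.float32)
observed = expected + 1e-5                     # tiny numerical drift
assert cosine_similarity_distance(expected, observed) < 1e-4
```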
- 24 Aug, 2023 1 commit
Dhruv Nair authored
use max diff to compare model outputs
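That check, in miniature:

```python
# Sketch: assert on the maximum absolute difference between two outputs.
import torch

def assert_max_diff(a: torch.Tensor, b: torch.Tensor, tol: float = 1e-4) -> None:
    max_diff = (a - b).abs().max().item()
    assert max_diff < tol, f"max diff {max_diff:.2e} exceeds tolerance {tol:.2e}"

x = torch.zeros(4)
assert_max_diff(x, x + 1e-6)
```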
- 17 Aug, 2023 1 commit
Patrick von Platen authored
* make safetensors default
* set default save method as safetensors
* update tests
* update to support saving safetensors
* update test to account for safetensors default
* update example tests to use safetensors
* update example to support safetensors
* update unet tests for safetensors
* fix failing loader tests
* fix qc issues
* fix pipeline tests
* fix example test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
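After this change, `save_pretrained` writes `.safetensors` by default; `safe_serialization` makes the choice explicit (a sketch; the output paths are illustrative):

```python
# Sketch: safetensors is now the default serialization format on save.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.save_pretrained("./sd15")                                # .safetensors by default
pipe.save_pretrained("./sd15-bin", safe_serialization=False)  # opt back into .bin
```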
- 04 Aug, 2023 1 commit
Patrick von Platen authored
* correct
* correct blocks
* finish
* finish
* finish
* Apply suggestions from code review
* fix
* up
* up
* up
* Update examples/dreambooth/README_sdxl.md (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 06 Jul, 2023 1 commit
Prathik Rao authored
* add default to unet output to prevent it from being a required arg
* add unit test
* make style
* adjust unit test
* mark as fast test
* adjust assert statement in test
---------
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
- 15 Jun, 2023 1 commit
Patrick von Platen authored
* relax tolerance slightly
* Add more tests
* upload readme
* upload readme
* Apply suggestions from code review
* Improve API Autoencoder KL
* finalize
* finalize tests
* finalize tests
* Apply suggestions from code review (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* up
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 22 May, 2023 2 commits
Birch-san authored
* Cross-attention masks: prefer qualified symbol, fix accidental Optional; prefer qualified symbol in AttentionProcessor; prefer qualified symbol in embeddings.py; qualified symbol in transformed_2d; qualify FloatTensor in unet_2d_blocks; move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface; regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()); move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface; regenerate modeling_text_unet.py; remove unused import; unet_2d_condition encoder_attention_mask docs (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>); versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>); transformer_2d encoder_attention_mask docs (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>); unet_2d_blocks.py: add parameter name comments (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>); revert description, bool-to-bias treatment happens in unet_2d_condition only; comment parameter names; fix copies, style
* encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
* encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
* support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
* fix mistake made during merge conflict resolution
* regenerate versatile_diffusion
* pass time embedding into checkpointed attention invocation
* always assume encoder_attention_mask is a mask (i.e. not a bias).
* style, fix-copies
* add tests for cross-attention masks
* add test for padding of attention mask
* explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
* support both masks and biases in Transformer2DModel#forward. document behaviour
* fix-copies
* delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
* review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D, KCrossAttnUpBlock2D#forward.
* remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D, UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
* put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
* fix-copies
* style
* fix-copies
* put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
* restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
* make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
* fix copies
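The new argument in use, on a deliberately tiny UNet (config values are illustrative): a boolean mask over encoder tokens, with True meaning "attend to this token", converted internally into an additive attention bias:

```python
# Sketch: masking out padding tokens from cross-attention via the new
# encoder_attention_mask argument (True = keep, False = ignore).
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=8, in_channels=4, out_channels=4, layers_per_block=1,
    block_out_channels=(8, 16), norm_num_groups=8,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=8, attention_head_dim=2,
)

sample = torch.randn(1, 4, 8, 8)
encoder_hidden_states = torch.randn(1, 77, 8)
mask = torch.ones(1, 77, dtype=torch.bool)
mask[:, 64:] = False  # hide trailing padding tokens from cross-attention

out = unet(sample, torch.tensor([1]), encoder_hidden_states,
           encoder_attention_mask=mask).sample
```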
Patrick von Platen authored
* up
* fix more
* Apply suggestions from code review
* fix more
* fix more
* Check it
* Remove 16:8
* fix more
* fix more
* fix more
* up
* up
* Test only stable diffusion
* Test only two files
* up
* Try out spinning up processes that can be killed
* up
* Apply suggestions from code review
* up
* up
- 11 May, 2023 1 commit
Sayak Paul authored
* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by example.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpainting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
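The determinism recipe these commits roll out, gathered in one place (a sketch; `CUBLAS_WORKSPACE_CONFIG` must be set before the first CUDA kernel launch):

```python
# Sketch: the knobs used to make pytorch/cuda test runs reproducible.
import os

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required for deterministic cuBLAS

import torch

torch.use_deterministic_algorithms(True)       # error out on nondeterministic kernels
torch.backends.cuda.matmul.allow_tf32 = False  # "disallow tf32 matmul"
torch.backends.cudnn.benchmark = False         # keep kernel selection stable
```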
- 20 Apr, 2023 1 commit
nupurkmr9 authored
* diffusers==0.14.0 update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion update
* custom diffusion
* custom diffusion
* custom diffusion
* custom diffusion
* custom diffusion
* apply formatting and get rid of bare except.
* refactor readme and other minor changes.
* misc refactor.
* fix: repo_id issue and loaders logging bug.
* fix: save_model_card.
* fix: save_model_card.
* fix: save_model_card.
* add: doc entry.
* refactor doc.
* custom diffusion
* custom diffusion
* custom diffusion
* apply style.
* remove trailing whitespace.
* fix: toctree entry.
* remove unnecessary print.
* custom diffusion
* custom diffusion
* custom diffusion test
* custom diffusion xformer update
* custom diffusion xformer update
* custom diffusion xformer update
---------
Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
- 13 Apr, 2023 1 commit
Patrick von Platen authored
* [Tests] parallelize
* finish folder structuring
* Parallelize tests more
* Correct saving of pipelines
* make sure logging level is correct
* try again
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 12 Apr, 2023 1 commit
Andy authored
* initial commit for lora test cases
* help a bit with lora for 3d
* fixed lora tests
* replaced redundant code
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 27 Mar, 2023 1 commit
Pedro Cuenca authored
* Helper function to disable custom attention processors.
* Restore code deleted by mistake.
* Format
* Fix modeling_text_unet copy.
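The helper as used from a model (sketch):

```python
# Sketch: reset any custom attention processors back to the default.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel()       # any config will do
unet.set_default_attn_processor()   # drops custom processors in one call
```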
- 23 Mar, 2023 1 commit
Sanchit Gandhi authored
* Add AudioLDM
* up
* add vocoder
* start unet
* unconditional unet
* clap, vocoder and vae
* clean-up: conversion scripts
* fix: conversion script token_type_ids
* clean-up: pipeline docstring
* tests: from SD
* clean-up: cpu offload vocoder instead of safety checker
* feat: adapt tests to audioldm
* feat: add docs
* clean-up: amend pipeline docstrings
* clean-up: make style
* clean-up: make fix-copies
* fix: add doc path to toctree
* clean-up: args for conversion script
* clean-up: paths to checkpoints
* fix: use conditional unet
* clean-up: make style
* fix: type hints for UNet
* clean-up: docstring for UNet
* clean-up: make style
* clean-up: remove duplicate in docstring
* clean-up: make style
* clean-up: make fix-copies
* clean-up: move imports to start in code snippet
* fix: pass cross_attention_dim as a list/tuple to unet
* clean-up: make fix-copies
* fix: update checkpoint path
* fix: unet cross_attention_dim in tests
* film embeddings -> class embeddings
* Apply suggestions from code review (Co-authored-by: Will Berman <wlbberman@gmail.com>)
* fix: unet film embed to use existing args
* fix: unet tests to use existing args
* fix: make style
* fix: transformers import and version in init
* clean-up: make style
* Revert "clean-up: make style" (this reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66)
* clean-up: make style
* clean-up: use pipeline tester mixin tests where poss
* clean-up: skip attn slicing test
* fix: add torch dtype to docs
* fix: remove conversion script out of src
* fix: remove .detach from 1d waveform
* fix: reduce default num inf steps
* fix: swap height/width -> audio_length_in_s
* clean-up: make style
* fix: remove nightly tests
* fix: imports in conversion script
* clean-up: slim-down to two slow tests
* clean-up: slim-down fast tests
* fix: batch consistent tests
* clean-up: make style
* clean-up: remove vae slicing fast test
* clean-up: propagate changes to doc
* fix: increase test tol to 1e-2
* clean-up: finish docs
* clean-up: make style
* feat: vocoder / VAE compatibility check
* feat: possibly expand / cut audio waveform
* fix: pipeline call signature test
* fix: slow tests output len
* clean-up: make style
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
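Example usage of the new pipeline (the checkpoint id is the published AudioLDM release; the prompt and generation arguments are illustrative):

```python
# Sketch: text-to-audio with the new AudioLDM pipeline.
import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm-s-full-v2", torch_dtype=torch.float16
).to("cuda")

audio = pipe(
    "Techno music with a strong, upbeat tempo",
    num_inference_steps=10,
    audio_length_in_s=5.0,   # the argument this PR introduced over height/width
).audios[0]
```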
- 21 Mar, 2023 1 commit
Alexander Pivovarov authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 16 Mar, 2023 1 commit
Nicolas Patry authored
* Adding `use_safetensors` argument to give more control to users about which weights they use.
* Doc style.
* Rebased (not functional).
* Rebased and functional with tests.
* Style.
* Apply suggestions from code review
* Style.
* Addressing comments.
* Update tests/test_pipelines.py (Co-authored-by: Will Berman <wlbberman@gmail.com>)
* Black ???
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
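The new argument in use (sketch): require safetensors weights instead of silently falling back to pickle:

```python
# Sketch: use_safetensors=True forces safetensors weights (errors if absent);
# leaving it unset keeps the "prefer safetensors when available" behavior.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", use_safetensors=True
)
```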
- 15 Mar, 2023 2 commits
Patrick von Platen authored
* rename file
* rename attention
* fix more
* rename more
* up
* more deprecation imports
* fixes
Kashif Rasul authored
* fix AttnProcessor2_0: fix use of AttnProcessor2_0 for cross attention with mask
* added scale_qk and out_bias flags
* fixed for xformers
* check if it has scale argument
* Update cross_attention.py
* check torch version
* fix sliced attn
* style
* set scale
* fix test
* fixed addedKV processor
* revert back AttnProcessor2_0
* if missing if
* fix inner_dim
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
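Opting a model into the PyTorch 2.0 processor this commit fixes (a sketch; the import path shown is the current one, older releases exposed it from `models.cross_attention`):

```python
# Sketch: install AttnProcessor2_0, which dispatches to
# F.scaled_dot_product_attention when available (PyTorch >= 2.0).
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

unet = UNet2DConditionModel()
if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
    unet.set_attn_processor(AttnProcessor2_0())
```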
- 04 Mar, 2023 1 commit
Nicolas Patry authored
* Fix regression introduced in #2448
* Style.
- 03 Mar, 2023 1 commit
Nicolas Patry authored
* Adding support for `safetensors` and LoRa.
* Adding metadata.
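A sketch of what safetensors-with-metadata saving looks like at the lowest level (the tensor keys and metadata values are illustrative, not the exact ones this PR writes):

```python
# Sketch: serializing a (toy) LoRA state dict with safetensors metadata.
import torch
from safetensors.torch import load_file, save_file

state_dict = {
    "lora.up.weight": torch.zeros(4, 4),
    "lora.down.weight": torch.zeros(4, 4),
}
save_file(state_dict, "lora.safetensors", metadata={"format": "pt"})
reloaded = load_file("lora.safetensors")  # metadata travels with the file
```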