- 15 Sep, 2023 1 commit
Bagheera authored
Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds (#4982) * Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds * Convert logging statement to a comment for future archaeologists * Update src/diffusers/models/unet_2d_condition.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> --------- Co-authored-by:
bghira <bghira@users.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
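For context, a minimal sketch of why a stray logging call matters here: `torch.compile` can hit a graph break (or an error in `fullgraph` mode) on Python-level side effects such as logging inside the traced forward. The function and tensors below are illustrative only, not the actual UNet code.

```python
import torch

def forward(x):
    # The original logger.info call was turned into a comment because a logging
    # side effect inside the compiled region can force a graph break.
    # logger.info("Forward upsample size to force interpolation output size.")
    return torch.nn.functional.silu(x)

compiled_forward = torch.compile(forward, fullgraph=True)  # fullgraph=True surfaces graph breaks as errors
print(compiled_forward(torch.randn(4)))
```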
- 04 Sep, 2023 2 commits
dg845 authored
* Add dropout param to get_down_block/get_up_block and UNet2DModel/UNet2DConditionModel. * Add dropout param to Versatile Diffusion modeling, which has a copy of UNet2DConditionModel and its own get_down_block/get_up_block functions.
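A small sketch of the new `dropout` argument in use, assuming it is exposed on the model constructor as described; the tiny block layout below is arbitrary and chosen only to keep the example light.

```python
from diffusers import UNet2DConditionModel

# Tiny illustrative config; `dropout` is forwarded to get_down_block / get_up_block.
unet = UNet2DConditionModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=32,
    dropout=0.1,
)
```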
Sayak Paul authored
* throw warning when more than one lora is attempted to be fused. * introduce support of lora scale during fusion. * change test name * changes * change to _lora_scale * lora_scale to call whenever applicable. * debugging * lora_scale additional. * cross_attention_kwargs * lora_scale -> scale. * lora_scale fix * lora_scale in patched projection. * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * styling. * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * remove unneeded prints. * remove unneeded prints. * assign cross_attention_kwargs. * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * clean up. * refactor scale retrieval logic a bit. * fix nonetypw * fix: tests * add more tests * more fixes. * figure out a way to pass lora_scale. * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * unify the retrieval logic of lora_scale. * move adjust_lora_scale_text_encoder to lora.py. * introduce dynamic adjustment lora scale support to sd * fix up copies * Empty-Commit * add: test to check fusion equivalence on different scales. * handle lora fusion warning. * make lora smaller * make lora smaller * make lora smaller --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
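A usage sketch of the fusion-with-scale behavior described above; the checkpoint paths are placeholders and the exact keyword is assumed to be `lora_scale`.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA checkpoint
pipe.fuse_lora(lora_scale=0.7)          # bake the LoRA into the base weights at scale 0.7
```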
- 01 Sep, 2023 2 commits
Dhruv Nair authored
* proposal for flaky tests * more precision fixes * move more tests to use cosine distance * more test fixes * clean up * use default attn * clean up * update expected value * make style * make style * Apply suggestions from code review * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py * make style * fix failing tests --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
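A rough sketch of the cosine-distance comparison mentioned above, as an alternative to exact element-wise tolerances; the helper name and threshold are illustrative.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Compare two flattened outputs by direction rather than exact values,
    # which is more robust to small hardware/precision drift.
    a, b = a.flatten(), b.flatten()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

assert cosine_distance(np.ones(8), np.ones(8) + 1e-4) < 5e-2
```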
Nguyễn Công Tú Anh authored
* Add GLIGEN Text Image implementation * add style transfer from image * fix check_repository_consistency * add convert script GLIGEN model to Diffusers * rename attention type * fix style code * remove PositionNetTextImage * Revert "fix check_repository_consistency" This reverts commit 15f098c96e00bb9e67b831161615b30a2d28d815. * change attention type name * update docs for GLIGEN * change examples with hf-document-image * fix style * add CLIPImageProjection for GLIGEN * Add new encode_prompt, load project matrix in pipe init * move CLIPImageProjection to stable_diffusion * add comment
- 29 Aug, 2023 1 commit
Chong Mou authored
* T2I-Adapter-XL * update * update * add pipeline * modify pipeline * modify pipeline * modify pipeline * modify pipeline * modify pipeline * modify modeling_text_unet * fix styling. * fix: copies. * adapter settings * new test case * new test case * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * revert prints. * new test case * remove print * org test case * add test_pipeline * styling. * fix copies. * modify test parameter * style. * add adapter-xl doc * double quotes in docs * Fix potential type mismatch * style. --------- Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
- 28 Aug, 2023 1 commit
Patrick von Platen authored
* [LoRA Attn] Refactor LoRA attn * correct for network alphas * fix more * fix more tests * fix more tests * Move below * Finish * better version * correct serialization format * fix * fix more * fix more * fix more * Apply suggestions from code review * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py * deprecation * relax atol for slow test slightly * Finish tests * make style * make style
- 16 Aug, 2023 1 commit
nikhil-masterful authored
* Add GLIGEN implementation * GLIGEN: Fix code quality check failures * GLIGEN: Fix Import block un-sorted or un-formatted failures * GLIGEN: Fix check_repository_consistency failures * GLIGEN: Add 'PositionNet' to versatile_diffusion/modeling_text_unet.py * GLIGEN: check_repository_consistency: fix 'copy does not match' error * GLIGEN: Fix review comments (1) * GLIGEN: Fix E721 Do not compare types, use `isinstance()` failures * GLIGEN: Ensure _encode_prompt() copy matches to StableDiffusionPipeline * GLIGEN: Fix ruff E721 failure in unidiffuser/test_unidiffuser.py * GLIGEN: doc_builder: restyle pipeline_stable_diffusion_gligen.py * GLIGEN: reset files unrelated to gligen * GLIGEN: Fix documentation comments (1) * GLIGEN: Fix review comments (2) * GLIGEN: Added FastTest * GLIGEN: Fix review comments (3)
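A brief usage sketch of the grounded-generation API added here; the repository id, box coordinates, and argument values follow the documented GLIGEN pipeline but are assumptions in this context.

```python
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained("masterful/gligen-1-4-generation-text-box")
image = pipe(
    prompt="a birthday cake on a wooden table",
    gligen_phrases=["a birthday cake"],       # phrases grounded to the boxes below
    gligen_boxes=[[0.25, 0.45, 0.75, 0.9]],   # normalized xyxy coordinates
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=30,
).images[0]
```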
- 04 Aug, 2023 1 commit
Patrick von Platen authored
* correct * correct blocks * finish * finish * finish * Apply suggestions from code review * fix * up * up * up * Update examples/dreambooth/README_sdxl.md Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Apply suggestions from code review --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
- 25 Jul, 2023 1 commit
Sayak Paul authored
* add automatic licensing. * debugging * debugging * more debugging * more debugging. * run make fix-copies. * change to default tracker.
- 17 Jul, 2023 1 commit
Will Berman authored
* Quick implementation of t2i-adapter Load adapter module with from_pretrained Prototyping generalized adapter framework Writeup doc string for sideload framework(WIP) + some minor update on implementation Update adapter models Remove old adapter optional args in UNet Add StableDiffusionAdapterPipeline unit test Handle cpu offload in StableDiffusionAdapterPipeline Auto correct coding style Update model repo name to "RzZ/sd-v1-4-adapter-pipeline" Refactor MultiAdapter to better compatible with config system Export MultiAdapter Create pipeline document template from controlnet Create dummy objects Supporting new AdapterLight model Fix StableDiffusionAdapterPipeline common pipeline test [WIP] Update adapter pipeline document Handle num_inference_steps in StableDiffusionAdapterPipeline Update definition of Adapter "channels_in" Update documents Apply code style Fix doc typo and merge error Update doc string and example Quality of life improvement Remove redundant code and file from prototyping Remove unused package Remove comments Fix title Fix typo Add conditioning scale arg Bring back old implementation Offload sideload Add supply info on document Update src/diffusers/models/adapter.py Co-authored-by:
Will Berman <wlbberman@gmail.com> Update MultiAdapter constructor Swap out custom checkpoint and update pipeline constructor Update document Apply suggestions from code review Co-authored-by:
Will Berman <wlbberman@gmail.com> Correcting style Following single-file policy Update auto size in image preprocess func Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py Co-authored-by:
Will Berman <wlbberman@gmail.com> fix copies Update adapter pipeline behavior Add adapter_conditioning_scale doc string Add the missing doc string Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Fix few bugs from suggestion Handle L-mode PIL image as control image Rename to differentiate adapter resblock Update src/diffusers/models/adapter.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Fix typo Update adapter parameter name Update test case and code style Fix copies Fix typo Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py Co-authored-by:
Will Berman <wlbberman@gmail.com> Update Adapter class name Add checkpoint converting script Fix style Fix-copies Remove dev script Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Updates for parameter rename Fix convert_adapter remove main fix diff more refactoring more more small fixes refactor tests more slow tests more tests Update docs/source/en/api/pipelines/overview.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> add community contributor to docs Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> fix remove from_adapters license paper link docs more url fixes more docs fix fixes fix fix * fix sample inplace add * additional_kwargs -> additional_residuals * move t2i adapter pipeline to own module * preprocess -> _preprocess_adapter_image * add TencentArc to license * fix example code links * add image converter and fix example doc string * fix links * clearer additional residual application --------- Co-authored-by:
HimariO <dsfhe49854@gmail.com>
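A condensed usage sketch of the adapter pipeline introduced above; the checkpoint ids and the conditioning image URL are placeholders.

```python
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd15v2")
pipe = StableDiffusionAdapterPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", adapter=adapter)

control = load_image("https://example.com/canny_edges.png")  # placeholder conditioning image
image = pipe(
    "a photo of a living room",
    image=control,
    adapter_conditioning_scale=1.0,  # the new conditioning-scale argument
).images[0]
```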
- 06 Jul, 2023 4 commits
Patrick von Platen authored
* disable num attention heads * finish
YiYi Xu authored
* Kandinsky2_2 * fix init kandinsky2_2 * kandinsky2_2 fix inpainting * rename pipelines: remove decoder + 2_2 -> V22 * Update scheduling_unclip.py * remove text_encoder and tokenizer arguments from doc string * add test for text2img * add tests for text2img & img2img * fix * add test for inpaint * add prior tests * style * copies * add controlnet test * style * add a test for controlnet_img2img * update prior_emb2emb api to accept image_embedding or image * add a test for prior_emb2emb * style * remove try except * example * fix * add doc string examples to all kandinsky pipelines * style * update doc * style * add a top about 2.2 * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * vae -> movq * vae -> movq * style * fix the #copied from * remove decoder from file name * update doc: add a section for kandinsky 2.2 * fix * fix-copies * add coped from * add copies from for prior * add copies from for prior emb2emb * copy from for img2img * copied from for inpaint * more copied from * more copies from * more copies * remove the yiyi comments * Apply suggestions from code review * Self-contained example, pipeline order * Import prior output instead of redefining. * Style * Make VQModel compatible with model offload. * Fix copies --------- Co-authored-by:
Shahmatov Arseniy <62886550+cene555@users.noreply.github.com> Co-authored-by:
yiyixuxu <yixu310@gmail,com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
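A minimal end-to-end sketch of the prior + decoder split described above; the repository ids are assumed from the Kandinsky community organization.

```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior")
decoder = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder")

# The prior maps text to image embeddings; the decoder (with MoVQ) renders the image.
image_embeds, negative_image_embeds = prior("a red cat, 4k photo").to_tuple()
image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=512,
    width=512,
).images[0]
```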
Patrick von Platen authored
* Add new text encoder * add transformers depth * More * Correct conversion script * Fix more * Fix more * Correct more * correct text encoder * Finish all * proof that it works in run local xl * clean up * Get refiner to work * Add red castle * Fix batch size * Improve pipelines more * Finish text2image tests * Add img2img test * Fix more * fix import * Fix embeddings for classic models (#3888) Fix embeddings for classic SD models. * Allow multiple prompts to be passed to the refiner (#3895) * finish more * Apply suggestions from code review * add watermarker * Model offload (#3889) * Model offload. * Model offload for refiner / img2img * Hardcode encoder offload on img2img vae encode Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB. --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * correct * fix * clean print * Update install warning for `invisible-watermark` * add: missing docstrings. * fix and simplify the usage example in img2img. * fix setup for watermarking. * Revert "fix setup for watermarking." This reverts commit 491bc9f5a640bbf46a97a8e52d6eff7e70eb8e4b. * fix: watermarking setup. * fix: op. * run make fix-copies. * make sure tests pass * improve convert * make tests pass * make tests pass * better error message * finish * finish * Fix final test --------- Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
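A short sketch of the base-plus-refiner flow referenced above; repository ids are as published for SDXL 1.0, and the rest of the arguments are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "an astronaut riding a horse on mars"
latents = base(prompt, output_type="latent").images   # hand the base output to the refiner as latents
image = refiner(prompt, image=latents).images[0]
```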
Prathik Rao authored
* add default to unet output to prevent it from being a required arg * add unit test * make style * adjust unit test * mark as fast test * adjust assert statement in test --------- Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net> Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
- 30 Jun, 2023 1 commit
Steven Liu authored
* add modelmixin and unets * remove old model page * minor fixes * fix unet2dcondition * add vqmodel and autoencoderkl * add rest of models * fix autoencoderkl path * fix toctree * fix toctree again * apply feedback * apply feedback * fix copies * fix controlnet copy * fix copies
- 22 Jun, 2023 1 commit
Patrick von Platen authored
* relax tolerance slightly * correct incorrect naming * correct naming * correct more * Apply suggestions from code review * Fix more * Correct more * correct incorrect naming * Update src/diffusers/models/controlnet.py * Correct flax * Correct renaming * Correct blocks * Fix more * Correct more * make style * make style * make style * make style * make style * Fix flax * make style * rename * rename * rename attn head dim to attention_head_dim * correct flax * make style * improve * Correct more * make style * fix more * make style * Update src/diffusers/models/controlnet_flax.py * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> --------- Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
- 05 Jun, 2023 1 commit
Will Berman authored
* move activation dispatches into helper function * tests
- 30 May, 2023 1 commit
Patrick von Platen authored
Make sure we also change the config when setting `encoder_hid_dim_type=="text_proj"` and allow xformers (#3615) * fix if * make style * make style * add tests for xformers * make style * update
- 25 May, 2023 1 commit
YiYi Xu authored
add kandinsky2.1 --------- Co-authored-by:
yiyixuxu <yixu310@gmail,com> Co-authored-by:
Ayush Mangal <43698245+ayushtues@users.noreply.github.com> Co-authored-by:
ayushmangal <ayushmangal@microsoft.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
- 22 May, 2023 1 commit
Birch-san authored
* Cross-attention masks prefer qualified symbol, fix accidental Optional prefer qualified symbol in AttentionProcessor prefer qualified symbol in embeddings.py qualified symbol in transformed_2d qualify FloatTensor in unet_2d_blocks move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()). move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface. regenerate modeling_text_unet.py remove unused import unet_2d_condition encoder_attention_mask docs Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> transformer_2d encoder_attention_mask docs Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> unet_2d_blocks.py: add parameter name comments Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> revert description. bool-to-bias treatment happens in unet_2d_condition only. comment parameter names fix copies, style * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations. * fix mistake made during merge conflict resolution * regenerate versatile_diffusion * pass time embedding into checkpointed attention invocation * always assume encoder_attention_mask is a mask (i.e. not a bias). * style, fix-copies * add tests for cross-attention masks * add test for padding of attention mask * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens * support both masks and biases in Transformer2DModel#forward. document behaviour * fix-copies * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image). * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward. * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate. * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added. * fix-copies * style * fix-copies * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface. * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward. * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility. * fix copies
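A small sketch of the mask-versus-bias convention settled on above: the caller passes a boolean (or 0/1) `encoder_attention_mask`, and the model converts it into an additive bias before the attention scores. Shapes and the fill value below are illustrative.

```python
import torch

# (batch, key_len) boolean padding mask: True = attend, False = masked out
encoder_attention_mask = torch.tensor([[True, True, True, False, False]])

# Convert to an additive bias and add a query-token dimension for broadcasting.
bias = (1.0 - encoder_attention_mask.float()) * -10000.0   # 0 where attended, large negative where masked
bias = bias.unsqueeze(1)                                   # (batch, 1, key_len), broadcast over query tokens
```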
- 02 May, 2023 1 commit
Patrick von Platen authored
* Fix more torch compile breaks * add tests * Fix all * fix controlnet * fix more * Add Horace He as co-author. > > Co-authored-by:
Horace He <horacehe2007@yahoo.com> * Add Horace He as co-author. Co-authored-by:
Horace He <horacehe2007@yahoo.com> --------- Co-authored-by:
Horace He <horacehe2007@yahoo.com>
- 01 May, 2023 1 commit
Patrick von Platen authored
* fix more * Fix more * fix more * Apply suggestions from code review * fix * make style * make fix-copies * fix * make sure torch compile * Clean * fix test
- 25 Apr, 2023 1 commit
Patrick von Platen authored
* add * clean * up * clean up more * fix more tests * Improve docs further * improve * more fixes docs * Improve docs more * Update src/diffusers/models/unet_2d_condition.py * fix * up * update doc links * make fix-copies * add safety checker and watermarker to stage 3 doc page code snippets * speed optimizations docs * memory optimization docs * make style * add watermarking snippets to doc string examples * make style * use pt_to_pil helper functions in doc strings * skip mps tests * Improve safety * make style * new logic * fix * fix bad onnx design * make new stable diffusion upscale pipeline model arguments optional * define has_nsfw_concept when non-pil output type * lowercase linked to notebook name --------- Co-authored-by: William Berman <WLBberman@gmail.com>
- 18 Apr, 2023 2 commits
Will Berman authored
This mimics the dtype cast for the standard time embeddings
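Roughly, the change amounts to the same kind of cast used for the standard time embeddings; a tiny sketch with assumed names and shapes:

```python
import torch

sample = torch.randn(1, 4, 8, 8, dtype=torch.float16)
class_emb = torch.randn(1, 1280)                # created in fp32 by default
class_emb = class_emb.to(dtype=sample.dtype)    # cast to the sample dtype, mirroring the time-embedding path
```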
Will Berman authored
Adding act fn config to the unet timestep class embedding and conv activation. The custom activation defaults to silu which is the default activation function for both the conv act and the timestep class embeddings so default behavior is not changed. The only unet which use the custom activation is the stable diffusion latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json (I ran a script against the hub to confirm). The latent upscaler does not use the conv activation nor the timestep class embeddings so we don't change its behavior.
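A sketch of the config-driven activation selection this describes; the helper and mapping are assumptions, not the library's exact code.

```python
import torch.nn as nn

def get_activation(act_fn: str) -> nn.Module:
    # Map a config string to a module; "silu" stays the default so existing checkpoints are unaffected.
    activations = {"silu": nn.SiLU(), "swish": nn.SiLU(), "mish": nn.Mish(), "gelu": nn.GELU()}
    return activations[act_fn]

conv_act = get_activation("silu")  # default behavior, as before
```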
- 17 Apr, 2023 1 commit
Patrick von Platen authored
* Better deprecation message * Better deprecation message * Better doc string * Fixes * fix more * fix more * Improve __getattr__ * correct more * fix more * fix * Improve more * more improvements * fix more * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * make style * Fix all rest & add tests & remove old deprecation fns --------- Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
- 11 Apr, 2023 4 commits
Will Berman authored
add group norm type to attention processor cross attention norm This lets the cross attention norm use both a group norm block and a layer norm block. The group norm operates along the channels dimension and requires input shape (batch size, channels, *), whereas the layer norm with a single `normalized_shape` dimension only operates over the least significant dimension, i.e. (*, channels). The channels we want to normalize are the hidden dimension of the encoder hidden states. By convention, the encoder hidden states are always passed as (batch size, sequence length, hidden states). This means the layer norm can operate on the tensor without modification, but the group norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length). All existing attention processors will have the same logic and we can consolidate it in a helper function `prepare_encoder_hidden_states` prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten move norm_cross defined check to outside norm_encoder_hidden_states add missing attn.norm_cross check
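A compact sketch of the dimension handling described above; sizes are arbitrary.

```python
import torch
import torch.nn as nn

batch, seq_len, hidden = 2, 77, 768
encoder_hidden_states = torch.randn(batch, seq_len, hidden)

# LayerNorm normalizes the last dimension, so (batch, seq, hidden) works as-is.
out_ln = nn.LayerNorm(hidden)(encoder_hidden_states)

# GroupNorm normalizes the channel dimension (dim 1), so the last two dims are
# swapped before and after: (batch, hidden, seq) -> normalize -> (batch, seq, hidden).
group_norm = nn.GroupNorm(num_groups=32, num_channels=hidden)
out_gn = group_norm(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)
```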
Will Berman authored
* unet time embedding activation function * typo act_fn -> time_embedding_act_fn * flatten conditional
Will Berman authored
* add only cross attention to simple attention blocks * add test for only_cross_attention re: @patrickvonplaten * mid_block_only_cross_attention better default allow mid_block_only_cross_attention to default to `only_cross_attention` when `only_cross_attention` is given as a single boolean
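Roughly, the defaulting rule reads like the sketch below; function and argument names are illustrative.

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    # When the caller passes a single bool for only_cross_attention and leaves the
    # mid-block setting unset, the mid block inherits that bool.
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        mid_block_only_cross_attention = only_cross_attention
    return bool(mid_block_only_cross_attention)

assert resolve_mid_block_only_cross_attention(True) is True
```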
Patrick von Platen authored
* [Config] Fix config prints and save, load * Only use potential nn.Modules for dtype and device * Correct vae image processor * make sure in_channels is not accessed directly * make sure in channels is only accessed via config * Make sure schedulers only access config attributes * Make sure to access config in SAG * Fix vae processor and make style * add tests * uP * make style * Fix more naming issues * Final fix with vae config * change more
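A small sketch of the convention enforced here, reading model hyperparameters through `.config` rather than via direct attributes; the checkpoint id is only an example.

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
in_channels = unet.config.in_channels  # access configuration values via .config
```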
- 10 Apr, 2023 3 commits
William Berman authored
`encoder_hid_dim` provides an additional projection for the input `encoder_hidden_states` from `encoder_hidden_dim` to `cross_attention_dim`
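In effect this adds a linear projection in front of cross-attention; a tiny sketch with arbitrary dimensions:

```python
import torch
import torch.nn as nn

encoder_hid_dim, cross_attention_dim = 1024, 768
encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim)

encoder_hidden_states = torch.randn(2, 77, encoder_hid_dim)
projected = encoder_hid_proj(encoder_hidden_states)   # (2, 77, cross_attention_dim)
```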
William Berman authored
William Berman authored
- 27 Mar, 2023 1 commit
Pedro Cuenca authored
* Helper function to disable custom attention processors. * Restore code deleted by mistake. * Format * Fix modeling_text_unet copy.
- 23 Mar, 2023 1 commit
Sanchit Gandhi authored
* Add AudioLDM * up * add vocoder * start unet * unconditional unet * clap, vocoder and vae * clean-up: conversion scripts * fix: conversion script token_type_ids * clean-up: pipeline docstring * tests: from SD * clean-up: cpu offload vocoder instead of safety checker * feat: adapt tests to audioldm * feat: add docs * clean-up: amend pipeline docstrings * clean-up: make style * clean-up: make fix-copies * fix: add doc path to toctree * clean-up: args for conversion script * clean-up: paths to checkpoints * fix: use conditional unet * clean-up: make style * fix: type hints for UNet * clean-up: docstring for UNet * clean-up: make style * clean-up: remove duplicate in docstring * clean-up: make style * clean-up: make fix-copies * clean-up: move imports to start in code snippet * fix: pass cross_attention_dim as a list/tuple to unet * clean-up: make fix-copies * fix: update checkpoint path * fix: unet cross_attention_dim in tests * film embeddings -> class embeddings * Apply suggestions from code review Co-authored-by:
Will Berman <wlbberman@gmail.com> * fix: unet film embed to use existing args * fix: unet tests to use existing args * fix: make style * fix: transformers import and version in init * clean-up: make style * Revert "clean-up: make style" This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66. * clean-up: make style * clean-up: use pipeline tester mixin tests where poss * clean-up: skip attn slicing test * fix: add torch dtype to docs * fix: remove conversion script out of src * fix: remove .detach from 1d waveform * fix: reduce default num inf steps * fix: swap height/width -> audio_length_in_s * clean-up: make style * fix: remove nightly tests * fix: imports in conversion script * clean-up: slim-down to two slow tests * clean-up: slim-down fast tests * fix: batch consistent tests * clean-up: make style * clean-up: remove vae slicing fast test * clean-up: propagate changes to doc * fix: increase test tol to 1e-2 * clean-up: finish docs * clean-up: make style * feat: vocoder / VAE compatibility check * feat: possibly expand / cut audio waveform * fix: pipeline call signature test * fix: slow tests output len * clean-up: make style * make style --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
William Berman <WLBberman@gmail.com>
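A short usage sketch of the new pipeline; the checkpoint id and argument values follow the AudioLDM docs of that release, hedged as assumptions here.

```python
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2")
audio = pipe(
    "techno music with a strong, upbeat tempo",
    num_inference_steps=10,
    audio_length_in_s=5.0,   # duration replaces the usual height/width arguments
).audios[0]
```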
- 21 Mar, 2023 1 commit
Alexander Pivovarov authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 15 Mar, 2023 1 commit
Patrick von Platen authored
* rename file * rename attention * fix more * rename more * up * more deprecation imports * fixes
- 14 Mar, 2023 1 commit
Haiwen Huang authored
* fix the in-place modification in unet condition when using controlnet, which will cause backprop errors when training * add clone to mid block * fix-copies --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
William Berman <WLBberman@gmail.com>
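A self-contained sketch of the failure mode being fixed: modifying a tensor in place when autograd still needs its original values breaks the backward pass, so the residual is added out of place (or onto a clone). Shapes below are arbitrary.

```python
import torch

sample = torch.randn(2, 4, 8, 8, requires_grad=True)
residual = torch.randn(2, 4, 8, 8)

out = torch.sigmoid(sample)       # sigmoid's backward re-uses its output tensor
# out += residual                 # in-place add here would raise a RuntimeError during backward
out = out + residual              # out-of-place add (or out.clone() first) keeps autograd intact
out.sum().backward()
```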
- 07 Mar, 2023 1 commit
clarencechen authored
* Improve dynamic threshold * Update code * Add dynamic threshold to ddim and ddpm * Encapsulate and leverage code copy mechanism Update style * Clean up DDPM/DDIM constructor arguments * add test * also add to unipc --------- Co-authored-by:
Peter Lin <peterlin9863@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
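For reference, a sketch of Imagen-style dynamic thresholding as it is usually written; the exact scheduler code may differ, and the percentile and cap below are illustrative defaults.

```python
import torch

def dynamic_threshold(sample: torch.Tensor, ratio: float = 0.995, max_value: float = 1.0) -> torch.Tensor:
    # Clip each sample to its own high percentile of absolute values, then rescale into [-1, 1].
    batch = sample.shape[0]
    flat = sample.reshape(batch, -1).abs()
    s = torch.quantile(flat, ratio, dim=1).clamp(min=max_value)
    s = s.view(batch, *([1] * (sample.ndim - 1)))
    return sample.clamp(-s, s) / s
```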