"src/vscode:/vscode.git/clone" did not exist on "fcf38946e423a306da5428a498ac98ecb7a0d757"
- 07 Feb, 2023 3 commits
-
-
Patrick von Platen authored
* before running make style * remove leftovers from flake8 * finish * make fix-copies * final fix * more fixes
-
YiYi Xu authored
* Modify UNet2DConditionModel:
 - allow skipping mid_block
 - add a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
 - allow the user to set the dimension of the timestep embedding (`time_embed_dim`)
 - the kernel_size for `conv_in` and `conv_out` is now configurable
 - add a random Fourier feature layer (`GaussianFourierProjection`) for `time_proj`
 - allow the user to add the time and class embeddings together before passing them through the projection layer: `time_embedding(t_emb + class_label)`
 - added two arguments, `attn1_types` and `attn2_types`: currently we have the argument `only_cross_attention`; when it is set to `True`, the `BasicTransformerBlock` gets two cross-attention layers, otherwise we get a self-attention followed by a cross-attention. In the k-upscaler we need blocks that contain just one cross-attention, or self-attention -> cross-attention, so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention types for the two positions in each block. Note that I still kept the `only_cross_attention` argument on the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
 - the positions of the downsample and upsample layers are now configurable
 - in the k-upscaler unet there is only one skip connection per up/down block (instead of one per layer as in the stable diffusion unet); added `skip_freq = "block"` to support this use case
 - if the user passes an attention_mask to the unet, it prepares the mask and passes a flag to the cross attention processor to skip the `prepare_attention_mask` step inside the cross attention block
* Add up/down blocks for the k-upscaler
* Modify the CrossAttention class:
 - make the `dropout` layer in `to_out` optional
 - `use_conv_proj`: use a conv instead of a linear layer for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible; note that when used for cross attention, `to_k` and `to_v` have to stay linear because `encoder_hidden_states` is not 2D
 - `cross_attention_norm`: add an optional layernorm on `encoder_hidden_states`
 - `attention_dropout`: add an optional dropout on the attention scores
* Adapt BasicTransformerBlock:
 - add an ada group norm layer to condition the attention input on the timestep embedding
 - allow skipping the FeedForward layer between the attentions
 - replace the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
* Update the timestep embedding: add a new act_fn gelu and an optional act_2
* Modify ResnetBlock2D:
 - refactored with the AdaGroupNorm class (the timestep scale-shift normalization)
 - add a `mid_channel` argument so the first conv can have a different output dimension from the second conv
 - add an option to use AdaGroupNorm on the input instead of group norm
 - add options to add a dropout layer after each conv
 - allow the user to set the bias in conv_shortcut (needed for the k-upscaler)
 - add gelu
* Add a conversion script for the k-upscaler unet
* Add the pipeline
* fix attention mask * fix a typo * fix a bug * make sure model can be used with GPU * make pipeline work with fp16 * fix an error in BasicTransformerBlock * make style * fix typo * some more fixes * up * up * correct more * some clean-up * clean time proj * up * up * more changes * remove the upcast_attention=True from unet config * remove attn1_types, attn2_types etc * fix * revert incorrect changes to up/down samplers * make style * remove outdated files * Apply suggestions from code review * attention refactor * refactor cross attention * Apply suggestions from code review * update * up * update * Apply suggestions from code review * finish * Update src/diffusers/models/cross_attention.py * more fixes * up * up * up * finish * more corrections of conversion state * act_2 -> act_2_fn * remove dropout_after_conv from ResnetBlock2D * make style * simplify KAttentionBlock * add fast test for latent upscaler pipeline * add slow test * slow test fp16 * make style * add doc string for pipeline_stable_diffusion_latent_upscale * add api doc page for latent upscaler pipeline * deprecate attention mask * clean up embeddings * simplify resnet * up * clean up resnet * up * correct more * up * up * improve a bit more * correct more * more clean-ups * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * add docstrings for new unet config * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * # Copied from * encode the image if not latent * remove force casting vae to fp32 * fix * add comments about preconditioning parameters from k-diffusion paper * attn1_type, attn2_type -> add_self_attention * clean up get_down_block and get_up_block * fix * fixed a typo(?) in ada group norm * update slice attention processor for cross attention * update slice * fix fast test * update the checkpoint * finish tests * fix-copies * fix-copy for modeling_text_unet.py * make style * make style * fix f-string * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix import * correct changes * fix resnet * make fix-copies * correct euler scheduler * add missing #copied from for preprocess * revert * fix * fix copies * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/models/cross_attention.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * clean up conversion script * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D * more * Update src/diffusers/models/unet_2d_condition.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * remove prepare_extra_step_kwargs * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix a typo in timestep embedding * remove num_image_per_prompt * fix fast test * make style + fix-copies * fix * fix xformer test * fix style * doc string * make style * fix-copies * docstring for time_embedding_norm * make style * final finishes * make fix-copies * fix tests --------- Co-authored-by:
yiyixuxu <yixu@yis-macbook-pro.lan> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
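The net result of this PR is a two-stage latent upscaling workflow. A minimal sketch of how the new pipeline composes with a base model, assuming the publicly released `stabilityai/sd-x2-latent-upscaler` checkpoint (prompt and seed are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
generator = torch.Generator("cuda").manual_seed(0)
# keep the base output in latent space and hand it straight to the upscaler
low_res_latents = pipe(prompt, output_type="latent", generator=generator).images
image = upscaler(
    prompt=prompt, image=low_res_latents, num_inference_steps=20,
    guidance_scale=0, generator=generator,
).images[0]
```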
-
chavinlo authored
* Create convert_vae_pt_to_diffusers.py Just a simple script to convert VAE.pt files to diffusers format Tested with: https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/VAEs/orangemix.vae.pt * Update convert_vae_pt_to_diffusers.py Forgot to add the function call * make style --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
chavinlo <example@example.com>
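A sketch of the resulting workflow; the argument names (`--vae_pt_path`, `--dump_path`) are assumptions about the script's interface, so check its `--help` output before relying on them:

```python
# Assumed invocation, run from the diffusers repo root:
#   python scripts/convert_vae_pt_to_diffusers.py --vae_pt_path orangemix.vae.pt --dump_path ./converted_vae
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("./converted_vae")  # load the converted weights
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", vae=vae)
```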
-
- 26 Jan, 2023 1 commit
-
-
Patrick von Platen authored
-
- 25 Jan, 2023 2 commits
-
-
Patrick von Platen authored
* make tests deterministic * run slow tests * prepare for testing * finish * refactor * add print statements * finish more * correct some test failures * more fixes * set up to correct tests * more corrections * up * fix more * more prints * add * up * up * up * up * up * more fixes * up * up * up * up * up * fix more * up * up * clean tests * up * up * up * more fixes * Apply suggestions from code review Co-authored-by:
Suraj Patil <surajp815@gmail.com> * make * correct * finish * finish Co-authored-by:
Suraj Patil <surajp815@gmail.com>
-
Patrick von Platen authored
-
- 20 Jan, 2023 3 commits
-
-
Damian Stewart authored
* convert __main__ to a function call and call it * add missing type hint * make style check pass * move loading to src/diffusers Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
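Since the loading logic now lives inside the library, it can be called directly instead of going through the script. A sketch under the assumption that the entry point is the one later published as `download_from_original_stable_diffusion_ckpt`; the exact name and module path at this commit may differ:

```python
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,  # assumed name/module path
)

# both file paths are placeholders
pipe = download_from_original_stable_diffusion_ckpt(
    checkpoint_path="v1-5-pruned-emaonly.ckpt",
    original_config_file="v1-inference.yaml",
)
pipe.save_pretrained("./converted-pipeline")
```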
-
Lucain authored
* Switch to huggingface_hub.ModelCard * Remove modelcards dependency in favor of Jinja2
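The switch is mostly transparent to users; a minimal sketch of the `huggingface_hub.ModelCard` API that replaces the `modelcards` package (the repo id is a placeholder):

```python
from huggingface_hub import ModelCard

card = ModelCard.load("runwayml/stable-diffusion-v1-5")  # any model repo id
print(card.data.to_dict())  # structured metadata parsed from the card header
print(card.text[:200])      # the rendered markdown body
```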
-
Suraj Patil authored
* begin pix2pix * fix * cfg image_latents * fix some docstr * fix * fix * hack * fix * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * add comments to explain the hack * move __call__ to the top * doc * remove height and width * remove deprecations * fix doc str * quality * fast tests * change model id * fast tests * fix test * address Pedro's comments * copyright * Simple doc page. * Apply suggestions from code review * style * Remove import * address some review comments * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * style Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
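A minimal sketch of invoking the resulting pipeline, assuming the released `timbrooks/instruct-pix2pix` checkpoint (the image URL is a placeholder):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
image = load_image("https://example.com/input.png")  # placeholder URL
# image_guidance_scale controls how closely the edit sticks to the input image
edited = pipe("make it snowy", image=image, image_guidance_scale=1.5).images[0]
```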
-
- 18 Jan, 2023 1 commit
-
-
Patrick von Platen authored
* [Lora] first upload * add first lora version * upload * more * first training * up * correct * improve * finish loaders and inference * up * up * fix more * up * finish more * finish more * up * up * change year * revert year change * Change lines * Add cloneofsimo as co-author. Co-authored-by:
Simo Ryu <cloneofsimo@gmail.com> * finish * fix docs * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Suraj Patil <surajp815@gmail.com> * upload * finish Co-authored-by:
Simo Ryu <cloneofsimo@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Suraj Patil <surajp815@gmail.com>
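A sketch of the loader API this PR introduces, assuming LoRA attention-processor weights produced by the accompanying training script (the path and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/lora")  # placeholder path to trained LoRA weights
image = pipe("a pokemon with blue eyes", num_inference_steps=30).images[0]
```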
-
- 17 Jan, 2023 3 commits
-
-
Kashif Rasul authored
* added dit model * import * initial pipeline * initial convert script * initial pipeline * make style * raise valueerror * single function * rename classes * use DDIMScheduler * timesteps embedder * samples to cpu * fix var names * fix numpy type * use timesteps class for proj * fix typo * fix arg name * flip_sin_to_cos and better var names * fix C shape calculation * make style * remove unused imports * cleanup * add back patch_size * initial dit doc * typo * Update docs/source/api/pipelines/dit.mdx Co-authored-by:
Suraj Patil <surajp815@gmail.com> * added copyright license headers * added example usage and toc * fix variable names asserts * remove comment * added docs * fix typo * upstream changes * set proper device for drop_ids * added initial dit pipeline test * update docs * fix imports * make fix-copies * isort * fix imports * get rid of more magic numbers * fix code when guidance is off * remove block_kwargs * cleanup script * removed to_2tuple * use FeedForward class instead of another MLP * style * work on merging DiTBlock with BasicTransformerBlock * added missing final_dropout and args to BasicTransformerBlock * use norm from block * fix arg * remove unused arg * fix call to class_embedder * use timesteps * make style * attn_output gets multiplied * removed commented code * use Transformer2D * use self.is_input_patches * fix flags * fixed conversion to use Transformer2DModel * fixes for pipeline * remove dit.py * fix timesteps device * use randn_tensor and fix fp16 inf. * timesteps_emb already the right dtype * fix dit test class * fix test and style * fix norm2 usage in vq-diffusion * added author names to pipeline and ImageNet labels link * fix tests * use norm_type as string * rename dit to transformer * fix name * fix test * set norm_type = "layer" by default * fix tests * do not skip common tests * Update src/diffusers/models/attention.py Co-authored-by:
Suraj Patil <surajp815@gmail.com> * revert AdaLayerNorm API * fix norm_type name * make sure all components are in eval mode * revert norm2 API * compact * finish deprecation * add slow tests * remove @ * refactor some stuff * upload * Update src/diffusers/pipelines/dit/pipeline_dit.py * finish more * finish docs * improve docs * finish docs Co-authored-by:
Suraj Patil <surajp815@gmail.com> Co-authored-by:
William Berman <WLBberman@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
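A minimal sketch of the final pipeline, assuming the converted `facebook/DiT-XL-2-256` checkpoint; DiT is class-conditional, so label text is first mapped to ImageNet class ids:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")
class_ids = pipe.get_label_ids(["golden retriever"])  # ImageNet label text -> class ids
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]
```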
-
Pedro Cuenca authored
* Check k-diffusion version is at least 0.0.12 * make style
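Not the actual implementation, but a sketch of the kind of guard this adds, using stock `importlib.metadata` and `packaging`:

```python
from importlib.metadata import version
from packaging.version import parse

# refuse to run against a k-diffusion older than the first compatible release
if parse(version("k-diffusion")) < parse("0.0.12"):
    raise ImportError("k-diffusion>=0.0.12 is required: `pip install -U k-diffusion`")
```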
-
Jerry Jiarui XU authored
* [Flax] Add Flax inpainting impl * fixed copies, add README.md * fixed README.md * add test * format * update README.md
-
- 04 Jan, 2023 2 commits
-
-
qsh-zh authored
* feat: add log-rho DEIS multistep scheduler * docs: fix typo * docs: add docs for impl algo * docs: remove duplicate ref * finish deis * add docs * fix Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
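A minimal sketch of swapping the new scheduler into an existing pipeline:

```python
from diffusers import DEISMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# from_config reuses the pipeline's existing schedule settings (betas, timesteps, ...)
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
```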
-
Patrick von Platen authored
* [Repro] Correct reproducibility * up * up * up * up * need better image * allow conversion from no state dict checkpoints * up * up * up * up * check tensors * check tensors * check tensors * check tensors * next try * up * up * better name * up * up * Apply suggestions from code review * correct more * up * replace all torch randn * fix * correct * correct * finish * fix more * up
-
- 03 Jan, 2023 2 commits
-
-
Patrick von Platen authored
* [Deterministic torch randn] Allow tensors to be generated on CPU * fix more * up * fix more * up * Update src/diffusers/utils/torch_utils.py Co-authored-by:
Anton Lozhkov <anton@huggingface.co> * Apply suggestions from code review * up * up * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Anton Lozhkov <anton@huggingface.co> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
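A sketch of the helper this PR adds (exported via `diffusers.utils` at the time; later releases moved it to `diffusers.utils.torch_utils`): noise is drawn from a CPU generator for reproducibility and then placed on the target device:

```python
import torch
from diffusers.utils import randn_tensor  # later: diffusers.utils.torch_utils

shape = (1, 4, 64, 64)
generator = torch.Generator("cpu").manual_seed(0)
# identical values on every machine, independent of the CUDA RNG state
latents = randn_tensor(shape, generator=generator, device=torch.device("cuda"), dtype=torch.float16)
```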
-
Anton Lozhkov authored
Add accelerate and xformers to diffusers-cli env
-
- 02 Jan, 2023 1 commit
-
-
YiYi Xu authored
* add a doc page for each pipeline under api/pipelines/stable_diffusion * add pipeline examples to docstrings * updated stable_diffusion_2 page * updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870 * add function decorator Co-authored-by:
yiyixuxu <yixu@Yis-MacBook-Pro.lan> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
- 30 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* move files a bit * more refactors * fix more * more fixes * fix more onnx * make style * upload * fix * up * fix more * up again * up * small fix * Update src/diffusers/__init__.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * correct Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
-
- 28 Dec, 2022 1 commit
-
-
Will Berman authored
* unCLIP image variation * remove prior comment re: @pcuenca * stable diffusion -> unCLIP re: @pcuenca * add copy froms re: @patil-suraj
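A minimal sketch of the new pipeline, assuming the `kakaobrain/karlo-v1-alpha-image-variations` checkpoint (the image URL is a placeholder):

```python
import torch
from diffusers import UnCLIPImageVariationPipeline
from diffusers.utils import load_image

pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16
).to("cuda")
image = load_image("https://example.com/cat.png")  # placeholder URL
variations = pipe(image=image, num_images_per_prompt=2).images
```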
-
- 20 Dec, 2022 1 commit
-
-
Dhruv Naik authored
* add flax img2img pipeline * update pipeline * black format file * remove arg from get_timesteps * update get_timesteps * fix bug: make use of timesteps in the for-loop * black file * black, isort, flake8 * update docstring * update readme * update flax img2img readme * update sd pipeline init * Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * update inits * revert change * update var name to image, typo * update readme * return new t_start instead of modified timestep * black format * isort files * update docs * fix-copies * update prng_seed typing Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
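A sketch of the Flax usage pattern, following the conventions of the other Flax SD pipelines (sharded inputs, replicated params, one PRNG key per device); the checkpoint revision and image URL are assumptions:

```python
import jax
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jnp.bfloat16
)
n = jax.device_count()
init_image = load_image("https://example.com/sketch.png")  # placeholder URL
prompt_ids, processed_image = pipe.prepare_inputs(
    prompt=["a fantasy landscape"] * n, image=[init_image] * n
)
images = pipe(
    prompt_ids=shard(prompt_ids),
    image=shard(processed_image),
    params=replicate(params),
    prng_seed=jax.random.split(jax.random.PRNGKey(0), n),
    strength=0.75,           # how much of the input image to re-noise
    num_inference_steps=50,
    jit=True,                # pmap across available devices
).images
```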
-
- 19 Dec, 2022 1 commit
-
-
Anton Lozhkov authored
* Transformers version req for UnCLIP * add to the list
-
- 18 Dec, 2022 1 commit
-
-
Will Berman authored
* [wip] attention block updates * [wip] unCLIP unet decoder and super res * [wip] unCLIP prior transformer * [wip] scheduler changes * [wip] text proj utility class * [wip] UnCLIPPipeline * [wip] kakaobrain unCLIP convert script * [unCLIP pipeline] fixes re: @patrickvonplaten remove callbacks move denoising loops into call function * UNCLIPScheduler re: @patrickvonplaten Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified DDPM scheduler with changes to support karlo * mask -> attention_mask re: @patrickvonplaten * [DDPMScheduler] remove leftover change * [docs] PriorTransformer * [docs] UNet2DConditionModel and UNet2DModel * [nit] UNCLIPScheduler -> UnCLIPScheduler matches existing unclip naming better * [docs] SchedulingUnCLIP * [docs] UnCLIPTextProjModel * refactor * finish licenses * rename all to attention_mask and prep in models * more renaming * don't expose unused configs * final renaming fixes * remove x attn mask when not necessary * configure kakao script to use new class embedding config * fix copies * [tests] UnCLIPScheduler * finish x attn * finish * remove more * rename condition blocks * clean more * Apply suggestions from code review * up * fix * [tests] UnCLIPPipelineFastTests * remove unused imports * [tests] UnCLIPPipelineIntegrationTests * correct * make style Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
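A minimal sketch of text-to-image with the converted karlo weights, assuming the `kakaobrain/karlo-v1-alpha` checkpoint:

```python
import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
).to("cuda")
image = pipe("a high-resolution photo of a corgi wearing sunglasses").images[0]
```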
-
- 16 Dec, 2022 1 commit
-
-
Anton Lozhkov authored
* [WIP] Nightly integration tests * initial SD tests * update SD slow tests * style * repaint * ImageVariations * style * finish imgvar * img2img tests * debug * inpaint 1.5 * inpaint legacy * torch isn't happy about deterministic ops * allclose -> max diff for shorter logs * add SD2 * debug * Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix refs * Update src/diffusers/utils/testing_utils.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * fix refs * remove debug Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
-
- 14 Dec, 2022 1 commit
-
-
Chino authored
-
- 09 Dec, 2022 1 commit
-
-
Anton Lozhkov authored
* Adapt to forced transformers version in some dependent libraries * style * Update __init__.py * update requires_backends
-
- 08 Dec, 2022 3 commits
-
-
Suraj Patil authored
* begin depth pipeline * add depth estimation model * fix prepare_depth_mask * add a comment about autocast * copied from, quality, cleanup * begin tests * handle tensors * norm image tensor * fix batch size * fix tests * fix enable_sequential_cpu_offload * fix save load * fix test_save_load_float16 * fix test_save_load_optional_components * fix test_float16_inference * fix test_cpu_offload_forward_pass * fix test_dict_tuple_outputs_equivalent * up * fix fast tests * fix test_stable_diffusion_img2img_multiple_init_images * fix few more fast tests * don't use device map for DPT * fix test_stable_diffusion_pipeline_with_sequential_cpu_offloading * accept external depth maps * prepare_depth_mask -> prepare_depth_map * fix file name * fix file name * quality * check transformers version * fix test names * use skipif * fix import * add docs * skip tests on mps * correct version * up * Update docs/source/api/pipelines/stable_diffusion_2.mdx * fix fix-copies * fix fix-copies Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
anton- <anton@huggingface.co>
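A minimal sketch of the resulting pipeline, assuming the `stabilityai/stable-diffusion-2-depth` checkpoint; when no depth map is passed, one is estimated with the bundled DPT model:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")
init_image = load_image("https://example.com/room.png")  # placeholder URL
image = pipe(prompt="a cozy wooden cabin interior", image=init_image, strength=0.7).images[0]
```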
-
Anton Lozhkov authored
* Fix PyCharm/VSCode static type checking for dummy objects * Re-add dummies * Fix AudioDiffusion imports * fix import * fix import * Update utils/check_dummies.py Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Update src/diffusers/utils/import_utils.py * Update src/diffusers/__init__.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/__init__.py * fix double import Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
Patrick von Platen authored
* up * up
-
- 07 Dec, 2022 2 commits
-
-
Cheng Lu authored
* add singlestep dpmsolver * fix a style typo * fix a style typo * add docs * finish Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
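A sketch of constructing the new scheduler directly; the hyperparameters shown are the common Stable Diffusion values, not ones this PR mandates:

```python
from diffusers import DPMSolverSinglestepScheduler

scheduler = DPMSolverSinglestepScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    solver_order=2,  # second-order singlestep DPM-Solver
)
```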
-
Patrick von Platen authored
* add paint by example * make loading possible * up * Update src/diffusers/models/attention.py * up * finalize weight structure * make example work * make it work * up * up * fix * del * add * update * Apply suggestions from code review * correct transformer 2d * finish * up * up * up * up * fix * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Apply suggestions from code review * up * finish Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
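A minimal sketch of the exemplar-guided inpainting call, assuming the `Fantasy-Studio/Paint-by-Example` checkpoint (all image URLs are placeholders):

```python
import torch
from diffusers import PaintByExamplePipeline
from diffusers.utils import load_image

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
).to("cuda")
init_image = load_image("https://example.com/scene.png")       # image to edit
mask_image = load_image("https://example.com/mask.png")        # white = region to repaint
example_image = load_image("https://example.com/example.png")  # exemplar to transplant
result = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
```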
-
- 06 Dec, 2022 1 commit
-
-
Suraj Patil authored
* add check_min_version for examples * move __version__ to the top * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * fix comment * fix error_message * adapt the install message Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
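A sketch of what each example script now does at import time (the version string is a placeholder):

```python
from diffusers.utils import check_min_version

# fails fast with an actionable install message if the local diffusers is too old
check_min_version("0.10.0.dev0")  # placeholder version
```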
-
- 05 Dec, 2022 1 commit
-
-
Robert Dargavel Smith authored
* add AudioDiffusionPipeline and LatentAudioDiffusionPipeline * add docs to toc * fix tests * fix tests * fix tests * fix tests * fix tests * Update pr_tests.yml Fix tests * add colab notebook [Flax] Fix loading scheduler from subfolder (#1319) [FLAX] Fix loading scheduler from subfolder Fix/Enable all schedulers for in-painting (#1331) * inpaint fix k lms * onnx as well * up Correct path to scheduler (#1322) * [Examples] Correct path * up Avoid nested fix-copies (#1332) * Avoid nested `# Copied from` statements during `make fix-copies` * style Fix img2img speed with LMS-Discrete Scheduler (#896) Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower. Co-authored-by:
Anton Lozhkov <anton@huggingface.co> Fix the order of casts for onnx inpainting (#1338) Legacy Inpainting Pipeline for Onnx Models (#1237) * Add legacy inpainting pipeline compatibility for onnx * remove commented out line * Add onnx legacy inpainting test * Fix slow decorators * pep8 styling * isort styling * dummy object * ordering consistency * style * docstring styles * Refactor common prompt encoding pattern * Update tests to permanent repository home * support all available schedulers until ONNX IO binding is available Co-authored-by:
Anton Lozhkov <aglozhkov@gmail.com> * updated styling from PR suggested feedback Co-authored-by:
Anton Lozhkov <aglozhkov@gmail.com> Jax infer support negative prompt (#1337) * support negative prompts in sd jax pipeline * pass batched neg_prompt * only encode when negative prompt is None Co-authored-by:
Juan Acevedo <jfacevedo@google.com> Update README.md: Minor change to Imagic code snippet, missing dir error (#1347) Minor change to Imagic Readme Missing dir causes an error when running the example code. make style change the sample model (#1352) * Update alt_diffusion.mdx * Update alt_diffusion.mdx Add bit diffusion [WIP] (#971) * Create bit_diffusion.py Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG * adding bit diffusion to new branch ran tests * tests * tests * tests * tests * removed test folders + added to README * Update README.md Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * move Mel to module in pipeline construction, make librosa optional * fix imports * fix copy & paste error in comment * fix style * add missing register_to_config * fix class docstrings * fix class docstrings * tweak docstrings * tweak docstrings * update slow test * put trailing commas back * respect alphabetical order * remove LatentAudioDiffusion, make vqvae optional * move Mel from models back to pipelines :-) * allow loading of pretrained audiodiffusion models * fix tests * fix dummies * remove reference to latent_audio_diffusion in docs * unused import * inherit from SchedulerMixin to make loadable * Apply suggestions from code review * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
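A minimal sketch of the new pipeline, assuming the author's `teticio/audio-diffusion-256` checkpoint; the output carries both the mel spectrogram image and the decoded waveform:

```python
from diffusers import AudioDiffusionPipeline

pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to("cuda")
output = pipe()
spectrogram = output.images[0]  # mel spectrogram rendered as an image
waveform = output.audios[0]     # audio reconstructed from that spectrogram
```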
-
- 02 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* up * up * finish * finish * up * up * finish
-
- 01 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* Correct stacklevel * fix
-
- 29 Nov, 2022 1 commit
-
-
Anton Lozhkov authored
* Add `ort_nightly_directml` to the `onnxruntime` candidates * style
-
- 28 Nov, 2022 2 commits
-
-
Patrick von Platen authored
* Add heun * Finish first version of heun * remove bogus * finish * finish * improve * up * up * fix more * change progress bar * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * finish * up * up * up
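A sketch of loading the new scheduler from an existing repo and inspecting its schedule; Heun is a second-order method, so most sigmas appear twice (one extra model evaluation per step):

```python
from diffusers import HeunDiscreteScheduler

scheduler = HeunDiscreteScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
scheduler.set_timesteps(num_inference_steps=25)
print(scheduler.timesteps[:6])  # note the duplicated entries from the second-order correction
```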
-
Nicolas Patry authored
* [Proposal] Support loading from safetensors if file is present. * Style. * Fix. * Adding some tests to check loading logic. + modify download logic to not download pytorch file if not necessary. * Fixing the logic. * Addressing comments. * factor out into a function. * Remove dead function. * Typo. * Extra fetch only if safetensors is there. * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
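No API change is involved; a sketch of the behavior, assuming `safetensors` is installed and the repo ships `.safetensors` weights:

```python
from diffusers import StableDiffusionPipeline

# .safetensors files are now preferred over pickle-based .bin weights when present,
# and the redundant PyTorch files are no longer downloaded
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```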
-
- 25 Nov, 2022 1 commit
-
-
Suraj Patil authored
* StableDiffusionUpscalePipeline * fix a few things * make it better * fix image batching * run vae in fp32 * fix docstr * resize to multiple of 64 * doc * remove safety_checker * add max_noise_level * fix Copied * begin tests * slow tests * default max_noise_level * remove kwargs * doc * fix * fix fast tests * fix fast tests * no sf * don't offload vae Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
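A minimal sketch of the new pipeline, assuming the `stabilityai/stable-diffusion-x4-upscaler` checkpoint (the image URL is a placeholder):

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
low_res = load_image("https://example.com/low_res.png")  # placeholder URL
upscaled = pipe(prompt="a white cat", image=low_res, noise_level=20).images[0]
```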
-
- 24 Nov, 2022 1 commit
-
-
Patrick von Platen authored
* up * up * fix * up * more fixes * up * up * up * up * up * fix final tests
-