1. 06 Jul, 2023 2 commits
    • Fix SD XL Docs (#3971) · 38e563d0
      Patrick von Platen authored
      * finish sd xl docs
      
      * make style
      
      * Apply suggestions from code review
      
      * uP
      
      * uP
      
      * Correct
    • [SD-XL] Add new pipelines (#3859) · bc9a8cef
      Patrick von Platen authored
      
      
      * Add new text encoder
      
      * add transformers depth
      
      * More
      
      * Correct conversion script
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * correct text encoder
      
      * Finish all
      
      * proof that it works in run local xl
      
      * clean up
      
      * Get refiner to work
      
      * Add red castle
      
      * Fix batch size
      
      * Improve pipelines more
      
      * Finish text2image tests
      
      * Add img2img test
      
      * Fix more
      
      * fix import
      
      * Fix embeddings for classic models (#3888)
      
      Fix embeddings for classic SD models.
      
      * Allow multiple prompts to be passed to the refiner (#3895)
      
      * finish more
      
      * Apply suggestions from code review
      
      * add watermarker
      
      * Model offload (#3889)
      
      * Model offload.
      
      * Model offload for refiner / img2img
      
      * Hardcode encoder offload on img2img vae encode
      
      Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * correct
      
      * fix
      
      * clean print
      
      * Update install warning for `invisible-watermark`
      
      * add: missing docstrings.
      
      * fix and simplify the usage example in img2img.
      
      * fix setup for watermarking.
      
      * Revert "fix setup for watermarking."
      
      This reverts commit 491bc9f5a640bbf46a97a8e52d6eff7e70eb8e4b.
      
      * fix: watermarking setup.
      
      * fix: op.
      
      * run make fix-copies.
      
      * make sure tests pass
      
      * improve convert
      
      * make tests pass
      
      * make tests pass
      
      * better error message
      
      * finish
      
      * finish
      
      * Fix final test
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
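
      A minimal usage sketch of the base + refiner flow this PR introduces; the model ids, fp16 settings, and the latent hand-off via output_type="latent" follow later diffusers documentation and are assumptions here rather than part of this commit:

        import torch
        from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

        # text-to-image base pass (model id assumed from the public SD-XL release)
        base = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
        # the refiner consumes the base output as an img2img step
        refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "a red castle on a hill at sunset"
        latents = base(prompt=prompt, output_type="latent").images  # keep latents for the refiner
        image = refiner(prompt=prompt, image=latents).images[0]
        image.save("red_castle.png")
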
  2. 27 Apr, 2023 1 commit
    • [2064]: Add stochastic sampler (sample_dpmpp_sde) (#3020) · fd512d74
      Nipun Jindal authored
      
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * [2064]: Add stochastic sampler
      
      * Review comments
      
      * [Review comment]: Add is_torchsde_available()
      
      * [Review comment]: Test and docs
      
      * [Review comment]
      
      * [Review comment]
      
      * [Review comment]
      
      * [Review comment]
      
      * [Review comment]
      
      ---------
      Co-authored-by: njindal <njindal@adobe.com>
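
      A hedged sketch of swapping the new stochastic sampler into an existing pipeline; the torchsde soft dependency is named above, while the DPMSolverSDEScheduler class name, model id, and step count follow the diffusers release that shipped this sampler and are assumptions here:

        # requires `pip install torchsde`, guarded by the is_torchsde_available() check added in this PR
        import torch
        from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        # replace the default scheduler with the stochastic (SDE) DPM-Solver variant
        pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
        image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
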
  3. 25 Apr, 2023 1 commit
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
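
      The commit message references staged doc snippets, a watermarker, and the pt_to_pil helper; a hedged sketch of running just the first stage in that style (the DeepFloyd IF model id and the encode_prompt call come from later diffusers docs and are assumptions, not part of this commit):

        import torch
        from diffusers import DiffusionPipeline
        from diffusers.utils import pt_to_pil

        # stage 1: pixel-space text-to-image (model id assumed)
        stage_1 = DiffusionPipeline.from_pretrained(
            "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
        )
        stage_1.enable_model_cpu_offload()

        prompt = "a photo of a kangaroo wearing an orange hoodie"
        prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)
        image = stage_1(
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_embeds,
            output_type="pt",
        ).images
        # pt_to_pil (mentioned above) converts the torch tensor output back to PIL images
        pt_to_pil(image)[0].save("stage_1.png")
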
  4. 31 Mar, 2023 1 commit
  5. 24 Mar, 2023 1 commit
  6. 23 Mar, 2023 1 commit
    • Music Spectrogram diffusion pipeline (#1044) · 2ef9bdd7
      Kashif Rasul authored
      
      
      * initial TokenEncoder and ContinuousEncoder
      
      * initial modules
      
      * added ContinuousContextTransformer
      
      * fix copy paste error
      
      * use numpy for get_sequence_length
      
      * initial terminal relative positional encodings
      
      * fix weights keys
      
      * fix assert
      
      * cross attend style: concat encodings
      
      * make style
      
      * concat once
      
      * fix formatting
      
      * Initial SpectrogramPipeline
      
      * fix input_tokens
      
      * make style
      
      * added mel output
      
      * ignore weights for config
      
      * move mel to numpy
      
      * import pipeline
      
      * fix class names and import
      
      * moved models to models folder
      
      * import ContinuousContextTransformer and SpectrogramDiffusionPipeline
      
      * initial spec diffusion conversion script
      
      * renamed config to t5config
      
      * added weight loading
      
      * use arguments instead of t5config
      
      * broadcast noise time to batch dim
      
      * fix call
      
      * added scale_to_features
      
      * fix weights
      
      * transpose layernorm weight
      
      * scale is a vector
      
      * scale the query outputs
      
      * added comment
      
      * undo scaling
      
      * undo depth_scaling
      
      * initial get_extended_attention_mask
      
      * attention_mask is none in self-attention
      
      * cleanup
      
      * manually invert attention
      
      * nn.Linear needs bias=False
      
      * added T5LayerFFCond
      
      * remove to fix conflict
      
      * make style and dummy
      
      * remove unused variables
      
      * remove predict_epsilon
      
      * Move accelerate to a soft-dependency (#1134)
      
      * finish
      
      * finish
      
      * Update src/diffusers/modeling_utils.py
      
      * Update src/diffusers/pipeline_utils.py
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      
      * more fixes
      
      * fix
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      
      * fix order
      
      * added initial midi to note token data pipeline
      
      * added int to int tokenizer
      
      * remove duplicate
      
      * added logic for segments
      
      * add melgan to pipeline
      
      * move autoregressive gen into pipeline
      
      * added note_representation_processor_chain
      
      * fix dtypes
      
      * remove immutabledict req
      
      * initial doc
      
      * use np.where
      
      * require note_seq
      
      * fix typo
      
      * update dependency
      
      * added note-seq to test
      
      * added is_note_seq_available
      
      * fix import
      
      * added toc
      
      * added example usage
      
      * undo for now
      
      * moved docs
      
      * fix merge
      
      * fix imports
      
      * predict first segment
      
      * avoid un-needed copy to and from cpu
      
      * make style
      
      * Copyright
      
      * fix style
      
      * add test and fix inference steps
      
      * remove bogus files
      
      * reorder models
      
      * up
      
      * remove transformers dependency
      
      * make work with diffusers cross attention
      
      * clean more
      
      * remove @
      
      * improve further
      
      * up
      
      * uP
      
      * Apply suggestions from code review
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      
      * loop over all tokens
      
      * make style
      
      * Added a section on the model
      
      * fix formatting
      
      * grammar
      
      * formatting
      
      * make fix-copies
      
      * Update src/diffusers/pipelines/__init__.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * added callback and optional onnx
      
      * do not squeeze batch dim
      
      * clean up more
      
      * upload
      
      * convert jax to numpy
      
      * make style
      
      * fix warning
      
      * make fix-copies
      
      * fix warning
      
      * add initial fast tests
      
      * add initial pipeline_params
      
      * eval mode due to dropout
      
      * skip batch tests as pipeline runs on a single file
      
      * make style
      
      * fix relative path
      
      * fix doc tests
      
      * Update src/diffusers/models/t5_film_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/t5_film_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/spectrogram_diffusion.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add MidiProcessor
      
      * format
      
      * fix org
      
      * Apply suggestions from code review
      
      * Update tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
      
      * make style
      
      * pin protobuf to <4
      
      * fix formatting
      
      * white space
      
      * tensorboard needs protobuf
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
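
      A hedged usage sketch of the pipeline together with the MidiProcessor added near the end of this PR; the model id and the audios output field follow the diffusers docs for this pipeline, and the MIDI filename is just a placeholder:

        # requires `pip install note_seq`, the soft dependency introduced in this PR
        from diffusers import MidiProcessor, SpectrogramDiffusionPipeline

        pipe = SpectrogramDiffusionPipeline.from_pretrained(
            "google/music-spectrogram-diffusion"
        ).to("cuda")
        processor = MidiProcessor()

        # the processor turns a MIDI file into note-token segments that the pipeline
        # renders segment by segment and stitches into audio via MelGAN
        tokens = processor("beethoven_hammerklavier_2.mid")  # placeholder file name
        output = pipe(tokens)
        audio = output.audios[0]
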
  7. 22 Mar, 2023 1 commit
    • [MS Text To Video] Add first text to video (#2738) · ca1a2229
      Patrick von Platen authored
      
      
      * [MS Text To Video] Add first text to video
      
      * upload
      
      * make first model example
      
      * match unet3d params
      
      * make sure weights are correctly converted
      
      * improve
      
      * forward pass works, but diff result
      
      * make forward work
      
      * fix more
      
      * finish
      
      * refactor video output class.
      
      * feat: add support for a video export utility.
      
      * fix: opencv availability check.
      
      * run make fix-copies.
      
      * add: docs for the model components.
      
      * add: standalone pipeline doc.
      
      * edit docstring of the pipeline.
      
      * add: right path to TransformerTempModel
      
      * add: first set of tests.
      
      * complete fast tests for text to video.
      
      * fix bug
      
      * up
      
      * three fast tests failing.
      
      * add: note on slow tests
      
      * make work with all schedulers
      
      * apply styling.
      
      * add slow tests
      
      * change file name
      
      * update
      
      * more correction
      
      * more fixes
      
      * finish
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      
      * make copies
      
      * fix pipeline tests
      
      * fix more tests
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * apply suggestions
      
      * up
      
      * revert
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
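
      A hedged sketch of the pipeline and the video export utility added here; the model id and the exact shape of .frames follow later diffusers docs and have changed across versions, so treat this as illustrative only:

        import torch
        from diffusers import TextToVideoSDPipeline
        from diffusers.utils import export_to_video

        pipe = TextToVideoSDPipeline.from_pretrained(
            "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
        ).to("cuda")

        frames = pipe("an astronaut riding a horse on mars", num_frames=16).frames
        # the export utility (backed by opencv, as noted above) writes the frames to an .mp4
        video_path = export_to_video(frames)
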
  8. 07 Mar, 2023 1 commit
  9. 01 Mar, 2023 1 commit
  10. 16 Feb, 2023 2 commits
    • `enable_model_cpu_offload` (#2285) · 2777264e
      Pedro Cuenca authored
      * enable_model_offload PoC
      
      It's surprisingly more involved than expected, see comments in the PR.
      
      * Rename final_offload_hook
      
      * Invoke the vae forward hook manually.
      
      * Completely remove decoder.
      
      * Style
      
      * apply_forward_hook decorator
      
      * Rename method.
      
      * Style
      
      * Copy enable_model_cpu_offload
      
      * Fix copies.
      
      * Remove comment.
      
      * Fix copies
      
      * Missing import
      
      * Fix doc-builder style.
      
      * Merge main and fix again.
      
      * Add docs
      
      * Fix docs.
      
      * Add a couple of tests.
      
      * style
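
      A minimal sketch of the new call; it needs accelerate installed, and the model id is only an example:

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        )
        # whole sub-models are moved to the GPU only while they run, then offloaded back;
        # coarser than enable_sequential_cpu_offload, but much faster per image
        pipe.enable_model_cpu_offload()  # no explicit .to("cuda"); the hooks manage placement
        image = pipe("a photo of an astronaut riding a horse").images[0]
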
    • [Pipelines] Adds pix2pix zero (#2334) · fd3d5502
      Sayak Paul authored
      * add: support for BLIP generation.
      
      * add: support for editing synthetic images.
      
      * remove unnecessary comments.
      
      * add inits and run make fix-copies.
      
      * version change of diffusers.
      
      * fix: condition for loading the captioner.
      
      * default conditions_input_image to False.
      
      * guidance_amount -> cross_attention_guidance_amount
      
      * fix inputs to check_inputs()
      
      * fix: attribute.
      
      * fix: prepare_attention_mask() call.
      
      * debugging.
      
      * better placement of references.
      
      * remove torch.no_grad() decorations.
      
      * put torch.no_grad() context before the first denoising loop.
      
      * detach() latents before decoding them.
      
      * put decoding in a torch.no_grad() context.
      
      * add reconstructed image for debugging.
      
      * no_grad()
      
      * apply formatting.
      
      * address one-off suggestions from the draft PR.
      
      * back to torch.no_grad() and add more elaborate comments.
      
      * refactor prepare_unet() per Patrick's suggestions.
      
      * more elaborate description for .
      
      * formatting.
      
      * add docstrings to the methods specific to pix2pix zero.
      
      * suspecting a redundant noise prediction.
      
      * needed for gradient computation chain.
      
      * less hacks.
      
      * fix: attention mask handling within the processor.
      
      * remove attention reference map computation.
      
      * fix: cross attn args.
      
      * fix: processor.
      
      * store attention maps.
      
      * fix: attention processor.
      
      * update docs and better treatment to xa args.
      
      * update the final noise computation call.
      
      * change xa args call.
      
      * remove xa args option from the pipeline.
      
      * add: docs.
      
      * first test.
      
      * fix: url call.
      
      * fix: argument call.
      
      * remove image conditioning for now.
      
      * 🚨 add: fast tests.
      
      * explicit placement of the xa attn weights.
      
      * add: slow tests 🐢
      
      * fix: tests.
      
      * edited direction embedding should be on the same device as prompt_embeds.
      
      * debugging message.
      
      * debugging.
      
      * add pix2pix zero pipeline for a non-deterministic test.
      
      * debugging.
      
      * remove debugging message.
      
      * make caption generation _
      
      * address comments (part I).
      
      * address PR comments (part II)
      
      * fix: DDPM test assertion.
      
      * refactor doc.
      
      * address PR comments (part III).
      
      * fix: type annotation for the scheduler.
      
      * apply styling.
      
      * skip_mps and add note on embeddings in the docs.
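
      A hedged sketch of the pipeline this PR adds; cross_attention_guidance_amount is named above, while the StableDiffusionPix2PixZeroPipeline class name, the get_embeds helper, and the model id follow later diffusers docs and are assumptions here:

        import torch
        from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline

        pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")
        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

        # the edit direction is derived from embeddings of a source and a target concept
        source_embeds = pipe.get_embeds(["a photo of a cat"])
        target_embeds = pipe.get_embeds(["a photo of a dog"])

        image = pipe(
            "a photo of a cat sitting on a bench",
            source_embeds=source_embeds,
            target_embeds=target_embeds,
            cross_attention_guidance_amount=0.15,
        ).images[0]
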
  11. 08 Feb, 2023 1 commit
  12. 26 Jan, 2023 1 commit
  13. 25 Jan, 2023 1 commit
  14. 20 Jan, 2023 2 commits
  15. 18 Jan, 2023 1 commit
  16. 17 Jan, 2023 1 commit
  17. 19 Dec, 2022 1 commit
  18. 14 Dec, 2022 1 commit
  19. 09 Dec, 2022 1 commit
  20. 08 Dec, 2022 3 commits
  21. 05 Dec, 2022 1 commit
    • add AudioDiffusionPipeline and LatentAudioDiffusionPipeline #1334 (#1426) · 48d0123f
      Robert Dargavel Smith authored
      
      
      * add AudioDiffusionPipeline and LatentAudioDiffusionPipeline
      
      * add docs to toc
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * Update pr_tests.yml
      
      Fix tests
      
      * add colab notebook
      
      [Flax] Fix loading scheduler from subfolder (#1319)
      
      [FLAX] Fix loading scheduler from subfolder
      
      Fix/Enable all schedulers for in-painting (#1331)
      
      * inpaint fix k lms
      
      * onnx as well
      
      * up
      
      Correct path to scheduler (#1322)
      
      * [Examples] Correct path
      
      * uP
      
      Avoid nested fix-copies (#1332)
      
      * Avoid nested `# Copied from` statements during `make fix-copies`
      
      * style
      
      Fix img2img speed with LMS-Discrete Scheduler (#896)
      
      Casting `self.sigmas` into a different dtype (the one of original_samples) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      
      Fix the order of casts for onnx inpainting (#1338)
      
      Legacy Inpainting Pipeline for Onnx Models (#1237)
      
      * Add legacy inpainting pipeline compatibility for onnx
      
      * remove commented out line
      
      * Add onnx legacy inpainting test
      
      * Fix slow decorators
      
      * pep8 styling
      
      * isort styling
      
      * dummy object
      
      * ordering consistency
      
      * style
      
      * docstring styles
      
      * Refactor common prompt encoding pattern
      
      * Update tests to permanent repository home
      
      * support all available schedulers until ONNX IO binding is available
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * updated styling from PR suggested feedback
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      Jax infer support negative prompt (#1337)
      
      * support negative prompts in sd jax pipeline
      
      * pass batched neg_prompt
      
      * only encode when negative prompt is None
      Co-authored-by: Juan Acevedo <jfacevedo@google.com>
      
      Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
      
      Minor change to Imagic Readme
      
      Missing dir causes an error when running the example code.
      
      make style
      
      change the sample model (#1352)
      
      * Update alt_diffusion.mdx
      
      * Update alt_diffusion.mdx
      
      Add bit diffusion [WIP] (#971)
      
      * Create bit_diffusion.py
      
      Bit diffusion based on the paper, arXiv:2208.04202, Chen2022AnalogBG
      
      * adding bit diffusion to new branch
      
      ran tests
      
      * tests
      
      * tests
      
      * tests
      
      * tests
      
      * removed test folders + added to README
      
      * Update README.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * move Mel to module in pipeline construction, make librosa optional
      
      * fix imports
      
      * fix copy & paste error in comment
      
      * fix style
      
      * add missing register_to_config
      
      * fix class docstrings
      
      * fix class docstrings
      
      * tweak docstrings
      
      * tweak docstrings
      
      * update slow test
      
      * put trailing commas back
      
      * respect alphabetical order
      
      * remove LatentAudioDiffusion, make vqvae optional
      
      * move Mel from models back to pipelines :-)
      
      * allow loading of pretrained audiodiffusion models
      
      * fix tests
      
      * fix dummies
      
      * remove reference to latent_audio_diffusion in docs
      
      * unused import
      
      * inherit from SchedulerMixin to make loadable
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
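
      A hedged sketch of the resulting pipeline; the model id is one of the author's published audio-diffusion checkpoints and is an assumption here:

        from diffusers import AudioDiffusionPipeline

        pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to("cuda")

        output = pipe()
        image = output.images[0]  # generated mel spectrogram
        audio = output.audios[0]  # waveform decoded from the spectrogram via the Mel helper
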
  22. 29 Nov, 2022 1 commit
  23. 28 Nov, 2022 1 commit
  24. 23 Nov, 2022 3 commits
  25. 04 Nov, 2022 1 commit
  26. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one-letter variables to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the mem eff benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
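
      A minimal sketch of the toggle described above; xformers stays a soft dependency, and the model id is only an example:

        # requires `pip install xformers`
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        pipe.enable_xformers_memory_efficient_attention()  # the method this PR introduces
        image = pipe("an astronaut riding a horse").images[0]
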
  27. 31 Oct, 2022 1 commit
  28. 04 Oct, 2022 1 commit
    • add accelerate to load models with smaller memory footprint (#361) · 4d1cce2f
      Pi Esposito authored
      
      
      * add accelerate to load models with smaller memory footprint
      
      * remove low_cpu_mem_usage as it is redundant
      
      * move accelerate init weights context to modelling utils
      
      * add test to ensure results are the same when loading with accelerate
      
      * add tests to ensure ram usage gets lower when using accelerate
      
      * move accelerate logic to single snippet under modelling utils and remove it from configuration utils
      
      * format code to pass the quality check
      
      * fix imports with isort
      
      * add accelerate to test extra deps
      
      * only import accelerate if device_map is set to auto
      
      * move accelerate availability check to diffusers import utils
      
      * format code
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
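
      A hedged sketch of the low-memory loading path this PR wires up; device_map="auto" is the trigger mentioned above, but the accepted values and defaults have changed in later diffusers releases, so treat this as illustrative only:

        import torch
        from diffusers import StableDiffusionPipeline

        # with accelerate installed, weights are loaded through accelerate's empty-weights
        # path instead of materializing a full fp32 copy in CPU RAM first
        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4",
            torch_dtype=torch.float16,
            device_map="auto",
        )
        image = pipe("a photograph of an astronaut riding a horse").images[0]
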
  29. 16 Sep, 2022 1 commit
  30. 08 Sep, 2022 1 commit
  31. 17 Aug, 2022 1 commit