1. 05 Dec, 2022 1 commit
    • add AudioDiffusionPipeline and LatentAudioDiffusionPipeline #1334 (#1426) · 48d0123f
      Robert Dargavel Smith authored
      
      
      * add AudioDiffusionPipeline and LatentAudioDiffusionPipeline
      
      * add docs to toc
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * Update pr_tests.yml
      
      Fix tests
      
      add colab notebook
      
      [Flax] Fix loading scheduler from subfolder (#1319)
      
      [FLAX] Fix loading scheduler from subfolder
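
      A hedged sketch of loading a Flax scheduler from a subfolder, which is what the fix above is about; the checkpoint id is an assumption, and the returned state object reflects my understanding of the Flax scheduler API:

      ```python
      from diffusers import FlaxPNDMScheduler

      # Flax schedulers hand back the scheduler together with its initial state.
      scheduler, scheduler_state = FlaxPNDMScheduler.from_pretrained(
          "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
      )
      ```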
      
      Fix/Enable all schedulers for in-painting (#1331)
      
      * inpaint fix k lms
      
      * onnx as well
      
      * up
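
      A rough sketch of what this enables: swapping a scheduler such as LMS into the in-painting pipeline. The checkpoint id is an assumption; the scheduler swap is the point.

      ```python
      from diffusers import LMSDiscreteScheduler, StableDiffusionInpaintPipeline

      pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
      # Re-create the scheduler from the pipeline's own config, then swap it in.
      pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
      # pipe(prompt=..., image=init_image, mask_image=mask).images[0] then runs with LMS.
      ```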
      
      Correct path to scheduler (#1322)
      
      * [Examples] Correct path
      
      * up
      
      Avoid nested fix-copies (#1332)
      
      * Avoid nested `# Copied from` statements during `make fix-copies`
      
      * style
      
      Fix img2img speed with LMS-Discrete Scheduler (#896)
      
      Casting `self.sigmas` to a different dtype (that of `original_samples`) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on (by long I mean more than 10x slower).
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
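
      A minimal sketch of the pattern described above; the function and variable names are illustrative, not the actual scheduler code. The idea is to cast a local copy of the sigmas to the sample dtype instead of mutating the stored tensor, so later high-precision work (the `integrate.quad`-based coefficient computation) is unaffected.

      ```python
      def add_noise(original_samples, noise, sigmas, timesteps):
          # Cast a local copy only; the scheduler's stored sigmas stay in float32.
          # "timesteps" here are indices into the sigmas tensor.
          sigma = sigmas.to(device=original_samples.device, dtype=original_samples.dtype)[timesteps]
          while sigma.ndim < original_samples.ndim:
              sigma = sigma.unsqueeze(-1)
          return original_samples + noise * sigma
      ```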
      
      Fix the order of casts for onnx inpainting (#1338)
      
      Legacy Inpainting Pipeline for Onnx Models (#1237)
      
      * Add legacy inpainting pipeline compatibility for onnx
      
      * remove commented out line
      
      * Add onnx legacy inpainting test
      
      * Fix slow decorators
      
      * pep8 styling
      
      * isort styling
      
      * dummy object
      
      * ordering consistency
      
      * style
      
      * docstring styles
      
      * Refactor common prompt encoding pattern
      
      * Update tests to permanent repository home
      
      * support all available schedulers until ONNX IO binding is available
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * updated styling based on PR review feedback
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
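
      A hedged sketch of using the new pipeline class; the checkpoint, revision and execution provider are assumptions (an ONNX export of a v1-style checkpoint has to exist):

      ```python
      from diffusers import OnnxStableDiffusionInpaintPipelineLegacy

      pipe = OnnxStableDiffusionInpaintPipelineLegacy.from_pretrained(
          "runwayml/stable-diffusion-v1-5", revision="onnx", provider="CPUExecutionProvider"
      )
      ```

      The call then takes a prompt together with an init image and a mask image, mirroring the PyTorch legacy in-painting pipeline.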
      
      Jax infer support negative prompt (#1337)
      
      * support negative prompts in sd jax pipeline
      
      * pass batched neg_prompt
      
      * only encode when negative prompt is None
      Co-authored-by: Juan Acevedo <jfacevedo@google.com>
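
      A rough sketch of the negative-prompt path in the Flax pipeline; the checkpoint and bf16 revision are assumptions, and `neg_prompt_ids` is the argument this change wires through:

      ```python
      import jax
      import jax.numpy as jnp
      from diffusers import FlaxStableDiffusionPipeline

      pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jnp.bfloat16
      )
      prompt_ids = pipeline.prepare_inputs(["an astronaut riding a horse on mars"])
      neg_prompt_ids = pipeline.prepare_inputs(["blurry, low quality"])

      images = pipeline(
          prompt_ids,
          params,
          jax.random.PRNGKey(0),
          num_inference_steps=50,
          neg_prompt_ids=neg_prompt_ids,
          jit=False,
      ).images
      ```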
      
      Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
      
      Minor change to Imagic Readme
      
      Missing dir causes an error when running the example code.
      
      make style
      
      change the sample model (#1352)
      
      * Update alt_diffusion.mdx
      
      * Update alt_diffusion.mdx
      
      Add bit diffusion [WIP] (#971)
      
      * Create bit_diffusion.py
      
      Bit diffusion based on the paper arXiv:2208.04202 (Chen2022AnalogBG)
      
      * adding bit diffusion to new branch
      
      ran tests
      
      * tests
      
      * tests
      
      * tests
      
      * tests
      
      * removed test folders + added to README
      
      * Update README.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * move Mel to module in pipeline construction, make librosa optional
      
      * fix imports
      
      * fix copy & paste error in comment
      
      * fix style
      
      * add missing register_to_config
      
      * fix class docstrings
      
      * fix class docstrings
      
      * tweak docstrings
      
      * tweak docstrings
      
      * update slow test
      
      * put trailing commas back
      
      * respect alphabetical order
      
      * remove LatentAudioDiffusion, make vqvae optional
      
      * move Mel from models back to pipelines :-)
      
      * allow loading of pretrained audiodiffusion models
      
      * fix tests
      
      * fix dummies
      
      * remove reference to latent_audio_diffusion in docs
      
      * unused import
      
      * inherit from SchedulerMixin to make loadable
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
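
      A hedged usage sketch of the pipeline as it ends up after these changes (Mel lives inside the pipeline and the VQ-VAE is optional); the checkpoint id teticio/audio-diffusion-256 is the community audio-diffusion model, and the output field names are assumptions based on the pipeline's documented output:

      ```python
      import torch
      from diffusers import AudioDiffusionPipeline

      device = "cuda" if torch.cuda.is_available() else "cpu"
      pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)

      output = pipe()
      image = output.images[0]   # mel spectrogram rendered as a PIL image
      audio = output.audios[0]   # waveform decoded from the spectrogram
      ```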
      48d0123f
  2. 29 Nov, 2022 1 commit
  3. 28 Nov, 2022 1 commit
  4. 23 Nov, 2022 3 commits
  5. 04 Nov, 2022 1 commit
  6. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one-letter variables to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory-efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if it is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
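
      A short usage sketch of the enable_xformers_memory_efficient_attention toggle introduced here; the checkpoint id is the usual v1-4 example, and xformers must be installed:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      ).to("cuda")
      pipe.enable_xformers_memory_efficient_attention()

      image = pipe("a photo of an astronaut riding a horse on mars").images[0]
      ```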
      98c42134
  7. 31 Oct, 2022 1 commit
  8. 04 Oct, 2022 1 commit
    • add accelerate to load models with smaller memory footprint (#361) · 4d1cce2f
      Pi Esposito authored
      
      
      * add accelerate to load models with smaller memory footprint
      
      * remove low_cpu_mem_usage as it is redundant
      
      * move accelerate init weights context to modelling utils
      
      * add test to ensure results are the same when loading with accelerate
      
      * add tests to ensure ram usage gets lower when using accelerate
      
      * move accelerate logic to single snippet under modelling utils and remove it from configuration utils
      
      * format code to pass the quality check
      
      * fix imports with isort
      
      * add accelerate to test extra deps
      
      * only import accelerate if device_map is set to auto
      
      * move accelerate availability check to diffusers import utils
      
      * format code
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
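
      A minimal sketch of the low-memory loading path this adds; the model id and "unet" subfolder are the standard v1-4 layout, and accelerate must be installed for device_map="auto":

      ```python
      from diffusers import UNet2DConditionModel

      # With device_map="auto", weights are loaded through accelerate, skipping the
      # full random initialization before the checkpoint is read and keeping peak RAM lower.
      unet = UNet2DConditionModel.from_pretrained(
          "CompVis/stable-diffusion-v1-4", subfolder="unet", device_map="auto"
      )
      ```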
      4d1cce2f
  9. 16 Sep, 2022 1 commit
  10. 08 Sep, 2022 1 commit
  11. 17 Aug, 2022 1 commit