"src/vscode:/vscode.git/clone" did not exist on "31d1f3c8c0c296bbdef9fa1651cfa7995cbed4b1"
  1. 16 Feb, 2023 1 commit
    • [Pipelines] Adds pix2pix zero (#2334) · fd3d5502
      Sayak Paul authored
      * add: support for BLIP generation.
      
      * add: support for editing synthetic images.
      
      * remove unnecessary comments.
      
      * add inits and run make fix-copies.
      
      * version change of diffusers.
      
      * fix: condition for loading the captioner.
      
      * default conditions_input_image to False.
      
      * guidance_amount -> cross_attention_guidance_amount
      
      * fix inputs to check_inputs()
      
      * fix: attribute.
      
      * fix: prepare_attention_mask() call.
      
      * debugging.
      
      * better placement of references.
      
      * remove torch.no_grad() decorations.
      
      * put torch.no_grad() context before the first denoising loop.
      
      * detach() latents before decoding them.
      
      * put decoding in a torch.no_grad() context.
      
      * add reconstructed image for debugging.
      
      * no_grad().
      
      * apply formatting.
      
      * address one-off suggestions from the draft PR.
      
      * back to torch.no_grad() and add more elaborate comments.
      
      * refactor prepare_unet() per Patrick's suggestions.
      
      * more elaborate description for .
      
      * formatting.
      
      * add docstrings to the methods specific to pix2pix zero.
      
      * suspecting a redundant noise prediction.
      
      * needed for gradient computation chain.
      
      * less hacks.
      
      * fix: attention mask handling within the processor.
      
      * remove attention reference map computation.
      
      * fix: cross attn args.
      
      * fix: processor.
      
      * store attention maps.
      
      * fix: attention processor.
      
      * update docs and better treatment to xa args.
      
      * update the final noise computation call.
      
      * change xa args call.
      
      * remove xa args option from the pipeline.
      
      * add: docs.
      
      * first test.
      
      * fix: url call.
      
      * fix: argument call.
      
      * remove image conditioning for now.
      
      * 🚨 add: fast tests.
      
      * explicit placement of the xa attn weights.
      
      * add: slow tests 🐢
      
      * fix: tests.
      
      * edited direction embedding should be on the same device as prompt_embeds.
      
      * debugging message.
      
      * debugging.
      
      * add pix2pix zero pipeline for a non-deterministic test.
      
      * debugging.
      
      * remove debugging message.
      
      * make caption generation _
      
      * address comments (part I).
      
      * address PR comments (part II)
      
      * fix: DDPM test assertion.
      
      * refactor doc.
      
      * address PR comments (part III).
      
      * fix: type annotation for the scheduler.
      
      * apply styling.
      
      * skip_mps and add note on embeddings in the docs.
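      A minimal usage sketch of the pipeline this PR adds, reconstructed only from the bullets above; the checkpoint id and the `get_embeds` helper are assumptions, so treat names and defaults as illustrative rather than the pipeline's exact API.

      ```python
      import torch
      from diffusers import StableDiffusionPix2PixZeroPipeline

      # checkpoint id is an assumption; any Stable Diffusion 1.x checkpoint should do
      pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      ).to("cuda")

      # source/target embeddings define the edit direction (here cat -> dog);
      # get_embeds is assumed from the "edited direction embedding" bullet above
      source_embeds = pipe.get_embeds(["a photo of a cat"])
      target_embeds = pipe.get_embeds(["a photo of a dog"])

      image = pipe(
          "a photo of a cat sitting on a bench",
          source_embeds=source_embeds,
          target_embeds=target_embeds,
          cross_attention_guidance_amount=0.15,  # renamed from guidance_amount in this PR
          num_inference_steps=50,
      ).images[0]
      ```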
  2. 08 Feb, 2023 1 commit
  3. 26 Jan, 2023 1 commit
  4. 25 Jan, 2023 1 commit
  5. 20 Jan, 2023 2 commits
  6. 18 Jan, 2023 1 commit
  7. 17 Jan, 2023 1 commit
  8. 19 Dec, 2022 1 commit
  9. 14 Dec, 2022 1 commit
  10. 09 Dec, 2022 1 commit
  11. 08 Dec, 2022 3 commits
  12. 05 Dec, 2022 1 commit
    • add AudioDiffusionPipeline and LatentAudioDiffusionPipeline #1334 (#1426) · 48d0123f
      Robert Dargavel Smith authored
      
      
      * add AudioDiffusionPipeline and LatentAudioDiffusionPipeline
      
      * add docs to toc
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * fix tests
      
      * Update pr_tests.yml
      
      Fix tests
      
      add colab notebook
      
      [Flax] Fix loading scheduler from subfolder (#1319)
      
      [FLAX] Fix loading scheduler from subfolder
      
      Fix/Enable all schedulers for in-painting (#1331)
      
      * inpaint fix k lms
      
      * onnx as well
      
      * up
      
      Correct path to scheduler (#1322)
      
      * [Examples] Correct path
      
      * up
      
      Avoid nested fix-copies (#1332)
      
      * Avoid nested `# Copied from` statements during `make fix-copies`
      
      * style
      
      Fix img2img speed with LMS-Discrete Scheduler (#896)
      
      Casting `self.sigmas` into a different dtype (that of `original_samples`) is not advisable. In my img2img pipeline this leads to a long running time in the `integrate.quad` call later on; by long I mean more than 10x slower.
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
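      A sketch of the idea behind that fix, with illustrative names rather than the scheduler's actual code: keep the sigma table in float32 and cast only the looked-up value, so nothing downstream is forced through a slower dtype path.

      ```python
      import torch

      sigmas = torch.linspace(0.1, 10.0, 50)  # stands in for self.sigmas, kept in float32
      original_samples = torch.randn(1, 4, 64, 64, dtype=torch.float16)
      noise = torch.randn_like(original_samples)

      # cast the scalar at the point of use instead of casting the whole table
      sigma = sigmas[10].to(original_samples.dtype)
      noisy_samples = original_samples + noise * sigma
      ```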
      
      Fix the order of casts for onnx inpainting (#1338)
      
      Legacy Inpainting Pipeline for Onnx Models (#1237)
      
      * Add legacy inpainting pipeline compatibility for onnx
      
      * remove commented out line
      
      * Add onnx legacy inpainting test
      
      * Fix slow decorators
      
      * pep8 styling
      
      * isort styling
      
      * dummy object
      
      * ordering consistency
      
      * style
      
      * docstring styles
      
      * Refactor common prompt encoding pattern
      
      * Update tests to permanent repository home
      
      * support all available schedulers until ONNX IO binding is available
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * updated styling from PR suggested feedback
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      Jax infer support negative prompt (#1337)
      
      * support negative prompts in sd jax pipeline
      
      * pass batched neg_prompt
      
      * only encode when negative prompt is None
      Co-authored-by: Juan Acevedo <jfacevedo@google.com>
      
      Update README.md: Minor change to Imagic code snippet, missing dir error (#1347)
      
      Minor change to Imagic Readme
      
      Missing dir causes an error when running the example code.
      
      make style
      
      change the sample model (#1352)
      
      * Update alt_diffusion.mdx
      
      * Update alt_diffusion.mdx
      
      Add bit diffusion [WIP] (#971)
      
      * Create bit_diffusion.py
      
      Bit diffusion based on the paper arXiv:2208.04202 (Chen2022AnalogBG)
      
      * adding bit diffusion to new branch
      
      ran tests
      
      * tests
      
      * tests
      
      * tests
      
      * tests
      
      * removed test folders + added to README
      
      * Update README.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * move Mel to module in pipeline construction, make librosa optional
      
      * fix imports
      
      * fix copy & paste error in comment
      
      * fix style
      
      * add missing register_to_config
      
      * fix class docstrings
      
      * fix class docstrings
      
      * tweak docstrings
      
      * tweak docstrings
      
      * update slow test
      
      * put trailing commas back
      
      * respect alphabetical order
      
      * remove LatentAudioDiffusion, make vqvae optional
      
      * move Mel from models back to pipelines :-)
      
      * allow loading of pretrained audiodiffusion models
      
      * fix tests
      
      * fix dummies
      
      * remove reference to latent_audio_diffusion in docs
      
      * unused import
      
      * inherit from SchedulerMixin to make loadable
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
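      A quick usage sketch for the pipeline added above; the model id is an assumption based on the author's Hub handle (teticio), and the output fields are assumptions consistent with the Mel/vqvae design described in the bullets.

      ```python
      import torch
      from diffusers import AudioDiffusionPipeline

      # assumed model id; teticio is the PR author's Hub account
      pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to("cuda")
      output = pipe(generator=torch.Generator(device="cuda").manual_seed(42))

      spectrogram = output.images[0]  # mel spectrogram rendered as a PIL image
      audio = output.audios[0]        # waveform reconstructed from the spectrogram via Mel
      ```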
  13. 29 Nov, 2022 1 commit
  14. 28 Nov, 2022 1 commit
  15. 23 Nov, 2022 3 commits
  16. 04 Nov, 2022 1 commit
  17. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one-letter variables to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if it is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
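      The toggle this PR settles on is a single method call on an existing pipeline; a sketch, with the checkpoint id assumed and xformers installed:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # checkpoint id is an assumption
      pipe = StableDiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      ).to("cuda")

      # the method named in the commit; requires the xformers package to be installed
      pipe.enable_xformers_memory_efficient_attention()

      image = pipe("an astronaut riding a horse on mars").images[0]
      ```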
  18. 31 Oct, 2022 1 commit
  19. 04 Oct, 2022 1 commit
    • add accelerate to load models with smaller memory footprint (#361) · 4d1cce2f
      Pi Esposito authored
      
      
      * add accelerate to load models with smaller memory footprint
      
      * remove low_cpu_mem_usage as it is redundant
      
      * move accelerate init weights context to modeling utils
      
      * add test to ensure results are the same when loading with accelerate
      
      * add tests to ensure ram usage gets lower when using accelerate
      
      * move accelerate logic to a single snippet under modeling utils and remove it from configuration utils
      
      * format code to pass the quality check
      
      * fix imports with isort
      
      * add accelerate to test extra deps
      
      * only import accelerate if device_map is set to auto
      
      * move accelerate availability check to diffusers import utils
      
      * format code
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
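      Per the bullets above, the accelerate path is keyed off device_map; a sketch of what loading looks like (the checkpoint id and subfolder are assumptions):

      ```python
      from diffusers import UNet2DConditionModel

      # device_map="auto" is the trigger: per the commit, accelerate is only imported
      # in this case, and weights are loaded with a smaller peak RAM footprint
      unet = UNet2DConditionModel.from_pretrained(
          "CompVis/stable-diffusion-v1-4",
          subfolder="unet",
          device_map="auto",
      )
      ```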
  20. 16 Sep, 2022 1 commit
  21. 08 Sep, 2022 1 commit
  22. 17 Aug, 2022 1 commit