1. 26 Mar, 2024 1 commit
  2. 25 Mar, 2024 1 commit
  3. 14 Mar, 2024 1 commit
    • [`Tests`] Update a deprecated parameter in test files and fix several typos (#7277) · 5d848ec0
      M. Tolga Cangöz authored
      * Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
      
      * Fix variable name typo and update comments
      
      * Update deprecated `output_type="numpy"` to "np" in test files
      
      * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
      
      * Update test_stable_diffusion_panorama.py
      
      * Update numbers in README.md
      
      * Update get_guidance_scale_embedding method to use timesteps instead of w
      
      * Update number of checkpoints in README.md
      
      * Add type hints and fix var name
      
      * Fix PyTorch's convention for inplace functions
      
      * Fix a typo
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
      
      * Fix typos
      
      * Indent
      
      * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
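
A hedged illustration of the deprecation the commit above addresses: passing `output_type="numpy"` to a pipeline call is deprecated in favor of `"np"`. The checkpoint and prompt below follow the panorama pipeline's documentation and are illustrative, not taken from the PR.

```python
# Minimal sketch, assuming a CUDA device and the documented panorama checkpoint.
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"  # illustrative checkpoint
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# output_type="numpy" is deprecated; "np" returns the images as a NumPy array.
images = pipe("a photo of the dolomites", output_type="np").images
print(images.shape)  # (batch, height, width, 3), float values in [0, 1]
```
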
  4. 08 Feb, 2024 2 commits
  5. 27 Nov, 2023 1 commit
  6. 21 Nov, 2023 1 commit
  7. 10 Nov, 2023 1 commit
  8. 26 Oct, 2023 1 commit
  9. 26 Sep, 2023 1 commit
  10. 25 Sep, 2023 2 commits
  11. 11 Sep, 2023 2 commits
    • Make sure Flax pipelines can be loaded into PyTorch (#4971) · 6bbee104
      Patrick von Platen authored
      * Make sure Flax pipelines can be loaded into PyTorch
      
      * add test
      
      * Update src/diffusers/pipelines/pipeline_utils.py
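
A hedged sketch of the loading path this commit tests: `from_flax=True` lets a Flax-only checkpoint be materialized as a PyTorch pipeline. The repository id below is a hypothetical placeholder.

```python
# Minimal sketch, assuming a Hub repo that only ships Flax weights.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some-org/flax-only-checkpoint",  # hypothetical Flax-only repo id
    from_flax=True,                   # convert the Flax params to PyTorch tensors on load
)
pipe.save_pretrained("./pt-checkpoint")  # the re-saved weights are now PyTorch-native
```
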
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistake import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
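
The structure this PR introduces is the `_import_structure`/`_LazyModule` pattern borrowed from `transformers`: submodules are only imported when one of their attributes is first accessed. A condensed sketch of what a lazily imported `__init__.py` looks like after the change; the `pipeline_foo`/`FooPipeline` names (and the relative import depth) are placeholders.

```python
# Sketch of the lazy-import pattern; module and class names are placeholders.
import sys
from typing import TYPE_CHECKING

from ..utils import _LazyModule  # exposed by diffusers.utils after this PR

# Maps submodule name -> public objects it defines; nothing is imported yet.
_import_structure = {"pipeline_foo": ["FooPipeline"]}

if TYPE_CHECKING:
    # Static type checkers and IDEs still see the real imports.
    from .pipeline_foo import FooPipeline
else:
    # At runtime, replace this module with a proxy that performs the actual
    # import of a submodule only when one of its attributes is requested.
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```
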
  12. 24 Aug, 2023 1 commit
  13. 17 Aug, 2023 2 commits
  14. 11 Aug, 2023 1 commit
  15. 28 Jul, 2023 1 commit
  16. 27 Jul, 2023 1 commit
  17. 10 Jul, 2023 2 commits
  18. 08 Jun, 2023 1 commit
  19. 02 Jun, 2023 2 commits
  20. 30 May, 2023 1 commit
  21. 23 May, 2023 3 commits
    • Run `torch.compile` tests in separate subprocesses (#3503) · bde2cb5d
      Pedro Cuenca authored
      * Run ControlNet compile test in a separate subprocess
      
      `torch.compile()` spawns several subprocesses and the GPU memory used
      was not reclaimed after the test ran. This approach was taken from
      `transformers`.
      
      * Style
      
      * Prepare a couple more compile tests to run in subprocess.
      
      * Use require_torch_2 decorator.
      
      * Test inpaint_compile in subprocess.
      
      * Run img2img compile test in subprocess.
      
      * Run stable diffusion compile test in subprocess.
      
      * style
      
      * Temporarily trigger on pr to test.
      
      * Revert "Temporarily trigger on pr to test."
      
      This reverts commit 82d76868ddf9cc634a9f14b2b0aef1d5433cd750.
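
The approach described above (taken from `transformers`) isolates each compile test in a short-lived child process so that the GPU memory held by `torch.compile()` is released when the process exits. A generic sketch of the idea, not the exact helper used in the repository:

```python
# Generic sketch: run a torch.compile smoke test in a spawned subprocess so
# CUDA memory is reclaimed when the child exits. Assumes torch >= 2.0 and a GPU.
import multiprocessing
import unittest

import torch


def _compile_smoke_test(queue):
    try:
        model = torch.nn.Linear(8, 8).to("cuda")
        compiled = torch.compile(model)
        out = compiled(torch.randn(4, 8, device="cuda"))
        queue.put(out.shape == (4, 8))
    except Exception:
        queue.put(False)


class CompileInSubprocessTest(unittest.TestCase):
    def test_compile_runs(self):
        ctx = multiprocessing.get_context("spawn")  # fresh CUDA context per test
        queue = ctx.Queue()
        proc = ctx.Process(target=_compile_smoke_test, args=(queue,))
        proc.start()
        proc.join()
        self.assertTrue(queue.get())
```
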
    • Make sure Diffusers works even if Hub is down (#3447) · 9e2734a7
      Patrick von Platen authored
      * Make sure Diffusers works even if Hub is down
      
      * Make sure hub down is well tested
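
The user-facing fallback being tested is, roughly, that already-cached files keep working when the Hub is unreachable. A hedged sketch of the explicit offline pattern; the model id is illustrative and `local_files_only` is an existing `from_pretrained` flag.

```python
# Sketch: load strictly from the local cache, as one would when the Hub is down.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative; must already be cached
    local_files_only=True,             # never hit the network; fail fast if not cached
)
```
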
    • Allow custom pipeline loading (#3504) · d4197bf4
      Patrick von Platen authored
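
A hedged example of the loading path this PR touches: `from_pretrained` with a `custom_pipeline` argument, which resolves the pipeline class from a community module on the Hub (or a local file). The ids below are illustrative.

```python
# Sketch of custom pipeline loading; both ids are illustrative.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",  # a community pipeline module
)
```
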
  22. 22 May, 2023 1 commit
    • Refactor full determinism (#3485) · 51843fd7
      Patrick von Platen authored
      * up
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix more
      
      * fix more
      
      * Check it
      
      * Remove 16:8
      
      * fix more
      
      * fix more
      
      * fix more
      
      * up
      
      * up
      
      * Test only stable diffusion
      
      * Test only two files
      
      * up
      
      * Try out spinning up processes that can be killed
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * up
  23. 11 May, 2023 1 commit
    • [Tests] better determinism (#3374) · 90f5f3c4
      Sayak Paul authored
      * enable deterministic pytorch and cuda operations.
      
      * disable manual seeding.
      
      * make style && make quality for unet_2d tests.
      
      * enable determinism for the unet2dconditional model.
      
      * add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
      
      * relax tolerance (very weird issue, though).
      
      * revert to torch manual_seed() where needed.
      
      * relax more tolerance.
      
      * better placement of the cuda variable and relax more tolerance.
      
      * enable determinism for 3d condition model.
      
      * relax tolerance.
      
      * add: determinism to alt_diffusion.
      
      * relax tolerance for alt diffusion.
      
      * dance diffusion.
      
      * dance diffusion is flaky.
      
      * test_dict_tuple_outputs_equivalent edit.
      
      * fix two more tests.
      
      * fix more ddim tests.
      
      * fix: argument.
      
      * change to diff in place of difference.
      
      * fix: test_save_load call.
      
      * test_save_load_float16 call.
      
      * fix: expected_max_diff
      
      * fix: paint by example.
      
      * relax tolerance.
      
      * add determinism to 1d unet model.
      
      * torch 2.0 regressions seem to be brutal
      
      * determinism to vae.
      
      * add reason to skipping.
      
      * up tolerance.
      
      * determinism to vq.
      
      * determinism to cuda.
      
      * determinism to the generic test pipeline file.
      
      * refactor general pipelines testing a bit.
      
      * determinism to alt diffusion i2i
      
      * up tolerance for alt diff i2i and audio diff
      
      * up tolerance.
      
      * determinism to audioldm
      
      * increase tolerance for audioldm lms.
      
      * increase tolerance for paint by example.
      
      * increase tolerance for repaint.
      
      * determinism to cycle diffusion and sd 1.
      
      * relax tol for cycle diffusion 🚲
      
      * relax tol for sd 1.0
      
      * relax tol for controlnet.
      
      * determinism to img var.
      
      * relax tol for img variation.
      
      * tolerance to i2i sd
      
      * make style
      
      * determinism to inpaint.
      
      * relax tolerance for inpainting.
      
      * determinism for inpainting legacy
      
      * relax tolerance.
      
      * determinism to instruct pix2pix
      
      * determinism to model editing.
      
      * model editing tolerance.
      
      * panorama determinism
      
      * determinism to pix2pix zero.
      
      * determinism to sag.
      
      * sd 2. determinism
      
      * sd. tolerance
      
      * disallow tf32 matmul.
      
      * relax tolerance is all you need.
      
      * make style and determinism to sd 2 depth
      
      * relax tolerance for depth.
      
      * tolerance to diffedit.
      
      * tolerance to sd 2 inpaint.
      
      * up tolerance.
      
      * determinism in upscaling.
      
      * tolerance in upscaler.
      
      * more tolerance relaxation.
      
      * determinism to v pred.
      
      * up tol for v_pred
      
      * unclip determinism
      
      * determinism to unclip img2img
      
      * determinism to text to video.
      
      * determinism to last set of tests
      
      * up tol.
      
      * vq cumsum doesn't have a deterministic kernel
      
      * relax tol
      
      * relax tol
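
Several of the bullets above (deterministic PyTorch/CUDA operations, `CUBLAS_WORKSPACE_CONFIG`, disallowing TF32 matmul) correspond to standard PyTorch switches rather than diffusers-specific APIs. A hedged sketch of the kind of setup such tests rely on:

```python
# Sketch of the determinism knobs named in the commit message; these are
# standard PyTorch/CUDA settings, set before any CUDA work happens.
import os

import torch

# Required by cuBLAS for deterministic GEMMs on CUDA >= 10.2.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# Raise an error when an op has no deterministic implementation.
torch.use_deterministic_algorithms(True)

# Disallow TF32 matmuls and cuDNN autotuning so results do not drift.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
torch.backends.cudnn.benchmark = False
```
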
  24. 08 May, 2023 1 commit
    • Batched load of textual inversions (#3277) · 3d8b3d7c
      pdoane authored
      
      
      * Batched load of textual inversions
      
      - Only call resize_token_embeddings once per batch as it is the most expensive operation
      - Allow pretrained_model_name_or_path and token to be an optional list
      - Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
      - Add comment that single files (e.g. .pt/.safetensors) are supported
      - Add comment for token parameter
      - Convert token override log message from warning to info
      
      * Update src/diffusers/loaders.py
      
      Check for duplicate tokens
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update condition for None tokens
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
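
After this change, `load_textual_inversion` accepts lists so the expensive `resize_token_embeddings` call runs once per batch instead of once per embedding. A hedged usage sketch; the file paths and tokens are placeholders.

```python
# Sketch of batched textual-inversion loading; paths and tokens are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Passing lists loads all embeddings in one batch, resizing the token
# embedding matrix a single time rather than once per file.
pipe.load_textual_inversion(
    ["./embeddings/style_a.safetensors", "./embeddings/style_b.pt"],
    token=["<style-a>", "<style-b>"],
)
```
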
  25. 25 Apr, 2023 2 commits
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
    • Fix issue in maybe_convert_prompt (#3188) · 0d196f9f
      pdoane authored
      When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens.  Adding a space for the padding tokens fixes this.
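
For context on the fix: for a multi-vector embedding, `maybe_convert_prompt` rewrites the trigger token into its sub-tokens (`token`, `token_1`, `token_2`, ...). A toy re-implementation of that expansion, with placeholder token names, shows why the sub-tokens need to be space-separated when the trigger word carries no special symbols such as `<...>`:

```python
# Toy illustration of the multi-vector prompt expansion; not the real helper.
def expand_multi_vector_token(prompt: str, token: str, num_vectors: int) -> str:
    # Join the sub-tokens with spaces so the tokenizer splits them again,
    # even when the trigger word is a plain word without <...> markers.
    replacement = " ".join([token] + [f"{token}_{i}" for i in range(1, num_vectors)])
    return prompt.replace(token, replacement)

print(expand_multi_vector_token("a photo of myconcept on a beach", "myconcept", 3))
# -> "a photo of myconcept myconcept_1 myconcept_2 on a beach"
```
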
  26. 13 Apr, 2023 1 commit
  27. 12 Apr, 2023 1 commit
  28. 11 Apr, 2023 1 commit
    • Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * up
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
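
The thrust of this commit is that model hyperparameters should be read through the frozen `.config` rather than as direct attributes (for example `unet.config.in_channels` instead of `unet.in_channels`). A hedged sketch; the checkpoint id is illustrative.

```python
# Sketch of config-based attribute access; the checkpoint id is illustrative.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Hyperparameters live on the frozen config, not as ad-hoc model attributes.
print(unet.config.in_channels, unet.config.sample_size)
```
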
  29. 05 Apr, 2023 1 commit
  30. 30 Mar, 2023 1 commit
  31. 28 Mar, 2023 1 commit