1. 27 Sep, 2023 1 commit
  2. 25 Sep, 2023 1 commit
  3. 22 Sep, 2023 1 commit
• SDXL flax (#4254) · 3651b14c
      Pedro Cuenca authored
      
      
* support transformer_layers_per_block in Flax UNet
      
      * add support for text_time additional embeddings to Flax UNet
      
      * rename attention layers for VAE
      
      * add shape asserts when renaming attention layers
      
      * transpose VAE attention layers
      
      * add pipeline flax SDXL code [WIP]
      
      * continue add pipeline flax SDXL code [WIP]
      
      * cleanup
      
      * Working on JIT support
      
      Fixed prompt embedding shapes so they work in parallel mode. Assuming we
      always have both text encoders for now, for simplicity.
      
      * Fixing embeddings (untested)
      
      * Remove spurious line
      
      * Shard guidance_scale when jitting.
      
      * Decode images
      
      * Fix sharding
      
      * style
      
      * Refiner UNet can be loaded.
      
      * Refiner / img2img pipeline
      
      * Allow latent outputs from base and latent inputs in refiner
      
This makes it possible to chain base + refiner without using the VAE decoder in the base model or the VAE encoder in the refiner, skipping conversions to/from PIL and avoiding TPU <-> CPU memory copies (see the sketch after this entry).
      
      * Adapt to FlaxCLIPTextModelOutput
      
      * Update Flax XL pipeline to FlaxCLIPTextModelOutput
      
      * make fix-copies
      
      * make style
      
      * add euler scheduler
      
      * Fix import
      
      * Fix copies, comment unused code.
      
      * Fix SDXL Flax imports
      
      * Fix euler discrete begin
      
      * improve init import
      
      * finish
      
      * put discrete euler in init
      
      * fix flax euler
      
      * Fix more
      
      * make style
      
      * correct init
      
      * correct init
      
      * Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline
      
      * correct pipelines
      
      * finish
      
      ---------
Co-authored-by: Martin Müller <martin.muller.me@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
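The latent hand-off is the interesting design point in this entry: the base pipeline can emit latents and the refiner can consume them, so nothing round-trips through the VAE or PIL. A minimal sketch of that flow, assuming the `prepare_inputs` / `output_type="latent"` conventions of the other Flax pipelines carry over here; the refiner class was temporarily removed at the end of this PR, so its call is illustrative only.

```python
import jax
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionXLPipeline

# Load the base pipeline (real model id; dtype choice is illustrative).
base, base_params = FlaxStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", dtype=jnp.bfloat16
)

prompt_ids = base.prepare_inputs(["a photo of an astronaut riding a horse"])
rng = jax.random.PRNGKey(0)

# Ask the base model for latents instead of decoded images, so the base
# VAE decoder is never run (output_type="latent" is assumed from this PR).
latents = base(prompt_ids, base_params, rng, output_type="latent").images

# Hand the latents straight to the refiner: no VAE encode, no PIL, no
# TPU <-> CPU copies. The refiner pipeline was temporarily removed in
# this PR, so the call below is purely illustrative.
# refined = refiner(prompt_ids, refiner_params, rng, latents=latents)
```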
  4. 15 Aug, 2023 1 commit
  5. 21 Jul, 2023 1 commit
• [docs] Clean up pipeline apis (#3905) · a69754bb
      Steven Liu authored
      * start with stable diffusion
      
      * fix
      
      * finish stable diffusion pipelines
      
      * fix path to pipeline output
      
      * fix flax paths
      
      * fix copies
      
      * add up to score sde ve
      
      * finish first pass of pipelines
      
      * fix copies
      
      * second review
      
      * align doc titles
      
      * more review fixes
      
      * final review
  6. 21 Jun, 2023 1 commit
  7. 12 Apr, 2023 1 commit
• Flax memory efficient attention (#2889) · dc277501
      Pedro Cuenca authored
      
      
      * add use_memory_efficient params placeholder
      
      * test
      
      * add memory efficient attention jax
      
      * add memory efficient attention jax
      
      * newline
      
      * forgot dot
      
      * Rename use_memory_efficient
      
      * Keep dtype last.
      
      * Actually use key_chunk_size
      
      * Rename symbol
      
      * Apply style
      
      * Rename use_memory_efficient
      
      * Keep dtype last
      
      * Pass `use_memory_efficient_attention` in `from_pretrained`
      
* Move JAX memory efficient attention to attention_flax (see the chunked-attention sketch after this entry).
      
      * Simple test.
      
      * style
      
      ---------
Co-authored-by: muhammad_hanif <muhammad_hanif@sofcograha.co.id>
Co-authored-by: MuhHanif <48muhhanif@gmail.com>
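The core trick behind the memory-efficient attention added here (Rabe & Staats-style chunking) is to process keys/values in chunks and merge partial softmax results, so the full [q_len, kv_len] score matrix is never materialized. A minimal single-head sketch, assuming 2-D inputs and a kv length divisible by `key_chunk_size`; the actual `attention_flax` implementation differs in details:

```python
import jax
import jax.numpy as jnp

def chunked_attention(query, key, value, key_chunk_size=4096):
    """Exact softmax attention without materializing the full score matrix."""
    q_len, d = query.shape
    query = query / jnp.sqrt(d)

    def attend_to_chunk(chunk_start):
        k = jax.lax.dynamic_slice_in_dim(key, chunk_start, key_chunk_size, axis=0)
        v = jax.lax.dynamic_slice_in_dim(value, chunk_start, key_chunk_size, axis=0)
        scores = query @ k.T                            # [q_len, chunk]
        chunk_max = scores.max(axis=-1, keepdims=True)  # per-chunk max for stability
        exp_scores = jnp.exp(scores - chunk_max)
        return exp_scores @ v, exp_scores.sum(axis=-1), chunk_max.squeeze(-1)

    # Assumes key.shape[0] is a multiple of key_chunk_size.
    chunk_starts = jnp.arange(0, key.shape[0], key_chunk_size)
    chunk_out, chunk_sums, chunk_maxes = jax.lax.map(attend_to_chunk, chunk_starts)

    # Merge partial results: rescale each chunk by exp(chunk_max - global_max)
    # so the pieces combine into the exact global softmax.
    global_max = chunk_maxes.max(axis=0)
    scale = jnp.exp(chunk_maxes - global_max)           # [n_chunks, q_len]
    out = (chunk_out * scale[..., None]).sum(axis=0)
    denom = (chunk_sums * scale).sum(axis=0)
    return out / denom[..., None]
```

Per the last commits above, users opt in by passing `use_memory_efficient_attention=True` to `from_pretrained` on the Flax models.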
  8. 27 Mar, 2023 1 commit
  9. 23 Mar, 2023 1 commit
  10. 01 Mar, 2023 1 commit
  11. 07 Feb, 2023 1 commit
  12. 30 Dec, 2022 1 commit
  13. 29 Dec, 2022 1 commit
• Flax: Fix img2img and align with other pipeline (#1824) · ab0e92fd
      Simon Kirsten authored
      
      
      * Flax: Add components function
      
      * Flax: Fix img2img and align with other pipeline
      
      * Flax: Fix PRNGKey type
      
* Refactor strength to start_timestep (see the sketch after this entry)
      
      * Fix preprocess images
      
* Fix processed_images dimensions
      
      * latents.shape -> latents_shape
      
      * Fix typo
      
      * Remove "static" comment
      
      * Remove unnecessary optional types in _generate
      
      * Apply doc-builder code style.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
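The strength-to-timestep refactor is the notable API point here: instead of threading `strength` through `_generate`, the pipeline computes which timestep index to start denoising from. A hedged sketch of that mapping, following the usual img2img convention; the Flax pipeline's exact clamping may differ:

```python
def get_start_timestep(num_inference_steps: int, strength: float) -> int:
    # strength=1.0 -> start at step 0 (mostly noise, input image barely kept);
    # strength=0.0 -> start past the last step (input returned nearly untouched).
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return max(num_inference_steps - init_timestep, 0)
```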
  14. 14 Dec, 2022 1 commit
  15. 07 Dec, 2022 1 commit
  16. 28 Nov, 2022 1 commit
  17. 22 Nov, 2022 1 commit
  18. 15 Nov, 2022 1 commit
  19. 10 Nov, 2022 1 commit
  20. 09 Nov, 2022 1 commit
  21. 03 Nov, 2022 1 commit
  22. 02 Nov, 2022 1 commit
  23. 31 Oct, 2022 2 commits
  24. 27 Oct, 2022 1 commit
  25. 24 Oct, 2022 1 commit
  26. 13 Oct, 2022 2 commits
  27. 05 Oct, 2022 1 commit
  28. 03 Oct, 2022 1 commit
• Fix import with Flax but without PyTorch (#688) · 688031c5
      Pedro Cuenca authored
      * Don't use `load_state_dict` if torch is not installed.
      
      * Define `SchedulerOutput` to use torch or flax arrays.
      
      * Don't import LMSDiscreteScheduler without torch.
      
      * Create distinct FlaxSchedulerOutput.
      
      * Additional changes required for FlaxSchedulerMixin
      
* Do not import torch pipelines in Flax (guarded-import sketch after this entry).
      
      * Revert "Define `SchedulerOutput` to use torch or flax arrays."
      
      This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
      
      * Prefix Flax scheduler outputs for consistency.
      
      * make style
      
      * FlaxSchedulerOutput is now a dataclass.
      
      * Don't use f-string without placeholders.
      
      * Add blank line.
      
      * Style (docstrings)
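The pattern this PR establishes: torch-only symbols are imported behind availability checks, so `import diffusers` works in a Flax-only environment, with Flax schedulers getting their own "Flax"-prefixed output classes. A sketch using the real `diffusers.utils` helpers; the specific scheduler names are just examples:

```python
from diffusers.utils import is_flax_available, is_torch_available

if is_torch_available():
    # torch-backed classes are only pulled in when torch is installed
    from diffusers import DDPMScheduler

if is_flax_available():
    # Flax variants live behind the "Flax" prefix, with a distinct
    # FlaxSchedulerOutput dataclass instead of the torch-based one
    from diffusers import FlaxDDPMScheduler
```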
  29. 24 Sep, 2022 1 commit
  30. 21 Sep, 2022 2 commits
• Return Flax scheduler state (#601) · a9fdb3de
      Pedro Cuenca authored
      * Optionally return state in from_config.
      
      Useful for Flax schedulers.
      
      * has_state is now a property, make check more strict.
      
I don't check that the class is `SchedulerMixin`, to prevent circular
dependencies. It should be enough that the class name starts with "Flax",
the object declares `has_state`, and `create_state` exists too.
      
* Use state in pipeline from_pretrained (see the sketch after this entry).
      
      * Make style
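With this change, Flax schedulers come back from `from_pretrained` as a `(scheduler, state)` tuple, and the state is threaded through every call while the scheduler object itself stays stateless. A sketch of the resulting loop shape, mirroring the pattern the Flax pipelines adopted, with dummy latents and a stand-in for the UNet prediction; exact signatures may have shifted since:

```python
import jax.numpy as jnp
from diffusers import FlaxPNDMScheduler

# Flax schedulers return (scheduler, state) instead of a bare scheduler.
scheduler, scheduler_state = FlaxPNDMScheduler.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)

latents = jnp.zeros((1, 4, 64, 64))  # dummy latents, just for the shape
scheduler_state = scheduler.set_timesteps(
    scheduler_state, num_inference_steps=50, shape=latents.shape
)

for t in scheduler_state.timesteps:
    noise_pred = jnp.zeros_like(latents)  # stand-in for the UNet output
    latents, scheduler_state = scheduler.step(
        scheduler_state, noise_pred, t, latents
    ).to_tuple()
```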
• Fix params replication when using the dummy checker (#602) · fb03aad8
      Pedro Cuenca authored
Fix params replication when using the dummy checker.
  31. 20 Sep, 2022 2 commits
  32. 16 Sep, 2022 2 commits
  33. 08 Sep, 2022 3 commits
• [ONNX] Stable Diffusion exporter and pipeline (#399) · 8d9c4a53
      Anton Lozhkov authored
      
      
      * initial export and design
      
      * update imports
      
* custom provider, import fixes
      
      * Update src/diffusers/onnx_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/onnx_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * remove push_to_hub
      
      * Update src/diffusers/onnx_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * remove torch_device
      
      * numpify the rest of the pipeline
      
      * torchify the safety checker
      
      * revert tensor
      
      * Code review suggestions + quality
      
      * fix tests
      
* fix provider, add an end-to-end test (usage sketch after this entry)
      
      * style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
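For context, the end-to-end usage this exporter enables, as a hedged sketch: the class name is the one introduced in this PR (later renamed `OnnxStableDiffusionPipeline`), and the `onnx` revision of the CompVis repo hosted the exported weights at the time.

```python
from diffusers import StableDiffusionOnnxPipeline

# Runs Stable Diffusion through onnxruntime instead of torch.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",  # the execution "provider" fixed above
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```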
• [Docs] DiffusionPipeline (#418) · e7457b37
      Patrick von Platen authored
      * Start
      
      * up
      
      * up
      
      * finish
• Inference support for `mps` device (#355) · 5dda1735
      Pedro Cuenca authored
      * Initial support for mps in Stable Diffusion pipeline.
      
      * Initial "warmup" implementation when using mps.
      
      * Make some deterministic tests pass with mps.
      
      * Disable training tests when using mps.
      
      * SD: generate latents in CPU then move to device.
      
      This is especially important when using the mps device, because
      generators are not supported there. See for example
      https://github.com/pytorch/pytorch/issues/84288.
      
      In addition, the other pipelines seem to use the same approach: generate
      the random samples then move to the appropriate device.
      
After this change, generating an image on mps produces the same result
as when using the CPU, if the same seed is used (see the sketch after this entry).
      
      * Remove prints.
      
      * Pass AutoencoderKL test_output_pretrained with mps.
      
      Sampling from `posterior` must be done in CPU.
      
      * Style
      
      * Do not use torch.long for log op in mps device.
      
      * Perform incompatible padding ops in CPU.
      
      UNet tests now pass.
      See https://github.com/pytorch/pytorch/issues/84535
      
      
      
      * Style: fix import order.
      
      * Remove unused symbols.
      
      * Remove MPSWarmupMixin, do not apply automatically.
      
      We do apply warmup in the tests, but not during normal use.
      This adopts some PR suggestions by @patrickvonplaten.
      
      * Add comment for mps fallback to CPU step.
      
      * Add README_mps.md for mps installation and use.
      
      * Apply `black` to modified files.
      
      * Restrict README_mps to SD, show measures in table.
      
      * Make PNDM indexing compatible with mps.
      
      Addresses #239.
      
      * Do not use float64 when using LDMScheduler.
      
      Fixes #358.
      
      * Fix typo identified by @patil-suraj
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Adapt example to new output style.
      
      * Restore 1:1 results reproducibility with CompVis.
      
      However, mps latents need to be generated in CPU because generators
      don't work in the mps device.
      
      * Move PyTorch nightly to requirements.
      
* Adapt `test_scheduler_outputs_equivalence` to MPS.
      
      * mps: skip training tests instead of ignoring silently.
      
      * Make VQModel tests pass on mps.
      
      * mps ddim tests: warmup, increase tolerance.
      
      * ScoreSdeVeScheduler indexing made mps compatible.
      
      * Make ldm pipeline tests pass using warmup.
      
      * Style
      
      * Simplify casting as suggested in PR.
      
      * Add Known Issues to readme.
      
      * `isort` import order.
      
      * Remove _mps_warmup helpers from ModelMixin.
      
      And just make changes to the tests.
      
      * Skip tests using unittest decorator for consistency.
      
      * Remove temporary var.
      
      * Remove spurious blank space.
      
      * Remove unused symbol.
      
      * Remove README_mps.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
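The CPU-latents trick called out above is worth seeing concretely: torch generators were not supported on mps at the time (pytorch/pytorch#84288), so latents are sampled on the CPU with a seeded generator and only then moved to the device, keeping results reproducible across cpu and mps. A minimal sketch:

```python
import torch

shape = (1, 4, 64, 64)  # example Stable Diffusion latent shape

# Sample on CPU with a seeded generator (mps generators were unsupported),
# then move the latents to the device. Same seed -> same image as on CPU.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn(shape, generator=generator)
latents = latents.to("mps" if torch.backends.mps.is_available() else "cpu")
```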