1. 25 Oct, 2022 4 commits
  2. 24 Oct, 2022 2 commits
  3. 21 Oct, 2022 1 commit
    • Support LMSDiscreteScheduler in LDMPipeline (#891) · 31af4d17
      mkshing authored
      
      
      * Support LMSDiscreteScheduler in LDMPipeline
      
      This is a small change to support all schedulers such as LMSDiscreteScheduler in LDMPipeline.
      
      What's changed
      -------
      * Add the `scale_model_input` function before `step` to ensure correct denoising (L77)
      
      * Add "scale the initial noise by the standard deviation required by the scheduler"
      
      * run `make style`
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
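The two changes in this commit can be sketched as follows. The toy class below is an illustration of the two scheduler hooks involved (`init_noise_sigma` and `scale_model_input`), not the real diffusers `LMSDiscreteScheduler`; the sigma values are made up.

```python
# Toy stand-in for the scheduler hooks the commit wires into LDMPipeline.
import math

class ToySigmaScheduler:
    def __init__(self, sigmas):
        self.sigmas = sigmas
        # k-diffusion-style schedulers expect the initial latents scaled by
        # the largest sigma ("scale the initial noise by the standard
        # deviation required by the scheduler").
        self.init_noise_sigma = max(sigmas)

    def scale_model_input(self, sample, step_index):
        # Normalize the model input so the UNet sees roughly unit variance.
        sigma = self.sigmas[step_index]
        return sample / math.sqrt(sigma ** 2 + 1)

sched = ToySigmaScheduler([14.6, 7.0, 1.0])
latents = 1.0 * sched.init_noise_sigma             # before the denoising loop
model_input = sched.scale_model_input(latents, 0)  # before each `step` call
```

For a scheduler like DDIM, `init_noise_sigma` is 1.0 and `scale_model_input` is the identity, which is why the pipeline previously worked without these calls; adding them is what makes the loop scheduler-agnostic.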
  4. 19 Oct, 2022 5 commits
  5. 18 Oct, 2022 2 commits
  6. 14 Oct, 2022 1 commit
  7. 13 Oct, 2022 7 commits
  8. 12 Oct, 2022 1 commit
  9. 11 Oct, 2022 2 commits
  10. 10 Oct, 2022 1 commit
    • [Low CPU memory] + device map (#772) · fab17528
      Patrick von Platen authored
      
      
      * add accelerate to load models with smaller memory footprint
      
      * remove low_cpu_mem_usage as it is redundant
      
      * move accelerate init weights context to modeling utils
      
      * add test to ensure results are the same when loading with accelerate
      
      * add tests to ensure ram usage gets lower when using accelerate
      
      * move accelerate logic to a single snippet under modeling utils and remove it from configuration utils
      
      * format code to pass quality check
      
      * fix imports with isort
      
      * add accelerate to test extra deps
      
      * only import accelerate if device_map is set to auto
      
      * move accelerate availability check to diffusers import utils
      
      * format code
      
      * add device map to pipeline abstraction
      
      * lint it to pass PR quality check
      
      * fix class check to use accelerate when using diffusers ModelMixin subclasses
      
      * use low_cpu_mem_usage in transformers if device_map is not available
      
      * NoModuleLayer
      
      * comment out tests
      
      * up
      
      * uP
      
      * finish
      
      * Update src/diffusers/pipelines/stable_diffusion/safety_checker.py
      
      * finish
      
      * uP
      
      * make style
      Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
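The dispatch rule this commit describes ("only import accelerate if device_map is set to auto") can be sketched as a small decision helper. The function name and return values here are hypothetical illustrations, not the diffusers API:

```python
def choose_loading_path(device_map, accelerate_available):
    """Decide how a from_pretrained-style loader should materialize weights.

    Hypothetical helper: mirrors the commit's rule that `accelerate` is
    only required when device_map is set to "auto".
    """
    if device_map == "auto":
        if not accelerate_available:
            # Fail loudly instead of silently falling back to a full load.
            raise ImportError(
                "Loading with device_map='auto' requires `pip install accelerate`."
            )
        # accelerate path: init empty (meta) weights, then load tensors
        # directly onto their target devices, skipping the full-CPU copy.
        return "accelerate_low_mem"
    # Default path: full state dict materialized in CPU RAM first.
    return "standard"
```

The memory win comes from the accelerate path never holding both an initialized model and a full state dict in RAM at once, which is what the "ram usage gets lower" test above checks.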
  11. 07 Oct, 2022 1 commit
    • [img2img, inpainting] fix fp16 inference (#769) · 92d70863
      Suraj Patil authored
      * handle dtype in vae and image2image pipeline
      
      * fix inpaint in fp16
      
      * dtype should be handled in add_noise
      
      * style
      
      * address review comments
      
      * add simple fast tests to check fp16
      
      * fix test name
      
      * put mask in fp16
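The core of the fix ("dtype should be handled in add_noise") is that the scheduler casts its own constants to the sample's dtype instead of the pipeline pre-casting everything. A toy sketch of the pattern; `ToyTensor` is a stand-in for a torch tensor, not a real diffusers type:

```python
from dataclasses import dataclass

@dataclass
class ToyTensor:
    value: float
    dtype: str

    def to(self, dtype):
        return ToyTensor(self.value, dtype)

def add_noise(sample, noise, sigma):
    # Cast the schedule constant to the sample's dtype (fp16-safe), rather
    # than requiring every caller to keep dtypes aligned by hand.
    sigma = sigma.to(sample.dtype)
    return ToyTensor(sample.value + noise.value * sigma.value, sample.dtype)

noisy = add_noise(ToyTensor(1.0, "float16"),
                  ToyTensor(0.5, "float16"),
                  ToyTensor(2.0, "float32"))  # fp32 constant meets fp16 latents
```

Without the cast, mixing an fp32 sigma table with fp16 latents is exactly the kind of dtype mismatch that made fp16 img2img and inpainting fail.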
  12. 06 Oct, 2022 3 commits
  13. 05 Oct, 2022 6 commits
  14. 04 Oct, 2022 1 commit
  15. 03 Oct, 2022 3 commits
    • [Utils] Add deprecate function and move testing_utils under utils (#659) · f1484b81
      Patrick von Platen authored
      * [Utils] Add deprecate function
      
      * up
      
      * up
      
      * uP
      
      * up
      
      * up
      
      * up
      
      * up
      
      * uP
      
      * up
      
      * fix
      
      * up
      
      * move to deprecation utils file
      
      * fix
      
      * fix
      
      * fix more
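A minimal sketch of what such a `deprecate` helper can look like; the signature, argument names, and warning class here are assumptions for illustration, and the real diffusers helper may differ:

```python
import warnings

def deprecate(name, removed_in, message):
    """Emit a standardized deprecation warning for `name`.

    Hypothetical signature: centralizing this in one utility keeps the
    wording and warning category consistent across the codebase.
    """
    warnings.warn(
        f"`{name}` is deprecated and will be removed in version "
        f"{removed_in}. {message}",
        FutureWarning,
        stacklevel=2,  # point the warning at the caller, not this helper
    )

# Example call site (argument values are made up):
deprecate("some_old_kwarg", "0.10.0", "Pass the new keyword instead.")
```

Moving this into a dedicated deprecation utils file, as the commit does, means every deprecated argument goes through one code path that can later be grepped and removed wholesale.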
    • Patrick von Platen authored · b35bac4d
    • Fix import with Flax but without PyTorch (#688) · 688031c5
      Pedro Cuenca authored
      * Don't use `load_state_dict` if torch is not installed.
      
      * Define `SchedulerOutput` to use torch or flax arrays.
      
      * Don't import LMSDiscreteScheduler without torch.
      
      * Create distinct FlaxSchedulerOutput.
      
      * Additional changes required for FlaxSchedulerMixin
      
      * Do not import torch pipelines in Flax.
      
      * Revert "Define `SchedulerOutput` to use torch or flax arrays."
      
      This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
      
      * Prefix Flax scheduler outputs for consistency.
      
      * make style
      
      * FlaxSchedulerOutput is now a dataclass.
      
      * Don't use f-string without placeholders.
      
      * Add blank line.
      
      * Style (docstrings)
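The fixes in this PR all follow one pattern: probe whether an optional backend is installed before defining any symbols that depend on it. A sketch of that guard; the helper name is hypothetical (diffusers exposes similar `is_torch_available`-style checks in its import utils):

```python
import importlib.util

def is_backend_available(package: str) -> bool:
    # find_spec probes the import machinery without importing the package,
    # so a Flax-only install never pays for (or crashes on) a torch import.
    return importlib.util.find_spec(package) is not None

# Only define torch-backed symbols when torch can actually be imported;
# otherwise fall back to the Flax-only definitions.
if is_backend_available("torch"):
    pass  # e.g. define load_state_dict and torch-based SchedulerOutput here
else:
    pass  # e.g. define FlaxSchedulerOutput over jax/numpy arrays here
```

Guarding at definition time, rather than wrapping each call in try/except, is what lets `import diffusers` itself succeed on a Flax-only environment.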