1. 16 Nov, 2022 1 commit
  2. 15 Nov, 2022 2 commits
  3. 09 Nov, 2022 1 commit
  4. 08 Nov, 2022 1 commit
  5. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
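      A minimal sketch of the scheduler swap this PR enables; the class name follows
      the PR's final naming ("multistep DPM-Solver"), and the model id is
      illustrative, not part of the PR:

        # Swap a Stable Diffusion pipeline's default scheduler for the multistep
        # DPM-Solver scheduler, which converges in far fewer steps (~20 vs. ~50).
        from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        # Reuse the existing scheduler's configuration when swapping.
        pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
        image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]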
  6. 04 Nov, 2022 2 commits
  7. 03 Nov, 2022 6 commits
    • make style · bde4880c
      Patrick von Platen authored
    • Correct VQDiffusion Pipeline import · 33108bfa
      Patrick von Platen authored
    • fix copies · 988c8222
      Patrick von Platen authored
    • default fast model loading 🔥 (#1115) · 74821781
      Suraj Patil authored
      
      
      * make accelerate hard dep
      
      * default fast init
      
      * move params to cpu when device map is None
      
      * handle device_map=None
      
      * handle torch < 1.9
      
      * remove device_map="auto"
      
      * style
      
      * add accelerate in torch extra
      
      * remove accelerate from extras["test"]
      
      * raise an error if torch is available but not accelerate
      
      * update installation docs
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * improve default loading speed even further, allow disabling fast loading
      
      * address review comments
      
      * adapt the tests
      
      * fix test_stable_diffusion_fast_load
      
      * fix test_read_init
      
      * temp fix for dummy checks
      
      * Trigger Build
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
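      A sketch of the loading behavior this PR makes the default; the
      `low_cpu_mem_usage` flag is the opt-out implied by "allow disabling fast
      loading", and the model id is illustrative:

        from diffusers import StableDiffusionPipeline

        # Default: with accelerate installed, weights are loaded directly into
        # the model, skipping the costly random-initialization pass.
        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

        # Opt out of fast loading, e.g. when debugging initialization:
        pipe_slow = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", low_cpu_mem_usage=False
        )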
    • VQ-diffusion (#658) · ef2ea33c
      Will Berman authored
      
      
      * Changes for VQ-diffusion VQVAE
      
      Add the ability to specify the embedding dimension in VQModel:
      by default, `VQModel` sets the embedding dimension to the number
      of latent channels. The VQ-diffusion VQVAE uses a smaller
      embedding dimension (128) than its number of latent channels (256).
      
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down
      unet block helpers. VQ-diffusion's VQVAE uses those two block types.
      
      * Changes for VQ-diffusion transformer
      
      Modify attention.py so SpatialTransformer can be used for
      VQ-diffusion's transformer.
      
      SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
      - `in_channels` was made optional in the constructor, so the two call sites that passed it positionally now pass it as a keyword argument
      - modified forward pass to take optional timestep embeddings
      
      ImagePositionalEmbeddings:
      - added to provide positional embeddings to discrete inputs for latent pixels
      
      BasicTransformerBlock:
      - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
      - modified forward pass to take optional timestep embeddings
      
      CrossAttention:
      - now may optionally take a bias parameter for its query, key, and value linear layers
      
      FeedForward:
      - Internal layers are now configurable
      
      ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer
      
      AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings
      
      * Add VQ-diffusion scheduler
      
      * Add VQ-diffusion pipeline
      
      * Add VQ-diffusion convert script to diffusers
      
      * Add VQ-diffusion dummy objects
      
      * Add VQ-diffusion markdown docs
      
      * Add VQ-diffusion tests
      
      * some renaming
      
      * some fixes
      
      * more renaming
      
      * correct
      
      * fix typo
      
      * correct weights
      
      * finalize
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * finish
      
      * finish
      
      * up
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
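      A rough sketch of the AdaLayerNorm idea described above: a LayerNorm whose
      scale and shift are predicted from a timestep embedding. This mirrors the
      PR's description, not the exact diffusers implementation:

        import torch
        import torch.nn as nn

        class AdaLayerNormSketch(nn.Module):
            def __init__(self, embedding_dim: int, num_timesteps: int):
                super().__init__()
                self.emb = nn.Embedding(num_timesteps, embedding_dim)
                self.silu = nn.SiLU()
                # Predict a per-channel scale and shift from the timestep.
                self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
                self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

            def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
                emb = self.linear(self.silu(self.emb(timestep)))
                scale, shift = emb.chunk(2, dim=-1)
                # x is (batch, seq, channels); broadcast over the sequence dim.
                return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)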
    • feat: add repaint (#974) · d38c8043
      Revist authored
      
      
      * feat: add repaint
      
      * fix: fix quality check with `make fix-copies`
      
      * fix: remove old unnecessary arg
      
      * chore: change default to DDPM (looks better in experiments)
      
      * ".to(device)" changed to "device="
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * make generator device-specific
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * make generator device-specific and change shape
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * fix: add preprocessing for image and mask
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * fix: update test
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Update src/diffusers/pipelines/repaint/pipeline_repaint.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Add docs and examples
      
      * Fix toctree
      Co-authored-by: fja <fja@zurich.ibm.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <anton@huggingface.co>
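      A hedged sketch of the inpainting flow this PR adds; the checkpoint id and
      image paths are illustrative, and the mask convention should be checked
      against the pipeline docs:

        import torch
        from PIL import Image
        from diffusers import RePaintPipeline, RePaintScheduler

        original = Image.open("face_256.png")  # placeholder path
        mask = Image.open("mask_256.png")      # placeholder path, marks region to inpaint

        scheduler = RePaintScheduler()
        pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

        # Per the PR, the generator is created on a specific device.
        generator = torch.Generator(device="cpu").manual_seed(0)
        result = pipe(image=original, mask_image=mask, generator=generator).images[0]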
  8. 02 Nov, 2022 2 commits
    • Integration tests precision improvement for inpainting (#1052) · 8ee21915
      Lewington-pitsos authored
      
      
      * improve test precision
      
      get tests passing with greater precision using lewington images
      
      * make old numpy load function a wrapper around a more flexible numpy loading function
      
      * adhere to black formatting
      
      * add more black formatting
      
      * adhere to isort
      
      * loosen precision and replace path
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
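      A hypothetical sketch of the refactor described above: the old repo-specific
      numpy loader becomes a thin wrapper around a more flexible one. The helper
      names and the base URL are illustrative:

        import io

        import numpy as np
        import requests

        def load_numpy(url: str) -> np.ndarray:
            # Flexible loader: fetch a .npy file from any URL.
            response = requests.get(url)
            response.raise_for_status()
            return np.load(io.BytesIO(response.content))

        def load_hf_numpy(path: str) -> np.ndarray:
            # Old-style loader, now a wrapper resolving a repo-relative path.
            base = "https://huggingface.co/datasets/example/expected-outputs/resolve/main"
            return load_numpy(f"{base}/{path}")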
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory-efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
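      The toggle named in this PR, shown in a minimal sketch (requires the
      optional xformers package; the model id is illustrative):

        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
        # Route attention through xformers' memory-efficient kernels:
        # lower peak memory and up to ~2x faster on supported GPUs.
        pipe.enable_xformers_memory_efficient_attention()
        image = pipe("a red sports car on a mountain road").images[0]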
  9. 31 Oct, 2022 2 commits
  10. 28 Oct, 2022 3 commits
  11. 25 Oct, 2022 1 commit
  12. 24 Oct, 2022 1 commit
  13. 20 Oct, 2022 2 commits
  14. 19 Oct, 2022 2 commits
  15. 18 Oct, 2022 1 commit
  16. 14 Oct, 2022 1 commit
  17. 13 Oct, 2022 1 commit
  18. 12 Oct, 2022 1 commit
  19. 11 Oct, 2022 1 commit
    • Flax: Trickle down `norm_num_groups` (#789) · a1242044
      Akash Pannu authored
      * pass norm_num_groups param and add tests
      
      * set resnet_groups for FlaxUNetMidBlock2D
      
      * fixed docstrings
      
      * fixed typo
      
      * use the `is_flax_available` util and create a `require_flax` decorator
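      A sketch of what "trickling down" means here, assuming the Flax VAE exposes
      the parameter (the value below is illustrative):

        from diffusers import FlaxAutoencoderKL

        # norm_num_groups is now forwarded to the GroupNorm layers of every
        # sub-block, including FlaxUNetMidBlock2D's resnet_groups.
        vae = FlaxAutoencoderKL(norm_num_groups=16)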
  20. 06 Oct, 2022 1 commit
  21. 05 Oct, 2022 1 commit
  22. 04 Oct, 2022 2 commits
    • Fix import if PyTorch is not installed (#715) · 215bb408
      Pedro Cuenca authored
      * Fix import if PyTorch is not installed.
      
      * Style (blank line)
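      The general guard pattern this fix relies on, as a sketch (the specific
      symbols guarded by the PR are not shown):

        from diffusers.utils import is_torch_available

        if is_torch_available():
            from diffusers import UNet2DModel  # real, torch-backed implementation
        else:
            # Dummy objects raise a helpful error only when actually used.
            from diffusers.utils.dummy_pt_objects import UNet2DModel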
    • add accelerate to load models with smaller memory footprint (#361) · 4d1cce2f
      Pi Esposito authored
      
      
      * add accelerate to load models with smaller memory footprint
      
      * remove low_cpu_mem_usage as it is redundant
      
      * move accelerate init weights context to modelling utils
      
      * add test to ensure results are the same when loading with accelerate
      
      * add tests to ensure ram usage gets lower when using accelerate
      
      * move accelerate logic to single snippet under modelling utils and remove it from configuration utils
      
      * format code to pass the quality check
      
      * fix imports with isort
      
      * add accelerate to test extra deps
      
      * only import accelerate if device_map is set to auto
      
      * move accelerate availability check to diffusers import utils
      
      * format code
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
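      A sketch of the loading path this PR wires up: with accelerate installed,
      device_map="auto" initializes weights with a much smaller peak RAM footprint
      (the model id and subfolder are illustrative):

        from diffusers import UNet2DConditionModel

        unet = UNet2DConditionModel.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            subfolder="unet",
            device_map="auto",  # triggers the accelerate-backed low-memory path
        )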
  23. 03 Oct, 2022 1 commit
  24. 24 Sep, 2022 1 commit
  25. 21 Sep, 2022 1 commit
  26. 20 Sep, 2022 1 commit