1. 07 Nov, 2022 7 commits
  2. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
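
For context, a minimal usage sketch of the scheduler this PR adds (assuming it is exposed as `DPMSolverMultistepScheduler`; the checkpoint id, dtype, and step count are illustrative, not taken from the commit):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load a Stable Diffusion checkpoint and swap its default scheduler for the
# multistep DPM-Solver. Because the solver converges quickly, around 20
# inference steps are usually enough.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint, not part of the PR
    torch_dtype=torch.float16,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("astronaut.png")
```
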
  3. 05 Nov, 2022 1 commit
  4. 04 Nov, 2022 9 commits
  5. 03 Nov, 2022 15 commits
  6. 02 Nov, 2022 7 commits
    • [Loading] Ignore unneeded files (#1107) · c39a511b
      Patrick von Platen authored
      * [Loading] Ignore unneeded files
      
      * up
    • Training to predict x0 in training example (#1031) · cbcd0512
      Denis authored
      
      
      * changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
      
      * Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"
      
      This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.
      
      * changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
      
      * fixed code style
      Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
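
A rough sketch of the training option described above, assuming a hypothetical `prediction_type` flag (the actual argument name in the example script may differ): the UNet output is regressed against either the added noise (eps) or the clean image (x0).

```python
import torch
import torch.nn.functional as F

def diffusion_loss(unet, noise_scheduler, clean_images, prediction_type="epsilon"):
    # Sample noise and random timesteps, then build the noisy inputs.
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (clean_images.shape[0],), device=clean_images.device,
    )
    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

    model_output = unet(noisy_images, timesteps).sample

    # The only difference between the two modes is the regression target.
    target = noise if prediction_type == "epsilon" else clean_images
    return F.mse_loss(model_output, target)
```
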
    • [Flax] time embedding (#1081) · 0b61cea3
      Kashif Rasul authored
      * initial get_sinusoidal_embeddings
      
      * added asserts
      
      * better var name
      
      * fix docs
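
A rough sketch of the sinusoidal timestep embeddings this commit adds on the Flax side (the function name and defaults here are illustrative and omit the scaling and frequency-shift options of the real implementation):

```python
import jax.numpy as jnp

def get_sinusoidal_embeddings(timesteps, embedding_dim, max_period=10000.0):
    """timesteps: 1-D array of diffusion steps, shape (batch,); embedding_dim assumed even."""
    half_dim = embedding_dim // 2
    # Log-spaced frequencies, one per sin/cos channel pair.
    exponent = -jnp.log(max_period) * jnp.arange(half_dim) / half_dim
    angles = timesteps[:, None].astype(jnp.float32) * jnp.exp(exponent)[None, :]
    # Concatenate sine and cosine parts -> shape (batch, embedding_dim).
    return jnp.concatenate([jnp.sin(angles), jnp.cos(angles)], axis=-1)
```
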
    • Fix padding in dreambooth (#1030) · 33c48745
      Yuta Hayashibe authored
    • Fix tests for equivalence of DDIM and DDPM pipelines (#1069) · 5cd29d62
      Grigory Sizov authored
      * Fix equality test for ddim and ddpm
      
      * add docs for use_clipped_model_output in DDIM
      
      * fix inline comment
      
      * reorder imports in test_pipelines.py
      
      * Ignore use_clipped_model_output if scheduler doesn't take it
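
A minimal sketch of the "ignore it if the scheduler doesn't take it" behaviour from the last bullet (the helper name is illustrative, not the pipeline's actual code): the kwarg is forwarded only when the scheduler's `step()` signature accepts it.

```python
import inspect

def scheduler_step(scheduler, model_output, timestep, sample, use_clipped_model_output=None):
    extra_kwargs = {}
    # Not every scheduler's step() accepts use_clipped_model_output,
    # so check the signature before passing it along.
    if "use_clipped_model_output" in inspect.signature(scheduler.step).parameters:
        extra_kwargs["use_clipped_model_output"] = use_clipped_model_output
    return scheduler.step(model_output, timestep, sample, **extra_kwargs)
```
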
    • Fix a small typo of a variable name (#1063) · 1216a3b1
      Omiita authored
      Fix a small typo
      
      fix a typo in `models/attention.py`.
      weight -> width
    • [CI] Framework and hardware-specific CI tests (#997) · 4e59bcc6
      Anton Lozhkov authored
      * [WIP][CI] Framework and hardware-specific docker images for CI tests
      
      * username
      
      * fix cpu
      
      * try out the image
      
      * push latest
      
      * update workspace
      
      * no root isolation for actions
      
      * add a flax image
      
      * flax and onnx matrix
      
      * fix runners
      
      * add reports
      
      * onnxruntime image
      
      * retry tpu
      
      * fix
      
      * fix
      
      * build onnxruntime
      
      * naming
      
      * onnxruntime-gpu image
      
      * onnxruntime-gpu image, slow tests
      
      * latest jax version
      
      * trigger flax
      
      * run flax tests in one thread
      
      * fast flax tests on cpu
      
      * fast flax tests on cpu
      
      * trigger slow tests
      
      * rebuild torch cuda
      
      * force cuda provider
      
      * fix onnxruntime tests
      
      * trigger slow
      
      * don't specify gpu for tpu
      
      * optimize
      
      * memory limit
      
      * fix flax tests
      
      * disable docker cache
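
On the "force cuda provider" step above: with ONNX Runtime, pinning the execution provider for the GPU tests typically looks like this (the model path is illustrative):

```python
import onnxruntime as ort

# Request the CUDA execution provider explicitly (CPU as fallback) so the
# onnxruntime-gpu build is actually exercised rather than silently running on CPU.
session = ort.InferenceSession(
    "unet/model.onnx",  # illustrative path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually registered
```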