"src/vscode:/vscode.git/clone" did not exist on "ed507680e35b7628bc11235255cfc58ad1101626"
  1. 03 Nov, 2022 3 commits
  2. 02 Nov, 2022 9 commits
    • [Loading] Ignore unneeded files (#1107) · c39a511b
      Patrick von Platen authored
      * [Loading] Ignore unneeded files
      
      * up
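      The change filters out files that aren't needed for the selected framework when a pipeline is downloaded. As a hedged illustration only (the entry above doesn't show the mechanism), the same effect can be sketched with huggingface_hub's `ignore_patterns`; the repo id and patterns below are assumptions:

      ```python
      # Hedged sketch: skip non-PyTorch weight files when fetching a repo.
      # The repo id and patterns are illustrative, not from the commit.
      from huggingface_hub import snapshot_download

      path = snapshot_download(
          "google/ddpm-cifar10-32",                 # illustrative repo id
          ignore_patterns=["*.msgpack", "*.onnx"],  # skip Flax/ONNX weights
      )
      print(path)
      ```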
    • Training to predict x0 in training example (#1031) · cbcd0512
      Denis authored
      * changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
      
      * Revert "changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly"
      
      This reverts commit c5efb525648885f2e7df71f4483a9f248515ad61.
      
      * changed training example to add option to train model that predicts x0 (instead of eps), changed DDPM pipeline accordingly
      
      * fixed code style
      Co-authored-by: lukovnikov <lukovnikov@users.noreply.github.com>
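      A hedged sketch of the option described above: training with a model that predicts the denoised sample x0 directly instead of the noise eps. This uses the current DDPMScheduler `prediction_type` argument; the exact flag name added by the PR may differ:

      ```python
      from diffusers import DDPMScheduler

      # "epsilon" (default): the model output is the predicted noise eps.
      # "sample": the model output is the denoised image x0 itself.
      scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="sample")
      ```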
    • [Flax] time embedding (#1081) · 0b61cea3
      Kashif Rasul authored
      * initial get_sinusoidal_embeddings
      
      * added asserts
      
      * better var name
      
      * fix docs
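      For reference, sinusoidal timestep embeddings follow the Transformer position-embedding recipe: a geometric ladder of frequencies, with half the channels passed through sin and half through cos. A standalone NumPy sketch of the idea (the actual diffusers helper is written in JAX/Flax and its signature may differ):

      ```python
      import numpy as np

      def get_sinusoidal_embeddings(timesteps, embedding_dim):
          # Frequencies decay geometrically from 1 down to 1/10000.
          half_dim = embedding_dim // 2
          freqs = np.exp(-np.log(10000.0) * np.arange(half_dim) / (half_dim - 1))
          args = timesteps[:, None].astype(np.float32) * freqs[None, :]
          # Half of the channels use sin, the other half cos.
          return np.concatenate([np.sin(args), np.cos(args)], axis=-1)

      print(get_sinusoidal_embeddings(np.array([0, 10, 999]), 32).shape)  # (3, 32)
      ```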
    • Fix tests for equivalence of DDIM and DDPM pipelines (#1069) · 5cd29d62
      Grigory Sizov authored
      * Fix equality test for ddim and ddpm
      
      * add docs for use_clipped_model_output in DDIM
      
      * fix inline comment
      
      * reorder imports in test_pipelines.py
      
      * Ignore use_clipped_model_output if scheduler doesn't take it
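      A hedged usage sketch of the flag documented in this PR: with eta=1.0, DDIM reduces to DDPM, and `use_clipped_model_output=True` re-derives the noise prediction from the clipped x0 so the two pipelines agree. Tensor shapes are illustrative:

      ```python
      import torch
      from diffusers import DDIMScheduler

      scheduler = DDIMScheduler(clip_sample=True)
      scheduler.set_timesteps(50)

      sample = torch.randn(1, 3, 32, 32)        # current noisy sample
      model_output = torch.randn(1, 3, 32, 32)  # stand-in for a UNet eps prediction

      out = scheduler.step(
          model_output,
          int(scheduler.timesteps[0]),
          sample,
          eta=1.0,                        # eta=1.0 makes DDIM equivalent to DDPM
          use_clipped_model_output=True,  # reuse the clipped x0 when re-deriving eps
      )
      print(out.prev_sample.shape)  # torch.Size([1, 3, 32, 32])
      ```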
    • Fix a small typo of a variable name (#1063) · 1216a3b1
      Omiita authored
      Fix a small typo
      
      fix a typo in `models/attention.py`.
      weight -> width
    • [CI] Framework and hardware-specific CI tests (#997) · 4e59bcc6
      Anton Lozhkov authored
      * [WIP][CI] Framework and hardware-specific docker images for CI tests
      
      * username
      
      * fix cpu
      
      * try out the image
      
      * push latest
      
      * update workspace
      
      * no root isolation for actions
      
      * add a flax image
      
      * flax and onnx matrix
      
      * fix runners
      
      * add reports
      
      * onnxruntime image
      
      * retry tpu
      
      * fix
      
      * fix
      
      * build onnxruntime
      
      * naming
      
      * onnxruntime-gpu image
      
      * onnxruntime-gpu image, slow tests
      
      * latest jax version
      
      * trigger flax
      
      * run flax tests in one thread
      
      * fast flax tests on cpu
      
      * fast flax tests on cpu
      
      * trigger slow tests
      
      * rebuild torch cuda
      
      * force cuda provider
      
      * fix onnxruntime tests
      
      * trigger slow
      
      * don't specify gpu for tpu
      
      * optimize
      
      * memory limit
      
      * fix flax tests
      
      * disable docker cache
    • Rename latent (#1102) · d53ffbbd
      Patrick von Platen authored
      * Rename latent
      
      * uP
    • Integration tests precision improvement for inpainting (#1052) · 8ee21915
      Lewington-pitsos authored
      * improve test precision
      
      get tests passing with greater precision using lewington images
      
      * make old numpy load function a wrapper around a more flexible numpy loading function
      
      * adhere to black formatting
      
      * add more black formatting
      
      * adhere to isort
      
      * loosen precision and replace path
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory-efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
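      The toggle mentioned above is a one-line opt-in on a pipeline. A minimal usage sketch, assuming the optional xformers package is installed and a CUDA GPU is available (the model id is illustrative):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # illustrative model id
          torch_dtype=torch.float16,
      ).to("cuda")

      # Replace the default attention with xformers' memory-efficient kernels.
      pipe.enable_xformers_memory_efficient_attention()

      image = pipe("an astronaut riding a horse").images[0]
      ```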
  3. 31 Oct, 2022 7 commits
  4. 30 Oct, 2022 1 commit
  5. 29 Oct, 2022 2 commits
    • Experimental: allow fp16 in `mps` (#961) · 95414bd6
      Pedro Cuenca authored
      * Docs: refer to pre-RC version of PyTorch 1.13.0.
      
      * Remove temporary workaround for unavailable op.
      
      * Update comment to make it less ambiguous.
      
      * Remove use of contiguous in mps.
      
      It appears to no longer be necessary.
      
      * Special case: use einsum for much better performance in mps
      
      * Update mps docs.
      
      * MPS: make pipeline work in half precision.
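      A minimal sketch of the half-precision `mps` path this PR enables, assuming a recent PyTorch build with MPS support (model id illustrative); the only difference from the CUDA case above is the dtype/device pair:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # illustrative model id
          torch_dtype=torch.float16,         # experimental fp16 on Apple Silicon
      ).to("mps")

      image = pipe("an astronaut riding a horse").images[0]
      ```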
    • clean incomplete pages (#1008) · 12fd0736
      Nathan Lambert authored
  6. 28 Oct, 2022 5 commits
  7. 27 Oct, 2022 5 commits
  8. 26 Oct, 2022 4 commits
  9. 25 Oct, 2022 4 commits