1. 02 Nov, 2022 2 commits
    • [CI] Framework and hardware-specific CI tests (#997) · 4e59bcc6
      Anton Lozhkov authored
      * [WIP][CI] Framework and hardware-specific docker images for CI tests
      
      * username
      
      * fix cpu
      
      * try out the image
      
      * push latest
      
      * update workspace
      
      * no root isolation for actions
      
      * add a flax image
      
      * flax and onnx matrix
      
      * fix runners
      
      * add reports
      
      * onnxruntime image
      
      * retry tpu
      
      * fix
      
      * fix
      
      * build onnxruntime
      
      * naming
      
      * onnxruntime-gpu image
      
      * onnxruntime-gpu image, slow tests
      
      * latest jax version
      
      * trigger flax
      
      * run flax tests in one thread
      
      * fast flax tests on cpu
      
      * fast flax tests on cpu
      
      * trigger slow tests
      
      * rebuild torch cuda
      
      * force cuda provider
      
      * fix onnxruntime tests
      
      * trigger slow
      
      * don't specify gpu for tpu
      
      * optimize
      
      * memory limit
      
      * fix flax tests
      
      * disable docker cache
      4e59bcc6
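      The "force cuda provider" and onnxruntime-gpu steps above amount to pinning the ONNX Runtime execution provider on the GPU runners. A minimal sketch of that pattern, assuming a placeholder model path rather than any file from the repository:

          import onnxruntime as ort

          # Request the CUDA provider explicitly; otherwise onnxruntime can silently
          # fall back to the CPU provider and the GPU tests exercise nothing.
          session = ort.InferenceSession(
              "model.onnx",  # placeholder path, not a repository file
              providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
          )
          # Verify CUDA was actually picked up before running the slow tests.
          assert "CUDAExecutionProvider" in session.get_providers()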
    • Integration tests precision improvement for inpainting (#1052) · 8ee21915
      Lewington-pitsos authored
      
      
      * improve test precision
      
      get tests passing with greater precision using lewington images
      
      * make old numpy load function a wrapper around a more flexible numpy loading function
      
      * adhere to black formatting
      
      * add more black formatting
      
      * adhere to isort
      
      * loosen precision and replace path
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      8ee21915
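      The second bullet above describes turning the old numpy load helper into a wrapper around a more flexible loading function. A hedged sketch of that shape; the function names and the URL handling are assumptions, not the repository's actual helpers:

          from io import BytesIO

          import numpy as np
          import requests


          def load_numpy(path_or_url: str) -> np.ndarray:
              # Hypothetical flexible loader: fetch remote .npy reference images,
              # fall back to reading from disk for local paths.
              if path_or_url.startswith(("http://", "https://")):
                  response = requests.get(path_or_url)
                  response.raise_for_status()
                  return np.load(BytesIO(response.content))
              return np.load(path_or_url)


          def load_local_numpy(path: str) -> np.ndarray:
              # The old entry point becomes a thin wrapper around the flexible one.
              return load_numpy(path)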
  2. 31 Oct, 2022 4 commits
  3. 28 Oct, 2022 8 commits
  4. 27 Oct, 2022 2 commits
  5. 26 Oct, 2022 2 commits
    • minimal stable diffusion GPU memory usage with accelerate hooks (#850) · b2e2d141
      Pi Esposito authored
      * add method to enable cuda with minimal gpu usage to stable diffusion
      
      * add test to minimal cuda memory usage
      
      * ensure all models but unet are in torch.float32
      
      * move to cpu_offload along with minor internal changes to make it work
      
      * make it test against accelerate master branch
      
      * coming back, it's official: I don't know how to make it test against accelerate's master branch
      
      * make it install accelerate from master on tests
      
      * go back to accelerate>=0.11
      
      * undo prettier formatting on yml files
      
      * undo prettier formatting on yml files again
      b2e2d141
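      The cpu_offload step above relies on accelerate's hooks to keep each sub-model on the CPU and move it to the GPU only while it executes, so peak GPU memory stays close to the largest single component. A rough sketch of the idea rather than the pipeline's exact implementation; the model id and component names follow the public StableDiffusionPipeline API:

          import torch
          from accelerate import cpu_offload
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
          device = torch.device("cuda")

          # Attach offload hooks: each sub-model is moved to the GPU only for its forward pass.
          for model in (pipe.unet, pipe.text_encoder, pipe.vae):
              cpu_offload(model, execution_device=device)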
    • Do not use torch.float64 on the mps device (#942) · 0343d8f5
      Pedro Cuenca authored
      * Add failing test for #940.
      
      * Do not use torch.float64 in mps.
      
      * style
      
      * Temporarily skip add_noise for IPNDMScheduler.
      
      Until #990 is addressed.
      0343d8f5
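      The fix above boils down to never requesting torch.float64 when tensors live on mps, which has no float64 support. A minimal illustration of the device-dependent dtype choice; the helper name is illustrative:

          import torch

          def timestep_dtype(device: torch.device) -> torch.dtype:
              # mps has no float64 kernels, so fall back to float32 there.
              return torch.float32 if device.type == "mps" else torch.float64

          device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
          timesteps = torch.arange(1000, dtype=timestep_dtype(device), device=device)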
  6. 25 Oct, 2022 4 commits
  7. 24 Oct, 2022 1 commit
  8. 22 Oct, 2022 1 commit
  9. 21 Oct, 2022 1 commit
  10. 20 Oct, 2022 4 commits
  11. 19 Oct, 2022 4 commits
  12. 18 Oct, 2022 2 commits
  13. 17 Oct, 2022 2 commits
    • Fix autoencoder test (#886) · 100e094c
      Pedro Cuenca authored
      Fix autoencoder test.
      100e094c
    • Add Apple M1 tests (#796) · cca59ce3
      Anton Lozhkov authored
      
      
      * [CI] Add Apple M1 tests
      
      * setup-python
      
      * python build
      
      * conda install
      
      * remove branch
      
      * only 3.8 is built for osx-arm
      
      * try fetching prebuilt tokenizers
      
      * use user cache
      
      * update shells
      
      * Reports and cleanup
      
      * -> MPS
      
      * Disable parallel tests
      
      * Better naming
      
      * investigate worker crash
      
      * return xdist
      
      * restart
      
      * num_workers=2
      
      * still crashing?
      
      * faulthandler for segfaults
      
      * faulthandler for segfaults
      
      * remove restarts, stop on segfault
      
      * torch version
      
      * change installation order
      
      * Use pre-RC version of PyTorch.
      
      To be updated when it is released.
      
      * Skip crashing test on MPS, add new one that works.
      
      * Skip cuda tests in mps device.
      
      * Actually use generator in test.
      
      I think this was a typo.
      
      * make style
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      cca59ce3
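      Several of the steps above come down to skipping CUDA-only tests on the Apple Silicon (mps) runner and actually passing the generator to the call that was meant to use it. A hedged sketch of both patterns; the test names, skip reasons, and tensor shapes are illustrative:

          import pytest
          import torch


          @pytest.mark.skipif(not torch.cuda.is_available(), reason="CUDA-only test, skipped on mps runners")
          def test_cuda_only_path():
              ...


          def test_mps_generation():
              if not torch.backends.mps.is_available():
                  pytest.skip("requires the mps backend")
              # Seed through an explicit generator so the test is deterministic.
              generator = torch.Generator(device="cpu").manual_seed(0)
              noise = torch.randn((1, 4, 64, 64), generator=generator)
              assert noise.shape == (1, 4, 64, 64)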
  14. 14 Oct, 2022 2 commits
  15. 13 Oct, 2022 1 commit