"src/vscode:/vscode.git/clone" did not exist on "a0592a13eecfa7d0d9971d2837e7380aedce65e5"
  1. 28 Oct, 2022 8 commits
  2. 27 Oct, 2022 2 commits
  3. 26 Oct, 2022 2 commits
    • minimal stable diffusion GPU memory usage with accelerate hooks (#850) · b2e2d141
      Pi Esposito authored
      * add method to enable cuda with minimal gpu usage to stable diffusion
      
      * add test for minimal cuda memory usage
      
      * ensure all models but unet are on torch.float32
      
      * move to cpu_offload along with minor internal changes to make it work
      
      * make it test against accelerate master branch
      
      * coming back, it's official: I don't know how to make it test against the master branch from accelerate
      
      * make it install accelerate from master on tests
      
      * go back to accelerate>=0.11
      
      * undo prettier formatting on yml files
      
      * undo prettier formatting on yml files again
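
      The cpu_offload change above relies on accelerate hooks so that weights stay on the CPU and are paged onto the GPU only for each submodule's forward pass. A minimal, self-contained sketch of the underlying accelerate call on a plain module (the pipeline-level method name added in #850 is not shown in this log, so the example stops at the hook itself):

        # Sketch only: accelerate.cpu_offload attaches hooks that keep weights on the
        # CPU and move them to the execution device just for the forward pass.
        import torch
        from torch import nn
        from accelerate import cpu_offload

        model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
        cpu_offload(model, execution_device=torch.device("cuda"))

        x = torch.randn(2, 1024)  # inputs can stay on the CPU; the hook moves them
        y = model(x)              # only the active layer's weights occupy GPU memory

      In #850 the same call is applied to the pipeline's large submodules (unet, text_encoder, vae), which keeps peak GPU memory low at the cost of extra CPU/GPU transfers.
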
    • Do not use torch.float64 on the mps device (#942) · 0343d8f5
      Pedro Cuenca authored
      * Add failing test for #940.
      
      * Do not use torch.float64 in mps.
      
      * style
      
      * Temporarily skip add_noise for IPNDMScheduler.
      
      Until #990 is addressed.
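
      The mps fix above comes down to never allocating float64 tensors when the sample lives on Apple's mps backend, which does not support double precision. A hedged sketch of the pattern (the function and argument names are illustrative, not the actual scheduler signature):

        # Sketch only: fall back to float32 on mps, where torch.float64 raises.
        import torch

        def timesteps_like(timesteps, sample):
            dtype = torch.float32 if sample.device.type == "mps" else torch.float64
            return torch.as_tensor(timesteps, dtype=dtype, device=sample.device)
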
  4. 25 Oct, 2022 4 commits
  5. 24 Oct, 2022 1 commit
  6. 22 Oct, 2022 1 commit
  7. 21 Oct, 2022 1 commit
  8. 20 Oct, 2022 4 commits
  9. 19 Oct, 2022 4 commits
  10. 18 Oct, 2022 2 commits
  11. 17 Oct, 2022 2 commits
    • Fix autoencoder test (#886) · 100e094c
      Pedro Cuenca authored
      Fix autoencoder test.
    • Add Apple M1 tests (#796) · cca59ce3
      Anton Lozhkov authored
      * [CI] Add Apple M1 tests
      
      * setup-python
      
      * python build
      
      * conda install
      
      * remove branch
      
      * only 3.8 is built for osx-arm
      
      * try fetching prebuilt tokenizers
      
      * use user cache
      
      * update shells
      
      * Reports and cleanup
      
      * -> MPS
      
      * Disable parallel tests
      
      * Better naming
      
      * investigate worker crash
      
      * return xdist
      
      * restart
      
      * num_workers=2
      
      * still crashing?
      
      * faulthandler for segfaults
      
      * faulthandler for segfaults
      
      * remove restarts, stop on segfault
      
      * torch version
      
      * change installation order
      
      * Use pre-RC version of PyTorch.
      
      To be updated when it is released.
      
      * Skip crashing test on MPS, add new one that works.
      
      * Skip cuda tests in mps device.
      
      * Actually use generator in test.
      
      I think this was a typo.
      
      * make style
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
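
      Two recurring themes in the M1 CI work above are diagnosing hard crashes and keeping CUDA-only tests off Apple Silicon runners. A hedged sketch of both ideas in test code (torch_device and require_cuda are illustrative names, not necessarily the repo's helpers):

        # Sketch only: surface segfaults and gate device-specific tests.
        import faulthandler
        import unittest
        import torch

        faulthandler.enable()  # print Python stacks if the process crashes hard (e.g. an MPS segfault)

        torch_device = (
            "cuda" if torch.cuda.is_available()
            else "mps" if torch.backends.mps.is_available()
            else "cpu"
        )

        def require_cuda(test_case):
            # skip on the M1 (mps) and CPU runners
            return unittest.skipUnless(torch_device == "cuda", "test requires CUDA")(test_case)
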
  12. 14 Oct, 2022 2 commits
  13. 13 Oct, 2022 5 commits
  14. 12 Oct, 2022 2 commits