1. 01 Jan, 2023 1 commit
  2. 07 Dec, 2022 1 commit
    • Add paint by example (#1533) · 896c98a2
      Patrick von Platen authored
      
      
      * add paint by example
      
      * make loading possible
      
      * up
      
      * Update src/diffusers/models/attention.py
      
      * up
      
      * finalize weight structure
      
      * make example work
      
      * make it work
      
      * up
      
      * up
      
      * fix
      
      * del
      
      * add
      
      * update
      
      * Apply suggestions from code review
      
      * correct transformer 2d
      
      * finish
      
      * up
      
      * up
      
      * up
      
      * up
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
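      The squashed history above does not show how the new pipeline is called, so here is
      a minimal usage sketch. The class name `PaintByExamplePipeline`, the checkpoint id
      "Fantasy-Studio/Paint-by-Example", and the local file names are assumptions for
      illustration, not taken from this commit.

      ```python
      # Minimal usage sketch for the paint-by-example pipeline; the class name,
      # checkpoint id, and file names are assumptions for illustration only.
      import torch
      from PIL import Image

      from diffusers import PaintByExamplePipeline

      pipe = PaintByExamplePipeline.from_pretrained(
          "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
      ).to("cuda")

      init_image = Image.open("scene.png").convert("RGB").resize((512, 512))         # image to edit
      mask_image = Image.open("mask.png").convert("L").resize((512, 512))             # white = region to repaint
      example_image = Image.open("reference.png").convert("RGB").resize((512, 512))   # object to paint in

      result = pipe(
          image=init_image,
          mask_image=mask_image,
          example_image=example_image,
      ).images[0]
      result.save("paint_by_example_result.png")
      ```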
  3. 03 Nov, 2022 1 commit
    • VQ-diffusion (#658) · ef2ea33c
      Will Berman authored
      
      
      * Changes for VQ-diffusion VQVAE
      
      Allow specifying the embedding dimension of `VQModel`:
      by default, `VQModel` sets the embedding dimension to the number of
      latent channels, but the VQ-diffusion VQVAE uses a smaller embedding
      dimension (128) than its number of latent channels (256).
      
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the down and up
      unet block helpers; VQ-diffusion's VQVAE uses those two block types.
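      To make this concrete, here is a hedged configuration sketch: a `VQModel` whose
      codebook embedding dimension is smaller than its number of latent channels, using
      the two attention blocks named above. The keyword `vq_embed_dim` and the values
      below are assumptions for illustration, not copied from the actual VQ-diffusion config.

      ```python
      # Hedged sketch: a VQModel with a codebook embedding dimension (128) smaller
      # than its number of latent channels (256), using the new attention blocks.
      # The keyword `vq_embed_dim` and the values below are illustrative assumptions.
      from diffusers import VQModel

      vqvae = VQModel(
          in_channels=3,
          out_channels=3,
          down_block_types=("DownEncoderBlock2D", "AttnDownEncoderBlock2D"),
          up_block_types=("AttnUpDecoderBlock2D", "UpDecoderBlock2D"),
          block_out_channels=(128, 256),
          latent_channels=256,     # channels produced by the encoder
          vq_embed_dim=128,        # smaller codebook embedding dimension
          num_vq_embeddings=1024,  # illustrative codebook size
      )
      ```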
      
      * Changes for VQ-diffusion transformer
      
      Modify attention.py so SpatialTransformer can be used for
      VQ-diffusion's transformer.
      
      SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
      - `in_channels` was made optional in the constructor, so the two call sites that passed it positionally now pass it as a keyword argument
      - modified forward pass to take optional timestep embeddings
      
      ImagePositionalEmbeddings:
      - added to provide positional embeddings to discrete inputs for latent pixels
      
      BasicTransformerBlock:
      - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
      - modified forward pass to take optional timestep embeddings
      
      CrossAttention:
      - may now take an optional bias parameter for its query, key, and value linear layers
      
      FeedForward:
      - Internal layers are now configurable
      
      ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer
      
      AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings
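      To make the two smaller pieces above concrete, here is a standalone sketch of an
      ApproximateGELU activation and an AdaLayerNorm conditioned on timestep embeddings.
      It mirrors the ideas described in this commit rather than the exact diffusers
      implementations, and the shapes noted in the comments are assumptions.

      ```python
      # Standalone sketches of two components described above; they mirror the ideas
      # in the commit message, not necessarily the exact diffusers implementations.
      import torch
      import torch.nn as nn


      class ApproximateGELU(nn.Module):
          """Sigmoid-based GELU approximation, x * sigmoid(1.702 * x), as used in
          VQ-diffusion's feed-forward layer."""

          def __init__(self, dim_in: int, dim_out: int):
              super().__init__()
              self.proj = nn.Linear(dim_in, dim_out)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              x = self.proj(x)
              return x * torch.sigmoid(1.702 * x)


      class AdaLayerNorm(nn.Module):
          """LayerNorm whose scale and shift are predicted from a learned timestep
          embedding, so normalization is conditioned on the diffusion step."""

          def __init__(self, embedding_dim: int, num_timesteps: int):
              super().__init__()
              self.emb = nn.Embedding(num_timesteps, embedding_dim)
              self.silu = nn.SiLU()
              self.linear = nn.Linear(embedding_dim, 2 * embedding_dim)
              self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

          def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
              # x: (batch, seq_len, embedding_dim); timestep: (batch,) of step indices
              emb = self.linear(self.silu(self.emb(timestep)))  # (batch, 2 * embedding_dim)
              scale, shift = emb.chunk(2, dim=-1)
              return self.norm(x) * (1 + scale[:, None]) + shift[:, None]
      ```

      In a transformer block, AdaLayerNorm takes the place of a plain LayerNorm, which is
      why the norm layers and forward signature of BasicTransformerBlock had to become
      configurable.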
      
      * Add VQ-diffusion scheduler
      
      * Add VQ-diffusion pipeline
      
      * Add VQ-diffusion convert script to diffusers
      
      * Add VQ-diffusion dummy objects
      
      * Add VQ-diffusion markdown docs
      
      * Add VQ-diffusion tests
      
      * some renaming
      
      * some fixes
      
      * more renaming
      
      * correct
      
      * fix typo
      
      * correct weights
      
      * finalize
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * finish
      
      * finish
      
      * up
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  4. 17 Oct, 2022 1 commit
    • Add Apple M1 tests (#796) · cca59ce3
      Anton Lozhkov authored
      
      
      * [CI] Add Apple M1 tests
      
      * setup-python
      
      * python build
      
      * conda install
      
      * remove branch
      
      * only 3.8 is built for osx-arm
      
      * try fetching prebuilt tokenizers
      
      * use user cache
      
      * update shells
      
      * Reports and cleanup
      
      * -> MPS
      
      * Disable parallel tests
      
      * Better naming
      
      * investigate worker crash
      
      * return xdist
      
      * restart
      
      * num_workers=2
      
      * still crashing?
      
      * faulthandler for segfaults
      
      * faulthandler for segfaults
      
      * remove restarts, stop on segfault
      
      * torch version
      
      * change installation order
      
      * Use pre-RC version of PyTorch.
      
      To be updated when it is released.
      
      * Skip crashing test on MPS, add new one that works.
      
      * Skip cuda tests in mps device.
      
      * Actually use generator in test.
      
      I think this was a typo.
      
      * make style
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
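      For context on the last few items, an illustrative sketch of MPS-aware test gating and
      generator handling follows. The helper names are invented for this example; they are
      not the decorators actually used in the diffusers test suite.

      ```python
      # Illustrative only: the helper names below are invented for this example and
      # are not the decorators used in the diffusers test suite.
      import pytest
      import torch

      torch_device = (
          "cuda"
          if torch.cuda.is_available()
          else "mps"
          if torch.backends.mps.is_available()
          else "cpu"
      )

      requires_cuda = pytest.mark.skipif(torch_device != "cuda", reason="test requires CUDA")


      @requires_cuda
      def test_fp16_inference():
          # CUDA-only behaviour: skipped when the suite runs on MPS or CPU.
          ...


      def test_reproducible_sampling():
          # A CPU generator keeps sampling deterministic regardless of where the
          # tensors end up ("Actually use generator in test" above).
          generator = torch.Generator("cpu").manual_seed(0)
          noise = torch.randn(1, 4, 8, 8, generator=generator).to(torch_device)
          assert noise.shape == (1, 4, 8, 8)
      ```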
  5. 03 Oct, 2022 1 commit
  6. 16 Sep, 2022 2 commits
  7. 17 Aug, 2022 1 commit
  8. 04 Jul, 2022 1 commit
  9. 03 Jul, 2022 1 commit
  10. 27 Jun, 2022 6 commits
  11. 26 Jun, 2022 1 commit
  12. 25 Jun, 2022 1 commit
  13. 22 Jun, 2022 4 commits
  14. 21 Jun, 2022 3 commits
  15. 20 Jun, 2022 5 commits
  16. 17 Jun, 2022 6 commits
  17. 15 Jun, 2022 2 commits
  18. 14 Jun, 2022 2 commits