"sgl-kernel/git@developer.sourcefind.cn:change/sglang.git" did not exist on "7d3b7c87f501a959e2e5b5f4c3170b35194789f9"
  1. 11 Apr, 2023 1 commit
    • Fix typo and format BasicTransformerBlock attributes (#2953) · 52c4d32d
      Chanchana Sornsoontorn authored
      * chore(train_controlnet) fix typo in logger message
      
      * chore(models) refactor modules order; make them the same as calling order
      
      When printing a BasicTransformerBlock to stdout, I think it's important that the attributes are listed in their proper (calling) order. Also, the "3. Feed Forward" comment previously made no sense: it should sit next to self.ff, but it was instead next to self.norm3.
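      
      As a hedged illustration of the ordering this aims for (plain torch.nn modules stand in for the diffusers attention classes; names and arguments are illustrative only):
      
      ```python
      import torch.nn as nn

      class BasicTransformerBlockSketch(nn.Module):
          """Illustrative only: attributes declared in the order they are called."""

          def __init__(self, dim: int, num_heads: int, ff_mult: int = 4):
              super().__init__()
              # 1. Self-Attention
              self.norm1 = nn.LayerNorm(dim)
              self.attn1 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
              # 2. Cross-Attention
              self.norm2 = nn.LayerNorm(dim)
              self.attn2 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
              # 3. Feed Forward (the comment now sits next to self.ff, not self.norm3)
              self.norm3 = nn.LayerNorm(dim)
              self.ff = nn.Sequential(
                  nn.Linear(dim, dim * ff_mult),
                  nn.GELU(),
                  nn.Linear(dim * ff_mult, dim),
              )
      ```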
      
      * correct many tests
      
      * remove bogus file
      
      * make style
      
      * correct more tests
      
      * finish tests
      
      * fix one more
      
      * make style
      
      * make unclip deterministic
      
      * chore(models/attention) reorganize comments in BasicTransformerBlock class
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  2. 01 Mar, 2023 1 commit
  3. 04 Jan, 2023 1 commit
    • Test ResnetBlock2D (#1850) · 9e17983d
      Erin authored
      * test resnet block
      
      * fix code format required by isort
      
      * add torch device
      
      * nit
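      
      A minimal sketch of what such a shape test might look like (constructor arguments, shapes, and the import path are illustrative assumptions, not taken from the PR):
      
      ```python
      import torch
      from diffusers.models.resnet import ResnetBlock2D  # path may differ across diffusers versions

      def test_resnet_block_2d_preserves_shape():
          torch_device = "cuda" if torch.cuda.is_available() else "cpu"
          block = ResnetBlock2D(in_channels=32, temb_channels=128).to(torch_device)
          sample = torch.randn(1, 32, 64, 64, device=torch_device)
          temb = torch.randn(1, 128, device=torch_device)
          with torch.no_grad():
              output = block(sample, temb)
          # With out_channels left at its default, the block keeps the input shape.
          assert output.shape == sample.shape
      ```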
  4. 01 Jan, 2023 1 commit
  5. 07 Dec, 2022 1 commit
    • Add paint by example (#1533) · 896c98a2
      Patrick von Platen authored
      
      
      * add paint by example
      
      * make loading possible
      
      * up
      
      * Update src/diffusers/models/attention.py
      
      * up
      
      * finalize weight structure
      
      * make example work
      
      * make it work
      
      * up
      
      * up
      
      * fix
      
      * del
      
      * add
      
      * update
      
      * Apply suggestions from code review
      
      * correct transformer 2d
      
      * finish
      
      * up
      
      * up
      
      * up
      
      * up
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  6. 03 Nov, 2022 1 commit
    • VQ-diffusion (#658) · ef2ea33c
      Will Berman authored
      
      
      * Changes for VQ-diffusion VQVAE
      
      Allow specifying the dimension of embeddings in VQModel:
      `VQModel` will by default set the dimension of embeddings to the number
      of latent channels. The VQ-diffusion VQVAE has a smaller
      embedding dimension (128) than its number of latent channels (256).
      
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the down and up
      unet block helpers. VQ-diffusion's VQVAE uses those two block types.
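      
      As a hedged sketch of that configuration (the keyword names, in particular the one for the embedding dimension, are assumptions and have varied across diffusers versions):
      
      ```python
      from diffusers import VQModel

      # VQ-diffusion-style VQVAE: 256 latent channels but a 128-dimensional
      # codebook, so the embedding dim no longer has to equal latent_channels.
      # NOTE: `vq_embed_dim` / `num_vq_embeddings` are assumed keyword names.
      vqvae = VQModel(
          latent_channels=256,
          vq_embed_dim=128,
          num_vq_embeddings=1024,
      )
      ```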
      
      * Changes for VQ-diffusion transformer
      
      Modify attention.py so SpatialTransformer can be used for
      VQ-diffusion's transformer.
      
      SpatialTransformer:
      - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
      - `in_channels` was made optional in the constructor, so the two call sites that passed it as a positional arg now pass it as a keyword arg
      - modified forward pass to take optional timestep embeddings
      
      ImagePositionalEmbeddings:
      - added to provide positional embeddings to discrete inputs for latent pixels
      
      BasicTransformerBlock:
      - norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings
      - modified forward pass to take optional timestep embeddings
      
      CrossAttention:
      - now may optionally take a bias parameter for its query, key, and value linear layers
      
      FeedForward:
      - Internal layers are now configurable
      
      ApproximateGELU:
      - Activation function in VQ-diffusion's feedforward layer
      
      AdaLayerNorm:
      - Norm layer modified to incorporate timestep embeddings
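      
      For reference, a minimal sketch of the AdaLayerNorm idea described above (timestep-conditioned scale and shift around a parameter-free LayerNorm; names and shapes are illustrative rather than the exact diffusers implementation):
      
      ```python
      import torch
      import torch.nn as nn

      class AdaLayerNormSketch(nn.Module):
          """LayerNorm whose scale/shift are predicted from a timestep embedding."""

          def __init__(self, embedding_dim: int, num_embeddings: int):
              super().__init__()
              self.emb = nn.Embedding(num_embeddings, embedding_dim)
              self.silu = nn.SiLU()
              self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
              self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

          def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
              # x: (batch, seq_len, dim); timestep: (batch,) integer timestep ids
              emb = self.linear(self.silu(self.emb(timestep)))        # (batch, 2 * dim)
              scale, shift = torch.chunk(emb[:, None, :], 2, dim=-1)  # each (batch, 1, dim)
              return self.norm(x) * (1 + scale) + shift
      ```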
      
      * Add VQ-diffusion scheduler
      
      * Add VQ-diffusion pipeline
      
      * Add VQ-diffusion convert script to diffusers
      
      * Add VQ-diffusion dummy objects
      
      * Add VQ-diffusion markdown docs
      
      * Add VQ-diffusion tests
      
      * some renaming
      
      * some fixes
      
      * more renaming
      
      * correct
      
      * fix typo
      
      * correct weights
      
      * finalize
      
      * fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * finish
      
      * finish
      
      * up
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  7. 17 Oct, 2022 1 commit
    • Add Apple M1 tests (#796) · cca59ce3
      Anton Lozhkov authored
      
      
      * [CI] Add Apple M1 tests
      
      * setup-python
      
      * python build
      
      * conda install
      
      * remove branch
      
      * only 3.8 is built for osx-arm
      
      * try fetching prebuilt tokenizers
      
      * use user cache
      
      * update shells
      
      * Reports and cleanup
      
      * -> MPS
      
      * Disable parallel tests
      
      * Better naming
      
      * investigate worker crash
      
      * return xdist
      
      * restart
      
      * num_workers=2
      
      * still crashing?
      
      * faulthandler for segfaults
      
      * faulthandler for segfaults
      
      * remove restarts, stop on segfault
      
      * torch version
      
      * change installation order
      
      * Use pre-RC version of PyTorch.
      
      To be updated when it is released.
      
      * Skip crashing test on MPS, add new one that works.
      
      * Skip cuda tests in mps device.
      
      * Actually use generator in test.
      
      I think this was a typo.
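      
      A hedged sketch of the kind of device handling described in the last few items above, skipping CUDA-only tests on MPS/CPU and passing a generator explicitly for determinism (test names and shapes are hypothetical):
      
      ```python
      import pytest
      import torch

      torch_device = (
          "mps" if torch.backends.mps.is_available()
          else "cuda" if torch.cuda.is_available()
          else "cpu"
      )

      @pytest.mark.skipif(torch_device != "cuda", reason="CUDA-only test")
      def test_fp16_inference_requires_cuda():
          ...

      def test_noise_is_reproducible_with_generator():
          # Passing the generator (rather than relying on the global seed)
          # keeps the test deterministic across devices.
          noise_a = torch.randn(1, 4, 8, 8, generator=torch.Generator().manual_seed(0))
          noise_b = torch.randn(1, 4, 8, 8, generator=torch.Generator().manual_seed(0))
          assert torch.allclose(noise_a, noise_b)
      ```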
      
      * make style
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  8. 03 Oct, 2022 1 commit
  9. 16 Sep, 2022 2 commits
  10. 17 Aug, 2022 1 commit
  11. 04 Jul, 2022 1 commit
  12. 03 Jul, 2022 1 commit
  13. 27 Jun, 2022 6 commits
  14. 26 Jun, 2022 1 commit
  15. 25 Jun, 2022 1 commit
  16. 22 Jun, 2022 4 commits
  17. 21 Jun, 2022 3 commits
  18. 20 Jun, 2022 5 commits
  19. 17 Jun, 2022 6 commits
  20. 15 Jun, 2022 1 commit