"sgl-kernel/git@developer.sourcefind.cn:change/sglang.git" did not exist on "7d3b7c87f501a959e2e5b5f4c3170b35194789f9"
- 11 Apr, 2023 1 commit
  - Chanchana Sornsoontorn authored
    * ⚙️ chore(train_controlnet): fix typo in logger message
    * ⚙️ chore(models): refactor module order; make it the same as the calling order. When printing the BasicTransformerBlock to stdout, it is crucial that the attributes are shown in the order in which they are called. Previously the "3. Feed Forward" comment also made no sense: it should have been close to self.ff, but it sat next to self.norm3 instead.
    * correct many tests
    * remove bogus file
    * make style
    * correct more tests
    * finish tests
    * fix one more
    * make style
    * make unclip deterministic
    * ⚙️ chore(models/attention): reorganize comments in the BasicTransformerBlock class

    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
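To make the ordering point concrete, here is a minimal sketch (a toy block, not the actual diffusers class) in which the attributes are declared, commented, and called in the same order:

```python
import torch
import torch.nn as nn
from typing import Optional

class MiniTransformerBlock(nn.Module):
    """Toy block whose attribute order matches the order they are used in forward()."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        # 1. Self-attention
        self.norm1 = nn.LayerNorm(dim)
        self.attn1 = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # 2. Cross-attention
        self.norm2 = nn.LayerNorm(dim)
        self.attn2 = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # 3. Feed-forward (comment placed with the norm3 + ff pair, next to self.ff)
        self.norm3 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, context: Optional[torch.Tensor] = None) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn1(h, h, h)[0]       # 1. self-attention
        ctx = context if context is not None else x
        h = self.norm2(x)
        x = x + self.attn2(h, ctx, ctx)[0]   # 2. cross-attention
        x = x + self.ff(self.norm3(x))       # 3. feed-forward
        return x
```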
- 01 Mar, 2023 1 commit
  - Patrick von Platen authored
- 04 Jan, 2023 1 commit
  - Erin authored
    * test resnet block
    * fix code format required by isort
    * add torch device
    * nit
- 01 Jan, 2023 1 commit
  - Patrick von Platen authored
    * [Attention] Finish refactor attention file
    * correct more
    * fix
    * more fixes
    * correct
    * up
- 07 Dec, 2022 1 commit
  - Patrick von Platen authored
    * add paint by example
    * make loading possible
    * up
    * Update src/diffusers/models/attention.py
    * up
    * finalize weight structure
    * make example work
    * make it work
    * up
    * up
    * fix
    * del
    * add
    * update
    * Apply suggestions from code review
    * correct transformer 2d
    * finish
    * up
    * up
    * up
    * up
    * fix
    * Apply suggestions from code review
    * up
    * finish

    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 03 Nov, 2022 1 commit
  - Will Berman authored
    * Changes for the VQ-diffusion VQVAE
      Allow specifying the dimension of embeddings in VQModel: VQModel will by default set the embedding dimension to the number of latent channels, but the VQ-diffusion VQVAE uses a smaller embedding dimension (128) than its number of latent channels (256).
      Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers; VQ-diffusion's VQVAE uses those two block types.
    * Changes for the VQ-diffusion transformer
      Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer:
      - SpatialTransformer: can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones; in_channels was made optional in the constructor, so two call sites that passed it positionally were moved to kwargs; the forward pass now takes optional timestep embeddings.
      - ImagePositionalEmbeddings: added to provide positional embeddings to discrete inputs for latent pixels.
      - BasicTransformerBlock: norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings; the forward pass now takes optional timestep embeddings.
      - CrossAttention: may now optionally take a bias parameter for its query, key, and value linear layers.
      - FeedForward: internal layers are now configurable.
      - ApproximateGELU: the activation function in VQ-diffusion's feed-forward layer.
      - AdaLayerNorm: norm layer modified to incorporate timestep embeddings.
    * Add VQ-diffusion scheduler
    * Add VQ-diffusion pipeline
    * Add VQ-diffusion convert script to diffusers
    * Add VQ-diffusion dummy objects
    * Add VQ-diffusion markdown docs
    * Add VQ-diffusion tests
    * some renaming
    * some fixes
    * more renaming
    * correct
    * fix typo
    * correct weights
    * finalize
    * fix tests
    * Apply suggestions from code review
    * Apply suggestions from code review
    * finish
    * finish
    * up

    Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
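Two of the new layers are simple enough to sketch. The following is a minimal reconstruction of the ideas described above (the shapes and the unsqueeze for broadcasting are illustrative choices, not necessarily the exact diffusers code):

```python
import torch
import torch.nn as nn

class ApproximateGELU(nn.Module):
    """Sigmoid approximation of GELU used in VQ-diffusion's feed-forward layer."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(1.702 * x)

class AdaLayerNorm(nn.Module):
    """LayerNorm whose scale and shift are predicted from a timestep embedding."""
    def __init__(self, embedding_dim: int, num_embeddings: int):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings, embedding_dim)
        self.silu = nn.SiLU()
        self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
        # elementwise_affine=False: the affine parameters come from the timestep instead
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

    def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embedding_dim); timestep: (batch,) integer indices
        emb = self.linear(self.silu(self.emb(timestep)))   # (batch, 2 * embedding_dim)
        scale, shift = emb.unsqueeze(1).chunk(2, dim=-1)   # each (batch, 1, embedding_dim)
        return self.norm(x) * (1 + scale) + shift
```

Making the norm configurable then lets BasicTransformerBlock pick plain nn.LayerNorm when no timestep conditioning is needed and AdaLayerNorm when it is.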
- 17 Oct, 2022 1 commit
  - Anton Lozhkov authored
    * [CI] Add Apple M1 tests
    * setup-python
    * python build
    * conda install
    * remove branch
    * only 3.8 is built for osx-arm
    * try fetching prebuilt tokenizers
    * use user cache
    * update shells
    * Reports and cleanup
    * -> MPS
    * Disable parallel tests
    * Better naming
    * investigate worker crash
    * return xdist
    * restart
    * num_workers=2
    * still crashing?
    * faulthandler for segfaults
    * remove restarts, stop on segfault
    * torch version
    * change installation order
    * Use pre-RC version of PyTorch. To be updated when it is released.
    * Skip crashing test on MPS, add new one that works.
    * Skip cuda tests in mps device.
    * Actually use generator in test. I think this was a typo.
    * make style

    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
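The generator fix is worth a note, since the bug pattern is easy to reproduce: creating a seeded torch.Generator but never passing it leaves sampling nondeterministic. A minimal illustration in plain PyTorch:

```python
import torch

gen = torch.Generator().manual_seed(0)
a = torch.randn(4)                  # bug: gen is created but unused, so this varies per run
b = torch.randn(4, generator=gen)   # fix: actually pass the generator

gen = torch.Generator().manual_seed(0)
c = torch.randn(4, generator=gen)
assert torch.equal(b, c)            # same seed + same generator => identical draws
```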
- 03 Oct, 2022 1 commit
  - Patrick von Platen authored
    * [Utils] Add deprecate function
    * up
    * up
    * up
    * up
    * up
    * up
    * up
    * up
    * up
    * fix
    * up
    * move to deprecation utils file
    * fix
    * fix
    * fix more
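As a rough sketch of what such a helper does (the function name `deprecate` matches the commit, but this signature is illustrative, not the exact diffusers API):

```python
import warnings

def deprecate(name: str, removal_version: str, message: str) -> None:
    """Emit a FutureWarning that `name` will be removed in `removal_version`."""
    warnings.warn(
        f"`{name}` is deprecated and will be removed in version {removal_version}. {message}",
        FutureWarning,
        stacklevel=2,  # point the warning at the caller, not at this helper
    )

# Illustrative use: keep accepting a retired keyword while steering callers away.
def load_model(path: str, use_auth_token=None, token=None):
    if use_auth_token is not None:
        deprecate("use_auth_token", "0.10.0", "Please pass `token` instead.")
        token = use_auth_token
```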
- 16 Sep, 2022 2 commits
  - Anton Lozhkov authored
  - Sid Sahai authored
    * add test for AttentionBlock, SpatialTransformer
    * add context_dim, handle device
    * removed dropout test
    * fixes, add dropout test
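A sketch of the shape such a block test takes (the class and module path are as they stood in diffusers at the time; the sizes are illustrative):

```python
import torch
from diffusers.models.attention import AttentionBlock  # module path of this era

def test_attention_block_default():
    torch.manual_seed(0)
    # "handle device": run on the accelerator when present, else CPU
    device = "cuda" if torch.cuda.is_available() else "cpu"

    block = AttentionBlock(channels=32, num_head_channels=8).to(device)
    sample = torch.randn(1, 32, 64, 64, device=device)

    with torch.no_grad():
        output = block(sample)

    # the block is residual, so input and output shapes must match
    assert output.shape == sample.shape
```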
- 17 Aug, 2022 1 commit
  - Anton Lozhkov authored
    * Revive Make utils
    * Add datasets for training too
- 04 Jul, 2022 1 commit
  - Suraj Patil authored
- 03 Jul, 2022 1 commit
  - Patrick von Platen authored
    * make unet rl work
    * upload files / code
    * upload files
    * make style correct
    * finish
- 27 Jun, 2022 6 commits
  - patil-suraj authored
  - patil-suraj authored
  - Patrick von Platen authored
  - Patrick von Platen authored
  - Patrick von Platen authored
  - Patrick von Platen authored
- 26 Jun, 2022 1 commit
  - Patrick von Platen authored
- 25 Jun, 2022 1 commit
  - Patrick von Platen authored
- 22 Jun, 2022 4 commits
  - Patrick von Platen authored
  - Patrick von Platen authored
  - Patrick von Platen authored
  - Patrick von Platen authored
- 21 Jun, 2022 3 commits
  - patil-suraj authored
  - patil-suraj authored
  - anton-l authored
- 20 Jun, 2022 5 commits
  - patil-suraj authored
  - patil-suraj authored
  - patil-suraj authored
  - patil-suraj authored
  - Patrick von Platen authored
- 17 Jun, 2022 6 commits
  - patil-suraj authored
  - patil-suraj authored
  - patil-suraj authored
  - patil-suraj authored
  - Patrick von Platen authored
  - Patrick von Platen authored
- 15 Jun, 2022 1 commit
  - Patrick von Platen authored