- 13 Mar, 2024 1 commit
-
-
Sayak Paul authored
* fix PyTorch classes and start deprecation cycles.
* remove args crafting for accommodating scale.
* remove scale check in feedforward.
* assert against nn.Linear and not CompatibleLinear.
* remove conv_cls and linear_cls.
* remove scale
* 👋 scale.
* fix: unet2dcondition
* fix attention.py
* fix: attention.py again
* fix: unet_2d_blocks.
* fix-copies.
* more fixes.
* fix: resnet.py
* more fixes
* fix i2vgenxl unet.
* deprecate scale gently.
* fix-copies
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com>
* quality
* throw warning when scale is passed to the BasicTransformerBlock class.
* remove scale from signature.
* cross_attention_kwargs, very nice catch by Yiyi
* fix: logger.warn
* make deprecation message clearer.
* address final comments.
* maintain same deprecation message and also add it to activations.
* address yiyi
* fix copies
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com>
* more deprecation
* fix-copies

Co-authored-by: YiYi Xu <yixu310@gmail.com>
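The shape of the change, as a minimal sketch (the helper name and warning text below are illustrative, not the actual diffusers code): rather than threading `scale` through every layer, the deprecated `scale` entry in `cross_attention_kwargs` is popped with a warning.

```python
import warnings

def pop_deprecated_scale(cross_attention_kwargs):
    # Hypothetical helper: drop the deprecated `scale` entry with a warning
    # instead of propagating it through every attention layer.
    kwargs = dict(cross_attention_kwargs or {})
    if "scale" in kwargs:
        kwargs.pop("scale")
        warnings.warn(
            "Passing `scale` via `cross_attention_kwargs` is deprecated and will be "
            "ignored; set the LoRA scale on the layer weights instead.",
            FutureWarning,
            stacklevel=2,
        )
    return kwargs
```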
-
- 08 Feb, 2024 1 commit
-
-
Sayak Paul authored
change to 2024
-
- 29 Jan, 2024 1 commit
-
-
Sayak Paul authored
* move transformer scripts to transformers modules
* move transformer model test
* move prior transformer test to directory
* fix doc path
* correct doc path
* add: __init__.py
-
- 05 Dec, 2023 1 commit
-
-
Arsalan authored
* utils and test modifications to enable device agnostic testing
* device for manual seed in unet1d
* fix generator condition in vae test
* consistency changes to testing
* make style
* add device agnostic testing changes to source and one model test
* make dtype check fns private, log cuda fp16 case
* remove dtype checks from import utils, move to testing_utils
* adding tests for most model classes and one pipeline
* fix vae import
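A rough sketch of the pattern these changes enable (names are illustrative): tests resolve a single `torch_device` up front and seed generators on CPU, instead of hard-coding `"cuda"`.

```python
import torch

# Illustrative sketch: resolve the test device once so tests stay device agnostic.
if torch.cuda.is_available():
    torch_device = "cuda"
elif torch.backends.mps.is_available():
    torch_device = "mps"
else:
    torch_device = "cpu"

def test_forward_runs_on_any_device():
    # Seed a CPU generator for portable determinism, then move data to the device.
    generator = torch.Generator(device="cpu").manual_seed(0)
    sample = torch.randn(1, 4, generator=generator).to(torch_device)
    model = torch.nn.Linear(4, 4).to(torch_device)
    assert model(sample).shape == (1, 4)
```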
-
- 11 Sep, 2023 1 commit
-
-
Dhruv Nair authored
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion ddim ddpm
* lazy import deepfloyd
* lazy import kandinsky
* lazy imports
* lazy import semantic diffusion
* lazy imports
* lazy import stable diffusion
* move sd output to its own module
* clean up
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffusion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* lazy import
* clean up
* clean up
* fixes for tests
* fixes for tests
* clean up
* remove import of torch_utils from utils module
* clean up
* clean up
* fix mistaken import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* fix check dummies
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
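The core idea, sketched here with plain PEP 562 module-level `__getattr__` rather than the real `_LazyModule` class (the `_import_structure` entries are illustrative): a top-level name only triggers its submodule's import on first access, so `import diffusers` stays cheap.

```python
# In a package __init__.py (sketch of the idea behind _LazyModule):
import importlib

_import_structure = {
    "models": ["UNet2DConditionModel"],
    "schedulers": ["DDIMScheduler"],
}

# Invert the structure: attribute name -> submodule that defines it.
_attr_to_module = {
    attr: mod for mod, attrs in _import_structure.items() for attr in attrs
}

def __getattr__(name):
    # Import the submodule only when one of its names is first accessed.
    if name in _attr_to_module:
        module = importlib.import_module(f".{_attr_to_module[name]}", __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```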
-
- 25 Jul, 2023 1 commit
-
-
Batuhan Taskaya authored
* Support to load Kohya-ss style LoRA file format (without restrictions) Co-authored-by: Takuma Mori <takuma104@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* tmp: add sdxl to mlp_modules

Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 22 May, 2023 1 commit
-
-
Patrick von Platen authored
* up * fix more * Apply suggestions from code review * fix more * fix more * Check it * Remove 16:8 * fix more * fix more * fix more * up * up * Test only stable diffusion * Test only two files * up * Try out spinning up processes that can be killed * up * Apply suggestions from code review * up * up
-
- 12 May, 2023 1 commit
-
-
Will Berman authored
* Replace `AttentionBlock` with `Attention` * use _from_deprecated_attn_block check re: @patrickvonplaten
-
- 13 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* [Tests] parallelize
* finish folder structuring
* Parallelize tests more
* Correct saving of pipelines
* make sure logging level is correct
* try again
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 12 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* fix slow tests * make style
-
- 11 Apr, 2023 1 commit
-
-
Chanchana Sornsoontorn authored
* ⚙️ chore(train_controlnet) fix typo in logger message
* ⚙️ chore(models) refactor module order; make it match the calling order. When printing the BasicTransformerBlock to stdout, it's crucial that the attributes are shown in their calling order. Also, the "3. Feed Forward" comment previously made no sense: it should have been close to self.ff, but it was next to self.norm3 instead.
* correct many tests
* remove bogus file
* make style
* correct more tests
* finish tests
* fix one more
* make style
* make unclip deterministic
* ⚙️ chore(models/attention) reorganize comments in BasicTransformerBlock class

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
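A toy illustration of the convention being enforced (module types and sizes are made up): declare submodules in the order `forward` calls them, so `print(block)` mirrors the computation and the feed-forward comment lands next to `self.ff`.

```python
import torch.nn as nn

class ToyTransformerBlock(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # 1. Self-Attention
        self.norm1 = nn.LayerNorm(dim)
        self.attn1 = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # 3. Feed Forward -- comment sits next to self.ff, matching call order
        # (2. Cross-Attention omitted in this toy)
        self.norm3 = nn.LayerNorm(dim)
        self.ff = nn.Linear(dim, dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn1(h, h, h)[0]
        return x + self.ff(self.norm3(x))
```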
-
- 01 Mar, 2023 1 commit
-
-
Patrick von Platen authored
-
- 04 Jan, 2023 1 commit
-
-
Erin authored
* test resnet block * fix code format required by isort * add torch device * nit
-
- 01 Jan, 2023 1 commit
-
-
Patrick von Platen authored
* [Attention] Finish refactoring attention file * correct more * fix * more fixes * correct * up
-
- 07 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* add paint by example
* make loading possible
* up
* Update src/diffusers/models/attention.py
* up
* finalize weight structure
* make example work
* make it work
* up
* up
* fix
* del
* add
* update
* Apply suggestions from code review
* correct transformer 2d
* finish
* up
* up
* up
* up
* fix
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Apply suggestions from code review
* up
* finish

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 03 Nov, 2022 1 commit
-
-
Will Berman authored
* Changes for the VQ-diffusion VQVAE:
  - Add the option to specify the dimension of embeddings to VQModel. `VQModel` will by default set the dimension of embeddings to the number of latent channels; the VQ-diffusion VQVAE has a smaller embedding dimension (128) than its number of latent channels (256).
  - Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers; VQ-diffusion's VQVAE uses those two block types.
* Changes for the VQ-diffusion transformer. Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer:
  - SpatialTransformer: can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones; `in_channels` was made optional in the constructor, so two locations where it was passed as a positional arg were moved to kwargs; modified the forward pass to take optional timestep embeddings.
  - ImagePositionalEmbeddings: added to provide positional embeddings for discrete inputs over latent pixels.
  - BasicTransformerBlock: norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings; modified the forward pass to take optional timestep embeddings.
  - CrossAttention: may now optionally take a bias parameter for its query, key, and value linear layers.
  - FeedForward: internal layers are now configurable.
  - ApproximateGELU: the activation function in VQ-diffusion's feedforward layer.
  - AdaLayerNorm: norm layer modified to incorporate timestep embeddings.
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish
* finish
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
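From the diffusers side, the VQVAE configuration roughly looks like this (hedged: `vq_embed_dim` and `num_vq_embeddings` are the parameter names in recent releases and may have differed when this landed; the codebook size here is illustrative).

```python
from diffusers import VQModel

# Codebook embedding dimension (128) set independently of the latent
# channels (256), as VQ-diffusion's VQVAE requires.
vqvae = VQModel(
    in_channels=3,
    out_channels=3,
    latent_channels=256,
    vq_embed_dim=128,         # smaller than latent_channels
    num_vq_embeddings=16384,  # codebook size; value here is illustrative
)
```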
-
- 17 Oct, 2022 1 commit
-
-
Anton Lozhkov authored
* [CI] Add Apple M1 tests
* setup-python
* python build
* conda install
* remove branch
* only 3.8 is built for osx-arm
* try fetching prebuilt tokenizers
* use user cache
* update shells
* Reports and cleanup
* -> MPS
* Disable parallel tests
* Better naming
* investigate worker crash
* return xdist
* restart
* num_workers=2
* still crashing?
* faulthandler for segfaults
* faulthandler for segfaults
* remove restarts, stop on segfault
* torch version
* change installation order
* Use pre-RC version of PyTorch. To be updated when it is released.
* Skip crashing test on MPS, add new one that works.
* Skip cuda tests in mps device.
* Actually use generator in test. I think this was a typo.
* make style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
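The CI pattern at work, roughly (marker and test names are illustrative): CUDA-only tests are skipped on the M1 runner, and the remaining tests target MPS when it is available.

```python
import pytest
import torch

# Skip CUDA-only tests on machines without CUDA (e.g. an Apple M1 runner).
requires_cuda = pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA")

torch_device = "mps" if torch.backends.mps.is_available() else "cpu"

@requires_cuda
def test_fp16_on_cuda():
    model = torch.nn.Linear(2, 2).half().to("cuda")
    assert model.weight.dtype == torch.float16

def test_forward_on_available_device():
    out = torch.nn.Linear(2, 2).to(torch_device)(torch.randn(1, 2, device=torch_device))
    assert out.shape == (1, 2)
```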
-
- 03 Oct, 2022 1 commit
-
-
Patrick von Platen authored
* [Utils] Add deprecate function * up * up * up * up * up * up * up * up * up * fix * up * move to deprecation utils file * fix * fix * fix more
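A minimal sketch of what such a deprecate helper can look like (version handling and the example call are illustrative; the real diffusers utility also extracts deprecated kwargs and can handle several deprecations at once).

```python
import warnings

from packaging import version

__version__ = "0.5.0"  # stand-in for the library version

def deprecate(name: str, removed_in: str, message: str):
    # Warn until the removal version ships, then fail loudly so stale
    # deprecations cannot outlive their announced removal.
    if version.parse(__version__) >= version.parse(removed_in):
        raise ValueError(f"`{name}` was removed in version {removed_in}. {message}")
    warnings.warn(
        f"`{name}` is deprecated and will be removed in version {removed_in}. {message}",
        FutureWarning,
        stacklevel=2,
    )

deprecate("predict_epsilon", "0.10.0", "Use `prediction_type` instead.")
```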
-
- 16 Sep, 2022 2 commits
-
-
Anton Lozhkov authored
-
Sid Sahai authored
* add test for AttentionBlock, SpatialTransformer * add context_dim, handle device * removed dropout test * fixes, add dropout test
-
- 17 Aug, 2022 1 commit
-
-
Anton Lozhkov authored
* Revive Make utils * Add datasets for training too
-
- 04 Jul, 2022 1 commit
-
-
Suraj Patil authored
-
- 03 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* make unet rl work * upload files / code * upload files * make style correct * finish
-
- 27 Jun, 2022 6 commits
-
-
patil-suraj authored
-
patil-suraj authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 26 Jun, 2022 1 commit
-
-
Patrick von Platen authored
-
- 25 Jun, 2022 1 commit
-
-
Patrick von Platen authored
-
- 22 Jun, 2022 4 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 21 Jun, 2022 3 commits
-
-
patil-suraj authored
-
patil-suraj authored
-
anton-l authored
-
- 20 Jun, 2022 2 commits
-
-
patil-suraj authored
-
patil-suraj authored
-