- 26 Aug, 2025 1 commit
Sayak Paul authored
* start removing flax stuff.
* add deprecation warning.
* add warning messages.
* more warnings.
* remove dockerfiles.
* remove more.
* Update src/diffusers/models/attention_flax.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* up

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
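A minimal sketch of the kind of deprecation notice these commits describe (the wording and placement are illustrative, not the exact diffusers message):

```python
import warnings

# Illustrative only: roughly the shape of warning emitted when Flax modules are imported.
warnings.warn(
    "Flax/JAX support in diffusers is deprecated and will be removed in a future "
    "release. Please switch to the PyTorch implementations.",
    FutureWarning,
    stacklevel=2,
)
```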
- 19 Jun, 2025 1 commit
Aryan authored
update
- 19 May, 2025 1 commit
Quentin Gallouédec authored
* Use HF Papers
* Apply style fixes

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 14 Dec, 2024 1 commit
Juan Acevedo authored
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024
- 20 Nov, 2023 1 commit
Kashif Rasul authored
* ruff format
* no need to use doc-builder's black styling as the doc is styled with ruff
* make fix-copies
* comment
* use run_ruff
- 27 Sep, 2023 1 commit
Pedro Cuenca authored
- 26 Sep, 2023 2 commits
Pedro Cuenca authored
Juan Acevedo authored
* split_head_dim flax attn
* Make split_head_dim non default
* make style and make quality
* add description for split_head_dim flag
* Update src/diffusers/models/attention_flax.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
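For context, `split_head_dim` keeps the head axis as its own dimension instead of folding it into the batch axis, which tends to map better onto TPU layouts. A minimal sketch of the idea (the helper below is illustrative, not the actual `attention_flax.py` code):

```python
import jax
import jax.numpy as jnp

def attention_with_split_head_dim(q, k, v, heads, dim_head):
    # q, k, v: (batch, seq_len, heads * dim_head)
    b = q.shape[0]
    # keep heads as a separate axis rather than merging them into the batch dim
    q = q.reshape(b, -1, heads, dim_head)
    k = k.reshape(b, -1, heads, dim_head)
    v = v.reshape(b, -1, heads, dim_head)
    # scores per head n over query positions f and key positions t
    scores = jnp.einsum("btnh,bfnh->bnft", k, q) / jnp.sqrt(dim_head)
    probs = jax.nn.softmax(scores, axis=-1)
    out = jnp.einsum("bnft,btnh->bfnh", probs, v)
    return out.reshape(b, -1, heads * dim_head)
```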
- 07 Jul, 2023 1 commit
Saurav Maheshkar authored
* feat: add Dropout to Flax UNet
* feat: add @compact decorator
* fix: drop nn.compact
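A small sketch of the pattern this adds, assuming a Flax module defined with `setup()` (the block itself is illustrative, not the diffusers ResNet block): `nn.Dropout` is only active when `deterministic=False`, and then it needs a `dropout` RNG.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class BlockWithDropout(nn.Module):
    features: int
    dropout: float = 0.0

    def setup(self):
        self.dense = nn.Dense(self.features)
        self.drop = nn.Dropout(rate=self.dropout)

    def __call__(self, x, deterministic: bool = True):
        h = nn.silu(self.dense(x))
        return self.drop(h, deterministic=deterministic)

block = BlockWithDropout(features=8, dropout=0.1)
x = jnp.ones((1, 8))
params = block.init(jax.random.PRNGKey(0), x)
# training-mode call: dropout is active, so a dropout RNG must be provided
y = block.apply(params, x, deterministic=False, rngs={"dropout": jax.random.PRNGKey(1)})
```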
- 12 Apr, 2023 1 commit
Pedro Cuenca authored
* add use_memory_efficient params placeholder
* test
* add memory efficient attention jax
* add memory efficient attention jax
* newline
* forgot dot
* Rename use_memory_efficient
* Keep dtype last.
* Actually use key_chunk_size
* Rename symbol
* Apply style
* Rename use_memory_efficient
* Keep dtype last
* Pass `use_memory_efficient_attention` in `from_pretrained`
* Move JAX memory efficient attention to attention_flax.
* Simple test.
* style

Co-authored-by: muhammad_hanif <muhammad_hanif@sofcograha.co.id>
Co-authored-by: MuhHanif <48muhhanif@gmail.com>
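The memory efficient attention referenced here is the chunked, online-softmax formulation: keys and values are processed in chunks of `key_chunk_size`, with a running max and denominator so the full attention matrix is never materialised. A simplified single-head sketch of the idea (not the actual `attention_flax.py` implementation; it assumes the key length divides evenly into chunks):

```python
import jax
import jax.numpy as jnp

def chunked_attention(q, k, v, key_chunk_size=64):
    # q: (q_len, d); k, v: (kv_len, d); single head, kv_len divisible by key_chunk_size
    q_len, d = q.shape
    q = q / jnp.sqrt(d)
    k_chunks = k.reshape(-1, key_chunk_size, d)
    v_chunks = v.reshape(-1, key_chunk_size, d)

    def scan_chunk(carry, chunk):
        acc, denom, running_max = carry
        k_c, v_c = chunk
        s = q @ k_c.T                                    # (q_len, chunk)
        chunk_max = jnp.max(s, axis=-1, keepdims=True)
        new_max = jnp.maximum(running_max, chunk_max)
        # rescale previously accumulated values to the new running max
        correct_old = jnp.exp(running_max - new_max)
        p = jnp.exp(s - new_max)
        acc = acc * correct_old + p @ v_c
        denom = denom * correct_old + jnp.sum(p, axis=-1, keepdims=True)
        return (acc, denom, new_max), None

    init = (
        jnp.zeros((q_len, d)),
        jnp.zeros((q_len, 1)),
        jnp.full((q_len, 1), -jnp.inf),
    )
    (acc, denom, _), _ = jax.lax.scan(scan_chunk, init, (k_chunks, v_chunks))
    return acc / denom
```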
- 15 Mar, 2023 1 commit
Patrick von Platen authored
* rename file
* rename attention
* fix more
* rename more
* up
* more deprecation imports
* fixes
- 01 Mar, 2023 2 commits
Pedro Cuenca authored
Bring flax attention naming in sync with PyTorch.
Patrick von Platen authored
- 29 Nov, 2022 1 commit
Pedro Cuenca authored
* Flax: start adapting to Stable Diffusion 2
* More changes.
* attention_head_dim can be a tuple.
* Fix typos
* Add simple SD 2 integration test. Slice values taken from my Ampere GPU.
* Add simple UNet integration tests for Flax. Note that the expected values are taken from the PyTorch results. This ensures the Flax and PyTorch versions are not too far off.
* Apply suggestions from code review (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Typos and style
* Tests: verify jax is available.
* Style
* Make flake happy
* Remove typo.
* Simple Flax SD 2 pipeline tests.
* Import order
* Remove unused import.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
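On the "attention_head_dim can be a tuple" point: SD2-style UNets use a different number of attention heads per block, so the config accepts a per-block tuple. A hedged sketch (the constructor arguments may differ across diffusers versions, the values are illustrative, and this needs a release that still ships the Flax classes):

```python
from diffusers import FlaxUNet2DConditionModel

# Illustrative SD2-style values: one head count per down/up block
unet = FlaxUNet2DConditionModel(
    cross_attention_dim=1024,
    attention_head_dim=(5, 10, 20, 20),
)
```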
- 03 Nov, 2022 1 commit
Will Berman authored
* Changes for VQ-diffusion VQVAE
  - Add the ability to specify the dimension of embeddings to VQModel: `VQModel` will by default set the dimension of embeddings to the number of latent channels. The VQ-diffusion VQVAE has a smaller embedding dimension, 128, than its number of latent channels, 256.
  - Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers. VQ-diffusion's VQVAE uses those two block types.
* Changes for VQ-diffusion transformer
  - Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer.
  - SpatialTransformer: can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones; `in_channels` was made optional in the constructor, so two locations where it was passed as a positional arg were moved to kwargs; modified the forward pass to take optional timestep embeddings.
  - ImagePositionalEmbeddings: added to provide positional embeddings to discrete inputs for latent pixels.
  - BasicTransformerBlock: norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings; modified the forward pass to take optional timestep embeddings.
  - CrossAttention: may now optionally take a bias parameter for its query, key, and value linear layers.
  - FeedForward: internal layers are now configurable.
  - ApproximateGELU: activation function in VQ-diffusion's feedforward layer.
  - AdaLayerNorm: norm layer modified to incorporate timestep embeddings.
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* finish
* finish
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
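The VQVAE point above is that the codebook dimension is decoupled from the latent channels: `VQModel` defaults the embedding dimension to the number of latent channels, while VQ-diffusion's VQVAE projects 256 latent channels down to 128-dimensional codebook entries. A hedged sketch using present-day parameter names (`vq_embed_dim` / `num_vq_embeddings`; the names at the time of this commit may have differed, and the codebook size here is illustrative):

```python
from diffusers import VQModel

vqvae = VQModel(
    latent_channels=256,     # channels produced by the encoder
    vq_embed_dim=128,        # smaller codebook dimension, as in VQ-diffusion's VQVAE
    num_vq_embeddings=1024,  # illustrative codebook size
)
```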
- 23 Sep, 2022 1 commit
Younes Belkada authored
* documenting `attention_flax.py` file
* documenting `embeddings_flax.py`
* documenting `unet_blocks_flax.py`
* Add new objs to doc page
* document `vae_flax.py`
* Apply suggestions from code review
* modify `unet_2d_condition_flax.py`
* make style
* Apply suggestions from code review
* make style
* Apply suggestions from code review
* fix indent
* fix typo
* fix indent unet
* Update src/diffusers/models/vae_flax.py
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 20 Sep, 2022 2 commits
Mishig Davaadorj authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Use Flax schedulers (typing, docstring)
* Wrap model imports inside availability checks.
* more updates
* make sure flax is not broken
* make style
* more fixes
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
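A hedged usage sketch of the resulting pipeline on multi-device JAX (the checkpoint name and revision are illustrative, and this requires a diffusers release that still ships the Flax pipelines):

```python
import jax
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
)

prompts = ["a photo of an astronaut riding a horse"] * jax.device_count()
prompt_ids = shard(pipeline.prepare_inputs(prompts))
params = replicate(params)
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

images = pipeline(prompt_ids, params, rng, num_inference_steps=50, jit=True).images
```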
Younes Belkada authored
* first commit:
  - add `from_pt` argument in `from_pretrained` function
  - add `modeling_flax_pytorch_utils.py` file
* small nit
  - fix a small nit so we do not enter the second if condition
* major changes
  - modify FlaxUNet modules
  - first conversion script
  - more keys to be matched
* keys match
  - now all keys match
  - change module names for correct matching
  - upsample module name changed
* working v1
  - tests pass with atol and rtol = `4e-02`
* replace unused arg
* make quality
* add small docstring
* add more comments
  - add TODO for embedding layers
* small change
  - use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array
* add more conditions on conversion
  - add better test to check for keys conversion
* make shapes consistent
  - output `img_w x img_h x n_channels` from the VAE
* Revert "make shapes consistent" (this reverts commit 4cad1aeb4aeb224402dad13c018a5d42e96267f6)
* fix unet shape
  - channels first!
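Two things this adds, sketched below with an illustrative checkpoint name: `from_pt=True` converts PyTorch weights to Flax parameters at load time, and 0-dimensional `timesteps` arrays are expanded to shape `(1,)` with `jnp.expand_dims` before being embedded.

```python
import jax.numpy as jnp
from diffusers import FlaxUNet2DConditionModel

# Load PyTorch weights and convert them to Flax parameters on the fly
unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet", from_pt=True
)

# Guard for scalar timesteps, as described in the commit message
timesteps = jnp.array(10)                      # 0-dimensional array
if timesteps.ndim == 0:
    timesteps = jnp.expand_dims(timesteps, 0)  # -> shape (1,)
```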
- 15 Sep, 2022 1 commit
Pedro Cuenca authored
* First UNet Flax modeling blocks. Mimic the structure of the PyTorch files. The model classes themselves need work, depending on what we do about configuration and initialization.
* Remove FlaxUNet2DConfig class.
* ignore_for_config non-config args.
* Implement `FlaxModelMixin`
* Use new mixins for Flax UNet. For some reason the configuration is not correctly applied; the signature of the `__init__` method does not contain all the parameters by the time it's inspected in `extract_init_dict`.
* Import `FlaxUNet2DConditionModel` if flax is available.
* Rm unused method `framework`
* Update src/diffusers/modeling_flax_utils.py (Co-authored-by: Suraj Patil <surajp815@gmail.com>)
* Indicate types in flax.struct.dataclass as pointed out by @mishig25 (Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>)
* Fix typo in transformer block.
* make style
* some more changes
* make style
* Add comment
* Update src/diffusers/modeling_flax_utils.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Rm unneeded comment
* Update docstrings
* correct ignore kwargs
* make style
* Update docstring examples
* Make style
* Style: remove empty line.
* Apply style (after upgrading black from pinned version)
* Remove some commented code and unused imports.
* Add init_weights (not yet in use until #513).
* Trickle down deterministic to blocks.
* Rename q, k, v according to the latest PyTorch version. Note that weights were exported with the old names, so we need to be careful.
* Flax UNet docstrings, default props as in PyTorch.
* Fix minor typos in PyTorch docstrings.
* Use FlaxUNet2DConditionOutput as output from UNet.
* make style

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
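On the `flax.struct.dataclass` point raised in review: the UNet output container carries type-annotated fields, roughly as in the sketch below (the real `FlaxUNet2DConditionOutput` also builds on the diffusers `BaseOutput` utilities, so this only shows the shape of the pattern):

```python
import flax
import jax.numpy as jnp

@flax.struct.dataclass
class FlaxUNet2DConditionOutput:
    # the noise-prediction sample returned by the UNet forward pass
    sample: jnp.ndarray

out = FlaxUNet2DConditionOutput(sample=jnp.zeros((1, 4, 64, 64)))
print(out.sample.shape)  # (1, 4, 64, 64)
```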