- 24 Nov, 2022 2 commits
-
-
Anton Lozhkov authored
* Support SD2 attention slicing
* Support SD2 attention slicing
* Add more copies
* Use attn_num_head_channels in blocks
* fix-copies
* Update tests
* fix imports
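For context, a minimal sketch of what this enables from the user side: attention slicing can be toggled on a Stable Diffusion 2 pipeline. The checkpoint id and dtype below are illustrative assumptions, not part of the commit.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint id; any SD2-style pipeline is handled the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Compute attention in slices to lower peak memory, at a small speed cost.
pipe.enable_attention_slicing()
image = pipe("an astronaut riding a horse").images[0]
```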
-
Suraj Patil authored
* allow disabling self attention
* add class_embedding
* fix copies
* fix condition
* fix copies
* do_self_attention -> only_cross_attention
* fix copies
* num_classes -> num_class_embeds
* fix default value
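A minimal sketch of the two new knobs on `UNet2DConditionModel` (`only_cross_attention` and `num_class_embeds`); the small dimensions below are illustrative assumptions chosen to keep the example cheap.

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=16,
    block_out_channels=(32, 64, 64, 64),  # illustrative, tiny config
    cross_attention_dim=32,
    only_cross_attention=True,  # drop self-attention, keep only cross-attention
    num_class_embeds=10,        # adds a learned class-embedding table
)

sample = torch.randn(1, 4, 16, 16)
encoder_hidden_states = torch.randn(1, 77, 32)
class_labels = torch.tensor([3])  # required once num_class_embeds is set
out = unet(
    sample, timestep=1, encoder_hidden_states=encoder_hidden_states,
    class_labels=class_labels,
).sample
```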
-
- 23 Nov, 2022 2 commits
-
-
Suraj Patil authored
* boom boom
* remove duplicate arg
* add use_linear_proj arg
* fix copies
* style
* add fast tests
* use_linear_proj -> use_linear_projection
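A minimal sketch of the new flag; the other arguments are illustrative. `use_linear_projection=True` makes the transformer blocks project with `nn.Linear` instead of 1x1 convolutions, matching the Stable Diffusion 2 UNet.

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,              # illustrative
    cross_attention_dim=1024,    # SD2 uses 1024-dim OpenCLIP text embeddings
    use_linear_projection=True,  # Linear in/out projections instead of 1x1 convs
)
```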
-
Patrick von Platen authored
* up
* convert dual unet
* revert dual attn
* adapt for vd-official
* test the full pipeline
* mixed inference
* mixed inference for text2img
* add image prompting
* fix clip norm
* split text2img and img2img
* fix format
* refactor text2img
* mega pipeline
* add optimus
* refactor image var
* wip text_unet
* text unet end to end
* update tests
* reshape
* fix image to text
* add some first docs
* dual guided pipeline
* fix token ratio
* propose change
* dual transformer as a native module
* DualTransformer(nn.Module)
* DualTransformer(nn.Module)
* correct unconditional image
* save-load with mega pipeline
* remove image to text
* up
* uP
* fix
* up
* final fix
* remove_unused_weights
* test updates
* save progress
* uP
* fix dual prompts
* some fixes
* finish
* style
* finish renaming
* up
* fix
* fix
* fix
* finish

Co-authored-by: anton-l <anton@huggingface.co>
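A hedged sketch of the resulting Versatile Diffusion "mega" pipeline; the checkpoint id and method names follow the released integration and may differ slightly from the exact state of this commit.

```python
import torch
from diffusers import VersatileDiffusionPipeline

pipe = VersatileDiffusionPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")

# One checkpoint, several tasks: text-to-image and image variation share weights.
image = pipe.text_to_image("a red panda reading a book").images[0]
variation = pipe.image_variation(image).images[0]
```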
-
- 04 Nov, 2022 1 commit
-
-
Chenguo Lin authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 03 Nov, 2022 1 commit
-
-
Will Berman authored
* Changes for the VQ-diffusion VQVAE

  Add the ability to specify the dimension of embeddings in `VQModel`: by default, `VQModel` sets the embedding dimension to the number of latent channels, but the VQ-diffusion VQVAE uses a smaller embedding dimension (128) than its number of latent channels (256). Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down unet block helpers; VQ-diffusion's VQVAE uses those two block types.

* Changes for the VQ-diffusion transformer

  Modify attention.py so SpatialTransformer can be used for VQ-diffusion's transformer.

  SpatialTransformer:
  - Can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones.
  - `in_channels` was made optional in the constructor, so two call sites that passed it as a positional arg were moved to kwargs.
  - Modified the forward pass to take optional timestep embeddings.

  ImagePositionalEmbeddings:
  - Added to provide positional embeddings for the discrete latent-pixel inputs.

  BasicTransformerBlock:
  - Norm layers were made configurable so that VQ-diffusion can use AdaLayerNorm with timestep embeddings.
  - Modified the forward pass to take optional timestep embeddings.

  CrossAttention:
  - May now optionally take a bias parameter for its query, key, and value linear layers.

  FeedForward:
  - Internal layers are now configurable.

  ApproximateGELU:
  - Activation function used in VQ-diffusion's feedforward layer.

  AdaLayerNorm:
  - Norm layer modified to incorporate timestep embeddings.

* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review (Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* finish
* finish
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
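A minimal sketch of running the new pipeline end to end; the checkpoint id follows the released VQ-diffusion integration and is an assumption rather than part of this commit.

```python
import torch
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained(
    "microsoft/vq-diffusion-ithq", torch_dtype=torch.float16
).to("cuda")

# VQ-diffusion denoises discrete latent codes rather than continuous latents.
image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
```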
-
- 02 Nov, 2022 1 commit
-
-
MatthieuTPHR authored
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one-letter variables to appropriate names
* Remove env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the mem eff benchmarks were run
* Removing autocast from optimization markdown
* import_utils: import torch only if it is available

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
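A minimal sketch of the new toggle; it requires `xformers` to be installed, and the checkpoint id is an illustrative assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()
image = pipe("a fantasy landscape, trending on artstation").images[0]
```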
-
- 31 Oct, 2022 1 commit
-
-
Laurent Mazare authored
Remove an unused parameter. The `downsample_padding` parameter does not seem to be used in `CrossAttnUpBlock2D` (or by any up block, for that matter), so it is removed.
-
- 25 Oct, 2022 1 commit
-
-
Patrick von Platen authored
* start
* add more logic
* Update src/diffusers/models/unet_2d_condition_flax.py
* match weights
* up
* make model work
* making class more general, fixing missed file rename
* small fix
* make new conversion work
* up
* finalize conversion
* up
* first batch of variable renamings
* remove c and c_prev var names
* add mid and out block structure
* add pipeline
* up
* finish conversion
* finish
* upload
* more fixes
* Apply suggestions from code review
* add attr
* up
* uP
* up
* finish tests
* finish
* uP
* finish
* fix test
* up
* naming consistency in tests
* Apply suggestions from code review (Co-authored-by: Suraj Patil <surajp815@gmail.com>, Pedro Cuenca <pedro@huggingface.co>, Nathan Lambert <nathan@huggingface.co>, Anton Lozhkov <anton@huggingface.co>)
* remove hardcoded 16
* Remove bogus
* fix some stuff
* finish
* improve logging
* docs
* upload

Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
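A hedged sketch of the Flax Stable Diffusion pipeline that this UNet port feeds into; the checkpoint, `revision`, and dtype are assumptions taken from the released Flax integration, not from this commit.

```python
import jax
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# from_pretrained returns both the pipeline and its parameter pytree.
pipe, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
)

prompt_ids = pipe.prepare_inputs(["a photo of an astronaut riding a horse"])
images = pipe(prompt_ids, params, jax.random.PRNGKey(0), num_inference_steps=25).images
```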
-
- 12 Oct, 2022 1 commit
-
-
Nathan Lambert authored
* add or fix license formatting * fix quality
-
- 30 Sep, 2022 1 commit
-
-
Josh Achiam authored
* Allow resolutions that are not multiples of 64
* ran black
* fix bug
* add test
* more explanation
* more comments

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
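A minimal sketch of what the change allows: heights and widths now only need to be divisible by 8 (the VAE downscaling factor) rather than by 64. The checkpoint id and the exact resolution are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# 504 and 760 are multiples of 8 but not of 64 -- previously this would fail.
image = pipe("a watercolor of a lighthouse", height=504, width=760).images[0]
```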
-
- 22 Sep, 2022 1 commit
-
-
Suraj Patil authored
* add grad ckpt to downsample blocks
* make it work
* don't pass gradient_checkpointing to upsample block
* add tests for UNet2DConditionModel
* add test_gradient_checkpointing
* add gradient_checkpointing for up and down blocks
* add functions to enable and disable grad ckpt
* remove the forward argument
* better naming
* make supports_gradient_checkpointing private
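A minimal sketch of the new enable/disable helpers; the checkpoint id is an illustrative assumption and the training loop itself is omitted.

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)

# Recompute down/up-block activations during backward to trade compute for memory.
unet.enable_gradient_checkpointing()
# ... training steps ...
unet.disable_gradient_checkpointing()
```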
-
- 16 Sep, 2022 1 commit
-
-
Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently (ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010, Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>)
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
-
- 15 Sep, 2022 1 commit
-
-
Suraj Patil authored
* pass norm_num_groups to unet blocks and attention
* fix UNet2DConditionModel
* add norm_num_groups arg in vae
* add tests
* remove comment
* Apply suggestions from code review
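A minimal sketch of the new argument; the values below are illustrative and must divide every block's channel count.

```python
from diffusers import AutoencoderKL, UNet2DConditionModel

# The GroupNorm group count is now configurable end to end.
vae = AutoencoderKL(norm_num_groups=8)            # default block_out_channels=(64,)
unet = UNet2DConditionModel(norm_num_groups=16)   # default channels start at 320
```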
-
- 08 Sep, 2022 1 commit
-
-
Patrick von Platen authored
* Update black * update table
-
- 06 Sep, 2022 1 commit
-
-
Patrick von Platen authored
* up
* add tests
* correct
* up
* finish
* better naming
* Update README.md (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 04 Sep, 2022 1 commit
-
-
Yuntian Deng authored
Update unet_blocks.py: fix typo
-
- 25 Aug, 2022 1 commit
-
-
Patrick von Platen authored
* CleanResNet * refactor more * correct
-
- 10 Aug, 2022 1 commit
-
-
Suraj Patil authored
-
- 05 Aug, 2022 1 commit
-
-
Suraj Patil authored
add cross_attention_dim as an argument
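A minimal sketch; 768 is illustrative and matches the hidden size of CLIP ViT-L/14 text embeddings.

```python
from diffusers import UNet2DConditionModel

# cross_attention_dim sets the width of the text-encoder hidden states that
# the cross-attention layers attend over.
unet = UNet2DConditionModel(cross_attention_dim=768)
```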
-
- 28 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* [Vae and AutoencoderKL clean] * save intermediate finished work * more progress * more progress * finish modeling code * save intermediate * finish * Correct tests
-
- 20 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* up * change model name * renaming * more changes * up * up * up * save checkpoint * finish api / naming * finish config renaming * rename all weights * finish really
-
- 19 Jul, 2022 2 commits
-
-
Patrick von Platen authored
* big purge * more fixes * finish for now
-
Patrick von Platen authored
* upload * make checkpoint work * finalize
-
- 18 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* up * more * uP * make dummy test pass * save intermediate * p * p * finish * finish * finish
-
- 14 Jul, 2022 2 commits
-
-
Patrick von Platen authored
* up * finish * uP
-
Patrick von Platen authored
* save intermediate * up * up
-
- 13 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* up * finished clean up * remove @
-
- 12 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* uP * finish downsampling layers * finish major refactor * remove bogus file
-
- 05 Jul, 2022 1 commit
-
-
Patrick von Platen authored
* upload files * finish
-
- 04 Jul, 2022 2 commits
-
-
Patrick von Platen authored
Finish
-
Patrick von Platen authored
* update mid block * finish mid block
-