- 30 Apr, 2024 1 commit
-
-
Sayak Paul authored
* introduce _no_split_modules.
* unnecessary spaces.
* remove unnecessary kwargs and style
* fix: accelerate imports.
* change to _determine_device_map
* add the blocks that have residual connections.
* add: CrossAttnUpBlock2D
* add: testing
* style
* line-spaces
* quality
* add disk offload test without safetensors.
* checking disk offloading percentages.
* change model split
* add: utility for checking multi-gpu requirement.
* model parallelism test
* splits.
* splits.
* splits
* splits.
* splits.
* splits.
* offload folder to test_disk_offload_with_safetensors
* add _no_split_modules
* fix-copies
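
This commit wires diffusers models into accelerate's dispatch logic via `_no_split_modules`, which is what allows a model checkpoint to be sharded across devices or offloaded to disk at load time. A minimal sketch of the capability being tested here; the checkpoint id and the `device_map` strategy are illustrative assumptions, not taken from the commit:

```python
import torch
from diffusers import UNet2DConditionModel

# With _no_split_modules defined on the model class, accelerate knows which
# blocks (e.g. those with residual connections) must stay on one device and
# can shard the rest across the available GPUs / CPU / disk.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint, for illustration only
    subfolder="unet",
    torch_dtype=torch.float16,
    device_map="auto",
)

# If a device map was applied, it records which module lives on which device.
print(getattr(unet, "hf_device_map", None))
```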
-
- 24 Apr, 2024 1 commit
-
-
Junsong Chen authored
* support PixArt-DMD
---------
Co-authored-by: jschen <chenjunsong4@h-partners.com>
Co-authored-by: badayvedat <badayvedat@gmail.com>
Co-authored-by: Vedat Baday <54285744+badayvedat@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
-
- 22 Apr, 2024 2 commits
-
-
Jenyuan-Huang authored
* enable control ip-adapter per-transformer block on-the-fly
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: ResearcherXman <xhs.research@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
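
With this change, `set_ip_adapter_scale` accepts a nested dict instead of a single float, so IP-Adapter strength can be tuned per transformer block on the fly. A rough sketch of that usage, assuming an SDXL pipeline with the public `h94/IP-Adapter` weights; the block names and scale values are illustrative:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")

# A plain float applies one scale everywhere; a dict (assumed layout below)
# scales individual down/up transformer blocks, e.g. to transfer style only.
pipe.set_ip_adapter_scale({
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
})
```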
-
Phil Butler authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 19 Apr, 2024 3 commits
-
-
Dhruv Nair authored
* update * update
-
YiYi Xu authored
* style
* Fix device map nits (#7705)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Fabio Rigano authored
* Switch to peft and multi proj layers
* Move Face ID loading and inference to core
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 16 Apr, 2024 1 commit
-
-
UmerHA authored
* CheckIn - created DownSubBlocks
* Added extra channels, implemented subblock fwd
* Fixed connection sizes
* checkin
* Removed iter, next in forward
* Models for SD21 & SDXL run through
* Added back pipelines, cleared up connections
* Cleaned up connection creation
* added debug logs
* updated logs
* logs: added input loading
* Update umer_debug_logger.py
* log: Loading hint
* Update umer_debug_logger.py
* added logs
* Changed debug logging
* debug: added more logs
* Fixed num_norm_groups
* Debug: Logging all of SDXL input
* Update umer_debug_logger.py
* debug: updated logs
* checkin
* Readded tests
* Removed debug logs
* Fixed Slow Tests
* Added value checks | Updated model_cpu_offload_seq
* accelerate-offloading works ; fast tests work
* Made unet & addon explicit in controlnet
* Updated slow tests
* Added dtype/device to ControlNetXS
* Filled in test model paths
* Added image_encoder/feature_extractor to XL pipe
* Fixed fast tests
* Added comments and docstrings
* Fixed copies
* Added docs ; Updates slow tests
* Moved changes to UNetMidBlock2DCrossAttn
* tiny cleanups
* Removed stray prints
* Removed ip adapters + freeU - Removed ip adapters + freeU as they don't make sense for ControlNet-XS - Fixed imports of UNet components
* Fixed test_save_load_float16
* Make style, quality, fix-copies
* Changed loading/saving API for ControlNetXS - Changed loading/saving API for ControlNetXS - other small fixes
* Removed ControlNet-XS from research examples
* Make style, quality, fix-copies
* Small fixes - deleted ControlNetXSModel.init_original - added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained - fixed copy hints
* checkin May 11 '23
* CheckIn Mar 12 '24
* Fixed tests for SD
* Added tests for UNetControlNetXSModel
* Fixed SDXL tests
* cleanup
* Delete Pipfile
* CheckIn Mar 20 Started replacing sub blocks by `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
* check-in Mar 23
* checkin 24 Mar
* Created init for UNetCnxs and CnxsAddon
* CheckIn
* Made from_modules, from_unet and no_control work
* make style, quality, fix-copies & small changes
* Fixed freezing
* Added gradient ckpt'ing; fixed tests
* Fix slow tests (+compile) ; clear naming confusion
* Don't create UNet in init ; removed class_emb
* Incorporated review feedback - Deleted get_base_pipeline / get_controlnet_addon for pipes - Pipes inherit from StableDiffusionXLPipeline - Made module dicts for cnxs-addon's down/mid/up classes - Added support for qkv fusion and freeU
* Make style, quality, fix-copies
* Implemented review feedback
* Removed compatibility check for vae/ctrl embedding
* make style, quality, fix-copies
* Delete Pipfile
* Integrated review feedback - Importing ControlNetConditioningEmbedding now - get_down/mid/up_block_addon now outside class - renamed `do_control` to `apply_control`
* Reduced size of test tensors For this, added `norm_num_groups` as parameter everywhere
* Renamed cnxs-`Addon` to cnxs-`Adapter` - `ControlNetXSAddon` -> `ControlNetXSAdapter` - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for mid/up
* Fixed save_pretrained/from_pretrained bug
* Removed redundant code
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
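
After this refactor, ControlNet-XS is exposed through `ControlNetXSAdapter` plus dedicated pipelines such as `StableDiffusionXLControlNetXSPipeline`. A rough sketch of the resulting API; the adapter checkpoint id and the conditioning-image URL are placeholders, and the call arguments mirror the regular ControlNet pipelines as an assumption:

```python
import torch
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
from diffusers.utils import load_image

# Hypothetical adapter checkpoint id, for illustration only.
controlnet = ControlNetXSAdapter.from_pretrained(
    "some-org/controlnet-xs-sdxl-canny", torch_dtype=torch.float16
)

pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed SDXL base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# An edge-map conditioning image; the URL is a placeholder.
canny_image = load_image("https://example.com/canny-edges.png")

image = pipe(
    "an aerial photo of a futuristic city",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
```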
-
- 10 Apr, 2024 3 commits
-
-
IDKiro authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
* get device <-> component mapping when using multiple gpus.
* condition the device_map bits.
* relax condition
* device_map progress.
* device_map enhancement
* some cleaning up and debugging
* Apply suggestions from code review Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* incorporate suggestions from PR.
* remove multi-gpu condition for now.
* guard check the component -> device mapping
* fix: device_memory variable
* dispatching transformers model to have force_hooks=True
* better guarding for transformers device_map
* introduce support balanced_low_memory and balanced_ultra_low_memory.
* remove device_map patch.
* fix: intermediate variable scoping.
* fix: condition in cpu offload.
* fix: flax class restrictions.
* remove modifications from cpu_offload and model_offload
* incorporate changes.
* add a simple forward pass test
* add: torch_device in get_inputs()
* add: tests
* remove print
* safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map.
* style
* remove .
* safeguard device_map with more checks and remove invalid device_mapping strategies.
* make a class attribute and adjust tests accordingly.
* fix device_map check
* fix test
* adjust comment
* fix: device_map attribute
* fix: dispatching.
* max_memory test for pipeline
* version guard the tests
* fix guard.
* address review feedback.
* reset_device_map method.
* add: test for reset_hf_device_map
* fix a couple things.
* add reset_device_map() in the error message.
* add tests for checking reset_device_map doesn't have unintended consequences.
* fix reset_device_map and offloading tests.
* create _get_final_device_map utility.
* hf_device_map -> _hf_device_map
* add documentation
* add notes suggested by Marc.
* styling.
* Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* move updates within gpu condition.
* other docs related things
* note on ignore a device not specified in .
* provide a suggestion if device mapping errors out.
* fix: typo.
* _hf_device_map -> hf_device_map
* Empty-Commit
* add: example hf_device_map.
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
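
The end result of this work is pipeline-level device mapping: `device_map="balanced"` in `from_pretrained` spreads a pipeline's components across the available GPUs, `hf_device_map` exposes the resulting placement, and `reset_device_map()` undoes it before using `.to()` or offloading. A short sketch of that workflow; the checkpoint id is an assumption:

```python
import torch
from diffusers import DiffusionPipeline

# Spread the pipeline's components (UNet, text encoder, VAE, ...) across GPUs.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
    device_map="balanced",
)
print(pipe.hf_device_map)  # component-to-device mapping chosen at load time

# Undo the mapping before using .to() or CPU offloading on this pipeline.
pipe.reset_device_map()
pipe.enable_model_cpu_offload()
```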
-
Sayak Paul authored
* refactor transformer_2d forward logic into meaningful conditions.
* Empty-Commit
* fix: _operate_on_patched_inputs
* fix: _operate_on_patched_inputs
* check
* fix: patch output computation block.
* fix: _operate_on_patched_inputs.
* remove print.
* move operations to blocks.
* more readability neats.
* empty commit
* Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Revert "Apply suggestions from code review" This reverts commit 12178b1aa0da3c29434e95a2a0126cf3ef5706a7.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 09 Apr, 2024 1 commit
-
-
Fabio Rigano authored
* Support multiimage masking
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 03 Apr, 2024 1 commit
-
-
Sayak Paul authored
* refactor transformers 2d into multiple legacy variants.
* fix: init.
* fix recursive init.
* add inits.
* make transformer block creation more modular.
* complete refactor.
* remove forward
* debug
* remove legacy blocks and refactor within the module itself.
* remove print
* guard caption projection
* remove fetcher.
* reduce the number of args.
* fix: norm_type
* group variables that are shared.
* remove _get_transformer_blocks
* harmonize the init function signatures.
* transformer_blocks to common
* repeat .
-
- 02 Apr, 2024 2 commits
-
-
Sayak Paul authored
* add: utility to format our docs too 📜
* debugging saga
* fix: message
* checking
* should be fixed.
* revert pipeline_fixture
* remove empty line
* make style
* fix: setup.py
* style.
-
Sayak Paul authored
* remove class assignments for linear and conv. * fix: self.nn
-
- 01 Apr, 2024 2 commits
-
-
YiYi Xu authored
* add from_pipe
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
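
`from_pipe` lets a new pipeline class be built from an already-loaded one, reusing the same model components instead of downloading and allocating them again. A rough sketch of the intended usage; the SAG pipeline is just one plausible target class here, and the checkpoint id is an assumption:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionSAGPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)

# Reuse the already-loaded UNet/VAE/text encoder in a different pipeline class,
# without reloading weights or doubling memory use.
pipe_sag = StableDiffusionSAGPipeline.from_pipe(pipe)
```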
-
Jianbing Wu authored
* Fix SVD bug (shape of `time_context`)
* Formatting code
* Formatting src/diffusers/models/transformers/transformer_temporal.py by `make style && make quality`
---------
Co-authored-by: kevinkhwu <kevinkhwu@tencent.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 26 Mar, 2024 1 commit
-
-
M. Tolga Cangöz authored
* Fix typos
* Add docstring to `decode` method in `ConsistencyDecoderVAE`
* Fix tiling
* Enable tiled VAE decoding with customizable tile sample size and overlap factor
* Revert "Enable tiled VAE decoding with customizable tile sample size and overlap factor" This reverts commit 181049675e83cea7b33ae2bbeba2aff7ae1b1761.
* Add VAE tiling test for `ConsistencyDecoderVAE`
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
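
Tiled decoding splits a large latent into overlapping tiles so the VAE decodes within a bounded memory footprint; this commit fixes that path for `ConsistencyDecoderVAE` and adds a test. A minimal sketch of how tiling is typically switched on, assuming the public consistency-decoder checkpoint:

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
    vae=vae,
    torch_dtype=torch.float16,
)

# Decode the latent in overlapping tiles to keep peak memory bounded
# at large resolutions.
pipe.vae.enable_tiling()
```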
-
- 21 Mar, 2024 1 commit
-
-
Yuanhao Zhai authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 20 Mar, 2024 1 commit
-
-
Sayak Paul authored
* cleanse and refactor lora testing suite. * more cleanup. * make check_if_lora_correctly_set a utility function * fix: typo * retrigger ci * style
-
- 19 Mar, 2024 6 commits
-
-
Dhruv Nair authored
update
-
Stephen authored
* Change path to posix
* running isort
* run style and quality checks
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
laksjdjf authored
* Fix ControlNetModel.from_unet does not load add_embedding
* delete white space in blank line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
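
`ControlNetModel.from_unet` initializes a ControlNet from an existing UNet's configuration and matching weights; the fix ensures the `add_embedding` used by SDXL-style conditioning is carried over too. A small sketch of the API in question, with an assumed SDXL checkpoint:

```python
import torch
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed checkpoint
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Initialize a ControlNet whose encoder mirrors the UNet; after this fix the
# UNet's add_embedding (additional time/text conditioning) is copied as well.
controlnet = ControlNetModel.from_unet(unet)
```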
-
Sayak Paul authored
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print
-
lawfordp2017 authored
* Correction for non-integral image resolutions with quantizations other than float32. * Support for training, and use of diffusers-style casting.
-
Sayak Paul authored
* pop scale from the top-level unet instead of getting it.
* improve readability.
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix a little bit.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 18 Mar, 2024 1 commit
-
-
M. Tolga Cangöz authored
* Fix PyTorch's convention for inplace functions
* Fix import structure in __init__.py and update config loading logic in test_config.py
* Update configuration access
* Fix typos
* Trim trailing white spaces
* Fix typo in logger name
* Revert "Fix PyTorch's convention for inplace functions" This reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d.
* Fix typo in step_index property description
* Revert "Update configuration access" This reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8.
* Revert "Fix import structure in __init__.py and update config loading logic in test_config.py" This reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171.
* Fix typos
* Fix typos
* Fix typos
* Fix a typo: tranform -> transform
-
- 14 Mar, 2024 1 commit
-
-
M. Tolga Cangöz authored
* Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
* Fix variable name typo and update comments
* Update deprecated `output_type="numpy"` to "np" in test files
* Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
* Update test_stable_diffusion_panorama.py
* Update numbers in README.md
* Update get_guidance_scale_embedding method to use timesteps instead of w
* Update number of checkpoints in README.md
* Add type hints and fix var name
* Fix PyTorch's convention for inplace functions
* Fix a typo
* Revert "Fix PyTorch's convention for inplace functions" This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
* Fix typos
* Indent
* Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
-
- 13 Mar, 2024 4 commits
-
-
Alexander Bonnet authored
* fix typo in UNet2DConditionModel documentation * Fix indentation that may fix doc rendering * Fix squished doc lines
-
Dhruv Nair authored
* update * update * update * update * update * update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* fix PyTorch classes and start deprecation cycles.
* remove args crafting for accommodating scale.
* remove scale check in feedforward.
* assert against nn.Linear and not CompatibleLinear.
* remove conv_cls and linear_cls.
* remove scale
* 👋 scale.
* fix: unet2dcondition
* fix attention.py
* fix: attention.py again
* fix: unet_2d_blocks.
* fix-copies.
* more fixes.
* fix: resnet.py
* more fixes
* fix i2vgenxl unet.
* deprecate scale gently.
* fix-copies
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com>
* quality
* throw warning when scale is passed to the BasicTransformerBlock class.
* remove scale from signature.
* cross_attention_kwargs, very nice catch by Yiyi
* fix: logger.warn
* make deprecation message clearer.
* address final comments.
* maintain same deprecation message and also add it to activations.
* address yiyi
* fix copies
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com>
* more deprecation
* fix-copies
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
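
The `scale` argument being deprecated here is the per-layer LoRA scale that used to be threaded through individual attention and feed-forward modules; callers keep passing it through `cross_attention_kwargs` at the pipeline level instead. A brief sketch of that calling convention, with an assumed base checkpoint and a hypothetical LoRA repo:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA repo, for illustration

# The LoRA strength still goes in via cross_attention_kwargs; passing `scale`
# directly to internal blocks is what this change deprecates.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```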
-
Sayak Paul authored
switch to logger.warning
-
- 09 Mar, 2024 2 commits
-
-
Sayak Paul authored
remove tf mention
-
Xiaodong Wang authored
[UNet_Spatio_Temporal_Condition] fix default num_attention_heads in unet_spatio_temporal_condition (#7205)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 08 Mar, 2024 2 commits
-
-
Martin Müller authored
* make mid block optional for flax UNet * make style
-
Sayak Paul authored
* throw error when patch inputs and layernorm are provided for transformers2d.
* add comment on supported norm_types in transformers2d
* more check
* fix: norm_type handling
-
- 07 Mar, 2024 1 commit
-
-
Rinne authored
fix: remove duplicate code in TemporalBasicTransformerBlock. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 06 Mar, 2024 1 commit
-
-
Kashif Rasul authored
* initial diffNext v3
* move to v3 folder
* imports
* dry up the unets
* no switch_level
* fix init
* add switch_level to config
* Fixed some things
* Added pooled text embeddings
* Initial work on adding image encoder
* changes from @dome272
* Stuff for the image encoder processing and variable naming in decoder
* fix arg name
* inference fixes
* inference fixes
* default TimestepBlock without conds
* c_skip=0 by default
* fix bfloat16 to cpu
* use config
* undo temp change
* fix gen_c_embeddings args
* change text encoding
* text encoding
* undo print
* undo .gitignore change
* Allow WuerstchenV3PriorPipeline to use the base DDPM & DDIM schedulers
* use WuerstchenV3Unet in both pipelines
* fix imports
* initial failing tests
* cleanup
* use scheduler.timesteps
* some fixes to the tests, still not fully working
* fix tests
* fix prior tests
* add dropout to the model_kwargs
* more tests passing
* update expected_slice
* initial rename
* rename tests
* rename class names
* make fix-copies
* initial docs
* autodocs
* typos
* fix arg docs
* add text_encoder info
* combined pipeline has optional image arg
* fix documentation
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* use self.config
* Update src/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* c_in -> in_channels
* removed kwargs from unet's forward
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* remove older callback api
* removed kwargs and fixed decoder guidance > 1
* decoder takes embeds
* check and use image_embeds
* fixed all but one decoder test
* fix decoder tests
* update callback api
* fix some more combined tests
* push combined pipeline
* initial docs
* fix doc_string
* update combined api
* no test_callback_inputs test for combined pipeline
* add optional components
* fix ordering of components
* fix combined tests
* update convert script
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix imports
* move effnet out of denoising loop
* prompt_embeds_pooled only when doing guidance
* Fix repeat shape
* move StableCascadeUnet to models/unets/
* more descriptive names
* converted when numpy()
* StableCascadePriorPipelineOutput docs
* rename StableCascadeUNet
* add slow tests
* fix slow tests
* update
* update
* updated model_path
* add args for weights
* set push_to_hub to false
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: Dominic Rampas <d6582533@gmail.com>
Co-authored-by: Pablo Pernias <pablo@pernias.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: 99991 <99991@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
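
Stable Cascade generates in two stages: a prior pipeline produces image embeddings in the highly compressed latent space, and a decoder pipeline turns those embeddings into pixels. A rough sketch of the two-stage call, assuming the public `stabilityai/stable-cascade-prior` and `stabilityai/stable-cascade` checkpoints and typical argument names:

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
).to("cuda")

prompt = "an armchair shaped like an avocado"

# Stage 1: the prior produces image embeddings in the compressed latent space.
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)

# Stage 2: the decoder turns those embeddings into the final image.
image = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.float16),
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
```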
-
- 04 Mar, 2024 1 commit
-
-
fpgaminer authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 03 Mar, 2024 1 commit
-
-
Junsong Chen authored
* feat 256px diffusers inference bug
* change the max_length of T5 to pipeline config file
* fix bug in convert_pixart_alpha_to_diffusers.py
* Update scripts/convert_pixart_alpha_to_diffusers.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* remove multi_scale_train parser
* Update src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* styling
* change `model_token_max_length` to call argument.
* Refactoring
* add: max_sequence_length to the docstring.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
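
The practical outcome is that the T5 prompt length becomes a per-call `max_sequence_length` argument on the PixArt-α pipeline rather than a fixed model-level constant, which is what makes the 256px checkpoints usable. A small sketch of the call; the 256px repo id is an assumption and 120 is simply the usual PixArt-α default:

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-256x256",  # assumed 256px checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

# max_sequence_length caps the T5 prompt tokens per call instead of relying
# on a hard-coded constant in the model.
image = pipe(
    "a small cactus wearing a straw hat in the desert",
    max_sequence_length=120,
).images[0]
```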
-