- 22 Apr, 2024 1 commit
-
-
Phil Butler authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 19 Apr, 2024 5 commits
-
-
Sai-Suraj-27 authored
* Fixed type annotations for compatibility with Python 3.8 * Added required imports.
-
Dhruv Nair authored
* update * update
-
Dhruv Nair authored
update
-
YiYi Xu authored
* style * Fix device map nits (#7705)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Fabio Rigano authored
* Switch to peft and multi proj layers * Move Face ID loading and inference to core
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 16 Apr, 2024 2 commits
-
-
Sayak Paul authored
* is_cosxl_edit arg in SDXL ip2p. * Empty-Commit * doc * remove redundant logic. * reflect Dhruv's comments.
---------
Co-authored-by: Yiyi Xu <yixu310@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
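A minimal usage sketch of the new flag, assuming `is_cosxl_edit` is forwarded as a pipeline init kwarg through `from_pretrained`; the checkpoint id below is a placeholder, not a verified repo:

```python
# Hedged sketch: enables the CosXL-Edit behaviour on the SDXL InstructPix2Pix pipeline.
import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline

pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "path/to/cosxl-edit-checkpoint",  # placeholder repo id
    torch_dtype=torch.float16,
    is_cosxl_edit=True,               # assumed __init__ kwarg added by this change
).to("cuda")
```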
-
UmerHA authored
* CheckIn - created DownSubBlocks
* Added extra channels, implemented subblock fwd
* Fixed connection sizes
* checkin
* Removed iter, next in forward
* Models for SD21 & SDXL run through
* Added back pipelines, cleared up connections
* Cleaned up connection creation
* added debug logs
* updated logs
* logs: added input loading
* Update umer_debug_logger.py
* log: Loading hint
* Update umer_debug_logger.py
* added logs
* Changed debug logging
* debug: added more logs
* Fixed num_norm_groups
* Debug: Logging all of SDXL input
* Update umer_debug_logger.py
* debug: updated logs
* checkin
* Readded tests
* Removed debug logs
* Fixed Slow Tests
* Added value checks | Updated model_cpu_offload_seq
* accelerate-offloading works; fast tests work
* Made unet & addon explicit in controlnet
* Updated slow tests
* Added dtype/device to ControlNetXS
* Filled in test model paths
* Added image_encoder/feature_extractor to XL pipe
* Fixed fast tests
* Added comments and docstrings
* Fixed copies
* Added docs; Updated slow tests
* Moved changes to UNetMidBlock2DCrossAttn
* tiny cleanups
* Removed stray prints
* Removed ip adapters + freeU
  - Removed ip adapters + freeU as they don't make sense for ControlNet-XS
  - Fixed imports of UNet components
* Fixed test_save_load_float16
* Make style, quality, fix-copies
* Changed loading/saving API for ControlNetXS
  - Changed loading/saving API for ControlNetXS
  - other small fixes
* Removed ControlNet-XS from research examples
* Make style, quality, fix-copies
* Small fixes
  - deleted ControlNetXSModel.init_original
  - added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
  - fixed copy hints
* checkin May 11 '23
* CheckIn Mar 12 '24
* Fixed tests for SD
* Added tests for UNetControlNetXSModel
* Fixed SDXL tests
* cleanup
* Delete Pipfile
* CheckIn Mar 20: started replacing sub blocks by `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
* check-in Mar 23
* checkin 24 Mar
* Created init for UNetCnxs and CnxsAddon
* CheckIn
* Made from_modules, from_unet and no_control work
* make style, quality, fix-copies & small changes
* Fixed freezing
* Added gradient ckpt'ing; fixed tests
* Fix slow tests (+compile); clear naming confusion
* Don't create UNet in init; removed class_emb
* Incorporated review feedback
  - Deleted get_base_pipeline / get_controlnet_addon for pipes
  - Pipes inherit from StableDiffusionXLPipeline
  - Made module dicts for cnxs-addon's down/mid/up classes
  - Added support for qkv fusion and freeU
* Make style, quality, fix-copies
* Implemented review feedback
* Removed compatibility check for vae/ctrl embedding
* make style, quality, fix-copies
* Delete Pipfile
* Integrated review feedback
  - Importing ControlNetConditioningEmbedding now
  - get_down/mid/up_block_addon now outside class
  - renamed `do_control` to `apply_control`
* Reduced size of test tensors; for this, added `norm_num_groups` as a parameter everywhere
* Renamed cnxs-`Addon` to cnxs-`Adapter`
  - `ControlNetXSAddon` -> `ControlNetXSAdapter`
  - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
  - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for mid/up
* Fixed save_pretrained/from_pretrained bug
* Removed redundant code
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
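A minimal sketch of the post-refactor loading flow this commit describes (adapter and pipeline are loaded separately and combined via `controlnet`); the adapter repo id is a placeholder and the control image is a stand-in:

```python
# Hedged sketch: ControlNetXSAdapter + SDXL ControlNet-XS pipeline after the rename/refactor.
import torch
from PIL import Image
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline

controlnet = ControlNetXSAdapter.from_pretrained(
    "path/to/controlnet-xs-canny", torch_dtype=torch.float16  # placeholder repo id
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_image = Image.new("RGB", (1024, 1024))  # stand-in for a real Canny edge map
image = pipe("a futuristic city at dusk", image=canny_image).images[0]
```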
-
- 12 Apr, 2024 1 commit
-
-
Benjamin Bossan authored
Fix a bug that causes the call to set_lora_device to ignore the DoRA parameters.
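For context, a minimal sketch of the API the fix touches: `set_lora_device` moves the named adapters (and, after this fix, their DoRA magnitude parameters) to the requested device. The LoRA repo id and adapter name are placeholders:

```python
# Hedged sketch of set_lora_device usage with a DoRA-style adapter.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/dora-lora", adapter_name="my_dora")  # placeholder repo id
pipe.set_lora_device(["my_dora"], "cuda")  # previously left the DoRA parameters behind
```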
-
- 11 Apr, 2024 5 commits
-
-
Sai-Suraj-27 authored
Replaced deprecated logger.warn with logger.warning.
-
Yiqin Zhao authored
-
Steven Munn authored
* Skip scaling if scale is identity * move check for weight one to scale and unscale lora * fix code style/quality * Empty-Commit
---------
Co-authored-by: Steven Munn <stevenjmunn@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Munn <5297082+stevenjlm@users.noreply.github.com>
-
Sayak Paul authored
* playground vae encoding should use std and mean of the vae. * style. * fix-copies.
-
YiYi Xu authored
* fix * up
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 10 Apr, 2024 3 commits
-
-
IDKiro authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
* get device <-> component mapping when using multiple gpus.
* condition the device_map bits.
* relax condition
* device_map progress.
* device_map enhancement
* some cleaning up and debugging
* Apply suggestions from code review
* incorporate suggestions from PR.
* remove multi-gpu condition for now.
* guard check the component -> device mapping
* fix: device_memory variable
* dispatching transformers model to have force_hooks=True
* better guarding for transformers device_map
* introduce support for balanced_low_memory and balanced_ultra_low_memory.
* remove device_map patch.
* fix: intermediate variable scoping.
* fix: condition in cpu offload.
* fix: flax class restrictions.
* remove modifications from cpu_offload and model_offload
* incorporate changes.
* add a simple forward pass test
* add: torch_device in get_inputs()
* add: tests
* remove print
* safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map.
* style
* remove .
* safeguard device_map with more checks and remove invalid device_mapping strategies.
* make a class attribute and adjust tests accordingly.
* fix device_map check
* fix test
* adjust comment
* fix: device_map attribute
* fix: dispatching.
* max_memory test for pipeline
* version guard the tests
* fix guard.
* address review feedback.
* reset_device_map method.
* add: test for reset_hf_device_map
* fix a couple things.
* add reset_device_map() in the error message.
* add tests for checking reset_device_map doesn't have unintended consequences.
* fix reset_device_map and offloading tests.
* create _get_final_device_map utility.
* hf_device_map -> _hf_device_map
* add documentation
* add notes suggested by Marc.
* styling.
* Apply suggestions from code review
* move updates within gpu condition.
* other docs related things
* note on ignore a device not specified in .
* provide a suggestion if device mapping errors out.
* fix: typo.
* _hf_device_map -> hf_device_map
* Empty-Commit
* add: example hf_device_map.
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
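A minimal sketch of the multi-GPU device-map support this commit adds, using the `hf_device_map` attribute and `reset_device_map()` method named in the commit; the model id is illustrative:

```python
# Hedged sketch: "balanced" spreads pipeline components across the available GPUs.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced",
)
print(pipe.hf_device_map)  # component -> device placement chosen by the dispatcher

# per the commit, the map must be reset before .to(...) or the offloading helpers
pipe.reset_device_map()
```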
-
Sayak Paul authored
* refactor transformer_2d forward logic into meaningful conditions.
* Empty-Commit
* fix: _operate_on_patched_inputs
* fix: _operate_on_patched_inputs
* check
* fix: patch output computation block.
* fix: _operate_on_patched_inputs.
* remove print.
* move operations to blocks.
* more readability nits.
* empty commit
* Apply suggestions from code review
* Revert "Apply suggestions from code review" This reverts commit 12178b1aa0da3c29434e95a2a0126cf3ef5706a7.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 09 Apr, 2024 1 commit
-
-
Fabio Rigano authored
* Support multi-image masking
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 08 Apr, 2024 2 commits
-
-
w4ffl35 authored
Allow safety and feature extractor arguments to be passed to convert_from_ckpt
Allows management of the safety checker and feature extractor from outside of the convert ckpt class.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Nguyễn Công Tú Anh authored
* add audioldm2 tts
* change gpt2 max new tokens
* remove unnecessary pipeline and class
* add TTS to AudioLDM2Pipeline
* add TTS docs
* delete unnecessary file
* remove unnecessary import
* add audioldm2 slow testcase
* fix code quality
* remove AudioLDMLearnablePositionalEmbedding
* add variable check vits encoder
* add use_learned_position_embedding
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
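A minimal sketch of how the added TTS path is likely used, assuming the spoken text is passed via a `transcription` call argument introduced by this change (parameter name and checkpoint id are unverified placeholders):

```python
# Hedged sketch: AudioLDM2Pipeline loaded with a TTS checkpoint; `prompt` describes the
# voice/style while `transcription` (assumed) carries the text to be spoken.
import torch
from scipy.io import wavfile
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained(
    "path/to/audioldm2-tts", torch_dtype=torch.float16  # placeholder repo id
).to("cuda")

audio = pipe(
    prompt="A female reporter speaking clearly",
    transcription="Diffusers now supports text-to-speech with AudioLDM 2.",
    num_inference_steps=200,
    audio_length_in_s=8.0,
).audios[0]
wavfile.write("tts.wav", rate=16000, data=audio)
```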
-
- 05 Apr, 2024 1 commit
-
-
YiYi Xu authored
add set_begin_index for all if pipelines
-
- 03 Apr, 2024 4 commits
-
-
Abhinav Gopal authored
* Update pipeline_animatediff_video2video.py * commit with test for whether latent input can be passed into animatediffvid2vid
-
Sayak Paul authored
* refactor transformers 2d into multiple legacy variants.
* fix: init.
* fix recursive init.
* add inits.
* make transformer block creation more modular.
* complete refactor.
* remove forward
* debug
* remove legacy blocks and refactor within the module itself.
* remove print
* guard caption projection
* remove fetcher.
* reduce the number of args.
* fix: norm_type
* group variables that are shared.
* remove _get_transformer_blocks
* harmonize the init function signatures.
* transformer_blocks to common
* repeat .
-
Beinsezii authored
* UniPC Multistep add `rescale_betas_zero_snr` Same patch as DPM and Euler, with the patched final alpha cumprod. BF16 doesn't seem to break down, I think because UniPC upcasts during some phases already? We could still force an upcast, since it only loses ≈ 0.005 it/s for me, but the difference in output is very small. A better endeavor might be upcasting in step() and removing all the other upcasts elsewhere?
* UniPC ZSNR UT
* Re-add `rescale_betas_zero_snr` doc, oops
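A minimal sketch of switching the new option on, mirroring how the existing DDIM/Euler/DPM zero-terminal-SNR flag is used; the model id is illustrative (v-prediction checkpoints are the usual pairing for zero-SNR schedules):

```python
# Hedged sketch: enable zero-terminal-SNR beta rescaling on UniPC.
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True
)
```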
-
Beinsezii authored
* UniPC UTs iterate solvers on FP16 It wasn't catching errors on order==3. Might be excessive? * UniPC Multistep fix tensor dtype/device on order=3 * UniPC UTs Add v_pred to fp16 test iter For completeness' sake. Probably overkill?
-
- 02 Apr, 2024 3 commits
-
-
Sayak Paul authored
* add: utility to format our docs too 📜 * debugging saga * fix: message * checking * should be fixed. * revert pipeline_fixture * remove empty line * make style * fix: setup.py * style.
-
Bagheera authored
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
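A minimal sketch of the autocast-to-nullcontext pattern these training examples moved to, written as a generic helper rather than the exact code in any one script; the helper name is ours:

```python
# Hedged sketch: choose a real autocast context only where it is wanted, otherwise
# fall back to a no-op context instead of special-casing devices at every call site.
from contextlib import nullcontext
import torch

def inference_ctx(accelerator):
    # on MPS (or wherever autocast is undesirable) skip it entirely
    if torch.backends.mps.is_available():
        return nullcontext()
    return torch.autocast(accelerator.device.type)

# usage inside a training script's validation loop:
# with inference_ctx(accelerator):
#     images = pipeline(prompt, num_inference_steps=25).images
```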
-
Sayak Paul authored
* remove class assignments for linear and conv. * fix: self.nn
-
- 01 Apr, 2024 2 commits
-
-
YiYi Xu authored
* add from_pipe
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
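A minimal sketch of the `from_pipe` pattern this commit extends, shown here via the AutoPipeline classes that expose the same mechanism: a second pipeline is built from an existing one so the loaded components are shared rather than re-downloaded.

```python
# Hedged sketch: reuse loaded components (UNet, VAE, text encoders) across pipelines.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)  # no extra weights loaded
```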
-
Jianbing Wu authored
* Fix SVD bug (shape of `time_context`)
* Formatting code
* Formatting src/diffusers/models/transformers/transformer_temporal.py by `make style && make quality`
---------
Co-authored-by: kevinkhwu <kevinkhwu@tencent.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 30 Mar, 2024 3 commits
-
-
Stephen authored
* fix ip adapter support * Update sag pipelines tests, adjust sag pipeline to pass tests
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Beinsezii authored
* Add `final_sigma_zero` to UniPCMultistep Effectively the same trick as DDIM's `set_alpha_to_one` and DPM's `final_sigma_type='zero'`. Currently False by default but maybe this should be True? * `final_sigma_zero: bool` -> `final_sigmas_type: str` Should 1:1 match DPM Multistep now. * Set `final_sigmas_type='sigma_min'` in UniPC UTs
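A minimal sketch of the option as it landed (`final_sigmas_type`), matching the DPM Multistep parameter of the same name; the model id is illustrative:

```python
# Hedged sketch: end the UniPC sigma schedule at zero instead of sigma_min.
import torch
from diffusers import StableDiffusionXLPipeline, UniPCMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, final_sigmas_type="zero"  # previous behaviour: "sigma_min"
)
```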
-
UmerHA authored
Fixed important typo
-
- 29 Mar, 2024 2 commits
-
-
UmerHA authored
* Initial commit
* Implemented block lora
  - implemented block lora
  - updated docs
  - added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
  - Included examples in `using_peft_for_inference.md`
  - Added hint that only attns are scaled
  - Removed NoneTypes
  - Added test to check that mismatching lengths of adapter names / weights raise an error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial; warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
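A minimal sketch of the block-wise LoRA scaling this commit adds, with the scales passed as a nested dict per the updated `using_peft_for_inference.md`; the LoRA repo id and adapter name are placeholders, and only attention modules are scaled:

```python
# Hedged sketch: per-block LoRA scales; anything not listed keeps scale 1.0.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/lora", adapter_name="style")  # placeholder repo id

scales = {
    "text_encoder": 0.5,
    "unet": {"down": 0.6, "mid": 1.0, "up": 0.8},
}
pipe.set_adapters("style", scales)
```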
-
Sayak Paul authored
* speed up test_vae_slicing in animatediff
* speed up test_karras_schedulers_shape for attend and excite.
* style.
* get the static slices out.
* specify torch print options.
* modify
* test run with controlnet
* specify kwarg
* fix: things
* not None
* flatten
* controlnet img2img
* complete controlnet sd
* finish more
* finish more
* finish more
* finish more
* finish the final batch
* add cpu check for expected_pipe_slice.
* finish the rest
* remove print
* style
* fix ssd1b controlnet test
* checking ssd1b
* disable the test.
* make the test_ip_adapter_single controlnet test more robust
* fix: simple inpaint
* multi
* disable panorama
* enable again
* panorama is shaky so leave it for now
* remove print
* raise tolerance.
-
- 28 Mar, 2024 3 commits
-
-
Lvkesheng Shen authored
* Bug fix for controlnetpipeline check_image when using multicontrolnet and prompt list
* Update test_inference_multiple_prompt_input function
* Update test_controlnet.py: add test for multiple prompts and multiple image conditioning
* Update test_controlnet.py: fix format error
---------
Co-authored-by: Lvkesheng Shen <45848260+Fantast416@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
YiYi Xu authored
* add remove_all_hooks
* a few more fixes and tests
* up
* Update src/diffusers/pipelines/pipeline_utils.py
* split tests
* add
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Sayak Paul authored
import load_model_dict_into_meta only once
-
- 27 Mar, 2024 1 commit
-
-
YiYi Xu authored
fix
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 26 Mar, 2024 1 commit
-
-
Disty0 authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-