- 19 Apr, 2024 1 commit
-
Fabio Rigano authored
* Switch to peft and multi proj layers
* Move Face ID loading and inference to core

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 16 Apr, 2024 1 commit
-
UmerHA authored
* CheckIn - created DownSubBlocks
* Added extra channels, implemented subblock fwd
* Fixed connection sizes
* checkin
* Removed iter, next in forward
* Models for SD21 & SDXL run through
* Added back pipelines, cleared up connections
* Cleaned up connection creation
* added debug logs
* updated logs
* logs: added input loading
* Update umer_debug_logger.py
* log: Loading hint
* Update umer_debug_logger.py
* added logs
* Changed debug logging
* debug: added more logs
* Fixed num_norm_groups
* Debug: Logging all of SDXL input
* Update umer_debug_logger.py
* debug: updated logs
* checkin
* Readded tests
* Removed debug logs
* Fixed Slow Tests
* Added value checks | Updated model_cpu_offload_seq
* accelerate-offloading works; fast tests work
* Made unet & addon explicit in controlnet
* Updated slow tests
* Added dtype/device to ControlNetXS
* Filled in test model paths
* Added image_encoder/feature_extractor to XL pipe
* Fixed fast tests
* Added comments and docstrings
* Fixed copies
* Added docs; updated slow tests
* Moved changes to UNetMidBlock2DCrossAttn
* tiny cleanups
* Removed stray prints
* Removed ip adapters + freeU
  - Removed ip adapters + freeU as they don't make sense for ControlNet-XS
  - Fixed imports of UNet components
* Fixed test_save_load_float16
* Make style, quality, fix-copies
* Changed loading/saving API for ControlNetXS
  - Changed loading/saving API for ControlNetXS
  - other small fixes
* Removed ControlNet-XS from research examples
* Make style, quality, fix-copies
* Small fixes
  - deleted ControlNetXSModel.init_original
  - added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
  - fixed copy hints
* checkin May 11 '23
* CheckIn Mar 12 '24
* Fixed tests for SD
* Added tests for UNetControlNetXSModel
* Fixed SDXL tests
* cleanup
* Delete Pipfile
* CheckIn Mar 20: started replacing sub blocks with `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
* check-in Mar 23
* checkin 24 Mar
* Created init for UNetCnxs and CnxsAddon
* CheckIn
* Made from_modules, from_unet and no_control work
* make style, quality, fix-copies & small changes
* Fixed freezing
* Added gradient ckpt'ing; fixed tests
* Fix slow tests (+compile); clear naming confusion
* Don't create UNet in init; removed class_emb
* Incorporated review feedback
  - Deleted get_base_pipeline / get_controlnet_addon for pipes
  - Pipes inherit from StableDiffusionXLPipeline
  - Made module dicts for cnxs-addon's down/mid/up classes
  - Added support for qkv fusion and freeU
* Make style, quality, fix-copies
* Implemented review feedback
* Removed compatibility check for vae/ctrl embedding
* make style, quality, fix-copies
* Delete Pipfile
* Integrated review feedback
  - Importing ControlNetConditioningEmbedding now
  - get_down/mid/up_block_addon now outside class
  - renamed `do_control` to `apply_control`
* Reduced size of test tensors. For this, added `norm_num_groups` as parameter everywhere
* Renamed cnxs-`Addon` to cnxs-`Adapter`
  - `ControlNetXSAddon` -> `ControlNetXSAdapter`
  - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
  - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for down/up
* Fixed save_pretrained/from_pretrained bug
* Removed redundant code

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
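For reference, a minimal sketch of the renamed adapter API this change lands; the checkpoint ID is a placeholder and the `size_ratio` argument to `from_unet` is an assumption:

```python
import torch
from diffusers import (
    ControlNetXSAdapter,
    StableDiffusionXLControlNetXSPipeline,
    UNet2DConditionModel,
)

base = "stabilityai/stable-diffusion-xl-base-1.0"  # placeholder base checkpoint
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet", torch_dtype=torch.float16)

# Build a small ControlNet-XS adapter from the base UNet's configuration.
controlnet = ControlNetXSAdapter.from_unet(unet, size_ratio=0.1)

# The pipeline combines the base UNet and the adapter internally.
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    base, controlnet=controlnet, torch_dtype=torch.float16
)
```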
-
- 12 Apr, 2024 1 commit
-
Benjamin Bossan authored
Fix a bug that causes the call to set_lora_device to ignore the DoRA parameters.
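A minimal sketch of the affected call, assuming a pipeline with a LoRA/DoRA adapter already loaded (the LoRA path and adapter name are placeholders); with the fix, DoRA parameters move together with the rest of the adapter:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# Load a LoRA (possibly DoRA) adapter, then move just that adapter's weights to a device.
pipe.load_lora_weights("path/to/lora.safetensors", adapter_name="my_adapter")  # placeholder path
pipe.set_lora_device(adapter_names=["my_adapter"], device="cuda")
```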
-
- 10 Apr, 2024 2 commits
-
Sayak Paul authored
* give it a shot.
* print.
* correct assertion.
* gather results from the rest of the tests.
* change the assertion values where needed.
* remove print statements.
-
Sayak Paul authored
* get device <-> component mapping when using multiple gpus.
* condition the device_map bits.
* relax condition
* device_map progress.
* device_map enhancement
* some cleaning up and debugging
* Apply suggestions from code review
  Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* incorporate suggestions from PR.
* remove multi-gpu condition for now.
* guard check the component -> device mapping
* fix: device_memory variable
* dispatching transformers model to have force_hooks=True
* better guarding for transformers device_map
* introduce support for balanced_low_memory and balanced_ultra_low_memory.
* remove device_map patch.
* fix: intermediate variable scoping.
* fix: condition in cpu offload.
* fix: flax class restrictions.
* remove modifications from cpu_offload and model_offload
* incorporate changes.
* add a simple forward pass test
* add: torch_device in get_inputs()
* add: tests
* remove print
* safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map.
* style
* remove .
* safeguard device_map with more checks and remove invalid device_mapping strategies.
* make a class attribute and adjust tests accordingly.
* fix device_map check
* fix test
* adjust comment
* fix: device_map attribute
* fix: dispatching.
* max_memory test for pipeline
* version guard the tests
* fix guard.
* address review feedback.
* reset_device_map method.
* add: test for reset_hf_device_map
* fix a couple things.
* add reset_device_map() in the error message.
* add tests for checking reset_device_map doesn't have unintended consequences.
* fix reset_device_map and offloading tests.
* create _get_final_device_map utility.
* hf_device_map -> _hf_device_map
* add documentation
* add notes suggested by Marc.
* styling.
* Apply suggestions from code review
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* move updates within gpu condition.
* other docs related things
* note on ignoring a device not specified in .
* provide a suggestion if device mapping errors out.
* fix: typo.
* _hf_device_map -> hf_device_map
* Empty-Commit
* add: example hf_device_map.

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
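A minimal sketch of the pipeline-level device mapping this adds (the checkpoint ID is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
    device_map="balanced",  # spread components across the available GPUs
)
print(pipe.hf_device_map)  # component -> device mapping chosen by the loader

# Undo the mapping before using .to(), enable_model_cpu_offload(), etc.
pipe.reset_device_map()
```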
-
- 09 Apr, 2024 2 commits
-
Fabio Rigano authored
* Support multi-image masking

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
YiYi Xu authored
* disable test
* update

Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 08 Apr, 2024 1 commit
-
Nguyễn Công Tú Anh authored
* add audioldm2 tts
* change gpt2 max new tokens
* remove unnecessary pipeline and class
* add TTS to AudioLDM2Pipeline
* add TTS docs
* delete unnecessary file
* remove unnecessary import
* add audioldm2 slow testcase
* fix code quality
* remove AudioLDMLearnablePositionalEmbedding
* add variable check for vits encoder
* add use_learned_position_embedding

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
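A hedged sketch of the text-to-speech path added to `AudioLDM2Pipeline`; the checkpoint ID and the exact `transcription` usage are assumptions based on this change:

```python
import torch
from diffusers import AudioLDM2Pipeline

# Assumed TTS-capable AudioLDM2 checkpoint; replace with the actual repository ID.
pipe = AudioLDM2Pipeline.from_pretrained("anhnct/audioldm2_gigaspeech", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

audio = pipe(
    prompt="A female reporter is speaking clearly",   # describes the voice/style
    transcription="Diffusers is a library for diffusion models.",  # text to be spoken
    num_inference_steps=200,
    audio_length_in_s=8.0,
).audios[0]
```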
-
- 05 Apr, 2024 1 commit
-
Sayak Paul authored
* reduce block sizes for unet1d.
* reduce blocks for unet_2d.
* reduce block size for unet_motion
* increase channels.
* correctly increase channels.
* reduce number of layers in unet2dconditionmodel tests.
* reduce block sizes for unet2dconditionmodel tests
* reduce block sizes for unet3dconditionmodel.
* fix: test_feed_forward_chunking
* fix: test_forward_with_norm_groups
* skip spatiotemporal tests on MPS.
* reduce block size in AutoencoderKL.
* reduce block sizes for vqmodel.
* further reduce block size.
* make style.
* Empty-Commit
* reduce sizes for ConsistencyDecoderVAETests
* further reduction.
* further block reductions in AutoencoderKL and AsymmetricAutoencoderKL.
* massively reduce the block size in unet2dconditionmodel.
* reduce sizes for unet3d
* fix tests in unet3d.
* reduce blocks further in motion unet.
* fix: output shape
* add attention_head_dim to the test configuration.
* remove unexpected keyword arg
* up a bit.
* groups.
* up again
* fix
-
- 04 Apr, 2024 1 commit
-
UmerHA authored
* Skip `test_freeu_enabled` on MPS
* Small fixes
  - import skip_mps correctly
  - disable all instances of test_freeu_enabled
* Empty commit to trigger tests
* Empty commit to trigger CI
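A minimal sketch of the skip pattern used here, assuming the existing `skip_mps` helper from the test utilities; the test class shown is illustrative:

```python
from diffusers.utils.testing_utils import skip_mps


class FreeUTests:
    @skip_mps  # FreeU numerics are unreliable on Apple MPS, so this test is skipped there
    def test_freeu_enabled(self):
        ...
```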
-
- 03 Apr, 2024 3 commits
-
Abhinav Gopal authored
* Update pipeline_animatediff_video2video.py
* Add a test for whether latent input can be passed into AnimateDiff vid2vid
-
Beinsezii authored
* UniPC Multistep: add `rescale_betas_zero_snr`. Same patch as DPM and Euler, with the patched final alpha cumprod. BF16 doesn't seem to break down, probably because UniPC already upcasts during some phases. We could still force an upcast, since it only loses ≈ 0.005 it/s for me, but the difference in output is very small. A better endeavor might be upcasting in step() and removing all the other upcasts elsewhere.
* UniPC ZSNR unit test
* Re-add the `rescale_betas_zero_snr` doc
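A minimal sketch of enabling the new flag (the base checkpoint is a placeholder):

```python
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # placeholder checkpoint
# Zero-terminal-SNR betas, mirroring the existing DPM and Euler option.
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True
)
```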
-
Beinsezii authored
* UniPC unit tests: iterate solvers on FP16; they weren't catching errors on order==3. Might be excessive?
* UniPC Multistep: fix tensor dtype/device on order=3
* UniPC unit tests: add v_pred to the FP16 test iteration, for completeness' sake. Probably overkill?
-
- 02 Apr, 2024 2 commits
-
Sayak Paul authored
* start printing the tensors.
* print full throttle
* set static slices for 7 tests.
* remove printing.
* flatten
* disable test for controlnet
* what happens when things are seeded properly?
* set the right value
* style.
* make pia test fail to check things
* print.
* fix pia.
* checking for animatediff.
* fix: animatediff.
* video synthesis
* final piece.
* style.
* print guess.
* fix: assertion for control guess.

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Dhruv Nair authored
update
-
- 01 Apr, 2024 3 commits
-
YiYi Xu authored
* add from_pipe

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
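A minimal sketch of `from_pipe`, which reuses an already-loaded pipeline's components for a different pipeline class (the SAG pipeline is just an illustrative target, and the checkpoint is a placeholder):

```python
from diffusers import StableDiffusionPipeline, StableDiffusionSAGPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # placeholder
# Build a second pipeline that shares the same components instead of re-loading weights.
sag_pipe = StableDiffusionSAGPipeline.from_pipe(pipe)
```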
-
YiYi Xu authored
fix
-
Dhruv Nair authored
update
-
- 30 Mar, 2024 2 commits
-
Stephen authored
* fix ip adapter support
* Update sag pipelines tests, adjust sag pipeline to pass tests

Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Beinsezii authored
* Add `final_sigma_zero` to UniPCMultistep. Effectively the same trick as DDIM's `set_alpha_to_one` and DPM's `final_sigma_type='zero'`. Currently False by default, but maybe this should be True?
* `final_sigma_zero: bool` -> `final_sigmas_type: str`. Should 1:1 match DPM Multistep now.
* Set `final_sigmas_type='sigma_min'` in UniPC UTs
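A minimal sketch of the renamed option (the checkpoint ID is a placeholder):

```python
from diffusers import UniPCMultistepScheduler

# "zero" pushes the final sigma to 0 (like DDIM's set_alpha_to_one); "sigma_min" keeps the old behavior.
scheduler = UniPCMultistepScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler", final_sigmas_type="zero"
)
```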
-
- 29 Mar, 2024 4 commits
-
UmerHA authored
* Initial commit
* Implemented block lora
  - implemented block lora
  - updated docs
  - added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
  - Included examples in `using_peft_for_inference.md`
  - Added hint that only attns are scaled
  - Removed NoneTypes
  - Added test to check that mismatching lengths of adapter names / weights raise an error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial; warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
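A hedged sketch of the per-block LoRA scaling described above; the nested-dict format follows `using_peft_for_inference.md`, and the checkpoint and adapter names are placeholders:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16  # placeholder
)
pipe.load_lora_weights("path/to/lora.safetensors", adapter_name="my_adapter")  # placeholder

# Scale the adapter per model part and per UNet block; plain floats are forwarded as-is.
scales = {
    "text_encoder": 0.5,
    "unet": {
        "down": 0.8,
        "mid": 1.0,
        "up": {"block_0": 0.6, "block_1": [0.4, 0.8, 1.0]},
    },
}
pipe.set_adapters("my_adapter", scales)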
-
Dhruv Nair authored
* update
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* speed up test_vae_slicing in animatediff
* speed up test_karras_schedulers_shape for attend and excite.
* style.
* get the static slices out.
* specify torch print options.
* modify
* test run with controlnet
* specify kwarg
* fix: things
* not None
* flatten
* controlnet img2img
* complete controlnet sd
* finish more
* finish more
* finish more
* finish more
* finish the final batch
* add cpu check for expected_pipe_slice.
* finish the rest
* remove print
* style
* fix ssd1b controlnet test
* checking ssd1b
* disable the test.
* make the test_ip_adapter_single controlnet test more robust
* fix: simple inpaint
* multi
* disable panorama
* enable again
* panorama is shaky so leave it for now
* remove print
* raise tolerance.
-
YiYi Xu authored
use float16 and add torch.no_grad()
-
- 28 Mar, 2024 2 commits
-
Lvkesheng Shen authored
* Bug fix for ControlNetPipeline check_image when using MultiControlNet and a prompt list
* Update test_inference_multiple_prompt_input function
* Update test_controlnet.py: add test for multiple prompts and multiple image conditioning
* Update test_controlnet.py: fix format error

Co-authored-by: Lvkesheng Shen <45848260+Fantast416@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
YiYi Xu authored
* add remove_all_hooks
* a few more fixes and tests
* up
* Update src/diffusers/pipelines/pipeline_utils.py
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* split tests
* add

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
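A minimal sketch of the new helper, which clears accelerate offloading hooks from the pipeline's components (the checkpoint ID is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
# ... run inference ...

# Remove the offloading hooks before switching strategies or moving the pipeline.
pipe.remove_all_hooks()
pipe.to("cuda")
```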
-
- 27 Mar, 2024 1 commit
-
UmerHA authored
Skipping test_lora_fuse_nan on MPS

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 26 Mar, 2024 3 commits
-
M. Tolga Cangöz authored
* Fix typos
* Add docstring to `decode` method in `ConsistencyDecoderVAE`
* Fix tiling
* Enable tiled VAE decoding with customizable tile sample size and overlap factor
* Revert "Enable tiled VAE decoding with customizable tile sample size and overlap factor"
  This reverts commit 181049675e83cea7b33ae2bbeba2aff7ae1b1761.
* Add VAE tiling test for `ConsistencyDecoderVAE`

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
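A hedged sketch of the tiled decoding path the fix and the new test exercise; the checkpoint IDs are placeholders and `enable_tiling()` is assumed to apply to `ConsistencyDecoderVAE` as it does to the other VAEs:

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16  # placeholder base model
)
pipe.vae.enable_tiling()  # decode large latents tile by tile to save memory
```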
-
Sayak Paul authored
* feat: support dora loras from community
* safe-guard dora operations under peft version.
* pop use_dora when False
* make dora lora from kohya work.
* fix: kohya conversion utils.
* add a fast test for DoRA compatibility.
* add a nightly test.
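A minimal sketch of loading a community DoRA checkpoint through the regular LoRA entry point (the repository IDs are placeholders; the change guards this behind a sufficiently new peft version):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16  # placeholder
)
# DoRA checkpoints (including Kohya-style ones) load like any other LoRA.
pipe.load_lora_weights("some-user/my-sdxl-dora", adapter_name="dora")  # placeholder repo
```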
-
Sayak Paul authored
skip dynamo tests when python is 3.12.
-
- 25 Mar, 2024 5 commits
-
UmerHA authored
* Update test_lora_layers_peft.py
* Update utils.py
-
M. Tolga Cangöz authored
* Fix typos
* Fix typos
* Fix typos

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
* update
* update
* update
-
M. Tolga Cangöz authored
[`IP-Adapter`] Fix IP-Adapter Support and Refactor Callback for `StableDiffusionPanoramaPipeline` (#7262)
* Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
* Update torch manual seed to use `torch.Generator(device=device)`
* Refactor 📞 🔙 to support `callback_on_step_end`
* make fix-copies
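A minimal sketch of the `callback_on_step_end` interface the panorama pipeline is refactored to support (the checkpoint ID and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPanoramaPipeline

pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16  # placeholder checkpoint
).to("cuda")

def on_step_end(pipeline, step, timestep, callback_kwargs):
    # Inspect or modify the requested tensors after each denoising step.
    if step == 10:
        print("latents:", callback_kwargs["latents"].shape)
    return callback_kwargs

image = pipe(
    "a 360-degree panorama of a mountain lake",
    callback_on_step_end=on_step_end,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```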
Sayak Paul authored
* strtobool
* replace Command from setuptools.
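For context, `distutils.util.strtobool` is unavailable on newer Python versions; a minimal sketch of a local replacement (the helper name here is illustrative):

```python
def strtobool(value: str) -> bool:
    """Interpret common truthy/falsey strings, mirroring distutils.util.strtobool."""
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")
```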
-
- 20 Mar, 2024 1 commit
-
Sayak Paul authored
* cleanse and refactor lora testing suite.
* more cleanup.
* make check_if_lora_correctly_set a utility function
* fix: typo
* retrigger ci
* style
-
- 19 Mar, 2024 4 commits
-
Dhruv Nair authored
Fix issue with prompt embeds and latents in SD Cascade Decoder with multiple image embeddings for a single prompt (#7381)
* fix
* update
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print
-
YiYi Xu authored
* fix
* fix
* add a test
* fix

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
M. Tolga Cangöz authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-