- 08 May, 2024 6 commits
-
-
Pierre Dulac authored
SDXL LoRA weights for text encoders should be decoupled on save. The method checks whether at least one of the unet, text_encoder, and text_encoder_2 LoRA weights is passed, which was not reflected in the implementation.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
YiYi Xu authored
fix
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Aryan authored
* update conversion script to handle motion adapter sdxl checkpoint
* add animatediff xl
* handle addition_embed_type
* fix output
* update
* add imports
* make fix-copies
* add decode latents
* update docstrings
* add animatediff sdxl to docs
* remove unnecessary lines
* update example
* add test
* revert conv_in conv_out kernel param
* remove unused param addition_embed_type_num_heads
* latest IPAdapter impl
* make fix-copies
* fix return
* add IPAdapterTesterMixin to tests
* fix return
* revert based on suggestion
* add freeinit
* fix test_to_dtype test
* use StableDiffusionMixin instead of different helper methods
* fix progress bar iterations
* apply suggestions from review
* hardcode flip_sin_to_cos and freq_shift
* make fix-copies
* fix ip adapter implementation
* fix last failing test
* make style
* Update docs/source/en/api/pipelines/animatediff.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* remove todo
* fix doc-builder errors
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Philip Pham authored
`model_output.shape` may only have rank 1. There are warnings related to the use of random keys:

```
tests/schedulers/test_scheduler_flax.py: 13 warnings
  /Users/phillypham/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py:268: FutureWarning: normal accepts a single key, but was given a key array of shape (1, 2) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)
tests/schedulers/test_scheduler_flax.py::FlaxDDPMSchedulerTest::test_betas
  /Users/phillypham/virtualenv/diffusers/lib/python3.9/site-packages/jax/_src/random.py:731: FutureWarning: uniform accepts a single key, but was given a key array of shape (1,) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    u = uniform(key, shape, dtype, lo, hi)  # type: ignore[arg-type]
```
-
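The FutureWarning in this commit comes from passing an *array* of keys where a single PRNG key is expected. A minimal sketch of the pattern (variable names are illustrative, not the scheduler's actual code): indexing the result of `jax.random.split` yields a single key that `jax.random.normal` accepts without warning.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)

# jax.random.split(key, 1) returns an *array* of one key (shape (1, 2) for
# raw uint32 keys); passing that array to jax.random.normal triggers the
# FutureWarning. Indexing the split result yields a single key instead.
split_key = jax.random.split(key, 1)[0]

model_output = jnp.zeros((4,))  # rank-1 output, as noted in the commit
noise = jax.random.normal(split_key, shape=model_output.shape, dtype=jnp.float32)
```
-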
Tolga Cangöz authored
Fix image's upcasting before `vae.encode()` when using `fp16`
Co-authored-by: YiYi Xu <yixu310@gmail.com>
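A hedged sketch of the dtype-matching idea behind a fix like this (`TinyVAE` and `encode_with_matching_dtype` are illustrative stand-ins, not the pipeline's actual API): the incoming image is cast to the VAE's parameter dtype before `vae.encode()`, so an fp32 image never meets fp16 weights unannounced.

```python
import torch

class TinyVAE(torch.nn.Module):
    """Illustrative stand-in for a VAE whose weights are loaded in fp16."""
    def __init__(self, dtype=torch.float16):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(1, dtype=dtype))

    def encode(self, image):
        # real VAEs do much more; a single multiply shows the dtype flow
        return image * self.scale

def encode_with_matching_dtype(vae, image):
    # Cast the image to whatever dtype the VAE parameters use before encoding.
    param_dtype = next(vae.parameters()).dtype
    return vae.encode(image.to(param_dtype))

latent = encode_with_matching_dtype(TinyVAE(), torch.randn(2, 4))  # fp32 input
```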
-
Hyoungwon Cho authored
* edited_pag_implementation
* update
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 07 May, 2024 2 commits
-
-
Bagheera authored
* 7879 - adjust documentation to use naruto dataset, since pokemon is now gated
* replace references to pokemon in docs
* more references to pokemon replaced
* Japanese translation update
---------
Co-authored-by: bghira <bghira@users.github.com>
-
Álvaro Somoza authored
* return layer weight if not found
* better system and test
* key example and typo
-
- 06 May, 2024 2 commits
-
-
Steven Liu authored
* combine
* edits
-
Guillaume LEGENDRE authored
-
- 03 May, 2024 5 commits
-
-
Steven Liu authored
* lcm
* lcm lora
* fix
* fix hfoption
* edits
-
HelloWorldBeginner authored
Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed. (#7816)
* Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed.
* fix check code quality
* Decouple the NPU flash attention and make it an independent module.
* add doc and unit tests for npu flash attention.
---------
Co-authored-by: mhh001 <mahonghao1@huawei.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
update
-
Lucain authored
* Deprecate resume_download
* align docstring with transformers
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
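Deprecating a keyword such as `resume_download` typically means still accepting it while emitting a warning. A minimal sketch of that pattern (`download_model` is a hypothetical function, not the library's actual signature):

```python
import warnings

def download_model(repo_id, resume_download=None):
    # Accept the old keyword for backward compatibility, but warn that it
    # no longer has any effect; downloads resume by default.
    if resume_download is not None:
        warnings.warn(
            "`resume_download` is deprecated and will be removed; "
            "downloads always resume when possible.",
            FutureWarning,
        )
    return f"downloaded {repo_id}"
```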
-
Aritra Roy Gosthipaty authored
reducing model size
-
- 02 May, 2024 8 commits
-
-
Dhruv Nair authored
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Guillaume LEGENDRE authored
* Move to new GPU Runners for slow tests
* Move to new GPU Runners for nightly tests
-
Guillaume LEGENDRE authored
-
Dhruv Nair authored
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
update
-
yunseong Cho authored
fix key error for different order
Co-authored-by: yunseong <yunseong.cho@superlabs.us>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Aritra Roy Gosthipaty authored
chore: initial size reduction of models
-
- 01 May, 2024 4 commits
-
-
YiYi Xu authored
update prepare_ip_adapter_ for pix2pix
-
YiYi Xu authored
* up
* add comment to the tests + fix dit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* fix: device module tests
* remove patch file
* Empty-Commit
-
Dhruv Nair authored
* update
* update
-
- 30 Apr, 2024 8 commits
-
-
Steven Liu authored
* community pipelines
* feedback
* consolidate
-
Tolga Cangöz authored
Fix cpu offload
-
Dhruv Nair authored
* add debug workflow
* update
-
Linoy Tsaban authored
* add blora
* add blora
* add blora
* add blora
* little changes
* little changes
* remove redundancies
* fixes
* add B LoRA to readme
* style
* inference
* defaults + path to loras + generation
* minor changes
* style
* minor changes
* minor changes
* blora arg
* added --lora_unet_blocks
* style
* Update examples/advanced_diffusion_training/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* add commit hash to B-LoRA repo cloning
* change inference, remove cloning
* change inference, remove cloning, add section about configurable unet blocks
* change inference, remove cloning, add section about configurable unet blocks
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* introduce _no_split_modules.
* unnecessary spaces.
* remove unnecessary kwargs and style
* fix: accelerate imports.
* change to _determine_device_map
* add the blocks that have residual connections.
* add: CrossAttnUpBlock2D
* add: testing
* style
* line-spaces
* quality
* add disk offload test without safetensors.
* checking disk offloading percentages.
* change model split
* add: utility for checking multi-gpu requirement.
* model parallelism test
* splits.
* splits.
* splits
* splits.
* splits.
* splits.
* offload folder to test_disk_offload_with_safetensors
* add _no_split_modules
* fix-copies
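`_no_split_modules` is a class attribute listing submodule class names that device-map computation must keep on a single device, e.g. blocks with residual connections whose input and output tensors need co-located weights, as this commit notes. A minimal sketch, with a plain class standing in for a real diffusers model:

```python
# Sketch only: a real model would subclass diffusers' ModelMixin. The block
# names below mirror the ones mentioned in the commit message.
class MyUNet:
    # Submodules that must not be split across devices when a device_map is
    # inferred -- their internal residual connections require the whole
    # block's weights to live together.
    _no_split_modules = ["CrossAttnUpBlock2D", "ResnetBlock2D"]
```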
-
Aritra Roy Gosthipaty authored
* chore: reducing model sizes
* chore: shrinks further
* chore: shrinks further
* chore: shrinking model for img2img pipeline
* chore: reducing size of model for inpaint pipeline
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aritra Roy Gosthipaty authored
* chore: reducing unet size for faster tests
* review suggestions
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aritra Roy Gosthipaty authored
chore: reducing model size for ddim fast pipeline
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 29 Apr, 2024 5 commits
-
-
Clint Adams authored
FlaxStableDiffusionSafetyChecker sets main_input_name to "clip_input". This makes it consistent with StableDiffusionSafetyChecker.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
RuiningLi authored
* Added get_velocity function to EulerDiscreteScheduler.
* Fix white space on blank lines
* Added copied from statement
* back to the original.
---------
Co-authored-by: Ruining Li <ruining@robots.ox.ac.uk>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
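The velocity target added here follows the standard v-prediction formulation, v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x. A NumPy sketch of that formula (the commit's version is a method on EulerDiscreteScheduler, so treat the names here as illustrative):

```python
import numpy as np

def get_velocity(sample, noise, alphas_cumprod, timestep):
    # v-prediction target: v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x
    sqrt_alpha_prod = np.sqrt(alphas_cumprod[timestep])
    sqrt_one_minus_alpha_prod = np.sqrt(1.0 - alphas_cumprod[timestep])
    return sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
```

At alpha_bar_t = 1 (no noise added) the velocity reduces to the noise itself, which makes the formula easy to sanity-check.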
-
jschoormans authored
* added TextualInversionMixIn to controlnet_inpaint_sd_xl pipeline
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Dhruv Nair authored
* update
* update
-
Yushu authored
swap the order for do_classifier_free_guidance concat with repeat
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-