- 20 Dec, 2024 2 commits
-
Daniel Regado authored
* Added support for single IPAdapter on SD3.5 pipeline
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
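A minimal sketch of the new single-IP-Adapter flow on SD3.5. The checkpoint ids are illustrative assumptions, not part of this commit; check the Hub for the actual SD3.5 IP-Adapter release:

```python
import torch
from diffusers import StableDiffusion3Pipeline
from diffusers.utils import load_image

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")  # assumed checkpoint id
pipe.set_ip_adapter_scale(0.6)

image = pipe(
    prompt="a cat wearing a spacesuit",
    ip_adapter_image=load_image("reference.png"),  # image prompt
    num_inference_steps=28,
).images[0]
```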
-
dg845 authored
* Port UNet2DModel gradient checkpointing code from #6718.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Vincent Neemie <92559302+VincentNeemie@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
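Gradient checkpointing on `UNet2DModel` is toggled through the standard `ModelMixin` method; a short sketch (the checkpoint id is illustrative):

```python
from diffusers import UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
# Recompute activations during the backward pass instead of storing them,
# trading extra compute for a smaller training-memory footprint.
model.enable_gradient_checkpointing()
model.train()
```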
-
- 19 Dec, 2024 9 commits
-
djm authored
-
hlky authored
-
Dhruv Nair authored
update
-
Dhruv Nair authored
update
-
Shenghai Yuan authored
* 1217 * 1217 * 1217 * update * reverse * add test * update test * make style * update * make style
Co-authored-by: Aryan <aryan@huggingface.co>
-
hlky authored
* Check correct model type is passed to `from_pretrained` * Flax, skip scheduler * test_wrong_model * Fix for scheduler * Update tests/pipelines/test_pipelines.py * EnumMeta * Flax * scheduler in expected types * make * type object 'CLIPTokenizer' has no attribute '_PipelineFastTests__name' * support union * fix typing in kandinsky * make * add LCMScheduler * 'LCMScheduler' object has no attribute 'sigmas' * tests for wrong scheduler * make * update * warning * tests * Update src/diffusers/pipelines/pipeline_utils.py * import FlaxSchedulerMixin * skip scheduler
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
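In practice the check guards component overrides at load time. A sketch of the pattern that remains valid, assuming an SD1.5 checkpoint:

```python
from diffusers import LCMScheduler, StableDiffusionPipeline

repo = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # illustrative checkpoint
scheduler = LCMScheduler.from_pretrained(repo, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(repo, scheduler=scheduler)

# Passing an object of the wrong class (e.g. a tokenizer as `scheduler`)
# now raises at load time instead of failing later during inference.
```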
-
赵三石 authored
x-flux single-blocks lora load
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
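A hedged sketch of loading an XLabs-style (x-flux) LoRA whose weights also target the transformer's single blocks; the LoRA repo id is illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# After this change, x-flux single-blocks LoRAs convert and load through
# the standard entry point.
pipe.load_lora_weights("XLabs-AI/flux-RealismLora")  # illustrative checkpoint
image = pipe("a misty forest at dawn", num_inference_steps=28).images[0]
```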
-
hlky authored
-
Aryan authored
* update * update * fix test
-
- 18 Dec, 2024 10 commits
-
Aryan authored
fix joint pos embedding device
-
Dhruv Nair authored
update
-
Dhruv Nair authored
update
-
hlky authored
-
Andrés Romero authored
* flux_control_inpaint - failing test_flux_different_prompts * removing test_flux_different_prompts? * fix style * fix from PR comments * fix style * reducing guidance_scale in demo * Update src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py * make * prepare_latents is not copied from * update docs * typos
Co-authored-by: affromero <ubuntu@ip-172-31-17-146.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
Qin Zhou authored
* Support passing kwargs to the SD3 custom attention processor
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
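A sketch of how the forwarded kwargs reach a custom processor. `my_scale` is a hypothetical argument that only exists if a processor set via `pipe.transformer.set_attn_processor(...)` declares it:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
# Extra entries in joint_attention_kwargs are forwarded down to the
# attention processors, so a custom processor can consume them.
image = pipe(
    "a mountain lake at sunset",
    joint_attention_kwargs={"my_scale": 0.5},  # hypothetical kwarg
).images[0]
```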
-
Xinyuan Zhao authored
-
hlky authored
* Use `torch` in `get_2d_rotary_pos_embed` * Add deprecation
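Assumed usage after the change, mirroring the sibling sincos helpers; the `output_type` parameter name is inferred from the commit message rather than confirmed here:

```python
from diffusers.models.embeddings import get_2d_rotary_pos_embed

# Returns the (cos, sin) rotary frequencies. output_type="pt" selects the
# torch code path; the old numpy output is kept behind a deprecation.
cos, sin = get_2d_rotary_pos_embed(
    embed_dim=64,
    crops_coords=((0, 0), (32, 32)),
    grid_size=(32, 32),
    output_type="pt",
)
```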
-
Sayak Paul authored
fix: licensing header.
-
Sayak Paul authored
* feat: lora support for SANA. * make fix-copies * rename test class. * attention_kwargs -> cross_attention_kwargs. * Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b. * exhaust 119 max line limit * sana lora fine-tuning script. * readme * add a note about the supported models. * Apply suggestions from code review * style * docs for attention_kwargs. * remove lora_scale from pag pipeline. * copy fix
Co-authored-by: Aryan <aryan@huggingface.co>
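A minimal inference-side sketch for SANA LoRA loading; the fine-tuning script referenced in the message handles training. The base checkpoint id follows the Hub naming and the LoRA repo is a placeholder:

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("my-user/my-sana-lora")  # placeholder repo id
image = pipe("a cyberpunk cityscape at night", num_inference_steps=20).images[0]
```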
-
- 17 Dec, 2024 5 commits
-
hlky authored
-
Dhruv Nair authored
* update * Update src/diffusers/models/transformers/transformer_mochi.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Dhruv Nair authored
* update * Update src/diffusers/quantizers/gguf/utils.py * update * Update docs/source/en/quantization/gguf.md * update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
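Usage as documented in the gguf.md page touched by this PR; the Q2_K checkpoint URL points at one of the community GGUF conversions:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
image = pipe("a cat holding a sign that says hello").images[0]
```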
-
Aryan authored
* add lora support for ltx * add tests * fix copied from comments * update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
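A hedged inference-side sketch: LTX LoRA checkpoints load through the shared `load_lora_weights` entry point, and the LoRA repo id below is a placeholder:

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("my-user/my-ltx-lora")  # placeholder repo id
video = pipe("a waterfall in a lush jungle", num_frames=65).frames[0]
export_to_video(video, "output.mp4", fps=24)
```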
-
Aryan authored
update
-
- 16 Dec, 2024 14 commits
-
Steven Liu authored
* attnprocessors * lora * make style * fix * fix * sana * typo
-
Aryan authored
* torchao quantizer
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
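Usage per the torchao docs added with this PR; `"int8wo"` (int8 weight-only) is one of the supported quantization type strings:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

quantization_config = TorchAoConfig("int8wo")  # int8 weight-only
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```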
-
Kaiwen Sheng authored
-
hlky authored
-
hlky authored
Fix repaint scheduler
-
hlky authored
-
hlky authored
-
hlky authored
* Use non-human subject in StableDiffusion3ControlNetPipeline example * make style
-
hlky authored
-
hlky authored
use_flow_sigmas copy
-
hlky authored
* Add `dynamic_shifting` to SD3 * calculate_shift * FlowMatchHeunDiscreteScheduler doesn't support mu * Inpaint/img2img
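A sketch of opting in to dynamic shifting on SD3: with `use_dynamic_shifting` enabled, the pipeline derives a resolution-dependent shift (`mu`) via `calculate_shift` and hands it to the scheduler. Per the commit, `FlowMatchHeunDiscreteScheduler` does not accept `mu`:

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
# Enable resolution-dependent timestep shifting on the flow-match scheduler.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, use_dynamic_shifting=True
)
image = pipe("a red panda on a branch", num_inference_steps=28).images[0]
```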
-
hlky authored
-
fancy45daddy authored
* Update pipeline_controlnet.py * make style
Co-authored-by: hlky <hlky@hlky.ac>
-
Aryan authored
* copy transformer * copy vae * copy pipeline * make fix-copies * refactor; make original code work with diffusers; test latents for comparison generated with this commit * move rope into pipeline; remove flash attention; refactor * begin conversion script * make style * refactor attention * refactor * refactor final layer * their mlp -> our feedforward * make style * add docs * refactor layer names * refactor modulation * cleanup * refactor norms * refactor activations * refactor single blocks attention * refactor attention processor * make style * cleanup a bit * refactor double transformer block attention * update mochi attn proc * use diffusers attention implementation in all modules; checkpoint for all values matching original * remove helper functions in vae * refactor upsample * refactor causal conv * refactor resnet * refactor * refactor * refactor * grad checkpointing * autoencoder test * fix scaling factor * refactor clip * refactor llama text encoding * add coauthor * refactor rope; diff: 0.14990234375; reason and fix: create rope grid on cpu and move to device (note: this diverges from the original behaviour; we create the grid on the device, whereas the original implementation creates it on CPU and then moves it to the device, which produces numerical differences in layerwise debugging outputs, but visually the output is the same) * use diffusers timesteps embedding; diff: 0.10205078125 * rename * convert * update * add tests for transformer * add pipeline tests; text encoder 2 is not optional * fix attention implementation for torch * add example * update docs * update docs * apply suggestions from review * refactor vae * update * Apply suggestions from code review * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py * make fix-copies * update
Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com>
Co-authored-by: hlky <hlky@hlky.ac>
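An end-to-end sketch in the spirit of the example added with the port; the diffusers-format weights repo id is an assumption, so check the model docs for the canonical id:

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"  # assumed diffusers-format repo
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
).to("cuda")
pipe.vae.enable_tiling()  # tile the VAE decode to keep memory manageable

output = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(output, "output.mp4", fps=15)
```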
-