- 18 Dec, 2024 11 commits
-
Dhruv Nair authored
update
-
Dhruv Nair authored
update
-
Dhruv Nair authored
* update
* Update docs/source/en/quantization/gguf.md

Co-authored-by: Aryan <aryan@huggingface.co>
-
hlky authored
-
Andrés Romero authored
* flux_control_inpaint - failing test_flux_different_prompts
* removing test_flux_different_prompts?
* fix style
* fix from PR comments
* fix style
* reducing guidance_scale in demo
* Update src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py
* make
* prepare_latents is not copied from
* update docs
* typos

Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: affromero <ubuntu@ip-172-31-17-146.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
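The control-inpaint pipeline above keeps unmasked regions from the initial latents and repaints masked regions from noise. A minimal sketch of that blend on toy flat lists (function and argument names are hypothetical, not the pipeline's actual code):

```python
def blend_inpaint_latents(init_latents, noise, mask):
    # mask == 1 marks regions to repaint with fresh noise;
    # mask == 0 keeps the original latent content.
    # All inputs are equal-length flat lists of floats, a toy
    # stand-in for the pipeline's latent tensors.
    return [m * n + (1.0 - m) * x
            for x, n, m in zip(init_latents, noise, mask)]

latents = blend_inpaint_latents(
    init_latents=[1.0, 2.0, 3.0],
    noise=[0.5, 0.5, 0.5],
    mask=[0.0, 1.0, 0.0],
)
```

Only the middle element (mask == 1) is replaced by noise; the rest of the image is preserved.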
-
Qin Zhou authored
* Support passing kwargs to the sd3 custom attention processor

Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
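The change above plumbs user-supplied kwargs from the pipeline call down into a custom attention processor's `__call__`. A toy sketch of that plumbing (class and parameter names are hypothetical, not diffusers' actual SD3 classes):

```python
class ScalingAttnProcessor:
    """Toy stand-in for a custom attention processor that accepts
    extra kwargs forwarded from the pipeline (hypothetical names)."""

    def __call__(self, hidden_states, scale=1.0, **kwargs):
        # A real processor would compute attention here; we only apply
        # a user-supplied scale to demonstrate the kwargs plumbing.
        return [h * scale for h in hidden_states]


class ToyAttention:
    def __init__(self, processor):
        self.processor = processor

    def forward(self, hidden_states, **attention_kwargs):
        # Extra kwargs pass straight through to the custom processor.
        return self.processor(hidden_states, **attention_kwargs)


attn = ToyAttention(ScalingAttnProcessor())
out = attn.forward([1.0, 2.0], scale=0.5)
```

The point is that the attention module itself stays agnostic: anything it does not consume is handed to the processor unchanged.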
-
Xinyuan Zhao authored
-
Sayak Paul authored
fix: reamde -> readme
-
hlky authored
* Use `torch` in `get_2d_rotary_pos_embed`
* Add deprecation
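`get_2d_rotary_pos_embed` builds axial 2D rotary position embeddings: half the channels encode the row index, half the column index, each with classic RoPE frequencies. A pure-Python sketch of the shape of that computation (diffusers' actual implementation builds these tables as torch tensors; names with `_sketch` are hypothetical):

```python
import math

def get_1d_rotary_freqs(pos, dim, theta=10000.0):
    # Classic RoPE for one axis: one angle per pair of channels.
    freqs = [pos * (theta ** (-2.0 * i / dim)) for i in range(dim // 2)]
    return [math.cos(f) for f in freqs], [math.sin(f) for f in freqs]

def get_2d_rotary_pos_embed_sketch(h, w, dim):
    # Axial 2D RoPE over an h x w grid: concatenate the row-axis and
    # column-axis tables, each using half the embedding channels.
    table = []
    for y in range(h):
        for x in range(w):
            cy, sy = get_1d_rotary_freqs(y, dim // 2)
            cx, sx = get_1d_rotary_freqs(x, dim // 2)
            table.append((cy + cx, sy + sx))
    return table

table = get_2d_rotary_pos_embed_sketch(2, 2, 8)
```

At position (0, 0) every angle is zero, so the cosine table is all ones and the sine table all zeros, which is a handy sanity check.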
-
Sayak Paul authored
fix: licensing header.
-
Sayak Paul authored
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix

Co-authored-by: Aryan <aryan@huggingface.co>
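LoRA support like the SANA work above rests on one idea: a frozen base weight plus a low-rank update, y = W x + (alpha / r) · B A x. A minimal sketch on nested lists, assuming nothing about diffusers' internals:

```python
def lora_linear(x, W, A, B, alpha=1.0):
    """Toy LoRA forward: y = W @ x + (alpha / r) * B @ (A @ x).
    W: d_out x d_in (frozen base), A: r x d_in, B: d_out x r,
    all plain nested lists of floats."""
    def matvec(M, v):
        return [sum(m * vj for m, vj in zip(row, v)) for row in M]
    r = len(A)  # LoRA rank
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

# With B initialised to zero (as LoRA prescribes), the adapter is a
# no-op at the start of fine-tuning.
y = lora_linear([1.0, 2.0],
                W=[[1.0, 0.0], [0.0, 1.0]],
                A=[[1.0, 1.0]],
                B=[[0.0], [0.0]])
```

Only A and B are trained, so the adapter adds just r · (d_in + d_out) parameters per layer.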
-
- 17 Dec, 2024 10 commits
-
hlky authored
-
cjkangme authored
fix: fix typo that causes error
-
Steven Liu authored
delete_adapters

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
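`delete_adapters` removes named LoRA adapters from a pipeline's bookkeeping. A dict-based toy of that bookkeeping, not the real diffusers implementation (the `AdapterRegistry` class and its `load` method are hypothetical):

```python
class AdapterRegistry:
    """Toy model of per-pipeline adapter bookkeeping; the real method
    lives on diffusers' LoRA loader mixins."""

    def __init__(self):
        self.adapters = {}

    def load(self, name, weights):
        self.adapters[name] = weights

    def delete_adapters(self, names):
        # Accept a single name or a list of names, like the documented API.
        if isinstance(names, str):
            names = [names]
        for name in names:
            # Drop the adapter's weights; missing names are ignored here.
            self.adapters.pop(name, None)


reg = AdapterRegistry()
reg.load("toy_style", {"rank": 4})
reg.delete_adapters("toy_style")
```

Deleting frees the adapter's weights entirely, unlike merely disabling it.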
-
Aryan authored
update
-
Dhruv Nair authored
* update (repeated)
* Update src/diffusers/models/transformers/transformer_mochi.py

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
* update (repeated)
* Update src/diffusers/quantizers/gguf/utils.py
* Update docs/source/en/quantization/gguf.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
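GGUF checkpoints like the ones this quantizer loads store weights in fixed-size blocks, each carrying its own scale. A pure-Python sketch of a Q8_0-style round trip (per-block absmax scale plus int8 values); this illustrates the format's idea, not diffusers' or llama.cpp's actual kernels:

```python
def q8_0_quantize(values, block_size=32):
    # Q8_0-style: per block of block_size floats, store one float scale
    # (absmax / 127) and block_size signed 8-bit integers.
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        absmax = max(abs(v) for v in block) or 1.0
        scale = absmax / 127.0
        q = [round(v / scale) for v in block]  # ints in [-127, 127]
        blocks.append((scale, q))
    return blocks

def q8_0_dequantize(blocks):
    # Multiply each int back by its block's scale.
    return [scale * q for scale, qs in blocks for q in qs]

weights = [0.5, -1.0, 0.25, 0.0]
restored = q8_0_dequantize(q8_0_quantize(weights, block_size=4))
```

Per-block scales keep the quantization error local: one outlier only degrades its own block of 32 values rather than the whole tensor.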
-
Aryan authored
update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aryan authored
* add lora support for ltx
* add tests
* fix copied from comments
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aryan authored
update
-
Sayak Paul authored
add contribution note for lawrence.
-
- 16 Dec, 2024 19 commits
-
Steven Liu authored
* attnprocessors
* lora
* make style
* fix
* fix
* sana
* typo
-
Aryan authored
* torchao quantizer

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
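A common mode the torchao quantizer enables is int8 weight-only quantization with per-channel scales. A pure-Python sketch of that idea (this is an illustration of the technique, not torchao's code; function names are hypothetical):

```python
def int8_weight_only(W):
    # Per output channel (row of W): scale = absmax / 127,
    # store int8 values plus one float scale per row.
    qW, scales = [], []
    for row in W:
        absmax = max(abs(v) for v in row) or 1.0
        s = absmax / 127.0
        scales.append(s)
        qW.append([round(v / s) for v in row])
    return qW, scales

def dequant_row(qrow, s):
    # Recover approximate floats for one output channel.
    return [s * q for q in qrow]

qW, scales = int8_weight_only([[0.1, -0.2],
                               [1.27, 0.0]])
```

Weight-only means activations stay in floating point; only the stored weights shrink to 8 bits, which cuts memory roughly 4x versus fp32 at a small accuracy cost.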
-
Kaiwen Sheng authored
-
hlky authored
-
hlky authored
Fix repaint scheduler
-
hlky authored
-
hlky authored
-
hlky authored
* Use non-human subject in StableDiffusion3ControlNetPipeline example
* make style
-
hlky authored
-
hlky authored
use_flow_sigmas copy
-
hlky authored
* Add `dynamic_shifting` to SD3
* calculate_shift
* FlowMatchHeunDiscreteScheduler doesn't support mu
* Inpaint/img2img
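Dynamic shifting adapts the flow-matching sigma schedule to the image token count: larger images get a larger shift `mu`, pushing sampling toward noisier timesteps. A sketch of the commonly used linear form (default parameter values are assumptions modeled on the Flux/SD3 pipelines, not authoritative):

```python
import math

def calculate_shift(image_seq_len, base_seq_len=256, max_seq_len=4096,
                    base_shift=0.5, max_shift=1.16):
    # Linearly interpolate the shift mu between base_shift and max_shift
    # as the number of image tokens grows.
    m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
    b = base_shift - m * base_seq_len
    return image_seq_len * m + b

def time_shift(mu, sigma):
    # One common form of the shift: warp a sigma in (0, 1] toward more
    # noise as mu grows; mu = 0 leaves the schedule unchanged at 0.5.
    return math.exp(mu) / (math.exp(mu) + (1.0 / sigma - 1.0))

mu = calculate_shift(256)       # smallest supported resolution
shifted = time_shift(mu, 0.5)   # mid-schedule sigma, shifted upward
```

Schedulers that cannot accept `mu` (as the Heun flow-match scheduler above) must fall back to a static shift.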
-
hlky authored
-
Sayak Paul authored
add rest of the lora loader mixins to the docs.
-
fancy45daddy authored
* Update pipeline_controlnet.py
* make style

Co-authored-by: hlky <hlky@hlky.ac>
-
Aryan authored
* copy transformer
* copy vae
* copy pipeline
* make fix-copies
* refactor; make original code work with diffusers; test latents for comparison generated with this commit
* move rope into pipeline; remove flash attention; refactor
* begin conversion script
* make style
* refactor attention
* refactor
* refactor final layer
* their mlp -> our feedforward
* make style
* add docs
* refactor layer names
* refactor modulation
* cleanup
* refactor norms
* refactor activations
* refactor single blocks attention
* refactor attention processor
* make style
* cleanup a bit
* refactor double transformer block attention
* update mochi attn proc
* use diffusers attention implementation in all modules; checkpoint for all values matching original
* remove helper functions in vae
* refactor upsample
* refactor causal conv
* refactor resnet
* refactor
* refactor
* refactor
* grad checkpointing
* autoencoder test
* fix scaling factor
* refactor clip
* refactor llama text encoding
* add coauthor
* refactor rope; diff: 0.14990234375; reason and fix: create rope grid on cpu and move to device. Note: The following line diverges from original behaviour. We create the grid on the device, whereas the original implementation creates it on CPU and then moves it to device. This results in numerical differences in layerwise debugging outputs, but visually it is the same.
* use diffusers timesteps embedding; diff: 0.10205078125
* rename
* convert
* update
* add tests for transformer
* add pipeline tests; text encoder 2 is not optional
* fix attention implementation for torch
* add example
* update docs
* update docs
* apply suggestions from review
* refactor vae
* update
* Apply suggestions from code review
* Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
* make fix-copies
* update

Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
Dhruv Nair authored
update
-
Sayak Paul authored
minor stuff to ltx video docs.
-
Sayak Paul authored
-
Sayak Paul authored
update always test pipelines list.
-