"vscode:/vscode.git/clone" did not exist on "cc7b8bd439bf6b2cc09c7d71084e5302044d4d91"
- 18 Jun, 2025 2 commits
-
-
Sayak Paul authored
change to 2025 licensing for remaining
-
Leo Jiang authored
* [training] add DeepSpeed support to the HiDream LoRA training script
* Apply style fixes

Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
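The DeepSpeed path in the example scripts is driven by Accelerate; the sketch below is a minimal illustration (not the script itself) of handing a ZeRO-2 plugin to an Accelerator. The stage, precision, and accumulation values are placeholders, and it assumes deepspeed is installed and the script is run under `accelerate launch`.

```python
# Minimal sketch: enabling DeepSpeed through Accelerate, the way the example
# training scripts typically pick it up. Values here are placeholders.
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

deepspeed_plugin = DeepSpeedPlugin(
    zero_stage=2,                   # ZeRO optimizer-state/gradient sharding
    gradient_accumulation_steps=1,  # keep in sync with the training args
)
accelerator = Accelerator(
    mixed_precision="bf16",
    deepspeed_plugin=deepspeed_plugin,
)
# model, optimizer, dataloader, lr_scheduler = accelerator.prepare(...)
```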
-
- 17 Jun, 2025 1 commit
-
-
Linoy Tsaban authored
* lora alpha
* Apply style fixes
* Update examples/advanced_diffusion_training/README_flux.md
* fix readme format

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 16 Jun, 2025 1 commit
-
-
Sayak Paul authored
* show how LoRA metadata should be incorporated in training scripts
* typing
* fix

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 13 Jun, 2025 1 commit
-
-
Sayak Paul authored
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
* reraise.
* move argument.

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
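The feature above persists LoRA configuration alongside the weights; conceptually this rests on the safetensors metadata header. The hedged sketch below shows that mechanism directly with the safetensors API; the tensor name and metadata keys are illustrative, not the exact keys diffusers writes.

```python
# Illustration of the underlying idea: a safetensors file can carry a
# string-to-string metadata header next to the tensors, which is where LoRA
# settings such as rank and alpha can be stored and read back.
import torch
from safetensors import safe_open
from safetensors.torch import save_file

state_dict = {"transformer.to_q.lora_A.weight": torch.zeros(4, 64)}  # made-up key
metadata = {"r": "4", "lora_alpha": "4"}  # metadata values must be strings

save_file(state_dict, "lora_with_metadata.safetensors", metadata=metadata)

with safe_open("lora_with_metadata.safetensors", framework="pt") as f:
    print(f.metadata())  # {'r': '4', 'lora_alpha': '4'}
```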
-
- 09 Jun, 2025 1 commit
-
-
Philip Brown authored
* Add community class StableDiffusionXL_T5Pipeline, to be used with the base model opendiffusionai/stablediffusionxl_t5
* Changed pooled_embeds to use projection instead of slice
* "make style" tweaks
* Added comments to top of code
* Apply style fixes
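Community pipelines like this one are loaded through the custom_pipeline argument of DiffusionPipeline.from_pretrained. The sketch below uses the base checkpoint named in the commit, but the custom_pipeline file name and the generation arguments are assumptions.

```python
# Rough usage sketch for a community pipeline; the custom_pipeline name is an
# assumption about the file added under examples/community.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "opendiffusionai/stablediffusionxl_t5",             # base model named in the commit
    custom_pipeline="pipeline_stable_diffusion_xl_t5",  # assumed community file name
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor fox in a misty forest").images[0]
image.save("fox.png")
```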
-
- 05 Jun, 2025 1 commit
-
-
Markus Pobitzer authored
[examples] flux-control: use num_training_steps_for_scheduler in get_scheduler instead of args.max_train_steps * accelerator.num_processes

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
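The point of this fix is that the LR scheduler must be sized by the number of steps it will see across all processes, derived from epochs when --max_train_steps is not set. Below is a hedged, self-contained sketch of that computation; the helper and its argument names are placeholders, not the script's variables.

```python
# Minimal sketch of sizing an LR scheduler for distributed training.
import math
import torch
from diffusers.optimization import get_scheduler

def scheduler_training_steps(max_train_steps, num_train_epochs,
                             batches_per_process, grad_accum_steps, num_processes):
    if max_train_steps is None:
        updates_per_epoch = math.ceil(batches_per_process / grad_accum_steps)
        max_train_steps = num_train_epochs * updates_per_epoch
    # the scheduler is stepped once per process per optimizer step, so scale it up
    return max_train_steps * num_processes

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=100 * 2,  # warmup steps are scaled by num_processes as well
    num_training_steps=scheduler_training_steps(None, 10, 500, 4, 2),
)
```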
-
- 30 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in strings and comments
* Update src/diffusers/hooks/hooks.py
* Update src/diffusers/hooks/hooks.py
* Update layerwise_casting.py
* Apply style fixes
* update

Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 29 May, 2025 2 commits
-
-
Justin Ruan authored
fix wrong indent for training controlnet
-
Yuanzhou Cai authored
fix lr scheduler steps count

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 27 May, 2025 1 commit
-
-
Linoy Tsaban authored
add comment to install prodigy
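For context, the Prodigy optimizer the scripts refer to comes from the prodigyopt package (pip install prodigyopt). The sketch below shows the usual way it is constructed; the hyperparameters are placeholders rather than the script's exact defaults.

```python
# Hedged sketch: constructing the Prodigy optimizer for a set of LoRA
# parameters. Prodigy adapts its own step size, so lr is conventionally 1.0.
# Requires: pip install prodigyopt
import torch
from prodigyopt import Prodigy

params = [torch.nn.Parameter(torch.zeros(8, 8))]  # stand-in for LoRA parameters
optimizer = Prodigy(params, lr=1.0, weight_decay=1e-2)

# optimizer.step() / optimizer.zero_grad() then work like any torch optimizer
```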
-
- 20 May, 2025 1 commit
-
-
Sai Shreyas Bhavanasi authored
* Refactoring Regional Prompting pipeline to use DiffusionPipeline instead of StableDiffusionPipeline
* Apply style fixes
-
- 19 May, 2025 1 commit
-
-
Quentin Gallouédec authored
* Use HF Papers
* Apply style fixes

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 13 May, 2025 2 commits
-
-
Kenneth Gerald Hamilton authored
[train_dreambooth.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env (#11239)

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Abdellah Oumida authored
-
- 08 May, 2025 2 commits
-
-
scxue authored
* test permission
* Add cross attention type for Sana-Sprint.
* Add Sana-Sprint training script in diffusers.
* make style && make quality
* modify the attention processor with `set_attn_processor` and change `SanaAttnProcessor3_0` to `SanaVanillaAttnProcessor`
* Add import for SanaVanillaAttnProcessor
* Add README file.
* Apply suggestions from code review
* style
* Update examples/research_projects/sana/README.md

Co-authored-by: lawrence-cj <cjs1020440147@icloud.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
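The processor swap mentioned above uses the standard set_attn_processor hook on diffusers models. Since SanaVanillaAttnProcessor lives in the Sana-Sprint research script rather than the library, the sketch below demonstrates the same mechanism generically, with the library's stock AttnProcessor2_0 on a small UNet; the tiny checkpoint id is an assumption.

```python
# Generic illustration of the `set_attn_processor` hook the commit refers to.
# The Sana-Sprint script swaps in its own SanaVanillaAttnProcessor this way.
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-torch",  # assumed tiny test checkpoint
    subfolder="unet",
)
unet.set_attn_processor(AttnProcessor2_0())  # apply one processor class to every attention block
print({type(p).__name__ for p in unet.attn_processors.values()})
```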
-
Linoy Tsaban authored
* add lora_alpha and lora_dropout
* Apply style fixes
* add lora_alpha and lora_dropout
* Apply style fixes
* revert lora_alpha until #11324 is merged
* Apply style fixes
* empty commit

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
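For reference, lora_alpha and lora_dropout are plain peft LoraConfig fields. A minimal sketch of how a training script typically builds and attaches such a config follows; the rank, alpha, dropout, target modules, and the tiny checkpoint id are placeholder assumptions.

```python
# Hedged sketch: a peft LoraConfig carrying lora_alpha and lora_dropout,
# attached to a diffusers model via add_adapter. Values are placeholders.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet"  # assumed tiny checkpoint
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,       # scaling factor; effective scale is lora_alpha / r
    lora_dropout=0.05,   # dropout on the LoRA branch during training
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)
```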
-
- 05 May, 2025 6 commits
-
-
RogerSinghChugh authored
* Update the text-to-image SDXL LoRA training script with the new interpolation mode.
* ran make style and make quality.
-
Yijun Lee authored
* Set LANCZOS as the default interpolation method for image resizing.
* style: run make style and quality checks
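Several commits in this log switch the training scripts' resize transform to LANCZOS; the sketch below shows the torchvision pattern they converge on. The resolution and the sample image are placeholders.

```python
# Illustration of the LANCZOS default these commits introduce: the resize
# transform is built with an explicit interpolation mode.
from PIL import Image
from torchvision import transforms

resolution = 1024  # placeholder; the scripts take this from --resolution
train_resize = transforms.Resize(
    resolution, interpolation=transforms.InterpolationMode.LANCZOS
)

image = Image.new("RGB", (640, 480), "gray")  # stand-in for a training image
resized = train_resize(image)
print(resized.size)
```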
-
Sayak Paul authored
* feat: enable quantization for hidream lora training.
* better handle compute dtype.
* finalize.
* fix dtype.

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
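Quantized LoRA training here generally means loading the frozen base transformer in 4-bit while keeping the trainable LoRA parameters and compute in a higher dtype. The sketch below is an assumption-laden illustration using diffusers' BitsAndBytesConfig; the checkpoint id and subfolder are assumptions, not taken from the commit.

```python
# Hedged sketch: loading the frozen HiDream transformer in 4-bit (NF4) with a
# bf16 compute dtype, the usual shape of "quantized LoRA training".
import torch
from diffusers import BitsAndBytesConfig, HiDreamImageTransformer2DModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # the "compute dtype" the commit handles
)
transformer = HiDreamImageTransformer2DModel.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",  # assumed checkpoint
    subfolder="transformer",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
# LoRA adapters are then added on top and trained in higher precision.
```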
-
Evan Han authored
[train_dreambooth_lora_lumina2] Add LANCZOS as the default interpolation mode for image resizing (#11491)
[ADD] interpolation
-
MinJu-Ha authored
[train_dreambooth_lora_sdxl] Add --image_interpolation_mode option for image resizing (default to lanczos) (#11490)
feat(train_dreambooth_lora_sdxl): support --image_interpolation_mode with default to lanczos
-
Parag Ekbote authored
* Add LANCZOS as default interpolation mode.
* update script
* Update as per code review.
* make style.
-
- 02 May, 2025 2 commits
-
-
Yash authored
[train_dreambooth_lora_flux_advanced] Add LANCZOS as the default interpolation mode for image resizing (#11472)
* [train_controlnet_sdxl] Add LANCZOS as the default interpolation mode for image resizing
* [train_dreambooth_lora_flux_advanced] Add LANCZOS as the default interpolation mode for image resizing
-
Yuanzhou authored
[train_dreambooth_lora_sdxl_advanced] Add LANCZOS as the default interpolation mode for image resizing (#11471)
-
- 01 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in docs and comments
* Apply style fixes

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 30 Apr, 2025 2 commits
-
-
Vaibhav Kumawat authored
* Add LANCZOS as default interpolation mode.
* LANCZOS as default interpolation
* LANCZOS as default interpolation mode
* Added LANCZOS as default interpolation mode
-
captainzz authored
* upload StableDiffusion3InstructPix2PixPipeline
* Move to community
* Add readme
* Fix images
* remove images
* Change image url
* fix
* Apply style fixes
-
- 29 Apr, 2025 1 commit
-
-
Youlun Peng authored
Set LANCZOS as the default interpolation for image resizing
-
- 28 Apr, 2025 3 commits
-
-
Linoy Tsaban authored
remove unnecessary pipeline moving to cpu in validation

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
tongyu authored
* Update train_text_to_image_lora.py
* update_train_text_to_image_lora
-
tongyu authored
* Update train_text_to_image.py
* update
-
- 26 Apr, 2025 1 commit
-
-
Mert Erbak authored
* Set LANCZOS as default interpolation mode for resizing
* [train_dreambooth_lora.py] Set LANCZOS as default interpolation mode for resizing
-
- 24 Apr, 2025 2 commits
-
-
co63oc authored
-
Linoy Tsaban authored
* 1. add pre-computation of prompt embeddings when custom prompts are used as well
  2. save model card even if model is not pushed to hub
  3. remove scheduler initialization from code example - not necessary anymore (it's now in the base model's config)
  4. add skip_final_inference - to allow running with validation, but skip the final loading of the pipeline with the lora weights to reduce memory requirements
* pre encode validation prompt as well
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* pre encode validation prompt as well
* Apply style fixes
* empty commit
* change default trained modules
* empty commit
* address comments + change encoding of validation prompt (before it was only pre-encoded if custom prompts are provided, but should be pre-encoded either way)
* Apply style fixes
* empty commit
* fix validation_embeddings definition
* fix final inference condition
* fix pipeline deletion in last inference
* Apply style fixes
* empty commit
* layers
* remove readme remarks on only pre-computing when instance prompt is provided and change example to 3d icons
* smol fix
* empty commit

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
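Pre-computing prompt embeddings means running the text encoders once up front, caching the results (including the validation prompt), and freeing the encoders before the training loop. The sketch below illustrates that shape only; `encode_prompt` is a hypothetical stand-in for the script's actual encoding helper, and the prompts are placeholders.

```python
# Illustration of the pre-computation pattern, not the script's code.
import gc
import torch

def encode_prompt(prompt: str) -> torch.Tensor:
    # hypothetical stand-in for the script's text-encoder call
    return torch.zeros(1, 77, 768)

train_prompts = ["a photo of sks dog"]                    # placeholder prompts
validation_prompt = "a photo of sks dog in a bucket"

with torch.no_grad():
    cached = {p: encode_prompt(p) for p in set(train_prompts + [validation_prompt])}

# after caching, the real script deletes the text encoders and frees memory
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()

# the training loop then reads embeddings from `cached` instead of re-encoding each step
print({p: e.shape for p, e in cached.items()})
```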
-
- 23 Apr, 2025 3 commits
-
-
Teriks authored
* Kolors additional pipelines, community contrib

Co-authored-by: Teriks <Teriks@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Ishan Dutta authored
-
Ameer Azam authored
Small change: requirements_sana.txt to requirements_hidream.txt
-
- 22 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* update README
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* fix offloading
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion

Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 21 Apr, 2025 1 commit
-
-
PromeAI authored
* fix issue that training flux controlnet was unstable and validation results were unstable
* del unused code pieces, fix grammar

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-