- 09 Jan, 2024 3 commits
-
-
Sayak Paul authored
fix: vae type
-
jiqing-feng authored
* enable stable-xl textual inversion * check if optimizer_2 exists * check text_encoder_2 before using * add textual inversion for sdxl in a single file * fix style * fix example style * reset for error changes * add readme for sdxl * fix style * disable autocast as it will cause cast error when weight_dtype=bf16 * fix spelling error * fix style and readme and 8bit optimizer * add README_sdxl.md link * add tracker key on log_validation * run style * rm the second center crop --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
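For context, the embedding learned by the new SDXL example script is later consumed at inference time through `load_textual_inversion`. A minimal sketch, assuming the script saved one embedding file per text encoder (the file names and the `<cat-toy>` token below are hypothetical placeholders):

```python
import torch
from safetensors.torch import load_file
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical output files of the SDXL textual inversion script:
# one learned embedding per text encoder.
embeds_1 = load_file("sdxl-textual-inversion/learned_embeds.safetensors")
embeds_2 = load_file("sdxl-textual-inversion/learned_embeds_2.safetensors")

# SDXL has two text encoders, so the new token is registered with both.
pipe.load_textual_inversion(
    embeds_1, token="<cat-toy>", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer
)
pipe.load_textual_inversion(
    embeds_2, token="<cat-toy>", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2
)

image = pipe("a photo of a <cat-toy> on a beach").images[0]
```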
-
Patrick von Platen authored
* finish * finish --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 08 Jan, 2024 2 commits
-
-
Yasuna authored
* add tutorials to toctree.yml * fix title * fix words * add overview ja * fix diffusion to 拡散 * fix line 21 * add space * delete supported pipeline * fix tutorial_overview.md * fix space * fix typo * Delete docs/source/ja/tutorials/using_peft_for_inference.md this file is not translated * Delete docs/source/ja/tutorials/basic_training.md this file is not translated * Delete docs/source/ja/tutorials/autopipeline.md this file is not translated * fix toctree
-
Sayak Paul authored
-
- 06 Jan, 2024 1 commit
-
-
Sayak Paul authored
minor changes
-
- 05 Jan, 2024 15 commits
-
-
Sayak Paul authored
-
Lucain authored
-
Sayak Paul authored
* introduce integrations module. * remove duplicate methods. * better imports. * move to loaders.py * remove peftadaptermixin from modelmixin. * add: peftadaptermixin selectively. * add: entry to _toctree * Empty-Commit
-
Dhruv Nair authored
Correctly handle creating model index json files when setting compiled modules in pipelines. (#6436) update
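The scenario this fix addresses is a pipeline whose modules were replaced by `torch.compile`-wrapped versions before saving; a hedged sketch of that flow (the output directory is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compiling wraps the UNet in torch._dynamo.OptimizedModule; per the commit,
# model_index.json should still record the original module class on save.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

pipe.save_pretrained("sd15-compiled-unet")  # placeholder output directory
```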
-
Liang Hou authored
-
Vinh H. Pham authored
* init works * add gluegen pipeline * add gluegen code * add another way to load language adapter * make style * Update README.md * change doc
-
Sayak Paul authored
* add: experimental script for diffusion dpo training. * random_crop cli. * fix: caption tokenization. * fix: pixel_values index. * fix: grad? * debug * fix: reduction. * fixes in the loss calculation. * style * fix: unwrap call. * fix: validation inference. * add: initial sdxl script * debug * make sure images in the tuple are of same res * fix model_max_length * report print * boom * fix: numerical issues. * fix: resolution * comment about resize. * change the order of the training transformation. * save call. * debug * remove print * manually detaching necessary? * use the same vae for validation. * add: readme.
-
Sayak Paul authored
* introduce unload_lora. * fix-copies
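At the pipeline level, the load/unload round trip looks roughly like the sketch below; the LoRA repository id is a placeholder, and `load_lora_weights`/`unload_lora_weights` are the existing public entry points rather than anything specific to this commit:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a LoRA and generate with it...
pipe.load_lora_weights("some-user/some-style-lora")  # placeholder repo id
styled = pipe("a castle, in the LoRA's style").images[0]

# ...then drop the adapter and fall back to the base weights.
pipe.unload_lora_weights()
plain = pipe("a castle").images[0]
```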
-
Sayak Paul authored
* post release * style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Junsheng121 authored
* null-text-inversion-implementation * edited * edited * edited * edited * edited * edit * makestyle --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* edebug * debug * more debug * more more debug * remove tests for LoRAAttnProcessors. * rename
-
Linoy Tsaban authored
* unwrap text encoder when saving hook only for full text encoder tuning * unwrap text encoder when saving hook only for full text encoder tuning * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
jiqing-feng authored
* 4th Gen Intel Xeon and later support bf16 * fix bf16 notes
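The note concerns running pipelines in bfloat16 on CPU; a minimal sketch of what that looks like (the checkpoint id and step count are just example values):

```python
import torch
from diffusers import StableDiffusionPipeline

# On 4th Gen Intel Xeon (Sapphire Rapids) and later, bf16 matmuls are
# hardware-accelerated, so loading the weights in bfloat16 pays off on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.bfloat16
)
pipe.to("cpu")

with torch.inference_mode():
    image = pipe("a photo of a lighthouse", num_inference_steps=20).images[0]
```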
-
Horseee authored
* add documentation for DeepCache * fix typo * add wandb url for DeepCache * fix some typos * add item in _toctree.yml * update formats for arguments * Update deepcache.md * Update docs/source/en/optimization/deepcache.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * add StableDiffusionXLPipeline in doc * Separate SDPipeline and SDXLPipeline * Add the paper link of ablation experiments for hyper-parameters * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
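DeepCache itself lives in the external `DeepCache` package; its documented usage pattern is roughly the following sketch (the cache parameters are example values, not recommendations):

```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Wrap the pipeline with the DeepCache helper and enable feature caching.
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # example values
helper.enable()

image = pipe("a photo of an astronaut on the moon").images[0]

helper.disable()  # restore the original, uncached UNet forward pass
```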
-
dg845 authored
* Make WDS pipeline interpolation type configurable. * Make the VAE encoding batch size configurable. * Make lora_alpha and lora_dropout configurable for LCM LoRA scripts. * Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable. * Make LoRA target modules configurable for LCM-LoRA scripts. * Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script. * apply suggestions from review
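The newly exposed options (`lora_alpha`, `lora_dropout`, target modules) ultimately feed a peft `LoraConfig`; a hedged sketch of what such a configuration looks like (the values and module names are examples, not the scripts' defaults):

```python
from peft import LoraConfig

# lora_alpha, lora_dropout and the target modules are now CLI-configurable in
# the LCM-LoRA scripts; internally they end up in a config along these lines.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```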
-
- 04 Jan, 2024 10 commits
-
-
Steven Liu authored
fix local links
-
Lucain authored
* Respect offline mode when loading model * default to local entry if ConnectionError
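The offline behaviour comes from `huggingface_hub`; a sketch of loading a previously cached model without network access:

```python
import os

# Force huggingface_hub into offline mode; only locally cached files are used.
os.environ["HF_HUB_OFFLINE"] = "1"

import torch
from diffusers import StableDiffusionPipeline

# Succeeds as long as the checkpoint has been downloaded (cached) before.
# Passing local_files_only=True to from_pretrained achieves the same per call.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
```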
-
Sayak Paul authored
-
Sayak Paul authored
* debug * debug test_with_different_scales_fusion_equivalence * use the right method. * place it right. * let's see. * let's see again * alright then. * add a comment.
-
sayakpaul authored
-
sayakpaul authored
-
Sayak Paul authored
* disable running peft non-peft lora test in the peft env. * Empty-Commit
-
Chi authored
* I added a new docstring to the class so it is easier for other developers to understand what it does and where it is used. * Update src/diffusers/models/unet_2d_blocks.py These changes were suggested by a maintainer. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/models/unet_2d_blocks.py Add suggested text Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update unet_2d_blocks.py Changed the Parameter heading to Args. * Update unet_2d_blocks.py Set proper indentation in this file. * Update unet_2d_blocks.py Small change to the act_fun argument line. * Ran the black command to reformat the code style. * Update unet_2d_blocks.py Added a docstring similar to the one in the original diffusion repository. * Better way to write the binarize function * Solve check_code_quality error * Reformat the file that was missed before opening the pull request * Update image_processor.py * remove extra variable and space * Update image_processor.py * Ran the ruff library to reformat the file --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 03 Jan, 2024 3 commits
-
-
Sayak Paul authored
Update README_sdxl.md
-
Sayak Paul authored
* handle rest of the stuff related to deprecated lora stuff. * fix: copies * don't modify the UNet in-place. * fix: temporal autoencoder. * manually remove lora layers. * don't copy unet. * alright * remove lora attn processors from unet3d * fix: unet3d. * style * Empty-Commit
-
Sayak Paul authored
* add: test to check if peft loras are loadable in non-peft envs. * add torch_device appropriately. * fix: get_dummy_inputs(). * test logits. * rename * debug * debug * fix: generator * new assertion values after fixing the seed. * shape * remove print statements and settle this. * to update values. * change values when lora config is initialized under a fixed seed. * update colab link * update notebook link * sanity restored by getting the exact same values without peft.
-
- 02 Jan, 2024 6 commits
-
-
YiYi Xu authored
add doc Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Vinh H. Pham authored
correct reading variables
-
Aryan V S authored
* add clip_skip, freeu, qkv * fix * add ip-adapter support * callback on step end * update * fix NoneType bug * fix * add guidance scale embedding * add textual inversion
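These features mirror what the core pipelines already expose; a sketch of the equivalent calls on a plain StableDiffusionPipeline (the FreeU coefficients and the reference-image path are example values):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU: re-weight the UNet backbone/skip features (example coefficients).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# IP-Adapter: condition generation on a reference image.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
ref_image = load_image("reference.png")  # placeholder path to any reference image

# A callback that fires at the end of every denoising step.
def on_step_end(pipeline, step, timestep, callback_kwargs):
    print(f"finished step {step}")
    return callback_kwargs

image = pipe(
    "a cozy cabin in the woods",
    ip_adapter_image=ref_image,
    clip_skip=1,                       # skip final CLIP text-encoder layer(s)
    callback_on_step_end=on_step_end,
).images[0]
```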
-
Linoy Tsaban authored
[bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts (#6356) * change timesteps used to calculate snr when --with_prior_preservation is enabled * change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script) * style * revert canonical script to before snr gamma change --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Daniel Socek authored
-
CyrusVorwald authored
* add StableDiffusionXLControlNetInpaintPipeline to auto pipeline * fixed style
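With this mapping in place, `AutoPipelineForInpainting` can resolve to the new pipeline class when a ControlNet is supplied; a hedged sketch (the ControlNet checkpoint is just an example):

```python
import torch
from diffusers import AutoPipelineForInpainting, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)

# With an SDXL checkpoint plus a ControlNet, the auto pipeline should now
# resolve to StableDiffusionXLControlNetInpaintPipeline.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

print(pipe.__class__.__name__)  # expected: StableDiffusionXLControlNetInpaintPipeline
```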
-