- 18 Nov, 2024 2 commits
Parag Ekbote authored
Add 4 notebooks for community scripts and minor script improvements.

Grant Sherrick authored
* Add server example.
* Minor updates to README.
* Add fixes after local testing.
* Apply suggestions from code review (README updates).
* More doc updates and docs build fixes.
* Fix style issues.
* Fix toc.
* Minor reformatting.
* Move docs to the proper location.
* Fix missing tick.
* Sync docs changes back to README.
* Very minor update to docs to add a space.
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

- 16 Nov, 2024 1 commit
Parag Ekbote authored
Update file paths to the research_projects folder.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 13 Nov, 2024 1 commit
Parag Ekbote authored
Add notebooks on community scripts.

- 08 Nov, 2024 3 commits
Sayak Paul authored
Fix: gradient unscaling problem.
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>

SahilCarterr authored
* Fix use_dora.
* Fix style and quality.
* Fix use_dora with the installed peft version.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Michael Tkachuk authored
Refactored.

- 07 Nov, 2024 1 commit
Sayak Paul authored
* Move the VAE flax module and the controlnet module; handle sparsectrl.
* Gracefully deprecate controlnet deps.
* Fix doc paths, fix-copies, style, merge conflicts, and assorted follow-up fixes.
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

- 06 Nov, 2024 2 commits
SahilCarterr authored
Updated encode prompt and CLIP encode prompt.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Sookwan Han authored
Add a new community pipeline for 'Adaptive Mask Inpainting', introduced in [ECCV 2024] Beyond the Contact: Discovering Comprehensive Affordance for 3D Objects from Pre-trained 2D Diffusion Models.

- 02 Nov, 2024 1 commit
Dorsa Rohani authored
* Add a diffusion policy example under examples/reinforcement_learning (model creation, training, comprehensive tests).
* Add pretrained model weights, documentation, docstrings, and comments; update the README.
* Enable (then revert) a CPU-only option; remove unneeded files and a data entry from .gitignore.
* Apply review suggestions, including safe globals for weights_only=True and safe weights loading.
* Code quality fixes and reformatting.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

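The "safe globals for weights_only=True" item above refers to PyTorch's restricted unpickler. A minimal sketch of the idea, with an illustrative class name rather than the example's actual code: torch.load(..., weights_only=True) refuses to unpickle arbitrary objects, so any non-tensor class stored in a checkpoint has to be allow-listed first (PyTorch 2.4+).

```python
# Illustrative sketch only: PolicyConfig stands in for whatever non-tensor object
# the checkpoint stores; add_safe_globals allow-lists it for the restricted unpickler.
import torch
from torch.serialization import add_safe_globals

class PolicyConfig:
    def __init__(self, horizon: int = 16):
        self.horizon = horizon

add_safe_globals([PolicyConfig])
torch.save({"config": PolicyConfig(), "weights": torch.zeros(2)}, "policy.pt")
state = torch.load("policy.pt", weights_only=True)  # succeeds because PolicyConfig is allow-listed
```
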
- 01 Nov, 2024 4 commits
Leo Jiang authored
* Improve NPU performance.
* [bugfix] Fix freeing of NPU memory.
* Reduce memory cost for the Flux training process.
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

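For context on the NPU memory fix, a hedged sketch of the general pattern (assuming torch_npu is installed; this is not the library's exact helper): on Ascend devices the cache has to be emptied through the NPU backend rather than CUDA.

```python
# Generic sketch: empty whichever accelerator cache is present after garbage collection.
import gc
import torch

def free_memory() -> None:
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif hasattr(torch, "npu") and torch.npu.is_available():  # requires torch_npu
        torch.npu.empty_cache()
```
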
Boseong Jeon authored
Handle mixed precision and add model unwrapping.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>

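A minimal sketch of the mixed precision and unwrap handling (generic accelerate pattern, not the script's exact code): the model returned by accelerator.prepare is wrapped, so it is unwrapped and its weights cast back to float32 before saving.

```python
# Sketch: unwrap the prepared model and upcast its weights before saving.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 8))

unwrapped = accelerator.unwrap_model(model)
state_dict = {k: v.to(torch.float32) for k, v in unwrapped.state_dict().items()}
torch.save(state_dict, "model.pt")
```
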
ScilenceForest authored
Update train_controlnet_flux.py: fix the inconsistency between the size of image and the size of validation_image, which caused np.stack to raise an error.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

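The np.stack failure above comes from arrays with different shapes; a small illustrative sketch of the underlying idea (bring the images to a common resolution before stacking), not the script's exact fix:

```python
# Illustrative only: np.stack requires identical shapes, so resize first.
import numpy as np
from PIL import Image

images = [Image.new("RGB", (768, 512)), Image.new("RGB", (640, 640))]
resolution = 512
resized = [img.resize((resolution, resolution)) for img in images]
stacked = np.stack([np.asarray(img) for img in resized])  # shape: (2, 512, 512, 3)
```
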
Leo Jiang authored
NPU implementation for FLUX.
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>

- 31 Oct, 2024 3 commits
Abhipsha Das authored
* Model card generation edit: add a missed tag, fix a param name, fix a var, change str to dict.
* Add a use_dora check and use correct tags for LoRA.
* make style && make quality
Co-authored-by: Aryan <aryan@huggingface.co>

Sayak Paul authored
* Use the lr when using 8-bit Adam.
* Remove the explicit lr since it is packed into params_to_optimize.
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
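A hedged sketch of the 8-bit Adam change (variable names are illustrative, not the script's): the learning rate travels in the per-group "lr" entries of params_to_optimize, so no separate lr argument needs to be passed to the optimizer.

```python
# Sketch assuming bitsandbytes is installed; each param group carries its own lr.
import torch
import bitsandbytes as bnb

unet, text_encoder = torch.nn.Linear(16, 16), torch.nn.Linear(16, 16)
params_to_optimize = [
    {"params": unet.parameters(), "lr": 1e-4},
    {"params": text_encoder.parameters(), "lr": 5e-5},
]
optimizer = bnb.optim.AdamW8bit(params_to_optimize, betas=(0.9, 0.999), weight_decay=1e-2)
```
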
Sayak Paul authored
[training] Fixes to the quantization training script and add the AdEMAMix optimizer as an option (#9806). Includes follow-up fixes.
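A hedged sketch of opting into AdEMAMix, assuming a bitsandbytes release that ships it; the exact class name and arguments should be checked against the installed version.

```python
# Assumption: bnb.optim.AdEMAMix is available in the installed bitsandbytes release.
import torch
import bitsandbytes as bnb

params = torch.nn.Linear(8, 8).parameters()
optimizer = bnb.optim.AdEMAMix(params, lr=1e-4)
```
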
- 28 Oct, 2024 5 commits
Raul Ciotescu authored
Add the controlnet pipeline for PixArt-alpha.
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: junsongc <cjs1020440147@icloud.com>

Linoy Tsaban authored
* Make the LoRA target modules configurable and change the default.
* Fix a bug when using Prodigy and training the text encoder.
* Fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full DreamBooth as well.
* Add a test and notes; address Sayak's comments; fix the test; style.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
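In practice, "configurable LoRA target modules" maps to peft's LoraConfig; a short sketch in which the module names are illustrative rather than the script's defaults:

```python
# Sketch: pass the modules chosen via a flag such as --lora_layers to LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
```
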
Linoy Tsaban authored
* Make layers configurable; update the README.
* Add a layer test and nargs support (nargs later reverted).
* Remove a print, address Sayak's comments, style.

Biswaroop authored
[Fix] Remove setting the lr for the T5 text encoder when using Prodigy in the Flux DreamBooth LoRA script (#9473), since the T5 encoder is not being tuned.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>

Vinh H. Pham authored
[Fix] train_dreambooth_lora_flux_advanced ValueError: unexpected save model: <class 'transformers.models.t5.modeling_t5.T5EncoderModel'> (#9777). Fix saving the T5 text encoder state.

- 26 Oct, 2024 1 commit
Sayak Paul authored
Update README.md

- 25 Oct, 2024 2 commits
Ina authored
Flux pipeline: readability enhancement.

Sayak Paul authored
* Add a Flux training script with quantization.
* Remove exclamation.

- 23 Oct, 2024 1 commit
Linoy Tsaban authored
Improve README; style.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 22 Oct, 2024 3 commits
Sayak Paul authored
Post-release; style.

Yu Zheng authored
* Use make_image_grid from diffusers.utils.
* Use a checkpoint on the Hub.
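make_image_grid is a small utility in diffusers.utils; a minimal usage sketch:

```python
# Tile a list of PIL images into a single grid image.
from PIL import Image
from diffusers.utils import make_image_grid

images = [Image.new("RGB", (64, 64), color=c) for c in ("red", "green", "blue", "white")]
grid = make_image_grid(images, rows=2, cols=2)
grid.save("grid.png")
```
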
Tolga Cangöz authored
[matryoshka.py] Add the schedule_shifted_power attribute and update the get_schedule_shifted method.

- 21 Oct, 2024 1 commit
G.O.D authored
* Update train_controlnet.py: reduce float value error for bfloat16.
* Update train_controlnet_sdxl.py; style.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>

- 19 Oct, 2024 1 commit
hlky authored

- 18 Oct, 2024 1 commit
Linoy Tsaban authored
Fix arg naming.

- 17 Oct, 2024 1 commit
Linoy Tsaban authored
* Add the ostris trainer to the README and add caching of VAE latents, with a test for latent caching.
* Add the ostris noise scheduler (https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95); change upcasting of the transformer.
* Add pivotal tuning for CLIP, adapt it for the T5 encoder, and support pivotal tuning for CLIP only or for both encoders; fix param chaining.
* Add TextualInversionLoaderMixin support to FluxPipeline (and all Flux pipelines) for inference; fix imports and the encode_prompt call.
* Move the changes to the advanced Flux script and revert the canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160.
* Support pure textual inversion, an initializer_token arg, and a transformer fraction covering the range from pure textual inversion to the original pivotal tuning; optimize the transformer in only part of the epochs; fix the logic when using an initializer token and the pure_textual_inversion condition.
* Fix textual-inversion/pivotal loading of the last validation run; remove embeddings loading for textual inversion in the final training run (to avoid adding a huggingface_hub dependency).
* Add a warning in case the token abstraction is not in the instance prompt; fix token abstraction processing and the indexing issue when --initializer_concept has different amounts.
* Experimental specific-block training (feature removed again for now); change concept tokens logic; reorder pivoting; text encoder lr changes.
* Make the LoRA target modules configurable and change the default; add notes to the README.
* Add the fix from https://github.com/huggingface/diffusers/pull/9419.
* Address review comments: remove unnecessary prints and comments, use pin_memory=True (later removed), use the free_memory utils, unify warnings and prints, fix a logger info call.
* Add updated requirements for advanced Flux; fix the repo id, the indices of T5 pivotal tuning embeddings, the test path, and the embedding filename.
* Add tests, README updates, and style fixes throughout.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
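A hedged sketch of using the TextualInversionLoaderMixin support mentioned above at inference time; the repo id, weight file name, and token are placeholders, and the example's README should be treated as the authoritative loading recipe.

```python
# Illustrative only: load pivotal-tuning outputs into FluxPipeline for inference.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("your-username/your-flux-lora")      # placeholder repo id
pipe.load_textual_inversion(
    "your-username/your-flux-lora",                         # placeholder repo id
    weight_name="your_embeddings.safetensors",              # placeholder file name
    token="<s0>",
)
image = pipe("a photo of <s0> wearing a hat", num_inference_steps=28).images[0]
```
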
- 16 Oct, 2024 1 commit
Linoy Tsaban authored
* Add latent caching plus small updates; add tests for latent caching and fix the caching of latents.
* Update the license and replace manual cleanup with free_memory.
* Add --upcast_before_saving to allow saving transformer weights in lower precision.
* Fix the models to accumulate and fix the mixed precision issue as proposed in https://github.com/huggingface/diffusers/pull/9565.
* Small update to the README; style.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
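A minimal sketch of the latent-caching idea (illustrative, not the script's exact code): encode each training image once with the frozen VAE, keep the latents, and free the VAE afterwards so it no longer occupies accelerator memory.

```python
# Sketch: cache VAE latents up front so the VAE is not needed during training steps.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
latents_cache = []
with torch.no_grad():
    for _ in range(2):  # stand-in for iterating the training dataloader
        pixel_values = torch.randn(1, 3, 512, 512)
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
        latents_cache.append(latents.cpu())
del vae  # free the VAE once all latents are cached
```
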
- 15 Oct, 2024 3 commits
Aryan authored
Series of small updates.
Co-authored-by: yuan-shenghai <963658029@qq.com>
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>

wony617 authored
* [docs] Refactor docstrings in community/hd_painter.py.
* Apply suggestions from code review to examples/community/hd_painter.py; make style.
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>

0x名無し authored
Fixed issue #9350 (Tensor is deprecated); ran make style.

- 14 Oct, 2024 2 commits
Tolga Cangöz authored

Leo Jiang authored
Improve the performance and suitability for NPU computing.
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>