1. 02 Nov, 2024 1 commit
    • Add Diffusion Policy for Reinforcement Learning (#9824) · c10f875f
      Dorsa Rohani authored
      
      
      * enable CPU support
      
      * model creation + comprehensive testing
      
      * training + tests
      
      * all tests working
      
      * remove unneeded files + clarify docs
      
      * update train tests
      
      * update readme.md
      
      * remove data from gitignore
      
      * undo cpu enabled option
      
      * Update README.md
      
      * update readme
      
      * code quality fixes
      
      * diffusion policy example
      
      * update readme
      
      * add pretrained model weights + doc
      
      * add comment
      
      * add documentation
      
      * add docstrings
      
      * update comments
      
      * update readme
      
      * fix code quality
      
      * Update examples/reinforcement_learning/README.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/reinforcement_learning/diffusion_policy.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * suggestions + safe globals for weights_only=True
      
      * suggestions + safe weights loading
      
      * fix code quality
      
      * reformat file
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
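      Note on the "safe globals for weights_only=True" step above: a minimal, hypothetical sketch of that pattern (not the example's actual code) could look like the following, assuming PyTorch >= 2.4; the class name and checkpoint path are placeholders.

```python
# Hypothetical sketch: load a checkpoint with torch.load(weights_only=True)
# while allowlisting a non-tensor class the checkpoint legitimately pickles.
# `TrainConfig` and "diffusion_policy.ckpt" are placeholders, not names from the PR.
from dataclasses import dataclass

import torch


@dataclass
class TrainConfig:
    obs_dim: int = 14
    action_dim: int = 7


# Register the class so the restricted unpickler used by weights_only=True accepts it.
torch.serialization.add_safe_globals([TrainConfig])

checkpoint = torch.load("diffusion_policy.ckpt", weights_only=True)
```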
  2. 01 Nov, 2024 4 commits
  3. 31 Oct, 2024 3 commits
  4. 28 Oct, 2024 5 commits
  5. 26 Oct, 2024 1 commit
  6. 25 Oct, 2024 2 commits
  7. 23 Oct, 2024 1 commit
  8. 22 Oct, 2024 3 commits
  9. 21 Oct, 2024 1 commit
  10. 19 Oct, 2024 1 commit
  11. 18 Oct, 2024 1 commit
  12. 17 Oct, 2024 1 commit
    • [Flux] Add advanced training script + support textual inversion inference (#9434) · 9a7f8246
      Linoy Tsaban authored
      * add ostris trainer to README & add cache latents of vae
      
      * add ostris trainer to README & add cache latents of vae
      
      * style
      
      * readme
      
      * add test for latent caching
      
      * add ostris noise scheduler
      https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95
      
      * style
      
      * fix import
      
      * style
      
      * fix tests
      
      * style
      
      * --change upcasting of transformer?
      
      * update readme according to main
      
      * add pivotal tuning for CLIP
      
      * fix imports, encode_prompt call, add TextualInversionLoaderMixin to FluxPipeline for inference
      
      * TextualInversionLoaderMixin support for FluxPipeline for inference
      
      * move changes to advanced flux script, revert canonical
      
      * add latent caching to canonical script
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * style
      
      * remove redundant line and change code block placement to align with logic
      
      * add initializer_token arg
      
      * add transformer frac for range support from pure textual inversion to the orig pivotal tuning
      
      * support pure textual inversion - wip
      
      * adjustments to support pure textual inversion and transformer optimization in only part of the epochs
      
      * fix logic when using initializer token
      
      * fix pure_textual_inversion_condition
      
      * fix ti/pivotal loading of last validation run
      
      * remove embeddings loading for ti in final training run (to avoid adding huggingface hub dependency)
      
      * support pivotal for t5
      
      * adapt pivotal for T5 encoder
      
      * adapt pivotal for T5 encoder and support in flux pipeline
      
      * t5 pivotal support + support for pivotal for clip only or both
      
      * fix param chaining
      
      * fix param chaining
      
      * README first draft
      
      * readme
      
      * readme
      
      * readme
      
      * style
      
      * fix import
      
      * style
      
      * add fix from https://github.com/huggingface/diffusers/pull/9419
      
      
      
      * add to readme, change function names
      
      * te lr changes
      
      * readme
      
      * change concept tokens logic
      
      * fix indices
      
      * change arg name
      
      * style
      
      * dummy test
      
      * revert dummy test
      
      * reorder pivoting
      
      * add warning in case the token abstraction is not the instance prompt
      
      * experimental - wip - specific block training
      
      * fix documentation and token abstraction processing
      
      * remove transformer block specification feature (for now)
      
      * style
      
      * fix copies
      
      * fix indexing issue when --initializer_concept has different amounts
      
      * add if TextualInversionLoaderMixin to all flux pipelines
      
      * style
      
      * fix import
      
      * fix imports
      
      * address review comments - remove unnecessary prints & comments, use pin_memory=True, use free_memory utils, unify warning and prints
      
      * style
      
      * logger info fix
      
      * make lora target modules configurable and change the default
      
      * make lora target modules configurable and change the default
      
      * style
      
      * make lora target modules configurable and change the default, add notes to readme
      
      * style
      
      * add tests
      
      * style
      
      * fix repo id
      
      * add updated requirements for advanced flux
      
      * fix indices of t5 pivotal tuning embeddings
      
      * fix path in test
      
      * remove `pin_memory`
      
      * fix filename of embedding
      
      * fix filename of embedding
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
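      The headline change here, per the commit messages, is that FluxPipeline now inherits TextualInversionLoaderMixin, so learned embeddings can be loaded before inference. A rough sketch of that usage follows; the repo id, weight file name, and token are placeholders rather than values from the PR.

```python
# Sketch of textual-inversion inference with FluxPipeline (placeholder names).
import torch

from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load a learned embedding into the CLIP tokenizer/text encoder.
# Repo id, file name, and token are hypothetical.
pipe.load_textual_inversion(
    "username/flux-ti-embeddings",
    weight_name="learned_embeds.safetensors",
    token="<s0>",
)

image = pipe("a photo of <s0> riding a bicycle", num_inference_steps=28).images[0]
image.save("textual_inversion_sample.png")
```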
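      Several commits above also add VAE latent caching. As an assumption-laden sketch of the idea (model id, image shapes, and data are placeholders, not the script's code): encode the training images once, keep the latents, then free the VAE so it never runs inside the training loop.

```python
# Sketch of VAE latent caching (placeholder model id and dummy data).
import torch

from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae"
).to(device)

# Placeholder data: two random "images" normalized to [-1, 1].
batches = [torch.rand(1, 3, 512, 512) * 2 - 1 for _ in range(2)]

cached_latents = []
with torch.no_grad():
    for pixel_values in batches:
        latents = vae.encode(pixel_values.to(device, dtype=vae.dtype)).latent_dist.sample()
        # Flux/SD3-style VAEs expose shift and scaling factors in their config.
        latents = (latents - vae.config.shift_factor) * vae.config.scaling_factor
        cached_latents.append(latents.cpu())

# The VAE is no longer needed during training once the latents are cached.
del vae
if device == "cuda":
    torch.cuda.empty_cache()
```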
  13. 16 Oct, 2024 1 commit
  14. 15 Oct, 2024 3 commits
  15. 14 Oct, 2024 3 commits
  16. 11 Oct, 2024 2 commits
  17. 08 Oct, 2024 1 commit
    • fix: CogVideox train dataset _preprocess_data crop video (#9574) · 66eef9a6
      glide-the authored
      
      
      * Removed int8 to float32 conversion (`* 2.0 - 1.0`) from `train_transforms` as it caused image overexposure.
      
      Added `_resize_for_rectangle_crop` function to enable video cropping functionality. The cropping mode can be configured via `video_reshape_mode`, supporting options: ['center', 'random', 'none'].
      
      * The number 127.5 may experience precision loss during division operations.
      
      * wandb requires PIL Image type
      
      * Resizing bug
      
      * del jupyter
      
      * make style
      
      * Update examples/cogvideo/README.md
      
      * make style
      
      ---------
      
      Co-authored-by: --unset <--unset>
      Co-authored-by: Aryan <aryan@huggingface.co>
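      The `_resize_for_rectangle_crop` helper described in this commit resizes each clip so it fully covers the target rectangle and then crops according to `video_reshape_mode`. Below is a hedged re-sketch of that idea, not the training script's actual code; in particular, the behavior of the 'none' mode (top-left crop) is an assumption.

```python
# Sketch of resize-then-crop for video frames, in the spirit of
# _resize_for_rectangle_crop; names and details are assumed.
import torch
import torchvision.transforms.functional as TF


def resize_for_rectangle_crop(frames, target_h, target_w, reshape_mode="center"):
    # frames: (num_frames, channels, height, width)
    h, w = frames.shape[-2:]
    # Resize so the frame fully covers the target rectangle.
    if w / h > target_w / target_h:
        frames = TF.resize(frames, [target_h, int(w * target_h / h)], antialias=True)
    else:
        frames = TF.resize(frames, [int(h * target_w / w), target_w], antialias=True)

    h, w = frames.shape[-2:]
    if reshape_mode == "center":
        top, left = (h - target_h) // 2, (w - target_w) // 2
    elif reshape_mode == "random":
        top = int(torch.randint(0, h - target_h + 1, (1,)))
        left = int(torch.randint(0, w - target_w + 1, (1,)))
    else:  # "none": assumed here to mean a plain top-left crop
        top, left = 0, 0
    return TF.crop(frames, top, left, target_h, target_w)


video = torch.rand(9, 3, 480, 853)  # dummy clip
cropped = resize_for_rectangle_crop(video, 480, 720, reshape_mode="center")
```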
  18. 07 Oct, 2024 1 commit
  19. 28 Sep, 2024 2 commits
  20. 27 Sep, 2024 1 commit
    • [examples] add train flux-controlnet scripts in example. (#9324) · 534848c3
      PromeAI authored
      
      
      * add train flux-controlnet scripts in example.
      
      * fix error
      
      * fix subfolder error
      
      * fix preprocess error
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix readme
      
      * fix note error
      
      * add a tutorial for DeepSpeed
      
      * fix some formatting errors
      
      * add dataset_path example
      
      * remove print, add guidance_scale CLI, readable apply
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * update: push_to_hub, save_weight_dtype, static method, clear_objs_and_retain_memory, report_to=wandb
      
      * add push to hub in readme
      
      * apply weighting schemes
      
      * add note
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * make code style and quality
      
      * fix some unnoticed error
      
      * make code style and quality
      
      * add example controlnet in readme
      
      * add test controlnet
      
      * remove duplicate notes
      
      * Fix formatting errors
      
      * add new control image
      
      * add model cpu offload
      
      * update help for adafactor
      
      * make quality & style
      
      * make quality and style
      
      * rename flux_controlnet_model_name_or_path
      
      * fix back src/diffusers/pipelines/flux/pipeline_flux_controlnet.py
      
      * fix dtype error by pre-computing text embeddings
      
      * rm image save
      
      * quality fix
      
      * fix test
      
      * fix tiny flux train error
      
      * change report_to to tensorboard
      
      * fix save-name error in tests
      
      * Fix shrinking errors
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Your Name <you@example.com>
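      One of the fixes above avoids a dtype mismatch by pre-computing text embeddings before the training loop. A rough sketch of that pattern follows; the repo id, prompts, and dtypes are assumptions rather than values taken from the training script.

```python
# Sketch: pre-compute Flux text embeddings once, cast them to the training
# dtype, and drop the text encoders before the training loop (placeholder names).
import torch

from diffusers import FluxPipeline

# Load only the text-encoding components; skip the transformer and VAE.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=None, vae=None, torch_dtype=torch.float32
)
pipe.to("cuda")

prompts = ["a red circle on a white background"]  # placeholder captions
with torch.no_grad():
    prompt_embeds, pooled_embeds, text_ids = pipe.encode_prompt(
        prompt=prompts, prompt_2=None, max_sequence_length=512
    )

# Cast once to the dtype the ControlNet/transformer trains in.
prompt_embeds = prompt_embeds.to(torch.bfloat16)
pooled_embeds = pooled_embeds.to(torch.bfloat16)

# The text encoders are no longer needed inside the training loop.
del pipe
torch.cuda.empty_cache()
```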
  21. 25 Sep, 2024 1 commit
  22. 23 Sep, 2024 1 commit