12 Dec, 2024 (1 commit)
      [WIP][Training] Flux Control LoRA training script (#10130) · 8170dc36
      Sayak Paul authored
      
      
      * update
      
      * add
      
      * update
      
      * add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption
      
      * control lora updates
      
      * remove copied-from
      
      * create separate pipelines for flux control
      
      * make fix-copies
      
      * update docs
      
      * add tests
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * remove control lora changes
      
      * apply suggestions from review
      
      * Revert "remove control lora changes"
      
      This reverts commit 73cfc519c9b99b7dc3251cc6a90a5db3056c4819.
      
      * update
      
      * update
      
      * improve log messages
      
      * updates.
      
      * updates
      
      * support register_config.
      
      * fix
      
      * fix
      
      * fix
      
      * updates
      
      * updates
      
      * updates
      
      * fix-copies
      
      * fix
      
      * apply suggestions from review
      
      * add tests
      
      * remove conversion script; enable on-the-fly conversion
      
      * bias -> lora_bias.
      
      * fix-copies
      
      * peft.py
      
      * fix lora conversion
      
      * changes
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
      
      * fix-copies
      
      * updates for tests
      
      * fix
      
      * alpha_pattern.
      
      * add a test for varied lora ranks and alphas.
      
      * revert changes in num_channels_latents = self.transformer.config.in_channels // 8
      
      * revert moe
      
      * add a sanity check on unexpected keys when loading norm layers.
      
      * control lora.
      
      * fixes
      
      * fixes
      
      * fixes
      
      * tests
      
      * reviewer feedback
      
      * fix
      
      * proper peft version for lora_bias
      
      * fix-copies
      
      * updates
      
      * updates
      
      * updates
      
      * remove debug code
      
      * update docs
      
      * integration tests
      
      * nis
      
      * fuse and unload.
      
      * fix
      
      * add slices.
      
      * more updates.
      
      * button up readme
      
      * train()
      
      * add full fine-tuning version.
      
      * fixes
      
      * Apply suggestions from code review
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * set_grads_to_none remove.
      
      * readme
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
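Several of the squashed commits above concern supporting varied LoRA ranks and alphas per layer ("fix rank calculation assumption", "alpha_pattern.", "add a test for varied lora ranks and alphas."). As a minimal sketch of the general idea only — not the PR's actual code, and `infer_lora_ranks` is a hypothetical helper — per-module rank can be read off each `lora_A` weight's first dimension instead of assuming one global rank for the whole state dict:

```python
def infer_lora_ranks(shapes):
    """Infer per-module LoRA rank from lora_A weight shapes.

    `shapes` maps parameter names to (rows, cols) tuples. A lora_A
    weight has shape (rank, in_features), so the rank is its first
    dimension. Hypothetical helper for illustration only.
    """
    suffix = ".lora_A.weight"
    return {
        name[: -len(suffix)]: shape[0]
        for name, shape in shapes.items()
        if name.endswith(suffix)
    }


# Example: two modules trained with different ranks (names are made up).
shapes = {
    "transformer.blocks.0.attn.to_q.lora_A.weight": (4, 3072),
    "transformer.blocks.0.attn.to_q.lora_B.weight": (3072, 4),
    "transformer.blocks.1.attn.to_q.lora_A.weight": (8, 3072),
}
ranks = infer_lora_ranks(shapes)
# ranks == {"transformer.blocks.0.attn.to_q": 4,
#           "transformer.blocks.1.attn.to_q": 8}
```

Inspecting shapes per module, rather than trusting a single configured rank, is what makes loading checkpoints with mixed ranks possible.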