"vscode:/vscode.git/clone" did not exist on "a3904d7e3485f7468ebb165f4383c5a71befe52d"
  1. 02 May, 2025 1 commit
  2. 01 May, 2025 5 commits
  3. 30 Apr, 2025 11 commits
  4. 29 Apr, 2025 1 commit
  5. 28 Apr, 2025 9 commits
  6. 26 Apr, 2025 1 commit
  7. 24 Apr, 2025 4 commits
      Fix typos in strings and comments (#11407) · f00a9957
      co63oc authored
      [BUG] fixed WAN docstring (#11226) · e8312e7c
      Ishan Modi authored
      update
      Fix Flux IP adapter argument in the pipeline example (#11402) · 79868345
      Emiliano authored
      Fix Flux IP adapter argument in the example
      
      The IP-Adapter example had a wrong argument. Fix `true_cfg` -> `true_cfg_scale`
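The renamed keyword can be illustrated with a small stand-in. The function below is hypothetical and only mirrors the relevant parameter name; it is not the real `FluxPipeline.__call__`:

```python
# Hypothetical stand-in illustrating the corrected keyword from #11402:
# the pipeline parameter is `true_cfg_scale`, not `true_cfg`.
def flux_generate(prompt, *, true_cfg_scale=1.0, ip_adapter_image=None, **kwargs):
    if "true_cfg" in kwargs:
        # Mimic the error a caller would hit with the old example's argument.
        raise TypeError("unexpected keyword 'true_cfg'; did you mean 'true_cfg_scale'?")
    # A real pipeline would run denoising here; we just echo the resolved arguments.
    return {"prompt": prompt, "true_cfg_scale": true_cfg_scale}
```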
      [HiDream LoRA] optimizations + small updates (#11381) · edd78804
      Linoy Tsaban authored
      
      
      * 1. add pre-computation of prompt embeddings when custom prompts are used as well
      2. save model card even if model is not pushed to hub
      3. remove scheduler initialization from code example - not necessary anymore (it's now in the base model's config)
      4. add skip_final_inference - to allow running validation while skipping the final loading of the pipeline with the LoRA weights, to reduce memory requirements
      
      * pre encode validation prompt as well
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * pre encode validation prompt as well
      
      * Apply style fixes
      
      * empty commit
      
      * change default trained modules
      
      * empty commit
      
      * address comments + change encoding of validation prompt (before, it was only pre-encoded when custom prompts were provided, but it should be pre-encoded either way)
      
      * Apply style fixes
      
      * empty commit
      
      * fix validation_embeddings definition
      
      * fix final inference condition
      
      * fix pipeline deletion in last inference
      
      * Apply style fixes
      
      * empty commit
      
      * layers
      
      * remove README remarks about only pre-computing when an instance prompt is provided, and change the example to 3d icons
      
      * smol fix
      
      * empty commit
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
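The first optimization in this commit (pre-computing prompt embeddings once, for custom prompts as well as the instance prompt) follows a cache-then-reuse pattern. A minimal sketch, with a hypothetical `encode_prompt` standing in for the expensive text-encoder forward pass in the real training script:

```python
# Minimal sketch of the pre-computation pattern. `encode_prompt` is a
# hypothetical stand-in; in the real script it runs several text encoders,
# so computing each embedding once up front (instead of per training step)
# lets the encoders be freed before the training loop starts.
def encode_prompt(prompt):
    # Stand-in for the real text-encoder forward pass.
    return [float(ord(c)) for c in prompt]

def precompute_embeddings(prompts):
    # One encode per unique prompt, covering custom per-image prompts too.
    return {p: encode_prompt(p) for p in set(prompts)}

cache = precompute_embeddings(["a 3d icon", "a 3d icon", "a photo"])
```

In the training loop, embeddings are then looked up from `cache` rather than recomputed.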
  8. 23 Apr, 2025 5 commits
  9. 22 Apr, 2025 3 commits
      [HiDream] move deprecation to 0.35.0 (#11384) · 448c72a2
      YiYi Xu authored
      up
      Update modeling imports (#11129) · f108ad88
      Aryan authored
      update
      [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and tokenizer 4, text encoder 4 loading
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
      * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
      * to reduce some memory, keep the VAE in `weight_dtype`, same as we do for Flux (it's the same VAE)
      https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
      
      
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt2,3,4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in reqs
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non diffusers lumina2 conversion
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
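The text-encoder CPU offloading added in this commit can be sketched generically: encode all prompts once, then move the encoder off the accelerator to free memory for training. `TinyEncoder` below is a hypothetical stand-in for the real text-encoder modules:

```python
# Hedged sketch of the encode-then-offload pattern from #11281. TinyEncoder
# is a hypothetical stand-in; real code would call `.to("cpu")` on the
# actual text-encoder modules after computing all prompt embeddings.
class TinyEncoder:
    def __init__(self):
        self.device = "cuda"  # pretend the encoder starts on the accelerator

    def to(self, device):
        self.device = device
        return self

    def encode(self, prompt):
        return [len(prompt)]  # stand-in for a real embedding

def encode_then_offload(encoder, prompts, offload_device="cpu"):
    embeds = {p: encoder.encode(p) for p in prompts}
    encoder.to(offload_device)  # free accelerator memory before training
    return embeds

enc = TinyEncoder()
embeds = encode_then_offload(enc, ["a 3d icon"])
```

Since the embeddings are pre-computed, the encoders are never needed again during the training loop, so keeping them on the CPU costs nothing in throughput.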