  2. 07 Oct, 2025 1 commit
    • [Qwen LoRA training] fix bug when offloading (#12440) · 1066de8c
      Linoy Tsaban authored
      * fix bug when offload and cache_latents both enabled
      1066de8c
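The fix above concerns pre-computing ("caching") VAE latents while CPU offloading is enabled: the encoder has to sit on the accelerator for the encoding pass and return to the CPU afterwards, otherwise the cached-latents path encodes against an offloaded model. A minimal, dependency-free sketch of that pattern, assuming a stand-in `FakeModule` and hypothetical helper names (`on_device`, `cache_latents`) rather than the training script's actual API:

```python
from contextlib import contextmanager

class FakeModule:
    """Stand-in for a torch.nn.Module so the sketch runs without torch."""
    def __init__(self):
        self.device = "cpu"
    def to(self, device):
        self.device = device
        return self

@contextmanager
def on_device(module, device):
    """Temporarily move a module to `device`, restoring its old placement."""
    previous = module.device
    module.to(device)
    try:
        yield module
    finally:
        module.to(previous)

def cache_latents(vae, images, device="cuda", offload=True):
    """Pre-compute latents; with offload enabled, hoist the VAE first."""
    latents = []
    if offload:
        with on_device(vae, device):
            for img in images:
                # the real script would call vae.encode(img) here;
                # we record the device to show where encoding happens
                latents.append((img, vae.device))
    else:
        for img in images:
            latents.append((img, vae.device))
    return latents
```

The point of the context manager is that the VAE ends up back on the CPU after caching, so offloading still pays off for the rest of training.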
  31. 24 Apr, 2025 2 commits
    • Fix typos in strings and comments (#11407) · f00a9957
      co63oc authored
      f00a9957
    • [HiDream LoRA] optimizations + small updates (#11381) · edd78804
      Linoy Tsaban authored
      * 1. add pre-computation of prompt embeddings when custom prompts are used as well
      2. save the model card even if the model is not pushed to the Hub
      3. remove scheduler initialization from the code example - no longer necessary (it's now in the base model's config)
      4. add skip_final_inference - allows running with validation while skipping the final loading of the pipeline with the LoRA weights, to reduce memory requirements
      
      * pre-encode the validation prompt as well
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * pre-encode the validation prompt as well
      
      * Apply style fixes
      
      * empty commit
      
      * change default trained modules
      
      * empty commit
      
      * address comments + change encoding of the validation prompt (before, it was only pre-encoded if custom prompts were provided, but it should be pre-encoded either way)
      
      * Apply style fixes
      
      * empty commit
      
      * fix validation_embeddings definition
      
      * fix final inference condition
      
      * fix pipeline deletion in last inference
      
      * Apply style fixes
      
      * empty commit
      
      * layers
      
      * remove README remarks about pre-computing only when an instance prompt is provided, and change the example to 3D icons
      
      * smol fix
      
      * empty commit
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      edd78804
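Several of the commit messages above revolve around pre-computing prompt embeddings once, including the validation prompt, so that the text encoders can be released before the training loop starts. A minimal sketch of that idea, assuming a stand-in `fake_encode` in place of the real text-encoder call; every name here is an illustrative assumption, not the script's actual API:

```python
def fake_encode(prompt):
    """Stand-in for a text-encoder forward pass: one float per token."""
    return [float(len(token)) for token in prompt.split()]

def precompute_embeddings(instance_prompts, validation_prompt=None):
    """Encode all training prompts and the optional validation prompt up
    front, so the encoders can be freed before training begins."""
    cache = {p: fake_encode(p) for p in instance_prompts}
    validation_embedding = (
        fake_encode(validation_prompt) if validation_prompt is not None else None
    )
    return cache, validation_embedding
```

Encoding the validation prompt in the same pass is exactly the adjustment described in the "change encoding of the validation prompt" commit: it should be pre-encoded whether or not custom prompts are provided.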