1. 09 Oct, 2024 1 commit
  2. 16 Sep, 2024 1 commit
  3. 05 Aug, 2024 1 commit
  4. 03 Aug, 2024 1 commit
  5. 26 Jul, 2024 2 commits
  6. 25 Jul, 2024 3 commits
  7. 26 Jun, 2024 1 commit
  8. 24 Jun, 2024 2 commits
  9. 24 May, 2024 1 commit
    • sampling bug fix in diffusers tutorial "basic_training.md" (#8223) · 1096f88e
      Yue Wu authored
      
      In the diffusers basic training tutorial, setting the manual seed argument (generator=torch.manual_seed(config.seed)) in the pipeline call inside the evaluate() function rewinds the dataloader shuffling: the model sees the same sequence of training examples after every evaluation call, which leads to overfitting. Using generator=torch.Generator(device='cpu').manual_seed(config.seed) avoids this.
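      The fix works because torch.manual_seed reseeds the global RNG, which the DataLoader's shuffling also draws from, while a dedicated torch.Generator carries its own state. The same distinction exists in Python's standard library; the sketch below is an analogy using the stdlib random module (random.seed vs. a random.Random instance), not the diffusers code itself:

      ```python
      import random

      def shuffled_epoch(n=5):
          """Stand-in for DataLoader shuffling: draws from the GLOBAL RNG."""
          order = list(range(n))
          random.shuffle(order)
          return order

      # Reference run: two epochs of shuffling with no evaluation in between.
      random.seed(0)
      epochs_clean = [shuffled_epoch(), shuffled_epoch()]

      # Buggy pattern: re-seeding the global RNG inside evaluate()
      # (analogous to generator=torch.manual_seed(seed)) rewinds shuffling,
      # so the next epoch repeats the previous one exactly.
      random.seed(0)
      e1 = shuffled_epoch()
      random.seed(0)          # "evaluation" reseeds the global RNG
      e2 = shuffled_epoch()
      assert e2 == e1         # same example order again: overfitting risk

      # Fixed pattern: a dedicated generator (analogous to
      # torch.Generator().manual_seed(seed)) leaves global state untouched.
      random.seed(0)
      f1 = shuffled_epoch()
      rng = random.Random(0)  # evaluation noise comes from here only
      rng.random()
      f2 = shuffled_epoch()
      assert [f1, f2] == epochs_clean  # shuffling advances as intended
      ```

      The design point carries over directly: any randomness that should not perturb training (evaluation sampling, visualization) belongs on its own generator object.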
  10. 22 Apr, 2024 1 commit
  11. 29 Mar, 2024 1 commit
    • Implements Blockwise LoRA (#7352) · 03024468
      UmerHA authored
      
      
      * Initial commit
      
      * Implemented block lora
      
      - implemented block lora
      - updated docs
      - added tests
      
      * Finishing up
      
      * Reverted unrelated changes made by make style
      
      * Fixed typo
      
      * Fixed bug + Made text_encoder_2 scalable
      
      * Integrated some review feedback
      
      * Incorporated review feedback
      
      * Fix tests
      
      * Made every module configurable
      
      * Adapted to new LoRA test structure
      
      * Final cleanup
      
      * Some more final fixes
      
      - Included examples in `using_peft_for_inference.md`
      - Added hint that only attns are scaled
      - Removed NoneTypes
      - Added test to check mismatching lens of adapter names / weights raise error
      
      * Update using_peft_for_inference.md
      
      * Update using_peft_for_inference.md
      
      * Make style, quality, fix-copies
      
      * Updated tutorial;Warning if scale/adapter mismatch
      
      * floats are forwarded as-is; changed tutorial scale
      
      * make style, quality, fix-copies
      
      * Fixed typo in tutorial
      
      * Moved some warnings into `lora_loader_utils.py`
      
      * Moved scale/lora mismatch warnings back
      
      * Integrated final review suggestions
      
      * Empty commit to trigger CI
      
      * Reverted empty commit to trigger CI
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
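      One check this PR's changelog calls out is that a mismatch between the number of adapter names and the number of weights must raise an error, where each weight may be a plain float or a per-block dict. The helper below is a hypothetical, stdlib-only sketch of that validation logic (the function name and its broadcast rule are assumptions for illustration, not the diffusers implementation):

      ```python
      from typing import Union

      # A scale is either one float applied everywhere, or a nested dict
      # scaling individual blocks (per the PR, only attention layers are
      # actually scaled).
      Scale = Union[float, dict]

      def validate_adapter_weights(adapter_names, adapter_weights=None):
          """Pair each adapter name with its (possibly blockwise) scale.

          Hypothetical set_adapters-style helper: a single non-list weight
          is broadcast to all adapters; a list must match adapter_names in
          length, otherwise a ValueError is raised.
          """
          if adapter_weights is None:
              adapter_weights = 1.0
          if not isinstance(adapter_weights, list):
              adapter_weights = [adapter_weights] * len(adapter_names)
          if len(adapter_names) != len(adapter_weights):
              raise ValueError(
                  f"Got {len(adapter_names)} adapter names but "
                  f"{len(adapter_weights)} weights; lengths must match."
              )
          return dict(zip(adapter_names, adapter_weights))

      # One scalar scale for the first adapter, a blockwise dict for the
      # second (adapter names here are made up for the example).
      paired = validate_adapter_weights(
          ["pixel", "toy"],
          [0.7, {"unet": {"down": 0.9, "up": 1.0}}],
      )
      assert paired["pixel"] == 0.7
      ```

      Broadcasting a lone float keeps the common single-adapter call terse, while the explicit length check surfaces caller mistakes early instead of silently dropping or recycling weights.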
  12. 21 Mar, 2024 1 commit
  13. 07 Mar, 2024 1 commit
  14. 04 Mar, 2024 1 commit
  15. 14 Feb, 2024 1 commit
  16. 08 Feb, 2024 1 commit
  17. 31 Jan, 2024 1 commit
  18. 09 Jan, 2024 1 commit
  19. 04 Jan, 2024 1 commit
  20. 31 Dec, 2023 1 commit
  21. 29 Dec, 2023 1 commit
  22. 28 Dec, 2023 1 commit
  23. 26 Dec, 2023 2 commits
  24. 20 Nov, 2023 1 commit
    • Revert "[`Docs`] Update and make improvements" (#5858) · c72a1739
      M. Tolga Cangöz authored
      * Revert "[`Docs`] Update and make improvements (#5819)"
      
      This reverts commit c697f524.
      
      * Update README.md
      
      * Update memory.md
      
      * Update basic_training.md
      
      * Update write_own_pipeline.md
      
      * Update fp16.md
      
      * Update basic_training.md
      
      * Update write_own_pipeline.md
      
      * Update write_own_pipeline.md
  25. 16 Nov, 2023 1 commit
  26. 15 Nov, 2023 1 commit
  27. 08 Nov, 2023 1 commit
  28. 01 Nov, 2023 1 commit
  29. 17 Oct, 2023 1 commit
  30. 16 Oct, 2023 2 commits
  31. 12 Aug, 2023 1 commit
  32. 02 Aug, 2023 1 commit
  33. 26 Jul, 2023 1 commit
  34. 03 Jul, 2023 1 commit