"vscode:/vscode.git/clone" did not exist on "187de44352ce23acf00a9204a05a8a308aab7003"
  1. 02 May, 2025 1 commit
  2. 01 May, 2025 1 commit
  3. 15 Apr, 2025 1 commit
  4. 13 Feb, 2025 1 commit
    • Disable PEFT input autocast when using fp8 layerwise casting (#10685) · a0c22997
      Aryan authored
      * disable peft input autocast
      
      * use new peft method name; only disable peft input autocast if submodule layerwise casting active
      
      * add test; reference PeftInputAutocastDisableHook in peft docs
      
      * add load_lora_weights test
      
      * casted -> cast
      
      * Update tests/lora/utils.py
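      For context, a minimal sketch of the setup this commit targets: a model whose weights are stored in fp8 via diffusers' layerwise casting, with a LoRA loaded on top through PEFT. The checkpoint and LoRA repo ids below are placeholders, not taken from the commit.

        import torch
        from diffusers import FluxPipeline

        # Placeholder checkpoint id, for illustration only.
        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
        )

        # Store transformer weights in fp8, upcasting to bf16 for compute.
        pipe.transformer.enable_layerwise_casting(
            storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
        )

        # With this commit, loading a LoRA disables PEFT's input autocast on
        # submodules where layerwise casting is active, so LoRA inputs are not
        # cast down to the fp8 storage dtype.
        pipe.load_lora_weights("user/some-flux-lora")  # placeholder repo id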
  5. 02 Jan, 2025 1 commit
  6. 17 Dec, 2024 1 commit
  7. 03 Dec, 2024 1 commit
  8. 09 Oct, 2024 1 commit
  9. 16 Sep, 2024 1 commit
  10. 05 Aug, 2024 1 commit
  11. 03 Aug, 2024 1 commit
  12. 26 Jul, 2024 2 commits
  13. 25 Jul, 2024 3 commits
  14. 26 Jun, 2024 1 commit
  15. 24 Jun, 2024 2 commits
  16. 24 May, 2024 1 commit
    • sampling bug fix in diffusers tutorial "basic_training.md" (#8223) · 1096f88e
      Yue Wu authored
      sampling bug fix in basic_training.md
      
      In the diffusers basic training tutorial, passing generator=torch.manual_seed(config.seed) to the pipeline call inside the evaluate() function reseeds the global RNG, which rewinds the dataloader shuffling; the model then sees the same sequence of training examples after every evaluation call, causing overfitting. Using generator=torch.Generator(device='cpu').manual_seed(config.seed) avoids this.
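      To make the fix concrete, a condensed sketch of the tutorial's evaluate() call; the surrounding scaffolding (config fields, pipeline argument) is assumed to follow the tutorial.

        import torch

        def evaluate(config, epoch, pipeline):
            # Buggy: torch.manual_seed() reseeds the *global* CPU RNG, which
            # the training DataLoader's shuffling also draws from, so every
            # evaluation call rewinds the order of training examples.
            # images = pipeline(
            #     batch_size=config.eval_batch_size,
            #     generator=torch.manual_seed(config.seed),
            # ).images

            # Fixed: a dedicated Generator leaves the global RNG untouched.
            images = pipeline(
                batch_size=config.eval_batch_size,
                generator=torch.Generator(device="cpu").manual_seed(config.seed),
            ).images
            return images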
  17. 22 Apr, 2024 1 commit
  18. 29 Mar, 2024 1 commit
    • Implements Blockwise lora (#7352) · 03024468
      UmerHA authored
      * Initial commit
      
      * Implemented block lora
      
      - implemented block lora
      - updated docs
      - added tests
      
      * Finishing up
      
      * Reverted unrelated changes made by make style
      
      * Fixed typo
      
      * Fixed bug + Made text_encoder_2 scalable
      
      * Integrated some review feedback
      
      * Incorporated review feedback
      
      * Fix tests
      
      * Made every module configurable
      
      * Adapted to new lora test structure
      
      * Final cleanup
      
      * Some more final fixes
      
      - Included examples in `using_peft_for_inference.md`
      - Added hint that only attns are scaled
      - Removed NoneTypes
      - Added test to check mismatching lens of adapter names / weights raise error
      
      * Update using_peft_for_inference.md
      
      * Update using_peft_for_inference.md
      
      * Make style, quality, fix-copies
      
      * Updated tutorial; warning if scale/adapter mismatch
      
      * floats are forwarded as-is; changed tutorial scale
      
      * make style, quality, fix-copies
      
      * Fixed typo in tutorial
      
      * Moved some warnings into `lora_loader_utils.py`
      
      * Moved scale/lora mismatch warnings back
      
      * Integrated final review suggestions
      
      * Empty commit to trigger CI
      
      * Reverted empty commit to trigger CI
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
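      The resulting API, roughly as shown in `using_peft_for_inference.md`: set_adapters() accepts a nested dict of scales instead of a single float, with plain floats forwarded as-is and only attention blocks scaled. Checkpoint and adapter names here are placeholders.

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        )
        # Placeholder LoRA repo id and adapter name.
        pipe.load_lora_weights("user/some-sdxl-lora", adapter_name="my_adapter")

        scales = {
            "text_encoder": 0.5,    # first text encoder's LoRA
            "text_encoder_2": 0.5,  # made scalable in this PR
            "unet": {
                "down": 0.9,                     # one scale for all down-blocks
                "mid": 1.0,
                "up": {
                    "block_0": 0.6,              # per-block scale
                    "block_1": [0.4, 0.8, 1.0],  # per-transformer scales within a block
                },
            },
        }
        pipe.set_adapters("my_adapter", scales)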
  19. 21 Mar, 2024 1 commit
  20. 07 Mar, 2024 1 commit
  21. 04 Mar, 2024 1 commit
  22. 14 Feb, 2024 1 commit
  23. 08 Feb, 2024 1 commit
  24. 31 Jan, 2024 1 commit
  25. 09 Jan, 2024 1 commit
  26. 04 Jan, 2024 1 commit
  27. 31 Dec, 2023 1 commit
  28. 29 Dec, 2023 1 commit
  29. 28 Dec, 2023 1 commit
  30. 26 Dec, 2023 2 commits
  31. 20 Nov, 2023 1 commit
    • Revert "[`Docs`] Update and make improvements" (#5858) · c72a1739
      M. Tolga Cangöz authored
      * Revert "[`Docs`] Update and make improvements (#5819)"
      
      This reverts commit c697f524.
      
      * Update README.md
      
      * Update memory.md
      
      * Update basic_training.md
      
      * Update write_own_pipeline.md
      
      * Update fp16.md
      
      * Update basic_training.md
      
      * Update write_own_pipeline.md
      
      * Update write_own_pipeline.md
  32. 16 Nov, 2023 1 commit
  33. 15 Nov, 2023 1 commit
  34. 08 Nov, 2023 1 commit
  35. 01 Nov, 2023 1 commit