"...python/git@developer.sourcefind.cn:zhaoyu6/sglang.git" did not exist on "5bfafdfcb4703f39e8d135ce435365f53b4b4fbe"
  1. 29 Jul, 2025 1 commit
  2. 18 Jul, 2025 1 commit
  3. 01 Jul, 2025 1 commit
  4. 20 Jun, 2025 1 commit
  5. 19 Jun, 2025 1 commit
  6. 28 May, 2025 1 commit
  7. 20 May, 2025 1 commit
  8. 02 May, 2025 1 commit
  9. 01 May, 2025 1 commit
  10. 15 Apr, 2025 1 commit
  11. 13 Feb, 2025 1 commit
• Disable PEFT input autocast when using fp8 layerwise casting (#10685) · a0c22997
      Aryan authored
      * disable peft input autocast
      
      * use new peft method name; only disable peft input autocast if submodule layerwise casting active
      
      * add test; reference PeftInputAutocastDisableHook in peft docs
      
      * add load_lora_weights test
      
      * casted -> cast
      
      * Update tests/lora/utils.py
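This change matters when LoRA adapters are combined with fp8 layerwise casting. A minimal sketch of that combination, assuming diffusers' `enable_layerwise_casting` API; the Flux checkpoint and LoRA path below are placeholders for illustration:

```python
import torch
from diffusers import FluxPipeline  # any pipeline whose denoiser supports LoRA works similarly

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA checkpoint

# Store weights in fp8 but upcast to bf16 for compute. With this commit, PEFT's
# input autocast is disabled on submodules that have layerwise casting active,
# so LoRA layers no longer re-cast their inputs and undo the memory savings.
pipe.transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
pipe.to("cuda")
```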
  12. 02 Jan, 2025 1 commit
  13. 17 Dec, 2024 1 commit
  14. 03 Dec, 2024 1 commit
  15. 09 Oct, 2024 1 commit
  16. 16 Sep, 2024 1 commit
  17. 05 Aug, 2024 1 commit
  18. 03 Aug, 2024 1 commit
  19. 26 Jul, 2024 2 commits
  20. 25 Jul, 2024 3 commits
  21. 26 Jun, 2024 1 commit
  22. 24 Jun, 2024 2 commits
  23. 24 May, 2024 1 commit
• sampling bug fix in diffusers tutorial "basic_training.md" (#8223) · 1096f88e
      Yue Wu authored
      sampling bug fix in basic_training.md
      
In the diffusers basic training tutorial, passing generator=torch.manual_seed(config.seed) to the pipeline call inside the evaluate() function reseeds the global CPU RNG, which rewinds the dataloader shuffling: after every evaluation call the model sees the same sequence of training examples and overfits. Using generator=torch.Generator(device='cpu').manual_seed(config.seed) avoids this.
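The fix comes down to one line in the tutorial's evaluate() function. A sketch contrasting the two variants, assuming the tutorial's evaluate(config, epoch, pipeline) signature:

```python
import torch

def evaluate(config, epoch, pipeline):
    # Buggy variant from the tutorial: torch.manual_seed() reseeds the *global*
    # CPU RNG (and returns it), so every evaluation resets the state that the
    # DataLoader's shuffling also draws from, replaying the same training order.
    #   images = pipeline(batch_size=config.eval_batch_size,
    #                     generator=torch.manual_seed(config.seed)).images

    # Fixed variant: a dedicated Generator keeps evaluation sampling
    # reproducible without touching the global RNG used for shuffling.
    generator = torch.Generator(device="cpu").manual_seed(config.seed)
    images = pipeline(batch_size=config.eval_batch_size, generator=generator).images
    return images
```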
  24. 22 Apr, 2024 1 commit
  25. 29 Mar, 2024 1 commit
• Implements Blockwise lora (#7352) · 03024468
      UmerHA authored
      
      
      * Initial commit
      
      * Implemented block lora
      
      - implemented block lora
      - updated docs
      - added tests
      
      * Finishing up
      
      * Reverted unrelated changes made by make style
      
      * Fixed typo
      
      * Fixed bug + Made text_encoder_2 scalable
      
      * Integrated some review feedback
      
      * Incorporated review feedback
      
      * Fix tests
      
      * Made every module configurable
      
* Adapted to new lora test structure
      
      * Final cleanup
      
      * Some more final fixes
      
      - Included examples in `using_peft_for_inference.md`
      - Added hint that only attns are scaled
      - Removed NoneTypes
- Added test to check that mismatching lengths of adapter names / weights raise an error
      
      * Update using_peft_for_inference.md
      
      * Update using_peft_for_inference.md
      
      * Make style, quality, fix-copies
      
* Updated tutorial; warning if scale/adapter mismatch
      
      * floats are forwarded as-is; changed tutorial scale
      
      * make style, quality, fix-copies
      
      * Fixed typo in tutorial
      
      * Moved some warnings into `lora_loader_utils.py`
      
      * Moved scale/lora mismatch warnings back
      
      * Integrated final review suggestions
      
      * Empty commit to trigger CI
      
* Reverted empty commit to trigger CI
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
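The blockwise control lands in set_adapters(), which accepts a nested dict of scales in place of a single float. A hedged sketch modeled on the example added to `using_peft_for_inference.md`; the SDXL checkpoint, LoRA path, and adapter name are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora", adapter_name="my_lora")  # placeholder

# Pass a nested dict instead of a single float. Only attention layers are
# scaled, and any part left out of the dict keeps the default scale of 1.0.
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,
    "unet": {
        "down": 0.9,                     # one scale for all down-blocks
        "up": {
            "block_0": 0.6,              # one scale for all transformers in up-block 0
            "block_1": [0.4, 0.8, 1.0],  # per-transformer scales in up-block 1
        },
    },
}
pipe.set_adapters("my_lora", scales)
image = pipe("a toy robot, pixel art").images[0]
```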
  26. 21 Mar, 2024 1 commit
  27. 07 Mar, 2024 1 commit
  28. 04 Mar, 2024 1 commit
  29. 14 Feb, 2024 1 commit
  30. 08 Feb, 2024 1 commit
  31. 31 Jan, 2024 1 commit
  32. 09 Jan, 2024 1 commit
  33. 04 Jan, 2024 1 commit
  34. 31 Dec, 2023 1 commit
  35. 29 Dec, 2023 1 commit
  36. 28 Dec, 2023 1 commit