1. 03 Feb, 2024 1 commit
  2. 02 Feb, 2024 1 commit
  3. 25 Jan, 2024 1 commit
  4. 09 Jan, 2024 1 commit
  5. 03 Jan, 2024 1 commit
  6. 22 Dec, 2023 1 commit
  7. 21 Dec, 2023 2 commits
  8. 07 Dec, 2023 1 commit
  9. 06 Dec, 2023 1 commit
  10. 30 Nov, 2023 1 commit
  11. 25 Nov, 2023 1 commit
  12. 13 Nov, 2023 1 commit
  13. 08 Nov, 2023 1 commit
  14. 02 Nov, 2023 1 commit
  15. 10 Oct, 2023 1 commit
  16. 08 Oct, 2023 1 commit
  17. 06 Oct, 2023 1 commit
  18. 13 Sep, 2023 1 commit
  19. 12 Sep, 2023 1 commit
  20. 11 Sep, 2023 1 commit
  21. 04 Sep, 2023 1 commit
  22. 03 Sep, 2023 1 commit
  23. 04 Aug, 2023 2 commits
  24. 21 Jul, 2023 2 commits
  25. 20 Jul, 2023 1 commit
  26. 19 Jul, 2023 1 commit
  27. 14 Jul, 2023 1 commit
  28. 13 Jul, 2023 1 commit
  29. 06 Jul, 2023 1 commit
  30. 06 May, 2023 1 commit
  31. 05 May, 2023 1 commit
    • Add quantized training (CPU part) (#5800) · 17ecfab3
      shiyu1994 authored
      * add quantized training (first stage)
      
      * add histogram construction functions for integer gradients
      
      * add stochastic rounding
      
      * update docs
      
      * fix compilation errors by adding template instantiations
      
      * update files for compilation
      
      * fix compilation of gpu version
      
      * initialize gradient discretizer before share states
      
      * add a test case for quantized training
      
      * add quantized training for data distributed training
      
      * Delete origin.pred
      
      * Delete ifelse.pred
      
      * Delete LightGBM_model.txt
      
      * remove useless changes
      
      * fix lint error
      
      * remove debug logging
      
      * fix mismatch of vector and allocator types
      
      * remove changes in main.cpp
      
      * fix bugs with uninitialized gradient discretizer
      
      * initialize ordered gradients in gradient discretizer
      
      * disable quantized training with gpu and cuda

      * fix msvc compilation errors and warnings
      
      * fix bug in data parallel tree learner
      
      * make quantized training test deterministic
      
      * make quantized training in test case more accurate
      
      * refactor test_quantized_training
      
      * fix leaf splits initialization with quantized training
      
      * check distributed quantized training result
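The "histogram construction functions for integer gradients" bullet above can be illustrated with a minimal sketch. This is not LightGBM's actual C++ implementation; the function name and NumPy usage are illustrative. The idea is that once gradients are quantized to small integers, per-feature histograms become integer accumulations over feature bins:

```python
import numpy as np

def build_int_histogram(bin_indices, grad_int, num_bins):
    """Accumulate quantized (integer) gradients into per-bin buckets.

    bin_indices: feature-bin index of each data point.
    grad_int:    quantized integer gradient of each data point.
    """
    hist = np.zeros(num_bins, dtype=np.int64)
    # np.add.at performs unbuffered in-place addition, so repeated
    # bin indices accumulate correctly.
    np.add.at(hist, bin_indices, grad_int)
    return hist
```

Integer accumulation like this is cheaper than floating-point histogram construction, which is the motivation for quantizing gradients in the first place.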
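The "stochastic rounding" bullet refers to rounding each real-valued gradient to a nearby integer level with probability proportional to its fractional distance, so the quantization is unbiased in expectation. A minimal sketch of the technique, with an illustrative function name of my own (not the PR's code):

```python
import numpy as np

def stochastic_round(values, rng):
    """Round each value down or up at random so that E[rounded] == value."""
    low = np.floor(values)
    frac = values - low
    # Round up with probability equal to the fractional part.
    return low + (rng.random(values.shape) < frac)
```

Unlike deterministic rounding, this keeps the expected sum of quantized gradients equal to the true sum, which is what allows low-bit gradients to drive tree learning without a systematic bias.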
  32. 07 Mar, 2023 1 commit
  33. 14 Feb, 2023 1 commit
  34. 31 Jan, 2023 3 commits
  35. 24 Jan, 2023 1 commit