1. 17 Oct, 2025 1 commit
  2. 14 Oct, 2025 1 commit
  3. 13 Oct, 2025 1 commit
  4. 11 Oct, 2025 1 commit
  5. 25 Sep, 2025 1 commit
  6. 23 Sep, 2025 1 commit
  7. 22 Sep, 2025 1 commit
    • [ROCm] re-add support for ROCm builds · 61ec4f1a
      Jeff Daily authored
      Previously, #6086 added ROCm support, but after numerous rebases it lost
      critical changes. This PR restores the ROCm build.
      
      There are many source file changes but most were automated using the
      following:
      
      ```bash
      # For every file containing a CUDA-only guard, widen the guard to
      # cover ROCm builds as well.
      for f in $(grep -rl '#ifdef USE_CUDA'); do
          sed -i 's@#ifdef USE_CUDA@#if defined(USE_CUDA) || defined(USE_ROCM)@g' "$f"
      done

      for f in $(grep -rl '#endif  // USE_CUDA'); do
          sed -i 's@#endif  // USE_CUDA@#endif  // USE_CUDA || USE_ROCM@g' "$f"
      done
      ```
  8. 24 Aug, 2025 1 commit
  9. 24 Jul, 2025 1 commit
  10. 07 Feb, 2025 1 commit
  11. 02 Jan, 2025 1 commit
  12. 15 Dec, 2024 1 commit
    • [ci] use Ruff linter instead of isort (#6755) · c2f3807c
      Nikita Titov authored
      * Update append-comment.sh
      
      * Update static_analysis.yml
      
      * Update static_analysis.yml
      
      * Update basic.py
      
      * Update basic.py
      
      * Update .pre-commit-config.yaml
      
      * Update basic.py
      
      * Update basic.py
      
      * Update basic.py
      
      * Update basic.py
      
      * Update basic.py
      
      * Update pyproject.toml
      
      * Update pyproject.toml
      
      * Update pyproject.toml
      
      * Update pyproject.toml
      
      * Update interactive_plot_example.ipynb
      
      * Update pyproject.toml
      
      * Update append-comment.sh
      
      * Update basic.py
      
      * Update basic.py
      
      * Update pyproject.toml
      
      * Update .pre-commit-config.yaml
      
      * Update basic.py
      
      * Update basic.py
      
      * Update test_basic.R
      
      * Update rank_objective.hpp
      
      * Update histogram_16_64_256.cu
      
      * Update static_analysis.yml
      
      * ensure alphabetical order of rules
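The switch above boils down to dropping the standalone isort hook and letting Ruff's import-sorting rules take over. A hypothetical `pyproject.toml` fragment (not the PR's actual diff) showing the idea, with rule codes kept in alphabetical order:

```toml
# Sketch only: Ruff's "I" rule category replaces isort.
[tool.ruff.lint]
select = [
    "E",  # pycodestyle errors
    "I",  # import sorting, previously handled by isort
]
```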
  13. 11 Dec, 2024 1 commit
  14. 01 Dec, 2024 1 commit
  15. 18 Oct, 2024 1 commit
  16. 13 Oct, 2024 1 commit
  17. 19 Mar, 2024 1 commit
  18. 23 Feb, 2024 1 commit
    • [c++][fix] Support Quantized Training with Categorical Features on CPU (#6301) · 776c5c3c
      shiyu1994 authored
      * support quantized training with categorical features on cpu
      
      * remove white spaces
      
      * add tests for quantized training with categorical features
      
      * skip tests for cuda version
      
      * fix cases when only 1 data block in row-wise quantized histogram construction with 8 inner bits
      
      * remove useless capture
      
      * fix compilation warnings
      
      revert useless changes
      
      * revert useless change
      
      * separate functions in feature histogram into cpp file
      
      * add feature_histogram.o in Makevars
  19. 20 Feb, 2024 1 commit
  20. 17 Jan, 2024 1 commit
  21. 22 Nov, 2023 1 commit
  22. 10 Oct, 2023 1 commit
  23. 09 Oct, 2023 1 commit
  24. 08 Oct, 2023 1 commit
    • [CUDA] CUDA Quantized Training (fixes #5606) (#5933) · f901f471
      shiyu1994 authored
      * add quantized training (first stage)
      
      * add histogram construction functions for integer gradients
      
      * add stochastic rounding
      
      * update docs
      
      * fix compilation errors by adding template instantiations
      
      * update files for compilation
      
      * fix compilation of gpu version
      
      * initialize gradient discretizer before share states
      
      * add a test case for quantized training
      
      * add quantized training for data distributed training
      
      * Delete origin.pred
      
      * Delete ifelse.pred
      
      * Delete LightGBM_model.txt
      
      * remove useless changes
      
      * fix lint error
      
      * remove debug loggings
      
      * fix mismatch of vector and allocator types
      
      * remove changes in main.cpp
      
      * fix bugs with uninitialized gradient discretizer
      
      * initialize ordered gradients in gradient discretizer
      
      * disable quantized training with gpu and cuda
      
      fix msvc compilation errors and warnings
      
      * fix bug in data parallel tree learner
      
      * make quantized training test deterministic
      
      * make quantized training in test case more accurate
      
      * refactor test_quantized_training
      
      * fix leaf splits initialization with quantized training
      
      * check distributed quantized training result
      
      * add cuda gradient discretizer
      
      * add quantized training for CUDA version in tree learner
      
      * remove CUDA compute capability 6.1 and 6.2
      
      * fix parts of gpu quantized training errors and warnings
      
      * fix build-python.sh to install locally built version
      
      * fix memory access bugs
      
      * fix lint errors
      
      * mark cuda quantized training on cuda with categorical features as unsupported
      
      * rename cuda_utils.h to cuda_utils.hu
      
      * enable quantized training with cuda
      
      * fix cuda quantized training with sparse row data
      
      * allow using global memory buffer in histogram construction with cuda quantized training
      
      * recover build-python.sh
      
      enlarge allowed package size to 100M
  25. 12 Sep, 2023 1 commit
  26. 12 Jul, 2023 1 commit
  27. 30 Jun, 2023 1 commit
  28. 05 May, 2023 1 commit
    • Add quantized training (CPU part) (#5800) · 17ecfab3
      shiyu1994 authored
      * add quantized training (first stage)
      
      * add histogram construction functions for integer gradients
      
      * add stochastic rounding
      
      * update docs
      
      * fix compilation errors by adding template instantiations
      
      * update files for compilation
      
      * fix compilation of gpu version
      
      * initialize gradient discretizer before share states
      
      * add a test case for quantized training
      
      * add quantized training for data distributed training
      
      * Delete origin.pred
      
      * Delete ifelse.pred
      
      * Delete LightGBM_model.txt
      
      * remove useless changes
      
      * fix lint error
      
      * remove debug loggings
      
      * fix mismatch of vector and allocator types
      
      * remove changes in main.cpp
      
      * fix bugs with uninitialized gradient discretizer
      
      * initialize ordered gradients in gradient discretizer
      
      * disable quantized training with gpu and cuda
      
      fix msvc compilation errors and warnings
      
      * fix bug in data parallel tree learner
      
      * make quantized training test deterministic
      
      * make quantized training in test case more accurate
      
      * refactor test_quantized_training
      
      * fix leaf splits initialization with quantized training
      
      * check distributed quantized training result
  29. 15 Mar, 2023 1 commit
  30. 01 Feb, 2023 1 commit
    • [CUDA] consolidate CUDA versions (#5677) · 4f47547c
      James Lamb authored
      * [ci] speed up if-else, swig, and lint conda setup
      
      * add 'source activate'
      
      * python constraint
      
      * start removing cuda v1
      
      * comment out CI
      
      * remove more references
      
      * revert some unnecessary changes
      
      * revert a few more mistakes
      
      * revert another change that ignored params
      
      * sigh
      
      * remove CUDATreeLearner
      
      * fix tests, docs
      
      * fix quoting in setup.py
      
      * restore all CI
      
      * Apply suggestions from code review
      Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
      
      * Apply suggestions from code review
      
      * completely remove cuda_exp, update docs
      
      ---------
      Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
  31. 11 Sep, 2022 1 commit
  32. 07 Sep, 2022 1 commit
  33. 02 Sep, 2022 1 commit
  34. 29 Aug, 2022 1 commit
  35. 03 Aug, 2022 1 commit
  36. 29 Jul, 2022 2 commits
  37. 08 Jun, 2022 1 commit
    • Clear split info buffer in cost efficient gradient boosting before every iteration (fix partially #3679) (#5164) · f1328d5c
      shiyu1994 authored
      
      * clear split info buffer in cegb_ before every iteration
      
      * check nullable of cegb_ in serial_tree_learner.cpp
      
      * add a test case for checking the split buffer in CEGB
      
      * switch to Threading::For instead of raw OpenMP
      
      * apply review suggestions
      
      * apply review comments
      
      * remove device cpu
  38. 26 Apr, 2022 1 commit
  39. 24 Apr, 2022 1 commit