  1. 21 Mar, 2023 1 commit
  2. 01 Feb, 2023 1 commit
    • [CUDA] consolidate CUDA versions (#5677) · 4f47547c
      James Lamb authored
      
      
      * [ci] speed up if-else, swig, and lint conda setup
      
      * add 'source activate'
      
      * python constraint
      
      * start removing cuda v1
      
      * comment out CI
      
      * remove more references
      
      * revert some unnecessary changes
      
      * revert a few more mistakes
      
      * revert another change that ignored params
      
      * sigh
      
      * remove CUDATreeLearner
      
      * fix tests, docs
      
      * fix quoting in setup.py
      
      * restore all CI
      
      * Apply suggestions from code review
      Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
      
      * Apply suggestions from code review
      
      * completely remove cuda_exp, update docs
      
      ---------
      Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
      4f47547c
  3. 29 Dec, 2022 1 commit
  4. 28 Dec, 2022 1 commit
    • Decouple Boosting Types (fixes #3128) (#4827) · fffd066c
      Yifei Liu authored
      
      
      * add parameter data_sample_strategy
      
      * abstract GOSS as a sample strategy (GOSS1), together with original GOSS (normal Bagging has not been abstracted, so do NOT use it now)
      
      * abstract Bagging as a subclass (BAGGING), but original Bagging members in GBDT are still kept
      
      * fix some variables
      
      * remove GOSS (as boosting type) and Bagging logic in GBDT
      
      * rename GOSS1 to GOSS (as sample strategy)
      
      * add warning about using GOSS as boosting_type
      
      * fix a small `;` bug
      
      * remove CHECK when "gradients != nullptr"
      
      * rename DataSampleStrategy to avoid confusion
      
      * remove and add some comments, following convention
      
      * fix bug in GBDT::ResetConfig (ObjectiveFunction inconsistency bet…
      
      * add std::ignore to avoid compiler warnings (and potential failures)
      
      * update Makevars and vcxproj
      
      * handle constant hessian
      
      move resize of gradient vectors out of sample strategy
      
      * mark override for IsHessianChange
      
      * fix lint errors
      
      * rerun parameter_generator.py
      
      * update config_auto.cpp
      
      * delete redundant blank line
      
      * update num_data_ when train_data_ is updated
      
      set gradients and hessians when GOSS
      
      * check bagging_freq is not zero
      
      * reset config_ value
      
      merge ResetBaggingConfig and ResetGOSS
      
      * remove useless check
      
      * add tests in test_engine.py
      
      * remove whitespace in blank line
      
      * remove arguments verbose_eval and evals_result
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      reduce num_boost_round
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update src/boosting/sample_strategy.cpp
      
      modify warning about setting goss as `boosting_type`
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * Update tests/python_package_test/test_engine.py
      
      replace load_boston() with make_regression()
      
      remove value checks of mean_squared_error in test_sample_strategy_with_boosting()
      
      * Update tests/python_package_test/test_engine.py
      
      add value checks of mean_squared_error in test_sample_strategy_with_boosting()
      
      * Modify warning about using goss as boosting type
      
      * Update tests/python_package_test/test_engine.py
      
      add random_state=42 for make_regression()
      
      reduce the threshold of mean_square_error
      
      * Update src/boosting/sample_strategy.cpp
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * remove goss from boosting types in documentation
      
      * Update src/boosting/bagging.hpp
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * Update src/boosting/bagging.hpp
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * Update src/boosting/goss.hpp
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * Update src/boosting/goss.hpp
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * rename GOSS with GOSSStrategy
      
      * update doc
      
      * address comments
      
      * fix table in doc
      
      * Update include/LightGBM/config.h
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * update documentation
      
      * update test case
      
      * revert useless change in test_engine.py
      
      * add tests for evaluation results in test_sample_strategy_with_boosting
      
      * include <string>
      
      * change to assert_allclose in test_goss_boosting_and_strategy_equivalent
      
      * more tolerance in result checking, due to minor difference in results of gpu versions
      
      * change == to np.testing.assert_allclose
      
      * fix test case
      
      * set gpu_use_dp to true
      
      * change --report to --report-level for rstcheck
      
      * use gpu_use_dp=true in test_goss_boosting_and_strategy_equivalent
      
      * revert unexpected changes of non-ascii characters
      
      * revert unexpected changes of non-ascii characters
      
      * remove useless changes
      
      * allocate gradients_pointer_ and hessians_pointer_ when necessary
      
      * add spaces
      
      * remove redundant virtual
      
      * include <LightGBM/utils/log.h> for USE_CUDA
      
      * check for  in test_goss_boosting_and_strategy_equivalent
      
      * check for identity in test_sample_strategy_with_boosting
      
      * remove cuda option in test_sample_strategy_with_boosting
      
      * Update tests/python_package_test/test_engine.py
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      
      * Update tests/python_package_test/test_engine.py
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      
      * ResetGradientBuffers after ResetSampleConfig
      
      * ResetGradientBuffers after ResetSampleConfig
      
      * ResetGradientBuffers after bagging
      
      * remove useless code
      
      * check objective_function_ instead of gradients
      
      * enable rf with goss
      
      simplify params in test cases
      
      * remove useless changes
      
      * allow rf with feature subsampling alone
      
      * change position of ResetGradientBuffers
      
      * check for dask
      
      * add parameter types for data_sample_strategy
      Co-authored-by: Guangda Liu <v-guangdaliu@microsoft.com>
      Co-authored-by: Yu Shi <shiyu_k1994@qq.com>
      Co-authored-by: GuangdaLiu <90019144+GuangdaLiu@users.noreply.github.com>
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
      fffd066c
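The GOSS sampling that this PR abstracts into the new `data_sample_strategy` can be sketched as follows. This is a minimal pure-Python illustration of the technique (keep the rows with the largest gradients, randomly subsample the rest, and re-weight the sampled rows so gradient sums stay roughly unbiased); the function name and default rates are invented for illustration and this is not LightGBM's C++ implementation.

```python
import random

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, seed=0):
    """Illustrative GOSS (Gradient-based One-Side Sampling) sketch."""
    rnd = random.Random(seed)
    n = len(gradients)
    n_top = int(n * top_rate)
    n_other = int(n * other_rate)
    # indices ordered by descending |gradient|
    order = sorted(range(n), key=lambda i: -abs(gradients[i]))
    top_idx = order[:n_top]                         # always kept
    other_idx = rnd.sample(order[n_top:], n_other)  # random remainder
    weights = [1.0] * n
    amp = (1.0 - top_rate) / other_rate  # up-weight small-gradient rows
    for i in other_idx:
        weights[i] = amp
    return top_idx + other_idx, weights

grads = [5.0, -4.0, 0.1, -0.2, 0.05, 0.3, -0.15, 0.08, 2.5, -0.02]
keep, w = goss_sample(grads, top_rate=0.2, other_rate=0.2)
```

With 10 rows and both rates at 0.2, the two largest-gradient rows are kept outright and two of the remaining eight are sampled with weight (1 - 0.2) / 0.2 = 4.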
  5. 27 Dec, 2022 1 commit
    • [CUDA] Add L2 metric for new CUDA version (#5633) · 6482b47e
      shiyu1994 authored
      * add rmse metric for new cuda version
      
      * add Init for CUDAMetricInterface
      
      * fix lint errors
      
      * fix rmse and add l2 metric for new cuda version
      
      * use CUDAL2Metric
      
      * explicit template instantiation
      
      * write result only with the first thread
      
      * pre allocate buffer for output converting
      
      * fix l2 regression with cuda metric evaluation
      
      * weighting loss in cuda metric evaluation
      
      * mark CUDATree::AsConstantTree as override
      6482b47e
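The L2/RMSE metric added here reduces to a weighted squared error, matching the "weighting loss in cuda metric evaluation" bullet above. A minimal CPU reference sketch (function names are illustrative; the actual computation runs in CUDA kernels with a device-side reduction):

```python
def weighted_l2(y_true, y_pred, weights=None):
    """Weighted L2 (MSE): sum(w * (y - p)^2) / sum(w)."""
    if weights is None:
        weights = [1.0] * len(y_true)
    num = sum(w * (y - p) ** 2 for y, p, w in zip(y_true, y_pred, weights))
    return num / sum(weights)

def weighted_rmse(y_true, y_pred, weights=None):
    """RMSE is just the square root of the weighted L2 metric."""
    return weighted_l2(y_true, y_pred, weights) ** 0.5
```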
  6. 02 Dec, 2022 1 commit
  7. 27 Nov, 2022 1 commit
    • [CUDA] Add Poisson regression objective for cuda_exp and refactor objective functions for cuda_exp (#5486) · 24af9fa5
      shiyu1994 authored
      
      * add poisson regression objective for cuda_exp
      
      * enable Poisson regression for cuda_exp
      
      * refactor cuda objective functions
      
      * remove useless changes
      
      * fix linter errors
      
      * remove redundant buffer in cuda poisson regression objective
      
      * fix log of cuda_exp binary objective
      
      * fix threshold of poisson objective result
      
      * remove useless changes
      
      * fix compilation errors
      
      * add cuda quantile regression objective
      
      * remove cuda quantile regression objective
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      24af9fa5
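For reference, a Poisson regression objective with a log link boosts on the gradient and Hessian of the per-row loss exp(s) - y*s (the Poisson negative log-likelihood with the constant log(y!) term dropped). A plain-Python sketch of those derivatives; LightGBM's objective additionally applies a `poisson_max_delta_step` safeguard to the Hessian, which is omitted here, and the function name is invented.

```python
import math

def poisson_grad_hess(scores, labels):
    """Derivatives of loss(s, y) = exp(s) - y*s w.r.t. raw score s."""
    grad = [math.exp(s) - y for s, y in zip(scores, labels)]  # exp(s) - y
    hess = [math.exp(s) for s in scores]                      # exp(s) > 0
    return grad, hess
```

The gradient vanishes exactly when exp(s) equals the label, i.e. when the predicted mean matches the target.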
  8. 06 Nov, 2022 1 commit
  9. 11 Sep, 2022 1 commit
  10. 09 Sep, 2022 1 commit
  11. 07 Sep, 2022 2 commits
  12. 05 Sep, 2022 2 commits
  13. 02 Sep, 2022 1 commit
  14. 01 Sep, 2022 1 commit
  15. 31 Aug, 2022 2 commits
    • [CUDA] L2 regression objective for cuda_exp (#5452) · 9e89ee7f
      shiyu1994 authored
      * add (l2) regression objective for cuda_exp
      
      * fix lint errors
      
      * correct time tag
      9e89ee7f
    • [CUDA] Add binary objective for cuda_exp (#5425) · 2b8fe8b4
      shiyu1994 authored
      * add binary objective for cuda_exp
      
      * include <string> and <vector>
      
      * exchange include ordering
      
      * fix length of score to copy in evaluation
      
      * fix EvalOneMetric
      
      * fix cuda binary objective and prediction when boosting on gpu
      
      * Add white space
      
      * fix BoostFromScore for CUDABinaryLogloss
      
      update log in test_register_logger
      
      * include <algorithm>
      
      * simplify shared memory buffer
      2b8fe8b4
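The `BoostFromScore` fix above concerns the constant initial raw score for binary log-loss: the log-odds of the (weighted) positive rate, so the first iteration starts from the best constant prediction. A hedged sketch of that idea (function name invented; not the CUDA code):

```python
import math

def boost_from_score_binary(labels, weights=None):
    """Log-odds of the (weighted) positive rate, clamped away from 0/1."""
    if weights is None:
        weights = [1.0] * len(labels)
    pos = sum(w for y, w in zip(labels, weights) if y > 0)
    p = pos / sum(weights)
    p = min(max(p, 1e-15), 1.0 - 1e-15)  # avoid log(0) / division by zero
    return math.log(p / (1.0 - p))
```

A balanced label vector gives an initial score of 0; a 75% positive rate gives log(3).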
  16. 29 Aug, 2022 1 commit
  17. 29 Jul, 2022 1 commit
  18. 10 May, 2022 1 commit
  19. 20 Jan, 2022 1 commit
  20. 08 Oct, 2021 1 commit
  21. 25 Jun, 2021 1 commit
  22. 21 May, 2021 1 commit
  23. 18 May, 2021 1 commit
  24. 04 May, 2021 1 commit
  25. 17 Mar, 2021 1 commit
    • Range check for DCG position discount lookup (#4069) · 4580393f
      ashok-ponnuswami-msft authored
      * Add check to prevent out-of-range lookup in the position discount table. Add debug logging to report the number of queries found in the data.
      
      * Change debug logging location so that we can print the data file name as well.
      
      * Revert "Change debug logging location so that we can print the data file name as well."
      
      This reverts commit 3981b34bd6e0530f89c4733e78e6b6603bf50d48.
      
      * Add data file name to debug logging.
      
      * Move log line to a place where it is output even when query IDs are read from a separate file.
      
      * Also add the out-of-range check to rank metrics.
      
      * Perform check after number of queries is initialized.
      
      * Update
      4580393f
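The fix above guards lookups into the precomputed position-discount table used by DCG. A minimal sketch of the idea (table size and names are illustrative, not LightGBM's): positions beyond the table simply stop contributing instead of indexing past the end.

```python
import math

# Precomputed position discounts 1 / log2(position + 2); size illustrative.
DISCOUNT = [1.0 / math.log2(2 + i) for i in range(1024)]

def dcg(ranked_relevances):
    total = 0.0
    for pos, rel in enumerate(ranked_relevances):
        if pos >= len(DISCOUNT):   # the range check: never index past
            break                  # the end of the discount table
        total += (2 ** rel - 1) * DISCOUNT[pos]
    return total
```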
  26. 11 Dec, 2020 1 commit
  27. 23 Nov, 2020 1 commit
  28. 10 Nov, 2020 1 commit
  29. 27 Oct, 2020 1 commit
    • Add support to optimize for NDCG at a given truncation level (#3425) · ba0a1f8d
      Pavel Metrikov authored
      
      
      * Add support to optimize for NDCG at a given truncation level
      
      In order to correctly optimize for NDCG@_k_, one should exclude pairs containing both documents beyond the top-_k_ (as they don't affect NDCG@_k_ when swapped).
      
      * Update rank_objective.hpp
      
      * Apply suggestions from code review
      Co-authored-by: Guolin Ke <guolin.ke@outlook.com>
      
      * Update rank_objective.hpp
      
      remove the additional branching: get high_rank and low_rank by one "if".
      
      * Update config.h
      
      add description to lambdarank_truncation_level parameter
      
      * Update Parameters.rst
      
      * Update test_sklearn.py
      
      update expected NDCG value for a test, as it was affected by the underlying change in the algorithm
      
      * Update test_sklearn.py
      
      update NDCG@3 reference value
      
      * fix R learning-to-rank tests
      
      * Update rank_objective.hpp
      
      * Update include/LightGBM/config.h
      Co-authored-by: Guolin Ke <guolin.ke@outlook.com>
      
      * Update Parameters.rst
      Co-authored-by: Guolin Ke <guolin.ke@outlook.com>
      Co-authored-by: James Lamb <jaylamb20@gmail.com>
      ba0a1f8d
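The key observation in this PR, that swapping a pair whose documents both sit beyond the top-k leaves NDCG@k unchanged, can be checked numerically. A small illustrative sketch (the relevance labels and k are made up):

```python
import math

def dcg_at_k(rels, k):
    """DCG@k over relevance labels listed in current ranked order."""
    return sum((2 ** r - 1) / math.log2(2 + i) for i, r in enumerate(rels[:k]))

ranked = [3, 2, 0, 1, 2]   # one query's relevance labels, in ranked order
k = 3

before = dcg_at_k(ranked, k)

# Swap two documents that are BOTH beyond the top-k (positions 3 and 4):
swapped = ranked[:]
swapped[3], swapped[4] = swapped[4], swapped[3]
after = dcg_at_k(swapped, k)
# DCG@k (hence NDCG@k) is unchanged, so LambdaRank can skip such pairs.
```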
  30. 05 Aug, 2020 1 commit
  31. 05 Jun, 2020 1 commit
  32. 01 Jun, 2020 1 commit
  33. 21 May, 2020 1 commit
  34. 12 May, 2020 1 commit
  35. 30 Apr, 2020 1 commit
  36. 04 Mar, 2020 1 commit
  37. 27 Feb, 2020 1 commit