- 04 Sep, 2023 1 commit
James Lamb authored
- 19 Aug, 2023 1 commit
James Lamb authored
- 15 Aug, 2023 1 commit
James Lamb authored
- 06 Jul, 2023 1 commit
James Lamb authored
- 04 Jul, 2023 2 commits
James Lamb authored
James Lamb authored
- 08 May, 2023 1 commit
James Lamb authored
- 04 May, 2023 1 commit
James Lamb authored
- 07 Apr, 2023 1 commit
James Lamb authored
- 27 Mar, 2023 1 commit
James Lamb authored
- 07 Mar, 2023 1 commit
James Lamb authored
- 01 Feb, 2023 1 commit
James Lamb authored

* [ci] speed up if-else, swig, and lint conda setup
* add 'source activate'
* python constraint
* start removing cuda v1
* comment out CI
* remove more references
* revert some unnecessary changes
* revert a few more mistakes
* revert another change that ignored params
* sigh
* remove CUDATreeLearner
* fix tests, docs
* fix quoting in setup.py
* restore all CI
* Apply suggestions from code review
  Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
* Apply suggestions from code review
* completely remove cuda_exp, update docs

Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
- 31 Jan, 2023 1 commit
James Lamb authored
- 10 Jan, 2023 1 commit
James Lamb authored
- 05 Jan, 2023 1 commit
James Lamb authored
- 29 Dec, 2022 1 commit
James Lamb authored
- 28 Dec, 2022 2 commits
James Lamb authored
James Lamb authored
- 15 Dec, 2022 1 commit
Nikita Titov authored
- 27 Nov, 2022 1 commit
James Lamb authored
- 25 Nov, 2022 1 commit
James Lamb authored
- 09 Oct, 2022 1 commit
James Lamb authored
- 08 Oct, 2022 1 commit
James Lamb authored
- 15 Sep, 2022 1 commit
James Lamb authored
- 12 Sep, 2022 1 commit
James Lamb authored
- 29 Aug, 2022 1 commit
shiyu1994 authored

* fix cuda_exp ci
* fix ci failures introduced by #5279
* cleanup cuda.yml
* fix test.sh
* clean up test.sh
* clean up test.sh
* skip lines by cuda_exp in test_register_logger
* Update tests/python_package_test/test_utilities.py
  Co-authored-by: Nikita Titov <nekit94-08@mail.ru>

Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
- 28 Aug, 2022 1 commit
Nikita Titov authored
- 25 Aug, 2022 1 commit
Nikita Titov authored

Update cuda.yml
- 11 Aug, 2022 1 commit
James Lamb authored
- 23 May, 2022 1 commit
James Lamb authored
- 22 May, 2022 1 commit
James Lamb authored
- 01 May, 2022 1 commit
James Lamb authored

* [ci] fix git checkout for comment-triggered CI jobs
* set locale prior to installing packages
* comment out cuda
* change strategy for setting locale
* comment out R jobs
* comment out more CI jobs
* update locales before installing other packages
* remove unnecessary packages
* add libc6 back
* restore libicu and libssl
* Revert "comment out more CI jobs"
  This reverts commit 8fd92144ad1dafc33ae699d7c3e159d8846e41b2.
* uncomment CI jobs
* revert more changes
* more reverting
* remove r_package.yml from diff
- 15 Apr, 2022 1 commit
James Lamb authored
- 14 Apr, 2022 1 commit
Nikita Titov authored
- 10 Apr, 2022 1 commit
James Lamb authored

* [ci] update to R 4.1.3 and use macOS-latest for R jobs (fixes #4990)
* update Windows version
* update docs env
* simplify r-package config
- 09 Apr, 2022 1 commit
James Lamb authored
- 06 Apr, 2022 1 commit
James Lamb authored

* [ci] use lee-dohm/no-response to close stale issues (fixes #5060)
* only run once a day
- 02 Apr, 2022 1 commit
Nikita Titov authored

* Update static_analysis.yml
* Update README.md
* Update README.md
- 01 Apr, 2022 1 commit
david-cortes authored

[R-package] Promote number of threads to top-level argument in `lightgbm()` and change default to number of cores (#4972)
- 23 Mar, 2022 1 commit
shiyu1994 authored

* new cuda framework
* add histogram construction kernel
* before removing multi-gpu
* new cuda framework
* tree learner cuda kernels
* single tree framework ready
* single tree training framework
* remove comments
* boosting with cuda
* optimize for best split find
* data split
* move boosting into cuda
* parallel synchronize best split point
* merge split data kernels
* before code refactor
* use tasks instead of features as units for split finding
* refactor cuda best split finder
* fix configuration error with small leaves in data split
* skip histogram construction of too small leaf
* skip split finding of invalid leaves, stop when no leaf to split
* support row wise with CUDA
* copy data for split by column
* copy data from host to CPU by column for data partition
* add synchronize best splits for one leaf from multiple blocks
* partition dense row data
* fix sync best split from task blocks
* add support for sparse row wise for CUDA
* remove useless code
* add l2 regression objective
* sparse multi value bin enabled for CUDA
* fix cuda ranking objective
* support for number of items <= 2048 per query
* speedup histogram construction by interleaving global memory access
* split optimization
* add cuda tree predictor
* remove comma
* refactor objective and score updater
* before use struct
* use structure for split information
* use structure for leaf splits
* return CUDASplitInfo directly after finding best split
* split with CUDATree directly
* use cuda row data in cuda histogram constructor
* clean src/treelearner/cuda
* gather shared cuda device functions
* put shared CUDA functions into header file
* change smaller leaf from <= back to < for consistent result with CPU
* add tree predictor
* remove useless cuda_tree_predictor
* predict on CUDA with pipeline
* add global sort algorithms
* add global argsort for queries with many items in ranking tasks
* remove limitation of maximum number of items per query in ranking
* add cuda metrics
* fix CUDA AUC
* remove debug code
* add regression metrics
* remove useless file
* don't use mask in shuffle reduce
* add more regression objectives
* fix cuda mape loss, add cuda xentropy loss
* use template for different versions of BitonicArgSortDevice
* add multiclass metrics
* add ndcg metric
* fix cross entropy objectives and metrics
* fix cross entropy and ndcg metrics
* add support for customized objective in CUDA
* complete multiclass ova for CUDA
* separate cuda tree learner
* use shuffle based prefix sum
* clean up cuda_algorithms.hpp
* add copy subset on CUDA
* add bagging for CUDA
* clean up code
* copy gradients from host to device
* support bagging without using subset
* add support of bagging with subset for CUDAColumnData
* add support of bagging with subset for dense CUDARowData
* refactor copy sparse subrow
* use copy subset for column subset
* add reset train data and reset config for CUDA tree learner, add deconstructors for cuda tree learner
* add USE_CUDA ifdef to cuda tree learner files
* check that dataset doesn't contain CUDA tree learner
* remove printf debug information
* use full new cuda tree learner only when using single GPU
* disable all CUDA code when using CPU version
* recover main.cpp
* add cpp files for multi value bins
* update LightGBM.vcxproj
* update LightGBM.vcxproj, fix lint errors
* fix lint errors
* fix lint errors
* update Makevars, fix lint errors
* fix the case with 0 feature and 0 bin, fix split finding for invalid leaves, create cuda column data when loaded from bin file
* fix lint errors, hide GetRowWiseData when cuda is not used
* recover default device type to cpu
* fix na_as_missing case, fix cuda feature meta information
* fix UpdateDataIndexToLeafIndexKernel
* create CUDA trees when needed in CUDADataPartition::UpdateTrainScore
* add refit by tree for cuda tree learner
* fix test_refit in test_engine.py
* create set of large bin partitions in CUDARowData
* add histogram construction for columns with a large number of bins
* add find best split for categorical features on CUDA
* add bitvectors for categorical split
* cuda data partition split for categorical features
* fix split tree with categorical feature
* fix categorical feature splits
* refactor cuda_data_partition.cu with multi-level templates
* refactor CUDABestSplitFinder by grouping task information into struct
* pre-allocate space for vector split_find_tasks_ in CUDABestSplitFinder
* fix misuse of reference
* remove useless changes
* add support for path smoothing
* virtual destructor for LightGBM::Tree
* fix overlapped cat threshold in best split infos
* reset histogram pointers in data partition and split finder in ResetConfig
* comment useless parameter
* fix reverse case when na is missing and default bin is zero
* fix mfb_is_na and mfb_is_zero and is_single_feature_column
* remove debug log
* fix cat_l2 when one-hot, fix gradient copy when data subset is used
* switch shared histogram size according to CUDA version
* gpu_use_dp=true when cuda test
* revert modification in config.h
* fix setting of gpu_use_dp=true in .ci/test.sh
* fix linter errors
* fix linter error, remove useless change
* recover main.cpp
* separate cuda_exp and cuda
* fix ci bash scripts, add description for cuda_exp
* add USE_CUDA_EXP flag
* switch off USE_CUDA_EXP
* revert changes in python-packages
* more careful separation for USE_CUDA_EXP
* fix CUDARowData::DivideCUDAFeatureGroups, fix set fields for cuda metadata
* revert config.h
* fix test settings for cuda experimental version
* skip some tests due to unsupported features or differences in implementation details for CUDA Experimental version
* fix lint issue by adding a blank line
* fix lint errors by resorting imports
* fix lint errors by resorting imports
* fix lint errors by resorting imports
* merge cuda.yml and cuda_exp.yml
* update python version in cuda.yml
* remove cuda_exp.yml
* remove unrelated changes
* fix compilation warnings, fix cuda exp ci task name
* recover task
* use multi-level template in histogram construction, check split only in debug mode
* ignore NVCC related lines in parameter_generator.py
* update job name for CUDA tests
* apply review suggestions
* Update .github/workflows/cuda.yml
  Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Update .github/workflows/cuda.yml
  Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* update header
* remove useless TODOs
* remove [TODO(shiyu1994): constrain the split with min_data_in_group] and record in #5062
* #include <LightGBM/utils/log.h> for USE_CUDA_EXP only
* fix include order
* fix include order
* remove extra space
* address review comments
* add warning when cuda_exp is used together with deterministic
* add comment about gpu_use_dp in .ci/test.sh
* revert changing order of included headers

Co-authored-by: Yu Shi <shiyu1994@qq.com>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
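The "use shuffle based prefix sum" item above refers to a warp-level scan on the GPU. As a rough illustration only (plain Python, not the actual CUDA kernel in this commit), the log-step Hillis–Steele pattern that warp-shuffle scans follow looks like this:

```python
def inclusive_scan(values):
    """Inclusive prefix sum via the Hillis-Steele log-step pattern.

    Each pass, position i adds the value `offset` positions to its
    left, mirroring what __shfl_up_sync does lane-to-lane in a CUDA
    warp scan. Runs in O(log n) passes instead of one serial sweep.
    """
    out = list(values)
    offset = 1
    while offset < len(out):
        out = [out[i] + (out[i - offset] if i >= offset else 0)
               for i in range(len(out))]
        offset *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5]))  # → [3, 4, 8, 9, 14]
```

On the device, the same pattern runs with no intermediate list: each lane holds one value in a register and pulls its neighbor's partial sum through a shuffle, which is why the commit list pairs it with "don't use mask in shuffle reduce".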