1. 04 Jul, 2017 1 commit
  2. 20 Jun, 2017 1 commit
  3. 17 Jun, 2017 1 commit
  4. 07 Jun, 2017 1 commit
  5. 06 Jun, 2017 1 commit
  6. 30 May, 2017 1 commit
    • Guolin Ke: Support early stopping of prediction in CLI (#565) · 6d4c7b03
      * fix multi-threading.
      
      * fix name style.
      
      * support in CLI version.
      
      * remove warnings.
      
      * Not default parameters.
      
      * fix if...else... .
      
      * fix bug.
      
      * fix warning.
      
      * refine c_api.
      
      * fix R-package.
      
      * fix R's warning.
      
      * fix tests.
      
      * fix pep8.
  7. 29 May, 2017 1 commit
    • cbecker: Add prediction early stopping (#550) · 993bbd5f
      * Add early stopping for prediction
      
      * Fix GBDT if-else prediction with early stopping
      
      * Small C++ embellishments to early stopping API and functions
      
      * Fix early stopping efficiency issue by creating a singleton for no early stopping
      
      * Python improvements to early stopping API
      
      * Add assertion check for binary and multiclass prediction score length
      
      * Update vcxproj and vcxproj.filters with new early stopping files
      
      * Remove inline from PredictRaw(); the linker was not able to find it otherwise
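The feature added in the two commits above stops summing tree outputs during prediction once the partial raw score is already decisive. A minimal pure-Python sketch of the idea, with an illustrative function name and margin rule (this is not LightGBM's actual API):

```python
def predict_early_stop(tree_outputs, margin=10.0, check_freq=2):
    """Accumulate per-tree raw scores, stopping once |score| exceeds margin.

    For binary classification, a large absolute raw score means the sigmoid
    is already saturated, so the remaining trees are very unlikely to flip
    the predicted class. The stopping rule is only tested every
    `check_freq` trees, mirroring the idea of a periodic check.
    """
    score = 0.0
    used = 0
    for i, out in enumerate(tree_outputs, start=1):
        score += out
        used = i
        if i % check_freq == 0 and abs(score) > margin:
            break  # remaining trees are skipped
    return score, used
```

The trade-off is a small per-check cost against the cost of evaluating every remaining tree, which pays off for deep ensembles with confidently classified rows.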
  8. 15 May, 2017 1 commit
  9. 02 May, 2017 2 commits
  10. 28 Apr, 2017 1 commit
    • wxchan: [MRG] translate model to if-else (#469) · 8a19834a
      * translate model to if-else
      
      * support multiclass and predictleaf
      
      * remove java option for now
      
      * support multi-thread
      
      * add task:convert_model
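The convert_model task above emits a standalone if-else translation of the trained trees. A toy sketch of the same code-generation idea in Python; the nested-dict node layout and function names here are invented for illustration and do not match LightGBM's model format:

```python
def tree_to_code(node, indent="    "):
    """Recursively emit if-else source for one decision tree.

    Internal nodes carry 'feature', 'threshold', 'left', 'right';
    leaves carry 'value'. This layout is illustrative only.
    """
    if "value" in node:
        return f"{indent}return {node['value']!r}\n"
    src = f"{indent}if x[{node['feature']}] <= {node['threshold']!r}:\n"
    src += tree_to_code(node["left"], indent + "    ")
    src += f"{indent}else:\n"
    src += tree_to_code(node["right"], indent + "    ")
    return src

def compile_tree(node, name="predict_tree"):
    """Build a callable by exec-ing the generated if-else source."""
    src = f"def {name}(x):\n" + tree_to_code(node)
    namespace = {}
    exec(src, namespace)
    return namespace[name]
```

The appeal of the if-else form is that prediction needs no model parser or tree traversal loop at runtime: the compiler sees plain branches, which is also why the commit could add multi-threading over rows so easily.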
  11. 27 Apr, 2017 1 commit
  12. 22 Apr, 2017 3 commits
  13. 17 Apr, 2017 2 commits
  14. 16 Apr, 2017 1 commit
    • Guolin Ke: faster histogram sum up (#418) · 98c7c2a3
      * some refactor.
      
      * two-stage sum up to reduce summation error.
      
      * add more two-stage sumup.
      
      * some refactor.
      
      * add alignment.
      
      * change name to aligned_allocator.
      
      * remove some useless sumup.
      
      * fix a warning.
      
      * add -march=native.
      
      * remove the padding of gradients.
      
      * no alignment.
      
      * fix test.
      
      * change KNumSumupGroup to 32768.
      
      * change gcc flags.
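The two-stage sum-up above reduces floating-point accumulation error by summing fixed-size groups first and then summing the group totals, so each running sum accumulates far fewer terms. A pure-Python sketch of the idea; the default group size echoes the 32768 constant mentioned in the commit, but the function itself is illustrative:

```python
def two_stage_sum(values, group_size=32768):
    """Sum in fixed-size groups, then sum the per-group partial sums.

    Each partial sum accumulates at most `group_size` terms, so the
    magnitude gap between the running total and the next addend (the
    main source of rounding error) is bounded by the group size rather
    than growing with len(values).
    """
    partials = []
    for start in range(0, len(values), group_size):
        group_total = 0.0
        for v in values[start:start + group_size]:
            group_total += v
        partials.append(group_total)
    total = 0.0
    for p in partials:
        total += p
    return total
```

This is the same principle behind pairwise summation; two fixed stages keep the memory access pattern simple, which matters for the SIMD-friendly histogram loops the commit targets.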
  15. 13 Apr, 2017 1 commit
    • Guolin Ke: remove additional cost for prediction task. (#404) · ab559101
      * refine prediction logic.
      
      * fix test.
      
      * fix out_len in training score of Dart.
      
      * improve predict speed for high dimension data.
      
      * try use unordered_map for sparse prediction.
      
      * avoid using unordered_map.
      
      * clean code.
      
      * fix test.
      
      * move predict buffer to Predictor.
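The commit above tries an unordered_map for sparse prediction, then avoids it and moves a predict buffer into the Predictor instead. The usual pattern behind that choice is to scatter each sparse row into a reusable dense array, so every feature probe is an O(1) index instead of a hash lookup. A hedged sketch of that pattern; the class and method names are invented:

```python
class DensePredictBuffer:
    """Reusable dense buffer for probing sparse rows by feature index.

    Scatter the row's (index, value) pairs in, read any feature at O(1),
    then clear only the touched slots so the buffer can be reused without
    an O(num_features) wipe per row.
    """
    def __init__(self, num_features):
        self.values = [0.0] * num_features
        self.touched = []

    def load(self, sparse_row):
        for idx, val in sparse_row:
            self.values[idx] = val
            self.touched.append(idx)

    def get(self, idx):
        return self.values[idx]

    def clear(self):
        for idx in self.touched:
            self.values[idx] = 0.0
        self.touched.clear()
```

Keeping the buffer on the Predictor (rather than allocating per call) also removes allocation cost from the hot prediction path, which is consistent with the commit's "remove additional cost" framing.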
  16. 12 Apr, 2017 1 commit
  17. 10 Apr, 2017 1 commit
    • Guolin Ke: refine prediction logic. (#395) · 71660f1c
      * refine prediction logic.
      
      * fix test.
      
      * fix out_len in training score of Dart.
      
      * improve predict speed for high dimension data.
  18. 09 Apr, 2017 1 commit
    • Huan Zhang: Initial GPU acceleration support for LightGBM (#368) · 0bb4a825
      * add dummy gpu solver code
      
      * initial GPU code
      
      * fix crash bug
      
      * first working version
      
      * use asynchronous copy
      
      * use a better kernel for root
      
      * parallel read histogram
      
      * sparse features now work, but no acceleration; compute on CPU
      
      * compute sparse feature on CPU simultaneously
      
      * fix big bug; add gpu selection; add kernel selection
      
      * better debugging
      
      * clean up
      
      * add feature scatter
      
      * Add sparse_threshold control
      
      * fix a bug in feature scatter
      
      * clean up debug
      
      * temporarily add OpenCL kernels for k=64,256
      
      * fix up CMakeList and definition USE_GPU
      
      * add OpenCL kernels as string literals
      
      * Add boost.compute as a submodule
      
      * add boost dependency into CMakeList
      
      * fix opencl pragma
      
      * use pinned memory for histogram
      
      * use pinned buffer for gradients and hessians
      
      * better debugging message
      
      * add double precision support on GPU
      
      * fix boost version in CMakeList
      
      * Add a README
      
      * reconstruct GPU initialization code for ResetTrainingData
      
      * move data to GPU in parallel
      
      * fix a bug during feature copy
      
      * update gpu kernels
      
      * update gpu code
      
      * initial port to LightGBM v2
      
      * speedup GPU data loading process
      
      * Add 4-bit bin support to GPU
      
      * re-add sparse_threshold parameter
      
      * remove kMaxNumWorkgroups and allows an unlimited number of features
      
      * add feature mask support for skipping unused features
      
      * enable kernel cache
      
      * use GPU kernels without feature masks when all features are used
      
      * README.
      
      * README.
      
      * update README
      
      * fix typos (#349)
      
      * change compile to gcc on Apple as default
      
      * clean vscode related file
      
      * refine api of constructing from sampling data.
      
      * fix bug in the last commit.
      
      * more efficient algorithm to sample k from n.
      
      * fix bug in filter bin
      
      * change to boost from average output.
      
      * fix tests.
      
      * only stop training when all classes are finished in multi-class.
      
      * limit the max tree output. change hessian in multi-class objective.
      
      * robust tree model loading.
      
      * fix test.
      
      * convert the probabilities to raw score in boost_from_average of classification.
      
      * fix the average label for binary classification.
      
      * Add boost_from_average to docs (#354)
      
      * don't use "ConvertToRawScore" for self-defined objective function.
      
      * boost_from_average doesn't seem to work well in binary classification; remove it.
      
      * For a better jump link (#355)
      
      * Update Python-API.md
      
      * for a better jump in page
      
      A space is needed between `#` and the header's content according to GitHub's markdown format [guideline](https://guides.github.com/features/mastering-markdown/).
      
      After adding the spaces, we can jump to the exact position in the page by clicking the link.
      
      * fixed something mentioned by @wxchan
      
      * Update Python-API.md
      
      * add FitByExistingTree.
      
      * adapt GPU tree learner for FitByExistingTree
      
      * avoid NaN output.
      
      * update boost.compute
      
      * fix typos (#361)
      
      * fix broken links (#359)
      
      * update README
      
      * disable GPU acceleration by default
      
      * fix image url
      
      * cleanup debug macro
      
      * remove old README
      
      * do not save sparse_threshold_ in FeatureGroup
      
      * add details for new GPU settings
      
      * ignore submodule when doing pep8 check
      
      * allocate workspace for at least one thread during building Feature4
      
      * move sparse_threshold to class Dataset
      
      * remove duplicated code in GPUTreeLearner::Split
      
      * Remove duplicated code in FindBestThresholds and BeforeFindBestSplit
      
      * do not rebuild ordered gradients and hessians for sparse features
      
      * support feature groups in GPUTreeLearner
      
      * Initial parallel learners with GPU support
      
      * add option device, cleanup code
      
      * clean up FindBestThresholds; add some omp parallel
      
      * constant hessian optimization for GPU
      
      * Fix GPUTreeLearner crash when there are zero features
      
      * use np.testing.assert_almost_equal() to compare lists of floats in tests
      
      * travis for GPU
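The core computation the GPU kernels in this commit accelerate is histogram construction: for each feature, per-bin gradient and hessian sums are accumulated over all samples. A serial pure-Python sketch of that reduction, just to show the shape of the work being parallelized (the function and its histogram layout are illustrative, not LightGBM's internal representation):

```python
def build_histogram(bin_indices, gradients, hessians, num_bins):
    """Accumulate (sum_gradient, sum_hessian, count) per bin for one feature.

    `bin_indices[i]` is the discretized bin of sample i for this feature.
    The GPU version distributes this loop across workgroups and features;
    the reduction itself is identical.
    """
    hist = [[0.0, 0.0, 0] for _ in range(num_bins)]
    for b, g, h in zip(bin_indices, gradients, hessians):
        hist[b][0] += g
        hist[b][1] += h
        hist[b][2] += 1
    return hist
```

The "constant hessian optimization" bullet above corresponds to the case where every hessian is identical (e.g. squared loss), so the hessian column can be derived from the count instead of being accumulated.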
  19. 06 Apr, 2017 1 commit
  20. 05 Apr, 2017 1 commit
  21. 31 Mar, 2017 2 commits
  22. 30 Mar, 2017 3 commits
  23. 28 Mar, 2017 1 commit
  24. 26 Mar, 2017 1 commit
  25. 25 Mar, 2017 1 commit
  26. 24 Mar, 2017 4 commits
  27. 23 Mar, 2017 1 commit
  28. 22 Mar, 2017 2 commits
  29. 07 Mar, 2017 1 commit