1. 13 Apr, 2017 2 commits
    • remove additional cost for prediction task. (#404) · ab559101
      Guolin Ke authored
      * refine prediction logic.
      
      * fix test.
      
      * fix out_len in training score of Dart.
      
      * improve predict speed for high dimension data.
      
      * try use unordered_map for sparse prediction.
      
      * avoid using unordered_map.
      
      * clean code.
      
      * fix test.
      
      * move predict buffer to Predictor.
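The unordered_map bullets above (tried, then avoided) concern how a sparse feature row is looked up during prediction. A minimal Python sketch of that tradeoff, with hypothetical data, contrasts a hash-map lookup against a binary search over the sorted (index, value) pairs:

```python
import bisect

# Hypothetical sparse feature row: (feature_index, value) pairs, sorted by index.
sparse_row = [(2, 0.5), (7, 1.25), (40, -3.0)]

# Option 1: hash-map lookup (the unordered_map approach the commits tried).
row_map = dict(sparse_row)

def get_via_map(feature_index):
    """O(1) average lookup, but building the map costs time and memory per row."""
    return row_map.get(feature_index, 0.0)

# Option 2: binary search over the sorted pairs (no extra structure to build).
indices = [i for i, _ in sparse_row]

def get_via_bisect(feature_index):
    """O(log n) lookup directly on the sorted representation."""
    pos = bisect.bisect_left(indices, feature_index)
    if pos < len(indices) and indices[pos] == feature_index:
        return sparse_row[pos][1]
    return 0.0

print(get_via_map(7), get_via_bisect(7))    # both find 1.25
print(get_via_map(3), get_via_bisect(3))    # both default to 0.0
```

The per-row construction cost in option 1 is plausibly why the map was avoided in the end, though the log does not say so explicitly.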
    • Move Boost.Compute include path to the beginning of compiler search paths (#411) · 6be08748
      Huan Zhang authored
      * fix the size of vector cnt_per_class
      
      * put Boost.Compute includes at the beginning
      
      We want to put the submodule Boost.Compute at the beginning of compiler search
      paths, because the submodule usually contains upstream bug fixes that have not
      been included in the stable Boost releases installed on the host OS.
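The include-ordering fix described above can be sketched as a CMake fragment (paths and Boost version are hypothetical, not the project's actual CMakeLists.txt):

```cmake
# Prepend the Boost.Compute submodule so its headers shadow any older
# Boost.Compute headers shipped with the system Boost installation.
include_directories(BEFORE "${PROJECT_SOURCE_DIR}/compute/include")

# System Boost (and everything else) is searched afterwards.
find_package(Boost 1.56.0 REQUIRED COMPONENTS filesystem system)
include_directories(${Boost_INCLUDE_DIRS})
```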
  2. 12 Apr, 2017 5 commits
    • python-package: add graphviz.Digraph parameters (#400) · 053d9d8a
      Tsukasa OMOTO authored
      * python-package: add graphviz.Digraph parameters
      
      * examples: add a plotting example with graphviz
      
      * fix tree index in print
    • fix the size of vector cnt_per_class (#402) · 223b164e
      Huan Zhang authored
    • Add Some More GPU documentation (#401) · bbcd5f4d
      Huan Zhang authored
      * add dummy gpu solver code
      
      * initial GPU code
      
      * fix crash bug
      
      * first working version
      
      * use asynchronous copy
      
      * use a better kernel for root
      
      * parallel read histogram
      
      * sparse features now work, but with no acceleration; computed on CPU
      
      * compute sparse feature on CPU simultaneously
      
      * fix big bug; add gpu selection; add kernel selection
      
      * better debugging
      
      * clean up
      
      * add feature scatter
      
      * Add sparse_threshold control
      
      * fix a bug in feature scatter
      
      * clean up debug
      
      * temporarily add OpenCL kernels for k=64,256
      
      * fix up CMakeList and definition USE_GPU
      
      * add OpenCL kernels as string literals
      
      * Add boost.compute as a submodule
      
      * add boost dependency into CMakeList
      
      * fix opencl pragma
      
      * use pinned memory for histogram
      
      * use pinned buffer for gradients and hessians
      
      * better debugging message
      
      * add double precision support on GPU
      
      * fix boost version in CMakeList
      
      * Add a README
      
      * reconstruct GPU initialization code for ResetTrainingData
      
      * move data to GPU in parallel
      
      * fix a bug during feature copy
      
      * update gpu kernels
      
      * update gpu code
      
      * initial port to LightGBM v2
      
      * speedup GPU data loading process
      
      * Add 4-bit bin support to GPU
      
      * re-add sparse_threshold parameter
      
      * remove kMaxNumWorkgroups and allow an unlimited number of features
      
      * add feature mask support for skipping unused features
      
      * enable kernel cache
      
      * use GPU kernels without feature masks when all features are used
      
      * README.
      
      * README.
      
      * update README
      
      * fix typos (#349)
      
      * change compile to gcc on Apple as default
      
      * clean vscode related file
      
      * refine api of constructing from sampling data.
      
      * fix bug in the last commit.
      
      * more efficient algorithm to sample k from n.
      
      * fix bug in filter bin
      
      * change to boost from average output.
      
      * fix tests.
      
      * only stop training when all classes are finished in multi-class.
      
      * limit the max tree output. change hessian in multi-class objective.
      
      * robust tree model loading.
      
      * fix test.
      
      * convert the probabilities to raw score in boost_from_average of classification.
      
      * fix the average label for binary classification.
      
      * Add boost_from_average to docs (#354)
      
      * don't use "ConvertToRawScore" for self-defined objective function.
      
      * boost_from_average doesn't seem to work well in binary classification; remove it.
      
      * For a better jump link (#355)
      
      * Update Python-API.md
      
      * for a better jump in page
      
      A space is needed between `#` and the header's content, according to GitHub's Markdown format [guideline](https://guides.github.com/features/mastering-markdown/).
      
      After adding the spaces, we can jump to the exact position in the page by clicking the link.
      
      * fixed something mentioned by @wxchan
      
      * Update Python-API.md
      
      * add FitByExistingTree.
      
      * adapt GPU tree learner for FitByExistingTree
      
      * avoid NaN output.
      
      * update boost.compute
      
      * fix typos (#361)
      
      * fix broken links (#359)
      
      * update README
      
      * disable GPU acceleration by default
      
      * fix image url
      
      * cleanup debug macro
      
      * remove old README
      
      * do not save sparse_threshold_ in FeatureGroup
      
      * add details for new GPU settings
      
      * ignore submodule when doing pep8 check
      
      * allocate workspace for at least one thread while building Feature4
      
      * move sparse_threshold to class Dataset
      
      * remove duplicated code in GPUTreeLearner::Split
      
      * Remove duplicated code in FindBestThresholds and BeforeFindBestSplit
      
      * do not rebuild ordered gradients and hessians for sparse features
      
      * support feature groups in GPUTreeLearner
      
      * Initial parallel learners with GPU support
      
      * add option device, cleanup code
      
      * clean up FindBestThresholds; add some omp parallel
      
      * constant hessian optimization for GPU
      
      * Fix GPUTreeLearner crash when there is zero feature
      
      * use np.testing.assert_almost_equal() to compare lists of floats in tests
      
      * travis for GPU
      
      * add tutorial and more GPU docs
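One of the bullets above mentions "a more efficient algorithm to sample k from n". The log does not say which algorithm was adopted; a standard choice for this problem is reservoir sampling (Algorithm R), sketched here in Python purely as an illustration:

```python
import random

def sample_k_from_n(n, k, rng=random.Random(0)):
    """Uniformly sample k distinct indices from range(n) in O(n) time
    and O(k) extra memory (reservoir sampling, Algorithm R)."""
    reservoir = list(range(min(k, n)))
    for i in range(k, n):
        j = rng.randint(0, i)   # inclusive bounds: j in [0, i]
        if j < k:
            reservoir[j] = i    # replace a reservoir slot with probability k/(i+1)
    return reservoir

sample = sample_k_from_n(1000, 5)
print(sorted(sample))  # 5 distinct indices drawn from [0, 1000)
```

Each element of range(n) ends up in the reservoir with probability k/n, which is exactly the uniform "k from n" guarantee a binning sampler needs.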
    • python-package: refine plot_tree (#388) · 61ad133b
      Tsukasa OMOTO authored
      * python-package: refine plot_tree
      
      This change splits plot_tree into two methods:
      1. a method that creates a digraph of the tree
      2. a method that plots the digraph with matplotlib
      
      * fix doc
      
      * fix doc
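The split described above (digraph construction separated from matplotlib rendering) can be illustrated with a self-contained sketch; the tree dict, function name, and DOT output here are hypothetical, not the package's actual API:

```python
def create_tree_digraph(tree):
    """Step 1: build a Graphviz DOT description of the tree (no plotting)."""
    lines = ["digraph Tree {"]

    def walk(node, node_id=0):
        if "leaf" in node:
            lines.append(f'  n{node_id} [label="leaf: {node["leaf"]}"];')
            return node_id, node_id
        lines.append(f'  n{node_id} [label="f{node["feature"]} <= {node["threshold"]}"];')
        left_id, last = walk(node["left"], node_id + 1)
        right_id, last = walk(node["right"], last + 1)
        lines.append(f"  n{node_id} -> n{left_id};")
        lines.append(f"  n{node_id} -> n{right_id};")
        return node_id, last

    walk(tree)
    lines.append("}")
    return "\n".join(lines)

# Step 2 is a separate concern: hand the digraph to a renderer such as
# matplotlib or the graphviz package (omitted here to stay dependency-free).

toy_tree = {"feature": 0, "threshold": 1.5,
            "left": {"leaf": 0.2}, "right": {"leaf": -0.4}}
dot = create_tree_digraph(toy_tree)
print(dot)
```

Keeping step 1 free of plotting concerns is what makes the digraph reusable by any backend, which appears to be the motivation for the refactor.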
    • fix #399. · 546056e7
      Guolin Ke authored
  3. 10 Apr, 2017 4 commits
  4. 09 Apr, 2017 2 commits
    • Initial GPU acceleration support for LightGBM (#368) · 0bb4a825
      Huan Zhang authored
      * add dummy gpu solver code
      
      * initial GPU code
      
      * fix crash bug
      
      * first working version
      
      * use asynchronous copy
      
      * use a better kernel for root
      
      * parallel read histogram
      
      * sparse features now work, but with no acceleration; computed on CPU
      
      * compute sparse feature on CPU simultaneously
      
      * fix big bug; add gpu selection; add kernel selection
      
      * better debugging
      
      * clean up
      
      * add feature scatter
      
      * Add sparse_threshold control
      
      * fix a bug in feature scatter
      
      * clean up debug
      
      * temporarily add OpenCL kernels for k=64,256
      
      * fix up CMakeList and definition USE_GPU
      
      * add OpenCL kernels as string literals
      
      * Add boost.compute as a submodule
      
      * add boost dependency into CMakeList
      
      * fix opencl pragma
      
      * use pinned memory for histogram
      
      * use pinned buffer for gradients and hessians
      
      * better debugging message
      
      * add double precision support on GPU
      
      * fix boost version in CMakeList
      
      * Add a README
      
      * reconstruct GPU initialization code for ResetTrainingData
      
      * move data to GPU in parallel
      
      * fix a bug during feature copy
      
      * update gpu kernels
      
      * update gpu code
      
      * initial port to LightGBM v2
      
      * speedup GPU data loading process
      
      * Add 4-bit bin support to GPU
      
      * re-add sparse_threshold parameter
      
      * remove kMaxNumWorkgroups and allow an unlimited number of features
      
      * add feature mask support for skipping unused features
      
      * enable kernel cache
      
      * use GPU kernels without feature masks when all features are used
      
      * README.
      
      * README.
      
      * update README
      
      * fix typos (#349)
      
      * change compile to gcc on Apple as default
      
      * clean vscode related file
      
      * refine api of constructing from sampling data.
      
      * fix bug in the last commit.
      
      * more efficient algorithm to sample k from n.
      
      * fix bug in filter bin
      
      * change to boost from average output.
      
      * fix tests.
      
      * only stop training when all classes are finished in multi-class.
      
      * limit the max tree output. change hessian in multi-class objective.
      
      * robust tree model loading.
      
      * fix test.
      
      * convert the probabilities to raw score in boost_from_average of classification.
      
      * fix the average label for binary classification.
      
      * Add boost_from_average to docs (#354)
      
      * don't use "ConvertToRawScore" for self-defined objective function.
      
      * boost_from_average doesn't seem to work well in binary classification; remove it.
      
      * For a better jump link (#355)
      
      * Update Python-API.md
      
      * for a better jump in page
      
      A space is needed between `#` and the header's content, according to GitHub's Markdown format [guideline](https://guides.github.com/features/mastering-markdown/).
      
      After adding the spaces, we can jump to the exact position in the page by clicking the link.
      
      * fixed something mentioned by @wxchan
      
      * Update Python-API.md
      
      * add FitByExistingTree.
      
      * adapt GPU tree learner for FitByExistingTree
      
      * avoid NaN output.
      
      * update boost.compute
      
      * fix typos (#361)
      
      * fix broken links (#359)
      
      * update README
      
      * disable GPU acceleration by default
      
      * fix image url
      
      * cleanup debug macro
      
      * remove old README
      
      * do not save sparse_threshold_ in FeatureGroup
      
      * add details for new GPU settings
      
      * ignore submodule when doing pep8 check
      
      * allocate workspace for at least one thread while building Feature4
      
      * move sparse_threshold to class Dataset
      
      * remove duplicated code in GPUTreeLearner::Split
      
      * Remove duplicated code in FindBestThresholds and BeforeFindBestSplit
      
      * do not rebuild ordered gradients and hessians for sparse features
      
      * support feature groups in GPUTreeLearner
      
      * Initial parallel learners with GPU support
      
      * add option device, cleanup code
      
      * clean up FindBestThresholds; add some omp parallel
      
      * constant hessian optimization for GPU
      
      * Fix GPUTreeLearner crash when there is zero feature
      
      * use np.testing.assert_almost_equal() to compare lists of floats in tests
      
      * travis for GPU
    • fix a bug of filter count. · db3d1f89
      Guolin Ke authored
  5. 07 Apr, 2017 4 commits
  6. 06 Apr, 2017 1 commit
  7. 05 Apr, 2017 2 commits
  8. 03 Apr, 2017 1 commit
  9. 02 Apr, 2017 2 commits
  10. 31 Mar, 2017 2 commits
  11. 30 Mar, 2017 4 commits
  12. 29 Mar, 2017 2 commits
  13. 28 Mar, 2017 3 commits
  14. 27 Mar, 2017 4 commits
  15. 26 Mar, 2017 2 commits