1. 05 Nov, 2019 1 commit
  2. 22 Sep, 2019 1 commit
  3. 13 Apr, 2019 1 commit
  4. 11 Apr, 2019 1 commit
  5. 26 Mar, 2019 1 commit
  6. 26 Feb, 2019 1 commit
      Add ability to move features from one data set to another in memory (#2006) · 219c943d
      remcob-gr authored
      * Initial attempt to implement appending features in-memory to another data set
      
      The intent is for this to enable munging files together easily, without needing to round-trip via numpy or write multiple copies to disk.
      In turn, that enables working more efficiently with data sets that were written separately.
      
      * Implement Dataset.dump_text, and fix small bug in appending of group bin boundaries.
      
      Dumping to text enables us to compare results, without having to worry about issues like features being reordered.
      
      * Add basic tests for validation logic for add_features_from.
      
      * Remove various internal mapping items from dataset text dumps
      
      These are too sensitive to the exact feature order chosen, which is not visible to the user.
      Including them in tests appears unnecessary, as the data dumping code should provide enough coverage.
      
      * Add test that add_features_from results in identical data sets according to dump_text.
      
      * Add test that booster behaviour after using add_features_from matches that of training on the full data
      
      This checks:
      - That training after add_features_from works at all
      - That add_features_from does not cause training to misbehave
      
      * Expose feature_penalty and monotone_types/constraints via get_field
      
      These getters allow us to check that add_features_from does the right thing with these vectors.
      
      * Add tests that add_features correctly handles feature_penalty and monotone_constraints.
      
      * Ensure add_features_from properly frees the added dataset and add unit test for this
      
      Since add_features_from moves the feature group pointers from the added dataset to the dataset being added to, the added dataset is invalid after the call.
      We must ensure we do not try to access this handle.
      
      * Remove some obsolete TODOs
      
      * Tidy up DumpTextFile by using a single iterator for each feature
      
      These iterators were also being passed around as raw pointers without being freed; this is now fixed.
      
      * Factor out offsetting logic in AddFeaturesFrom
      
      * Remove obsolete TODO
      
      * Remove another TODO
      
      This one is debatable: test code can be a bit messy and duplicate-heavy, and factoring it out tends to end badly.
      Leaving this for now; we will revisit if adding more tests later becomes a mess.
      
      * Add documentation for newly-added methods.
      
      * Fix whitespace issues identified by pylint.
      
      * Fix a few more whitespace issues.
      
      * Fix doc comments
      
      * Implement deep copying for feature groups.
      
      * Replace awkward std::move usage by emplace_back, and reduce vector size to num_features rather than num_total_features.
      
      * Copy feature groups in addFeaturesFrom, rather than moving them.
      
      * Fix bugs in FeatureGroup copy constructor and ensure source dataset remains usable
      
      * Add reserve to PushVector and PushOffset
      
      * Move definition of Clone into class body
      
      * Fix PR review issues
      
      * Fix for loop increment style.
      
      * Fix test failure
      
      * Some more docstring fixes.
      
      * Remove blank line
      219c943d
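The move-and-re-base mechanics described in this PR can be sketched in plain Python. `SimpleDataset` and its fields are illustrative stand-ins (the real logic lives in the C++ `Dataset`/`FeatureGroup` classes), but the boundary offsetting and source invalidation follow the commit messages above:

```python
class SimpleDataset:
    """Illustrative stand-in for the C++ Dataset; not LightGBM code."""

    def __init__(self, feature_groups, group_bin_boundaries):
        # group_bin_boundaries is cumulative and starts at 0,
        # e.g. [0, 4, 8] for two groups of 4 bins each.
        self.feature_groups = list(feature_groups)
        self.group_bin_boundaries = list(group_bin_boundaries)
        self.valid = True

    def add_features_from(self, other):
        """Move feature groups from `other` into self; `other` becomes invalid."""
        if not (self.valid and other.valid):
            raise ValueError("dataset handle is no longer valid")
        offset = self.group_bin_boundaries[-1]
        self.feature_groups.extend(other.feature_groups)
        # Re-base the appended bin boundaries onto this dataset's bin space
        # (the "small bug in appending of group bin boundaries" fixed above).
        self.group_bin_boundaries.extend(
            b + offset for b in other.group_bin_boundaries[1:]
        )
        # The group pointers were moved, so the source must not be used again.
        other.feature_groups = []
        other.valid = False
```

Note that later commits in this PR switch from moving to deep-copying the feature groups so the source dataset remains usable; the invalidation shown here matches the earlier revision.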
  7. 06 Feb, 2019 1 commit
  8. 02 Feb, 2019 1 commit
  9. 27 Feb, 2018 1 commit
      Experimental support for HDFS (#1243) · 7e186a57
      ebernhardson authored
      * Read and write datasets from HDFS.
      * Only enabled when cmake is run with -DUSE_HDFS:BOOL=TRUE
      * Introduces VirtualFile(Reader|Writer) to abstract VFS differences
      7e186a57
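A rough Python model of the VirtualFile(Reader|Writer) idea, assuming only that a backend is chosen from the path scheme so dataset I/O code never cares where the bytes live. The class and function names here are hypothetical, and the HDFS branch is a placeholder since it is only compiled in with -DUSE_HDFS:BOOL=TRUE:

```python
class LocalFileReader:
    """Local-disk backend; an HDFS backend would expose the same methods."""

    def __init__(self, path):
        self._f = open(path, "rb")

    def read(self, n=-1):
        return self._f.read(n)

    def close(self):
        self._f.close()


class LocalFileWriter:
    def __init__(self, path):
        self._f = open(path, "wb")

    def write(self, data):
        return self._f.write(data)

    def close(self):
        self._f.close()


def make_reader(path):
    # Dispatch on the path scheme; the hdfs:// branch is a placeholder.
    if path.startswith("hdfs://"):
        raise NotImplementedError("built without -DUSE_HDFS:BOOL=TRUE")
    return LocalFileReader(path)


def make_writer(path):
    if path.startswith("hdfs://"):
        raise NotImplementedError("built without -DUSE_HDFS:BOOL=TRUE")
    return LocalFileWriter(path)
```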
  10. 02 Sep, 2017 1 commit
  11. 20 Aug, 2017 1 commit
  12. 30 Jul, 2017 1 commit
      Better missing value handling (#747) · 00cb04a2
      Guolin Ke authored
      * finish the data loading part
      
      * allow prediction.
      
      * fix bug for decision type.
      
      * finish split finding part
      
      * fix bugs.
      
      * bug fixed. add a test .
      
      * fix pep8 .
      
      * update documents.
      
      * fix test bugs.
      
      * fix a format
      
      * fix import error in python test.
      
      * disable missing value handling for categorical features.
      
      * fix a bug.
      
      * add more tests.
      
      * fix pep8
      
      * fix bugs.
      
      * remove the missing value handling code for categorical features.
      00cb04a2
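One way to picture the split-finding part of this change: rows with missing values are routed to whichever child gives the better split score, the learned "default direction". The toy variance-reduction scorer below is illustrative only; `best_split_with_missing` is not a LightGBM API:

```python
def best_split_with_missing(xs, ys, threshold):
    """Toy variance-reduction split that learns a default direction:
    missing values (None) go to whichever side scores better."""

    def sse(vals):
        # Sum of squared errors around the mean; 0 for an empty side.
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    def score(default_left):
        left, right = [], []
        for x, y in zip(xs, ys):
            if x is None:  # missing: follow the candidate default direction
                (left if default_left else right).append(y)
            elif x <= threshold:
                left.append(y)
            else:
                right.append(y)
        return sse(left) + sse(right)

    s_left, s_right = score(True), score(False)
    return ("left", s_left) if s_left <= s_right else ("right", s_right)
```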
  13. 16 May, 2017 1 commit
  14. 15 May, 2017 1 commit
  15. 26 Apr, 2017 2 commits
  16. 17 Apr, 2017 2 commits
  17. 16 Apr, 2017 1 commit
      faster histogram sum up (#418) · 98c7c2a3
      Guolin Ke authored
      * some refactor.
      
      * two-stage sum-up to reduce summation error.
      
      * add more two-stage sumup.
      
      * some refactor.
      
      * add alignment.
      
      * change name to aligned_allocator.
      
      * remove some useless sumup.
      
      * fix a warning.
      
      * add -march=native .
      
      * remove the padding of gradients.
      
      * no alignment.
      
      * fix test.
      
      * change KNumSumupGroup to 32768.
      
      * change gcc flags.
      98c7c2a3
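The "two-stage sum up" idea can be sketched with NumPy: accumulate fixed-size groups first, then combine the partial sums, so each addition involves operands of similar magnitude. The default group size mirrors the KNumSumupGroup constant mentioned above; the function name itself is hypothetical:

```python
import numpy as np

def two_stage_sum(values, group_size=32768):
    """Sum float32 values per fixed-size group, then combine the partials.

    A single float32 accumulator loses low-order bits once the running
    total dwarfs each addend; per-group partial sums keep the operand
    magnitudes close, which is what cuts the sum-up error.
    """
    v = np.asarray(values, dtype=np.float32)
    partials = [v[i:i + group_size].sum(dtype=np.float32)
                for i in range(0, len(v), group_size)]
    return float(np.sum(np.asarray(partials, dtype=np.float64)))
```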
  18. 10 Apr, 2017 1 commit
  19. 09 Apr, 2017 1 commit
      Initial GPU acceleration support for LightGBM (#368) · 0bb4a825
      Huan Zhang authored
      * add dummy gpu solver code
      
      * initial GPU code
      
      * fix crash bug
      
      * first working version
      
      * use asynchronous copy
      
      * use a better kernel for root
      
      * parallel read histogram
      
      * sparse features now work, but without acceleration; computed on the CPU
      
      * compute sparse feature on CPU simultaneously
      
      * fix a big bug; add GPU selection; add kernel selection
      
      * better debugging
      
      * clean up
      
      * add feature scatter
      
      * Add sparse_threshold control
      
      * fix a bug in feature scatter
      
      * clean up debug
      
      * temporarily add OpenCL kernels for k=64,256
      
      * fix up CMakeList and definition USE_GPU
      
      * add OpenCL kernels as string literals
      
      * Add boost.compute as a submodule
      
      * add boost dependency into CMakeList
      
      * fix opencl pragma
      
      * use pinned memory for histogram
      
      * use pinned buffer for gradients and hessians
      
      * better debugging message
      
      * add double precision support on GPU
      
      * fix boost version in CMakeList
      
      * Add a README
      
      * reconstruct GPU initialization code for ResetTrainingData
      
      * move data to GPU in parallel
      
      * fix a bug during feature copy
      
      * update gpu kernels
      
      * update gpu code
      
      * initial port to LightGBM v2
      
      * speedup GPU data loading process
      
      * Add 4-bit bin support to GPU
      
      * re-add sparse_threshold parameter
      
      * remove kMaxNumWorkgroups and allow an unlimited number of features
      
      * add feature mask support for skipping unused features
      
      * enable kernel cache
      
      * use GPU kernels without feature masks when all features are used
      
      * update README.
      
      * update README.
      
      * update README
      
      * fix typos (#349)
      
      * change compile to gcc on Apple as default
      
      * clean vscode related file
      
      * refine api of constructing from sampling data.
      
      * fix bug in the last commit.
      
      * more efficient algorithm to sample k from n.
      
      * fix bug in filter bin
      
      * change to boost from average output.
      
      * fix tests.
      
      * only stop training when all classes are finished in multi-class.
      
      * limit the max tree output. change hessian in multi-class objective.
      
      * robust tree model loading.
      
      * fix test.
      
      * convert the probabilities to raw score in boost_from_average of classification.
      
      * fix the average label for binary classification.
      
      * Add boost_from_average to docs (#354)
      
      * don't use "ConvertToRawScore" for self-defined objective function.
      
      * boost_from_average doesn't seem to work well in binary classification; remove it.
      
      * For a better jump link (#355)
      
      * Update Python-API.md
      
      * for a better jump in page
      
      A space is needed between `#` and the header content, according to GitHub's markdown format [guideline](https://guides.github.com/features/mastering-markdown/)
      
      After adding the spaces, we can jump to the exact position on the page by clicking the link.
      
      * fixed something mentioned by @wxchan
      
      * Update Python-API.md
      
      * add FitByExistingTree.
      
      * adapt GPU tree learner for FitByExistingTree
      
      * avoid NaN output.
      
      * update boost.compute
      
      * fix typos (#361)
      
      * fix broken links (#359)
      
      * update README
      
      * disable GPU acceleration by default
      
      * fix image url
      
      * cleanup debug macro
      
      * remove old README
      
      * do not save sparse_threshold_ in FeatureGroup
      
      * add details for new GPU settings
      
      * ignore submodule when doing pep8 check
      
      * allocate workspace for at least one thread during building Feature4
      
      * move sparse_threshold to class Dataset
      
      * remove duplicated code in GPUTreeLearner::Split
      
      * Remove duplicated code in FindBestThresholds and BeforeFindBestSplit
      
      * do not rebuild ordered gradients and hessians for sparse features
      
      * support feature groups in GPUTreeLearner
      
      * Initial parallel learners with GPU support
      
      * add option device, cleanup code
      
      * clean up FindBestThresholds; add some omp parallel
      
      * constant hessian optimization for GPU
      
      * Fix GPUTreeLearner crash when there is zero feature
      
      * use np.testing.assert_almost_equal() to compare lists of floats in tests
      
      * travis for GPU
      0bb4a825
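The heart of what these GPU kernels accelerate is per-feature histogram construction: summing the gradients and hessians of the rows that fall into each bin. A CPU reference in NumPy, illustrative only and not the actual kernel interface:

```python
import numpy as np

def build_histogram(bin_indices, gradients, hessians, num_bins):
    """CPU reference for the per-feature histogram the GPU kernels build:
    per bin, the sums of gradients and hessians of the rows in that bin."""
    grad_hist = np.zeros(num_bins)
    hess_hist = np.zeros(num_bins)
    # np.add.at accumulates correctly under repeated bin indices
    # (unlike fancy indexing with +=).
    np.add.at(grad_hist, bin_indices, gradients)
    np.add.at(hess_hist, bin_indices, hessians)
    return grad_hist, hess_hist
```

Enabling the learner itself goes through the `device` option this PR adds (e.g. `{"device": "gpu"}` in the training parameters), per the README changes above.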
  20. 05 Apr, 2017 1 commit
  21. 06 Mar, 2017 1 commit
  22. 01 Mar, 2017 3 commits
  23. 25 Jan, 2017 1 commit
  24. 24 Jan, 2017 1 commit
  25. 23 Jan, 2017 1 commit
  26. 12 Jan, 2017 1 commit
  27. 10 Jan, 2017 1 commit
  28. 05 Dec, 2016 2 commits
  29. 02 Dec, 2016 1 commit
      Squash into one commit: · eba6d200
      wxchan authored
      1. merge python-package
      2. add dump model to json
      3. fix bugs
      4. clean code with pylint
      5. update python examples
      eba6d200
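Point 2 above, dumping a model to JSON, amounts to serializing each tree's split/leaf structure into nested dictionaries. A toy sketch with hypothetical key names (the real dump is produced by the C++ side and exposed through the Python package):

```python
import json

def tree_to_dict(node):
    """Turn a toy tree (tuples for splits, numbers for leaves) into the
    nested-dict shape a JSON model dump uses. Key names are hypothetical."""
    if isinstance(node, (int, float)):
        return {"leaf_value": float(node)}
    feature, threshold, left, right = node
    return {
        "split_feature": feature,
        "threshold": threshold,
        "left_child": tree_to_dict(left),
        "right_child": tree_to_dict(right),
    }

# feature 0 <= 0.5 ? leaf -1.0 : (feature 1 <= 2.5 ? leaf 0.0 : leaf 1.0)
tree = (0, 0.5, -1.0, (1, 2.5, 0.0, 1.0))
model_json = json.dumps({"tree_structure": tree_to_dict(tree)})
```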
  30. 24 Nov, 2016 1 commit
  31. 19 Nov, 2016 1 commit
  32. 18 Nov, 2016 1 commit
      Refactor for RAII (#86) · 5442ed78
      Guolin Ke authored
      * RAII for utils, application and c_api (partial)
      
      * raii for class in include folder
      
      * raii for application and boosting
      
      * raii for dataset and dataset loader
      
      * raii for dense bin and parser
      
      * RAII refactor for almost all classes
      
      * RAII for c_api
      
      * clean code
      
      * refine repeated code
      
      * Decouple the "sigmoid" between objective and boosting.
      
      * change std::vector<bool> back to std::vector<char> due to a concurrency problem
      
      * slightly reduce some memory cost
      5442ed78
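RAII ties a resource's lifetime to a scope-bound owner, so cleanup happens automatically instead of via manual delete calls. Python's nearest analogue is a context manager; the toy handle below only illustrates the ownership idea and is not LightGBM code:

```python
class DatasetHandle:
    """Toy scope-bound owner: acquired on entry, always released on exit,
    the way the RAII refactor ties C++ lifetimes to their owning objects."""

    open_handles = 0  # stands in for un-freed native resources

    def __init__(self, name):
        self.name = name
        DatasetHandle.open_handles += 1  # acquire

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        DatasetHandle.open_handles -= 1  # release, even on exceptions
        return False
```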
  33. 07 Nov, 2016 1 commit
  34. 03 Nov, 2016 1 commit
  35. 01 Nov, 2016 1 commit