1. 24 Aug, 2019 1 commit
normalize the lambdas in the LambdaMART objective (#2331) · 0dfda826
      Guolin Ke authored
      * norm the lambda scores
      
      * change default to false
      
      * update doc
      
      * typo
      
      * Update Parameters.rst
      
      * Update config.h
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
      
      * Update rank_objective.hpp
      
      * Update Parameters.rst
      
      * Update config.h
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
      
      * Update test_sklearn.py
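
      A minimal sketch of the feature from the Python API, assuming the switch added here is the lambdarank_norm parameter (per the Parameters.rst updates above); the data and query sizes are hypothetical.

        import numpy as np
        import lightgbm as lgb

        # Hypothetical ranking data: two queries of 50 documents each.
        X = np.random.rand(100, 5)
        y = np.random.randint(0, 4, size=100)  # graded relevance labels
        train = lgb.Dataset(X, label=y, group=[50, 50])

        params = {
            "objective": "lambdarank",
            "metric": "ndcg",
            "lambdarank_norm": True,  # normalize lambdas across queries (this PR)
        }
        booster = lgb.train(params, train, num_boost_round=10)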
  2. 20 Aug, 2019 1 commit
  3. 16 Aug, 2019 1 commit
Bug fix: small values of max_bin cause the program to crash (#2299) · c421f898
      Belinda Trotta authored
      * Fix bug where small values of max_bin cause crash.
      
      * Revert "Fix bug where small values of max_bin cause crash."
      
      This reverts commit fe5c8e2547057c1fa5750bcddd359dd7708fab4b.
      
      * Fix bug where small values of max_bin cause crash.
      
      * Reset random seed in test, remove extra blank line.
      
      * Minor bug fix. Remove extra blank line.
      
      * Change old test to account for new binning behavior.
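
      A minimal sketch of the case this fixes, assuming max_bin is passed as a Dataset parameter as in the current Python API; the data is hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(200, 3)
        y = np.random.randint(0, 2, size=200)

        # max_bin is a Dataset parameter; bin counts this small used to crash binning.
        train = lgb.Dataset(X, label=y, params={"max_bin": 2})
        booster = lgb.train({"objective": "binary"}, train, num_boost_round=5)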
  4. 14 Aug, 2019 1 commit
  5. 28 Jul, 2019 1 commit
  6. 25 Jul, 2019 2 commits
  7. 24 Jul, 2019 1 commit
  8. 23 Jul, 2019 1 commit
  9. 18 Jul, 2019 1 commit
  10. 08 Jul, 2019 1 commit
      Max bin by feature (#2190) · 291752de
      Belinda Trotta authored
      * Add parameter max_bin_by_feature.
      
      * Fix minor bug.
      
      * Fix minor bug.
      
      * Fix calculation of header size for writing binary file.
      
      * Fix style issues.
      
      * Fix python style issue.
      
      * Fix test and python style issue.
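
      A minimal sketch of the new parameter from the Python API; the per-feature caps and the data are hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(500, 3)
        y = np.random.rand(500)

        # One cap per feature, overriding the global max_bin for that column.
        train = lgb.Dataset(X, label=y, params={"max_bin_by_feature": [255, 15, 63]})
        booster = lgb.train({"objective": "regression"}, train, num_boost_round=5)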
  11. 20 Jun, 2019 1 commit
  12. 18 Jun, 2019 1 commit
      balanced bagging (#2214) · cdba7147
      Guolin Ke authored
      * add balanced bagging
      
      * refine code
      
      * fix format
      
      * clarify usage only for binary application
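
      A minimal sketch, assuming the parameters added here are pos_bagging_fraction and neg_bagging_fraction as documented for the binary application; the imbalanced data is hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(1000, 4)
        y = (np.random.rand(1000) < 0.1).astype(int)  # ~10% positives

        params = {
            "objective": "binary",        # balanced bagging is binary-only
            "bagging_freq": 1,            # bagging must be enabled
            "pos_bagging_fraction": 1.0,  # keep every positive in each bag
            "neg_bagging_fraction": 0.1,  # subsample the majority negatives
        }
        booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)

      Sampling positives and negatives separately keeps each bag roughly balanced without reweighting the loss.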
  13. 05 Jun, 2019 1 commit
  14. 02 Jun, 2019 1 commit
  15. 26 May, 2019 1 commit
      Top k multi error (#2178) · b3db9e92
      Belinda Trotta authored
      * Implement top-k multiclass error metric. Add new parameter top_k_threshold.
      
      * Add test for multiclass metrics
      
      * Make test less sensitive to avoid floating-point issues.
      
      * Change tabs to spaces.
      
      * Fix problem with test in Python 2. Refactor to use np.testing. Decrease number of training rounds so loss is larger and easier to compare.
      
      * Move multiclass tests into test_engine.py
      
      * Change parameter name from top_k_threshold to multi_error_top_k.
      
      * Fix top-k error metric to handle case where scores are equal. Update tests and docs.
      
      * Change name of top-k metric to multi_error@k.
      
      * Change tabs to spaces.
      
      * Fix formatting.
      
      * Fix minor issues in docs.
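
      A minimal sketch, assuming the final names from the commits above (metric multi_error, parameter multi_error_top_k); the data is hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(300, 5)
        y = np.random.randint(0, 4, size=300)
        train = lgb.Dataset(X, label=y)

        params = {
            "objective": "multiclass",
            "num_class": 4,
            "metric": "multi_error",
            "multi_error_top_k": 2,  # correct if the true class is in the top-2 scores
        }
        booster = lgb.train(params, train, num_boost_round=10, valid_sets=[train])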
  16. 22 May, 2019 1 commit
  17. 16 May, 2019 1 commit
  18. 29 Apr, 2019 1 commit
  19. 13 Apr, 2019 2 commits
  20. 11 Apr, 2019 1 commit
  21. 04 Apr, 2019 1 commit
      Add Cost Effective Gradient Boosting (#2014) · 76102284
      remcob-gr authored
      * Add configuration parameters for CEGB.
      
      * Add skeleton CEGB tree learner
      
      Like the original CEGB version, this inherits from SerialTreeLearner.
      Currently, it changes nothing from the original.
      
      * Track features used in CEGB tree learner.
      
      * Pull CEGB tradeoff and coupled feature penalty from config.
      
      * Implement finding best splits for CEGB
      
This is heavily based on the serial version, adding only the use of the coupled penalties.
      
      * Set proper defaults for cegb parameters.
      
      * Ensure sanity checks don't switch off CEGB.
      
      * Implement per-data-point feature penalties in CEGB.
      
      * Implement split penalty and remove unused parameters.
      
      * Merge changes from CEGB tree learner into serial tree learner
      
      * Represent features_used_in_data by a bitset, to reduce the memory overhead of CEGB, and add sanity checks for the lengths of the penalty vectors.
      
      * Fix bug where CEGB would incorrectly penalise a previously used feature
      
      The tree learner did not update the gains of previously computed leaf splits when splitting a leaf elsewhere in the tree.
      This caused it to prefer new features due to incorrectly penalising splitting on previously used features.
      
      * Document CEGB parameters and add them to the appropriate section.
      
      * Remove leftover reference to cegb tree learner.
      
      * Remove outdated diff.
      
      * Fix warnings
      
      * Fix minor issues identified by @StrikerRUS.
      
      * Add docs section on CEGB, including citation.
      
      * Fix link.
      
      * Fix CI failure.
      
      * Add some unit tests
      
      * Fix pylint issues.
      
      * Fix remaining pylint issue
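
      A minimal sketch of the CEGB parameters documented in this PR (cegb_tradeoff, cegb_penalty_split, cegb_penalty_feature_coupled, cegb_penalty_feature_lazy); the penalty values and data are hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(500, 3)
        y = np.random.rand(500)

        params = {
            "objective": "regression",
            "cegb_tradeoff": 10.0,       # weight of prediction cost vs. split gain
            "cegb_penalty_split": 0.1,   # fixed cost charged for every split
            "cegb_penalty_feature_coupled": [5.0, 1.0, 1.0],  # one-off cost per feature
            "cegb_penalty_feature_lazy": [0.1, 0.0, 0.0],     # per-data-point cost per feature
        }
        booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)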
  22. 01 Apr, 2019 1 commit
  23. 26 Mar, 2019 1 commit
  24. 25 Mar, 2019 1 commit
  25. 26 Feb, 2019 1 commit
      Add ability to move features from one data set to another in memory (#2006) · 219c943d
      remcob-gr authored
      * Initial attempt to implement appending features in-memory to another data set
      
      The intent is for this to enable munging files together easily, without needing to round-trip via numpy or write multiple copies to disk.
      In turn, that enables working more efficiently with data sets that were written separately.
      
      * Implement Dataset.dump_text, and fix small bug in appending of group bin boundaries.
      
      Dumping to text enables us to compare results, without having to worry about issues like features being reordered.
      
      * Add basic tests for validation logic for add_features_from.
      
      * Remove various internal mapping items from dataset text dumps
      
      These are too sensitive to the exact feature order chosen, which is not visible to the user.
      Including them in tests appears unnecessary, as the data dumping code should provide enough coverage.
      
      * Add test that add_features_from results in identical data sets according to dump_text.
      
      * Add test that booster behaviour after using add_features_from matches that of training on the full data
      
      This checks:
      - That training after add_features_from works at all
      - That add_features_from does not cause training to misbehave
      
      * Expose feature_penalty and monotone_types/constraints via get_field
      
      These getters allow us to check that add_features_from does the right thing with these vectors.
      
      * Add tests that add_features correctly handles feature_penalty and monotone_constraints.
      
      * Ensure add_features_from properly frees the added dataset and add unit test for this
      
      Since add_features_from moves the feature group pointers from the added dataset to the dataset being added to, the added dataset is invalid after the call.
We must ensure we do not try to access this handle.
      
      * Remove some obsolete TODOs
      
      * Tidy up DumpTextFile by using a single iterator for each feature
      
These iterators were also passed around as raw pointers without being freed, which is now fixed.
      
      * Factor out offsetting logic in AddFeaturesFrom
      
      * Remove obsolete TODO
      
      * Remove another TODO
      
This one is debatable: test code can be a bit messy and duplicate-heavy, and factoring it out tends to end badly.
      Leaving this for now, will revisit if adding more tests later on becomes a mess.
      
      * Add documentation for newly-added methods.
      
      * Fix whitespace issues identified by pylint.
      
      * Fix a few more whitespace issues.
      
      * Fix doc comments
      
      * Implement deep copying for feature groups.
      
      * Replace awkward std::move usage by emplace_back, and reduce vector size to num_features rather than num_total_features.
      
      * Copy feature groups in addFeaturesFrom, rather than moving them.
      
      * Fix bugs in FeatureGroup copy constructor and ensure source dataset remains usable
      
      * Add reserve to PushVector and PushOffset
      
      * Move definition of Clone into class body
      
      * Fix PR review issues
      
      * Fix for loop increment style.
      
      * Fix test failure
      
      * Some more docstring fixes.
      
      * Remove blank line
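
      A minimal sketch of add_features_from via the Python API, assuming both Datasets must be constructed before the call (as the method requires); the data is hypothetical.

        import numpy as np
        import lightgbm as lgb

        X1 = np.random.rand(100, 3)
        X2 = np.random.rand(100, 2)
        y = np.random.rand(100)

        d1 = lgb.Dataset(X1, label=y, free_raw_data=False).construct()
        d2 = lgb.Dataset(X2, free_raw_data=False).construct()

        # Append d2's features to d1 in memory, with no numpy or disk round-trip.
        # After the copy-based rework above, d2 remains usable afterwards.
        d1.add_features_from(d2)
        booster = lgb.train({"objective": "regression"}, d1, num_boost_round=5)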
  26. 24 Feb, 2019 1 commit
  27. 06 Feb, 2019 1 commit
  28. 02 Feb, 2019 1 commit
  29. 31 Jan, 2019 1 commit
  30. 23 Jan, 2019 1 commit
  31. 18 Jan, 2019 1 commit
  32. 16 Jan, 2019 2 commits
      Reserve vectors, to save reallocation costs. (#1949) · 24c9503f
      Shahzad Lone authored
File: [LightGBM/src/io/dataset.cpp]
      Function: [138:FastFeatureBundling(...)]
      
Reserving vectors where we already know the size, to save on reallocation costs.

Also removed an unnecessary variable.
When loading a binary file, take feature penalty and monotone constraints from config if given there. (#1881) · 61527856
remcob-gr authored
      * When loading a binary file, take feature penalty from config if given there.
      
      * When loading a binary file, take feature penalty from config if given there.
      
      * Fix crash when num_features != num_total_features and feature_contri is given.
      
      * Apply the same logic to monotone_types_.
      
      * Fix indentation
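
      A minimal sketch of the override behavior, assuming the config keys are feature_contri and monotone_constraints as in Parameters.rst; the file name and values are hypothetical.

        import numpy as np
        import lightgbm as lgb

        X = np.random.rand(200, 2)
        y = np.random.rand(200)
        lgb.Dataset(X, label=y).save_binary("train.bin")

        # Per-feature settings given in the config now override whatever the
        # binary file stores (or omits) for these fields.
        params = {
            "objective": "regression",
            "monotone_constraints": [1, -1],
            "feature_contri": [1.0, 0.5],
        }
        train = lgb.Dataset("train.bin", params=params)
        booster = lgb.train(params, train, num_boost_round=5)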
  33. 20 Dec, 2018 1 commit
  34. 01 Nov, 2018 1 commit
  35. 10 Oct, 2018 1 commit
  36. 09 Oct, 2018 1 commit
      average predictions for constant features (#1735) · c920e634
      Guolin Ke authored
      * average predictions for constant features
      
      * fix possible numerical issues in std::log.
      
      * fix pylint
      
      * fix bugs in c_api
      
      * fix styles
      
      * clean code for multi class
      
      * rewrite test
      
      * fix pylint
      
      * skip test_constant_features
      
      * refine test
      
      * fix tests
      
      * fix tests
      
      * update FAQ
      
      * fix test
      
      * Update FAQ.rst
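
      A minimal sketch of the fixed behavior, modeled loosely on the test_constant_features test referenced above; exact outputs depend on defaults such as boost_from_average.

        import numpy as np
        import lightgbm as lgb

        # Every feature is constant, so no split is possible; the model should
        # fall back to predicting the label average rather than a degenerate value.
        X = np.ones((100, 2))
        y = np.array([0.0] * 50 + [1.0] * 50)

        booster = lgb.train({"objective": "regression"},
                            lgb.Dataset(X, label=y), num_boost_round=3)
        print(booster.predict(X[:3]))  # approximately 0.5 for each row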
  37. 11 Sep, 2018 1 commit