- 28 Sep, 2019 1 commit
-
-
Belinda Trotta authored
* Fix bug where small values of max_bin cause crash.
* Revert "Fix bug where small values of max_bin cause crash." This reverts commit fe5c8e2547057c1fa5750bcddd359dd7708fab4b.
* Add functionality to force bin thresholds.
* Fix style issues.
* Use stable sort.
* Minor style and doc fixes.
* Change binning behavior to be same as PR #2342.
* Use different bin finding function for predefined bounds.
* Minor refactoring, overload FindBinWithZeroAsOneBin.
* Fix bug and add new test.
* Add warning when using categorical features with forced bins.
* Pass forced_upper_bounds by reference.
* Pass container types by const reference.
* Get categorical features using FeatureBinMapper.
* Fix bug for small max_bin.
* Move GetForcedBins to DatasetLoader.
* Find forced bins in dataset_loader.
* Minor fixes.
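The forced-bin-thresholds functionality above is driven by a JSON file referenced from the dataset parameters (the `forcedbins_filename` parameter documented in LightGBM). A minimal sketch of preparing such a file, with illustrative feature indices and thresholds (not taken from the commit):

```python
import json
import os
import tempfile

# Hypothetical forced-bin spec: each entry names a feature index and the
# bin upper bounds to force for it. Values here are purely illustrative.
forced_bins = [
    {"feature": 0, "bin_upper_bound": [0.25, 0.5, 0.75]},
    {"feature": 2, "bin_upper_bound": [10.0, 50.0, 100.0]},
]

path = os.path.join(tempfile.gettempdir(), "forced_bins.json")
with open(path, "w") as f:
    json.dump(forced_bins, f)

# The file would then be referenced in the dataset parameters, e.g.:
# params = {"forcedbins_filename": path, "max_bin": 255}
with open(path) as f:
    spec = json.load(f)
print(spec[1]["bin_upper_bound"])  # [10.0, 50.0, 100.0]
```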
-
- 22 Sep, 2019 1 commit
-
-
Guolin Ke authored
* fix many cpp lint errors * indent * fix bug * fix more * fix gpu * more fixes
-
- 08 Jul, 2019 1 commit
-
-
Belinda Trotta authored
* Add parameter max_bin_by_feature. * Fix minor bug. * Fix minor bug. * Fix calculation of header size for writing binary file. * Fix style issues. * Fix python style issue. * Fix test and python style issue.
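The `max_bin_by_feature` parameter added above caps the number of histogram bins per feature. A minimal pure-Python sketch of the idea (not LightGBM's actual bin-finding code): derive roughly equal-count bin upper bounds, with a per-feature cap.

```python
def equal_count_thresholds(values, max_bin):
    """Return at most max_bin - 1 upper bounds splitting values into bins."""
    vs = sorted(values)
    n = len(vs)
    thresholds = []
    for b in range(1, max_bin):
        idx = (n * b) // max_bin  # roughly equal-count cut positions
        if idx < n:
            t = vs[idx]
            if not thresholds or t > thresholds[-1]:  # skip duplicate cuts
                thresholds.append(t)
    return thresholds

feature_values = [list(range(100)), [0.5] * 100]  # toy columns
max_bin_by_feature = [4, 16]                      # per-feature caps
bins = [equal_count_thresholds(v, m)
        for v, m in zip(feature_values, max_bin_by_feature)]
print(bins[0])  # [25, 50, 75] -- three cut points for a 4-bin feature
print(bins[1])  # [0.5] -- a constant feature collapses to a single cut
```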
-
- 13 Apr, 2019 1 commit
-
-
Nikita Titov authored
-
- 11 Apr, 2019 1 commit
-
-
Nikita Titov authored
* added all necessary includes - fixed build/include_what_you_use error * fixed the order of includes (build/include_order)
-
- 26 Mar, 2019 1 commit
-
-
Nikita Titov authored
-
- 25 Mar, 2019 1 commit
-
-
Guolin Ke authored
-
- 26 Feb, 2019 1 commit
-
-
remcob-gr authored
* Initial attempt to implement appending features in-memory to another data set. The intent is for this to enable munging files together easily, without needing to round-trip via numpy or write multiple copies to disk. In turn, that enables working more efficiently with data sets that were written separately.
* Implement Dataset.dump_text, and fix small bug in appending of group bin boundaries. Dumping to text enables us to compare results, without having to worry about issues like features being reordered.
* Add basic tests for validation logic for add_features_from.
* Remove various internal mapping items from dataset text dumps. These are too sensitive to the exact feature order chosen, which is not visible to the user. Including them in tests appears unnecessary, as the data dumping code should provide enough coverage.
* Add test that add_features_from results in identical data sets according to dump_text.
* Add test that booster behaviour after using add_features_from matches that of training on the full data. This checks that training after add_features_from works at all, and that add_features_from does not cause training to misbehave.
* Expose feature_penalty and monotone_types/constraints via get_field. These getters allow us to check that add_features_from does the right thing with these vectors.
* Add tests that add_features correctly handles feature_penalty and monotone_constraints.
* Ensure add_features_from properly frees the added dataset, and add a unit test for this. Since add_features_from moves the feature group pointers from the added dataset to the dataset being added to, the added dataset is invalid after the call. We must ensure we do not try to access this handle.
* Remove some obsolete TODOs.
* Tidy up DumpTextFile by using a single iterator for each feature. These iterators were also passed around as raw pointers without being freed, which is now fixed.
* Factor out offsetting logic in AddFeaturesFrom.
* Remove obsolete TODO.
* Remove another TODO. This one is debatable; test code can be a bit messy and duplicate-heavy, and factoring it out tends to end badly. Leaving this for now; will revisit if adding more tests later on becomes a mess.
* Add documentation for newly-added methods.
* Fix whitespace issues identified by pylint.
* Fix a few more whitespace issues.
* Fix doc comments.
* Implement deep copying for feature groups.
* Replace awkward std::move usage with emplace_back, and reduce vector size to num_features rather than num_total_features.
* Copy feature groups in addFeaturesFrom, rather than moving them.
* Fix bugs in FeatureGroup copy constructor and ensure source dataset remains usable.
* Add reserve to PushVector and PushOffset.
* Move definition of Clone into class body.
* Fix PR review issues.
* Fix for-loop increment style.
* Fix test failure.
* Some more docstring fixes.
* Remove blank line.
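The "offsetting logic in AddFeaturesFrom" mentioned above boils down to shifting the appended dataset's cumulative group bin boundaries by the target dataset's total bin count before concatenating. A hedged sketch (names and list layout are illustrative, not the C++ internals):

```python
def append_group_boundaries(a_bounds, b_bounds):
    """a_bounds/b_bounds: cumulative bin-start offsets per feature group,
    with a trailing total, e.g. [0, 10, 25] = two groups, 25 bins total."""
    offset = a_bounds[-1]  # total bins already present in dataset A
    # Shift B's boundaries by A's total; skip B's leading 0 sentinel.
    return a_bounds + [offset + b for b in b_bounds[1:]]

merged = append_group_boundaries([0, 10, 25], [0, 7, 12])
print(merged)  # [0, 10, 25, 32, 37]
```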
-
- 02 Feb, 2019 1 commit
-
-
Nikita Titov authored
-
- 31 Jan, 2019 1 commit
-
-
Guolin Ke authored
-
- 23 Jan, 2019 1 commit
-
-
Guolin Ke authored
* add warnings for override parameters of Dataset * fix pep8 * add feature_penalty * refactor * add R's code * Update basic.py * Update basic.py * fix parameter bug * Update lgb.Dataset.R * fix a bug
-
- 18 Jan, 2019 1 commit
-
-
Nikita Titov authored
* removed comparison warning * fixed spacing
-
- 16 Jan, 2019 1 commit
-
-
Shahzad Lone authored
File: [LightGBM/src/io/dataset.cpp] Function: [138:FastFeatureBundling(...)] Reserve vectors where we already know the size, to save on reallocation costs. Also removed an unnecessary variable.
-
- 20 Dec, 2018 1 commit
-
-
Lingyi Hu authored
-
- 10 Oct, 2018 1 commit
-
-
Guolin Ke authored
* fix ndcg consistency. * more stable sorts * Update gbdt_model_text.cpp * Update dataset.cpp * Update gbdt_model_text.cpp
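The "more stable sorts" item above matters for reproducible NDCG: with tied sort keys, an unstable sort may order tied documents differently across runs or platforms, while a stable sort keeps input order for ties, making downstream metrics deterministic. A minimal illustration (not LightGBM's ranking code; Python's `sorted` is stable):

```python
docs = [("d1", 0.9), ("d2", 0.7), ("d3", 0.7), ("d4", 0.5)]

# d2 and d3 tie on score; a stable sort keeps d2 ahead because it came
# first in the input, so the ranking is the same on every run.
ranked = sorted(docs, key=lambda d: d[1], reverse=True)
print([name for name, _ in ranked])  # ['d1', 'd2', 'd3', 'd4']
```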
-
- 09 Oct, 2018 1 commit
-
-
Guolin Ke authored
* average predictions for constant features
* fix possible numerical issues in std::log
* fix pylint
* fix bugs in c_api
* fix styles
* clean code for multi class
* rewrite test
* fix pylint
* skip test_constant_features
* refine test
* fix tests
* fix tests
* update FAQ
* fix test
* Update FAQ.rst
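A sketch of the averaging idea touched by this commit, for binary classification (illustrative Python, not the C++ implementation): start from the mean label converted to a raw score, clamping the probability away from 0 and 1 so `log` stays finite, which is the kind of numerical issue in `std::log` the commit guards against.

```python
import math

def init_raw_score(labels, eps=1e-15):
    """Logit of the average label, clamped to avoid log(0) / div-by-zero."""
    p = sum(labels) / len(labels)
    p = min(max(p, eps), 1.0 - eps)  # clamp away from 0 and 1
    return math.log(p / (1.0 - p))   # convert probability to raw score

print(round(init_raw_score([1, 1, 1, 0]), 4))  # logit(0.75) = ln(3)
print(math.isfinite(init_raw_score([0, 0, 0, 0])))  # True -- clamped
```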
-
- 31 Jul, 2018 1 commit
-
-
Nikita Titov authored
-
- 14 Jun, 2018 1 commit
-
-
Guolin Ke authored
* add per-feature penalties * fix comment
-
- 25 May, 2018 1 commit
-
-
Guolin Ke authored
-
- 20 May, 2018 1 commit
-
-
Guolin Ke authored
* [WIP] refine config
* [wip] ready for the auto code generation
* auto generate config codes
* use with to open file
* fix bug
* fix pylint
* fix bug
* fix pylint
* fix bugs
* tmp for failed test
* fix tests
* added nthreads alias
* added new aliases from new config.h
* fixed duplicated alias
* refactored parameter_generator.py
* added new aliases from config.h and removed remaining old names
* fix bugs & some missed aliases
* added aliases
* add more descriptions
* add comment
-
- 11 May, 2018 2 commits
-
-
Nikita Titov authored
* decode error description * added line-break char in log messages
-
Tsukasa OMOTO authored
* Shut up warnings:
  - warning: 'void* memset(void*, int, size_t)' clearing an object of non-trivial type 'struct LightGBM::HistogramBinEntry'; use assignment or value-initialization instead [-Wclass-memaccess]
  - warning: 'void* memcpy(void*, const void*, size_t)' writing to an object of type 'class std::tuple<int, double, double>' with no trivial copy-assignment; use copy-assignment or copy-initialization instead [-Wclass-memaccess]
* void*
-
- 18 Apr, 2018 1 commit
-
-
Guolin Ke authored
-
- 27 Feb, 2018 1 commit
-
-
ebernhardson authored
* Read and write datasets from HDFS. * Only enabled when cmake is run with -DUSE_HDFS:BOOL=TRUE * Introduces VirtualFile(Reader|Writer) to abstract VFS differences
-
- 25 Dec, 2017 1 commit
-
-
Guolin Ke authored
-
- 17 Dec, 2017 1 commit
-
-
Guolin Ke authored
-
- 04 Oct, 2017 1 commit
-
-
Guolin Ke authored
* Update dataset.cpp * Update dataset.cpp
-
- 22 Sep, 2017 1 commit
-
-
zhangjin authored
In parallel training, when one worker has 0 data it does not execute ConstructHistogram. In that case, ptr_smaller_leaf_hist_data is not zeroed but still holds the old data, which produces a wrong histogram and therefore wrong split info.
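A minimal pure-Python sketch of the bug described above (illustrative; names do not match the C++ internals): if a worker with zero rows skips histogram construction without clearing its buffer, stale values from the previous iteration leak into the cross-worker reduce step. The fix is to always clear first.

```python
def construct_histogram(hist, rows):
    """rows: (bin_index, gradient) pairs for this worker's data."""
    # Fix: always clear the buffer, even when this worker holds no rows,
    # so stale values never reach the cross-worker aggregation.
    for i in range(len(hist)):
        hist[i] = 0.0
    for bin_idx, grad in rows:
        hist[bin_idx] += grad

hist = [1.5, 2.5, 3.5]         # stale data from the previous iteration
construct_histogram(hist, [])  # worker with 0 data
print(hist)  # [0.0, 0.0, 0.0] -- now safe to aggregate across workers
```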
-
- 18 Aug, 2017 3 commits
- 27 Jul, 2017 1 commit
-
-
Guolin Ke authored
-
- 21 May, 2017 1 commit
-
-
Guolin Ke authored
-
- 16 May, 2017 1 commit
-
-
Guolin Ke authored
-
- 15 May, 2017 1 commit
-
-
Guolin Ke authored
-
- 26 Apr, 2017 1 commit
-
-
Guolin Ke authored
-
- 17 Apr, 2017 2 commits
-
-
Guolin Ke authored
-
- 16 Apr, 2017 1 commit
-
-
Guolin Ke authored
* some refactor
* two-stage sum up to reduce sum-up error
* add more two-stage sumup
* some refactor
* add alignment
* change name to aligned_allocator
* remove some useless sumup
* fix a warning
* add -march=native
* remove the padding of gradients
* no alignment
* fix test
* change KNumSumupGroup to 32768
* change gcc flags
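The "two-stage sum up" above can be sketched as follows (illustrative; the group size mirrors the KNumSumupGroup constant from the commit): summing values in fixed-size groups and then summing the partial results accumulates less floating-point rounding error than one long running sum.

```python
def two_stage_sum(values, group_size=32768):
    """Sum in fixed-size groups, then sum the group partials."""
    partials = [sum(values[i:i + group_size])
                for i in range(0, len(values), group_size)]
    return sum(partials)

data = [0.1] * 100000  # exact total would be 10000.0
total = two_stage_sum(data)
print(abs(total - 10000.0) < 1e-6)  # True
```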
-
- 09 Apr, 2017 1 commit
-
-
Huan Zhang authored
* add dummy gpu solver code
* initial GPU code
* fix crash bug
* first working version
* use asynchronous copy
* use a better kernel for root
* parallel read histogram
* sparse features now work, but no acceleration; compute on CPU
* compute sparse feature on CPU simultaneously
* fix big bug; add gpu selection; add kernel selection
* better debugging
* clean up
* add feature scatter
* Add sparse_threshold control
* fix a bug in feature scatter
* clean up debug
* temporarily add OpenCL kernels for k=64,256
* fix up CMakeList and definition USE_GPU
* add OpenCL kernels as string literals
* Add boost.compute as a submodule
* add boost dependency into CMakeList
* fix opencl pragma
* use pinned memory for histogram
* use pinned buffer for gradients and hessians
* better debugging message
* add double precision support on GPU
* fix boost version in CMakeList
* Add a README
* reconstruct GPU initialization code for ResetTrainingData
* move data to GPU in parallel
* fix a bug during feature copy
* update gpu kernels
* update gpu code
* initial port to LightGBM v2
* speed up GPU data loading process
* Add 4-bit bin support to GPU
* re-add sparse_threshold parameter
* remove kMaxNumWorkgroups and allow an unlimited number of features
* add feature mask support for skipping unused features
* enable kernel cache
* use GPU kernels without feature masks when all features are used
* README
* README
* update README
* fix typos (#349)
* change compile to gcc on Apple as default
* clean vscode related file
* refine api of constructing from sampling data
* fix bug in the last commit
* more efficient algorithm to sample k from n
* fix bug in filter bin
* change to boost from average output
* fix tests
* only stop training when all classes are finished in multi-class
* limit the max tree output; change hessian in multi-class objective
* robust tree model loading
* fix test
* convert the probabilities to raw score in boost_from_average of classification
* fix the average label for binary classification
* Add boost_from_average to docs (#354)
* don't use "ConvertToRawScore" for self-defined objective function
* boost_from_average doesn't seem to work well in binary classification; remove it
* For a better jump link (#355)
* Update Python-API.md
* for a better in-page jump: a space is needed between `#` and the header content according to GitHub's markdown format [guideline](https://guides.github.com/features/mastering-markdown/); after adding the spaces, we can jump to the exact position in the page by clicking the link
* fixed something mentioned by @wxchan
* Update Python-API.md
* add FitByExistingTree
* adapt GPU tree learner for FitByExistingTree
* avoid NaN output
* update boost.compute
* fix typos (#361)
* fix broken links (#359)
* update README
* disable GPU acceleration by default
* fix image url
* cleanup debug macro
* remove old README
* do not save sparse_threshold_ in FeatureGroup
* add details for new GPU settings
* ignore submodule when doing pep8 check
* allocate workspace for at least one thread during building Feature4
* move sparse_threshold to class Dataset
* remove duplicated code in GPUTreeLearner::Split
* Remove duplicated code in FindBestThresholds and BeforeFindBestSplit
* do not rebuild ordered gradients and hessians for sparse features
* support feature groups in GPUTreeLearner
* Initial parallel learners with GPU support
* add option device, cleanup code
* clean up FindBestThresholds; add some omp parallel
* constant hessian optimization for GPU
* Fix GPUTreeLearner crash when there is zero feature
* use np.testing.assert_almost_equal() to compare lists of floats in tests
* travis for GPU
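The "4-bit bin support" mentioned above can be sketched as follows (illustrative Python, not the OpenCL code): two bin values in [0, 15] are packed into one byte, halving the memory traffic for histogram input on the GPU.

```python
def pack_4bit(bins):
    """Pack bin values in [0, 15] two-per-byte (low nibble first)."""
    packed = bytearray()
    for i in range(0, len(bins), 2):
        lo = bins[i]
        hi = bins[i + 1] if i + 1 < len(bins) else 0  # pad odd length
        packed.append((hi << 4) | lo)
    return bytes(packed)

def unpack_4bit(packed, n):
    """Recover the first n bin values from a packed byte string."""
    out = []
    for byte in packed:
        out.append(byte & 0x0F)
        out.append(byte >> 4)
    return out[:n]

bins = [3, 15, 0, 7, 9]
print(unpack_4bit(pack_4bit(bins), len(bins)))  # [3, 15, 0, 7, 9]
```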
-