"include/vscode:/vscode.git/clone" did not exist on "3b1619195e61e562435262e3ec5386f8e9b6dc9b"
Commit 4aa32967 authored by Nikita Titov, committed by Tsukasa OMOTO

[docs] documentation improvement (#976)

* fixed typos and hotfixes

* converted gcc-tips.Rmd; added ref to gcc-tips

* renamed files

* renamed Advanced-Topics

* renamed README

* renamed Parameters-Tuning

* renamed FAQ

* fixed refs to FAQ

* fixed undecodable source characters

* renamed Features

* renamed Quick-Start

* fixed undecodable source characters in Features

* renamed Python-Intro

* renamed GPU-Tutorial

* renamed GPU-Windows

* fixed markdown

* fixed undecodable source characters in GPU-Windows

* renamed Parameters

* fixed markdown

* removed recommonmark dependence

* hotfixes

* added anchors to links

* fixed 404

* fixed typos

* added more anchors

* removed sphinxcontrib-napoleon dependence

* removed outdated line in Travis config

* fixed max-width of the ReadTheDocs theme

* added horizontal align to images
parent 12257feb
# Parameters
This page contains all parameters in LightGBM.
***List of other Helpful Links***
* [Python API](./Python-API.rst)
* [Parameters Tuning](./Parameters-tuning.md)
***External Links***
* [Laurae++ Interactive Documentation](https://sites.google.com/view/lauraepp/parameters)
***Update of 08/04/2017***
Default values for the following parameters have changed:
* min_data_in_leaf = 100 => 20
* min_sum_hessian_in_leaf = 10 => 1e-3
* num_leaves = 127 => 31
* num_iterations = 10 => 100
## Parameter Format
The parameter format is `key1=value1 key2=value2 ...`. Parameters can be set both in the config file and on the command line. On the command line, parameters should not have spaces before and after `=`. In config files, one line can contain only one parameter, and you can use `#` for comments. If a parameter appears in both the command line and the config file, LightGBM will use the parameter from the command line.
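For example, a minimal config file might look like the following (the file names here are placeholders):
```
# hypothetical train.conf: one parameter per line, '#' starts a comment
task=train
objective=binary
data=train.txt
valid=test.txt
num_iterations=100
learning_rate=0.1
```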
## Core Parameters
* `config`, default=`""`, type=string, alias=`config_file`
* path of config file
* `task`, default=`train`, type=enum, options=`train`,`prediction`,`convert_model`
* `train` for training
* `prediction` for prediction
* `convert_model` for converting model file into if-else format, see more information in [Convert model parameters](#convert-model-parameters)
* `application`, default=`regression`, type=enum, options=`regression`,`regression_l2`,`regression_l1`,`huber`,`fair`,`poisson`,`binary`,`lambdarank`,`multiclass`, alias=`objective`,`app`
* `regression`, regression application
* `regression_l2`, L2 loss, alias=`mean_squared_error`,`mse`
* `regression_l1`, L1 loss, alias=`mean_absolute_error`,`mae`
* `huber`, [Huber loss](https://en.wikipedia.org/wiki/Huber_loss "Huber loss - Wikipedia")
* `fair`, [Fair loss](https://www.kaggle.com/c/allstate-claims-severity/discussion/24520)
* `poisson`, [Poisson regression](https://en.wikipedia.org/wiki/Poisson_regression "Poisson regression")
* `binary`, binary classification application
* `lambdarank`, [lambdarank](https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf) application
* The label should be `int` type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect).
* `label_gain` can be used to set the gain (weight) of `int` label.
* `multiclass`, multi-class classification application, should set `num_class` as well
* `boosting`, default=`gbdt`, type=enum, options=`gbdt`,`rf`,`dart`,`goss`, alias=`boost`,`boosting_type`
* `gbdt`, traditional Gradient Boosting Decision Tree
* `rf`, Random Forest
* `dart`, [Dropouts meet Multiple Additive Regression Trees](https://arxiv.org/abs/1505.01866)
* `goss`, Gradient-based One-Side Sampling
* `data`, default=`""`, type=string, alias=`train`,`train_data`
* training data, LightGBM will train from this data
* `valid`, default=`""`, type=multi-string, alias=`test`,`valid_data`,`test_data`
* validation/test data, LightGBM will output metrics for these data
* supports multiple validation data, separated by `,`
* `num_iterations`, default=`100`, type=int, alias=`num_iteration`,`num_tree`,`num_trees`,`num_round`,`num_rounds`
* number of boosting iterations
* note: For python/R package, **this parameter is ignored**, use `num_boost_round` (Python) or `nrounds` (R) input arguments of `train` and `cv` methods instead
* note: internally, LightGBM constructs `num_class * num_iterations` trees for `multiclass` problems
* `learning_rate`, default=`0.1`, type=double, alias=`shrinkage_rate`
* shrinkage rate
* in `dart`, it also affects normalization weights of dropped trees
* `num_leaves`, default=`31`, type=int, alias=`num_leaf`
* number of leaves in one tree
* `tree_learner`, default=`serial`, type=enum, options=`serial`,`feature`,`data`
* `serial`, single machine tree learner
* `feature`, feature parallel tree learner
* `data`, data parallel tree learner
* Refer to [Parallel Learning Guide](./Parallel-Learning-Guide.rst) to get more details.
* `num_threads`, default=OpenMP_default, type=int, alias=`num_thread`,`nthread`
* Number of threads for LightGBM.
* For the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) to generate 2 threads per CPU core).
* Do not set it too large if your dataset is small (do not use 64 threads for a dataset with 10,000 rows, for instance).
* Be aware a task manager or any similar CPU monitoring tool might report cores not being fully utilized. This is normal.
* For parallel learning, do not use all CPU cores, since this will cause poor network performance.
* `device`, default=`cpu`, options=`cpu`,`gpu`
* Choose device for the tree learning; you can use GPU to achieve faster learning.
* Note: 1. It is recommended to use a smaller `max_bin` (e.g. `63`) to get a better speed-up. 2. For faster speed, the GPU uses 32-bit floating point for summation by default, which may affect accuracy for some tasks; you can set `gpu_use_dp=true` to enable 64-bit floating point, but it will slow down training. 3. Refer to [Installation Guide](./Installation-Guide.rst) to build with GPU.
## Learning Control Parameters
* `max_depth`, default=`-1`, type=int
* Limit the max depth of the tree model. This is used to deal with over-fitting when #data is small. The tree still grows leaf-wise.
* `< 0` means no limit
* `min_data_in_leaf`, default=`20`, type=int, alias=`min_data_per_leaf` , `min_data`
* Minimal number of data in one leaf. Can use this to deal with over-fit.
* `min_sum_hessian_in_leaf`, default=`1e-3`, type=double, alias=`min_sum_hessian_per_leaf`, `min_sum_hessian`, `min_hessian`
* Minimal sum hessian in one leaf. Like `min_data_in_leaf`, can use this to deal with over-fit.
* `feature_fraction`, default=`1.0`, type=double, `0.0 < feature_fraction < 1.0`, alias=`sub_feature`
* LightGBM will randomly select part of the features on each iteration if `feature_fraction` is smaller than `1.0`. For example, if set to `0.8`, it will select 80% of features before training each tree.
* Can use this to speed up training
* Can use this to deal with over-fit
* `feature_fraction_seed`, default=`2`, type=int
* Random seed for feature fraction.
* `bagging_fraction`, default=`1.0`, type=double, `0.0 < bagging_fraction < 1.0`, alias=`sub_row`
* Like `feature_fraction`, but this will randomly select part of the data without resampling
* Can use this to speed up training
* Can use this to deal with over-fit
* Note: To enable bagging, `bagging_freq` should be set to a non-zero value as well
* `bagging_freq`, default=`0`, type=int
* Frequency for bagging, `0` means disable bagging; `k` means perform bagging at every `k` iterations.
* Note: To enable bagging, `bagging_fraction` should be set as well
* `bagging_seed` , default=`3`, type=int
* Random seed for bagging.
* `early_stopping_round` , default=`0`, type=int, alias=`early_stopping_rounds`,`early_stopping`
* Will stop training if one metric of one validation data doesn't improve in last `early_stopping_round` rounds.
* `lambda_l1` , default=`0`, type=double
* l1 regularization
* `lambda_l2` , default=`0`, type=double
* l2 regularization
* `min_gain_to_split` , default=`0`, type=double
* The minimal gain to perform split
* `drop_rate`, default=`0.1`, type=double
* only used in `dart`
* `skip_drop`, default=`0.5`, type=double
* only used in `dart`, probability of skipping drop
* `max_drop`, default=`50`, type=int
* only used in `dart`, max number of dropped trees on one iteration. `<=0` means no limit.
* `uniform_drop`, default=`false`, type=bool
* only used in `dart`, true if want to use uniform drop
* `xgboost_dart_mode`, default=`false`, type=bool
* only used in `dart`, true if want to use xgboost dart mode
* `drop_seed`, default=`4`, type=int
* only used in `dart`, random seed to choose dropping models.
* `top_rate`, default=`0.2`, type=double
* only used in `goss`, the retain ratio of large gradient data
* `other_rate`, default=`0.1`, type=double
* only used in `goss`, the retain ratio of small gradient data
* `max_cat_group`, default=`64`, type=int
* used for categorical features.
* When #category is large, finding the split point on it may easily cause over-fitting. So LightGBM merges categories into `max_cat_group` groups, and finds the split points on the group boundaries.
* `min_data_per_group`, default=`10`, type=int
* Min number of data per categorical group.
* `max_cat_threshold`, default=`256`, type=int
* used for categorical features. Limit the max threshold points in categorical features.
* `min_cat_smooth`, default=`5`, type=double
* used for categorical features. Refer to the description of the parameter `cat_smooth_ratio`.
* `max_cat_smooth`, default=`100`, type=double
* used for categorical features. Refer to the description of the parameter `cat_smooth_ratio`.
* `cat_smooth_ratio`, default=`0.01`, type=double
* used for categorical features. This can reduce the effect of noise in categorical features, especially for categories with few data.
* The smooth denominator is `a = min(max_cat_smooth, max(min_cat_smooth, num_data/num_category*cat_smooth_ratio))`.
* The smooth numerator is `b = a * sum_gradient / sum_hessian`.
## IO Parameters
* `max_bin`, default=`255`, type=int
* max number of bins that feature values will be bucketed in. A small number of bins may reduce training accuracy but may increase generalization power (deal with over-fitting).
* LightGBM will auto compress memory according to `max_bin`. For example, LightGBM will use `uint8_t` for feature values if `max_bin=255`.
* `min_data_in_bin`, default=`5`, type=int
* min number of data inside one bin; use this to avoid one-data-one-bin (which may cause over-fitting).
* `data_random_seed`, default=`1`, type=int
* random seed for data partition in parallel learning (excluding feature parallelism).
* `output_model`, default=`LightGBM_model.txt`, type=string, alias=`model_output`,`model_out`
* file name of output model in training.
* `input_model`, default=`""`, type=string, alias=`model_input`,`model_in`
* file name of input model.
* for prediction task, will predict data using this model.
* for train task, will continue training from this model.
* `output_result`, default=`LightGBM_predict_result.txt`, type=string, alias=`predict_result`,`prediction_result`
* file name of prediction result in prediction task.
* `is_pre_partition`, default=`false`, type=bool
* used for parallel learning(not include feature parallel).
* `true` if training data are pre-partitioned, and different machines use different partitions.
* `is_sparse`, default=`true`, type=bool, alias=`is_enable_sparse`
* used to enable/disable sparse optimization. Set to `false` to disable sparse optimization.
* `two_round`, default=`false`, type=bool, alias=`two_round_loading`,`use_two_round_loading`
* by default, LightGBM will map the data file to memory and load features from memory. This will provide faster data loading speed, but it may run out of memory when the data file is very big.
* set this to `true` if data file is too big to fit in memory.
* `save_binary`, default=`false`, type=bool, alias=`is_save_binary`,`is_save_binary_file`
* setting this to `true` will save the dataset (including validation data) to a binary file, which speeds up data loading next time.
* `verbosity`, default=`1`, type=int, alias=`verbose`
* `<0` = Fatal, `=0` = Error (Warn), `>0` = Info
* `header`, default=`false`, type=bool, alias=`has_header`
* `true` if input data has header
* `label`, default=`""`, type=string, alias=`label_column`
* specify the label column
* Use number for index, e.g. `label=0` means column_0 is the label
* Add a prefix `name:` for column name, e.g. `label=name:is_click`
* `weight`, default=`""`, type=string, alias=`weight_column`
* specify the weight column
* Use number for index, e.g. `weight=0` means column_0 is the weight
* Add a prefix `name:` for column name, e.g. `weight=name:weight`
* Note: Indices start from `0`, and the label column is not counted when the passing type is index, e.g. when the label is column_0 and the weight is column_1, the correct parameter is `weight=0`.
* `query`, default=`""`, type=string, alias=`query_column`,`group`,`group_column`
* specify the query/group id column
* Use number for index, e.g. `query=0` means column_0 is the query id
* Add a prefix `name:` for column name, e.g. `query=name:query_id`
* Note: Data should be grouped by query_id. Indices start from `0`, and the label column is not counted when the passing type is index, e.g. when the label is column_0 and the query_id is column_1, the correct parameter is `query=0`.
* `ignore_column`, default=`""`, type=string, alias=`ignore_feature`,`blacklist`
* specify some columns to ignore in training
* Use number for index, e.g. `ignore_column=0,1,2` means column_0, column_1 and column_2 will be ignored.
* Add a prefix `name:` for column name, e.g. `ignore_column=name:c1,c2,c3` means c1, c2 and c3 will be ignored.
* Note: Indices start from `0`, and the label column is not counted.
* `categorical_feature`, default=`""`, type=string, alias=`categorical_column`,`cat_feature`,`cat_column`
* specify categorical features
* Use number for index, e.g. `categorical_feature=0,1,2` means column_0, column_1 and column_2 are categorical features.
* Add a prefix `name:` for column name, e.g. `categorical_feature=name:c1,c2,c3` means c1, c2 and c3 are categorical features.
* Note: Only categorical features of `int` type are supported (negative values will be treated as missing values). Indices start from `0`, and the label column is not counted.
* `predict_raw_score`, default=`false`, type=bool, alias=`raw_score`,`is_predict_raw_score`
* only used in prediction task
* Set to `true` will only predict the raw scores.
* Set to `false` to predict transformed scores.
* `predict_leaf_index`, default=`false`, type=bool, alias=`leaf_index`,`is_predict_leaf_index`
* only used in prediction task
* Set to `true` to predict with leaf index of all trees
* `predict_contrib`, default=`false`, type=bool, alias=`contrib`,`is_predict_contrib`
* only used in prediction task
* Set to `true` to estimate [SHAP values](https://arxiv.org/abs/1706.06060), which represent how each feature contributed to each prediction. Produces number of features + 1 values where the last value is the expected value of the model output over the training data.
* `bin_construct_sample_cnt`, default=`200000`, type=int
* Number of sampled data used to construct histogram bins.
* Setting this larger will give a better training result, but will increase data loading time.
* Set this to a larger value if data is very sparse.
* `num_iteration_predict`, default=`-1`, type=int
* only used in prediction task, used to specify how many trained iterations will be used in prediction.
* `<= 0` means no limit
* `pred_early_stop`, default=`false`, type=bool
* Set to `true` will use early-stopping to speed up the prediction. May affect the accuracy.
* `pred_early_stop_freq`, default=`10`, type=int
* The frequency of checking early-stopping prediction.
* `pred_early_stop_margin`, default=`10.0`, type=double
* The Threshold of margin in early-stopping prediction.
* `use_missing`, default=`true`, type=bool
* Set to `false` will disable the special handle of missing value.
* `zero_as_missing`, default=`false`, type=bool
* Set to `true` will treat all zeros as missing values (including the unshown values in libsvm/sparse matrices).
* Set to `false` will use `na` to represent missing values.
* `init_score_file`, default=`""`, type=string
* Path of training initial score file, `""` will use `train_data_file+".init"` (if exists).
* `valid_init_score_file`, default=`""`, type=multi-string
* Path of validation initial score file, `""` will use `valid_data_file+".init"` (if exists).
* separate by `,` for multi-validation data
## Objective Parameters
* `sigmoid`, default=`1.0`, type=double
* parameter for sigmoid function. Will be used in binary classification and lambdarank.
* `huber_delta`, default=`1.0`, type=double
* parameter for [Huber loss](https://en.wikipedia.org/wiki/Huber_loss "Huber loss - Wikipedia"). Will be used in regression task.
* `fair_c`, default=`1.0`, type=double
* parameter for [Fair loss](https://www.kaggle.com/c/allstate-claims-severity/discussion/24520). Will be used in regression task.
* `gaussian_eta`, default=`1.0`, type=double
* parameter to control the width of Gaussian function. Will be used in l1 and huber regression loss.
* `poisson_max_delta_step`, default=`0.7`, type=double
* parameter used to safeguard optimization
* `scale_pos_weight`, default=`1.0`, type=double
* weight of positive class in binary classification task
* `boost_from_average`, default=`true`, type=bool
* adjust initial score to the mean of labels for faster convergence, only used in Regression task.
* `is_unbalance`, default=`false`, type=bool
* used in binary classification. Set this to `true` if training data are unbalanced.
* `max_position`, default=`20`, type=int
* used in lambdarank, will optimize NDCG at this position.
* `label_gain`, default=`0,1,3,7,15,31,63,...`, type=multi-double
* used in lambdarank, relevant gain for labels. For example, the gain of label `2` is `3` if using default label gains.
* Separate by `,`
* `num_class`, default=`1`, type=int, alias=`num_classes`
* only used in multi-class classification
## Metric Parameters
* `metric`, default={`l2` for regression}, {`binary_logloss` for binary classification},{`ndcg` for lambdarank}, type=multi-enum, options=`l1`,`l2`,`ndcg`,`auc`,`binary_logloss`,`binary_error`...
* `l1`, absolute loss, alias=`mean_absolute_error`, `mae`
* `l2`, square loss, alias=`mean_squared_error`, `mse`
* `l2_root`, root square loss, alias=`root_mean_squared_error`, `rmse`
* `huber`, [Huber loss](https://en.wikipedia.org/wiki/Huber_loss "Huber loss - Wikipedia")
* `fair`, [Fair loss](https://www.kaggle.com/c/allstate-claims-severity/discussion/24520)
* `poisson`, [Poisson regression](https://en.wikipedia.org/wiki/Poisson_regression "Poisson regression")
* `ndcg`, [NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG)
* `map`, [MAP](https://en.wikipedia.org/wiki/Information_retrieval#Mean_average_precision)
* `auc`, [AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)
* `binary_logloss`, [log loss](https://www.kaggle.com/wiki/LogLoss)
* `binary_error`. For one sample: `0` for correct classification, `1` for incorrect classification.
* `multi_logloss`, log loss for multi-class classification
* `multi_error`, error rate for multi-class classification
* Supports multiple metrics, separated by `,`
* `metric_freq`, default=`1`, type=int
* frequency for metric output
* `is_training_metric`, default=`false`, type=bool
* set this to true if need to output metric result of training
* `ndcg_at`, default=`1,2,3,4,5`, type=multi-int, alias=`ndcg_eval_at`,`eval_at`
* NDCG evaluation position, separate by `,`
## Network Parameters
The following parameters are used for parallel learning, and are used only for the base (socket) version.
* `num_machines`, default=`1`, type=int, alias=`num_machine`
* Used for parallel learning, the number of machines for parallel learning application
* Need to set this in both socket and MPI versions.
* `local_listen_port`, default=`12400`, type=int, alias=`local_port`
* TCP listen port for local machines.
* Should allow this port in firewall setting before training.
* `time_out`, default=`120`, type=int
* Socket time-out in minutes.
* `machine_list_file`, default=`""`, type=string
* File that lists machines for this parallel learning application
* Each line contains one IP and one port for one machine. The format is `ip port`, separated by space.
## GPU Parameters
* `gpu_platform_id`, default=`-1`, type=int
* OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform.
* Default value is -1, using the system-wide default platform.
* `gpu_device_id`, default=`-1`, type=int
* OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID.
* Default value is -1, using the default device in the selected platform.
* `gpu_use_dp`, default=`false`, type=bool
* Set to true to use double precision math on GPU (default using single precision).
## Convert Model Parameters
This feature is supported only in the command line version so far.
* `convert_model_language`, default=`""`, type=string
* only `cpp` is supported so far.
* if `convert_model_language` is set when `task` is set to `train`, the model will also be converted.
* `convert_model`, default=`"gbdt_prediction.cpp"`, type=string
* output file name of converted model.
## Others
### Continued Training with Input Score
LightGBM supports continued training with an initial score. It uses an additional file to store these initial scores, like the following:
```
0.5
-0.1
0.9
...
```
It means the initial score of the first data row is `0.5`, the second is `-0.1`, and so on. The initial score file corresponds with the data file line by line, with one score per line. If the name of the data file is "train.txt", the initial score file should be named "train.txt.init" and placed in the same folder as the data file. LightGBM will automatically load the initial score file if it exists.
### Weight Data
LightGBM supports weighted training. It uses an additional file to store weight data, like the following:
```
1.0
0.5
0.8
...
```
It means the weight of the first data row is `1.0`, the second is `0.5`, and so on. The weight file corresponds with the data file line by line, with one weight per line. If the name of the data file is "train.txt", the weight file should be named "train.txt.weight" and placed in the same folder as the data file. LightGBM will automatically load the weight file if it exists.
Update:
You can specify the weight column in the data file now. Please refer to the parameter `weight` above.
### Query Data
For LambdaRank learning, query information is needed for the training data. LightGBM uses an additional file to store query data. The following is an example:
```
27
18
67
...
```
It means the first `27` lines of samples belong to one query and the next `18` lines belong to another, and so on. (**Note: data should be ordered by query**) If the name of the data file is "train.txt", the query file should be named "train.txt.query" and placed in the same folder as the training data. LightGBM will load the query file automatically if it exists.
You can specify the query/group id column in the data file now. Please refer to the parameter `group` above.
Parameters
==========
This page contains all parameters in LightGBM.
**List of other helpful links**
- `Python API <./Python-API.rst>`__
- `Parameters Tuning <./Parameters-Tuning.rst>`__
**External Links**
- `Laurae++ Interactive Documentation`_
**Update of 08/04/2017**
Default values for the following parameters have changed:
- ``min_data_in_leaf`` = 100 => 20
- ``min_sum_hessian_in_leaf`` = 10 => 1e-3
- ``num_leaves`` = 127 => 31
- ``num_iterations`` = 10 => 100
Parameters Format
-----------------
The parameters format is ``key1=value1 key2=value2 ...``.
And parameters can be set both in config file and command line.
By using command line, parameters should not have spaces before and after ``=``.
By using config files, one line can only contain one parameter. You can use ``#`` to comment.
If one parameter appears in both command line and config file, LightGBM will use the parameter in command line.
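For example, a hypothetical training run with everything passed on the command line (the binary and file names are placeholders) could look like:

::

   ./lightgbm task=train objective=binary data=train.txt valid=test.txt num_iterations=100 learning_rate=0.1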
Core Parameters
---------------
- ``config``, default=\ ``""``, type=string, alias=\ ``config_file``
- path of config file
- ``task``, default=\ ``train``, type=enum, options=\ ``train``, ``prediction``, ``convert_model``
- ``train`` for training
- ``prediction`` for prediction
- ``convert_model`` for converting model file into if-else format, see more information in `Convert model parameters <#convert-model-parameters>`__
- ``application``, default=\ ``regression``, type=enum,
options=\ ``regression``, ``regression_l2``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``binary``, ``lambdarank``, ``multiclass``,
alias=\ ``objective``, ``app``
- ``regression``, regression application
- ``regression_l2``, L2 loss, alias=\ ``mean_squared_error``, ``mse``
- ``regression_l1``, L1 loss, alias=\ ``mean_absolute_error``, ``mae``
- ``huber``, `Huber loss`_
- ``fair``, `Fair loss`_
- ``poisson``, `Poisson regression`_
- ``binary``, binary classification application
- ``lambdarank``, `lambdarank`_ application
- the label should be ``int`` type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
- ``label_gain`` can be used to set the gain(weight) of ``int`` label
- ``multiclass``, multi-class classification application, ``num_class`` should be set as well
- ``boosting``, default=\ ``gbdt``, type=enum,
options=\ ``gbdt``, ``rf``, ``dart``, ``goss``,
alias=\ ``boost``, ``boosting_type``
- ``gbdt``, traditional Gradient Boosting Decision Tree
- ``rf``, Random Forest
- ``dart``, `Dropouts meet Multiple Additive Regression Trees`_
- ``goss``, Gradient-based One-Side Sampling
- ``data``, default=\ ``""``, type=string, alias=\ ``train``, ``train_data``
- training data, LightGBM will train from this data
- ``valid``, default=\ ``""``, type=multi-string, alias=\ ``test``, ``valid_data``, ``test_data``
- validation/test data, LightGBM will output metrics for these data
- supports multiple validation data, separated by ``,``
- ``num_iterations``, default=\ ``100``, type=int,
alias=\ ``num_iteration``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``
- number of boosting iterations
- **Note**: for Python/R package, **this parameter is ignored**,
use ``num_boost_round`` (Python) or ``nrounds`` (R) input arguments of ``train`` and ``cv`` methods instead
- **Note**: internally, LightGBM constructs ``num_class * num_iterations`` trees for ``multiclass`` problems
- ``learning_rate``, default=\ ``0.1``, type=double, alias=\ ``shrinkage_rate``
- shrinkage rate
- in ``dart``, it also affects the normalization weights of dropped trees
- ``num_leaves``, default=\ ``31``, type=int, alias=\ ``num_leaf``
- number of leaves in one tree
- ``tree_learner``, default=\ ``serial``, type=enum, options=\ ``serial``, ``feature``, ``data``
- ``serial``, single machine tree learner
- ``feature``, feature parallel tree learner
- ``data``, data parallel tree learner
- refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details
- ``num_threads``, default=\ ``OpenMP_default``, type=int, alias=\ ``num_thread``, ``nthread``
- number of threads for LightGBM
- for the best speed, set this to the number of **real CPU cores**,
not the number of threads (most CPUs use `hyper-threading`_ to generate 2 threads per CPU core)
- do not set it too large if your dataset is small (do not use 64 threads for a dataset with 10,000 rows, for instance)
- be aware a task manager or any similar CPU monitoring tool might report cores not being fully utilized. **This is normal**
- for parallel learning, do not use all CPU cores, since this will cause poor network performance
- ``device``, default=\ ``cpu``, options=\ ``cpu``, ``gpu``
- choose device for the tree learning; you can use GPU to achieve faster learning (see the configuration sketch after this list)
- **Note**: it is recommended to use a smaller ``max_bin`` (e.g. 63) to get a better speed-up
- **Note**: for faster speed, the GPU uses 32-bit floating point for summation by default, which may affect accuracy for some tasks.
You can set ``gpu_use_dp=true`` to enable 64-bit floating point, but it will slow down training
- **Note**: refer to `Installation Guide <./Installation-Guide.rst#build-gpu-version>`__ to build with GPU
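As a sketch, a GPU run might combine the parameters from the notes above as follows (the values are illustrative, not tuned recommendations):

::

   device=gpu
   max_bin=63
   gpu_use_dp=false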
Learning Control Parameters
---------------------------
- ``max_depth``, default=\ ``-1``, type=int
- limit the max depth for tree model. This is used to deal with over-fitting when ``#data`` is small. The tree still grows leaf-wise
- ``< 0`` means no limit
- ``min_data_in_leaf``, default=\ ``20``, type=int, alias=\ ``min_data_per_leaf``, ``min_data``
- minimal number of data in one leaf. Can be used to deal with over-fitting
- ``min_sum_hessian_in_leaf``, default=\ ``1e-3``, type=double,
alias=\ ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``
- minimal sum hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting
- ``feature_fraction``, default=\ ``1.0``, type=double, ``0.0 < feature_fraction < 1.0``, alias=\ ``sub_feature``
- LightGBM will randomly select part of the features on each iteration if ``feature_fraction`` is smaller than ``1.0``.
For example, if set to ``0.8``, it will select 80% of features before training each tree
- can be used to speed up training
- can be used to deal with over-fitting
- ``feature_fraction_seed``, default=\ ``2``, type=int
- random seed for ``feature_fraction``
- ``bagging_fraction``, default=\ ``1.0``, type=double, ``0.0 < bagging_fraction < 1.0``, alias=\ ``sub_row``
- like ``feature_fraction``, but this will randomly select part of data without resampling
- can be used to speed up training
- can be used to deal with over-fitting
- **Note**: To enable bagging, ``bagging_freq`` should be set to a non-zero value as well
- ``bagging_freq``, default=\ ``0``, type=int
- frequency for bagging, ``0`` means disable bagging; ``k`` means perform bagging at every ``k`` iterations
- **Note**: to enable bagging, ``bagging_fraction`` should be set as well
- ``bagging_seed``, default=\ ``3``, type=int
- random seed for bagging
- ``early_stopping_round``, default=\ ``0``, type=int, alias=\ ``early_stopping_rounds``, ``early_stopping``
- will stop training if one metric of one validation data doesn't improve in last ``early_stopping_round`` rounds
- ``lambda_l1``, default=\ ``0``, type=double
- L1 regularization
- ``lambda_l2``, default=\ ``0``, type=double
- L2 regularization
- ``min_gain_to_split``, default=\ ``0``, type=double
- the minimal gain to perform split
- ``drop_rate``, default=\ ``0.1``, type=double
- only used in ``dart``
- ``skip_drop``, default=\ ``0.5``, type=double
- only used in ``dart``, probability of skipping drop
- ``max_drop``, default=\ ``50``, type=int
- only used in ``dart``, max number of dropped trees on one iteration
- ``<=0`` means no limit
- ``uniform_drop``, default=\ ``false``, type=bool
- only used in ``dart``, set this to ``true`` if you want to use uniform drop
- ``xgboost_dart_mode``, default=\ ``false``, type=bool
- only used in ``dart``, set this to ``true`` if you want to use xgboost dart mode
- ``drop_seed``, default=\ ``4``, type=int
- only used in ``dart``, random seed to choose dropping models
- ``top_rate``, default=\ ``0.2``, type=double
- only used in ``goss``, the retain ratio of large gradient data
- ``other_rate``, default=\ ``0.1``, type=double
- only used in ``goss``, the retain ratio of small gradient data
- ``max_cat_group``, default=\ ``64``, type=int
- used for categorical features
- when ``#category`` is large, finding the split point on it may easily cause over-fitting.
So LightGBM merges categories into ``max_cat_group`` groups, and finds the split points on the group boundaries
- ``min_data_per_group``, default=\ ``10``, type=int
- min number of data per categorical group
- ``max_cat_threshold``, default=\ ``256``, type=int
- used for categorical features
- limit the max threshold points in categorical features
- ``min_cat_smooth``, default=\ ``5``, type=double
- used for categorical features
- refer to the description of the parameter ``cat_smooth_ratio``
- ``max_cat_smooth``, default=\ ``100``, type=double
- used for categorical features
- refer to the description of the parameter ``cat_smooth_ratio``
- ``cat_smooth_ratio``, default=\ ``0.01``, type=double
- used for categorical features
- this can reduce the effect of noise in categorical features, especially for categories with few data
- the smooth denominator is ``a = min(max_cat_smooth, max(min_cat_smooth, num_data / num_category * cat_smooth_ratio))``
- the smooth numerator is ``b = a * sum_gradient / sum_hessian``
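For example, under the default settings, a hypothetical feature with ``num_data = 10000`` rows spread over ``num_category = 100`` categories gives ``num_data / num_category * cat_smooth_ratio = 1``, so the smooth denominator is clamped to ``a = min(100, max(5, 1)) = 5``.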
IO Parameters
-------------
- ``max_bin``, default=\ ``255``, type=int
- max number of bins that feature values will be bucketed in.
A small number of bins may reduce training accuracy but may increase generalization power (deal with over-fitting)
- LightGBM will auto compress memory according to ``max_bin``.
For example, LightGBM will use ``uint8_t`` for feature value if ``max_bin=255``
- ``min_data_in_bin``, default=\ ``5``, type=int
- min number of data inside one bin, use this to avoid one-data-one-bin (which may cause over-fitting)
- ``data_random_seed``, default=\ ``1``, type=int
- random seed for data partition in parallel learning (excluding feature parallelism)
- ``output_model``, default=\ ``LightGBM_model.txt``, type=string, alias=\ ``model_output``, ``model_out``
- file name of output model in training
- ``input_model``, default=\ ``""``, type=string, alias=\ ``model_input``, ``model_in``
- file name of input model
- for ``prediction`` task, this model will be used to make predictions
- for ``train`` task, training will be continued from this model
- ``output_result``, default=\ ``LightGBM_predict_result.txt``,
type=string, alias=\ ``predict_result``, ``prediction_result``
- file name of prediction result in ``prediction`` task
- ``is_pre_partition``, default=\ ``false``, type=bool
- used for parallel learning (not include feature parallel)
- ``true`` if training data are pre-partitioned, and different machines use different partitions
- ``is_sparse``, default=\ ``true``, type=bool, alias=\ ``is_enable_sparse``
- used to enable/disable sparse optimization. Set to ``false`` to disable sparse optimization
- ``two_round``, default=\ ``false``, type=bool, alias=\ ``two_round_loading``, ``use_two_round_loading``
- by default, LightGBM will map data file to memory and load features from memory.
This will provide faster data loading speed. But it may run out of memory when the data file is very big
- set this to ``true`` if data file is too big to fit in memory
- ``save_binary``, default=\ ``false``, type=bool, alias=\ ``is_save_binary``, ``is_save_binary_file``
- if ``true``, LightGBM will save the dataset (including validation data) to a binary file.
This speeds up the data loading for the next time
- ``verbosity``, default=\ ``1``, type=int, alias=\ ``verbose``
- ``<0`` = Fatal,
``=0`` = Error (Warn),
``>0`` = Info
- ``header``, default=\ ``false``, type=bool, alias=\ ``has_header``
- set this to ``true`` if input data has header
- ``label``, default=\ ``""``, type=string, alias=\ ``label_column``
- specify the label column
- use number for index, e.g. ``label=0`` means column\_0 is the label
- add a prefix ``name:`` for column name, e.g. ``label=name:is_click``
- ``weight``, default=\ ``""``, type=string, alias=\ ``weight_column``
- specify the weight column
- use number for index, e.g. ``weight=0`` means column\_0 is the weight
- add a prefix ``name:`` for column name, e.g. ``weight=name:weight``
- **Note**: index starts from ``0``.
And it doesn't count the label column when the passed type is index, e.g. when the label is column\_0 and the weight is column\_1, the correct parameter is ``weight=0``
- ``query``, default=\ ``""``, type=string, alias=\ ``query_column``, ``group``, ``group_column``
- specify the query/group id column
- use number for index, e.g. ``query=0`` means column\_0 is the query id
- add a prefix ``name:`` for column name, e.g. ``query=name:query_id``
- **Note**: data should be grouped by query\_id.
Index starts from ``0``.
And it doesn't count the label column when the passed type is index, e.g. when the label is column\_0 and the query\_id is column\_1, the correct parameter is ``query=0``
- ``ignore_column``, default=\ ``""``, type=string, alias=\ ``ignore_feature``, ``blacklist``
- specify some columns to ignore in training
- use number for index, e.g. ``ignore_column=0,1,2`` means column\_0, column\_1 and column\_2 will be ignored
- add a prefix ``name:`` for column name, e.g. ``ignore_column=name:c1,c2,c3`` means c1, c2 and c3 will be ignored
- **Note**: index starts from ``0``. And it doesn't count the label column
- ``categorical_feature``, default=\ ``""``, type=string, alias=\ ``categorical_column``, ``cat_feature``, ``cat_column``
- specify categorical features
- use number for index, e.g. ``categorical_feature=0,1,2`` means column\_0, column\_1 and column\_2 are categorical features
- add a prefix ``name:`` for column name, e.g. ``categorical_feature=name:c1,c2,c3`` means c1, c2 and c3 are categorical features
- **Note**: only categorical features of ``int`` type are supported. Index starts from ``0``. And it doesn't count the label column
- **Note**: the negative values will be treated as **missing values**
- ``predict_raw_score``, default=\ ``false``, type=bool, alias=\ ``raw_score``, ``is_predict_raw_score``
- only used in ``prediction`` task
- set to ``true`` to predict only the raw scores
- set to ``false`` to predict transformed scores
- ``predict_leaf_index``, default=\ ``false``, type=bool, alias=\ ``leaf_index``, ``is_predict_leaf_index``
- only used in ``prediction`` task
- set to ``true`` to predict with leaf index of all trees
- ``predict_contrib``, default=\ ``false``, type=bool, alias=\ ``contrib``, ``is_predict_contrib``
- only used in ``prediction`` task
- set to ``true`` to estimate `SHAP values`_, which represent how each feature contributes to each prediction.
Produces number of features + 1 values where the last value is the expected value of the model output over the training data
- ``bin_construct_sample_cnt``, default=\ ``200000``, type=int
- number of sampled data used to construct histogram bins
- setting this larger will give a better training result, but will increase data loading time
- set this to larger value if data is very sparse
- ``num_iteration_predict``, default=\ ``-1``, type=int
- only used in ``prediction`` task
- use to specify how many trained iterations will be used in prediction
- ``<= 0`` means no limit
- ``pred_early_stop``, default=\ ``false``, type=bool
- if ``true`` will use early-stopping to speed up the prediction. May affect the accuracy
- ``pred_early_stop_freq``, default=\ ``10``, type=int
- the frequency of checking early-stopping prediction
- ``pred_early_stop_margin``, default=\ ``10.0``, type=double
- the threshold of margin in early-stopping prediction
- ``use_missing``, default=\ ``true``, type=bool
- set to ``false`` to disable the special handle of missing value
- ``zero_as_missing``, default=\ ``false``, type=bool
- set to ``true`` to treat all zeros as missing values (including the unshown values in libsvm/sparse matrices)
- set to ``false`` to use ``na`` to represent missing values
- ``init_score_file``, default=\ ``""``, type=string
- path to training initial score file, ``""`` will use ``train_data_file`` + ``.init`` (if exists)
- ``valid_init_score_file``, default=\ ``""``, type=multi-string
- path to validation initial score file, ``""`` will use ``valid_data_file`` + ``.init`` (if exists)
- separate by ``,`` for multi-validation data
Objective Parameters
--------------------
- ``sigmoid``, default=\ ``1.0``, type=double
- parameter for sigmoid function. Will be used in ``binary`` classification and ``lambdarank``
- ``huber_delta``, default=\ ``1.0``, type=double
- parameter for `Huber loss`_. Will be used in ``regression`` task
- ``fair_c``, default=\ ``1.0``, type=double
- parameter for `Fair loss`_. Will be used in ``regression`` task
- ``gaussian_eta``, default=\ ``1.0``, type=double
- parameter to control the width of Gaussian function. Will be used in ``regression_l1`` and ``huber`` losses
- ``poisson_max_delta_step``, default=\ ``0.7``, type=double
- parameter used to safeguard optimization
- ``scale_pos_weight``, default=\ ``1.0``, type=double
- weight of positive class in ``binary`` classification task
- ``boost_from_average``, default=\ ``true``, type=bool
- only used in ``regression`` task
- adjust initial score to the mean of labels for faster convergence
- ``is_unbalance``, default=\ ``false``, type=bool
- used in ``binary`` classification
- set this to ``true`` if training data are unbalanced
- ``max_position``, default=\ ``20``, type=int
- used in ``lambdarank``
- will optimize `NDCG`_ at this position
- ``label_gain``, default=\ ``0,1,3,7,15,31,63,...``, type=multi-double
- used in ``lambdarank``
- relevant gain for labels. For example, the gain of label ``2`` is ``3`` if using default label gains
- separate by ``,``
- ``num_class``, default=\ ``1``, type=int, alias=\ ``num_classes``
- only used in ``multiclass`` classification
Metric Parameters
-----------------
- ``metric``, default={``l2`` for regression}, {``binary_logloss`` for binary classification}, {``ndcg`` for lambdarank}, type=multi-enum,
options=\ ``l1``, ``l2``, ``ndcg``, ``auc``, ``binary_logloss``, ``binary_error`` ...
- ``l1``, absolute loss, alias=\ ``mean_absolute_error``, ``mae``
- ``l2``, square loss, alias=\ ``mean_squared_error``, ``mse``
- ``l2_root``, root square loss, alias=\ ``root_mean_squared_error``, ``rmse``
- ``huber``, `Huber loss`_
- ``fair``, `Fair loss`_
- ``poisson``, `Poisson regression`_
- ``ndcg``, `NDCG`_
- ``map``, `MAP`_
- ``auc``, `AUC`_
- ``binary_logloss``, `log loss`_
- ``binary_error``.
For one sample: ``0`` for correct classification, ``1`` for incorrect classification
- ``multi_logloss``, log loss for multi-class classification
- ``multi_error``, error rate for multi-class classification
- support multi metrics, separated by ``,``
- ``metric_freq``, default=\ ``1``, type=int
- frequency for metric output
- ``is_training_metric``, default=\ ``false``, type=bool
- set this to ``true`` if you need to output metric result of training
- ``ndcg_at``, default=\ ``1,2,3,4,5``, type=multi-int, alias=\ ``ndcg_eval_at``, ``eval_at``
- `NDCG`_ evaluation positions, separated by ``,``
Network Parameters
------------------
The following parameters are used for parallel learning, and are used only for the base (socket) version.
- ``num_machines``, default=\ ``1``, type=int, alias=\ ``num_machine``
- used for parallel learning, the number of machines for parallel learning application
- need to set this in both socket and mpi versions
- ``local_listen_port``, default=\ ``12400``, type=int, alias=\ ``local_port``
- TCP listen port for local machines
- you should allow this port in firewall settings before training
- ``time_out``, default=\ ``120``, type=int
- socket time-out in minutes
- ``machine_list_file``, default=\ ``""``, type=string
- file that lists machines for this parallel learning application
- each line contains one IP and one port for one machine. The format is ``ip port``, separated by space
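For example, a hypothetical machine list file for two machines (the IP addresses and port are placeholders) could look like:

::

   192.168.0.1 12400
   192.168.0.2 12400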
GPU Parameters
--------------
- ``gpu_platform_id``, default=\ ``-1``, type=int
- OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform.
- default value is ``-1``, which means the system-wide default platform
- ``gpu_device_id``, default=\ ``-1``, type=int
- OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID
- default value is ``-1``, which means the default device in the selected platform
- ``gpu_use_dp``, default=\ ``false``, type=bool
- set to ``true`` to use double precision math on GPU (default using single precision)
Convert Model Parameters
------------------------
This feature is supported only in the command line version so far.
- ``convert_model_language``, default=\ ``""``, type=string
- only ``cpp`` is supported so far
- if ``convert_model_language`` is set when ``task`` is set to ``train``, the model will also be converted
- ``convert_model``, default=\ ``"gbdt_prediction.cpp"``, type=string
- output file name of converted model
Others
------
Continued Training with Input Score
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LightGBM supports continued training with initial scores. It uses an additional file to store these initial scores, like the following:
::
0.5
-0.1
0.9
...
It means the initial score of the first data row is ``0.5``, second is ``-0.1``, and so on.
The initial score file corresponds with the data file line by line, with one score per line.
If the name of the data file is ``train.txt``, the initial score file should be named ``train.txt.init`` and placed in the same folder as the data file.
In this case, LightGBM will automatically load the initial score file if it exists.
Weight Data
~~~~~~~~~~~
LightGBM supports weighted training. It uses an additional file to store weight data, like the following:
::
1.0
0.5
0.8
...
It means the weight of the first data row is ``1.0``, second is ``0.5``, and so on.
The weight file corresponds with the data file line by line, with one weight per line.
If the name of the data file is ``train.txt``, the weight file should be named ``train.txt.weight`` and placed in the same folder as the data file.
In this case, LightGBM will automatically load the weight file if it exists.
**update**:
You can specify the weight column in the data file now. Please refer to the parameter ``weight`` above.
Query Data
~~~~~~~~~~
For LambdaRank learning, query information is needed for the training data.
LightGBM uses an additional file to store query data, like the following:
::
27
18
67
...
It means the first ``27`` lines of samples belong to one query, the next ``18`` lines belong to another, and so on.
**Note**: data should be ordered by the query.
If the name of data file is ``train.txt``, the query file should be named as ``train.txt.query`` and in same folder of training data.
In this case LightGBM will load the query file automatically if it exists.
**update**:
You can specify the query/group id column in the data file now. Please refer to the parameter ``group`` above.
.. _Laurae++ Interactive Documentation: https://sites.google.com/view/lauraepp/parameters
.. _Huber loss: https://en.wikipedia.org/wiki/Huber_loss
.. _Fair loss: https://www.kaggle.com/c/allstate-claims-severity/discussion/24520
.. _Poisson regression: https://en.wikipedia.org/wiki/Poisson_regression
.. _lambdarank: https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf
.. _Dropouts meet Multiple Additive Regression Trees: https://arxiv.org/abs/1505.01866
.. _hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
.. _SHAP values: https://arxiv.org/abs/1706.06060
.. _NDCG: https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG
.. _MAP: https://en.wikipedia.org/wiki/Information_retrieval#Mean_average_precision
.. _AUC: https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve
.. _log loss: https://www.kaggle.com/wiki/LogLoss
Python Package Introduction
===========================
This document gives a basic walkthrough of LightGBM Python-package.
**List of other helpful links**
- `Python Examples <https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide>`__
- `Python API <./Python-API.rst>`__
- `Parameters Tuning <./Parameters-Tuning.rst>`__
Install
-------
Install the Python-package dependencies:
``setuptools``, ``wheel``, ``numpy`` and ``scipy`` are required; ``scikit-learn`` is required for the sklearn interface and is recommended:
::
pip install setuptools wheel numpy scipy scikit-learn -U
Refer to `Python-package`_ folder for the installation guide.
To verify your installation, try to ``import lightgbm`` in Python:
::
import lightgbm as lgb
Data Interface
--------------
The LightGBM Python module is able to load data from:
- libsvm/tsv/csv txt format file
- Numpy 2D array, pandas object
- LightGBM binary file
The data is stored in a ``Dataset`` object.
**To load a libsvm text file or a LightGBM binary file into Dataset:**
.. code:: python
train_data = lgb.Dataset('train.svm.bin')
**To load a numpy array into Dataset:**
.. code:: python
data = np.random.rand(500, 10) # 500 entities, each contains 10 features
label = np.random.randint(2, size=500) # binary target
train_data = lgb.Dataset(data, label=label)
**To load a scipy.sparse.csr\_matrix array into Dataset:**
.. code:: python
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
**Saving Dataset into a LightGBM binary file will make loading faster:**
.. code:: python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
**Create validation data:**
.. code:: python
test_data = train_data.create_valid('test.svm')
or
.. code:: python
test_data = lgb.Dataset('test.svm', reference=train_data)
In LightGBM, the validation data should be aligned with training data.
**To specify feature names and categorical features:**
.. code:: python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
LightGBM can use categorical features as input directly.
It doesn't need to convert to one-hot encoding, and is much faster than one-hot encoding (about 8x speed-up).
**Note**: You should convert your categorical features to ``int`` type before you construct ``Dataset``.
**Weights can be set when needed:**
.. code:: python
w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)
or
.. code:: python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500, )
train_data.set_weight(w)
And you can use ``Dataset.set_init_score()`` to set initial score, and ``Dataset.set_group()`` to set group/query data for ranking tasks.
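For example, a minimal sketch of attaching group/query sizes for a ranking task (the data and group sizes here are made up; the group sizes must sum to the number of rows):

.. code:: python

   import numpy as np

   data = np.random.rand(100, 10)          # 100 rows, 10 features
   label = np.random.randint(4, size=100)  # relevance labels 0-3
   train_data = lgb.Dataset(data, label=label)
   train_data.set_group([10, 20, 70])      # 3 queries with 10, 20 and 70 rows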
**Memory efficient usage:**
The ``Dataset`` object in LightGBM is very memory-efficient, because it only needs to save discrete bins.
However, Numpy/Array/Pandas objects are memory-expensive.
If you are concerned about your memory consumption, you can save memory as follows:
1. Let ``free_raw_data=True`` (default is ``True``) when constructing the ``Dataset``
2. Explicitly set ``raw_data=None`` after the ``Dataset`` has been constructed
3. Call ``gc``
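Putting the three steps above together, a minimal sketch (using the ``free_raw_data`` argument and the ``raw_data`` attribute named above, and assuming ``lightgbm`` is imported as ``lgb``):

.. code:: python

   import gc

   import numpy as np

   data = np.random.rand(500, 10)
   label = np.random.randint(2, size=500)
   train_data = lgb.Dataset(data, label=label, free_raw_data=True)  # step 1
   train_data.raw_data = None  # step 2, attribute name as suggested above
   gc.collect()                # step 3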
Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set `Parameters <./Parameters.rst>`__.
For instance:
- Booster parameters:
.. code:: python
param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
param['metric'] = 'auc'
- You can also specify multiple eval metrics:
.. code:: python
param['metric'] = ['auc', 'binary_logloss']
Training
--------
Training a model requires a parameter list and data set:
.. code:: python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
After training, the model can be saved:
.. code:: python
bst.save_model('model.txt')
The trained model can also be dumped to JSON format:
.. code:: python
json_model = bst.dump_model()
A saved model can be loaded:
.. code:: python
bst = lgb.Booster(model_file='model.txt')  # init model
CV
--
Training with 5-fold CV:
.. code:: python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in ``valid_sets``. If there is more than one, it will use all of them:
.. code:: python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
The model will train until the validation score stops improving.
Validation error needs to improve at least every ``early_stopping_rounds`` to continue training.
If early stopping occurs, the model will have an additional field: ``bst.best_iteration``.
Note that ``train()`` will return a model from the last iteration, not the best one.
And you can set ``num_iteration=bst.best_iteration`` when saving model.
This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC).
Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
Prediction
----------
A model that has been trained or loaded can perform predictions on data sets:
.. code:: python
# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
If early stopping is enabled during training, you can get predictions from the best iteration with ``bst.best_iteration``:
.. code:: python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
.. _Python-package: https://github.com/Microsoft/LightGBM/tree/master/python-package
Python Package Introduction
===========================
This document gives a basic walkthrough of LightGBM Python-package.
***List of other Helpful Links***
* [Python Examples](https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide)
* [Python API](./Python-API.rst)
* [Parameters Tuning](./Parameters-tuning.md)
Install
-------
Install the Python-package dependencies: `setuptools`, `wheel`, `numpy` and `scipy` are required; `scikit-learn` is required for the sklearn interface and is recommended:
```
pip install setuptools wheel numpy scipy scikit-learn -U
```
Refer to [Python-package](https://github.com/Microsoft/LightGBM/tree/master/python-package) folder for the installation guide.
To verify your installation, try to `import lightgbm` in Python:
```
import lightgbm as lgb
```
Data Interface
--------------
The LightGBM Python module is able to load data from:
- libsvm/tsv/csv txt format file
- Numpy 2D array, pandas object
- LightGBM binary file
The data is stored in a `Dataset` object.
#### To load a libsvm text file or a LightGBM binary file into `Dataset`:
```python
train_data = lgb.Dataset('train.svm.bin')
```
#### To load a numpy array into `Dataset`:
```python
data = np.random.rand(500, 10) # 500 entities, each contains 10 features
label = np.random.randint(2, size=500) # binary target
train_data = lgb.Dataset(data, label=label)
```
#### To load a scipy.sparse.csr_matrix array into `Dataset`:
```python
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
```
#### Saving `Dataset` into a LightGBM binary file will make loading faster:
```python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
```
#### Create validation data:
```python
test_data = train_data.create_valid('test.svm')
```
or
```python
test_data = lgb.Dataset('test.svm', reference=train_data)
```
In LightGBM, the validation data should be aligned with training data.
#### To specify feature names and categorical features:
```python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
```
LightGBM can use categorical features as input directly. It doesn't need to convert to one-hot encoding, and is much faster than one-hot encoding (about 8x speed-up).
**Note**: You should convert your categorical features to `int` type before you construct `Dataset`.
#### Weights can be set when needed:
```python
w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)
```
or
```python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500, )
train_data.set_weight(w)
```
And you can use `Dataset.set_init_score()` to set initial score, and `Dataset.set_group()` to set group/query data for ranking tasks.
#### Memory efficient usage
The `Dataset` object in LightGBM is very memory-efficient, because it only needs to save discrete bins.
However, Numpy/Array/Pandas objects are memory-expensive. If you are concerned about your memory consumption, you can save memory as follows:
1. Let `free_raw_data=True` (default is `True`) when constructing the `Dataset`
2. Explicitly set `raw_data=None` after the `Dataset` has been constructed
3. Call `gc`
Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set [Parameters](./Parameters.md). For instance:
* Booster parameters:
```python
param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
param['metric'] = 'auc'
```
* You can also specify multiple eval metrics:
```python
param['metric'] = ['auc', 'binary_logloss']
```
Training
--------
Training a model requires a parameter list and data set.
```python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
```
After training, the model can be saved.
```python
bst.save_model('model.txt')
```
The trained model can also be dumped to JSON format.
```python
# dump model
json_model = bst.dump_model()
```
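Since `dump_model()` returns a plain Python dictionary, it can be written to disk with the standard `json` module, for example:
```python
import json

with open('model.json', 'w') as f:
    json.dump(json_model, f, indent=4)
```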
A saved model can be loaded.
```python
bst = lgb.Booster(model_file='model.txt')  # init model
```
CV
--
Training with 5-fold CV:
```python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
```
Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in `valid_sets`. If there's more than one, it will use all of them.
```python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
```
The model will train until the validation score stops improving. The validation score needs to improve at least once in every `early_stopping_rounds` rounds for training to continue.
If early stopping occurs, the model will have an additional field: `bst.best_iteration`. Note that `train()` returns the model from the last iteration, not the best one, so you can set `num_iteration=bst.best_iteration` when saving the model.
This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC). Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
Prediction
----------
A model that has been trained or loaded can perform predictions on data sets.
```python
# 7 entities, each containing 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
```
If early stopping is enabled during training, you can get predictions from the best iteration with `bst.best_iteration`:
```python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
```
# Quick Start
This is a quick start guide for the LightGBM CLI version.
Follow the [Installation Guide](./Installation-Guide.rst) to install LightGBM first.
***List of other Helpful Links***
* [Parameters](./Parameters.md)
* [Parameters Tuning](./Parameters-tuning.md)
* [Python-package Quick Start](./Python-intro.md)
* [Python API](./Python-API.rst)
## Training Data Format
LightGBM supports input data files in [CSV](https://en.wikipedia.org/wiki/Comma-separated_values), [TSV](https://en.wikipedia.org/wiki/Tab-separated_values) and [LibSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) formats.
The label is the first column, and the file has no header.
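For example, a CSV training file with a binary label and four features might look like this (hypothetical values):
```
0,0.12,3.4,5.6,1.0
1,0.98,1.2,0.3,2.5
```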
### Categorical Feature Support
update 12/5/2016:
LightGBM can use categorical features directly (without one-hot encoding). The experiment on [Expo data](http://stat-computing.org/dataexpo/2009/) shows about an 8x speed-up compared with one-hot encoding.
For the setting details, please refer to [Parameters](./Parameters.md).
### Weight and Query/Group Data
LightGBM also supports weighted training; it needs an additional [weight data](./Parameters.md) file. Ranking tasks need an additional [query data](./Parameters.md) file.
update 11/3/2016:
1. input with a header is now supported
2. the label column, weight column and query/group id column can be specified, by either index or name
3. a list of columns to ignore can be specified
An example of these options in a config file is sketched below.
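A hedged sketch of what these options can look like in a config file (the parameter names `has_header`, `label`, `weight`, `query` and `ignore_column`, the `name:` prefix for selecting columns by name, and the column names themselves are assumptions based on the [Parameters](./Parameters.md) page):
```
has_header = true
label = name:label
weight = name:weight
query = name:query_id
ignore_column = name:user_id,date
```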
## Parameter Quick Look
The parameter format is ```key1=value1 key2=value2 ...```. Parameters can be set both in the config file and on the command line.
Some important parameters:
* ```config```, default=```""```, type=string, alias=```config_file```
* path of config file
* ```task```, default=```train```, type=enum, options=```train```,```prediction```
* ```train``` for training
* ```prediction``` for prediction.
* `application`, default=`regression`, type=enum, options=`regression`,`regression_l2`,`regression_l1`,`huber`,`fair`,`poisson`,`binary`,`lambdarank`,`multiclass`, alias=`objective`,`app`
* `regression`, regression application
* `regression_l2`, L2 loss, alias=`mean_squared_error`,`mse`
* `regression_l1`, L1 loss, alias=`mean_absolute_error`,`mae`
* `huber`, [Huber loss](https://en.wikipedia.org/wiki/Huber_loss "Huber loss - Wikipedia")
* `fair`, [Fair loss](https://www.kaggle.com/c/allstate-claims-severity/discussion/24520)
* `poisson`, [Poisson regression](https://en.wikipedia.org/wiki/Poisson_regression "Poisson regression")
* `binary`, binary classification application
* `lambdarank`, [lambdarank](https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf) application
* The label should be `int` type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect).
* `label_gain` can be used to set the gain (weight) of `int` labels.
* `multiclass`, multi-class classification application, should set `num_class` as well
* `boosting`, default=`gbdt`, type=enum, options=`gbdt`,`rf`,`dart`,`goss`, alias=`boost`,`boosting_type`
* `gbdt`, traditional Gradient Boosting Decision Tree
* `rf`, Random Forest
* `dart`, [Dropouts meet Multiple Additive Regression Trees](https://arxiv.org/abs/1505.01866)
* `goss`, Gradient-based One-Side Sampling
* ```data```, default=```""```, type=string, alias=```train```,```train_data```
* training data, LightGBM will train from this data
* ```valid```, default=```""```, type=multi-string, alias=```test```,```valid_data```,```test_data```
* validation/test data, LightGBM will output metrics for these data
* multiple validation data sets are supported; separate them with ```,```
* ```num_iterations```, default=```100```, type=int, alias=```num_iteration```,```num_tree```,```num_trees```,```num_round```,```num_rounds```
* number of boosting iterations/trees
* ```learning_rate```, default=```0.1```, type=double, alias=```shrinkage_rate```
* shrinkage rate
* ```num_leaves```, default=```31```, type=int, alias=```num_leaf```
* number of leaves in one tree
* ```tree_learner```, default=```serial```, type=enum, options=```serial```,```feature```,```data```
* ```serial```, single machine tree learner
* ```feature```, feature parallel tree learner
* ```data```, data parallel tree learner
* Refer to [Parallel Learning Guide](./Parallel-Learning-Guide.rst) to get more details.
* ```num_threads```, default=OpenMP_default, type=int, alias=```num_thread```,```nthread```
* Number of threads for LightGBM.
* For the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) to generate 2 threads per CPU core).
* For parallel learning, do not use all CPU cores, since this will cause poor network performance.
* ```max_depth```, default=```-1```, type=int
* Limit the max depth for the tree model. This is used to deal with over-fitting when #data is small. The tree still grows leaf-wise.
* ```< 0``` means no limit
* ```min_data_in_leaf```, default=```20```, type=int, alias=```min_data_per_leaf``` , ```min_data```
* Minimal number of data in one leaf. Can be used to deal with over-fitting.
* ```min_sum_hessian_in_leaf```, default=```1e-3```, type=double, alias=```min_sum_hessian_per_leaf```, ```min_sum_hessian```, ```min_hessian```
* Minimal sum of the Hessian in one leaf. Like ```min_data_in_leaf```, it can be used to deal with over-fitting.
For all parameters, please refer to [Parameters](./Parameters.md).
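Putting a few of these parameters together, a minimal training config file might look like the following (a hypothetical `train.conf`; the data file names are placeholders):
```
task = train
objective = binary
data = train.txt
valid = valid.txt
num_iterations = 100
learning_rate = 0.1
num_leaves = 31
```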
## Run LightGBM
For Windows:
```
lightgbm.exe config=your_config_file other_args ...
```
For Unix:
```
./lightgbm config=your_config_file other_args ...
```
Parameters can be set both in the config file and on the command line; parameters given on the command line have higher priority than those in the config file.
For example, the following command line will keep `num_trees=10` and ignore the same parameter in the config file.
```
./lightgbm config=train.conf num_trees=10
```
## Examples
* [Binary Classification](https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification)
* [Regression](https://github.com/Microsoft/LightGBM/tree/master/examples/regression)
* [Lambdarank](https://github.com/Microsoft/LightGBM/tree/master/examples/lambdarank)
* [Parallel Learning](https://github.com/Microsoft/LightGBM/tree/master/examples/parallel_learning)
Quick Start
===========
This is a quick start guide for the LightGBM CLI version.
Follow the `Installation Guide <./Installation-Guide.rst>`__ to install LightGBM first.
**List of other helpful links**
- `Parameters <./Parameters.rst>`__
- `Parameters Tuning <./Parameters-Tuning.rst>`__
- `Python-package Quick Start <./Python-Intro.rst>`__
- `Python API <./Python-API.rst>`__
Training Data Format
--------------------
LightGBM supports input data file with `CSV`_, `TSV`_ and `LibSVM`_ formats.
The label is the first column, and there is no header in the file.
Categorical Feature Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~
update 12/5/2016:
LightGBM can use categorical features directly (without one-hot encoding).
The experiment on `Expo data`_ shows about an 8x speed-up compared with one-hot encoding.
For the setting details, please refer to `Parameters <./Parameters.rst>`__.
Weight and Query/Group Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~
LightGBM also supports weighted training; it needs an additional `weight data <./Parameters.rst#io-parameters>`__ file.
Ranking tasks need an additional `query data <./Parameters.rst#io-parameters>`__ file.
update 11/3/2016:
1. input with a header is now supported
2. the label column, weight column and query/group id column can be specified,
   by either index or name
3. a list of columns to ignore can be specified
Parameter Quick Look
--------------------
The parameter format is ``key1=value1 key2=value2 ...``.
And parameters can be in both config file and command line.
Some important parameters:
- ``config``, default=\ ``""``, type=string, alias=\ ``config_file``
- path to config file
- ``task``, default=\ ``train``, type=enum, options=\ ``train``, ``prediction``
- ``train`` for training
- ``prediction`` for prediction
- ``application``, default=\ ``regression``, type=enum,
options=\ ``regression``, ``regression_l2``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``binary``, ``lambdarank``, ``multiclass``,
alias=\ ``objective``, ``app``
- ``regression``, regression application
- ``regression_l2``, L2 loss, alias=\ ``mean_squared_error``, ``mse``
- ``regression_l1``, L1 loss, alias=\ ``mean_absolute_error``, ``mae``
- ``huber``, `Huber loss`_
- ``fair``, `Fair loss`_
- ``poisson``, `Poisson regression`_
- ``binary``, binary classification application
- ``lambdarank``, `lambdarank`_ application
- the label should be ``int`` type in lambdarank tasks,
and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
- ``label_gain`` can be used to set the gain (weight) of ``int`` labels
- ``multiclass``, multi-class classification application, ``num_class`` should be set as well
- ``boosting``, default=\ ``gbdt``, type=enum,
options=\ ``gbdt``, ``rf``, ``dart``, ``goss``,
alias=\ ``boost``, ``boosting_type``
- ``gbdt``, traditional Gradient Boosting Decision Tree
- ``rf``, Random Forest
- ``dart``, `Dropouts meet Multiple Additive Regression Trees`_
- ``goss``, Gradient-based One-Side Sampling
- ``data``, default=\ ``""``, type=string, alias=\ ``train``, ``train_data``
- training data, LightGBM will train from this data
- ``valid``, default=\ ``""``, type=multi-string, alias=\ ``test``, ``valid_data``, ``test_data``
- validation/test data, LightGBM will output metrics for these data
- multiple validation data sets are supported; separate them with ``,``
- ``num_iterations``, default=\ ``100``, type=int,
alias=\ ``num_iteration``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``
- number of boosting iterations/trees
- ``learning_rate``, default=\ ``0.1``, type=double, alias=\ ``shrinkage_rate``
- shrinkage rate
- ``num_leaves``, default=\ ``31``, type=int, alias=\ ``num_leaf``
- number of leaves in one tree
- ``tree_learner``, default=\ ``serial``, type=enum, options=\ ``serial``, ``feature``, ``data``
- ``serial``, single machine tree learner
- ``feature``, feature parallel tree learner
- ``data``, data parallel tree learner
- refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details
- ``num_threads``, default=\ ``OpenMP_default``, type=int, alias=\ ``num_thread``, ``nthread``
- number of threads for LightGBM
- for the best speed, set this to the number of **real CPU cores**,
not the number of threads (most CPUs use `hyper-threading`_ to generate 2 threads per CPU core)
- for parallel learning, do not use all CPU cores, since this will cause poor network performance
- ``max_depth``, default=\ ``-1``, type=int
- limit the max depth for the tree model.
This is used to deal with over-fitting when ``#data`` is small.
The tree still grows leaf-wise
- ``< 0`` means no limit
- ``min_data_in_leaf``, default=\ ``20``, type=int, alias=\ ``min_data_per_leaf`` , ``min_data``
- minimal number of data in one leaf. Can be used to deal with over-fitting
- ``min_sum_hessian_in_leaf``, default=\ ``1e-3``, type=double,
alias=\ ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``
- minimal sum of the Hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting
For all parameters, please refer to `Parameters <./Parameters.rst>`__.
Run LightGBM
------------
For Windows:
::
lightgbm.exe config=your_config_file other_args ...
For Unix:
::
./lightgbm config=your_config_file other_args ...
Parameters can be set both in the config file and on the command line; parameters given on the command line have higher priority than those in the config file.
For example, the following command line will keep ``num_trees=10`` and ignore the same parameter in the config file.
::
./lightgbm config=train.conf num_trees=10
Examples
--------
- `Binary Classification <https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification>`__
- `Regression <https://github.com/Microsoft/LightGBM/tree/master/examples/regression>`__
- `Lambdarank <https://github.com/Microsoft/LightGBM/tree/master/examples/lambdarank>`__
- `Parallel Learning <https://github.com/Microsoft/LightGBM/tree/master/examples/parallel_learning>`__
.. _CSV: https://en.wikipedia.org/wiki/Comma-separated_values
.. _TSV: https://en.wikipedia.org/wiki/Tab-separated_values
.. _LibSVM: https://www.csie.ntu.edu.tw/~cjlin/libsvm/
.. _Expo data: http://stat-computing.org/dataexpo/2009/
.. _Huber loss: https://en.wikipedia.org/wiki/Huber_loss
.. _Fair loss: https://www.kaggle.com/c/allstate-claims-severity/discussion/24520
.. _Poisson regression: https://en.wikipedia.org/wiki/Poisson_regression
.. _lambdarank: https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf
.. _Dropouts meet Multiple Additive Regression Trees: https://arxiv.org/abs/1505.01866
.. _hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
# Documentation
Documentation for LightGBM is generated using [Sphinx](http://www.sphinx-doc.org/) and [recommonmark](https://recommonmark.readthedocs.io/).
After each commit on `master`, documentation is updated and published to [https://lightgbm.readthedocs.io/](https://lightgbm.readthedocs.io/).
## Build
You can build the documentation locally. Just run the following in the `docs` folder:
```sh
pip install -r requirements.txt
make html
```
Documentation
=============
Documentation for LightGBM is generated using `Sphinx <http://www.sphinx-doc.org/>`__.
After each commit on ``master``, documentation is updated and published to `Read the Docs <https://lightgbm.readthedocs.io/>`__.
Build
-----
You can build the documentation locally. Just run the following in the ``docs`` folder:
.. code:: sh
pip install sphinx sphinx_rtd_theme
make html
window.onload = function() { $(function() {
    /* Replace '.md' with '.html' in all internal links like './[Something].md' */
    $('a[href^="./"][href$=".md"]').attr('href', (i, val) => { return val.replace('.md', '.html'); });
    /* Replace '.rst' with '.html' in all internal links like './[Something].rst[#anchor]' */
    $('a[href^="./"][href*=".rst"]').attr('href', (i, val) => { return val.replace('.rst', '.html'); });
    /* Replace '.rst' with '.html' in all internal links like './[Something].rst' */
    $('a[href^="./"][href$=".rst"]').attr('href', (i, val) => { return val.replace('.rst', '.html'); });
    /* Let the content area use the full page width */
    $('.wy-nav-content').each(function () { this.style.setProperty('max-width', 'none', 'important'); });
}); };
@@ -19,14 +19,13 @@
 #
 import os
 import sys
+import sphinx
+from sphinx.errors import VersionRequirementError

 curr_path = os.path.dirname(os.path.realpath(__file__))
 libpath = os.path.join(curr_path, '../python-package/')
 sys.path.insert(0, libpath)
-from recommonmark.parser import CommonMarkParser
-from recommonmark.transform import AutoStructify

 # -- mock out modules
 from unittest.mock import Mock
 MOCK_MODULES = [
@@ -42,8 +41,10 @@ for mod_name in MOCK_MODULES:
 os.environ['LIGHTGBM_BUILD_DOC'] = '1'

 # If your documentation needs a minimal Sphinx version, state it here.
-# needs_sphinx = '1.0'
+needs_sphinx = '1.3'  # Due to sphinx.ext.napoleon
+if needs_sphinx > sphinx.__version__:
+    message = 'This project needs at least Sphinx v%s' % needs_sphinx
+    raise VersionRequirementError(message)

 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
@@ -60,10 +61,7 @@ templates_path = ['_templates']
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
-source_parsers = {
-    '.md': CommonMarkParser,
-}
-source_suffix = ['.rst', '.md']
+# source_suffix = ['.rst', '.md']

 # The master toctree document.
 master_doc = 'index'
@@ -151,20 +149,20 @@ latex_elements = {
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title,
 #  author, documentclass [howto, manual, or own class]).
-latex_documents = [
-    (master_doc, 'LightGBM.tex', 'LightGBM Documentation',
-     'Microsoft Corporation', 'manual'),
-]
+# latex_documents = [
+#     (master_doc, 'LightGBM.tex', 'LightGBM Documentation',
+#      'Microsoft Corporation', 'manual'),
+# ]

 # -- Options for manual page output ---------------------------------------

 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
-man_pages = [
-    (master_doc, 'lightgbm', 'LightGBM Documentation',
-     [author], 1)
-]
+# man_pages = [
+#     (master_doc, 'lightgbm', 'LightGBM Documentation',
+#      [author], 1)
+# ]

 # -- Options for Texinfo output -------------------------------------------
@@ -172,19 +170,12 @@ man_pages = [
 # Grouping the document tree into Texinfo files. List of tuples
 # (source start file, target name, title, author,
 #  dir menu entry, description, category)
-texinfo_documents = [
-    (master_doc, 'LightGBM', 'LightGBM Documentation',
-     author, 'LightGBM', 'One line description of project.',
-     'Miscellaneous'),
-]
+# texinfo_documents = [
+#     (master_doc, 'LightGBM', 'LightGBM Documentation',
+#      author, 'LightGBM', 'One line description of project.',
+#      'Miscellaneous'),
+# ]

-# https://recommonmark.readthedocs.io/en/latest/
-github_doc_root = 'https://github.com/Microsoft/LightGBM/tree/master/docs/'
 def setup(app):
-    app.add_config_value('recommonmark_config', {
-        'url_resolver': lambda url: github_doc_root + url,
-        'auto_toc_tree_section': 'Contents',
-    }, True)
-    app.add_transform(AutoStructify)
     app.add_javascript("js/rst_links_fix.js")
Recommendations When Using gcc
==============================
It is recommended to use ``-O3 -mtune=native`` to achieve maximum speed during LightGBM training.
Using an Intel Ivy Bridge CPU on the 1M x 1K Bosch dataset, the performance increases as follows:
+-------------------------------------+---------------------+
| Compilation Flag | Performance Index |
+=====================================+=====================+
| ``-O2 -mtune=core2`` | 100.00% |
+-------------------------------------+---------------------+
| ``-O2 -mtune=native`` | 100.90% |
+-------------------------------------+---------------------+
| ``-O3 -mtune=native`` | 102.78% |
+-------------------------------------+---------------------+
| ``-O3 -ffast-math -mtune=native`` | 100.64% |
+-------------------------------------+---------------------+
You can find more details on the experimentation below:
- `Laurae++/Benchmarks <https://sites.google.com/view/lauraepp/new-benchmarks/old-benchmarks>`__
- `Laurae2/gbt\_benchmarks <https://github.com/Laurae2/gbt_benchmarks>`__
- `Laurae's Benchmark Master Data (Interactive) <https://public.tableau.com/views/gbt_benchmarks/Master-Data?:showVizHome=no>`__
- `Kaggle Paris Meetup #12 Slides <https://drive.google.com/file/d/0B6qJBmoIxFe0ZHNCOXdoRWMxUm8/view>`__
Some explanatory pictures:
.. image:: ./_static/images/gcc-table.png
:align: center
.. image:: ./_static/images/gcc-bars.png
:align: center
.. image:: ./_static/images/gcc-chart.png
:align: center
.. image:: ./_static/images/gcc-comparison-1.png
:align: center
.. image:: ./_static/images/gcc-comparison-2.png
:align: center
.. image:: ./_static/images/gcc-meetup-1.png
:align: center
.. image:: ./_static/images/gcc-meetup-2.png
:align: center
# Recommendations when using gcc
It is recommended to use `-O3 -mtune=native` to achieve maximum speed during LightGBM training.
Using an Intel Ivy Bridge CPU on the 1M x 1K Bosch dataset, the performance increases as follows:
| Compilation Flag | Performance Index |
| --- | ---: |
| `-O2 -mtune=core2` | 100.00% |
| `-O2 -mtune=native` | 100.90% |
| `-O3 -mtune=native` | 102.78% |
| `-O3 -ffast-math -mtune=native` | 100.64% |
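One way to try these flags when compiling LightGBM from source is to pass them through CMake (a sketch assuming a standard out-of-source CMake build, not an official build recipe):
```sh
mkdir build && cd build
cmake -DCMAKE_CXX_FLAGS="-O3 -mtune=native" ..
make -j4
```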
You can find more details on the experimentation below:
* [Laurae++/Benchmarks](https://sites.google.com/view/lauraepp/benchmarks)
* [Laurae2/gbt_benchmarks](https://github.com/Laurae2/gbt_benchmarks)
* [Laurae's Benchmark Master Data (Interactive)](https://public.tableau.com/views/gbt_benchmarks/Master-Data?:showVizHome=no)
* [Kaggle Paris Meetup #12 Slides](https://drive.google.com/file/d/0B6qJBmoIxFe0ZHNCOXdoRWMxUm8/view)
Some explanatory pictures:
![gcc table](https://cloud.githubusercontent.com/assets/9083669/26027337/c376e22e-380c-11e7-91bc-fe0a333c03e9.png)
![gcc bars](https://cloud.githubusercontent.com/assets/9083669/26027338/d1caebcc-380c-11e7-864e-d704b39f1e63.png)
![gcc chart](https://cloud.githubusercontent.com/assets/9083669/26027353/e1bdb866-380c-11e7-97b5-22c7eac349b2.png)
![gcc comparison 1](https://cloud.githubusercontent.com/assets/9083669/26027401/c31f2f74-380d-11e7-857a-f5119791bed7.png)
![gcc comparison 2](https://cloud.githubusercontent.com/assets/9083669/26027486/d7d7e72a-380e-11e7-86c3-ccbbf42a9c55.png)
![gcc meetup 1](https://cloud.githubusercontent.com/assets/9083669/26027427/21b38f44-380e-11e7-9c95-05437782dd46.png)
![gcc meetup 2](https://cloud.githubusercontent.com/assets/9083669/26027433/362be250-380e-11e7-8982-76ac167bcd3e.png)
@@ -12,17 +12,17 @@ Welcome to LightGBM's documentation!
    Installation Guide <Installation-Guide>
    Quick Start <Quick-Start>
-   Python Quick Start <Python-intro>
+   Python Quick Start <Python-Intro>
    Features <Features>
    Experiments <Experiments>
    Parameters <Parameters>
-   Parameters Tuning <Parameters-tuning>
+   Parameters Tuning <Parameters-Tuning>
    Python API <Python-API>
    Parallel Learning Guide <Parallel-Learning-Guide>
    GPU Tutorial <GPU-Tutorial>
-   Advanced Topics <Advanced-Topic>
+   Advanced Topics <Advanced-Topics>
    FAQ <FAQ>
-   Development Guide <development>
+   Development Guide <Development-Guide>

 Indices and Tables
 ==================