Commit 4aa32967 authored by Nikita Titov's avatar Nikita Titov Committed by Tsukasa OMOTO

[docs] documentation improvement (#976)

* fixed typos and hotfixes

* converted gcc-tips.Rmd; added ref to gcc-tips

* renamed files

* renamed Advanced-Topics

* renamed README

* renamed Parameters-Tuning

* renamed FAQ

* fixed refs to FAQ

* fixed undecodable source characters

* renamed Features

* renamed Quick-Start

* fixed undecodable source characters in Features

* renamed Python-Intro

* renamed GPU-Tutorial

* renamed GPU-Windows

* fixed markdown

* fixed undecodable source characters in GPU-Windows

* renamed Parameters

* fixed markdown

* removed recommonmark dependence

* hotfixes

* added anchors to links

* fixed 404

* fixed typos

* added more anchors

* removed sphinxcontrib-napoleon dependence

* removed outdated line in Travis config

* fixed max-width of the ReadTheDocs theme

* added horizontal align to images
parent 12257feb
Python Package Introduction
===========================
This document gives a basic walkthrough of the LightGBM Python-package.
**List of other helpful links**
- `Python Examples <https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide>`__
- `Python API <./Python-API.rst>`__
- `Parameters Tuning <./Parameters-Tuning.rst>`__
Install
-------
Install the Python-package dependencies.
``setuptools``, ``wheel``, ``numpy`` and ``scipy`` are required; ``scikit-learn`` is required for the sklearn interface and is recommended:
::
pip install setuptools wheel numpy scipy scikit-learn -U
Refer to `Python-package`_ folder for the installation guide.
To verify your installation, try to ``import lightgbm`` in Python:
::
import lightgbm as lgb
Data Interface
--------------
The LightGBM Python module is able to load data from:
- libsvm/tsv/csv txt format file
- Numpy 2D array, pandas object
- LightGBM binary file
The data is stored in a ``Dataset`` object.
**To load a libsvm text file or a LightGBM binary file into Dataset:**
.. code:: python
train_data = lgb.Dataset('train.svm.bin')
**To load a numpy array into Dataset:**
.. code:: python
data = np.random.rand(500, 10) # 500 entities, each contains 10 features
label = np.random.randint(2, size=500) # binary target
train_data = lgb.Dataset(data, label=label)
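Since pandas objects are also supported, a ``DataFrame`` can be passed in the same way (a minimal sketch, assuming pandas is installed and reusing the array from above):
.. code:: python
import pandas as pd
df = pd.DataFrame(data)  # wrap the numpy array from above in a DataFrame
train_data = lgb.Dataset(df, label=label)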
**To load a scipy.sparse.csr\_matrix array into Dataset:**
.. code:: python
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
**Saving Dataset into a LightGBM binary file will make loading faster:**
.. code:: python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
**Create validation data:**
.. code:: python
test_data = train_data.create_valid('test.svm')
or
.. code:: python
test_data = lgb.Dataset('test.svm', reference=train_data)
In LightGBM, the validation data should be aligned with training data.
**Specific feature names and categorical features:**
.. code:: python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
LightGBM can use categorical features as input directly;
there is no need to convert them to one-hot encoding, which makes it much faster than one-hot encoding (about 8x speed-up).
**Note**: You should convert your categorical features to ``int`` type before you construct ``Dataset``.
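For example, a string column could be encoded to ``int`` codes with pandas before constructing the ``Dataset`` (a hypothetical sketch; ``raw_col`` stands for your own array of string categories):
.. code:: python
import pandas as pd
codes, uniques = pd.factorize(raw_col)  # raw_col is hypothetical: an array-like of strings
data[:, 2] = codes  # store the int codes as feature 'c3'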
**Weights can be set when needed:**
.. code:: python
w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)
or
.. code:: python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500, )
train_data.set_weight(w)
And you can use ``Dataset.set_init_score()`` to set initial score, and ``Dataset.set_group()`` to set group/query data for ranking tasks.
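For instance, a minimal sketch for a ranking task (the group sizes below are made up; they must sum to the number of rows):
.. code:: python
train_data = lgb.Dataset(data, label=label)
train_data.set_init_score(np.zeros(500))  # start boosting from score 0 for every row
train_data.set_group([10, 490])  # two queries: the first 10 rows, then the remaining 490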
**Memory-efficient usage:**
The ``Dataset`` object in LightGBM is very memory-efficient, because it only needs to save the discrete bins.
However, NumPy/Array/Pandas objects cost a lot of memory.
If you are concerned about your memory consumption, you can save memory as follows:
1. Let ``free_raw_data=True`` (default is ``True``) when constructing the ``Dataset``
2. Explicitly set ``raw_data=None`` after the ``Dataset`` has been constructed
3. Call ``gc``
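Put together, a minimal sketch of these steps (reusing the NumPy array ``data`` from above):
.. code:: python
import gc
train_data = lgb.Dataset(data, label=label, free_raw_data=True)
data = None  # drop your own reference to the raw array as well
gc.collect()  # ask the garbage collector to reclaim the memory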
Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set `Parameters <./Parameters.rst>`__.
For instance:
- Booster parameters:
.. code:: python
param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
param['metric'] = 'auc'
- You can also specify multiple eval metrics:
.. code:: python
param['metric'] = ['auc', 'binary_logloss']
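The list-of-pairs form mentioned above is equivalent; a sketch of the same parameters written that way, with the repeated ``metric`` key carrying both metrics:
.. code:: python
param = [('num_leaves', 31), ('num_trees', 100), ('objective', 'binary'),
('metric', 'auc'), ('metric', 'binary_logloss')]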
Training
--------
Training a model requires a parameter list and data set:
.. code:: python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
After training, the model can be saved:
.. code:: python
bst.save_model('model.txt')
The trained model can also be dumped to JSON format:
.. code:: python
json_model = bst.dump_model()
A saved model can be loaded:
.. code:: python
bst = lgb.Booster(model_file='model.txt')  # init model
CV
--
Training with 5-fold CV:
.. code:: python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
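The call returns the evaluation history; a sketch of inspecting it (the key names assume the ``auc`` metric set above):
.. code:: python
cv_results = lgb.cv(param, train_data, num_round, nfold=5)
print('Best mean AUC: %f' % max(cv_results['auc-mean']))  # per-iteration means per metric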
Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in ``valid_sets``. If there is more than one, it will use all of them:
.. code:: python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
The model will train until the validation score stops improving.
The validation score needs to improve at least once every ``early_stopping_rounds`` rounds to continue training.
If early stopping occurs, the model will have an additional field: ``bst.best_iteration``.
Note that ``train()`` will return a model from the last iteration, not the best one.
And you can set ``num_iteration=bst.best_iteration`` when saving the model.
This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC).
Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
Prediction
----------
A model that has been trained or loaded can perform predictions on data sets:
.. code:: python
# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
If early stopping is enabled during training, you can get predictions from the best iteration with ``bst.best_iteration``:
.. code:: python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
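With the ``binary`` objective used above, predictions are probabilities; a sketch of turning them into class labels (the 0.5 threshold is an illustrative choice, not part of the API):
.. code:: python
ypred_class = (ypred > 0.5).astype(int)  # 1 where the predicted probability exceeds 0.5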
.. _Python-package: https://github.com/Microsoft/LightGBM/tree/master/python-package
Python Package Introduction
===========================
This document gives a basic walkthrough of the LightGBM Python-package.
***List of other Helpful Links***
* [Python Examples](https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide)
* [Python API](./Python-API.rst)
* [Parameters Tuning](./Parameters-tuning.md)
Install
-------
Install the Python-package dependencies. `setuptools`, `wheel`, `numpy` and `scipy` are required; `scikit-learn` is required for the sklearn interface and is recommended:
```
pip install setuptools wheel numpy scipy scikit-learn -U
```
Refer to [Python-package](https://github.com/Microsoft/LightGBM/tree/master/python-package) folder for the installation guide.
To verify your installation, try to `import lightgbm` in Python:
```
import lightgbm as lgb
```
Data Interface
--------------
The LightGBM Python module is able to load data from:
- libsvm/tsv/csv txt format file
- Numpy 2D array, pandas object
- LightGBM binary file
The data is stored in a ```Dataset``` object.
#### To load a libsvm text file or a LightGBM binary file into ```Dataset```:
```python
train_data = lgb.Dataset('train.svm.bin')
```
#### To load a numpy array into ```Dataset```:
```python
data = np.random.rand(500, 10) # 500 entities, each contains 10 features
label = np.random.randint(2, size=500) # binary target
train_data = lgb.Dataset(data, label=label)
```
#### To load a scipy.sparse.csr_matrix array into ```Dataset```:
```python
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
```
#### Saving ```Dataset``` into a LightGBM binary file will make loading faster:
```python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
```
#### Create validation data:
```python
test_data = train_data.create_valid('test.svm')
```
or
```python
test_data = lgb.Dataset('test.svm', reference=train_data)
```
In LightGBM, the validation data should be aligned with training data.
#### Specific feature names and categorical features:
```python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
```
LightGBM can use categorical features as input directly; there is no need to convert them to one-hot encoding, which makes it much faster than one-hot encoding (about 8x speed-up).
**Note**: You should convert your categorical features to `int` type before you construct `Dataset`.
#### Weights can be set when needed:
```python
w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)
```
or
```python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500, )
train_data.set_weight(w)
```
And you can use `Dataset.set_init_score()` to set initial score, and `Dataset.set_group()` to set group/query data for ranking tasks.
#### Memory-efficient usage
The `Dataset` object in LightGBM is very memory-efficient, because it only needs to save the discrete bins.
However, NumPy/Array/Pandas objects cost a lot of memory. If you are concerned about your memory consumption, you can save memory as follows:
1. Let ```free_raw_data=True``` (default is ```True```) when constructing the ```Dataset```
2. Explicitly set ```raw_data=None``` after the ```Dataset``` has been constructed
3. Call ```gc```
Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set [Parameters](./Parameters.md). For instance:
* Booster parameters:
```python
param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
param['metric'] = 'auc'
```
* You can also specify multiple eval metrics:
```python
param['metric'] = ['auc', 'binary_logloss']
```
Training
--------
Training a model requires a parameter list and data set.
```python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
```
After training, the model can be saved.
```python
bst.save_model('model.txt')
```
The trained model can also be dumped to JSON format.
```python
# dump model
json_model = bst.dump_model()
```
A saved model can be loaded.
```python
bst = lgb.Booster(model_file='model.txt')  # init model
```
CV
--
Training with 5-fold CV:
```python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
```
Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in `valid_sets`. If there's more than one, it will use all of them.
```python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
```
The model will train until the validation score stops improving. The validation score needs to improve at least once every `early_stopping_rounds` rounds to continue training.
If early stopping occurs, the model will have an additional field: `bst.best_iteration`. Note that `train()` will return a model from the last iteration, not the best one. And you can set `num_iteration=bst.best_iteration` when saving the model.
This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC). Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
Prediction
----------
A model that has been trained or loaded can perform predictions on data sets.
```python
# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
```
If early stopping is enabled during training, you can get predictions from the best iteration with `bst.best_iteration`:
```python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
```
# Quick Start
This is a quick start guide for the LightGBM CLI version.
Follow the [Installation Guide](./Installation-Guide.rst) to install LightGBM first.
***List of other Helpful Links***
* [Parameters](./Parameters.md)
* [Parameters Tuning](./Parameters-tuning.md)
* [Python-package Quick Start](./Python-intro.md)
* [Python API](./Python-API.rst)
## Training Data Format
LightGBM supports input data file with [CSV](https://en.wikipedia.org/wiki/Comma-separated_values), [TSV](https://en.wikipedia.org/wiki/Tab-separated_values) and [LibSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) formats.
The label is the data in the first column, and there is no header in the file.
### Categorical Feature Support
update 12/5/2016:
LightGBM can use categorical feature directly (without one-hot coding). The experiment on [Expo data](http://stat-computing.org/dataexpo/2009/) shows about 8x speed-up compared with one-hot coding.
For the setting details, please refer to [Parameters](./Parameters.md).
### Weight and Query/Group Data
LightGBM also supports weighted training; it needs additional [weight data](./Parameters.md). And it needs additional [query data](./Parameters.md) for ranking tasks.
update 11/3/2016:
1. supports input with header now
2. can specify label column, weight column and query/group id column; both index and column name are supported
3. can specify a list of ignored columns
## Parameter Quick Look
The parameter format is ```key1=value1 key2=value2 ...```. Parameters can be set both in the config file and on the command line.
Some important parameters:
* ```config```, default=```""```, type=string, alias=```config_file```
* path of config file
* ```task```, default=```train```, type=enum, options=```train```,```prediction```
* ```train``` for training
* ```prediction``` for prediction.
* `application`, default=`regression`, type=enum, options=`regression`,`regression_l2`,`regression_l1`,`huber`,`fair`,`poisson`,`binary`,`lambdarank`,`multiclass`, alias=`objective`,`app`
* `regression`, regression application
* `regression_l2`, L2 loss, alias=`mean_squared_error`,`mse`
* `regression_l1`, L1 loss, alias=`mean_absolute_error`,`mae`
* `huber`, [Huber loss](https://en.wikipedia.org/wiki/Huber_loss "Huber loss - Wikipedia")
* `fair`, [Fair loss](https://www.kaggle.com/c/allstate-claims-severity/discussion/24520)
* `poisson`, [Poisson regression](https://en.wikipedia.org/wiki/Poisson_regression "Poisson regression")
* `binary`, binary classification application
* `lambdarank`, [lambdarank](https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf) application
  * The label should be `int` type in lambdarank tasks, and a larger number represents higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect).
  * `label_gain` can be used to set the gain (weight) of `int` label.
* `multiclass`, multi-class classification application, should set `num_class` as well
* `boosting`, default=`gbdt`, type=enum, options=`gbdt`,`rf`,`dart`,`goss`, alias=`boost`,`boosting_type`
* `gbdt`, traditional Gradient Boosting Decision Tree
* `rf`, Random Forest
* `dart`, [Dropouts meet Multiple Additive Regression Trees](https://arxiv.org/abs/1505.01866)
* `goss`, Gradient-based One-Side Sampling
* ```data```, default=```""```, type=string, alias=```train```,```train_data```
* training data, LightGBM will train from this data
* ```valid```, default=```""```, type=multi-string, alias=```test```,```valid_data```,```test_data```
* validation/test data, LightGBM will output metrics for these data
  * supports multiple validation data, separated by ```,```
* ```num_iterations```, default=```100```, type=int, alias=```num_iteration```,```num_tree```,```num_trees```,```num_round```,```num_rounds```
* number of boosting iterations/trees
* ```learning_rate```, default=```0.1```, type=double, alias=```shrinkage_rate```
* shrinkage rate
* ```num_leaves```, default=```31```, type=int, alias=```num_leaf```
* number of leaves in one tree
* ```tree_learner```, default=```serial```, type=enum, options=```serial```,```feature```,```data```
* ```serial```, single machine tree learner
* ```feature```, feature parallel tree learner
* ```data```, data parallel tree learner
* Refer to [Parallel Learning Guide](./Parallel-Learning-Guide.rst) to get more details.
* ```num_threads```, default=OpenMP_default, type=int, alias=```num_thread```,```nthread```
  * Number of threads for LightGBM.
  * For the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) to generate 2 threads per CPU core).
  * For parallel learning, do not use all CPU cores, since this will cause poor network performance.
* ```max_depth```, default=```-1```, type=int
  * Limit the max depth of the tree model. This is used to deal with over-fitting when `#data` is small. The tree still grows leaf-wise.
* ```< 0``` means no limit
* ```min_data_in_leaf```, default=```20```, type=int, alias=```min_data_per_leaf``` , ```min_data```
  * Minimal number of data in one leaf. Can use this to deal with over-fitting.
* ```min_sum_hessian_in_leaf```, default=```1e-3```, type=double, alias=```min_sum_hessian_per_leaf```, ```min_sum_hessian```, ```min_hessian```
  * Minimal sum hessian in one leaf. Like ```min_data_in_leaf```, can be used to deal with over-fitting.
For all parameters, please refer to [Parameters](./Parameters.md).
## Run LightGBM
For Windows:
```
lightgbm.exe config=your_config_file other_args ...
```
For Unix:
```
./lightgbm config=your_config_file other_args ...
```
Parameters can be set both in the config file and on the command line; parameters given on the command line have higher priority than those in the config file.
For example, the following command will keep `num_trees=10` and ignore the same parameter in the config file.
```
./lightgbm config=train.conf num_trees=10
```
## Examples
* [Binary Classification](https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification)
* [Regression](https://github.com/Microsoft/LightGBM/tree/master/examples/regression)
* [Lambdarank](https://github.com/Microsoft/LightGBM/tree/master/examples/lambdarank)
* [Parallel Learning](https://github.com/Microsoft/LightGBM/tree/master/examples/parallel_learning)
Quick Start
===========
This is a quick start guide for the LightGBM CLI version.
Follow the `Installation Guide <./Installation-Guide.rst>`__ to install LightGBM first.
**List of other helpful links**
- `Parameters <./Parameters.rst>`__
- `Parameters Tuning <./Parameters-Tuning.rst>`__
- `Python-package Quick Start <./Python-Intro.rst>`__
- `Python API <./Python-API.rst>`__
Training Data Format
--------------------
LightGBM supports input data file with `CSV`_, `TSV`_ and `LibSVM`_ formats.
The label is the data in the first column, and there is no header in the file.
Categorical Feature Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~
update 12/5/2016:
LightGBM can use categorical feature directly (without one-hot coding).
The experiment on `Expo data`_ shows about 8x speed-up compared with one-hot coding.
For the setting details, please refer to `Parameters <./Parameters.rst>`__.
Weight and Query/Group Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~
LightGBM also supports weighted training; it needs additional `weight data <./Parameters.rst#io-parameters>`__.
And it needs additional `query data <./Parameters.rst#io-parameters>`_ for ranking tasks.
update 11/3/2016:
1. supports input with header now
2. can specify label column, weight column and query/group id column.
   Both index and column name are supported
3. can specify a list of ignored columns
Parameter Quick Look
--------------------
The parameter format is ``key1=value1 key2=value2 ...``.
Parameters can be set both in the config file and on the command line.
Some important parameters:
- ``config``, default=\ ``""``, type=string, alias=\ ``config_file``
- path to config file
- ``task``, default=\ ``train``, type=enum, options=\ ``train``, ``prediction``
- ``train`` for training
- ``prediction`` for prediction
- ``application``, default=\ ``regression``, type=enum,
options=\ ``regression``, ``regression_l2``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``binary``, ``lambdarank``, ``multiclass``,
alias=\ ``objective``, ``app``
- ``regression``, regression application
- ``regression_l2``, L2 loss, alias=\ ``mean_squared_error``, ``mse``
- ``regression_l1``, L1 loss, alias=\ ``mean_absolute_error``, ``mae``
- ``huber``, `Huber loss`_
- ``fair``, `Fair loss`_
- ``poisson``, `Poisson regression`_
- ``binary``, binary classification application
- ``lambdarank``, `lambdarank`_ application
- the label should be ``int`` type in lambdarank tasks,
and a larger number represents higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
- ``label_gain`` can be used to set the gain (weight) of ``int`` label.
- ``multiclass``, multi-class classification application, ``num_class`` should be set as well
- ``boosting``, default=\ ``gbdt``, type=enum,
options=\ ``gbdt``, ``rf``, ``dart``, ``goss``,
alias=\ ``boost``, ``boosting_type``
- ``gbdt``, traditional Gradient Boosting Decision Tree
- ``rf``, Random Forest
- ``dart``, `Dropouts meet Multiple Additive Regression Trees`_
- ``goss``, Gradient-based One-Side Sampling
- ``data``, default=\ ``""``, type=string, alias=\ ``train``, ``train_data``
- training data, LightGBM will train from this data
- ``valid``, default=\ ``""``, type=multi-string, alias=\ ``test``, ``valid_data``, ``test_data``
- validation/test data, LightGBM will output metrics for these data
- supports multiple validation data, separated by ``,``
- ``num_iterations``, default=\ ``100``, type=int,
alias=\ ``num_iteration``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``
- number of boosting iterations/trees
- ``learning_rate``, default=\ ``0.1``, type=double, alias=\ ``shrinkage_rate``
- shrinkage rate
- ``num_leaves``, default=\ ``31``, type=int, alias=\ ``num_leaf``
- number of leaves in one tree
- ``tree_learner``, default=\ ``serial``, type=enum, options=\ ``serial``, ``feature``, ``data``
- ``serial``, single machine tree learner
- ``feature``, feature parallel tree learner
- ``data``, data parallel tree learner
- refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details
- ``num_threads``, default=\ ``OpenMP_default``, type=int, alias=\ ``num_thread``, ``nthread``
- number of threads for LightGBM
- for the best speed, set this to the number of **real CPU cores**,
not the number of threads (most CPUs use `hyper-threading`_ to generate 2 threads per CPU core)
- for parallel learning, do not use all CPU cores, since this will cause poor network performance
- ``max_depth``, default=\ ``-1``, type=int
- limit the max depth of the tree model.
This is used to deal with over-fitting when ``#data`` is small.
The tree still grows leaf-wise
- ``< 0`` means no limit
- ``min_data_in_leaf``, default=\ ``20``, type=int, alias=\ ``min_data_per_leaf`` , ``min_data``
- minimal number of data in one leaf. Can use this to deal with over-fitting
- ``min_sum_hessian_in_leaf``, default=\ ``1e-3``, type=double,
alias=\ ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``
- minimal sum hessian in one leaf. Like ``min_data_in_leaf``, can be used to deal with over-fitting
For all parameters, please refer to `Parameters <./Parameters.rst>`__.
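Putting some of these parameters together, a hypothetical ``train.conf`` could look like the following (file names and values are made up for illustration):
::
task=train
application=binary
data=train.svm.txt
valid=test.svm.txt
num_trees=100
learning_rate=0.1
num_leaves=31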
Run LightGBM
------------
For Windows:
::
lightgbm.exe config=your_config_file other_args ...
For Unix:
::
./lightgbm config=your_config_file other_args ...
Parameters can be set both in the config file and on the command line; parameters given on the command line have higher priority than those in the config file.
For example, the following command will keep ``num_trees=10`` and ignore the same parameter in the config file.
::
./lightgbm config=train.conf num_trees=10
Examples
--------
- `Binary Classification <https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification>`__
- `Regression <https://github.com/Microsoft/LightGBM/tree/master/examples/regression>`__
- `Lambdarank <https://github.com/Microsoft/LightGBM/tree/master/examples/lambdarank>`__
- `Parallel Learning <https://github.com/Microsoft/LightGBM/tree/master/examples/parallel_learning>`__
.. _CSV: https://en.wikipedia.org/wiki/Comma-separated_values
.. _TSV: https://en.wikipedia.org/wiki/Tab-separated_values
.. _LibSVM: https://www.csie.ntu.edu.tw/~cjlin/libsvm/
.. _Expo data: http://stat-computing.org/dataexpo/2009/
.. _Huber loss: https://en.wikipedia.org/wiki/Huber_loss
.. _Fair loss: https://www.kaggle.com/c/allstate-claims-severity/discussion/24520
.. _Poisson regression: https://en.wikipedia.org/wiki/Poisson_regression
.. _lambdarank: https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf
.. _Dropouts meet Multiple Additive Regression Trees: https://arxiv.org/abs/1505.01866
.. _hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
# Documentation
Documentation for LightGBM is generated using [Sphinx](http://www.sphinx-doc.org/) and [recommonmark](https://recommonmark.readthedocs.io/).
After each commit on `master`, documentation is updated and published to [https://lightgbm.readthedocs.io/](https://lightgbm.readthedocs.io/).
## Build
You can build the documentation locally. Just run the following in the `docs` folder:
```sh
pip install -r requirements.txt
make html
```
Documentation
=============
Documentation for LightGBM is generated using `Sphinx <http://www.sphinx-doc.org/>`__.
After each commit on ``master``, documentation is updated and published to `Read the Docs <https://lightgbm.readthedocs.io/>`__.
Build
-----
You can build the documentation locally. Just run the following in the ``docs`` folder:
.. code:: sh
pip install sphinx sphinx_rtd_theme
make html
window.onload = function() {
$('a[href^="./"][href$=".md"]').attr('href', (i, val) => { return val.replace('.md', '.html'); }); /* Replace '.md' with '.html' in all internal links like './[Something].md' */
$('a[href^="./"][href$=".rst"]').attr('href', (i, val) => { return val.replace('.rst', '.html'); }); /* Replace '.rst' with '.html' in all internal links like './[Something].rst' */
}
$(function() {
$('a[href^="./"][href*=".rst"]').attr('href', (i, val) => { return val.replace('.rst', '.html'); }); /* Replace '.rst' with '.html' in all internal links like './[Something].rst[#anchor]' */
$('.wy-nav-content').each(function () { this.style.setProperty('max-width', 'none', 'important'); });
});
@@ -19,14 +19,13 @@
#
import os
import sys
import sphinx
from sphinx.errors import VersionRequirementError
curr_path = os.path.dirname(os.path.realpath(__file__))
libpath = os.path.join(curr_path, '../python-package/')
sys.path.insert(0, libpath)
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify
# -- mock out modules
from unittest.mock import Mock
MOCK_MODULES = [
@@ -42,8 +41,10 @@ for mod_name in MOCK_MODULES:
os.environ['LIGHTGBM_BUILD_DOC'] = '1'
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
needs_sphinx = '1.3' # Due to sphinx.ext.napoleon
if needs_sphinx > sphinx.__version__:
message = 'This project needs at least Sphinx v%s' % needs_sphinx
raise VersionRequirementError(message)
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
@@ -60,10 +61,7 @@ templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
source_parsers = {
'.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
# source_suffix = ['.rst', '.md']
# The master toctree document.
master_doc = 'index'
@@ -151,20 +149,20 @@ latex_elements = {
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'LightGBM.tex', 'LightGBM Documentation',
'Microsoft Corporation', 'manual'),
]
# latex_documents = [
# (master_doc, 'LightGBM.tex', 'LightGBM Documentation',
# 'Microsoft Corporation', 'manual'),
# ]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'lightgbm', 'LightGBM Documentation',
[author], 1)
]
# man_pages = [
# (master_doc, 'lightgbm', 'LightGBM Documentation',
# [author], 1)
# ]
# -- Options for Texinfo output -------------------------------------------
@@ -172,19 +170,12 @@ man_pages = [
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'LightGBM', 'LightGBM Documentation',
author, 'LightGBM', 'One line description of project.',
'Miscellaneous'),
]
# texinfo_documents = [
# (master_doc, 'LightGBM', 'LightGBM Documentation',
# author, 'LightGBM', 'One line description of project.',
# 'Miscellaneous'),
# ]
# https://recommonmark.readthedocs.io/en/latest/
github_doc_root = 'https://github.com/Microsoft/LightGBM/tree/master/docs/'
def setup(app):
app.add_config_value('recommonmark_config', {
'url_resolver': lambda url: github_doc_root + url,
'auto_toc_tree_section': 'Contents',
}, True)
app.add_transform(AutoStructify)
app.add_javascript("js/rst_links_fix.js")
Recommendations When Using gcc
==============================
It is recommended to use ``-O3 -mtune=native`` to achieve maximum speed during LightGBM training.
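For example, when building LightGBM with CMake, these flags could be passed as in the sketch below (assuming a typical out-of-source build; adjust to your environment):
.. code:: sh
mkdir build && cd build
cmake -DCMAKE_CXX_FLAGS="-O3 -mtune=native" ..
make -j4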
Using an Intel Ivy Bridge CPU on the 1M x 1K Bosch dataset, the performance increases as follows:
+-------------------------------------+---------------------+
| Compilation Flag | Performance Index |
+=====================================+=====================+
| ``-O2 -mtune=core2`` | 100.00% |
+-------------------------------------+---------------------+
| ``-O2 -mtune=native`` | 100.90% |
+-------------------------------------+---------------------+
| ``-O3 -mtune=native`` | 102.78% |
+-------------------------------------+---------------------+
| ``-O3 -ffast-math -mtune=native`` | 100.64% |
+-------------------------------------+---------------------+
You can find more details on the experimentation below:
- `Laurae++/Benchmarks <https://sites.google.com/view/lauraepp/new-benchmarks/old-benchmarks>`__
- `Laurae2/gbt\_benchmarks <https://github.com/Laurae2/gbt_benchmarks>`__
- `Laurae's Benchmark Master Data (Interactive) <https://public.tableau.com/views/gbt_benchmarks/Master-Data?:showVizHome=no>`__
- `Kaggle Paris Meetup #12 Slides <https://drive.google.com/file/d/0B6qJBmoIxFe0ZHNCOXdoRWMxUm8/view>`__
Some explanatory pictures:
.. image:: ./_static/images/gcc-table.png
:align: center
.. image:: ./_static/images/gcc-bars.png
:align: center
.. image:: ./_static/images/gcc-chart.png
:align: center
.. image:: ./_static/images/gcc-comparison-1.png
:align: center
.. image:: ./_static/images/gcc-comparison-2.png
:align: center
.. image:: ./_static/images/gcc-meetup-1.png
:align: center
.. image:: ./_static/images/gcc-meetup-2.png
:align: center
# Recommendations when using gcc
It is recommended to use `-O3 -mtune=native` to achieve maximum speed during LightGBM training.
Using an Intel Ivy Bridge CPU on the 1M x 1K Bosch dataset, the performance increases as follows:
| Compilation Flag | Performance Index |
| --- | ---: |
| `-O2 -mtune=core2` | 100.00% |
| `-O2 -mtune=native` | 100.90% |
| `-O3 -mtune=native` | 102.78% |
| `-O3 -ffast-math -mtune=native` | 100.64% |
You can find more details on the experimentation below:
* [Laurae++/Benchmarks](https://sites.google.com/view/lauraepp/benchmarks)
* [Laurae2/gbt_benchmarks](https://github.com/Laurae2/gbt_benchmarks)
* [Laurae's Benchmark Master Data (Interactive)](https://public.tableau.com/views/gbt_benchmarks/Master-Data?:showVizHome=no)
* [Kaggle Paris Meetup #12 Slides](https://drive.google.com/file/d/0B6qJBmoIxFe0ZHNCOXdoRWMxUm8/view)
Some explanatory pictures:
![gcc table](https://cloud.githubusercontent.com/assets/9083669/26027337/c376e22e-380c-11e7-91bc-fe0a333c03e9.png)
![gcc bars](https://cloud.githubusercontent.com/assets/9083669/26027338/d1caebcc-380c-11e7-864e-d704b39f1e63.png)
![gcc chart](https://cloud.githubusercontent.com/assets/9083669/26027353/e1bdb866-380c-11e7-97b5-22c7eac349b2.png)
![gcc comparison 1](https://cloud.githubusercontent.com/assets/9083669/26027401/c31f2f74-380d-11e7-857a-f5119791bed7.png)
![gcc comparison 2](https://cloud.githubusercontent.com/assets/9083669/26027486/d7d7e72a-380e-11e7-86c3-ccbbf42a9c55.png)
![gcc meetup 1](https://cloud.githubusercontent.com/assets/9083669/26027427/21b38f44-380e-11e7-9c95-05437782dd46.png)
![gcc meetup 2](https://cloud.githubusercontent.com/assets/9083669/26027433/362be250-380e-11e7-8982-76ac167bcd3e.png)
......@@ -12,17 +12,17 @@ Welcome to LightGBM's documentation!
Installation Guide <Installation-Guide>
Quick Start <Quick-Start>
Python Quick Start <Python-intro>
Python Quick Start <Python-Intro>
Features <Features>
Experiments <Experiments>
Parameters <Parameters>
Parameters Tuning <Parameters-tuning>
Parameters Tuning <Parameters-Tuning>
Python API <Python-API>
Parallel Learning Guide <Parallel-Learning-Guide>
GPU Tutorial <GPU-Tutorial>
Advanced Topics <Advanced-Topic>
Advanced Topics <Advanced-Topics>
FAQ <FAQ>
Development Guide <development>
Development Guide <Development-Guide>
Indices and Tables
==================