<img src="https://github.com/microsoft/LightGBM/blob/master/docs/logo/LightGBM_logo_black_text.svg" width="300" />

Light Gradient Boosting Machine
===============================

[![Python-package GitHub Actions Build Status](https://github.com/microsoft/LightGBM/workflows/Python-package/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions)
[![R-package GitHub Actions Build Status](https://github.com/microsoft/LightGBM/workflows/R-package/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions)
[![Static Analysis GitHub Actions Build Status](https://github.com/microsoft/LightGBM/workflows/Static%20Analysis/badge.svg?branch=master)](https://github.com/microsoft/LightGBM/actions)
[![Azure Pipelines Build Status](https://lightgbm-ci.visualstudio.com/lightgbm-ci/_apis/build/status/Microsoft.LightGBM?branchName=master)](https://lightgbm-ci.visualstudio.com/lightgbm-ci/_build/latest?definitionId=1)
[![Appveyor Build Status](https://ci.appveyor.com/api/projects/status/1ys5ot401m0fep6l/branch/master?svg=true)](https://ci.appveyor.com/project/guolinke/lightgbm/branch/master)
[![Documentation Status](https://readthedocs.org/projects/lightgbm/badge/?version=latest)](https://lightgbm.readthedocs.io/)
[![License](https://img.shields.io/github/license/microsoft/lightgbm.svg)](https://github.com/microsoft/LightGBM/blob/master/LICENSE)
[![Python Versions](https://img.shields.io/pypi/pyversions/lightgbm.svg?logo=python&logoColor=white)](https://pypi.org/project/lightgbm)
[![PyPI Version](https://img.shields.io/pypi/v/lightgbm.svg?logo=pypi&logoColor=white)](https://pypi.org/project/lightgbm)
[![CRAN Version](https://www.r-pkg.org/badges/version/lightgbm)](https://cran.r-project.org/package=lightgbm)

LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, with the following advantages:

- Faster training speed and higher efficiency.
- Lower memory usage.
- Better accuracy.
- Support for parallel and GPU learning.
- Capable of handling large-scale data.

For further details, please refer to [Features](https://github.com/microsoft/LightGBM/blob/master/docs/Features.rst).

Thanks to these advantages, LightGBM is widely used in many [winning solutions](https://github.com/microsoft/LightGBM/blob/master/examples/README.md#machine-learning-challenge-winning-solutions) of machine learning competitions.

[Comparison experiments](https://github.com/microsoft/LightGBM/blob/master/docs/Experiments.rst#comparison-experiment) on public datasets show that LightGBM can outperform existing boosting frameworks on both efficiency and accuracy, with significantly lower memory consumption. What's more, [parallel experiments](https://github.com/microsoft/LightGBM/blob/master/docs/Experiments.rst#parallel-experiment) show that LightGBM can achieve a linear speed-up by using multiple machines for training in specific settings.
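For reference, distributed training with the CLI version is driven by a plain-text config file. A minimal sketch for data-parallel training across two machines might look like the following; the data file, machine list file, and port are placeholders, and the Parallel Learning Guide linked below describes the full setup:

```
# train.conf (sketch; file names and port are placeholders)
task = train
data = train.txt
objective = binary
tree_learner = data          # data-parallel tree learner
num_machines = 2
machine_list_filename = mlist.txt
local_listen_port = 12400
```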

Get Started and Documentation
-----------------------------

Our primary documentation is at https://lightgbm.readthedocs.io/ and is generated from this repository. If you are new to LightGBM, follow [the installation instructions](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html) on that site.

Next you may want to read:

- [**Examples**](https://github.com/microsoft/LightGBM/tree/master/examples) showing command line usage of common tasks.
- [**Features**](https://github.com/microsoft/LightGBM/blob/master/docs/Features.rst) and algorithms supported by LightGBM.
- [**Parameters**](https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst) is an exhaustive list of the customizations you can make.
- [**Parallel Learning**](https://github.com/microsoft/LightGBM/blob/master/docs/Parallel-Learning-Guide.rst) and [**GPU Learning**](https://github.com/microsoft/LightGBM/blob/master/docs/GPU-Tutorial.rst) can speed up computation.
- [**Laurae++ interactive documentation**](https://sites.google.com/view/lauraepp/parameters) is a detailed guide for hyperparameters.
- [**Optuna Hyperparameter Tuner**](https://medium.com/optuna/lightgbm-tuner-new-optuna-integration-for-hyperparameter-optimization-8b7095e99258) provides automated tuning for LightGBM hyperparameters ([code examples](https://github.com/optuna/optuna/blob/master/examples/)).

Documentation for contributors:

- [**How we update readthedocs.io**](https://github.com/microsoft/LightGBM/blob/master/docs/README.rst).
- Check out the [**Development Guide**](https://github.com/microsoft/LightGBM/blob/master/docs/Development-Guide.rst).

News
----

Please refer to the changelogs on the [GitHub releases](https://github.com/microsoft/LightGBM/releases) page.

Some older update logs are available on the [Key Events](https://github.com/microsoft/LightGBM/blob/master/docs/Key-Events.md) page.

External (Unofficial) Repositories
----------------------------------

Optuna (hyperparameter optimization framework): https://github.com/optuna/optuna

Julia-package: https://github.com/IQVIA-ML/LightGBM.jl

JPMML (Java PMML converter): https://github.com/jpmml/jpmml-lightgbm

Treelite (model compiler for efficient deployment): https://github.com/dmlc/treelite

cuML Forest Inference Library (GPU-accelerated inference): https://github.com/rapidsai/cuml

daal4py (Intel CPU-accelerated inference): https://github.com/IntelPython/daal4py

m2cgen (model appliers for various languages): https://github.com/BayesWitnesses/m2cgen

leaves (Go model applier): https://github.com/dmitryikh/leaves

ONNXMLTools (ONNX converter): https://github.com/onnx/onnxmltools

SHAP (model output explainer): https://github.com/slundberg/shap

MMLSpark (LightGBM on Spark): https://github.com/Azure/mmlspark

Kubeflow Fairing (LightGBM on Kubernetes): https://github.com/kubeflow/fairing

Kubeflow Operator (LightGBM on Kubernetes): https://github.com/kubeflow/xgboost-operator

ML.NET (.NET/C#-package): https://github.com/dotnet/machinelearning

LightGBM.NET (.NET/C#-package): https://github.com/rca22/LightGBM.Net

Ruby gem: https://github.com/ankane/lightgbm

LightGBM4j (Java high-level binding): https://github.com/metarank/lightgbm4j

MLflow (experiment tracking, model monitoring framework): https://github.com/mlflow/mlflow

`{treesnip}` (R `{parsnip}`-compliant interface): https://github.com/curso-r/treesnip

`{mlr3learners.lightgbm}` (R `{mlr3}`-compliant interface): https://github.com/mlr3learners/mlr3learners.lightgbm

Support
-------

- Ask a question [on Stack Overflow with the `lightgbm` tag](https://stackoverflow.com/questions/ask?tags=lightgbm); we monitor it for new questions.
- Open **bug reports** and **feature requests** (not questions) on [GitHub issues](https://github.com/microsoft/LightGBM/issues).

How to Contribute
-----------------

Check the [CONTRIBUTING](https://github.com/microsoft/LightGBM/blob/master/CONTRIBUTING.md) page.

Microsoft Open Source Code of Conduct
-------------------------------------

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

Reference Papers
----------------

Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu. "[LightGBM: A Highly Efficient Gradient Boosting Decision Tree](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree)". Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 3149-3157.

Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Tie-Yan Liu. "[A Communication-Efficient Parallel Algorithm for Decision Tree](http://papers.nips.cc/paper/6380-a-communication-efficient-parallel-algorithm-for-decision-tree)". Advances in Neural Information Processing Systems 29 (NIPS 2016), pp. 1279-1287.

Huan Zhang, Si Si and Cho-Jui Hsieh. "[GPU Acceleration for Large-scale Tree Boosting](https://arxiv.org/abs/1706.08359)". SysML Conference, 2018.

**Note**: If you use LightGBM in your GitHub projects, please add `lightgbm` to your `requirements.txt`.

License
-------

This project is licensed under the terms of the MIT license. See [LICENSE](https://github.com/microsoft/LightGBM/blob/master/LICENSE) for additional details.