Python Package Introduction
===========================

This document gives a basic walkthrough of the LightGBM Python package.

***List of other Helpful Links***
* [Python Examples](https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide)
* [Python API Reference](./Python-API.md)
* [Parameters Tuning](./Parameters-tuning.md)

Install
-------
* Install the library first by following the installation guide [here](./Installation-Guide.md).
* Install the python-package dependencies: `setuptools`, `numpy` and `scipy` are required; `scikit-learn` is required for the sklearn interface and is recommended. Run:
```
pip install setuptools numpy scipy scikit-learn -U
```

* In the `python-package` directory, run
```
python setup.py install
```

* To verify your installation, try to `import lightgbm` in Python.
```python
import lightgbm as lgb
```

Data Interface
--------------
The LightGBM Python module is able to load data from:
- libsvm/tsv/csv format text file
- NumPy 2D array, pandas object
- LightGBM binary file

The data is stored in a ```Dataset``` object.

#### To load a libsvm text file or a LightGBM binary file into ```Dataset```:
```python
train_data = lgb.Dataset('train.svm.bin')
```

#### To load a numpy array into ```Dataset```:
```python
import numpy as np

data = np.random.rand(500, 10)  # 500 entities, each contains 10 features
label = np.random.randint(2, size=500) # binary target
train_data = lgb.Dataset(data, label=label)
```
#### To load a scipy.sparse.csr_matrix array into ```Dataset```:
```python
import scipy.sparse

# dat, row, col hold the values and coordinates of the non-zero entries
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
```
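#### To load a pandas DataFrame into ```Dataset```:
Pandas objects can be passed directly as well. A minimal sketch (the DataFrame contents and column names below are made up for illustration):
```python
import lightgbm as lgb
import pandas as pd

# hypothetical data: two feature columns and a binary target column
df = pd.DataFrame({'c1': [1.0, 2.0, 3.0],
                   'c2': [0.5, 0.1, 0.9],
                   'y': [0, 1, 0]})
train_data = lgb.Dataset(df[['c1', 'c2']], label=df['y'])
```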
#### Saving ```Dataset``` into a LightGBM binary file will make loading faster:
```python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
```
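The saved binary file can then be loaded like any other input file, reusing the file name from above:
```python
train_data = lgb.Dataset('train.bin')
```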
#### Create validation data
```python
test_data = train_data.create_valid('test.svm')
```

or 

```python
test_data = lgb.Dataset('test.svm', reference=train_data)
```

In LightGBM, the validation data should be aligned with the training data: constructing it with `reference=train_data` (or via `create_valid`) ensures it is binned using the same bin boundaries as the training data.

#### Specify feature names and categorical features

```python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
```
LightGBM can use categorical features as input directly; it doesn't need to convert them to one-hot encoding, and is much faster than one-hot encoding (about 8x speed-up).
**Note: You should convert your categorical features to int type before you construct `Dataset`.**
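
For example, a string-valued column can be encoded to integer codes before constructing the ```Dataset```. A minimal sketch (the data below is made up for illustration):
```python
import lightgbm as lgb
import numpy as np
import pandas as pd

# hypothetical data: 'c3' is a string-valued categorical feature
df = pd.DataFrame({'c1': [1.0, 2.0, 3.0, 4.0],
                   'c2': [0.1, 0.2, 0.3, 0.4],
                   'c3': ['red', 'blue', 'red', 'green']})
label = np.array([0, 1, 0, 1])
# convert the categorical column to int codes before constructing the Dataset
df['c3'] = df['c3'].astype('category').cat.codes
train_data = lgb.Dataset(df, label=label,
                         feature_name=['c1', 'c2', 'c3'],
                         categorical_feature=['c3'])
```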

#### Weights can be set when needed:
```python
w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)
```
or
```python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500, )
train_data.set_weight(w)
```

You can also use `Dataset.set_init_score()` to set the initial score, and `Dataset.set_group()` to set group/query data for ranking tasks.
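
For example, for a ranking task, `set_group()` takes the number of rows belonging to each query. A minimal sketch (the query sizes below are made up):
```python
# suppose train_data holds 3 queries with 10, 20 and 40 rows respectively;
# the group sizes must sum to the total number of rows in the Dataset
train_data.set_group([10, 20, 40])
```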

#### Memory efficient usage

The `Dataset` object in LightGBM is very memory-efficient: it only needs to save the discrete bins. However, NumPy arrays and pandas objects can be costly in memory. If you are concerned about your memory consumption, you can save memory as follows (see the sketch after this list):

1. Let ```free_raw_data=True``` (default is ```True```) when constructing the ```Dataset```
2. Explicitly set ```raw_data=None``` after the ```Dataset``` has been constructed
3. Call the garbage collector, e.g. ```gc.collect()```
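
A minimal sketch of this pattern, reusing randomly generated arrays as before (note that the Dataset keeps only the discrete bins once it has been constructed, e.g. when training starts):
```python
import gc

import lightgbm as lgb
import numpy as np

data = np.random.rand(500, 10)
label = np.random.randint(2, size=500)
train_data = lgb.Dataset(data, label=label, free_raw_data=True)
# drop the Python references to the raw arrays and collect garbage
data = None
label = None
gc.collect()
```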

Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set [parameters](./Parameters.md). For instance:
* Booster parameters
```python
param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = 'auc'
```
* You can also specify multiple eval metrics:
```python
param['metric'] = ['auc', 'binary_logloss']
```

Training
--------

Training a model requires a parameter list and data set.
```python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
```
After training, the model can be saved:
```python
bst.save_model('model.txt')
```
The trained model can also be dumped to JSON format:
```python
# dump model
json_model = bst.dump_model()
```
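The returned object is a plain Python dict, so it can be written to disk with the standard `json` module. A minimal sketch (the file name is arbitrary):
```python
import json

with open('model.json', 'w') as f:
    json.dump(json_model, f, indent=4)
```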
A saved model can be loaded as follows:
```python
bst = lgb.Booster(model_file='model.txt')  # init model
```

CV
--
Training with 5-fold CV:
```python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
```
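`lgb.cv()` returns the evaluation history, which can be inspected to choose the number of rounds. A minimal sketch (the exact key names depend on the metrics configured in `param`):
```python
cv_results = lgb.cv(param, train_data, num_round, nfold=5)
# with 'auc' among the metrics, keys such as 'auc-mean' and 'auc-stdv'
# map to per-round lists of scores averaged across the folds
print(len(cv_results['auc-mean']))  # number of boosting rounds actually run
```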

Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in `valid_sets`. If there's more than one, it will use all of them.

```python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
```

The model will train until the validation score stops improving. The validation error needs to improve at least once every `early_stopping_rounds` rounds for training to continue.

If early stopping occurs, the model will have an additional field: `bst.best_iteration`. Note that `train()` returns a model from the last iteration, not the best one; you can pass `num_iteration=bst.best_iteration` when saving the model.

This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC). Note that if you specify more than one evaluation metric, all of them will be used for early stopping.

Prediction
----------
A model that has been trained or loaded can perform predictions on data sets.
```python
# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
```
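With the `'binary'` objective used above, the predictions are probabilities; to obtain class labels you can threshold them. A minimal sketch with an arbitrary 0.5 cutoff:
```python
# probabilities above the cutoff are assigned to the positive class
ybin = (ypred > 0.5).astype(int)
```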

If early stopping is enabled during training, you can get predictions from the best iteration with `bst.best_iteration`:
```python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
```