Python Package Introduction
===========================
This document gives a basic walkthrough of the LightGBM Python package.

***List of other Helpful Links***
* [Python Examples](../examples/python-guide/)
* [Python API Reference](./Python-API.md)
* [Parameters Tuning](./Parameters-tuning.md)

Install
-------
* Install the library first by following the installation guide [here](./Installation-Guide.md).
* In the `python-package` directory, run
```
python setup.py install
```

To verify your installation, try to `import lightgbm` in Python:
```python
import lightgbm as lgb
```

Data Interface
--------------
The LightGBM Python module can load data from:
- LibSVM/TSV/CSV text format files
- NumPy 2D arrays and pandas objects
- LightGBM binary files

The data is stored in a `Dataset` object.

#### To load a LibSVM text file or a LightGBM binary file into `Dataset`:
```python
train_data = lgb.Dataset('train.svm.bin')
```

#### To load a NumPy array into `Dataset`:
```python
import numpy as np

data = np.random.rand(500, 10)  # 500 entities, each containing 10 features
label = np.random.randint(2, size=500)  # binary target
train_data = lgb.Dataset(data, label=label)
```
#### To load a scipy.sparse.csr_matrix array into `Dataset`:
```python
import scipy.sparse

csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
```
#### Saving `Dataset` into a LightGBM binary file will make loading faster:
```python
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
```
#### Create validation data
```python
test_data = train_data.create_valid('test.svm')
```

or 

```python
test_data = lgb.Dataset('test.svm', reference=train_data)
```

In LightGBM, the validation data should be aligned with the training data; constructing it via `create_valid` or `reference=train_data` ensures this.

#### Specifying feature names and categorical features

```python
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
```
LightGBM can use categorical features as input directly; they do not need to be converted to one-hot encoding, which makes training much faster than with one-hot encoding (about an 8x speed-up).
**Note: you should convert your categorical features to `int` type before constructing the `Dataset`.**
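For illustration, a minimal sketch of such a conversion using pandas (the column values here are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'c1': [1.2, 3.4, 5.6],
    'c2': [0.1, 0.2, 0.3],
    'c3': ['red', 'blue', 'red'],  # string categories
})
# encode the string categories as integer codes before building the Dataset
df['c3'] = df['c3'].astype('category').cat.codes
train_data = lgb.Dataset(df, label=[0, 1, 1],
                         feature_name=['c1', 'c2', 'c3'],
                         categorical_feature=['c3'])
```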

#### Weights can be set when needed:
```python
w = np.random.rand(500)  # one weight per data point
train_data = lgb.Dataset(data, label=label, weight=w)
```
or
```python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500)
train_data.set_weight(w)
```

You can also use `Dataset.set_init_score()` to set the initial score, and `Dataset.set_group()` to set group/query data for ranking tasks.
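A minimal sketch (the initial scores and group sizes here are made up for illustration):

```python
# initial score: one value per data point, used as the starting prediction
train_data.set_init_score(np.zeros(500))

# group/query data for ranking: sizes of consecutive query groups;
# they must sum to the number of data points (here, 5 groups of 100)
train_data.set_group([100, 100, 100, 100, 100])
```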

#### Memory-efficient usage

The `Dataset` object in LightGBM is very memory-efficient, because it only needs to store the discrete bins.
However, NumPy arrays and pandas objects can consume a lot of memory. If you are concerned about your memory consumption, you can save memory by:

1. Letting `free_raw_data=True` (the default) when constructing the `Dataset`
2. Explicitly setting `raw_data=None` after the `Dataset` has been constructed
3. Calling `gc.collect()` (see the sketch after this list)
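A minimal sketch of this pattern, assuming `data` and `label` from the earlier examples:

```python
import gc

# free_raw_data=True (the default) lets LightGBM release the raw data
# once the discrete bins have been constructed
train_data = lgb.Dataset(data, label=label, free_raw_data=True)

# drop our own reference to the raw data, then trigger garbage collection
data = None
gc.collect()
```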

Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set [parameters](./Parameters.md). For instance:
* Booster parameters:
```python
param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = 'auc'
```
* You can also specify multiple evaluation metrics:
```python
param['metric'] = ['auc', 'binary_logloss']
```

Training
--------

Training a model requires a parameter list and a dataset.
```python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])
```
After training, the model can be saved:
```python
bst.save_model('model.txt')
```
The model can be dumped to JSON format:
```python
# dump model to a JSON-serializable dict
json_model = bst.dump_model()
```
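Since `dump_model()` returns a plain Python dict, you could, for example, write it to disk with the standard `json` module (the filename `model.json` is arbitrary):

```python
import json

# persist the dumped model as a JSON file
with open('model.json', 'w') as f:
    json.dump(json_model, f, indent=2)
```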
A saved model can be loaded as follows:
```python
bst = lgb.Booster(model_file='model.txt')  # init model
```

CV
--
Training with 5-fold CV:
```python
num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)
```
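`lgb.cv()` returns the evaluation history, which you can inspect directly. A minimal sketch (the exact keys depend on the metrics set in `param`; with `'auc'` they would look like this):

```python
cv_results = lgb.cv(param, train_data, num_round, nfold=5)
# per-round mean and standard deviation of the metric across folds
print(cv_results['auc-mean'])
print(cv_results['auc-stdv'])
```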

Early Stopping
--------------
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in `valid_sets`. If there's more than one, it will use all of them.

```python
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)
```

The model will train until the validation score stops improving. The validation score needs to improve at least once every `early_stopping_rounds` rounds for training to continue.

If early stopping occurs, the model will have an additional field: `bst.best_iteration`. Note that `train()` returns the model from the last iteration, not the best one, so you can pass `num_iteration=bst.best_iteration` when saving the model to keep only the iterations up to the best one.

This works with both metrics to minimize (L2, log loss, etc.) and metrics to maximize (NDCG, AUC). Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
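For instance, a minimal sketch combining two metrics with early stopping (reusing `param`, `train_data`, `test_data`, and `num_round` from above):

```python
param['metric'] = ['auc', 'binary_logloss']
# both listed metrics are used for early stopping
bst = lgb.train(param, train_data, num_round,
                valid_sets=[test_data],
                early_stopping_rounds=10)
```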

Prediction
----------
A model that has been trained or loaded can perform predictions on data sets.
```python
# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
```

If early stopping is enabled during training, you can get predictions from the best iteration with `bst.best_iteration`:
```python
ypred = bst.predict(data, num_iteration=bst.best_iteration)
```