Parameters
==========

This page contains descriptions of all parameters in LightGBM.

**List of other helpful links**

- `Python API <./Python-API.rst>`__

- `Parameters Tuning <./Parameters-Tuning.rst>`__

**External Links**

- `Laurae++ Interactive Documentation`_

Parameters Format
-----------------

The parameters format is ``key1=value1 key2=value2 ...``.
Parameters can be set both in a config file and on the command line.

When using the command line, parameters should not have spaces before and after ``=``.
When using a config file, one line can contain only one parameter. You can use ``#`` to add comments.

If one parameter appears in both the command line and a config file, LightGBM will use the parameter from the command line.
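
For example, a minimal config file (say, ``train.conf``; the file names and values here are only illustrative placeholders) could look like::

    task=train
    objective=binary
    data=train.txt
    valid=valid.txt
    num_iterations=100
    learning_rate=0.1

Assuming the CLI executable is named ``lightgbm``, the same parameters can be passed either through the config file or directly on the command line::

    ./lightgbm config=train.conf
    ./lightgbm task=train objective=binary data=train.txt valid=valid.txt num_iterations=100 learning_rate=0.1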

Core Parameters
---------------

-  ``config``, default=\ ``""``, type=string, alias=\ ``config_file``

   -  path of config file

   -  **Note**: can be used only in CLI version

-  ``task``, default=\ ``train``, type=enum, options=\ ``train``, ``predict``, ``convert_model``, ``refit``, alias=\ ``task_type``

   -  ``train``, alias=\ ``training``, for training

   -  ``predict``, alias=\ ``prediction``, ``test``, for prediction

   -  ``convert_model``, for converting model files into if-else format, see more information in `Convert model parameters <#convert-model-parameters>`__

   -  ``refit``, alias=\ ``refit_tree``, refit existing models with new data

   -  **Note**: can be used only in CLI version

-  ``application``, default=\ ``regression``, type=enum,
   options=\ ``regression``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``quantile``, ``mape``, ``gamma``, ``tweedie``,
   ``binary``, ``multiclass``, ``multiclassova``, ``xentropy``, ``xentlambda``, ``lambdarank``,
   alias=\ ``app``, ``objective``, ``objective_type``

   -  regression application

      -  ``regression_l2``, L2 loss, alias=\ ``regression``, ``mean_squared_error``, ``mse``, ``l2_root``, ``root_mean_squared_error``, ``rmse``

      -  ``regression_l1``, L1 loss, alias=\ ``mean_absolute_error``, ``mae``

      -  ``huber``, `Huber loss`_

      -  ``fair``, `Fair loss`_

      -  ``poisson``, `Poisson regression`_

      -  ``quantile``, `Quantile regression`_

      -  ``mape``, `MAPE loss`_, alias=\ ``mean_absolute_percentage_error``

      -  ``gamma``, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be `gamma-distributed`_

      -  ``tweedie``, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be `tweedie-distributed`_

   -  ``binary``, binary `log loss`_ classification application

   -  multi-class classification application

      -  ``multiclass``, `softmax`_ objective function, alias=\ ``softmax``

      -  ``multiclassova``, `One-vs-All`_ binary objective function, alias=\ ``multiclass_ova``, ``ova``, ``ovr``

      -  ``num_class`` should be set as well

   -  cross-entropy application

      -  ``xentropy``, objective function for cross-entropy (with optional linear weights), alias=\ ``cross_entropy``

      -  ``xentlambda``, alternative parameterization of cross-entropy, alias=\ ``cross_entropy_lambda``

      -  the label can be any value in the interval [0, 1]

   -  ``lambdarank``, `lambdarank`_ application

      -  the label should be of ``int`` type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)

      -  `label_gain <#objective-parameters>`__ can be used to set the gain (weight) of ``int`` labels

      -  all values in ``label`` must be smaller than the number of elements in ``label_gain``
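
   -  for example, a hypothetical 5-class problem would set the objective together with the class count::

         objective=multiclass
         num_class=5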

-  ``boosting``, default=\ ``gbdt``, type=enum,
   options=\ ``gbdt``, ``rf``, ``dart``, ``goss``,
   alias=\ ``boost``, ``boosting_type``

   -  ``gbdt``, traditional Gradient Boosting Decision Tree

   -  ``rf``, Random Forest

   -  ``dart``, `Dropouts meet Multiple Additive Regression Trees`_

   -  ``goss``, Gradient-based One-Side Sampling

-  ``data``, default=\ ``""``, type=string, alias=\ ``train``, ``train_data``, ``data_filename``

   -  training data, LightGBM will train from this data

-  ``valid``, default=\ ``""``, type=multi-string, alias=\ ``test``, ``valid_data``, ``test_data``, ``valid_filenames``

   -  validation/test data, LightGBM will output metrics for these data

   -  supports multiple validation data, separated by ``,``

-  ``num_iterations``, default=\ ``100``, type=int,
   alias=\ ``num_iteration``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``, ``num_boost_round``, ``n_estimators``

   -  number of boosting iterations

   -  **Note**: for Python/R package, **this parameter is ignored**,
      use ``num_boost_round`` (Python) or ``nrounds`` (R) input arguments of ``train`` and ``cv`` methods instead

   -  **Note**: internally, LightGBM constructs ``num_class * num_iterations`` trees for ``multiclass`` problems

-  ``learning_rate``, default=\ ``0.1``, type=double, alias=\ ``shrinkage_rate``

   -  shrinkage rate

   -  in ``dart``, it also affects the normalization weights of dropped trees

-  ``num_leaves``, default=\ ``31``, type=int, alias=\ ``num_leaf``

   -  number of leaves in one tree

-  ``tree_learner``, default=\ ``serial``, type=enum, options=\ ``serial``, ``feature``, ``data``, ``voting``, alias=\ ``tree``, ``tree_learner_type``

   -  ``serial``, single machine tree learner

   -  ``feature``, alias=\ ``feature_parallel``, feature parallel tree learner

   -  ``data``, alias=\ ``data_parallel``, data parallel tree learner

   -  ``voting``, alias=\ ``voting_parallel``, voting parallel tree learner

   -  refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details

-  ``num_threads``, default=\ ``OpenMP_default``, type=int, alias=\ ``num_thread``, ``nthread``, ``nthreads``

   -  number of threads for LightGBM

   -  for the best speed, set this to the number of **real CPU cores**,
      not the number of threads (most CPUs use `hyper-threading`_ to generate 2 threads per CPU core)

   -  do not set it too large if your dataset is small (do not use 64 threads for a dataset with 10,000 rows for instance)

   -  be aware a task manager or any similar CPU monitoring tool might report cores not being fully utilized. **This is normal**

   -  for parallel learning, do not use all CPU cores, since this will cause poor performance for the network communication

-  ``device``, default=\ ``cpu``, options=\ ``cpu``, ``gpu``

   -  choose device for the tree learning; you can use GPU to achieve faster learning

   -  **Note**: it is recommended to use a smaller ``max_bin`` (e.g. 63) to get better speed-up

   -  **Note**: for faster speed, GPU uses 32-bit floating point to sum up by default, which may affect the accuracy for some tasks.
      You can set ``gpu_use_dp=true`` to enable 64-bit floating point, but it will slow down training

   -  **Note**: refer to `Installation Guide <./Installation-Guide.rst#build-gpu-version>`__ to build with GPU
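
   -  for instance, a GPU run following the notes above could be configured with (a sketch; ``max_bin=63`` is just the suggested example value)::

         device=gpu
         max_bin=63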

Learning Control Parameters
---------------------------

-  ``max_depth``, default=\ ``-1``, type=int

   -  limit the max depth for tree model. This is used to deal with over-fitting when ``#data`` is small. Tree still grows leaf-wise

   -  ``< 0`` means no limit

-  ``min_data_in_leaf``, default=\ ``20``, type=int, alias=\ ``min_data_per_leaf`` , ``min_data``, ``min_child_samples``

   -  minimal number of data in one leaf. Can be used to deal with over-fitting

-  ``min_sum_hessian_in_leaf``, default=\ ``1e-3``, type=double,
   alias=\ ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``, ``min_child_weight``

   -  minimal sum hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting

-  ``feature_fraction``, default=\ ``1.0``, type=double, ``0.0 < feature_fraction <= 1.0``, alias=\ ``sub_feature``, ``colsample_bytree``

   -  LightGBM will randomly select part of the features on each iteration if ``feature_fraction`` is smaller than ``1.0``.
      For example, if you set it to ``0.8``, LightGBM will select 80% of the features before training each tree

   -  can be used to speed up training

   -  can be used to deal with over-fitting

-  ``feature_fraction_seed``, default=\ ``2``, type=int

   -  random seed for ``feature_fraction``

-  ``bagging_fraction``, default=\ ``1.0``, type=double, ``0.0 < bagging_fraction <= 1.0``, alias=\ ``sub_row``, ``subsample``, ``bagging``

   -  like ``feature_fraction``, but this will randomly select part of data without resampling

   -  can be used to speed up training

   -  can be used to deal with over-fitting

   -  **Note**: to enable bagging, ``bagging_freq`` should be set to a non-zero value as well

-  ``bagging_freq``, default=\ ``0``, type=int, alias=\ ``subsample_freq``

   -  frequency for bagging, ``0`` means disable bagging; ``k`` means perform bagging at every ``k`` iterations

   -  **Note**: to enable bagging, ``bagging_fraction`` should be set as well
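
   -  for example, since both parameters need to be set to enable bagging, a typical setup (the values here are only illustrative) is::

         bagging_fraction=0.8
         bagging_freq=5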

-  ``bagging_seed`` , default=\ ``3``, type=int, alias=\ ``bagging_fraction_seed``

   -  random seed for bagging

-  ``early_stopping_round``, default=\ ``0``, type=int, alias=\ ``early_stopping_rounds``, ``early_stopping``

   -  will stop training if one metric of one validation data doesn't improve in the last ``early_stopping_round`` rounds

-  ``lambda_l1``, default=\ ``0``, type=double, alias=\ ``reg_alpha``

   -  L1 regularization

-  ``lambda_l2``, default=\ ``0``, type=double, alias=\ ``reg_lambda``

   -  L2 regularization

-  ``max_delta_step``, default=\ ``0``, type=double, alias=\ ``max_tree_output``, ``max_leaf_output``
-  ``max_delta_step``, default=\ ``0``, type=double, alias=\ ``max_tree_output``, ``max_leaf_output``

   -  Used to limit the max output of tree leaves

   -  ``<= 0`` means there is no constraint

   -  the final max output of leaves is ``learning_rate*max_delta_step``

-  ``min_split_gain``, default=\ ``0``, type=double, alias=\ ``min_gain_to_split``

   -  the minimal gain to perform split

-  ``drop_rate``, default=\ ``0.1``, type=double, ``0.0 <= drop_rate <= 1.0``

   -  only used in ``dart``

-  ``skip_drop``, default=\ ``0.5``, type=double, ``0.0 <= skip_drop <= 1.0``

   -  only used in ``dart``, probability of skipping drop

-  ``max_drop``, default=\ ``50``, type=int

   -  only used in ``dart``, max number of dropped trees on one iteration
   
   -  ``<=0`` means no limit

-  ``uniform_drop``, default=\ ``false``, type=bool

   -  only used in ``dart``, set this to ``true`` if you want to use uniform drop

-  ``xgboost_dart_mode``, default=\ ``false``, type=bool

   -  only used in ``dart``, set this to ``true`` if you want to use xgboost dart mode

-  ``drop_seed``, default=\ ``4``, type=int

   -  only used in ``dart``, random seed to choose dropping models

-  ``top_rate``, default=\ ``0.2``, type=double

   -  only used in ``goss``, the retain ratio of large gradient data

-  ``other_rate``, default=\ ``0.1``, type=double

   -  only used in ``goss``, the retain ratio of small gradient data

-  ``min_data_per_group``, default=\ ``100``, type=int

   -  min number of data per categorical group

-  ``max_cat_threshold``, default=\ ``32``, type=int

   -  used for categorical features

   -  limit the max threshold points in categorical features

-  ``cat_smooth``, default=\ ``10``, type=double

   -  used for the categorical features

   -  this can reduce the effect of noise in categorical features, especially for categories with few data

-  ``cat_l2``, default=\ ``10``, type=double

   -  L2 regularization in categorical split

-  ``max_cat_to_onehot``, default=\ ``4``, type=int

   -  when the number of categories of one feature is smaller than or equal to ``max_cat_to_onehot``, the one-vs-other split algorithm will be used

-  ``top_k``, default=\ ``20``, type=int, alias=\ ``topk``

   -  used in `Voting parallel <./Parallel-Learning-Guide.rst#choose-appropriate-parallel-algorithm>`__

   -  set this to a larger value for more accurate results, but it will slow down the training speed

-  ``monotone_constraint``, default=\ ``None``, type=multi-int, alias=\ ``mc``, ``monotone_constraints``

   -  used for constraints of monotonic features

   -  ``1`` means increasing, ``-1`` means decreasing, ``0`` means no constraint

   -  you need to specify all features in order. For example, ``mc=-1,0,1`` means decreasing constraint for the 1st feature, no constraint for the 2nd feature and increasing constraint for the 3rd feature

IO Parameters
-------------

-  ``max_bin``, default=\ ``255``, type=int

   -  max number of bins that feature values will be bucketed in.
      A small number of bins may reduce training accuracy but may increase generalization power (deal with over-fitting)

   -  LightGBM will auto compress memory according to ``max_bin``.
      For example, LightGBM will use ``uint8_t`` for feature values if ``max_bin=255``

-  ``min_data_in_bin``, default=\ ``3``, type=int

   -  min number of data inside one bin, use this to avoid one-data-one-bin (which may cause over-fitting)

-  ``data_random_seed``, default=\ ``1``, type=int

   -  random seed for data partition in parallel learning (excluding feature parallel)

-  ``output_model``, default=\ ``LightGBM_model.txt``, type=string, alias=\ ``model_output``, ``model_out``

   -  file name of output model in training

-  ``input_model``, default=\ ``""``, type=string, alias=\ ``model_input``, ``model_in``

   -  file name of input model

   -  for ``prediction`` task, this model will be used for prediction data

   -  for ``train`` task, training will be continued from this model

-  ``output_result``, default=\ ``LightGBM_predict_result.txt``,
   type=string, alias=\ ``predict_result``, ``prediction_result``

   -  file name of prediction result in ``prediction`` task

-  ``pre_partition``, default=\ ``false``, type=bool, alias=\ ``is_pre_partition``

   -  used for parallel learning (excluding feature parallel)

   -  ``true`` if training data are pre-partitioned, and different machines use different partitions

-  ``is_sparse``, default=\ ``true``, type=bool, alias=\ ``is_enable_sparse``, ``enable_sparse``

   -  used to enable/disable sparse optimization. Set to ``false`` to disable sparse optimization

-  ``two_round``, default=\ ``false``, type=bool, alias=\ ``two_round_loading``, ``use_two_round_loading``

   -  by default, LightGBM will map the data file to memory and load features from memory.
      This will provide faster data loading speed, but it may run out of memory when the data file is very big

   -  set this to ``true`` if data file is too big to fit in memory

-  ``save_binary``, default=\ ``false``, type=bool, alias=\ ``is_save_binary``, ``is_save_binary_file``

   -  if ``true``, LightGBM will save the dataset (including validation data) to a binary file.
      This speeds up data loading next time

-  ``verbosity``, default=\ ``1``, type=int, alias=\ ``verbose``

   -  ``<0`` = Fatal,
      ``=0`` = Error (Warn),
      ``>0`` = Info

-  ``header``, default=\ ``false``, type=bool, alias=\ ``has_header``

   -  set this to ``true`` if input data has header

-  ``label``, default=\ ``""``, type=string, alias=\ ``label_column``

   -  specify the label column

   -  use number for index, e.g. ``label=0`` means column\_0 is the label

   -  add a prefix ``name:`` for column name, e.g. ``label=name:is_click``

-  ``weight``, default=\ ``""``, type=string, alias=\ ``weight_column``

   -  specify the weight column

   -  use number for index, e.g. ``weight=0`` means column\_0 is the weight

   -  add a prefix ``name:`` for column name, e.g. ``weight=name:weight``

   -  **Note**: index starts from ``0``,
      and it doesn't count the label column when the column is passed by index, e.g. when the label is column\_0 and the weight is column\_1, the correct parameter is ``weight=0``

-  ``query``, default=\ ``""``, type=string, alias=\ ``query_column``, ``group``, ``group_column``

   -  specify the query/group id column

   -  use number for index, e.g. ``query=0`` means column\_0 is the query id

   -  add a prefix ``name:`` for column name, e.g. ``query=name:query_id``

   -  **Note**: data should be grouped by query\_id.
      Index starts from ``0``,
      and it doesn't count the label column when the column is passed by index, e.g. when the label is column\_0 and the query\_id is column\_1, the correct parameter is ``query=0``

-  ``ignore_column``, default=\ ``""``, type=string, alias=\ ``ignore_feature``, ``blacklist``

   -  specify columns to be ignored in training

   -  use number for index, e.g. ``ignore_column=0,1,2`` means column\_0, column\_1 and column\_2 will be ignored

   -  add a prefix ``name:`` for column name, e.g. ``ignore_column=name:c1,c2,c3`` means c1, c2 and c3 will be ignored

   -  **Note**: works only when loading data directly from a file

   -  **Note**: index starts from ``0``, and it doesn't count the label column

-  ``categorical_feature``, default=\ ``""``, type=string, alias=\ ``categorical_column``, ``cat_feature``, ``cat_column``

   -  specify categorical features

   -  use number for index, e.g. ``categorical_feature=0,1,2`` means column\_0, column\_1 and column\_2 are categorical features

   -  add a prefix ``name:`` for column name, e.g. ``categorical_feature=name:c1,c2,c3`` means c1, c2 and c3 are categorical features

   -  **Note**: only supports categorical features with ``int`` type. Index starts from ``0``, and it doesn't count the label column

   -  **Note**: the negative values will be treated as **missing values**
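
As an illustration of the column parameters above, a hypothetical data file with a header row could be described as follows (the column names are made up)::

    header=true
    label=name:is_click
    weight=name:weight
    categorical_feature=name:c1,c2,c3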

-  ``predict_raw_score``, default=\ ``false``, type=bool, alias=\ ``raw_score``, ``is_predict_raw_score``, ``predict_rawscore``

   -  only used in ``prediction`` task

   -  set to ``true`` to predict only the raw scores

   -  set to ``false`` to predict transformed scores

-  ``predict_leaf_index``, default=\ ``false``, type=bool, alias=\ ``leaf_index``, ``is_predict_leaf_index``

   -  only used in ``prediction`` task

   -  set to ``true`` to predict with leaf index of all trees

-  ``predict_contrib``, default=\ ``false``, type=bool, alias=\ ``contrib``, ``is_predict_contrib``

   -  only used in ``prediction`` task

   -  set to ``true`` to estimate `SHAP values`_, which represent how each feature contributes to each prediction.
      Produces number of features + 1 values where the last value is the expected value of the model output over the training data
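
For example, a CLI prediction run combining the parameters above (the file names here are only placeholders) could be configured as::

    task=predict
    data=test.txt
    input_model=LightGBM_model.txt
    output_result=predictions.txt
    predict_raw_score=false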

-  ``bin_construct_sample_cnt``, default=\ ``200000``, type=int, alias=\ ``subsample_for_bin``

   -  number of sampled data used to construct histogram bins

   -  will give better training results when set larger, but will increase data loading time

   -  set this to a larger value if data is very sparse

-  ``num_iteration_predict``, default=\ ``-1``, type=int

   -  only used in ``prediction`` task

   -  used to specify how many trained iterations will be used in prediction

   -  ``<= 0`` means no limit

-  ``pred_early_stop``, default=\ ``false``, type=bool

   -  if ``true`` will use early-stopping to speed up the prediction. May affect the accuracy

-  ``pred_early_stop_freq``, default=\ ``10``, type=int

   -  the frequency of checking early-stopping prediction

-  ``pred_early_stop_margin``, default=\ ``10.0``, type=double

   -  the threshold of margin in early-stopping prediction

-  ``use_missing``, default=\ ``true``, type=bool

   -  set to ``false`` to disable the special handling of missing values

-  ``zero_as_missing``, default=\ ``false``, type=bool

   -  set to ``true`` to treat all zeros as missing values (including the unshown values in libsvm/sparse matrices)

   -  set to ``false`` to use ``na`` to represent missing values

-  ``init_score_file``, default=\ ``""``, type=string, alias=\ ``init_score_filename``, ``initscore_filename``, ``init_score``

   -  path to training initial score file, ``""`` will use ``train_data_file`` + ``.init`` (if exists)

-  ``valid_init_score_file``, default=\ ``""``, type=multi-string, alias=\ ``valid_data_initscores``, ``valid_data_init_scores``, ``valid_init_score``

   -  path to validation initial score file, ``""`` will use ``valid_data_file`` + ``.init`` (if exists)

   -  separated by ``,`` for multi-validation data

-  ``forced_splits``, default=\ ``""``, type=string, alias=\ ``forced_splits_file``, ``forcedsplits_filename``, ``forced_splits_filename``

   -  path to a ``.json`` file that specifies splits to force at the top of every decision tree before best-first learning commences

   -  ``.json`` file can be arbitrarily nested, and each split contains ``feature``, ``threshold`` fields, as well as ``left`` and ``right``
      fields representing subsplits. Categorical splits are forced in a one-hot fashion, with ``left`` representing the split containing
      the feature value and ``right`` representing other values

   -  see `this file <https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification/forced_splits.json>`__ as an example
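
   -  for illustration, a file with the structure described above (the feature indices and thresholds here are arbitrary) could look like::

         {
             "feature": 0,
             "threshold": 0.5,
             "left": {
                 "feature": 2,
                 "threshold": 10.0
             }
         }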

Objective Parameters
--------------------

-  ``sigmoid``, default=\ ``1.0``, type=double

   -  parameter for sigmoid function. Will be used in ``binary`` and ``multiclassova`` classification and in ``lambdarank``

-  ``alpha``, default=\ ``0.9``, type=double

   -  parameter for `Huber loss`_ and `Quantile regression`_. Will be used in ``regression`` task

-  ``fair_c``, default=\ ``1.0``, type=double

   -  parameter for `Fair loss`_. Will be used in ``regression`` task

-  ``poisson_max_delta_step``, default=\ ``0.7``, type=double

   -  parameter for `Poisson regression`_ to safeguard optimization

-  ``scale_pos_weight``, default=\ ``1.0``, type=double

   -  weight of positive class in ``binary`` classification task

-  ``boost_from_average``, default=\ ``true``, type=bool

   -  only used in ``regression`` task

   -  adjust initial score to the mean of labels for faster convergence

-  ``is_unbalance``, default=\ ``false``, type=bool, alias=\ ``unbalanced_sets``

   -  used in ``binary`` classification
   
   -  set this to ``true`` if training data are unbalanced

-  ``max_position``, default=\ ``20``, type=int

   -  used in ``lambdarank``

   -  will optimize `NDCG`_ at this position

-  ``label_gain``, default=\ ``0,1,3,7,15,31,63,...,2^30-1``, type=multi-double

   -  used in ``lambdarank``

   -  relevant gain for labels. For example, the gain of label ``2`` is ``3`` if using default label gains

   -  separated by ``,``

-  ``num_class``, default=\ ``1``, type=int, alias=\ ``num_classes``

   -  only used in multi-class classification

-  ``reg_sqrt``, default=\ ``false``, type=bool

   -  only used in ``regression``

   -  will fit ``sqrt(label)`` instead, and prediction results will also be automatically converted to ``pow2(prediction)``

-  ``tweedie_variance_power``, default=\ ``1.5``, type=\ ``double``, range=\ ``[1,2)``

   -  only used in ``tweedie`` regression

   -  controls the variance of the Tweedie distribution

   -  set closer to 2 to shift towards a Gamma distribution

   -  set closer to 1 to shift towards a Poisson distribution

Metric Parameters
-----------------

-  ``metric``, default=\ ``''``, type=multi-enum, alias=\ ``metric_types``

   -  metric to be evaluated on the evaluation sets **in addition** to what is provided in the training arguments

      -  ``''`` (empty string or not specified), metric corresponding to specified objective will be used
         (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)

      -  ``'None'`` (string, **not** a ``None`` value), no metric registered, alias=\ ``na``
   
      -  ``l1``, absolute loss, alias=\ ``mean_absolute_error``, ``mae``, ``regression_l1``
   
      -  ``l2``, square loss, alias=\ ``mean_squared_error``, ``mse``, ``regression_l2``, ``regression``
   
      -  ``l2_root``, root square loss, alias=\ ``root_mean_squared_error``, ``rmse``
   
      -  ``quantile``, `Quantile regression`_
      
      -  ``mape``, `MAPE loss`_, alias=\ ``mean_absolute_percentage_error``
   
      -  ``huber``, `Huber loss`_
   
      -  ``fair``, `Fair loss`_
   
      -  ``poisson``, negative log-likelihood for `Poisson regression`_
   
      -  ``gamma``, negative log-likelihood for Gamma regression
   
      -  ``gamma_deviance``, residual deviance for Gamma regression
   
      -  ``tweedie``, negative log-likelihood for Tweedie regression
   
      -  ``ndcg``, `NDCG`_
   
      -  ``map``, `MAP`_, alias=\ ``mean_average_precision``
   
      -  ``auc``, `AUC`_
   
      -  ``binary_logloss``, `log loss`_, alias=\ ``binary``
   
      -  ``binary_error``, for one sample: ``0`` for correct classification, ``1`` for error classification
   
      -  ``multi_logloss``, log loss for multi-class classification, alias=\ ``multiclass``, ``softmax``, ``multiclassova``, ``multiclass_ova``, ``ova``, ``ovr``
   
      -  ``multi_error``, error rate for multi-class classification
   
      -  ``xentropy``, cross-entropy (with optional linear weights), alias=\ ``cross_entropy``
   
      -  ``xentlambda``, "intensity-weighted" cross-entropy, alias=\ ``cross_entropy_lambda``
   
      -  ``kldiv``, `Kullback-Leibler divergence`_, alias=\ ``kullback_leibler``

   -  supports multiple metrics, separated by ``,``
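
   -  for example, evaluating both L2 and AUC (an illustrative choice of metrics) would be written as::

         metric=l2,auc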

-  ``metric_freq``, default=\ ``1``, type=int, alias=\ ``output_freq``

   -  frequency for metric output

-  ``train_metric``, default=\ ``false``, type=bool, alias=\ ``training_metric``, ``is_training_metric``, ``is_provide_training_metric``

   -  set this to ``true`` if you need to output metric result of training

-  ``ndcg_at``, default=\ ``1,2,3,4,5``, type=multi-int, alias=\ ``ndcg_eval_at``, ``eval_at``

   -  `NDCG`_ evaluation positions, separated by ``,``

Network Parameters
------------------

The following parameters are used for parallel learning, and are used only for the base (socket) version.

-  ``num_machines``, default=\ ``1``, type=int, alias=\ ``num_machine``

   -  used for parallel learning, the number of machines for parallel learning application

   -  need to set this in both socket and MPI versions

-  ``local_listen_port``, default=\ ``12400``, type=int, alias=\ ``local_port``

   -  TCP listen port for local machines

   -  you should allow this port in firewall settings before training

-  ``time_out``, default=\ ``120``, type=int

   -  socket time-out in minutes

-  ``machine_list_file``, default=\ ``""``, type=string, alias=\ ``mlist``

   -  file that lists machines for this parallel learning application

   -  each line contains one IP and one port for one machine. The format is ``ip port``, separated by a space
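
   -  for example, a machine list file for two machines (the IP addresses and port here are placeholders) might contain::

         192.168.0.1 12400
         192.168.0.2 12400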

GPU Parameters
--------------

-  ``gpu_platform_id``, default=\ ``-1``, type=int

   -  OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform

   -  default value is ``-1``, which means the system-wide default platform

-  ``gpu_device_id``, default=\ ``-1``, type=int

   -  OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID

   -  default value is ``-1``, which means the default device in the selected platform

-  ``gpu_use_dp``, default=\ ``false``, type=bool

   -  set to ``true`` to use double precision math on GPU (default using single precision)
  
Convert Model Parameters
------------------------

This feature is supported only in the CLI version for now.

-  ``convert_model_language``, default=\ ``""``, type=string

   -  only ``cpp`` is supported for now

   -  if ``convert_model_language`` is set when ``task`` is set to ``train``, the model will also be converted

-  ``convert_model``, default=\ ``"gbdt_prediction.cpp"``, type=string

   -  output file name of converted model
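
For example, converting an existing model to C++ code (the model file name here is a placeholder) could be configured as::

    task=convert_model
    input_model=LightGBM_model.txt
    convert_model_language=cpp
    convert_model=gbdt_prediction.cpp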

Others
------

Continued Training with Input Score
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

LightGBM supports continued training with initial scores. It uses an additional file to store these initial scores, like the following:

::

    0.5
    -0.1
    0.9
    ...

It means the initial score of the first data row is ``0.5``, the second is ``-0.1``, and so on.
The initial score file corresponds with the data file line by line, and has one score per line.
If the name of the data file is ``train.txt``, the initial score file should be named ``train.txt.init`` and placed in the same folder as the data file.
In this case LightGBM will load the initial score file automatically if it exists.

Weight Data
~~~~~~~~~~~

LightGBM supports weighted training. It uses an additional file to store weight data, like the following:

::

    1.0
    0.5
    0.8
    ...

It means the weight of the first data row is ``1.0``, the second is ``0.5``, and so on.
The weight file corresponds with the data file line by line, and has one weight per line.
If the name of the data file is ``train.txt``, the weight file should be named ``train.txt.weight`` and placed in the same folder as the data file.
In this case LightGBM will load the weight file automatically if it exists.

Also, you can include the weight column in your data file. Please refer to the parameter ``weight`` above.

Query Data
~~~~~~~~~~

For LambdaRank learning, query information is needed for the training data.
LightGBM uses an additional file to store query data, like the following:

::

    27
    18
    67
    ...

It means the first ``27`` lines of samples belong to one query, the next ``18`` lines belong to another, and so on.

**Note**: data should be ordered by the query.

If the name of the data file is ``train.txt``, the query file should be named ``train.txt.query`` and placed in the same folder as the data file.
In this case LightGBM will load the query file automatically if it exists.

Also, you can include the query/group id column in your data file. Please refer to the parameter ``group`` above.

.. _Laurae++ Interactive Documentation: https://sites.google.com/view/lauraepp/parameters

.. _Huber loss: https://en.wikipedia.org/wiki/Huber_loss

.. _Quantile regression: https://en.wikipedia.org/wiki/Quantile_regression

.. _MAPE loss: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error

.. _Fair loss: https://www.kaggle.com/c/allstate-claims-severity/discussion/24520

.. _Poisson regression: https://en.wikipedia.org/wiki/Poisson_regression

.. _lambdarank: https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf

.. _Dropouts meet Multiple Additive Regression Trees: https://arxiv.org/abs/1505.01866

.. _hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading

.. _SHAP values: https://arxiv.org/abs/1706.06060

.. _NDCG: https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG

.. _MAP: https://makarandtapaswi.wordpress.com/2012/07/02/intuition-behind-average-precision-and-map/

.. _AUC: https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve

.. _log loss: https://en.wikipedia.org/wiki/Cross_entropy

.. _softmax: https://en.wikipedia.org/wiki/Softmax_function

.. _One-vs-All: https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-rest

.. _Kullback-Leibler divergence: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

.. _gamma-distributed: https://en.wikipedia.org/wiki/Gamma_distribution#Applications

.. _tweedie-distributed: https://en.wikipedia.org/wiki/Tweedie_distribution#Applications