-- ``num_iterations`` :raw-html:`<a id="num_iterations" title="Permalink to this parameter" href="#num_iterations">🔗︎</a>`, default = ``100``, type = int, aliases: ``num_iteration``, ``n_iter``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``, ``num_boost_round``, ``n_estimators``, constraints: ``num_iterations >= 0``
+- ``num_iterations`` :raw-html:`<a id="num_iterations" title="Permalink to this parameter" href="#num_iterations">🔗︎</a>`, default = ``100``, type = int, aliases: ``num_iteration``, ``n_iter``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``, ``num_boost_round``, ``n_estimators``, ``max_iter``, constraints: ``num_iterations >= 0``
- number of boosting iterations
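Each canonical parameter above accepts a growing list of spellings, so a resolver has to map any alias back to one canonical name before training. A minimal sketch of that idea (the table covers only ``num_iterations`` from this diff, and ``canonicalize`` is a hypothetical helper, not LightGBM's internal API):

```python
# Map each accepted alias to its canonical LightGBM parameter name.
# Only the num_iterations aliases from this diff are listed here.
ALIASES = {
    "num_iterations": ["num_iteration", "n_iter", "num_tree", "num_trees",
                       "num_round", "num_rounds", "num_boost_round",
                       "n_estimators", "max_iter"],
}

def canonicalize(params):
    """Rewrite aliased keys in a params dict to their canonical names."""
    lookup = {alias: canon for canon, aliases in ALIASES.items()
              for alias in aliases}
    return {lookup.get(key, key): value for key, value in params.items()}

print(canonicalize({"max_iter": 100, "learning_rate": 0.1}))
# → {'num_iterations': 100, 'learning_rate': 0.1}
```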
...
...
@@ -165,7 +165,7 @@ Core Parameters
- in ``dart``, it also affects the normalization weights of dropped trees
-- ``num_leaves`` :raw-html:`<a id="num_leaves" title="Permalink to this parameter" href="#num_leaves">🔗︎</a>`, default = ``31``, type = int, aliases: ``num_leaf``, ``max_leaves``, ``max_leaf``, constraints: ``1 < num_leaves <= 131072``
+- ``num_leaves`` :raw-html:`<a id="num_leaves" title="Permalink to this parameter" href="#num_leaves">🔗︎</a>`, default = ``31``, type = int, aliases: ``num_leaf``, ``max_leaves``, ``max_leaf``, ``max_leaf_nodes``, constraints: ``1 < num_leaves <= 131072``
- max number of leaves in one tree
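The upper bound in the constraint is a power of two: a complete binary tree of depth ``d`` has ``2 ** d`` leaves, which is why the LightGBM tuning docs suggest keeping ``num_leaves`` below ``2 ** max_depth``. A small sketch of that arithmetic (``max_leaves_for_depth`` is an illustrative helper, not part of the library):

```python
# The constraint num_leaves <= 131072 is exactly 2 ** 17, the leaf count
# of a complete binary tree of depth 17.
MAX_NUM_LEAVES = 131072
assert MAX_NUM_LEAVES == 2 ** 17

def max_leaves_for_depth(max_depth):
    """Leaf count of a complete binary tree of the given depth."""
    return 2 ** max_depth

print(max_leaves_for_depth(5))
# → 32, so the default num_leaves=31 fits inside a depth-5 tree
```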
...
...
@@ -282,7 +282,7 @@ Learning Control Parameters
- ``<= 0`` means no limit
-- ``min_data_in_leaf`` :raw-html:`<a id="min_data_in_leaf" title="Permalink to this parameter" href="#min_data_in_leaf">🔗︎</a>`, default = ``20``, type = int, aliases: ``min_data_per_leaf``, ``min_data``, ``min_child_samples``, constraints: ``min_data_in_leaf >= 0``
+- ``min_data_in_leaf`` :raw-html:`<a id="min_data_in_leaf" title="Permalink to this parameter" href="#min_data_in_leaf">🔗︎</a>`, default = ``20``, type = int, aliases: ``min_data_per_leaf``, ``min_data``, ``min_child_samples``, ``min_samples_leaf``, constraints: ``min_data_in_leaf >= 0``
- minimal number of data in one leaf. Can be used to deal with over-fitting
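The over-fitting control works by rejecting candidate splits whose children would be too small. A sketch of that rule (``split_is_valid`` is a hypothetical helper; the real implementation approximates the per-leaf counts via the Hessian, as the Note in the config elaborates):

```python
# A candidate split is only kept when BOTH children retain at least
# min_data_in_leaf rows (illustrative check, not LightGBM's internal code).
def split_is_valid(n_left, n_right, min_data_in_leaf=20):
    return n_left >= min_data_in_leaf and n_right >= min_data_in_leaf

print(split_is_valid(25, 19))  # → False: right child is below the default of 20
print(split_is_valid(25, 20))  # → True
```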
...
...
@@ -402,11 +402,11 @@ Learning Control Parameters
- the final max output of leaves is ``learning_rate * max_delta_step``
-- ``lambda_l1`` :raw-html:`<a id="lambda_l1" title="Permalink to this parameter" href="#lambda_l1">🔗︎</a>`, default = ``0.0``, type = double, aliases: ``reg_alpha``, constraints: ``lambda_l1 >= 0.0``
+- ``lambda_l1`` :raw-html:`<a id="lambda_l1" title="Permalink to this parameter" href="#lambda_l1">🔗︎</a>`, default = ``0.0``, type = double, aliases: ``reg_alpha``, ``l1_regularization``, constraints: ``lambda_l1 >= 0.0``
  - L1 regularization
-- ``lambda_l2`` :raw-html:`<a id="lambda_l2" title="Permalink to this parameter" href="#lambda_l2">🔗︎</a>`, default = ``0.0``, type = double, aliases: ``reg_lambda``, ``lambda``, constraints: ``lambda_l2 >= 0.0``
+- ``lambda_l2`` :raw-html:`<a id="lambda_l2" title="Permalink to this parameter" href="#lambda_l2">🔗︎</a>`, default = ``0.0``, type = double, aliases: ``reg_lambda``, ``lambda``, ``l2_regularization``, constraints: ``lambda_l2 >= 0.0``
- L2 regularization
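The two penalties enter the standard second-order boosting leaf formula in different places: L1 soft-thresholds the gradient sum, L2 inflates the Hessian sum in the denominator. A sketch of that formula (the usual gradient-boosting derivation, not a transcription of LightGBM's C++):

```python
import math

def leaf_output(sum_grad, sum_hess, lambda_l1=0.0, lambda_l2=0.0):
    """Optimal leaf value under L1/L2 regularization:
    -sign(G) * max(|G| - lambda_l1, 0) / (H + lambda_l2)."""
    g = math.copysign(max(abs(sum_grad) - lambda_l1, 0.0), sum_grad)
    return -g / (sum_hess + lambda_l2)

print(leaf_output(10.0, 5.0))                  # → -2.0 (no regularization)
print(leaf_output(10.0, 5.0, lambda_l2=5.0))   # → -1.0 (L2 shrinks the leaf)
print(leaf_output(10.0, 5.0, lambda_l1=10.0))  # → -0.0 (L1 zeroes it out)
```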
...
...
@@ -504,7 +504,7 @@ Learning Control Parameters
- set this to larger value for more accurate result, but it will slow down the training speed
-- ``monotone_constraints`` :raw-html:`<a id="monotone_constraints" title="Permalink to this parameter" href="#monotone_constraints">🔗︎</a>`, default = ``None``, type = multi-int, aliases: ``mc``, ``monotone_constraint``
+- ``monotone_constraints`` :raw-html:`<a id="monotone_constraints" title="Permalink to this parameter" href="#monotone_constraints">🔗︎</a>`, default = ``None``, type = multi-int, aliases: ``mc``, ``monotone_constraint``, ``monotonic_cst``
- used for constraints of monotonic features
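The constraint is a vector with one entry per feature: ``1`` means predictions must be non-decreasing in that feature, ``-1`` non-increasing, ``0`` unconstrained. A sketch of what each entry demands of the model's response (``check_monotone`` is an illustrative helper, not library code):

```python
# One constraint value per feature: 1 = non-decreasing, -1 = non-increasing,
# 0 = unconstrained, e.g. monotone_constraints = [1, -1, 0].
def check_monotone(values, direction):
    """Return True when the sequence respects the requested direction."""
    if direction == 0:
        return True
    pairs = list(zip(values, values[1:]))
    if direction == 1:
        return all(a <= b for a, b in pairs)
    return all(a >= b for a, b in pairs)

print(check_monotone([0.1, 0.4, 0.4, 0.9], 1))  # → True
print(check_monotone([0.1, 0.4, 0.2], 1))       # → False
```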
...
...
@@ -672,7 +672,7 @@ Dataset Parameters
- **Note**: if you specify ``monotone_constraints``, constraints will be enforced when choosing the split points, but not when fitting the linear models on leaves
-- ``max_bin`` :raw-html:`<a id="max_bin" title="Permalink to this parameter" href="#max_bin">🔗︎</a>`, default = ``255``, type = int, constraints: ``max_bin > 1``
+- ``max_bin`` :raw-html:`<a id="max_bin" title="Permalink to this parameter" href="#max_bin">🔗︎</a>`, default = ``255``, type = int, aliases: ``max_bins``, constraints: ``max_bin > 1``
- max number of bins that feature values will be bucketed in
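Histogram-based boosting replaces raw feature values with bin indices before looking for splits. A much-simplified sketch of that bucketing (LightGBM's real bin construction is considerably more involved; ``bin_feature`` only illustrates the idea):

```python
import bisect

def bin_feature(values, max_bin=255):
    """Bucket feature values into at most max_bin integer bins
    (toy quantile-style binning, not LightGBM's algorithm)."""
    distinct = sorted(set(values))
    if len(distinct) <= max_bin:
        # Few distinct values: one bin per value.
        mapping = {v: i for i, v in enumerate(distinct)}
        return [mapping[v] for v in values]
    # Otherwise pick max_bin - 1 boundaries at evenly spaced quantiles.
    step = len(distinct) / max_bin
    edges = [distinct[int(i * step)] for i in range(1, max_bin)]
    return [bisect.bisect_right(edges, v) for v in values]

print(bin_feature([3.2, 1.5, 3.2, 7.0]))  # → [1, 0, 1, 2]
```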
...
...
@@ -806,7 +806,7 @@ Dataset Parameters
- **Note**: despite the fact that specified columns will be completely ignored during the training, they still should have a valid format allowing LightGBM to load file successfully
-- ``categorical_feature`` :raw-html:`<a id="categorical_feature" title="Permalink to this parameter" href="#categorical_feature">🔗︎</a>`, default = ``""``, type = multi-int or string, aliases: ``cat_feature``, ``categorical_column``, ``cat_column``
+- ``categorical_feature`` :raw-html:`<a id="categorical_feature" title="Permalink to this parameter" href="#categorical_feature">🔗︎</a>`, default = ``""``, type = multi-int or string, aliases: ``cat_feature``, ``categorical_column``, ``cat_column``, ``categorical_features``
  - **Note**: internally, LightGBM constructs ``num_class * num_iterations`` trees for multi-class classification problems
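As documented further down in the config, the string form accepts two spellings: bare indices (``categorical_feature=0,1,2``) or a ``name:`` prefix followed by column names (``categorical_feature=name:c1,c2,c3``). A sketch of parsing those two forms (``parse_categorical_feature`` is a hypothetical helper, not LightGBM's parser):

```python
# Parse the two documented spellings of categorical_feature:
# "0,1,2" → column indices, "name:c1,c2,c3" → column names.
def parse_categorical_feature(spec):
    if spec.startswith("name:"):
        return [name.strip() for name in spec[len("name:"):].split(",")]
    return [int(idx) for idx in spec.split(",")] if spec else []

print(parse_categorical_feature("0,1,2"))          # → [0, 1, 2]
print(parse_categorical_feature("name:c1,c2,c3"))  # → ['c1', 'c2', 'c3']
```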
...
...
@@ -174,7 +174,7 @@ struct Config {
double learning_rate = 0.1;
// default = 31
-// alias = num_leaf, max_leaves, max_leaf
+// alias = num_leaf, max_leaves, max_leaf, max_leaf_nodes
// check = >1
// check = <=131072
// desc = max number of leaves in one tree
...
...
@@ -261,7 +261,7 @@ struct Config {
// desc = ``<= 0`` means no limit
int max_depth = -1;
-// alias = min_data_per_leaf, min_data, min_child_samples
+// alias = min_data_per_leaf, min_data, min_child_samples, min_samples_leaf
// check = >=0
// desc = minimal number of data in one leaf. Can be used to deal with over-fitting
// desc = **Note**: this is an approximation based on the Hessian, so occasionally you may observe splits which produce leaf nodes that have less than this many observations
...
...
@@ -360,12 +360,12 @@ struct Config {
// desc = the final max output of leaves is ``learning_rate * max_delta_step``
double max_delta_step = 0.0;
-// alias = reg_alpha
+// alias = reg_alpha, l1_regularization
// check = >=0.0
// desc = L1 regularization
double lambda_l1 = 0.0;
-// alias = reg_lambda, lambda
+// alias = reg_lambda, lambda, l2_regularization
// check = >=0.0
// desc = L2 regularization
double lambda_l2 = 0.0;
...
...
@@ -453,7 +453,7 @@ struct Config {
int top_k = 20;
// type = multi-int
-// alias = mc, monotone_constraint
+// alias = mc, monotone_constraint, monotonic_cst
// default = None
// desc = used for constraints of monotonic features
// desc = ``1`` means increasing, ``-1`` means decreasing, ``0`` means non-constraint
...
...
@@ -586,6 +586,7 @@ struct Config {
// descl2 = **Note**: if you specify ``monotone_constraints``, constraints will be enforced when choosing the split points, but not when fitting the linear models on leaves
bool linear_tree = false;
+// alias = max_bins
// check = >1
// desc = max number of bins that feature values will be bucketed in
// desc = small number of bins may reduce training accuracy but may increase general power (deal with over-fitting)
...
...
@@ -691,7 +692,7 @@ struct Config {
std::string ignore_column = "";
// type = multi-int or string
-// alias = cat_feature, categorical_column, cat_column
+// alias = cat_feature, categorical_column, cat_column, categorical_features
// desc = used to specify categorical features
// desc = use number for index, e.g. ``categorical_feature=0,1,2`` means column\_0, column\_1 and column\_2 are categorical features
// desc = add a prefix ``name:`` for column name, e.g. ``categorical_feature=name:c1,c2,c3`` means c1, c2 and c3 are categorical features