"...git@developer.sourcefind.cn:tianlh/lightgbm-dcu.git" did not exist on "f1a1486929f8e981c44cba45132f5b4929453f8a"
Commit 0a9d4cc2 authored by Nikita Titov, committed by Guolin Ke

[docs] generate parameters description from config file. Final stage (#1421)

* removed excess whitespaces

* don't use built-in name for variable

* simplified line parsing

* changed link to related

* run parameter_generator.py

* removed old targets

* use tuples instead of list where possible

* hotfix: descriptions were erased and only the last one was kept

* run parameter_generator.py

* separated checks from aliases section
parent c0147cbe
...@@ -30,402 +30,499 @@ If one parameter appears in both command line and config file, LightGBM will use ...
Core Parameters
---------------

-  ``config``, default = ``""``, type = string, aliases: ``config_file``

   -  path of config file

   -  **Note**: can be used only in CLI version

-  ``task``, default = ``train``, type = enum, options: ``train``, ``predict``, ``convert_model``, ``refit``, aliases: ``task_type``

   -  ``train``, for training, aliases: ``training``

   -  ``predict``, for prediction, aliases: ``prediction``, ``test``

   -  ``convert_model``, for converting model file into if-else format, see more information in `IO Parameters <#io-parameters>`__

   -  ``refit``, for refitting existing models with new data, aliases: ``refit_tree``

   -  **Note**: can be used only in CLI version
-  ``objective``, default = ``regression``, type = enum, options: ``regression``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``quantile``, ``mape``, ``gamma``, ``tweedie``, ``binary``, ``multiclass``, ``multiclassova``, ``xentropy``, ``xentlambda``, ``lambdarank``, aliases: ``objective_type``, ``app``, ``application``

   -  regression application

      -  ``regression_l2``, L2 loss, aliases: ``regression``, ``mean_squared_error``, ``mse``, ``l2_root``, ``root_mean_squared_error``, ``rmse``

      -  ``regression_l1``, L1 loss, aliases: ``mean_absolute_error``, ``mae``

      -  ``huber``, `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__

      -  ``fair``, `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__

      -  ``poisson``, `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__

      -  ``quantile``, `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__

      -  ``mape``, `MAPE loss <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`__, aliases: ``mean_absolute_percentage_error``

      -  ``gamma``, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be `gamma-distributed <https://en.wikipedia.org/wiki/Gamma_distribution#Applications>`__

      -  ``tweedie``, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be `tweedie-distributed <https://en.wikipedia.org/wiki/Tweedie_distribution#Applications>`__

   -  ``binary``, binary `log loss <https://en.wikipedia.org/wiki/Cross_entropy>`__ classification (or logistic regression). Requires labels in {0, 1}; see ``xentropy`` for general probability labels in [0, 1]

   -  multi-class classification application

      -  ``multiclass``, `softmax <https://en.wikipedia.org/wiki/Softmax_function>`__ objective function, aliases: ``softmax``

      -  ``multiclassova``, `One-vs-All <https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-rest>`__ binary objective function, aliases: ``multiclass_ova``, ``ova``, ``ovr``

      -  ``num_class`` should be set as well

   -  cross-entropy application

      -  ``xentropy``, objective function for cross-entropy (with optional linear weights), aliases: ``cross_entropy``

      -  ``xentlambda``, alternative parameterization of cross-entropy, aliases: ``cross_entropy_lambda``

      -  label is anything in interval [0, 1]

   -  ``lambdarank``, `lambdarank <https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf>`__ application

      -  label should be ``int`` type in lambdarank tasks, and larger number represents the higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)

      -  `label_gain <#objective-parameters>`__ can be used to set the gain (weight) of ``int`` label

      -  all values in ``label`` must be smaller than number of elements in ``label_gain``
-  ``boosting``, default = ``gbdt``, type = enum, options: ``gbdt``, ``gbrt``, ``rf``, ``random_forest``, ``dart``, ``goss``, aliases: ``boosting_type``, ``boost``

   -  ``gbdt``, traditional Gradient Boosting Decision Tree, aliases: ``gbrt``

   -  ``rf``, Random Forest, aliases: ``random_forest``

   -  ``dart``, `Dropouts meet Multiple Additive Regression Trees <https://arxiv.org/abs/1505.01866>`__

   -  ``goss``, Gradient-based One-Side Sampling

-  ``data``, default = ``""``, type = string, aliases: ``train``, ``train_data``, ``data_filename``

   -  path of training data, LightGBM will train from this data

-  ``valid``, default = ``""``, type = string, aliases: ``test``, ``valid_data``, ``valid_data_file``, ``test_data``, ``valid_filenames``

   -  path(s) of validation/test data, LightGBM will output metrics for these data

   -  support multiple validation data, separated by ``,``

-  ``num_iterations``, default = ``100``, type = int, aliases: ``num_iteration``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``, ``num_boost_round``, ``n_estimators``, constraints: ``num_iterations >= 0``

   -  number of boosting iterations

   -  **Note**: for Python/R-package, **this parameter is ignored**, use ``num_boost_round`` (Python) or ``nrounds`` (R) input arguments of ``train`` and ``cv`` methods instead

   -  **Note**: internally, LightGBM constructs ``num_class * num_iterations`` trees for multi-class classification problems
-  ``learning_rate``, default = ``0.1``, type = double, aliases: ``shrinkage_rate``, constraints: ``learning_rate > 0.0``

   -  shrinkage rate

   -  in ``dart``, it also affects the normalization weights of dropped trees

-  ``num_leaves``, default = ``31``, type = int, aliases: ``num_leaf``, constraints: ``num_leaves > 1``

   -  max number of leaves in one tree
-  ``tree_learner``, default = ``serial``, type = enum, options: ``serial``, ``feature``, ``data``, ``voting``, aliases: ``tree``, ``tree_learner_type``

   -  ``serial``, single machine tree learner

   -  ``feature``, feature parallel tree learner, aliases: ``feature_parallel``

   -  ``data``, data parallel tree learner, aliases: ``data_parallel``

   -  ``voting``, voting parallel tree learner, aliases: ``voting_parallel``

   -  refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details

-  ``num_threads``, default = ``0``, type = int, aliases: ``num_thread``, ``nthread``, ``nthreads``

   -  number of threads for LightGBM

   -  ``0`` means default number of threads in OpenMP

   -  for the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use `hyper-threading <https://en.wikipedia.org/wiki/Hyper-threading>`__ to generate 2 threads per CPU core)

   -  do not set it too large if your dataset is small (for instance, do not use 64 threads for a dataset with 10,000 rows)

   -  be aware that a task manager or any similar CPU monitoring tool might report cores not being fully utilized. **This is normal**

   -  for parallel learning, do not use all CPU cores because this will cause poor performance for the network communication

-  ``device_type``, default = ``cpu``, type = enum, options: ``cpu``, ``gpu``, aliases: ``device``

   -  device for the tree learning, you can use GPU to achieve the faster learning

   -  **Note**: it is recommended to use the smaller ``max_bin`` (e.g. 63) to get the better speed up

   -  **Note**: for the faster speed, GPU uses 32-bit float point to sum up by default, so this may affect the accuracy for some tasks. You can set ``gpu_use_dp=true`` to enable 64-bit float point, but it will slow down the training

   -  **Note**: refer to `Installation Guide <./Installation-Guide.rst#build-gpu-version>`__ to build LightGBM with GPU support

-  ``seed``, default = ``0``, type = int, aliases: ``random_seed``

   -  this seed is used to generate other seeds, e.g. ``data_random_seed``, ``feature_fraction_seed``

   -  will be overridden, if you set other seeds
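As a sketch of how the core parameters above fit together, a minimal CLI config file (passed via ``config``) might look like the following. This is an illustrative fragment only; the data file names are placeholders:

```
# hypothetical train.conf -- file names are placeholders
task = train
objective = binary
boosting = gbdt
data = train.txt
valid = valid.txt
num_iterations = 100
learning_rate = 0.1
num_leaves = 31
num_threads = 4
```

Remember that if a parameter is given both on the command line and in the config file, the command-line value wins.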
Learning Control Parameters
---------------------------

-  ``max_depth``, default = ``-1``, type = int

   -  limit the max depth for tree model. This is used to deal with over-fitting when ``#data`` is small. Tree still grows leaf-wise

   -  ``< 0`` means no limit

-  ``min_data_in_leaf``, default = ``20``, type = int, aliases: ``min_data_per_leaf``, ``min_data``, ``min_child_samples``, constraints: ``min_data_in_leaf >= 0``

   -  minimal number of data in one leaf. Can be used to deal with over-fitting

-  ``min_sum_hessian_in_leaf``, default = ``1e-3``, type = double, aliases: ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``, ``min_child_weight``, constraints: ``min_sum_hessian_in_leaf >= 0.0``

   -  minimal sum hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting
-  ``bagging_fraction``, default = ``1.0``, type = double, aliases: ``sub_row``, ``subsample``, ``bagging``, constraints: ``0.0 < bagging_fraction <= 1.0``

   -  like ``feature_fraction``, but this will randomly select part of data without resampling

   -  can be used to speed up training

   -  can be used to deal with over-fitting

   -  **Note**: to enable bagging, ``bagging_freq`` should be set to a non zero value as well

-  ``bagging_freq``, default = ``0``, type = int, aliases: ``subsample_freq``

   -  frequency for bagging

   -  ``0`` means disable bagging; ``k`` means perform bagging at every ``k`` iteration

   -  **Note**: to enable bagging, ``bagging_fraction`` should be set to value smaller than ``1.0`` as well

-  ``bagging_seed``, default = ``3``, type = int, aliases: ``bagging_fraction_seed``

   -  random seed for bagging

-  ``feature_fraction``, default = ``1.0``, type = double, aliases: ``sub_feature``, ``colsample_bytree``, constraints: ``0.0 < feature_fraction <= 1.0``

   -  LightGBM will randomly select part of features on each iteration if ``feature_fraction`` is smaller than ``1.0``. For example, if you set it to ``0.8``, LightGBM will select 80% of features before training each tree

   -  can be used to speed up training

   -  can be used to deal with over-fitting

-  ``feature_fraction_seed``, default = ``2``, type = int

   -  random seed for ``feature_fraction``
-  ``early_stopping_round``, default = ``0``, type = int, aliases: ``early_stopping_rounds``, ``early_stopping``

   -  will stop training if one metric of one validation data doesn't improve in last ``early_stopping_round`` rounds

   -  ``<= 0`` means disable

-  ``max_delta_step``, default = ``0.0``, type = double, aliases: ``max_tree_output``, ``max_leaf_output``

   -  used to limit the max output of tree leaves

   -  ``<= 0`` means no constraint

   -  the final max output of leaves is ``learning_rate * max_delta_step``
-  ``lambda_l1``, default = ``0.0``, type = double, aliases: ``reg_alpha``, constraints: ``lambda_l1 >= 0.0``

   -  L1 regularization

-  ``lambda_l2``, default = ``0.0``, type = double, aliases: ``reg_lambda``, constraints: ``lambda_l2 >= 0.0``

   -  L2 regularization

-  ``min_gain_to_split``, default = ``0.0``, type = double, aliases: ``min_split_gain``, constraints: ``min_gain_to_split >= 0.0``

   -  the minimal gain to perform split

-  ``drop_rate``, default = ``0.1``, type = double, constraints: ``0.0 <= drop_rate <= 1.0``

   -  used only in ``dart``

   -  dropout rate

-  ``max_drop``, default = ``50``, type = int

   -  used only in ``dart``

   -  max number of dropped trees on one iteration

   -  ``<=0`` means no limit

-  ``skip_drop``, default = ``0.5``, type = double, constraints: ``0.0 <= skip_drop <= 1.0``

   -  used only in ``dart``

   -  probability of skipping drop
-  ``xgboost_dart_mode``, default = ``false``, type = bool

   -  used only in ``dart``

   -  set this to ``true``, if you want to use xgboost dart mode

-  ``uniform_drop``, default = ``false``, type = bool

   -  used only in ``dart``

   -  set this to ``true``, if you want to use uniform drop

-  ``drop_seed``, default = ``4``, type = int

   -  used only in ``dart``

   -  random seed to choose dropping models

-  ``top_rate``, default = ``0.2``, type = double, constraints: ``0.0 <= top_rate <= 1.0``

   -  used only in ``goss``

   -  the retain ratio of large gradient data

-  ``other_rate``, default = ``0.1``, type = double, constraints: ``0.0 <= other_rate <= 1.0``

   -  used only in ``goss``

   -  the retain ratio of small gradient data
-  ``min_data_per_group``, default = ``100``, type = int, constraints: ``min_data_per_group > 0``

   -  minimal number of data per categorical group

-  ``max_cat_threshold``, default = ``32``, type = int, constraints: ``max_cat_threshold > 0``

   -  used for the categorical features

   -  limit the max threshold points in categorical features

-  ``cat_l2``, default = ``10.0``, type = double, constraints: ``cat_l2 >= 0.0``

   -  used for the categorical features

   -  L2 regularization in categorical split

-  ``cat_smooth``, default = ``10.0``, type = double, constraints: ``cat_smooth >= 0.0``

   -  used for the categorical features

   -  this can reduce the effect of noises in categorical features, especially for categories with few data

-  ``max_cat_to_onehot``, default = ``4``, type = int, constraints: ``max_cat_to_onehot > 0``

   -  when number of categories of one feature smaller than or equal to ``max_cat_to_onehot``, one-vs-other split algorithm will be used
-  ``top_k``, default = ``20``, type = int, aliases: ``topk``, constraints: ``top_k > 0``

   -  used in `Voting parallel <./Parallel-Learning-Guide.rst#choose-appropriate-parallel-algorithm>`__

   -  set this to larger value for more accurate result, but it will slow down the training speed

-  ``monotone_constraints``, default = ``None``, type = multi-int, aliases: ``mc``, ``monotone_constraint``

   -  used for constraints of monotonic features

   -  ``1`` means increasing, ``-1`` means decreasing, ``0`` means non-constraint

   -  you need to specify all features in order. For example, ``mc=-1,0,1`` means decreasing for 1st feature, non-constraint for 2nd feature and increasing for the 3rd feature

-  ``forcedsplits_filename``, default = ``""``, type = string, aliases: ``fs``, ``forced_splits_filename``, ``forced_splits_file``, ``forced_splits``

   -  path to a ``.json`` file that specifies splits to force at the top of every decision tree before best-first learning commences

   -  ``.json`` file can be arbitrarily nested, and each split contains ``feature``, ``threshold`` fields, as well as ``left`` and ``right`` fields representing subsplits

   -  categorical splits are forced in a one-hot fashion, with ``left`` representing the split containing the feature value and ``right`` representing other values

   -  see `this file <https://github.com/Microsoft/LightGBM/tree/master/examples/binary_classification/forced_splits.json>`__ as an example
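As an illustration of the ``forcedsplits_filename`` format described above, a minimal hand-written sketch might look like this (the feature indices and thresholds are made up):

```json
{
  "feature": 0,
  "threshold": 0.5,
  "left": {
    "feature": 2,
    "threshold": 10.0
  }
}
```

Here the root split is forced on feature ``0`` at threshold ``0.5``, and its ``left`` child is forced to split on feature ``2``; a branch with no ``left``/``right`` field presumably carries no further forced splits and is learned as usual.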
IO Parameters
-------------

-  ``verbosity``, default = ``1``, type = int, aliases: ``verbose``

   -  controls the level of LightGBM's verbosity

   -  ``< 0``: Fatal, ``= 0``: Error (Warn), ``> 0``: Info

-  ``max_bin``, default = ``255``, type = int, constraints: ``max_bin > 1``

   -  max number of bins that feature values will be bucketed in

   -  small number of bins may reduce training accuracy but may increase general power (deal with over-fitting)

   -  LightGBM will auto compress memory according to ``max_bin``. For example, LightGBM will use ``uint8_t`` for feature value if ``max_bin=255``

-  ``min_data_in_bin``, default = ``3``, type = int, constraints: ``min_data_in_bin > 0``

   -  minimal number of data inside one bin

   -  use this to avoid one-data-one-bin (potential over-fitting)
-  ``bin_construct_sample_cnt``, default = ``200000``, type = int, aliases: ``subsample_for_bin``, constraints: ``bin_construct_sample_cnt > 0``

   -  number of sampled data used to construct histogram bins

   -  setting this to larger value will give better training result, but will increase data loading time

   -  set this to larger value if data is very sparse

-  ``histogram_pool_size``, default = ``-1.0``, type = double

   -  max cache size in MB for historical histogram

   -  ``< 0`` means no limit

-  ``data_random_seed``, default = ``1``, type = int

   -  random seed for data partition in parallel learning (excluding the ``feature_parallel`` mode)
-  ``output_model``, default = ``LightGBM_model.txt``, type = string, aliases: ``model_output``, ``model_out``

   -  filename of output model in training

-  ``snapshot_freq``, default = ``-1``, type = int

   -  frequency of saving model file snapshot

   -  set this to positive value to enable this function. For example, the model file will be snapshotted at each iteration if ``snapshot_freq=1``

-  ``input_model``, default = ``""``, type = string, aliases: ``model_input``, ``model_in``

   -  filename of input model

   -  for ``prediction`` task, this model will be applied to prediction data

   -  for ``train`` task, training will be continued from this model

   -  **Note**: can be used only in CLI version

-  ``output_result``, default = ``LightGBM_predict_result.txt``, type = string, aliases: ``predict_result``, ``prediction_result``

   -  filename of prediction result in ``prediction`` task

-  ``initscore_filename``, default = ``""``, type = string, aliases: ``init_score_filename``, ``init_score_file``, ``init_score``, ``input_init_score``

   -  path of file with training initial score

   -  if ``""``, will use ``train_data_file`` + ``.init`` (if exists)

-  ``valid_data_initscores``, default = ``""``, type = string, aliases: ``valid_data_init_scores``, ``valid_init_score_file``, ``valid_init_score``

   -  path(s) of file(s) with validation initial score(s)

   -  if ``""``, will use ``valid_data_file`` + ``.init`` (if exists)

   -  separate by ``,`` for multi-validation data

-  ``pre_partition``, default = ``false``, type = bool, aliases: ``is_pre_partition``

   -  used for parallel learning (excluding the ``feature_parallel`` mode)

   -  ``true`` if training data are pre-partitioned, and different machines use different partitions
-  ``enable_bundle``, default = ``true``, type = bool, aliases: ``is_enable_bundle``, ``bundle``

   -  set this to ``false`` to disable Exclusive Feature Bundling (EFB), which is described in `LightGBM: A Highly Efficient Gradient Boosting Decision Tree <https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree>`__

   -  **Note**: disabling this may cause the slow training speed for sparse datasets

-  ``max_conflict_rate``, default = ``0.0``, type = double, constraints: ``0.0 <= max_conflict_rate < 1.0``

   -  max conflict rate for bundles in EFB

   -  set this to ``0.0`` to disallow the conflict and provide more accurate results

   -  set this to a larger value to achieve faster speed

-  ``is_enable_sparse``, default = ``true``, type = bool, aliases: ``is_sparse``, ``enable_sparse``, ``sparse``

   -  used to enable/disable sparse optimization

-  ``sparse_threshold``, default = ``0.8``, type = double, constraints: ``0.0 < sparse_threshold <= 1.0``

   -  the threshold of zero elements percentage for treating a feature as a sparse one

-  ``use_missing``, default = ``true``, type = bool

   -  set this to ``false`` to disable the special handle of missing value

-  ``zero_as_missing``, default = ``false``, type = bool

   -  set this to ``true`` to treat all zero as missing values (including the unshown values in libsvm/sparse matrices)

   -  set this to ``false`` to use ``na`` for representing missing values
This will provide faster data loading speed. But it may run out of memory when the data file is very big
- ``two_round``, default = ``false``, type = bool, aliases: ``two_round_loading``, ``use_two_round_loading``
- set this to ``true`` if data file is too big to fit in memory - set this to ``true`` if data file is too big to fit in memory
- ``save_binary``, default=\ ``false``, type=bool, alias=\ ``is_save_binary``, ``is_save_binary_file`` - by default, LightGBM will map data file to memory and load features from memory. This will provide faster data loading speed, but may cause run out of memory error when the data file is very big
- ``save_binary``, default = ``false``, type = bool, aliases: ``is_save_binary``, ``is_save_binary_file``
- if ``true``, LightGBM will save the dataset (including validation data) to a binary file. This speed ups the data loading for the next time
- if ``true`` LightGBM will save the dataset (include validation data) to a binary file. - ``enable_load_from_binary_file``, default = ``true``, type = bool, aliases: ``load_from_binary_file``, ``binary_load``, ``load_binary``
Speed up the data loading for the next time
- ``verbosity``, default=\ ``1``, type=int, alias=\ ``verbose`` - set this to ``true`` to enable autoloading from previous saved binary datasets
- ``<0`` = Fatal, - set this to ``false`` to ignore binary datasets
``=0`` = Error (Warn),
``>0`` = Info
- ``header``, default=\ ``false``, type=bool, alias=\ ``has_header`` - ``header``, default = ``false``, type = bool, aliases: ``has_header``
- set this to ``true`` if input data has header - set this to ``true`` if input data has header
- ``label_column``, default = ``""``, type = int or string, aliases: ``label``
- used to specify the label column
- use number for index, e.g. ``label=0`` means column\_0 is the label
- add a prefix ``name:`` for column name, e.g. ``label=name:is_click``
- ``weight_column``, default = ``""``, type = int or string, aliases: ``weight``
- used to specify the weight column
- use number for index, e.g. ``weight=0`` means column\_0 is the weight
- add a prefix ``name:`` for column name, e.g. ``weight=name:weight``
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``, e.g. when label is column\_0, and weight is column\_1, the correct parameter is ``weight=0``
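The index-offset rule in the note above can be sketched as a tiny helper (hypothetical, for illustration only): indices at or past the label column shift right by one when mapped back to the file.

```python
# Hypothetical helper, not part of LightGBM: resolve a label-relative
# column index to the absolute column index in the data file.

def absolute_column(index, label_index):
    return index if index < label_index else index + 1

# label is column_0 and weight is column_1 in the file -> pass weight=0
print(absolute_column(0, label_index=0))   # 1
# label is column_2: weight=1 still means file column_1 ...
print(absolute_column(1, label_index=2))   # 1
# ... while weight=2 means file column_3
print(absolute_column(2, label_index=2))   # 3
```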
- ``group_column``, default = ``""``, type = int or string, aliases: ``group``, ``group_id``, ``query_column``, ``query``, ``query_id``
- used to specify the query/group id column
- use number for index, e.g. ``query=0`` means column\_0 is the query id
- add a prefix ``name:`` for column name, e.g. ``query=name:query_id``
- **Note**: data should be grouped by query\_id
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``, e.g. when label is column\_0 and query\_id is column\_1, the correct parameter is ``query=0``
- ``ignore_column``, default = ``""``, type = multi-int or string, aliases: ``ignore_feature``, ``blacklist``
- used to specify some ignoring columns in training
- use number for index, e.g. ``ignore_column=0,1,2`` means column\_0, column\_1 and column\_2 will be ignored
@@ -433,297 +530,287 @@ IO Parameters
- **Note**: works only in case of loading data directly from file
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``
- ``categorical_feature``, default = ``""``, type = multi-int or string, aliases: ``cat_feature``, ``categorical_column``, ``cat_column``
- used to specify categorical features
- use number for index, e.g. ``categorical_feature=0,1,2`` means column\_0, column\_1 and column\_2 are categorical features
- add a prefix ``name:`` for column name, e.g. ``categorical_feature=name:c1,c2,c3`` means c1, c2 and c3 are categorical features
- **Note**: only supports categorical with ``int`` type
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``
- **Note**: all values should be less than ``Int32.MaxValue`` (2147483647)
- **Note**: the negative values will be treated as **missing values**
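Both column-specifier spellings described above follow the same pattern; here is a sketch of how such a spec could be interpreted (this parser is illustrative, not LightGBM's):

```python
# Illustrative parser for the two accepted spellings: bare comma-separated
# indices, or a "name:" prefix followed by comma-separated column names.

def parse_column_spec(spec):
    if spec.startswith("name:"):
        return ("names", spec[len("name:"):].split(","))
    return ("indices", [int(i) for i in spec.split(",")])

print(parse_column_spec("0,1,2"))           # ('indices', [0, 1, 2])
print(parse_column_spec("name:c1,c2,c3"))   # ('names', ['c1', 'c2', 'c3'])
```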
- ``predict_raw_score``, default = ``false``, type = bool, aliases: ``is_predict_raw_score``, ``predict_rawscore``, ``raw_score``
- used only in ``prediction`` task
- set this to ``true`` to predict only the raw scores
- set this to ``false`` to predict transformed scores
- ``predict_leaf_index``, default = ``false``, type = bool, aliases: ``is_predict_leaf_index``, ``leaf_index``
- used only in ``prediction`` task
- set this to ``true`` to predict with leaf index of all trees
- ``predict_contrib``, default = ``false``, type = bool, aliases: ``is_predict_contrib``, ``contrib``
- used only in ``prediction`` task
- set this to ``true`` to estimate `SHAP values <https://arxiv.org/abs/1706.06060>`__, which represent how each feature contributes to each prediction
- produces ``#features + 1`` values where the last value is the expected value of the model output over the training data
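With made-up numbers, the shape of a ``predict_contrib`` result for one sample looks like this (the values are invented; only the length and the additive decomposition are the point):

```python
# Invented numbers illustrating the documented shape of a contributions row.

num_features = 3
contribs = [0.2, -0.5, 0.1, 1.3]   # 3 feature contributions + expected value

print(len(contribs) == num_features + 1)   # True: #features + 1 values
raw_prediction = sum(contribs)             # contributions sum to the raw score
print(round(raw_prediction, 6))            # 1.1
```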
- ``num_iteration_predict``, default = ``-1``, type = int
- used only in ``prediction`` task
- used to specify how many trained iterations will be used in prediction
- ``<= 0`` means no limit
- ``pred_early_stop``, default = ``false``, type = bool
- used only in ``prediction`` task
- if ``true``, will use early-stopping to speed up the prediction. May affect the accuracy
- ``pred_early_stop_freq``, default = ``10``, type = int
- used only in ``prediction`` task
- the frequency of checking early-stopping prediction
- ``pred_early_stop_margin``, default = ``10.0``, type = double
- used only in ``prediction`` task
- the threshold of margin in early-stopping prediction
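The three ``pred_early_stop*`` parameters interact roughly as follows; this is a hedged sketch of the idea, not LightGBM's actual implementation:

```python
# Sketch only: accumulate per-tree outputs, and every `freq` trees stop
# early if the absolute margin already exceeds the threshold, since the
# remaining trees are unlikely to flip the decision.

def predict_with_early_stop(tree_outputs, freq=10, margin=10.0):
    score, used = 0.0, 0
    for i, out in enumerate(tree_outputs, start=1):
        score += out
        used = i
        if i % freq == 0 and abs(score) > margin:
            break
    return score, used

trees = [1.5] * 100                  # a model whose trees all push one way
score, used = predict_with_early_stop(trees, freq=10, margin=10.0)
print(used)                          # 10: stopped at the first check, 15.0 > 10.0
```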
- ``convert_model_language``, default = ``""``, type = string
- used only in ``convert_model`` task
- only ``cpp`` is supported yet
- if ``convert_model_language`` is set and ``task=train``, the model will also be converted
- ``convert_model``, default = ``gbdt_prediction.cpp``, type = string, aliases: ``convert_model_file``
- used only in ``convert_model`` task
- output filename of converted model

Objective Parameters
--------------------

- ``num_class``, default = ``1``, type = int, aliases: ``num_classes``, constraints: ``num_class > 0``
- used only in ``multi-class`` classification application
- ``is_unbalance``, default = ``false``, type = bool, aliases: ``unbalanced_sets``
- used only in ``binary`` application
- set this to ``true`` if training data are unbalanced
- **Note**: this parameter cannot be used at the same time with ``scale_pos_weight``, choose only **one** of them
- ``scale_pos_weight``, default = ``1.0``, type = double, constraints: ``scale_pos_weight > 0.0``
- used only in ``binary`` application
- weight of labels with positive class
- **Note**: this parameter cannot be used at the same time with ``is_unbalance``, choose only **one** of them
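A common rule of thumb for picking ``scale_pos_weight`` on unbalanced binary data is the negative-to-positive ratio (a heuristic, not something this page mandates):

```python
# Heuristic only: weight positives so both classes contribute comparably.

labels = [0] * 90 + [1] * 10      # 90 negatives, 10 positives
n_pos = sum(labels)
n_neg = len(labels) - n_pos
scale_pos_weight = n_neg / n_pos
print(scale_pos_weight)           # 9.0
```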
- ``sigmoid``, default = ``1.0``, type = double, constraints: ``sigmoid > 0.0``
- used only in ``binary`` and ``multiclassova`` classification and in ``lambdarank`` applications
- parameter for the sigmoid function
- ``boost_from_average``, default = ``true``, type = bool
- used only in ``regression``, ``binary`` and ``cross-entropy`` applications
- adjusts initial score to the mean of labels for faster convergence
- ``reg_sqrt``, default = ``false``, type = bool
- used only in ``regression`` application
- used to fit ``sqrt(label)`` instead of original values, and prediction result will also be automatically converted to ``prediction^2``
- might be useful in case of large-range labels
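The ``reg_sqrt`` transform can be spelled out in plain Python (a sketch only; LightGBM applies this internally):

```python
# Training sees sqrt(label); predictions are squared back into the
# original scale, compressing large-range labels during fitting.
import math

labels = [1.0, 100.0, 10000.0]
training_targets = [math.sqrt(y) for y in labels]   # [1.0, 10.0, 100.0]

model_output = 10.0                                 # some raw prediction
final_prediction = model_output ** 2                # converted back
print(training_targets, final_prediction)           # ... 100.0
```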
- ``alpha``, default = ``0.9``, type = double, constraints: ``0.0 < alpha < 1.0``
- used only in ``huber`` and ``quantile`` ``regression`` applications
- parameter for `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__ and `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- ``fair_c``, default = ``1.0``, type = double, constraints: ``fair_c > 0.0``
- used only in ``fair`` ``regression`` application
- parameter for `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__
- ``poisson_max_delta_step``, default = ``0.7``, type = double, constraints: ``poisson_max_delta_step > 0.0``
- used only in ``poisson`` ``regression`` application
- parameter for `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__ to safeguard optimization
- ``tweedie_variance_power``, default = ``1.5``, type = double, constraints: ``1.0 <= tweedie_variance_power < 2.0``
- used only in ``tweedie`` ``regression`` application
- used to control the variance of the tweedie distribution
- set this closer to ``2`` to shift towards a **Gamma** distribution
- set this closer to ``1`` to shift towards a **Poisson** distribution
- ``max_position``, default = ``20``, type = int, constraints: ``max_position > 0``
- used only in ``lambdarank`` application
- optimizes `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__ at this position
- ``label_gain``, default = ``0,1,3,7,15,31,63,...,2^30-1``, type = multi-double
- used only in ``lambdarank`` application
- relevant gain for labels. For example, the gain of label ``2`` is ``3`` in case of default label gains
- separate by ``,``
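The default gains follow ``2^i - 1``, which is where "the gain of label ``2`` is ``3``" comes from:

```python
# Reproducing the documented default label gains 0,1,3,7,15,...,2^30-1.

label_gain = [2 ** i - 1 for i in range(31)]
print(label_gain[:5])    # [0, 1, 3, 7, 15]
print(label_gain[2])     # 3: the gain of label 2 under the defaults
```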
Metric Parameters
-----------------

- ``metric``, default = ``""``, type = multi-enum, aliases: ``metrics``, ``metric_types``
- metric(s) to be evaluated on the evaluation sets **in addition** to what is provided in the training arguments
- ``""`` (empty string or not specified) means that metric corresponding to specified ``objective`` will be used (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
- ``"None"`` (string, **not** a ``None`` value) means that no metric will be registered, aliases: ``na``
- ``l1``, absolute loss, aliases: ``mean_absolute_error``, ``mae``, ``regression_l1``
- ``l2``, square loss, aliases: ``mean_squared_error``, ``mse``, ``regression_l2``, ``regression``
- ``l2_root``, root square loss, aliases: ``root_mean_squared_error``, ``rmse``
- ``quantile``, `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- ``mape``, `MAPE loss <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`__, aliases: ``mean_absolute_percentage_error``
- ``huber``, `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__
- ``fair``, `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__
- ``poisson``, negative log-likelihood for `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__
- ``gamma``, negative log-likelihood for **Gamma** regression
- ``gamma_deviance``, residual deviance for **Gamma** regression
- ``tweedie``, negative log-likelihood for **Tweedie** regression
- ``ndcg``, `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__
- ``map``, `MAP <https://makarandtapaswi.wordpress.com/2012/07/02/intuition-behind-average-precision-and-map/>`__, aliases: ``mean_average_precision``
- ``auc``, `AUC <https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>`__
- ``binary_logloss``, `log loss <https://en.wikipedia.org/wiki/Cross_entropy>`__, aliases: ``binary``
- ``binary_error``, for one sample: ``0`` for correct classification, ``1`` for error classification
- ``multi_logloss``, log loss for multi-class classification, aliases: ``multiclass``, ``softmax``, ``multiclassova``, ``multiclass_ova``, ``ova``, ``ovr``
- ``multi_error``, error rate for multi-class classification
- ``xentropy``, cross-entropy (with optional linear weights), aliases: ``cross_entropy``
- ``xentlambda``, "intensity-weighted" cross-entropy, aliases: ``cross_entropy_lambda``
- ``kldiv``, `Kullback-Leibler divergence <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence>`__, aliases: ``kullback_leibler``
- support multiple metrics, separated by ``,``
- ``metric_freq``, default = ``1``, type = int, aliases: ``output_freq``, constraints: ``metric_freq > 0``
- frequency for metric output
- ``is_provide_training_metric``, default = ``false``, type = bool, aliases: ``training_metric``, ``is_training_metric``, ``train_metric``
- set this to ``true`` to output metric result over training dataset
- ``eval_at``, default = ``1,2,3,4,5``, type = multi-int, aliases: ``ndcg_eval_at``, ``ndcg_at``
- used only with ``ndcg`` and ``map`` metrics
- `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__ evaluation positions, separated by ``,``
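For reference, a minimal textbook NDCG@k (independent of LightGBM's exact implementation) shows what evaluating at a position means:

```python
# Standard NDCG@k: gain 2^rel - 1, log2 position discount, normalized by
# the ideal (best-possible) ordering of the same relevance labels.
import math

def dcg_at_k(relevances, k):
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

ranked = [3, 2, 0, 1]                  # relevance labels in predicted order
print(round(ndcg_at_k(ranked, 1), 4))  # 1.0: the top item is the best one
```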
Network Parameters
------------------

- ``num_machines``, default = ``1``, type = int, aliases: ``num_machine``, constraints: ``num_machines > 0``
- the number of machines for parallel learning application
- this parameter needs to be set in both **socket** and **mpi** versions
- ``local_listen_port``, default = ``12400``, type = int, aliases: ``local_port``, ``port``, constraints: ``local_listen_port > 0``
- TCP listen port for local machines
- **Note**: don't forget to allow this port in firewall settings before training
- ``time_out``, default = ``120``, type = int, constraints: ``time_out > 0``
- socket time-out in minutes
- ``machine_list_filename``, default = ``""``, type = string, aliases: ``machine_list_file``, ``machine_list``, ``mlist``
- path of file that lists machines for this parallel learning application
- each line contains one IP and one port for one machine. The format is ``ip port`` (space as a separator)
- ``machines``, default = ``""``, type = string, aliases: ``workers``, ``nodes``
- list of machines in the following format: ``ip1:port1,ip2:port2``
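The ``machines`` string format above can be parsed in a few lines; this helper is illustrative, not part of LightGBM:

```python
# Illustrative parser for the documented "ip1:port1,ip2:port2" format.

def parse_machines(spec):
    out = []
    for item in spec.split(","):
        ip, port = item.split(":")
        out.append((ip, int(port)))
    return out

print(parse_machines("127.0.0.1:12400,127.0.0.2:12400"))
```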
GPU Parameters
--------------

- ``gpu_platform_id``, default = ``-1``, type = int
- OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform
- ``-1`` means the system-wide default platform
- ``gpu_device_id``, default = ``-1``, type = int
- OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID
- ``-1`` means the default device in the selected platform
- ``gpu_use_dp``, default = ``false``, type = bool
- set this to ``true`` to use double precision math on GPU (by default single precision is used)
.. end params list
@@ -789,39 +876,3 @@ In this case LightGBM will load the query file automatically if it exists.

Also, you can include query/group id column in your data file. Please refer to parameter ``group`` in above.

.. _Laurae++ Interactive Documentation: https://sites.google.com/view/lauraepp/parameters
@@ -3,7 +3,7 @@ Documentation

Documentation for LightGBM is generated using `Sphinx <http://www.sphinx-doc.org/>`__.

List of parameters and their descriptions in `Parameters.rst <./Parameters.rst>`__
is generated automatically from comments in `config file <https://github.com/Microsoft/LightGBM/blob/master/include/LightGBM/config.h>`__
by `this script <https://github.com/Microsoft/LightGBM/blob/master/helper/parameter_generator.py>`__.
@@ -30,18 +30,18 @@ def GetParameterInfos(config_hpp):

    elif cur_key is not None:
        line = line.strip()
        if line.startswith("//"):
            key, _, val = line[2:].partition("=")
            key = key.strip()
            val = val.strip()
            if key not in cur_info:
                if key == "descl2" and "desc" not in cur_info:
                    cur_info["desc"] = []
                elif key != "descl2":
                    cur_info[key] = []
            if key == "desc":
                cur_info["desc"].append(("l1", val))
            elif key == "descl2":
                cur_info["desc"].append(("l2", val))
            else:
                cur_info[key].append(val)
        elif line:
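The switch from ``split("=")`` to ``partition("=")`` above matters when a description itself contains ``=``: ``partition`` cuts only at the first separator and always returns three parts. A quick illustration with a made-up config comment:

```python
# The sample line is invented; the parsing mirrors the commit's new code.

line = "// desc = set this to ``k=1`` for the default"
key, _, val = line[2:].partition("=")
print(key.strip())   # 'desc'
print(val.strip())   # 'set this to ``k=1`` for the default' -- inner '=' survives
```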
@@ -79,22 +79,22 @@ def GetAlias(infos):

            name = y["name"][0]
            alias = y["alias"][0].split(',')
            for name2 in alias:
                pairs.append((name2.strip(), name))
    return pairs


def SetOneVarFromString(name, param_type, checks):
    ret = ""
    univar_mapper = {"int": "GetInt", "double": "GetDouble", "bool": "GetBool", "std::string": "GetString"}
    if "vector" not in param_type:
        ret += " %s(params, \"%s\", &%s);\n" % (univar_mapper[param_type], name, name)
        if len(checks) > 0:
            for check in checks:
                ret += " CHECK(%s %s);\n" % (name, check)
        ret += "\n"
    else:
        ret += " if (GetString(params, \"%s\", &tmp_str)) {\n" % (name)
        type2 = param_type.split("<")[1][:-1]
        if type2 == "std::string":
            ret += " %s = Common::Split(tmp_str.c_str(), ',');\n" % (name)
        else:
@@ -141,10 +141,10 @@ def GenParameterDescription(sections, descriptions, params_rst):
    if checks_len > 1:
        number1, sign1 = parse_check(checks[0])
        number2, sign2 = parse_check(checks[1], reverse=True)
        checks_str = ', constraints: ``{0} {1} {2} {3} {4}``'.format(number2, sign2, name, sign1, number1)
    elif checks_len == 1:
        number, sign = parse_check(checks[0])
        checks_str = ', constraints: ``{0} {1} {2}``'.format(name, sign, number)
    else:
        checks_str = ''
    main_desc = '- ``{0}``, default = ``{1}``, type = {2}{3}{4}{5}'.format(name, default, param_type, options_str, aliases_str, checks_str)
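The `parse_check` helper used above is not shown in this hunk; a plausible reconstruction (its exact implementation and signature are assumed) splits a C++ check string like `">=0.0"` into a number and a comparison sign, with `reverse=True` flipping the sign so the bound can be printed to the left of the parameter name:

```python
def parse_check(check, reverse=False):
    # Split a check like ">=0.0" into (number, sign); reverse flips the
    # inequality so it reads correctly on the left side of the name.
    sign = check[:2] if check[1] == '=' else check[:1]
    number = check[len(sign):]
    if reverse:
        sign = {'>': '<', '<': '>', '>=': '<=', '<=': '>='}[sign]
    return number, sign

number1, sign1 = parse_check("<=1.0")
number2, sign2 = parse_check(">=0.0", reverse=True)
print('``{0} {1} {2} {3} {4}``'.format(number2, sign2, "drop_rate", sign1, number1))
# -> ``0.0 <= drop_rate <= 1.0``
```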
@@ -173,12 +173,12 @@ def GenParameterCode(config_hpp, config_out_cpp):
    # alias table
    str_to_write += "std::unordered_map<std::string, std::string> Config::alias_table({\n"
    for pair in alias:
        str_to_write += "  {\"%s\", \"%s\"},\n" % (pair[0], pair[1])
    str_to_write += "});\n\n"
    # names
    str_to_write += "std::unordered_set<std::string> Config::parameter_set({\n"
    for name in names:
        str_to_write += "  \"%s\",\n" % (name)
    str_to_write += "});\n\n"
    # from strings
    str_to_write += "void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {\n"

@@ -187,12 +187,12 @@ def GenParameterCode(config_hpp, config_out_cpp):
    for y in x:
        if "[doc-only]" in y:
            continue
        param_type = y["inner_type"][0]
        name = y["name"][0]
        checks = []
        if "check" in y:
            checks = y["check"]
        tmp = SetOneVarFromString(name, param_type, checks)
        str_to_write += tmp
    # tails
    str_to_write += "}\n\n"

@@ -202,10 +202,10 @@ def GenParameterCode(config_hpp, config_out_cpp):
    for y in x:
        if "[doc-only]" in y:
            continue
        param_type = y["inner_type"][0]
        name = y["name"][0]
        if "vector" in param_type:
            if "int8" in param_type:
                str_to_write += "  str_buf << \"[%s: \" << Common::Join(Common::ArrayCast<int8_t, int>(%s),\",\") << \"]\\n\";\n" % (name, name)
            else:
                str_to_write += "  str_buf << \"[%s: \" << Common::Join(%s,\",\") << \"]\\n\";\n" % (name, name)
@@ -2,222 +2,238 @@
#include<LightGBM/config.h>

namespace LightGBM {

std::unordered_map<std::string, std::string> Config::alias_table({
  {"config_file", "config"},
  {"task_type", "task"},
  {"objective_type", "objective"},
  {"app", "objective"},
  {"application", "objective"},
  {"boosting_type", "boosting"},
  {"boost", "boosting"},
  {"train", "data"},
  {"train_data", "data"},
  {"data_filename", "data"},
  {"test", "valid"},
  {"valid_data", "valid"},
  {"valid_data_file", "valid"},
  {"test_data", "valid"},
  {"valid_filenames", "valid"},
  {"num_iteration", "num_iterations"},
  {"num_tree", "num_iterations"},
  {"num_trees", "num_iterations"},
  {"num_round", "num_iterations"},
  {"num_rounds", "num_iterations"},
  {"num_boost_round", "num_iterations"},
  {"n_estimators", "num_iterations"},
  {"shrinkage_rate", "learning_rate"},
  {"num_leaf", "num_leaves"},
  {"tree", "tree_learner"},
  {"tree_learner_type", "tree_learner"},
  {"num_thread", "num_threads"},
  {"nthread", "num_threads"},
  {"nthreads", "num_threads"},
  {"device", "device_type"},
  {"random_seed", "seed"},
  {"min_data_per_leaf", "min_data_in_leaf"},
  {"min_data", "min_data_in_leaf"},
  {"min_child_samples", "min_data_in_leaf"},
  {"min_sum_hessian_per_leaf", "min_sum_hessian_in_leaf"},
  {"min_sum_hessian", "min_sum_hessian_in_leaf"},
  {"min_hessian", "min_sum_hessian_in_leaf"},
  {"min_child_weight", "min_sum_hessian_in_leaf"},
  {"sub_row", "bagging_fraction"},
  {"subsample", "bagging_fraction"},
  {"bagging", "bagging_fraction"},
  {"subsample_freq", "bagging_freq"},
  {"bagging_fraction_seed", "bagging_seed"},
  {"sub_feature", "feature_fraction"},
  {"colsample_bytree", "feature_fraction"},
  {"early_stopping_rounds", "early_stopping_round"},
  {"early_stopping", "early_stopping_round"},
  {"max_tree_output", "max_delta_step"},
  {"max_leaf_output", "max_delta_step"},
  {"reg_alpha", "lambda_l1"},
  {"reg_lambda", "lambda_l2"},
  {"min_split_gain", "min_gain_to_split"},
  {"topk", "top_k"},
  {"mc", "monotone_constraints"},
  {"monotone_constraint", "monotone_constraints"},
  {"fs", "forcedsplits_filename"},
  {"forced_splits_filename", "forcedsplits_filename"},
  {"forced_splits_file", "forcedsplits_filename"},
  {"forced_splits", "forcedsplits_filename"},
  {"verbose", "verbosity"},
  {"subsample_for_bin", "bin_construct_sample_cnt"},
  {"model_output", "output_model"},
  {"model_out", "output_model"},
  {"model_input", "input_model"},
  {"model_in", "input_model"},
  {"predict_result", "output_result"},
  {"prediction_result", "output_result"},
  {"init_score_filename", "initscore_filename"},
  {"init_score_file", "initscore_filename"},
  {"init_score", "initscore_filename"},
  {"input_init_score", "initscore_filename"},
  {"valid_data_init_scores", "valid_data_initscores"},
  {"valid_init_score_file", "valid_data_initscores"},
  {"valid_init_score", "valid_data_initscores"},
  {"is_pre_partition", "pre_partition"},
  {"is_enable_bundle", "enable_bundle"},
  {"bundle", "enable_bundle"},
  {"is_sparse", "is_enable_sparse"},
  {"enable_sparse", "is_enable_sparse"},
  {"sparse", "is_enable_sparse"},
  {"two_round_loading", "two_round"},
  {"use_two_round_loading", "two_round"},
  {"is_save_binary", "save_binary"},
  {"is_save_binary_file", "save_binary"},
  {"load_from_binary_file", "enable_load_from_binary_file"},
  {"binary_load", "enable_load_from_binary_file"},
  {"load_binary", "enable_load_from_binary_file"},
  {"has_header", "header"},
  {"label", "label_column"},
  {"weight", "weight_column"},
  {"group", "group_column"},
  {"group_id", "group_column"},
  {"query_column", "group_column"},
  {"query", "group_column"},
  {"query_id", "group_column"},
  {"ignore_feature", "ignore_column"},
  {"blacklist", "ignore_column"},
  {"cat_feature", "categorical_feature"},
  {"categorical_column", "categorical_feature"},
  {"cat_column", "categorical_feature"},
  {"is_predict_raw_score", "predict_raw_score"},
  {"predict_rawscore", "predict_raw_score"},
  {"raw_score", "predict_raw_score"},
  {"is_predict_leaf_index", "predict_leaf_index"},
  {"leaf_index", "predict_leaf_index"},
  {"is_predict_contrib", "predict_contrib"},
  {"contrib", "predict_contrib"},
  {"convert_model_file", "convert_model"},
  {"num_classes", "num_class"},
  {"unbalanced_sets", "is_unbalance"},
  {"metrics", "metric"},
  {"metric_types", "metric"},
  {"output_freq", "metric_freq"},
  {"training_metric", "is_provide_training_metric"},
  {"is_training_metric", "is_provide_training_metric"},
  {"train_metric", "is_provide_training_metric"},
  {"ndcg_eval_at", "eval_at"},
  {"ndcg_at", "eval_at"},
  {"num_machine", "num_machines"},
  {"local_port", "local_listen_port"},
  {"port", "local_listen_port"},
  {"machine_list_file", "machine_list_filename"},
  {"machine_list", "machine_list_filename"},
  {"mlist", "machine_list_filename"},
  {"workers", "machines"},
  {"nodes", "machines"},
});
std::unordered_set<std::string> Config::parameter_set({
  "config",
  "task",
  "objective",
  "boosting",
  "data",
  "valid",
  "num_iterations",
  "learning_rate",
  "num_leaves",
  "tree_learner",
  "num_threads",
  "device_type",
  "seed",
  "max_depth",
  "min_data_in_leaf",
  "min_sum_hessian_in_leaf",
  "bagging_fraction",
  "bagging_freq",
  "bagging_seed",
  "feature_fraction",
  "feature_fraction_seed",
  "early_stopping_round",
  "max_delta_step",
  "lambda_l1",
  "lambda_l2",
  "min_gain_to_split",
  "drop_rate",
  "max_drop",
  "skip_drop",
  "xgboost_dart_mode",
  "uniform_drop",
  "drop_seed",
  "top_rate",
  "other_rate",
  "min_data_per_group",
  "max_cat_threshold",
  "cat_l2",
  "cat_smooth",
  "max_cat_to_onehot",
  "top_k",
  "monotone_constraints",
  "forcedsplits_filename",
  "verbosity",
  "max_bin",
  "min_data_in_bin",
  "bin_construct_sample_cnt",
  "histogram_pool_size",
  "data_random_seed",
  "output_model",
  "snapshot_freq",
  "input_model",
  "output_result",
  "initscore_filename",
  "valid_data_initscores",
  "pre_partition",
  "enable_bundle",
  "max_conflict_rate",
  "is_enable_sparse",
  "sparse_threshold",
  "use_missing",
  "zero_as_missing",
  "two_round",
  "save_binary",
  "enable_load_from_binary_file",
  "header",
  "label_column",
  "weight_column",
  "group_column",
  "ignore_column",
  "categorical_feature",
  "predict_raw_score",
  "predict_leaf_index",
  "predict_contrib",
  "num_iteration_predict",
  "pred_early_stop",
  "pred_early_stop_freq",
  "pred_early_stop_margin",
  "convert_model_language",
  "convert_model",
  "num_class",
  "is_unbalance",
  "scale_pos_weight",
  "sigmoid",
  "boost_from_average",
  "reg_sqrt",
  "alpha",
  "fair_c",
  "poisson_max_delta_step",
  "tweedie_variance_power",
  "max_position",
  "label_gain",
  "metric",
  "metric_freq",
  "is_provide_training_metric",
  "eval_at",
  "num_machines",
  "local_listen_port",
  "time_out",
  "machine_list_filename",
  "machines",
  "gpu_platform_id",
  "gpu_device_id",
  "gpu_use_dp",
});
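Taken together, the two generated tables drive parameter resolution: an incoming key is first canonicalized through `alias_table`, then validated against `parameter_set` before `GetMembersFromString` reads its value. A minimal Python sketch of that lookup (table excerpt and the `resolve` function name are illustrative only, not LightGBM's actual C++ implementation):

```python
# Excerpt of the generated tables above, reproduced for illustration.
alias_table = {
    "config_file": "config",
    "nthread": "num_threads",
    "colsample_bytree": "feature_fraction",
}
parameter_set = {"config", "num_threads", "feature_fraction"}

def resolve(key):
    # Map an alias to its canonical parameter name; canonical names pass through.
    canonical = alias_table.get(key, key)
    if canonical not in parameter_set:
        raise KeyError("unknown parameter: %s" % key)
    return canonical

print(resolve("nthread"))  # num_threads
```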
void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
@@ -232,7 +248,7 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  CHECK(num_iterations >=0);
  GetDouble(params, "learning_rate", &learning_rate);
  CHECK(learning_rate >0.0);
  GetInt(params, "num_leaves", &num_leaves);
  CHECK(num_leaves >1);

@@ -245,9 +261,10 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  CHECK(min_data_in_leaf >=0);
  GetDouble(params, "min_sum_hessian_in_leaf", &min_sum_hessian_in_leaf);
  CHECK(min_sum_hessian_in_leaf >=0.0);
  GetDouble(params, "bagging_fraction", &bagging_fraction);
  CHECK(bagging_fraction >0.0);
  CHECK(bagging_fraction <=1.0);
  GetInt(params, "bagging_freq", &bagging_freq);

@@ -255,7 +272,7 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  GetInt(params, "bagging_seed", &bagging_seed);
  GetDouble(params, "feature_fraction", &feature_fraction);
  CHECK(feature_fraction >0.0);
  CHECK(feature_fraction <=1.0);
  GetInt(params, "feature_fraction_seed", &feature_fraction_seed);

@@ -265,21 +282,22 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  GetDouble(params, "max_delta_step", &max_delta_step);
  GetDouble(params, "lambda_l1", &lambda_l1);
  CHECK(lambda_l1 >=0.0);
  GetDouble(params, "lambda_l2", &lambda_l2);
  CHECK(lambda_l2 >=0.0);
  GetDouble(params, "min_gain_to_split", &min_gain_to_split);
  CHECK(min_gain_to_split >=0.0);
  GetDouble(params, "drop_rate", &drop_rate);
  CHECK(drop_rate >=0.0);
  CHECK(drop_rate <=1.0);
  GetInt(params, "max_drop", &max_drop);
  GetDouble(params, "skip_drop", &skip_drop);
  CHECK(skip_drop >=0.0);
  CHECK(skip_drop <=1.0);
  GetBool(params, "xgboost_dart_mode", &xgboost_dart_mode);

@@ -289,11 +307,11 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  GetInt(params, "drop_seed", &drop_seed);
  GetDouble(params, "top_rate", &top_rate);
  CHECK(top_rate >=0.0);
  CHECK(top_rate <=1.0);
  GetDouble(params, "other_rate", &other_rate);
  CHECK(other_rate >=0.0);
  CHECK(other_rate <=1.0);
  GetInt(params, "min_data_per_group", &min_data_per_group);
@@ -303,15 +321,16 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  CHECK(max_cat_threshold >0);
  GetDouble(params, "cat_l2", &cat_l2);
  CHECK(cat_l2 >=0.0);
  GetDouble(params, "cat_smooth", &cat_smooth);
  CHECK(cat_smooth >=0.0);
  GetInt(params, "max_cat_to_onehot", &max_cat_to_onehot);
  CHECK(max_cat_to_onehot >0);
  GetInt(params, "top_k", &top_k);
  CHECK(top_k >0);
  if (GetString(params, "monotone_constraints", &tmp_str)) {
    monotone_constraints = Common::StringToArray<int8_t>(tmp_str, ',');

@@ -319,33 +338,58 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  GetString(params, "forcedsplits_filename", &forcedsplits_filename);
  GetInt(params, "verbosity", &verbosity);
  GetInt(params, "max_bin", &max_bin);
  CHECK(max_bin >1);
  GetInt(params, "min_data_in_bin", &min_data_in_bin);
  CHECK(min_data_in_bin >0);
  GetInt(params, "bin_construct_sample_cnt", &bin_construct_sample_cnt);
  CHECK(bin_construct_sample_cnt >0);
  GetDouble(params, "histogram_pool_size", &histogram_pool_size);
  GetInt(params, "data_random_seed", &data_random_seed);
  GetString(params, "output_model", &output_model);
  GetInt(params, "snapshot_freq", &snapshot_freq);
  GetString(params, "input_model", &input_model);
  GetString(params, "output_result", &output_result);
  GetString(params, "initscore_filename", &initscore_filename);
  if (GetString(params, "valid_data_initscores", &tmp_str)) {
    valid_data_initscores = Common::Split(tmp_str.c_str(), ',');
  }
  GetBool(params, "pre_partition", &pre_partition);
  GetBool(params, "enable_bundle", &enable_bundle);
  GetDouble(params, "max_conflict_rate", &max_conflict_rate);
  CHECK(max_conflict_rate >=0.0);
  CHECK(max_conflict_rate <1.0);
  GetBool(params, "is_enable_sparse", &is_enable_sparse);
  GetDouble(params, "sparse_threshold", &sparse_threshold);
  CHECK(sparse_threshold >0.0);
  CHECK(sparse_threshold <=1.0);
  GetBool(params, "use_missing", &use_missing);
  GetBool(params, "zero_as_missing", &zero_as_missing);
  GetBool(params, "two_round", &two_round);
  GetBool(params, "save_binary", &save_binary);
  GetBool(params, "enable_load_from_binary_file", &enable_load_from_binary_file);
  GetBool(params, "header", &header);
@@ -373,64 +417,46 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  GetDouble(params, "pred_early_stop_margin", &pred_early_stop_margin);
  GetString(params, "convert_model_language", &convert_model_language);
  GetString(params, "convert_model", &convert_model);
  GetInt(params, "num_class", &num_class);
  CHECK(num_class >0);
  GetBool(params, "is_unbalance", &is_unbalance);
  GetDouble(params, "scale_pos_weight", &scale_pos_weight);
  CHECK(scale_pos_weight >0.0);
  GetDouble(params, "sigmoid", &sigmoid);
  CHECK(sigmoid >0.0);
  GetBool(params, "boost_from_average", &boost_from_average);
  GetBool(params, "reg_sqrt", &reg_sqrt);
  GetDouble(params, "alpha", &alpha);
  CHECK(alpha >0.0);
  CHECK(alpha <1.0);
  GetDouble(params, "fair_c", &fair_c);
  CHECK(fair_c >0.0);
  GetDouble(params, "poisson_max_delta_step", &poisson_max_delta_step);
  CHECK(poisson_max_delta_step >0.0);
  GetDouble(params, "tweedie_variance_power", &tweedie_variance_power);
  CHECK(tweedie_variance_power >=1.0);
  CHECK(tweedie_variance_power <2.0);
  GetInt(params, "max_position", &max_position);
  CHECK(max_position >0);
  if (GetString(params, "label_gain", &tmp_str)) {
    label_gain = Common::StringToArray<double>(tmp_str, ',');
  }
  GetInt(params, "metric_freq", &metric_freq);
  CHECK(metric_freq >0);

@@ -441,10 +467,13 @@ void Config::GetMembersFromString(const std::unordered_map<std::string, std::string>& params) {
  }
  GetInt(params, "num_machines", &num_machines);
  CHECK(num_machines >0);
  GetInt(params, "local_listen_port", &local_listen_port);
  CHECK(local_listen_port >0);
  GetInt(params, "time_out", &time_out);
  CHECK(time_out >0);
  GetString(params, "machine_list_filename", &machine_list_filename);
@@ -495,18 +524,28 @@ std::string Config::SaveMembersToString() const {
  str_buf << "[top_k: " << top_k << "]\n";
  str_buf << "[monotone_constraints: " << Common::Join(Common::ArrayCast<int8_t, int>(monotone_constraints),",") << "]\n";
  str_buf << "[forcedsplits_filename: " << forcedsplits_filename << "]\n";
  str_buf << "[verbosity: " << verbosity << "]\n";
  str_buf << "[max_bin: " << max_bin << "]\n";
  str_buf << "[min_data_in_bin: " << min_data_in_bin << "]\n";
  str_buf << "[bin_construct_sample_cnt: " << bin_construct_sample_cnt << "]\n";
  str_buf << "[histogram_pool_size: " << histogram_pool_size << "]\n";
  str_buf << "[data_random_seed: " << data_random_seed << "]\n";
  str_buf << "[output_model: " << output_model << "]\n";
  str_buf << "[snapshot_freq: " << snapshot_freq << "]\n";
  str_buf << "[input_model: " << input_model << "]\n";
  str_buf << "[output_result: " << output_result << "]\n";
  str_buf << "[initscore_filename: " << initscore_filename << "]\n";
  str_buf << "[valid_data_initscores: " << Common::Join(valid_data_initscores,",") << "]\n";
  str_buf << "[pre_partition: " << pre_partition << "]\n";
  str_buf << "[enable_bundle: " << enable_bundle << "]\n";
  str_buf << "[max_conflict_rate: " << max_conflict_rate << "]\n";
  str_buf << "[is_enable_sparse: " << is_enable_sparse << "]\n";
  str_buf << "[sparse_threshold: " << sparse_threshold << "]\n";
  str_buf << "[use_missing: " << use_missing << "]\n";
  str_buf << "[zero_as_missing: " << zero_as_missing << "]\n";
  str_buf << "[two_round: " << two_round << "]\n";
  str_buf << "[save_binary: " << save_binary << "]\n";
  str_buf << "[enable_load_from_binary_file: " << enable_load_from_binary_file << "]\n";
  str_buf << "[header: " << header << "]\n";
  str_buf << "[label_column: " << label_column << "]\n";
  str_buf << "[weight_column: " << weight_column << "]\n";

@@ -520,30 +559,20 @@ std::string Config::SaveMembersToString() const {
  str_buf << "[pred_early_stop: " << pred_early_stop << "]\n";
  str_buf << "[pred_early_stop_freq: " << pred_early_stop_freq << "]\n";
  str_buf << "[pred_early_stop_margin: " << pred_early_stop_margin << "]\n";
  str_buf << "[convert_model_language: " << convert_model_language << "]\n";
  str_buf << "[convert_model: " << convert_model << "]\n";
  str_buf << "[num_class: " << num_class << "]\n";
  str_buf << "[is_unbalance: " << is_unbalance << "]\n";
  str_buf << "[scale_pos_weight: " << scale_pos_weight << "]\n";
  str_buf << "[sigmoid: " << sigmoid << "]\n";
  str_buf << "[boost_from_average: " << boost_from_average << "]\n";
  str_buf << "[reg_sqrt: " << reg_sqrt << "]\n";
  str_buf << "[alpha: " << alpha << "]\n";
  str_buf << "[fair_c: " << fair_c << "]\n";
  str_buf << "[poisson_max_delta_step: " << poisson_max_delta_step << "]\n";
  str_buf << "[tweedie_variance_power: " << tweedie_variance_power << "]\n";
  str_buf << "[max_position: " << max_position << "]\n";
  str_buf << "[label_gain: " << Common::Join(label_gain,",") << "]\n";
  str_buf << "[metric_freq: " << metric_freq << "]\n";
  str_buf << "[is_provide_training_metric: " << is_provide_training_metric << "]\n";
  str_buf << "[eval_at: " << Common::Join(eval_at,",") << "]\n";
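The `[name: value]` lines emitted by `SaveMembersToString` above are plain text and easy to read back. A small Python sketch of parsing such a block (the sample values here are hypothetical, chosen only to illustrate the format):

```python
import re

# Parse "[key: value]" lines like those written by SaveMembersToString above.
saved = "[learning_rate: 0.1]\n[num_leaves: 31]\n[metric: l2,auc]\n"
params = dict(re.findall(r"\[([^:]+): ([^\]]*)\]", saved))

print(params["num_leaves"])         # 31
print(params["metric"].split(","))  # ['l2', 'auc']
```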