Commit 0ac24779 authored by Nikita Titov's avatar Nikita Titov Committed by Guolin Ke

[docs] updated Quick Start according to the last changes in Parameters (#1073)

* updated Quick-Start according to the last changes in Parameters

* fixed links
parent 00e03ced
...@@ -59,19 +59,22 @@ Some important parameters:

-  path to config file

-  ``task``, default=\ ``train``, type=enum, options=\ ``train``, ``predict``, ``convert_model``

   -  ``train``, alias=\ ``training``, for training

   -  ``predict``, alias=\ ``prediction``, ``test``, for prediction

   -  ``convert_model``, for converting model file into if-else format, see more information in `Convert model parameters <./Parameters.rst#convert-model-parameters>`__
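The ``task`` and related I/O parameters are usually supplied through a config file passed to the CLI. A minimal sketch of a prediction config (file names and values here are illustrative, not from this document):

::

    task = predict
    data = test.data
    input_model = LightGBM_model.txt
    output_result = prediction_result.txt

which would be run as ``lightgbm config=predict.conf``.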
-  ``application``, default=\ ``regression``, type=enum,
   options=\ ``regression``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``quantile``, ``quantile_l2``,
   ``binary``, ``multiclass``, ``multiclassova``, ``xentropy``, ``xentlambda``, ``lambdarank``,
   alias=\ ``objective``, ``app``

   -  regression application

      -  ``regression_l2``, L2 loss, alias=\ ``regression``, ``mean_squared_error``, ``mse``

      -  ``regression_l1``, L1 loss, alias=\ ``mean_absolute_error``, ``mae``
...@@ -81,16 +84,31 @@ Some important parameters:

      -  ``poisson``, `Poisson regression`_

      -  ``quantile``, `Quantile regression`_

      -  ``quantile_l2``, like ``quantile``, but L2 loss is used instead

   -  ``binary``, binary `log loss`_ classification application

   -  multi-class classification application

      -  ``multiclass``, `softmax`_ objective function, ``num_class`` should be set as well

      -  ``multiclassova``, `One-vs-All`_ binary objective function, ``num_class`` should be set as well

   -  cross-entropy application

      -  ``xentropy``, objective function for cross-entropy (with optional linear weights), alias=\ ``cross_entropy``

      -  ``xentlambda``, alternative parameterization of cross-entropy, alias=\ ``cross_entropy_lambda``

      -  the label is anything in interval [0, 1]

   -  ``lambdarank``, `lambdarank`_ application

      -  the label should be ``int`` type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)

      -  ``label_gain`` can be used to set the gain (weight) of ``int`` label
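As a sketch of how the objective interacts with ``num_class``, a hypothetical multi-class training config (file names and values are illustrative):

::

    task = train
    application = multiclass
    num_class = 5
    data = multiclass.train
    output_model = LightGBM_model.txt

For ``binary`` and the regression objectives, ``num_class`` stays at its default of 1.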
-  ``boosting``, default=\ ``gbdt``, type=enum,
   options=\ ``gbdt``, ``rf``, ``dart``, ``goss``,
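Switching boosting type is a one-line change in the same config; e.g., a hypothetical DART setup (``drop_rate`` is assumed here as DART's dropout-rate parameter, and all values are illustrative):

::

    boosting = dart
    drop_rate = 0.1
    num_iterations = 100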
...@@ -127,13 +145,15 @@ Some important parameters:

   -  number of leaves in one tree

-  ``tree_learner``, default=\ ``serial``, type=enum, options=\ ``serial``, ``feature``, ``data``, ``voting``, alias=\ ``tree``

   -  ``serial``, single machine tree learner

   -  ``feature``, alias=\ ``feature_parallel``, feature parallel tree learner

   -  ``data``, alias=\ ``data_parallel``, data parallel tree learner

   -  ``voting``, alias=\ ``voting_parallel``, voting parallel tree learner

   -  refer to `Parallel Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details
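A data-parallel run might use a hypothetical config like the following on each machine (the port and machine count are illustrative; the Parallel Learning Guide covers the full setup):

::

    task = train
    tree_learner = data
    num_machines = 2
    local_listen_port = 12400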
...@@ -161,7 +181,7 @@ Some important parameters:

-  ``min_sum_hessian_in_leaf``, default=\ ``1e-3``, type=double,
   alias=\ ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``, ``min_child_weight``

   -  minimal sum hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting
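Putting the over-fitting controls together, a conservative hypothetical config might raise both leaf thresholds (the values are illustrative, not recommendations):

::

    num_leaves = 63
    min_data_in_leaf = 100
    min_sum_hessian_in_leaf = 10.0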
For all parameters, please refer to `Parameters <./Parameters.rst>`__.

...@@ -212,6 +232,14 @@ Examples

.. _Poisson regression: https://en.wikipedia.org/wiki/Poisson_regression
.. _Quantile regression: https://en.wikipedia.org/wiki/Quantile_regression
.. _log loss: https://www.kaggle.com/wiki/LogLoss
.. _softmax: https://en.wikipedia.org/wiki/Softmax_function
.. _One-vs-All: https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-rest
.. _lambdarank: https://papers.nips.cc/paper/2971-learning-to-rank-with-nonsmooth-cost-functions.pdf

.. _Dropouts meet Multiple Additive Regression Trees: https://arxiv.org/abs/1505.01866