Commit 71539cc2 authored by Misha Lisovyi's avatar Misha Lisovyi Committed by Guolin Ke

Metric doc update (#1325)

* update sklearn fit parameter description

* update metric parameter description
parent ba3e1ff2
......@@ -594,51 +594,56 @@ Metric Parameters
   - ``metric``, default=\ ``''``, type=multi-enum
-  - ``''`` (empty string or not specific), metric corresponding to specified application will be used
-  - ``l1``, absolute loss, alias=\ ``mean_absolute_error``, ``mae``, ``regression_l1``
-  - ``l2``, square loss, alias=\ ``mean_squared_error``, ``mse``, ``regression_l2``, ``regression``
-  - ``l2_root``, root square loss, alias=\ ``root_mean_squared_error``, ``rmse``
-  - ``quantile``, `Quantile regression`_
-  - ``mape``, `MAPE loss`_, alias=\ ``mean_absolute_percentage_error``
-  - ``huber``, `Huber loss`_
-  - ``fair``, `Fair loss`_
-  - ``poisson``, negative log-likelihood for `Poisson regression`_
-  - ``gamma``, negative log-likelihood for Gamma regression
-  - ``gamma_deviance``, residual deviance for Gamma regression
-  - ``tweedie``, negative log-likelihood for Tweedie regression
-  - ``ndcg``, `NDCG`_
-  - ``map``, `MAP`_, alias=\ ``mean_average_precision``
-  - ``auc``, `AUC`_
-  - ``binary_logloss``, `log loss`_, alias=\ ``binary``
-  - ``binary_error``, for one sample: ``0`` for correct classification, ``1`` for error classification
-  - ``multi_logloss``, log loss for mulit-class classification, alias=\ ``multiclass``, ``softmax``, ``multiclassova``, ``multiclass_ova``, ``ova``, ``ovr``
-  - ``multi_error``, error rate for mulit-class classification
-  - ``xentropy``, cross-entropy (with optional linear weights), alias=\ ``cross_entropy``
-  - ``xentlambda``, "intensity-weighted" cross-entropy, alias=\ ``cross_entropy_lambda``
-  - ``kldiv``, `Kullback-Leibler divergence`_, alias=\ ``kullback_leibler``
-  - support multi metrics, separated by ``,``
+  - metric to be evaluated on the evaluation sets **in addition** to what is provided in the training arguments
+  - ``''`` (empty string or not specified), metric corresponding to specified objective will be used
+    (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
+  - ``'None'`` (string, **not** a ``None`` value), no metric registered, alias=\ ``na``
+  - ``l1``, absolute loss, alias=\ ``mean_absolute_error``, ``mae``, ``regression_l1``
+  - ``l2``, square loss, alias=\ ``mean_squared_error``, ``mse``, ``regression_l2``, ``regression``
+  - ``l2_root``, root square loss, alias=\ ``root_mean_squared_error``, ``rmse``
+  - ``quantile``, `Quantile regression`_
+  - ``mape``, `MAPE loss`_, alias=\ ``mean_absolute_percentage_error``
+  - ``huber``, `Huber loss`_
+  - ``fair``, `Fair loss`_
+  - ``poisson``, negative log-likelihood for `Poisson regression`_
+  - ``gamma``, negative log-likelihood for Gamma regression
+  - ``gamma_deviance``, residual deviance for Gamma regression
+  - ``tweedie``, negative log-likelihood for Tweedie regression
+  - ``ndcg``, `NDCG`_
+  - ``map``, `MAP`_, alias=\ ``mean_average_precision``
+  - ``auc``, `AUC`_
+  - ``binary_logloss``, `log loss`_, alias=\ ``binary``
+  - ``binary_error``, for one sample: ``0`` for correct classification, ``1`` for error classification
+  - ``multi_logloss``, log loss for multi-class classification, alias=\ ``multiclass``, ``softmax``, ``multiclassova``, ``multiclass_ova``, ``ova``, ``ovr``
+  - ``multi_error``, error rate for multi-class classification
+  - ``xentropy``, cross-entropy (with optional linear weights), alias=\ ``cross_entropy``
+  - ``xentlambda``, "intensity-weighted" cross-entropy, alias=\ ``cross_entropy_lambda``
+  - ``kldiv``, `Kullback-Leibler divergence`_, alias=\ ``kullback_leibler``
+  - support multiple metrics, separated by ``,``
- ``metric_freq``, default=\ ``1``, type=int, alias=\ ``output_freq``
......
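The rules described in this hunk (empty string falls back to the objective's default metric for pre-defined objectives, the string ``'None'`` registers no metric, and several metrics can be listed separated by ``,``) can be sketched as a small resolver. This is a hypothetical illustration, not LightGBM code; the ``DEFAULT_METRIC`` mapping below is an assumed, abbreviated example.

```python
# Sketch of the metric-resolution rules described above (not LightGBM code).
# DEFAULT_METRIC is an assumed, abbreviated mapping for illustration only.
DEFAULT_METRIC = {
    "regression": "l2",
    "binary": "binary_logloss",
    "multiclass": "multi_logloss",
}


def resolve_metrics(metric, objective):
    """Return the list of metric names that would be evaluated."""
    if metric == "None":  # the string 'None', not a None value
        return []  # no metric registered
    if metric == "":  # empty string: fall back to the objective's metric
        default = DEFAULT_METRIC.get(objective)
        return [default] if default else []  # only for pre-defined objectives
    return metric.split(",")  # multiple metrics, separated by ','


print(resolve_metrics("", "binary"))          # -> ['binary_logloss']
print(resolve_metrics("l2,l1", "regression"))  # -> ['l2', 'l1']
print(resolve_metrics("None", "binary"))       # -> []
```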
......@@ -321,8 +321,10 @@ class LGBMModel(_LGBMModelBase):
   eval_metric : string, list of strings, callable or None, optional (default=None)
       If string, it should be a built-in evaluation metric to use.
       If callable, it should be a custom evaluation metric, see note for more details.
+      In either case, the ``metric`` from the model parameters will be evaluated and used as well.
   early_stopping_rounds : int or None, optional (default=None)
       Activates early stopping. The model will train until the validation score stops improving.
+      If there's more than one, will check all of them.
       Validation error needs to decrease at least every ``early_stopping_rounds`` round(s)
       to continue training.
   verbose : bool, optional (default=True)
......
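The early-stopping rule in the docstring above (validation error must decrease at least once every ``early_stopping_rounds`` rounds for training to continue) can be sketched in plain Python. This is a minimal illustration of the described behaviour, not LightGBM's actual implementation.

```python
# Minimal sketch (assumed behaviour, not LightGBM's implementation) of the
# early-stopping rule: training continues only while the validation error
# improves at least once within `early_stopping_rounds` rounds.
def best_iteration(val_errors, early_stopping_rounds):
    """Return the index of the best round, stopping once the error has not
    decreased for `early_stopping_rounds` consecutive rounds."""
    best_err = float("inf")
    best_iter = 0
    for i, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_iter = err, i
        elif i - best_iter >= early_stopping_rounds:
            break  # no improvement within the allowed window: stop training
    return best_iter


# The error stops improving after round 2; with early_stopping_rounds=3,
# round 2 is reported as the best iteration.
print(best_iteration([0.9, 0.7, 0.6, 0.65, 0.64, 0.66, 0.7], 3))  # -> 2
```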