Unverified Commit 2067bdc5 authored by Nikita Titov's avatar Nikita Titov Committed by GitHub

[docs][python] Improve description of `eval_result` argument in `record_evaluation()` (#4559)

* Update callback.py

* Update engine.py
parent 8a07ed2d
@@ -78,7 +78,9 @@ def record_evaluation(eval_result: Dict[str, Dict[str, List[Any]]]) -> Callable:
     Parameters
     ----------
     eval_result : dict
-        A dictionary to store the evaluation results.
+        Dictionary used to store all evaluation results of all validation sets.
+        This should be initialized outside of your call to ``record_evaluation()`` and should be empty.
+        Any initial contents of the dictionary will be deleted.
     Returns
     -------
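The behavior the new docstring describes can be illustrated with a minimal sketch. This is not LightGBM's actual implementation of ``record_evaluation()``; it only mimics the documented contract: the caller supplies an empty dict, any initial contents are deleted, and one list per (validation set, metric) pair is grown across iterations.

```python
# Minimal sketch (NOT LightGBM's real code) of the documented contract of
# record_evaluation(): the passed-in dict is emptied, then filled with
# {dataset_name: {metric_name: [value_per_iteration, ...]}}.
def record_evaluation(eval_result):
    eval_result.clear()  # any initial contents of the dictionary are deleted

    def _callback(evaluation_result_list):
        # each entry here is assumed to be a (dataset_name, metric_name, value) tuple
        for dataset_name, metric_name, value in evaluation_result_list:
            eval_result.setdefault(dataset_name, {}).setdefault(metric_name, []).append(value)

    return _callback

# Usage: the dict must be created outside the call; stale contents are wiped.
history = {"stale": "data"}
cb = record_evaluation(history)
cb([("valid_0", "l2", 0.25)])
cb([("valid_0", "l2", 0.16)])
# history == {"valid_0": {"l2": [0.25, 0.16]}}
```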
@@ -157,9 +159,9 @@ def early_stopping(stopping_rounds: int, first_metric_only: bool = False, verbos
     Parameters
     ----------
     stopping_rounds : int
-        The possible number of rounds without the trend occurrence.
+        The possible number of rounds without the trend occurrence.
     first_metric_only : bool, optional (default=False)
-        Whether to use only the first metric for early stopping.
+        Whether to use only the first metric for early stopping.
     verbose : bool, optional (default=True)
         Whether to print message with early stopping information.
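The ``stopping_rounds`` rule can be sketched independently of LightGBM. The helper below (a hypothetical illustration, not the library's implementation) stops once the validation score has failed to improve for ``stopping_rounds`` consecutive rounds and reports where the best score occurred.

```python
# Sketch of the early-stopping rule described by ``stopping_rounds``
# (illustrative only; not LightGBM's actual callback code).
def best_iteration(scores, stopping_rounds, higher_better=False):
    """Return (round training stops at, round with the best score), 1-based."""
    best_score = None
    best_round = 0
    rounds_without_improvement = 0
    for i, score in enumerate(scores, start=1):
        improved = (
            best_score is None
            or (score > best_score if higher_better else score < best_score)
        )
        if improved:
            best_score, best_round = score, i
            rounds_without_improvement = 0
        else:
            rounds_without_improvement += 1
            if rounds_without_improvement >= stopping_rounds:
                return i, best_round  # stop; best model was at best_round
    return len(scores), best_round

# l2 loss per round: improves through round 3, then stalls for 3 rounds
stop_at, best = best_iteration([0.5, 0.4, 0.3, 0.31, 0.32, 0.33], stopping_rounds=3)
# stop_at == 6, best == 3
```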
@@ -124,7 +124,7 @@ def train(
     evals_result: dict or None, optional (default=None)
         Dictionary used to store all evaluation results of all the items in ``valid_sets``.
         This should be initialized outside of your call to ``train()`` and should be empty.
-        Any initial contents of the dictionary will be deleted by ``train()``.
+        Any initial contents of the dictionary will be deleted.
     .. rubric:: Example