tianlh / LightGBM-DCU

Commit a0d7313b
fixed docstrings (#2451)

Authored Sep 26, 2019 by Nikita Titov
Committed by Guolin Ke, Sep 26, 2019
Parent: 7b2963d9
Showing 5 changed files with 67 additions and 69 deletions (+67 −69)
python-package/lightgbm/basic.py      +18 −18
python-package/lightgbm/callback.py    +3 −5
python-package/lightgbm/engine.py      +4 −4
python-package/lightgbm/plotting.py    +8 −8
python-package/lightgbm/sklearn.py    +34 −34
python-package/lightgbm/basic.py
@@ -357,8 +357,8 @@ class _InnerPredictor(object):
     Not exposed to user.
     Used only for prediction, usually used for continued training.

-    Note
-    ----
+    .. note::
+
     Can be converted from Booster, but cannot be converted to Booster.
     """
@@ -1939,8 +1939,8 @@ class Booster(object):
     def __boost(self, grad, hess):
         """Boost Booster for one iteration with customized gradient statistics.

-        Note
-        ----
+        .. note::
+
         For multi-class task, the score is group by class_id first, then group by row_id.
         If you want to get i-th row score in j-th class, the access way is score[j * num_data + i]
         and you should group grad and hess in this way as well.
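For reference, a minimal sketch (not part of this commit) of the class-major layout this note describes; the sizes and values are illustrative:

import numpy as np

num_data, num_class = 4, 3  # illustrative sizes, not from the diff
score = np.arange(num_data * num_class, dtype=float)  # flat, class-major

# Score of the i-th row in the j-th class, exactly as the docstring says:
i, j = 2, 1
value = score[j * num_data + i]

# Equivalent 2-D view: one row per class, one column per data row.
assert score.reshape(num_class, num_data)[j, i] == value

# grad and hess handed back to LightGBM must be flattened the same way:
grad = np.zeros((num_class, num_data)).ravel()
hess = np.ones((num_class, num_data)).ravel()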
@@ -2340,8 +2340,8 @@ class Booster(object):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.

-            Note
-            ----
+            .. note::
+
             If you want to get more explanations for your model's predictions using SHAP values,
             like SHAP interaction values,
             you can install the shap package (https://github.com/slundberg/shap).
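A hedged usage sketch for ``pred_contrib`` (the model and data below are invented; only ``predict(..., pred_contrib=True)`` is the API this hunk documents):

import numpy as np
import lightgbm as lgb

X, y = np.random.rand(100, 5), np.random.rand(100)
bst = lgb.train({'objective': 'regression', 'verbose': -1},
                lgb.Dataset(X, y), num_boost_round=10)

contribs = bst.predict(X, pred_contrib=True)
print(contribs.shape)  # (100, 6): one column per feature plus an expected-value column

# Per-row contributions sum to the raw prediction.
assert np.allclose(contribs.sum(axis=1), bst.predict(X, raw_score=True))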
@@ -2526,8 +2526,8 @@ class Booster(object):
             If int, interpreted as index.
             If string, interpreted as name.

-            Note
-            ----
+            .. warning::
+
             Categorical features are not supported.
         bins : int, string or None, optional (default=None)
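This last hunk appears to be from ``Booster.get_split_value_histogram()``. A hedged sketch of calling it, continuing the toy booster above (the name ``'Column_0'`` assumes the default feature names of a numpy-backed Dataset):

# feature may be an int index or a string name; per the warning above,
# categorical features are not supported here.
hist, bin_edges = bst.get_split_value_histogram(feature=0, bins=None)
hist, bin_edges = bst.get_split_value_histogram(feature='Column_0', bins=20)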
python-package/lightgbm/callback.py
@@ -109,8 +109,8 @@ def record_evaluation(eval_result):
 def reset_parameter(**kwargs):
     """Create a callback that resets the parameter after the first iteration.

-    Note
-    ----
+    .. note::
+
     The initial parameter will still take in-effect on first iteration.

     Parameters
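A hedged sketch of the callback being documented (dataset and decay schedule are invented); note, as the docstring says, that iteration 0 still runs with the initial value from ``params``:

import numpy as np
import lightgbm as lgb

X, y = np.random.rand(100, 5), np.random.rand(100)
train_set = lgb.Dataset(X, y)

# learning_rate may be a list with one value per iteration,
# or a function of the iteration number.
decay = lgb.reset_parameter(learning_rate=lambda it: 0.1 * (0.99 ** it))
bst = lgb.train({'objective': 'regression', 'learning_rate': 0.1, 'verbose': -1},
                train_set, num_boost_round=20, callbacks=[decay])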
@@ -154,8 +154,6 @@ def reset_parameter(**kwargs):
 def early_stopping(stopping_rounds, first_metric_only=False, verbose=True):
     """Create a callback that activates early stopping.

-    Note
-    ----
     Activates early stopping.
     The model will train until the validation score stops improving.
     Validation score needs to improve at least every ``early_stopping_rounds`` round(s)
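And a hedged sketch of the ``early_stopping`` callback with the signature shown in this hunk (toy data; the l2 metric choice is arbitrary):

import numpy as np
import lightgbm as lgb

X, y = np.random.rand(200, 5), np.random.rand(200)
train_set = lgb.Dataset(X[:150], y[:150])
valid_set = lgb.Dataset(X[150:], y[150:], reference=train_set)

# Stop once the validation score fails to improve for 5 consecutive rounds.
bst = lgb.train({'objective': 'regression', 'metric': 'l2', 'verbose': -1},
                train_set, num_boost_round=100, valid_sets=[valid_set],
                callbacks=[lgb.early_stopping(stopping_rounds=5, verbose=True)])
print(bst.best_iteration)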
python-package/lightgbm/engine.py
@@ -101,8 +101,8 @@ def train(params, train_set, num_boost_round=100,
     evals_result: dict or None, optional (default=None)
         This dictionary used to store all evaluation results of all the items in ``valid_sets``.

-        Example
-        -------
+        .. rubric:: Example
+
         With a ``valid_sets`` = [valid_set, train_set],
         ``valid_names`` = ['eval', 'train']
         and a ``params`` = {'metric': 'logloss'}
@@ -115,8 +115,8 @@ def train(params, train_set, num_boost_round=100,
         If int, the eval metric on the valid set is printed at every ``verbose_eval`` boosting stage.
         The last boosting stage or the boosting stage found by using ``early_stopping_rounds`` is also printed.

-        Example
-        -------
+        .. rubric:: Example
+
         With ``verbose_eval`` = 4 and at least one item in ``valid_sets``,
         an evaluation metric is printed every 4 (instead of 1) boosting stages.
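A hedged sketch combining both documented parameters, mirroring the docstring's own ``valid_sets``/``valid_names`` example (toy data; the actual metric key is ``binary_logloss`` rather than the docstring's shorthand ``logloss``):

import numpy as np
import lightgbm as lgb

X = np.random.rand(200, 5)
y = (np.random.rand(200) > 0.5).astype(int)
train_set = lgb.Dataset(X[:150], y[:150])
valid_set = lgb.Dataset(X[150:], y[150:], reference=train_set)

evals_result = {}  # filled in-place during training
bst = lgb.train({'objective': 'binary', 'metric': 'binary_logloss', 'verbose': -1},
                train_set, num_boost_round=20,
                valid_sets=[valid_set, train_set], valid_names=['eval', 'train'],
                evals_result=evals_result,
                verbose_eval=4)  # print the eval metric every 4 boosting stages

# evals_result == {'eval': {'binary_logloss': [...]}, 'train': {'binary_logloss': [...]}}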
python-package/lightgbm/plotting.py
@@ -469,8 +469,8 @@ def create_tree_digraph(booster, tree_index=0, show_info=None, precision=3,
                         old_node_attr=None, old_edge_attr=None, old_body=None, old_strict=False, **kwargs):
     """Create a digraph representation of specified tree.

-    Note
-    ----
+    .. note::
+
     For more information please visit
     https://graphviz.readthedocs.io/en/stable/api.html#digraph.
@@ -545,8 +545,8 @@ def plot_tree(booster, ax=None, tree_index=0, figsize=None,
               show_info=None, precision=3, **kwargs):
     """Plot specified tree.

-    Note
-    ----
+    .. note::
+
     It is preferable to use ``create_tree_digraph()`` because of its lossless quality
     and returned objects can be also rendered and displayed directly inside a Jupyter notebook.
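A hedged sketch of both plotting helpers (requires the optional graphviz and matplotlib packages; model and data are invented). Per the note, ``create_tree_digraph()`` is the lossless, preferred route:

import numpy as np
import matplotlib.pyplot as plt
import lightgbm as lgb

X, y = np.random.rand(100, 5), np.random.rand(100)
bst = lgb.train({'objective': 'regression', 'verbose': -1},
                lgb.Dataset(X, y), num_boost_round=5)

graph = lgb.create_tree_digraph(bst, tree_index=0, show_info=['split_gain'])
graph.render(filename='tree0')  # graphviz writes 'tree0' (source) and 'tree0.pdf'

ax = lgb.plot_tree(bst, tree_index=0, figsize=(15, 9))
plt.show()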
python-package/lightgbm/sklearn.py
@@ -40,8 +40,8 @@ class _ObjectiveFunctionWrapper(object):
         hess : array-like of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task)
             The value of the second order derivative (Hessian) for each sample point.

-    Note
-    ----
+    .. note::
+
     For multi-class task, the y_pred is group by class_id first, then group by row_id.
     If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i]
     and you should group grad and hess in this way as well.
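A hedged sketch of a self-defined objective with the signature this wrapper documents (the squared-loss derivatives are standard; everything else is invented):

import numpy as np
import lightgbm as lgb

def squared_loss(y_true, y_pred):
    grad = y_pred - y_true       # first-order derivative per sample
    hess = np.ones_like(y_pred)  # second-order derivative per sample
    return grad, hess

X, y = np.random.rand(100, 5), np.random.rand(100)
reg = lgb.LGBMRegressor(objective=squared_loss, n_estimators=10).fit(X, y)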
@@ -127,8 +127,8 @@ class _EvalFunctionWrapper(object):
         is_higher_better : bool
             Is eval result higher better, e.g. AUC is ``is_higher_better``.

-    Note
-    ----
+    .. note::
+
     For multi-class task, the y_pred is group by class_id first, then group by row_id.
     If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i].
     """
@@ -244,8 +244,8 @@ class LGBMModel(_LGBMModelBase):
         Other parameters for the model.
         Check http://lightgbm.readthedocs.io/en/latest/Parameters.html for more parameters.

-        Note
-        ----
+        .. warning::
+
         \*\*kwargs is not supported in sklearn, it may cause unexpected issues.

     Attributes
@@ -421,8 +421,8 @@ class LGBMModel(_LGBMModelBase):
         If int, the eval metric on the eval set is printed at every ``verbose`` boosting stage.
         The last boosting stage or the boosting stage found by using ``early_stopping_rounds`` is also printed.

-        Example
-        -------
+        .. rubric:: Example
+
         With ``verbose`` = 4 and at least one item in ``eval_set``,
         an evaluation metric is printed every 4 (instead of 1) boosting stages.
@@ -626,8 +626,8 @@ class LGBMModel(_LGBMModelBase):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.

-            Note
-            ----
+            .. note::
+
             If you want to get more explanations for your model's predictions using SHAP values,
             like SHAP interaction values,
             you can install the shap package (https://github.com/slundberg/shap).
@@ -705,8 +705,8 @@ class LGBMModel(_LGBMModelBase):
     def feature_importances_(self):
         """Get feature importances.

-        Note
-        ----
+        .. note::
+
         Feature importance in sklearn interface used to normalize to 1,
         it's deprecated after 2.0.4 and is the same as Booster.feature_importance() now.
         ``importance_type`` attribute is passed to the function
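A hedged sketch of the property being documented (toy data; ``importance_type='split'`` is the default and, per the note, yields raw counts rather than values normalized to 1):

import numpy as np
import lightgbm as lgb

X = np.random.rand(100, 4)
y = (np.random.rand(100) > 0.5).astype(int)

clf = lgb.LGBMClassifier(n_estimators=10, importance_type='split').fit(X, y)
print(clf.feature_importances_)  # split counts, same as Booster.feature_importance()

# Gain-based importances via the underlying Booster:
print(clf.booster_.feature_importance(importance_type='gain'))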
@@ -834,8 +834,8 @@ class LGBMClassifier(LGBMModel, _LGBMClassifierBase):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.

-            Note
-            ----
+            .. note::
+
             If you want to get more explanations for your model's predictions using SHAP values,
             like SHAP interaction values,
             you can install the shap package (https://github.com/slundberg/shap).