tianlh / LightGBM-DCU · Commits

Commit a0d7313b
authored Sep 26, 2019 by Nikita Titov, committed Sep 26, 2019 by Guolin Ke

fixed docstrings (#2451)

parent 7b2963d9
Showing 5 changed files with 67 additions and 69 deletions (+67 −69)
python-package/lightgbm/basic.py     +18 −18
python-package/lightgbm/callback.py   +3  −5
python-package/lightgbm/engine.py     +4  −4
python-package/lightgbm/plotting.py   +8  −8
python-package/lightgbm/sklearn.py   +34 −34
python-package/lightgbm/basic.py
...
...
@@ -357,9 +357,9 @@ class _InnerPredictor(object):
     Not exposed to user.
     Used only for prediction, usually used for continued training.
-    Note
-    ----
-    Can be converted from Booster, but cannot be converted to Booster.
+    .. note::
+
+        Can be converted from Booster, but cannot be converted to Booster.
     """

     def __init__(self, model_file=None, booster_handle=None, pred_parameter=None):
...
...
@@ -1939,11 +1939,11 @@ class Booster(object):
     def __boost(self, grad, hess):
         """Boost Booster for one iteration with customized gradient statistics.
-        Note
-        ----
-        For multi-class task, the score is group by class_id first, then group by row_id.
-        If you want to get i-th row score in j-th class, the access way is score[j * num_data + i]
-        and you should group grad and hess in this way as well.
+        .. note::
+
+            For multi-class task, the score is group by class_id first, then group by row_id.
+            If you want to get i-th row score in j-th class, the access way is score[j * num_data + i]
+            and you should group grad and hess in this way as well.

         Parameters
         ----------
...
...
@@ -2340,13 +2340,13 @@ class Booster(object):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.
-            Note
-            ----
-            If you want to get more explanations for your model's predictions using SHAP values,
-            like SHAP interaction values,
-            you can install the shap package (https://github.com/slundberg/shap).
-            Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
-            column, where the last column is the expected value.
+            .. note::
+
+                If you want to get more explanations for your model's predictions using SHAP values,
+                like SHAP interaction values,
+                you can install the shap package (https://github.com/slundberg/shap).
+                Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
+                column, where the last column is the expected value.
         data_has_header : bool, optional (default=False)
             Whether the data has header.
...
...
@@ -2526,9 +2526,9 @@ class Booster(object):
             If int, interpreted as index.
             If string, interpreted as name.
-            Note
-            ----
-            Categorical features are not supported.
+            .. warning::
+
+                Categorical features are not supported.
         bins : int, string or None, optional (default=None)
             The maximum number of bins.
...
...
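The access pattern documented in the notes above (``score[j * num_data + i]``: scores grouped by class first, then by row) can be illustrated with a small standalone sketch. The toy buffer, sizes, and the ``score_at`` helper here are invented for illustration; this is not LightGBM code.

```python
num_data = 4    # rows
num_class = 3   # classes

# Flat score buffer laid out class-major: all rows of class 0, then class 1, ...
# Toy values: 10 * class_id + row_id, so each entry is easy to read off.
score = [10 * j + i for j in range(num_class) for i in range(num_data)]

def score_at(score, i, j, num_data):
    """Score of the i-th row in the j-th class, per the documented access way."""
    return score[j * num_data + i]

print(score_at(score, 2, 1, num_data))  # row 2, class 1 -> 12
```

Gradients and Hessians passed to a custom objective must be laid out the same way.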
python-package/lightgbm/callback.py
...
...
@@ -109,9 +109,9 @@ def record_evaluation(eval_result):
 def reset_parameter(**kwargs):
     """Create a callback that resets the parameter after the first iteration.
-    Note
-    ----
-    The initial parameter will still take in-effect on first iteration.
+    .. note::
+
+        The initial parameter will still take in-effect on first iteration.

     Parameters
     ----------
...
...
@@ -154,8 +154,6 @@ def reset_parameter(**kwargs):
 def early_stopping(stopping_rounds, first_metric_only=False, verbose=True):
     """Create a callback that activates early stopping.
-    Note
-    ----
     Activates early stopping.
     The model will train until the validation score stops improving.
     Validation score needs to improve at least every ``early_stopping_rounds`` round(s)
...
...
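The ``reset_parameter`` callback documented above accepts per-iteration value specs. As a rough sketch of the lookup such a callback performs each iteration (simplified and hypothetical, not the library's actual implementation): a spec may be a list indexed by iteration or a callable of the iteration number.

```python
# Hypothetical sketch of a reset_parameter-style per-iteration lookup.
def value_for_iteration(spec, iteration):
    """Resolve a parameter value for one boosting iteration."""
    if callable(spec):
        return spec(iteration)   # function of the iteration index
    return spec[iteration]       # list of per-iteration values

learning_rates = [0.1, 0.05, 0.025]   # list form
decay = lambda i: 0.1 * (0.9 ** i)    # callable form

print(value_for_iteration(learning_rates, 1))  # 0.05
print(value_for_iteration(decay, 0))           # 0.1
```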
python-package/lightgbm/engine.py
...
...
@@ -101,8 +101,8 @@ def train(params, train_set, num_boost_round=100,
     evals_result: dict or None, optional (default=None)
         This dictionary used to store all evaluation results of all the items in ``valid_sets``.
-        Example
-        -------
+        .. rubric:: Example
+
         With a ``valid_sets`` = [valid_set, train_set],
         ``valid_names`` = ['eval', 'train']
         and a ``params`` = {'metric': 'logloss'}
...
...
@@ -115,8 +115,8 @@ def train(params, train_set, num_boost_round=100,
     If int, the eval metric on the valid set is printed at every ``verbose_eval`` boosting stage.
     The last boosting stage or the boosting stage found by using ``early_stopping_rounds`` is also printed.
-        Example
-        -------
+        .. rubric:: Example
+
         With ``verbose_eval`` = 4 and at least one item in ``valid_sets``,
         an evaluation metric is printed every 4 (instead of 1) boosting stages.
...
...
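The ``verbose_eval`` behavior described above (with ``verbose_eval`` = 4, print every 4th boosting stage, plus the last stage) can be sketched as a standalone helper. ``printed_stages`` is an illustrative function, not part of LightGBM.

```python
def printed_stages(num_boost_round, verbose_eval):
    """Boosting stages at which an eval metric would be printed."""
    stages = [i for i in range(1, num_boost_round + 1) if i % verbose_eval == 0]
    if num_boost_round not in stages:
        stages.append(num_boost_round)  # the last boosting stage is also printed
    return stages

print(printed_stages(10, 4))  # [4, 8, 10]
```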
python-package/lightgbm/plotting.py
...
...
@@ -469,10 +469,10 @@ def create_tree_digraph(booster, tree_index=0, show_info=None, precision=3,
                         old_node_attr=None, old_edge_attr=None, old_body=None,
                         old_strict=False, **kwargs):
     """Create a digraph representation of specified tree.
-    Note
-    ----
-    For more information please visit
-    https://graphviz.readthedocs.io/en/stable/api.html#digraph.
+    .. note::
+
+        For more information please visit
+        https://graphviz.readthedocs.io/en/stable/api.html#digraph.

     Parameters
     ----------
...
...
@@ -545,10 +545,10 @@ def plot_tree(booster, ax=None, tree_index=0, figsize=None,
               show_info=None, precision=3, **kwargs):
     """Plot specified tree.
-    Note
-    ----
-    It is preferable to use ``create_tree_digraph()`` because of its lossless quality
-    and returned objects can be also rendered and displayed directly inside a Jupyter notebook.
+    .. note::
+
+        It is preferable to use ``create_tree_digraph()`` because of its lossless quality
+        and returned objects can be also rendered and displayed directly inside a Jupyter notebook.

     Parameters
     ----------
...
...
python-package/lightgbm/sklearn.py
...
...
@@ -40,11 +40,11 @@ class _ObjectiveFunctionWrapper(object):
         hess : array-like of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task)
             The value of the second order derivative (Hessian) for each sample point.
-        Note
-        ----
-        For multi-class task, the y_pred is group by class_id first, then group by row_id.
-        If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i]
-        and you should group grad and hess in this way as well.
+        .. note::
+
+            For multi-class task, the y_pred is group by class_id first, then group by row_id.
+            If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i]
+            and you should group grad and hess in this way as well.
         """
         self.func = func
...
...
@@ -127,10 +127,10 @@ class _EvalFunctionWrapper(object):
         is_higher_better : bool
             Is eval result higher better, e.g. AUC is ``is_higher_better``.
-        Note
-        ----
-        For multi-class task, the y_pred is group by class_id first, then group by row_id.
-        If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i].
+        .. note::
+
+            For multi-class task, the y_pred is group by class_id first, then group by row_id.
+            If you want to get i-th row y_pred in j-th class, the access way is y_pred[j * num_data + i].
         """
         self.func = func
...
...
@@ -244,9 +244,9 @@ class LGBMModel(_LGBMModelBase):
         Other parameters for the model.
         Check http://lightgbm.readthedocs.io/en/latest/Parameters.html for more parameters.
-        Note
-        ----
-        \*\*kwargs is not supported in sklearn, it may cause unexpected issues.
+        .. warning::
+
+            \*\*kwargs is not supported in sklearn, it may cause unexpected issues.

     Attributes
     ----------
...
...
@@ -421,8 +421,8 @@ class LGBMModel(_LGBMModelBase):
     If int, the eval metric on the eval set is printed at every ``verbose`` boosting stage.
     The last boosting stage or the boosting stage found by using ``early_stopping_rounds`` is also printed.
-        Example
-        -------
+        .. rubric:: Example
+
         With ``verbose`` = 4 and at least one item in ``eval_set``,
         an evaluation metric is printed every 4 (instead of 1) boosting stages.
...
...
@@ -626,13 +626,13 @@ class LGBMModel(_LGBMModelBase):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.
-            Note
-            ----
-            If you want to get more explanations for your model's predictions using SHAP values,
-            like SHAP interaction values,
-            you can install the shap package (https://github.com/slundberg/shap).
-            Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
-            column, where the last column is the expected value.
+            .. note::
+
+                If you want to get more explanations for your model's predictions using SHAP values,
+                like SHAP interaction values,
+                you can install the shap package (https://github.com/slundberg/shap).
+                Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
+                column, where the last column is the expected value.
         **kwargs
             Other parameters for the prediction.
...
...
@@ -705,12 +705,12 @@ class LGBMModel(_LGBMModelBase):
     def feature_importances_(self):
         """Get feature importances.
-        Note
-        ----
-        Feature importance in sklearn interface used to normalize to 1,
-        it's deprecated after 2.0.4 and is the same as Booster.feature_importance() now.
-        ``importance_type`` attribute is passed to the function
-        to configure the type of importance values to be extracted.
+        .. note::
+
+            Feature importance in sklearn interface used to normalize to 1,
+            it's deprecated after 2.0.4 and is the same as Booster.feature_importance() now.
+            ``importance_type`` attribute is passed to the function
+            to configure the type of importance values to be extracted.
         """
         if self._n_features is None:
             raise LGBMNotFittedError('No feature_importances found. Need to call fit beforehand.')
...
...
@@ -834,13 +834,13 @@ class LGBMClassifier(LGBMModel, _LGBMClassifierBase):
         pred_contrib : bool, optional (default=False)
             Whether to predict feature contributions.
-            Note
-            ----
-            If you want to get more explanations for your model's predictions using SHAP values,
-            like SHAP interaction values,
-            you can install the shap package (https://github.com/slundberg/shap).
-            Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
-            column, where the last column is the expected value.
+            .. note::
+
+                If you want to get more explanations for your model's predictions using SHAP values,
+                like SHAP interaction values,
+                you can install the shap package (https://github.com/slundberg/shap).
+                Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra
+                column, where the last column is the expected value.
         **kwargs
             Other parameters for the prediction.
...
...
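The ``pred_contrib`` notes repeated throughout this commit describe a matrix with one extra column: per-feature contributions plus a final expected-value column, which together sum to the raw prediction. A toy sketch of that row layout (made-up numbers, not actual SHAP values or LightGBM output):

```python
n_features = 3
expected_value = 0.5
contribs = [0.2, -0.1, 0.3]        # one contribution per feature

# pred_contrib-style row: contributions with the expected value appended last.
row = contribs + [expected_value]
assert len(row) == n_features + 1  # the documented extra column

raw_prediction = sum(row)          # contributions + expected value
print(round(raw_prediction, 6))
```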