Unverified commit f91dcfee authored by Nikita Titov, committed by GitHub

[docs][python] update types in docstring (#6897)

* Update basic.py

* Update sklearn.py

parent 9111ade7
--- a/python-package/lightgbm/basic.py
+++ b/python-package/lightgbm/basic.py
@@ -1100,7 +1100,7 @@ class _InnerPredictor:
         Parameters
         ----------
-        data : str, pathlib.Path, numpy array, pandas DataFrame, pyarrow Table or scipy.sparse
+        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse or pyarrow Table
             Data source for prediction.
             If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM).
         start_iteration : int, optional (default=0)
@@ -2596,7 +2596,7 @@ class Dataset:
         Parameters
         ----------
-        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence or list of numpy array
+        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence, list of numpy array or pyarrow Table
             Data source of Dataset.
             If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM) or a LightGBM Dataset binary file.
         label : list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)
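The "list of numpy array" form accepted by the ``Dataset`` constructor above can be pictured as row-chunks of a single feature matrix. A minimal sketch, assuming (as LightGBM's chunked input implies) that all chunks share the same column count:

```python
import numpy as np

# Hypothetical illustration: two row-chunks of one 3-feature matrix.
# Passing them as a list lets large data be assembled without one big copy.
chunks = [np.zeros((2, 3)), np.ones((4, 3))]

# Conceptually, the Dataset sees the row-wise concatenation of the chunks:
stacked = np.vstack(chunks)
assert stacked.shape == (6, 3)  # 2 + 4 rows, 3 features
```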
@@ -2741,7 +2741,7 @@ class Dataset:
         ----------
         field_name : str
             The field name of the information.
-        data : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray or None
+        data : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None
             The data to be set.

         Returns
@@ -3214,7 +3214,7 @@ class Dataset:
         Returns
         -------
-        label : list, numpy 1-D array, pandas Series / one-column DataFrame or None
+        label : list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array, pyarrow ChunkedArray or None
             The label information from the Dataset.
             For a constructed ``Dataset``, this will only return a numpy array.
         """
@@ -3227,7 +3227,7 @@ class Dataset:
         Returns
         -------
-        weight : list, numpy 1-D array, pandas Series or None
+        weight : list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None
             Weight for each data point from the Dataset. Weights should be non-negative.
             For a constructed ``Dataset``, this will only return ``None`` or a numpy array.
         """
@@ -3240,7 +3240,7 @@ class Dataset:
         Returns
         -------
-        init_score : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), or None
+        init_score : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None
             Init score of Booster.
             For a constructed ``Dataset``, this will only return ``None`` or a numpy array.
         """
@@ -3253,7 +3253,7 @@ class Dataset:
         Returns
         -------
-        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence or list of numpy array or None
+        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence, list of numpy array, pyarrow Table or None
             Raw data used in the Dataset construction.
         """
         if self._handle is None:
@@ -3286,7 +3286,7 @@ class Dataset:
         Returns
         -------
-        group : list, numpy 1-D array, pandas Series or None
+        group : list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None
             Group/query data.
             Only used in the learning-to-rank task.
             sum(group) = n_samples.
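The ``group`` convention documented above (each entry is the number of consecutive rows belonging to one query, so ``sum(group) = n_samples``) can be sketched with a small helper. The helper name is hypothetical; only the convention comes from the docstring:

```python
def group_to_query_slices(group):
    """Turn a group-size list like [3, 2, 4] into (start, stop) row slices per query."""
    slices, start = [], 0
    for size in group:
        slices.append((start, start + size))
        start += size
    return slices

group = [3, 2, 4]       # three queries with 3, 2 and 4 documents
n_samples = sum(group)  # 9 feature/label rows expected in the Dataset
print(group_to_query_slices(group))  # → [(0, 3), (3, 5), (5, 9)]
```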
@@ -4670,7 +4670,7 @@ class Booster:
         Parameters
         ----------
-        data : str, pathlib.Path, numpy array, pandas DataFrame, pyarrow Table or scipy.sparse
+        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse or pyarrow Table
             Data source for prediction.
             If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM).
         start_iteration : int, optional (default=0)
@@ -4751,7 +4751,7 @@ class Booster:
         Parameters
         ----------
-        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence or list of numpy array
+        data : str, pathlib.Path, numpy array, pandas DataFrame, scipy.sparse, Sequence, list of Sequence, list of numpy array or pyarrow Table
             Data source for refit.
             If str or pathlib.Path, it represents the path to a text file (CSV, TSV, or LibSVM).
         label : list, numpy 1-D array, pandas Series / one-column DataFrame, pyarrow Array or pyarrow ChunkedArray
--- a/python-package/lightgbm/sklearn.py
+++ b/python-package/lightgbm/sklearn.py
@@ -1076,10 +1076,10 @@ class LGBMModel(_LGBMModelBase):
     fit.__doc__ = (
         _lgbmmodel_doc_fit.format(
             X_shape="numpy array, pandas DataFrame, scipy.sparse, list of lists of int or float of shape = [n_samples, n_features]",
-            y_shape="numpy array, pandas DataFrame, pandas Series, list of int or float of shape = [n_samples]",
-            sample_weight_shape="numpy array, pandas Series, list of int or float of shape = [n_samples] or None, optional (default=None)",
-            init_score_shape="numpy array, pandas DataFrame, pandas Series, list of int or float of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task) or shape = [n_samples, n_classes] (for multi-class task) or None, optional (default=None)",
-            group_shape="numpy array, pandas Series, list of int or float, or None, optional (default=None)",
+            y_shape="numpy array, pandas DataFrame, pandas Series, list of int or float, pyarrow Array, pyarrow ChunkedArray of shape = [n_samples]",
+            sample_weight_shape="numpy array, pandas Series, list of int or float, pyarrow Array, pyarrow ChunkedArray of shape = [n_samples] or None, optional (default=None)",
+            init_score_shape="numpy array, pandas DataFrame, pandas Series, list of int or float, list of lists, pyarrow Array, pyarrow ChunkedArray, pyarrow Table of shape = [n_samples] or shape = [n_samples * n_classes] (for multi-class task) or shape = [n_samples, n_classes] (for multi-class task) or None, optional (default=None)",
+            group_shape="numpy array, pandas Series, pyarrow Array, pyarrow ChunkedArray, list of int or float, or None, optional (default=None)",
             eval_sample_weight_shape="list of array (same types as ``sample_weight`` supports), or None, optional (default=None)",
             eval_init_score_shape="list of array (same types as ``init_score`` supports), or None, optional (default=None)",
             eval_group_shape="list of array (same types as ``group`` supports), or None, optional (default=None)",
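The ``init_score`` docstring above names two equivalent multi-class layouts: flat ``shape = [n_samples * n_classes]`` and 2-D ``shape = [n_samples, n_classes]``. A minimal numpy sketch of the relationship, assuming row-major flattening (sample 0's class scores first):

```python
import numpy as np

n_samples, n_classes = 4, 3

# 2-D layout: one row of per-class scores per sample.
init_2d = np.arange(n_samples * n_classes, dtype=float).reshape(n_samples, n_classes)

# Flat layout: the same values in row-major order (an assumption of this sketch).
init_flat = init_2d.ravel()

assert init_flat.shape == (n_samples * n_classes,)
# Round-tripping back to 2-D recovers the original scores exactly.
assert np.array_equal(init_flat.reshape(n_samples, n_classes), init_2d)
```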