- ``data`` :raw-html:`<a id="data" title="Permalink to this parameter" href="#data">🔗︎</a>`, default = ``""``, type = string, aliases: ``train``, ``train_data``, ``train_data_file``, ``data_filename``
- path of training data; LightGBM will train from this data
...
@@ -670,6 +672,8 @@ Learning Control Parameters
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``num_grad_quant_bins`` :raw-html:`<a id="num_grad_quant_bins" title="Permalink to this parameter" href="#num_grad_quant_bins">🔗︎</a>`, default = ``4``, type = int
- number of bins used to quantize gradients and hessians
...
@@ -678,6 +682,8 @@ Learning Control Parameters
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``quant_train_renew_leaf`` :raw-html:`<a id="quant_train_renew_leaf" title="Permalink to this parameter" href="#quant_train_renew_leaf">🔗︎</a>`, default = ``false``, type = bool
- whether to renew the leaf values with original gradients during quantized training
...
@@ -686,10 +692,14 @@ Learning Control Parameters
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``stochastic_rounding`` :raw-html:`<a id="stochastic_rounding" title="Permalink to this parameter" href="#stochastic_rounding">🔗︎</a>`, default = ``true``, type = bool
- whether to use stochastic rounding in gradient quantization
- *New in version 4.0.0*
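The interplay of ``num_grad_quant_bins`` and ``stochastic_rounding`` can be illustrated with a small sketch. This is not LightGBM's implementation, only a hedged model of the idea: gradients are mapped to integer multiples of a quantization step, and with stochastic rounding the fractional part sets the probability of rounding up, which keeps the quantized gradient unbiased in expectation.

```python
import math
import random

def quantize_gradients(grads, num_grad_quant_bins=4, stochastic_rounding=True, seed=0):
    # Illustrative sketch only -- not LightGBM's actual code.
    # Each gradient maps to an integer multiple of `scale`; stochastic
    # rounding rounds up with probability equal to the fractional part.
    rng = random.Random(seed)
    scale = max(abs(g) for g in grads) / num_grad_quant_bins
    quantized = []
    for g in grads:
        x = g / scale
        low = math.floor(x)
        frac = x - low
        if stochastic_rounding:
            quantized.append(low + 1 if rng.random() < frac else low)
        else:
            quantized.append(round(x))  # deterministic nearest rounding
    return quantized, scale

# Deterministic (nearest) rounding for comparison:
q, scale = quantize_gradients([0.5, -1.0, 1.0, 0.25], stochastic_rounding=False)
print(q, scale)  # -> [2, -4, 4, 1] 0.25
```

With stochastic rounding, each quantized value stays within one quantization step of the true gradient, while its expectation matches the true gradient exactly.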
IO Parameters
-------------
...
@@ -908,6 +918,8 @@ Dataset Parameters
- **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to the `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__
// alias = train, train_data, train_data_file, data_filename
...
@@ -598,22 +599,26 @@ struct Config {
// desc = with quantized training, most arithmetic in the training process will be done with integer operations
// desc = gradient quantization can accelerate training, with little accuracy drop in most cases
// desc = **Note**: can be used only with ``device_type = cpu``
// desc = *New in version 4.0.0*
bool use_quantized_grad = false;
// [no-save]
// desc = number of bins used to quantize gradients and hessians
// desc = with more bins, the quantized training will be closer to full precision training
// desc = **Note**: can be used only with ``device_type = cpu``
// desc = *New in version 4.0.0*
int num_grad_quant_bins = 4;
// [no-save]
// desc = whether to renew the leaf values with original gradients during quantized training
// desc = renewing is very helpful for good quantized training accuracy for ranking objectives
// desc = **Note**: can be used only with ``device_type = cpu``
// desc = *New in version 4.0.0*
bool quant_train_renew_leaf = false;
// [no-save]
// desc = whether to use stochastic rounding in gradient quantization
// desc = *New in version 4.0.0*
bool stochastic_rounding = true;
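The four fields above surface as user-facing parameters of the same names. A minimal sketch of enabling quantized training through a parameter mapping (shown as a plain dict rendered into LightGBM's ``key = value`` config-file form, so it runs without LightGBM installed; in practice such a dict would be passed to a training API):

```python
# Quantized-training parameters with their documented defaults
# (use_quantized_grad flipped on to enable the feature).
params = {
    "device_type": "cpu",             # quantized training is CPU-only
    "use_quantized_grad": True,       # default: false
    "num_grad_quant_bins": 4,         # default: 4
    "quant_train_renew_leaf": False,  # default: false
    "stochastic_rounding": True,      # default: true
}

# Render in the "key = value" form used by LightGBM config files.
config_text = "\n".join(
    f"{k} = {str(v).lower() if isinstance(v, bool) else v}"
    for k, v in params.items()
)
print(config_text)
```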
#ifndef __NVCC__
...
@@ -777,6 +782,7 @@ struct Config {
// desc = path to a ``.json`` file that specifies customized parser initialized configuration
// desc = see `lightgbm-transform <https://github.com/microsoft/lightgbm-transform>`__ for usage examples
// desc = **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to the `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__
Single row with the same structure as the training data.
If not None, the plot will highlight the path that the sample takes through the tree.
.. versionadded:: 4.0.0
max_category_values : int, optional (default=10)
    The maximum number of category values to display in tree nodes; if the number of thresholds is greater than this value, thresholds will be collapsed and displayed on the label tooltip instead.
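The collapsing rule described above can be sketched as follows. This is illustrative only; the label format and the ``||`` separator are assumptions, not LightGBM's exact rendering:

```python
def collapse_category_values(thresholds, max_category_values=10):
    # Sketch of the documented behavior: when a categorical split lists
    # more values than max_category_values, the node shows a collapsed
    # summary and the full list moves to the label tooltip.
    full = "||".join(str(t) for t in thresholds)
    if len(thresholds) <= max_category_values:
        return full, None  # short list: full label in the node, no tooltip
    return f"{len(thresholds)} values", full

print(collapse_category_values([1, 2, 5]))                # -> ('1||2||5', None)
print(collapse_category_values(list(range(12))))          # collapsed summary
```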