"tests/vscode:/vscode.git/clone" did not exist on "19f357726c0512736e71ca7d8eef02b303115b09"
Commit f2afb2cd authored by James Lamb, committed by Nikita Titov

[R-package][docs] made roxygen2 tags explicit and cleaned up documentation (#2688)



* [R-package] made roxygen2 tags explicit and cleaned up documentation

* Apply suggestions from code review
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>

* Apply suggestions from code review
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>

* Update R-package/man/lightgbm.Rd
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>

* [R-package] moved @name to the top of roxygen blocks and removed some inaccurate information in documentation on parameters
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
parent c7ae833e
@@ -4,8 +4,7 @@
\alias{lgb_shared_params}
\title{Shared parameter docs}
\arguments{
\item{callbacks}{list of callback functions
List of callback functions that are applied at each iteration.}
\item{callbacks}{List of callback functions that are applied at each iteration.}
\item{data}{a \code{lgb.Dataset} object, used for training}
......
@@ -43,26 +43,27 @@ If early stopping occurs, the model will have 'best_iter' field.}
\item{init_model}{path of model file of \code{lgb.Booster} object, will continue training from this model}
\item{callbacks}{list of callback functions
List of callback functions that are applied at each iteration.}
\item{callbacks}{List of callback functions that are applied at each iteration.}
\item{...}{Additional arguments passed to \code{\link{lgb.train}}. For example
\itemize{
\item{valids}{a list of \code{lgb.Dataset} objects, used for validation}
\item{obj}{objective function, can be character or custom objective function. Examples include
\item{\code{valids}: a list of \code{lgb.Dataset} objects, used for validation}
\item{\code{obj}: objective function, can be character or custom objective function. Examples include
\code{regression}, \code{regression_l1}, \code{huber},
\code{binary}, \code{lambdarank}, \code{multiclass}}
\item{eval}{evaluation function, can be (a list of) character or custom eval function}
\item{record}{Boolean, TRUE will record iteration message to \code{booster$record_evals}}
\item{colnames}{feature names, if not null, will use this to overwrite the names in dataset}
\item{categorical_feature}{list of str or int. type int represents index, type str represents feature names}
\item{reset_data}{Boolean, setting it to TRUE (not the default value) will transform the booster model
\item{\code{eval}: evaluation function, can be (a list of) character or custom eval function}
\item{\code{record}: Boolean, TRUE will record iteration message to \code{booster$record_evals}}
\item{\code{colnames}: feature names, if not null, will use this to overwrite the names in dataset}
\item{\code{categorical_feature}: categorical features. This can either be a character vector of feature
names or an integer vector with the indices of the features (e.g. \code{c(1L, 10L)} to
say "the first and tenth columns").}
\item{\code{reset_data}: Boolean, setting it to TRUE (not the default value) will transform the booster model
into a predictor model which frees up memory and the original datasets}
\item{boosting}{Boosting type. \code{"gbdt"} or \code{"dart"}}
\item{num_leaves}{number of leaves in one tree. defaults to 127}
\item{max_depth}{Limit the max depth for tree model. This is used to deal with
\item{\code{boosting}: Boosting type. \code{"gbdt"}, \code{"rf"}, \code{"dart"} or \code{"goss"}.}
\item{\code{num_leaves}: Maximum number of leaves in one tree.}
\item{\code{max_depth}: Limit the max depth for tree model. This is used to deal with
overfitting when #data is small. Trees still grow leaf-wise.}
\item{num_threads}{Number of threads for LightGBM. For the best speed, set this to
\item{\code{num_threads}: Number of threads for LightGBM. For the best speed, set this to
the number of real CPU cores, not the number of threads (most
CPUs use hyper-threading to generate 2 threads per CPU core).}
}}
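The pass-through arguments documented above might be exercised like this (a minimal sketch, assuming the lightgbm package is installed, using its bundled agaricus data; the specific parameter values are illustrative, not recommendations):

```r
library(lightgbm)
data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

# '...' arguments such as categorical_feature, plus boosting parameters
# in 'params', are forwarded on to lgb.train()
model <- lightgbm(
    data = dtrain
    , nrounds = 5L
    , params = list(
        objective = "binary"
        , boosting = "gbdt"
        , num_leaves = 31L
        , max_depth = -1L
        , num_threads = 2L
    )
    , categorical_feature = c(1L, 10L)  # "the first and tenth columns"
)
```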
......
@@ -71,5 +71,4 @@ model <- lgb.train(
, early_stopping_rounds = 5L
)
preds <- predict(model, test$data)
}
@@ -15,7 +15,7 @@ readRDS.lgb.Booster(file = "", refhook = NULL)
\code{lgb.Booster}.
}
\description{
Attempts to load a model using RDS.
Attempts to load a model stored in a \code{.rds} file, using \code{\link[base]{readRDS}}.
}
\examples{
library(lightgbm)
......
@@ -38,8 +38,8 @@ compression to be used. Ignored if file is a connection.}
NULL invisibly.
}
\description{
Attempts to save a model using RDS. Has an additional parameter (\code{raw}) which decides
whether to save the raw model or not.
Attempts to save a model using RDS. Has an additional parameter (\code{raw})
which decides whether to save the raw model or not.
}
\examples{
library(lightgbm)
......
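Taken together, \code{saveRDS.lgb.Booster} and \code{readRDS.lgb.Booster} allow a round trip through an \code{.rds} file (a minimal sketch, assuming the lightgbm package and its bundled agaricus data; the temp-file path is illustrative):

```r
library(lightgbm)
data(agaricus.train, package = "lightgbm")
model <- lightgbm(
    data = agaricus.train$data
    , label = agaricus.train$label
    , nrounds = 2L
    , params = list(objective = "binary")
)

model_file <- tempfile(fileext = ".rds")
# the 'raw' parameter controls whether the raw model is stored as well
saveRDS.lgb.Booster(model, file = model_file)
model_reloaded <- readRDS.lgb.Booster(file = model_file)
```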
@@ -22,16 +22,19 @@ setinfo(dataset, ...)
passed object
}
\description{
Set information of an \code{lgb.Dataset} object
Set one attribute of a \code{lgb.Dataset}
}
\details{
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: label lightgbm learn from ;
\item \code{weight}: to do a weight rescale ;
\item \code{init_score}: initial score is the base prediction lightgbm will boost from ;
\item \code{group}.
\item{\code{label}: vector of labels to use as the target variable}
\item{\code{weight}: numeric vector of per-row weights, used to rescale each row's contribution}
\item{\code{init_score}: initial score is the base prediction lightgbm will boost from}
\item{\code{group}: used for learning-to-rank tasks. An integer vector describing how to
group rows together as ordered results from the same set of candidate results to be ranked.
For example, if you have a 1000-row dataset that contains 250 4-document query results,
set this to \code{rep(4L, 250L)}}
}
}
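The attribute names listed above map onto \code{setinfo} calls like the following (a minimal sketch, assuming the lightgbm package and its agaricus data; the commented ranking line mirrors the \code{rep(4L, 250L)} case described above):

```r
library(lightgbm)
data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
dtrain <- lgb.Dataset.construct(dtrain)

# label: vector of labels to use as the target variable
setinfo(dtrain, "label", agaricus.train$label)

# weight: one value per row
setinfo(dtrain, "weight", rep(1.0, length(agaricus.train$label)))

# group (ranking data only): e.g. 250 query groups of 4 documents each
# would be: setinfo(dtrain, "group", rep(4L, 250L))
```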
\examples{
......
@@ -21,7 +21,7 @@ constructed sub dataset
}
\description{
Get a new \code{lgb.Dataset} containing the specified rows of
original \code{lgb.Dataset} object
original \code{lgb.Dataset} object
}
\examples{
library(lightgbm)
......