#' @name lgb_shared_params
#' @title Shared parameter docs
#' @description Parameter docs shared by \code{lgb.train}, \code{lgb.cv}, and \code{lightgbm}
#' @param callbacks List of callback functions that are applied at each iteration.
#' @param data a \code{lgb.Dataset} object, used for training. Some functions, such as \code{\link{lgb.cv}},
#'             may allow you to pass other types of data like \code{matrix} and then separately supply
#'             \code{label} as a keyword argument.
#' @param early_stopping_rounds int. Activates early stopping. When this parameter is non-null,
#'                              training will stop if the evaluation of any metric on any validation set
#'                              fails to improve for \code{early_stopping_rounds} consecutive boosting rounds.
#'                              If training stops early, the returned model will have attribute \code{best_iter}
#'                              set to the iteration number of the best iteration.
#' @param eval evaluation function(s). This can be a character vector, function, or list with a mixture of
#'             strings and functions.
#'
#'             \itemize{
#'                 \item{\bold{a. character vector}:
#'                     If you provide a character vector to this argument, it should contain strings with valid
#'                     evaluation metrics.
#'                     See \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric}{
#'                     The "metric" section of the documentation}
#'                     for a list of valid metrics.
#'                 }
#'                 \item{\bold{b. function}:
#'                      You can provide a custom evaluation function. This
#'                      should accept the keyword arguments \code{preds} and \code{dtrain} and should return a named
#'                      list with three elements:
#'                      \itemize{
#'                          \item{\code{name}: A string with the name of the metric, used for printing
#'                              and storing results.
#'                          }
#'                          \item{\code{value}: A single number indicating the value of the metric for the
#'                              given predictions and true values
#'                          }
#'                          \item{
#'                              \code{higher_better}: A boolean indicating whether higher values indicate a better fit.
#'                              For example, this would be \code{FALSE} for metrics like MAE or RMSE.
#'                          }
#'                      }
#'                 }
#'                 \item{\bold{c. list}:
#'                     If a list is given, it should only contain character vectors and functions.
#'                     These should follow the requirements from the descriptions above.
#'                 }
#'             }
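#'
#'             As a minimal sketch of form (b) above, a hypothetical mean-absolute-error metric could
#'             look like this (\code{get_field} retrieves the labels from the \code{lgb.Dataset}):
#'             \preformatted{
#' mae_eval <- function(preds, dtrain) {
#'     y_true <- get_field(dtrain, "label")
#'     list(
#'         name = "mae"
#'         , value = mean(abs(preds - y_true))
#'         , higher_better = FALSE
#'     )
#' }
#'             }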
#' @param eval_freq evaluation output frequency, only effective when verbose > 0 and \code{valids} has been provided
#' @param init_model path of model file or \code{lgb.Booster} object, will continue training from this model
#' @param nrounds number of training rounds
#' @param obj objective function, can be character or custom objective function. Examples include
#'            \code{regression}, \code{regression_l1}, \code{huber},
#'            \code{binary}, \code{lambdarank}, \code{multiclass}, \code{multiclassova}
#' @param params a list of parameters. See \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html}{
#'               the "Parameters" section of the documentation} for a list of parameters and valid values.
#' @param verbose verbosity for output. If <= 0 and \code{valids} has been provided, this will also disable
#'                the printing of evaluation results during training
#' @param serializable whether to make the resulting objects serializable through functions such as
#'                     \code{save} or \code{saveRDS} (see section "Model serialization").
#' @section Early Stopping:
#'
#'          "Early stopping" refers to stopping the training process if the model's performance on a given
#'          validation set does not improve for several consecutive iterations.
#'
#'          If multiple arguments are given to \code{eval}, their order will be preserved. If you enable
#'          early stopping by setting \code{early_stopping_rounds} in \code{params}, by default all
#'          metrics will be considered for early stopping.
#'
#'          If you want to only consider the first metric for early stopping, pass
#'          \code{first_metric_only = TRUE} in \code{params}. Note that if you also specify \code{metric}
#'          in \code{params}, that metric will be considered the "first" one. If you omit \code{metric},
#'          a default metric will be used based on your choice for the parameter \code{obj} (keyword argument)
#'          or \code{objective} (passed into \code{params}).
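#'
#'          As a minimal sketch, early stopping on the first metric only could be requested like this
#'          (\code{dtrain} and \code{dvalid} are placeholder \code{lgb.Dataset} objects):
#'          \preformatted{
#' params <- list(
#'     objective = "regression"
#'     , metric = "l2"
#'     , early_stopping_rounds = 10L
#'     , first_metric_only = TRUE
#' )
#' model <- lgb.train(
#'     params = params
#'     , data = dtrain
#'     , nrounds = 100L
#'     , valids = list("valid" = dvalid)
#' )
#'          }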
#' @section Model serialization:
#'
#'          LightGBM model objects can be serialized and de-serialized through functions such as \code{save}
#'          or \code{saveRDS}, but, similarly to libraries such as 'xgboost', serialization works a bit differently
#'          from typical R objects. In order to make models serializable in R, a copy of the underlying C++ object
#'          as serialized raw bytes is produced and stored in the R model object. When this R object is
#'          de-serialized, the underlying C++ model object gets reconstructed from these raw bytes, but only
#'          once some function that uses it is called, such as \code{predict}. In order to forcibly
#'          reconstruct the C++ object after deserialization (e.g. after calling \code{readRDS} or similar), one
#'          can use the function \link{lgb.restore_handle} (for example, if one makes predictions in parallel or in
#'          forked processes, it will be faster to restore the handle beforehand).
#'
#'          Producing and keeping these raw bytes, however, uses extra memory, and if they are not required,
#'          it is possible to avoid producing them by passing \code{serializable = FALSE}. In such cases, these
#'          raw bytes can be added to the model on demand through the function \link{lgb.make_serializable}.
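#'
#'          A minimal round-trip sketch (\code{model} stands in for a trained \code{lgb.Booster} and
#'          \code{X} for a feature matrix):
#'          \preformatted{
#' saveRDS(model, "model.rds")     # the raw bytes travel with the R object
#' model2 <- readRDS("model.rds")
#' lgb.restore_handle(model2)      # optional: reconstruct the C++ object up front
#' preds <- predict(model2, X)     # would otherwise trigger reconstruction lazily
#'          }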
#'
#'          \emph{New in version 4.0.0}
#'
#' @keywords internal
NULL

#' @name lightgbm
#' @title Train a LightGBM model
#' @description High-level R interface to train a LightGBM model. Unlike \code{\link{lgb.train}}, this function
#'              is focused on compatibility with other statistics and machine learning interfaces in R.
#'              This focus on compatibility means that this interface may experience more frequent breaking API changes
#'              than \code{\link{lgb.train}}.
#'              For efficiency-sensitive applications, or for applications where breaking API changes across releases
#'              are very expensive, use \code{\link{lgb.train}}.
#' @inheritParams lgb_shared_params
#' @param label Vector of labels, used if \code{data} is not an \code{\link{lgb.Dataset}}
#' @param weights Sample / observation weights for rows in the input data. If \code{NULL}, will assume that all
#'                observations / rows have the same importance / weight.
#'
#'                \emph{Changed from 'weight', in version 4.0.0}
#'
#' @param objective Optimization objective (e.g. `"regression"`, `"binary"`, etc.).
#'                  For a list of accepted objectives, see
#'                  \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html#objective}{
#'                  the "objective" item of the "Parameters" section of the documentation}.
#'
#'                  If passing \code{"auto"} and \code{data} is not of type \code{lgb.Dataset}, the objective will
#'                  be determined according to what is passed for \code{label}:\itemize{
#'                  \item If passing a factor with two levels, will use objective \code{"binary"}.
#'                  \item If passing a factor with more than two levels, will use objective \code{"multiclass"}
#'                  (note that parameter \code{num_class} in this case will also be determined automatically from
#'                  \code{label}).
#'                  \item Otherwise, will use objective \code{"regression"}.
#'                  }
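#'
#'                  A minimal sketch of the auto-detection (\code{X} stands in for a feature matrix
#'                  with one row per label):
#'                  \preformatted{
#' # factor with two levels: objective becomes "binary"
#' m1 <- lightgbm(data = X, label = factor(c("a", "b", "a", "b")), objective = "auto")
#' # factor with more than two levels: "multiclass", with num_class set from the label
#' m2 <- lightgbm(data = X, label = factor(c("a", "b", "c", "a")), objective = "auto")
#'                  }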
#'
#'                  \emph{New in version 4.0.0}
#'
#' @param init_score initial score; this is the base prediction that lightgbm will boost from
#'
#'                   \emph{New in version 4.0.0}
#'
#' @param num_threads Number of parallel threads to use. For best speed, this should be set to the number of
#'                    physical cores in the CPU - on a typical x86-64 machine, this corresponds to half the
#'                    maximum number of threads.
#'
#'                    Be aware that using too many threads can result in speed degradation in smaller datasets
#'                    (see the parameters documentation for more details).
#'
#'                    If passing zero, will use the default number of threads configured for OpenMP
#'                    (typically controlled through the environment variable \code{OMP_NUM_THREADS}).
#'
#'                    If passing \code{NULL} (the default), will try to use the number of physical cores in the
#'                    system, but be aware that getting the number of cores detected correctly requires package
#'                    \code{RhpcBLASctl} to be installed.
#'
#'                    This parameter gets overridden by \code{num_threads} and its aliases under \code{params}
#'                    if passed there.
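#'
#'                    A short sketch of that precedence (\code{X} and \code{y} are placeholders and the
#'                    thread counts are purely illustrative):
#'                    \preformatted{
#' # the keyword argument applies when 'params' carries no 'num_threads'
#' lightgbm(data = X, label = y, num_threads = 4L)
#' # a value under 'params' overrides the keyword argument
#' lightgbm(data = X, label = y, num_threads = 4L, params = list(num_threads = 8L))
#'                    }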
#'
#'                    \emph{New in version 4.0.0}
#'
#' @param ... Additional arguments passed to \code{\link{lgb.train}}. For example
#'     \itemize{
#'        \item{\code{valids}: a list of \code{lgb.Dataset} objects, used for validation}
#'        \item{\code{obj}: objective function, can be character or custom objective function. Examples include
#'                    \code{regression}, \code{regression_l1}, \code{huber},
#'                    \code{binary}, \code{lambdarank}, \code{multiclass}, \code{multiclassova}}
#'        \item{\code{eval}: evaluation function(s), can be (a list of) character vectors or custom eval functions}
#'        \item{\code{record}: Boolean; if \code{TRUE}, will record the evaluation results of each iteration in
#'                            \code{booster$record_evals}}
#'        \item{\code{colnames}: feature names; if not \code{NULL}, will be used to overwrite the names in the dataset}
#'        \item{\code{categorical_feature}: categorical features. This can either be a character vector of feature
#'                            names or an integer vector with the indices of the features (e.g. \code{c(1L, 10L)} to
#'                            say "the first and tenth columns").}
#'        \item{\code{reset_data}: Boolean; setting it to \code{TRUE} (not the default value) will transform the
#'                          booster model into a predictor model, freeing up memory and the original datasets}
#'     }
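#'
#'        As a minimal sketch of forwarding such arguments (\code{X}, \code{y}, and \code{dvalid} are
#'        placeholders, with \code{dvalid} an \code{lgb.Dataset}):
#'        \preformatted{
#' model <- lightgbm(
#'     data = X
#'     , label = y
#'     , nrounds = 50L
#'     , valids = list("valid" = dvalid)
#'     , categorical_feature = c(1L, 10L)
#' )
#'        }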
#' @inheritSection lgb_shared_params Early Stopping
#' @return a trained \code{lgb.Booster}
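#' @examples
#' # A minimal usage sketch on the agaricus data bundled with this package;
#' # the parameter values are illustrative, not tuned.
#' \donttest{
#' data(agaricus.train, package = "lightgbm")
#' model <- lightgbm(
#'     data = agaricus.train$data
#'     , label = agaricus.train$label
#'     , objective = "binary"
#'     , nrounds = 5L
#'     , verbose = 0L
#' )
#' preds <- predict(model, agaricus.train$data)
#' }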
#' @export
lightgbm <- function(data,
                     label = NULL,
                     weights = NULL,
                     params = list(),
                     nrounds = 100L,
                     verbose = 1L,
                     eval_freq = 1L,
                     early_stopping_rounds = NULL,
                     init_model = NULL,
                     callbacks = list(),
                     serializable = TRUE,
                     objective = "auto",
                     init_score = NULL,
                     num_threads = NULL,
                     ...) {

  # validate inputs early to avoid unnecessary computation
  if (nrounds <= 0L) {
    stop("nrounds should be greater than zero")
  }

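  # resolve the number of threads: a 'num_threads' (or alias) supplied under
  # 'params' takes precedence over the keyword argument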
  if (is.null(num_threads)) {
    num_threads <- lgb.get.default.num.threads()
  }
  params <- lgb.check.wrapper_param(
    main_param_name = "num_threads"
    , params = params
    , alternative_kwarg_value = num_threads
  )
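  # resolve verbosity the same way: a 'verbosity' (or alias) under 'params'
  # overrides the 'verbose' keyword argument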
  params <- lgb.check.wrapper_param(
    main_param_name = "verbosity"
    , params = params
    , alternative_kwarg_value = verbose
  )

  # Process factors as labels and auto-determine objective
  if (!lgb.is.Dataset(data)) {
    data_processor <- DataProcessor$new()
    temp <- data_processor$process_label(
        label = label
        , objective = objective
        , params = params
    )
    label <- temp$label
    objective <- temp$objective
    params <- temp$params
    rm(temp)
  } else {
    data_processor <- NULL
  }

  # Set data to a temporary variable
  dtrain <- data

  # Check whether data is an lgb.Dataset; if not, create one manually
  if (!lgb.is.Dataset(x = dtrain)) {
    dtrain <- lgb.Dataset(data = data, label = label, weight = weights, init_score = init_score)
  }

  train_args <- list(
    "params" = params
    , "data" = dtrain
    , "nrounds" = nrounds
    , "obj" = objective
    , "verbose" = params[["verbosity"]]
    , "eval_freq" = eval_freq
    , "early_stopping_rounds" = early_stopping_rounds
    , "init_model" = init_model
    , "callbacks" = callbacks
    , "serializable" = serializable
  )
  train_args <- append(train_args, list(...))

  if (! "valids" %in% names(train_args)) {
    train_args[["valids"]] <- list()
  }

  # Train the model through lgb.train()
  bst <- do.call(
    what = lgb.train
    , args = train_args
  )
  bst$data_processor <- data_processor

  return(bst)
}

#' @name agaricus.train
#' @title Training part from Mushroom Data Set
#' @description This data set is originally from the Mushroom data set,
#'              UCI Machine Learning Repository.
#'              This data set includes the following fields:
#'
#'              \itemize{
#'                  \item{\code{label}: the label for each record}
#'                  \item{\code{data}: a sparse Matrix of \code{dgCMatrix} class, with 126 columns.}
#'              }
#'
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#' @docType data
#' @keywords datasets
#' @usage data(agaricus.train)
#' @format A list containing a label vector, and a dgCMatrix object with 6513
#' rows and 126 variables
NULL

#' @name agaricus.test
#' @title Test part from Mushroom Data Set
#' @description This data set is originally from the Mushroom data set,
#'              UCI Machine Learning Repository.
#'              This data set includes the following fields:
#'
#'              \itemize{
#'                  \item{\code{label}: the label for each record}
#'                  \item{\code{data}: a sparse Matrix of \code{dgCMatrix} class, with 126 columns.}
#'              }
#' @references
#' https://archive.ics.uci.edu/ml/datasets/Mushroom
#'
#' Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
#' [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
#' School of Information and Computer Science.
#'
#' @docType data
#' @keywords datasets
#' @usage data(agaricus.test)
#' @format A list containing a label vector, and a dgCMatrix object with 1611
#' rows and 126 variables
NULL

#' @name bank
#' @title Bank Marketing Data Set
#' @description This data set is originally from the Bank Marketing data set,
#'              UCI Machine Learning Repository.
#'
#'              It contains only bank.csv, with 10% of the examples and 17 inputs, randomly selected
#'              from the full dataset (an older version of this dataset with fewer inputs).
#'
#' @references
#' http://archive.ics.uci.edu/ml/datasets/Bank+Marketing
#'
#' S. Moro, P. Cortez and P. Rita. (2014)
#' A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems
#'
#' @docType data
#' @keywords datasets
#' @usage data(bank)
#' @format A data.table with 4521 rows and 17 variables
NULL

# Various imports
#' @import methods
#' @importFrom Matrix Matrix
#' @importFrom R6 R6Class
#' @useDynLib lib_lightgbm , .registration = TRUE
NULL

# Suppress false positive warnings from 'R CMD check' about
# "unrecognized global variable"
globalVariables(c(
    "."
    , ".N"
    , ".SD"
    , "abs_contribution"
    , "bar_color"
    , "Contribution"
    , "Cover"
    , "Feature"
    , "Frequency"
    , "Gain"
    , "internal_count"
    , "internal_value"
    , "leaf_index"
    , "leaf_parent"
    , "leaf_value"
    , "node_parent"
    , "split_feature"
    , "split_gain"
    , "split_index"
    , "tree_index"
))