tianlh / LightGBM-DCU · Commits · 0eb99219

Commit 0eb99219 authored Apr 24, 2017 by Guolin Ke

remove some files.

parent dbfa16c3
Changes: 32
Showing 20 changed files with 0 additions and 760 deletions (+0 -760)
R-package/NAMESPACE                              +0 -2
R-package/man/agaricus.test.Rd                   +0 -32
R-package/man/agaricus.train.Rd                  +0 -32
R-package/man/dim.Rd                             +0 -37
R-package/man/dimnames.lgb.Dataset.Rd            +0 -40
R-package/man/getinfo.Rd                         +0 -51
R-package/man/lgb.Dataset.Rd                     +0 -46
R-package/man/lgb.Dataset.construct.Rd           +0 -25
R-package/man/lgb.Dataset.create.valid.Rd        +0 -36
R-package/man/lgb.Dataset.save.Rd                +0 -31
R-package/man/lgb.Dataset.set.categorical.Rd     +0 -32
R-package/man/lgb.Dataset.set.reference.Rd       +0 -33
R-package/man/lgb.dump.Rd                        +0 -42
R-package/man/lgb.get.eval.result.Rd             +0 -27
R-package/man/lgb.importance.Rd                  +0 -44
R-package/man/lgb.interprete.Rd                  +0 -51
R-package/man/lgb.load.Rd                        +0 -41
R-package/man/lgb.model.dt.tree.Rd               +0 -55
R-package/man/lgb.plot.importance.Rd             +0 -49
R-package/man/lgb.plot.interpretation.Rd         +0 -54
R-package/NAMESPACE

# Generated by roxygen2: do not edit by hand
S3method("dimnames<-",lgb.Dataset)
S3method(dim,lgb.Dataset)
S3method(dimnames,lgb.Dataset)
...
R-package/man/agaricus.test.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lightgbm.R
\docType{data}
\name{agaricus.test}
\alias{agaricus.test}
\title{Test part from Mushroom Data Set}
\format{A list containing a label vector, and a dgCMatrix object with 1611
rows and 126 variables}
\usage{
data(agaricus.test)
}
\description{
This data set is originally from the Mushroom data set,
UCI Machine Learning Repository.
}
\details{
This data set includes the following fields:
\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}
R-package/man/agaricus.train.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lightgbm.R
\docType{data}
\name{agaricus.train}
\alias{agaricus.train}
\title{Training part from Mushroom Data Set}
\format{A list containing a label vector, and a dgCMatrix object with 6513
rows and 127 variables}
\usage{
data(agaricus.train)
}
\description{
This data set is originally from the Mushroom data set,
UCI Machine Learning Repository.
}
\details{
This data set includes the following fields:
\itemize{
\item \code{label} the label for each record
\item \code{data} a sparse Matrix of \code{dgCMatrix} class, with 126 columns.
}
}
\references{
https://archive.ics.uci.edu/ml/datasets/Mushroom
Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository
[http://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.
}
\keyword{datasets}
R-package/man/dim.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{dim.lgb.Dataset}
\alias{dim.lgb.Dataset}
\title{Dimensions of an lgb.Dataset}
\usage{
\method{dim}{lgb.Dataset}(x, ...)
}
\arguments{
\item{x}{Object of class \code{lgb.Dataset}}
\item{...}{other parameters}
}
\value{
a vector of numbers of rows and of columns
}
\description{
Returns a vector of numbers of rows and of columns in an \code{lgb.Dataset}.
}
\details{
Note: since \code{nrow} and \code{ncol} internally use \code{dim}, they can also
be directly used with an \code{lgb.Dataset} object.
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
stopifnot(nrow(dtrain) == nrow(train$data))
stopifnot(ncol(dtrain) == ncol(train$data))
stopifnot(all(dim(dtrain) == dim(train$data)))
}
}
R-package/man/dimnames.lgb.Dataset.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{dimnames.lgb.Dataset}
\alias{dimnames.lgb.Dataset}
\alias{dimnames<-.lgb.Dataset}
\title{Handling of column names of \code{lgb.Dataset}}
\usage{
\method{dimnames}{lgb.Dataset}(x)
\method{dimnames}{lgb.Dataset}(x) <- value
}
\arguments{
\item{x}{object of class \code{lgb.Dataset}}
\item{value}{a list of two elements: the first one is ignored
and the second one is column names}
}
\description{
Only column names are supported for \code{lgb.Dataset}: setting row names
has no effect, and the returned row names are NULL.
}
\details{
Generic \code{dimnames} methods are used by \code{colnames}.
Since row names are irrelevant, it is recommended to use \code{colnames} directly.
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.construct(dtrain)
dimnames(dtrain)
colnames(dtrain)
colnames(dtrain) <- make.names(1:ncol(train$data))
print(dtrain, verbose = TRUE)
}
}
R-package/man/getinfo.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{getinfo}
\alias{getinfo}
\alias{getinfo.lgb.Dataset}
\title{Get information of an lgb.Dataset object}
\usage{
getinfo(dataset, ...)
\method{getinfo}{lgb.Dataset}(dataset, name, ...)
}
\arguments{
\item{dataset}{Object of class \code{lgb.Dataset}}
\item{...}{other parameters}
\item{name}{the name of the information field to get (see details)}
}
\value{
info data
}
\description{
Get information of an lgb.Dataset object
}
\details{
The \code{name} field can be one of the following:
\itemize{
\item \code{label}: the label LightGBM learns from;
\item \code{weight}: per-record weights used to rescale the loss;
\item \code{group}: group sizes (for ranking tasks);
\item \code{init_score}: the initial score, i.e. the base prediction LightGBM boosts from.
}
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.construct(dtrain)
labels <- lightgbm::getinfo(dtrain, "label")
lightgbm::setinfo(dtrain, "label", 1 - labels)
labels2 <- lightgbm::getinfo(dtrain, "label")
stopifnot(all(labels2 == 1 - labels))
}
}
R-package/man/lgb.Dataset.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset}
\alias{lgb.Dataset}
\title{Construct lgb.Dataset object}
\usage{
lgb.Dataset(data, params = list(), reference = NULL, colnames = NULL,
categorical_feature = NULL, free_raw_data = TRUE, info = list(), ...)
}
\arguments{
\item{data}{a \code{matrix} object, a \code{dgCMatrix} object or a character representing a filename}
\item{params}{a list of parameters}
\item{reference}{reference dataset}
\item{colnames}{names of columns}
\item{categorical_feature}{categorical features}
\item{free_raw_data}{TRUE to free the raw data after constructing the Dataset}
\item{info}{a list of information of the lgb.Dataset object}
\item{...}{other information to pass to \code{info}, or parameters to pass to \code{params}}
}
\value{
constructed dataset
}
\description{
Construct lgb.Dataset object from dense matrix, sparse matrix
or local file (that was created previously by saving an \code{lgb.Dataset}).
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.save(dtrain, "lgb.Dataset.data")
dtrain <- lgb.Dataset("lgb.Dataset.data")
lgb.Dataset.construct(dtrain)
}
}
R-package/man/lgb.Dataset.construct.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset.construct}
\alias{lgb.Dataset.construct}
\title{Construct Dataset explicitly}
\usage{
lgb.Dataset.construct(dataset)
}
\arguments{
\item{dataset}{Object of class \code{lgb.Dataset}}
}
\description{
Construct Dataset explicitly
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.construct(dtrain)
}
}
R-package/man/lgb.Dataset.create.valid.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset.create.valid}
\alias{lgb.Dataset.create.valid}
\title{Construct validation data}
\usage{
lgb.Dataset.create.valid(dataset, data, info = list(), ...)
}
\arguments{
\item{dataset}{\code{lgb.Dataset} object, training data}
\item{data}{a \code{matrix} object, a \code{dgCMatrix} object or a character representing a filename}
\item{info}{a list of information of the lgb.Dataset object}
\item{...}{other information to pass to \code{info}.}
}
\value{
constructed dataset
}
\description{
Construct validation data according to training data
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
}
}
R-package/man/lgb.Dataset.save.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset.save}
\alias{lgb.Dataset.save}
\title{Save \code{lgb.Dataset} to a binary file}
\usage{
lgb.Dataset.save(dataset, fname)
}
\arguments{
\item{dataset}{object of class \code{lgb.Dataset}}
\item{fname}{filename of the output file}
}
\value{
passed dataset
}
\description{
Save \code{lgb.Dataset} to a binary file
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.save(dtrain, "data.bin")
}
}
R-package/man/lgb.Dataset.set.categorical.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset.set.categorical}
\alias{lgb.Dataset.set.categorical}
\title{Set categorical feature of \code{lgb.Dataset}}
\usage{
lgb.Dataset.set.categorical(dataset, categorical_feature)
}
\arguments{
\item{dataset}{object of class \code{lgb.Dataset}}
\item{categorical_feature}{categorical features}
}
\value{
passed dataset
}
\description{
Set categorical feature of \code{lgb.Dataset}
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
lgb.Dataset.save(dtrain, "lgb.Dataset.data")
dtrain <- lgb.Dataset("lgb.Dataset.data")
lgb.Dataset.set.categorical(dtrain, 1:2)
}
}
R-package/man/lgb.Dataset.set.reference.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Dataset.R
\name{lgb.Dataset.set.reference}
\alias{lgb.Dataset.set.reference}
\title{Set reference of \code{lgb.Dataset}}
\usage{
lgb.Dataset.set.reference(dataset, reference)
}
\arguments{
\item{dataset}{object of class \code{lgb.Dataset}}
\item{reference}{object of class \code{lgb.Dataset}}
}
\value{
passed dataset
}
\description{
If you want to use validation data, you should set reference to training data
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset(test$data, label = test$label)
lgb.Dataset.set.reference(dtest, dtrain)
}
}
R-package/man/lgb.dump.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Booster.R
\name{lgb.dump}
\alias{lgb.dump}
\title{Dump LightGBM model to json}
\usage{
lgb.dump(booster, num_iteration = NULL)
}
\arguments{
\item{booster}{Object of class \code{lgb.Booster}}
\item{num_iteration}{number of iterations to predict with; NULL or <= 0 means use the best iteration}
}
\value{
the model in JSON format
}
\description{
Dump LightGBM model to json
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(objective = "regression", metric = "l2")
valids <- list(test = dtest)
model <- lgb.train(params, dtrain, 100, valids,
                   min_data = 1, learning_rate = 1, early_stopping_rounds = 10)
json_model <- lgb.dump(model)
}
}
R-package/man/lgb.get.eval.result.Rd (deleted, 100644 → 0)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Booster.R
\name{lgb.get.eval.result}
\alias{lgb.get.eval.result}
\title{Get record evaluation result from booster}
\usage{
lgb.get.eval.result(booster, data_name, eval_name, iters = NULL,
is_err = FALSE)
}
\arguments{
\item{booster}{Object of class \code{lgb.Booster}}
\item{data_name}{name of the dataset to get evaluation results for}
\item{eval_name}{name of the evaluation metric to get}
\item{iters}{an integer vector of iterations to return; NULL returns all iterations}
\item{is_err}{TRUE to return the evaluation error instead of the metric value}
}
\value{
a vector of evaluation results
}
\description{
Get record evaluation result from booster
}
R-package/man/lgb.importance.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.importance.R
\name{lgb.importance}
\alias{lgb.importance}
\title{Compute feature importance in a model}
\usage{
lgb.importance(model, percentage = TRUE)
}
\arguments{
\item{model}{object of class \code{lgb.Booster}.}
\item{percentage}{whether to show importance in relative percentage.}
}
\value{
For a tree model, a \code{data.table} with the following columns:
\itemize{
\item \code{Feature} Feature names in the model.
\item \code{Gain} The total gain of this feature's splits.
\item \code{Cover} The number of observations related to this feature.
\item \code{Frequency} The number of times a feature is split on in trees.
}
}
\description{
Creates a \code{data.table} of feature importances in a model.
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "binary",
               learning_rate = 0.01, num_leaves = 63, max_depth = -1,
               min_data_in_leaf = 1, min_sum_hessian_in_leaf = 1)
model <- lgb.train(params, dtrain, 20)
tree_imp1 <- lgb.importance(model, percentage = TRUE)
tree_imp2 <- lgb.importance(model, percentage = FALSE)
}
}
R-package/man/lgb.interprete.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.interprete.R
\name{lgb.interprete}
\alias{lgb.interprete}
\title{Compute feature contribution of prediction}
\usage{
lgb.interprete(model, data, idxset, num_iteration = NULL)
}
\arguments{
\item{model}{object of class \code{lgb.Booster}.}
\item{data}{a matrix object or a dgCMatrix object.}
\item{idxset}{an integer vector of indices of the rows needed.}
\item{num_iteration}{number of iterations to predict with; NULL or <= 0 means use the best iteration.}
}
\value{
For regression, binary classification and lambdarank models, a \code{list} of \code{data.table}s
with the following columns:
\itemize{
\item \code{Feature} Feature names in the model.
\item \code{Contribution} The total contribution of this feature's splits.
}
For multiclass classification, a \code{list} of \code{data.table} with the Feature column and Contribution columns to each class.
}
\description{
Computes feature contribution components of a raw-score prediction.
}
\examples{
\dontrun{
library(lightgbm)
Sigmoid <- function(x) 1 / (1 + exp(-x))
Logit <- function(x) log(x / (1 - x))
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
setinfo(dtrain, "init_score", rep(Logit(mean(train$label)), length(train$label)))
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
params <- list(objective = "binary",
               learning_rate = 0.01, num_leaves = 63, max_depth = -1,
               min_data_in_leaf = 1, min_sum_hessian_in_leaf = 1)
model <- lgb.train(params, dtrain, 20)
tree_interpretation <- lgb.interprete(model, test$data, 1:5)
}
}
R-package/man/lgb.load.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.Booster.R
\name{lgb.load}
\alias{lgb.load}
\title{Load LightGBM model}
\usage{
lgb.load(filename)
}
\arguments{
\item{filename}{path to the saved model file}
}
\value{
the loaded booster
}
\description{
Load LightGBM model from a saved model file
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(objective = "regression", metric = "l2")
valids <- list(test = dtest)
model <- lgb.train(params, dtrain, 100, valids,
                   min_data = 1, learning_rate = 1, early_stopping_rounds = 10)
lgb.save(model, "model.txt")
load_booster <- lgb.load("model.txt")
}
}
R-package/man/lgb.model.dt.tree.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.model.dt.tree.R
\name{lgb.model.dt.tree}
\alias{lgb.model.dt.tree}
\title{Parse a LightGBM model json dump}
\usage{
lgb.model.dt.tree(model, num_iteration = NULL)
}
\arguments{
\item{model}{object of class \code{lgb.Booster}}
\item{num_iteration}{number of iterations to include; NULL or <= 0 means use the best iteration}
}
\value{
A \code{data.table} with detailed information about the model trees' nodes and leaves.
The columns of the \code{data.table} are:
\itemize{
\item \code{tree_index}: ID of a tree in a model (integer)
\item \code{split_index}: ID of a node in a tree (integer)
\item \code{split_feature}: for a node, a feature name (character); for a leaf, simply \code{"NA"}
\item \code{node_parent}: ID of the parent node for the current node (integer)
\item \code{leaf_index}: ID of a leaf in a tree (integer)
\item \code{leaf_parent}: ID of the parent node for the current leaf (integer)
\item \code{split_gain}: Split gain of a node
\item \code{threshold}: Splitting threshold value of a node
\item \code{decision_type}: Decision type of a node
\item \code{internal_value}: Node value
\item \code{internal_count}: The number of observations collected by a node
\item \code{leaf_value}: Leaf value
\item \code{leaf_count}: The number of observations collected by a leaf
}
}
\description{
Parse a LightGBM model json dump into a \code{data.table} structure.
}
\examples{
\dontrun{
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "binary",
               learning_rate = 0.01, num_leaves = 63, max_depth = -1,
               min_data_in_leaf = 1, min_sum_hessian_in_leaf = 1)
model <- lgb.train(params, dtrain, 20)
tree_dt <- lgb.model.dt.tree(model)
}
}
R-package/man/lgb.plot.importance.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.plot.importance.R
\name{lgb.plot.importance}
\alias{lgb.plot.importance}
\title{Plot feature importance as a bar graph}
\usage{
lgb.plot.importance(tree_imp, top_n = 10, measure = "Gain",
  left_margin = 10, cex = NULL)
}
\arguments{
\item{tree_imp}{a \code{data.table} returned by \code{\link{lgb.importance}}.}
\item{top_n}{maximal number of top features to include in the plot.}
\item{measure}{the name of the importance measure to plot; can be "Gain", "Cover" or "Frequency".}
\item{left_margin}{(base R barplot) allows adjusting the left margin size to fit feature names.}
\item{cex}{(base R barplot) passed as the \code{cex.names} parameter to \code{barplot}.}
}
\value{
The \code{lgb.plot.importance} function creates a \code{barplot} and silently returns
a processed data.table with the \code{top_n} features sorted by defined importance.
}
\description{
Plot previously calculated feature importance (Gain, Cover or Frequency) as a bar graph.
}
\details{
The graph represents each feature as a horizontal bar of length proportional to the
defined importance of a feature. Features are shown ranked in decreasing importance order.
}
\examples{
\dontrun{
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "binary",
               learning_rate = 0.01, num_leaves = 63, max_depth = -1,
               min_data_in_leaf = 1, min_sum_hessian_in_leaf = 1)
model <- lgb.train(params, dtrain, 20)
tree_imp <- lgb.importance(model, percentage = TRUE)
lgb.plot.importance(tree_imp, top_n = 10, measure = "Gain")
}
}
R-package/man/lgb.plot.interpretation.Rd (deleted, 100644 → 0)

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lgb.plot.interpretation.R
\name{lgb.plot.interpretation}
\alias{lgb.plot.interpretation}
\title{Plot feature contribution as a bar graph}
\usage{
lgb.plot.interpretation(tree_interpretation_dt, top_n = 10, cols = 1,
  left_margin = 10, cex = NULL)
}
\arguments{
\item{tree_interpretation_dt}{a \code{data.table} returned by \code{\link{lgb.interprete}}.}
\item{top_n}{maximal number of top features to include in the plot.}
\item{cols}{the number of columns in the plot layout; used only for multiclass classification feature contributions.}
\item{left_margin}{(base R barplot) allows adjusting the left margin size to fit feature names.}
\item{cex}{(base R barplot) passed as the \code{cex.names} parameter to \code{barplot}.}
}
\value{
The \code{lgb.plot.interpretation} function creates a \code{barplot}.
}
\description{
Plot previously calculated feature contribution as a bar graph.
}
\details{
The graph represents each feature as a horizontal bar of length proportional to the
defined contribution of a feature. Features are shown ranked in decreasing contribution order.
}
\examples{
\dontrun{
library(lightgbm)
Sigmoid <- function(x) {1 / (1 + exp(-x))}
Logit <- function(x) {log(x / (1 - x))}
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
setinfo(dtrain, "init_score", rep(Logit(mean(train$label)), length(train$label)))
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
params <- list(objective = "binary",
               learning_rate = 0.01, num_leaves = 63, max_depth = -1,
               min_data_in_leaf = 1, min_sum_hessian_in_leaf = 1)
model <- lgb.train(params, dtrain, 20)
tree_interpretation <- lgb.interprete(model, test$data, 1:5)
lgb.plot.interpretation(tree_interpretation[[1]], top_n = 10)
}
}