tianlh / LightGBM-DCU · Commits · 675b552d

Commit 675b552d (unverified)
Authored Jun 04, 2020 by James Lamb; committed by GitHub, Jun 04, 2020

[R-package] fix best_score using custom evaluation (fixes #3112) (#3117)

Parent: 9d433033

Changes: 2 changed files, with 35 additions and 1 deletion (+35, -1)
- R-package/R/lgb.train.R (+9, -1)
- R-package/tests/testthat/test_custom_objective.R (+26, -0)
R-package/R/lgb.train.R

...
@@ -298,7 +298,15 @@ lgb.train <- function(params = list(),
   # When early stopping is not activated, we compute the best iteration / score ourselves by
   # selecting the first metric and the first dataset
   if (record && length(non_train_valid_names) > 0L && is.na(env$best_score)) {
-    first_metric <- booster$.__enclos_env__$private$eval_names[1L]
+    # when using a custom eval function, the metric name is returned from the
+    # function, so figure it out from record_evals
+    if (!is.null(feval)) {
+      first_metric <- names(booster$record_evals[[first_valid_name]])[1L]
+    } else {
+      first_metric <- booster$.__enclos_env__$private$eval_names[1L]
+    }
     .find_best <- which.min
     if (isTRUE(env$eval_list[[1L]]$higher_better[1L])) {
       .find_best <- which.max
...
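To illustrate the selection logic this hunk implements, here is a minimal standalone sketch in R. It uses mock data (a hand-built `record_evals` list mirroring the names in the test file below), not the lightgbm API itself, and assumes a lower-is-better metric:

```r
# Standalone sketch of the fixed lookup (mock objects, not lightgbm itself).
# With a custom eval function, eval_names is empty, so the metric name must
# be read from record_evals instead.
record_evals <- list(
  eval = list(error = list(eval = c(0.02, 0.005, 0.0006207325, 0.00063)))
)
first_valid_name <- "eval"

# metric name as reported by the custom eval function
first_metric <- names(record_evals[[first_valid_name]])[1L]  # "error"

# "error" is lower-is-better, so the best iteration minimizes the score
.find_best <- which.min
scores <- record_evals[[first_valid_name]][[first_metric]][["eval"]]
best_iter <- .find_best(scores)   # 3
best_score <- scores[best_iter]   # 0.0006207325
```

Before this fix, the lookup went through `eval_names`, which is not populated for custom eval functions, leaving `best_score` as `NA`.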
...
R-package/tests/testthat/test_custom_objective.R

...
@@ -6,6 +6,8 @@ dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
 dtest <- lgb.Dataset(agaricus.test$data, label = agaricus.test$label)
 watchlist <- list(eval = dtest, train = dtrain)
+TOLERANCE <- 1e-6

 logregobj <- function(preds, dtrain) {
   labels <- getinfo(dtrain, "label")
   preds <- 1.0 / (1.0 + exp(-preds))
...
@@ -41,3 +43,27 @@ test_that("custom objective works", {
   bst <- lgb.train(param, dtrain, num_round, watchlist, eval = evalerror)
   expect_false(is.null(bst$record_evals))
 })
+
+test_that("using a custom objective, custom eval, and no other metrics works", {
+  set.seed(708L)
+  bst <- lgb.train(
+    params = list(
+      num_leaves = 8L
+      , learning_rate = 1.0
+    )
+    , data = dtrain
+    , nrounds = 4L
+    , valids = watchlist
+    , obj = logregobj
+    , eval = evalerror
+  )
+  expect_false(is.null(bst$record_evals))
+  expect_equal(bst$best_iter, 4L)
+  expect_true(abs(bst$best_score - 0.000621) < TOLERANCE)
+
+  eval_results <- bst$eval_valid(feval = evalerror)[[1L]]
+  expect_true(eval_results[["data_name"]] == "eval")
+  expect_true(abs(eval_results[["value"]] - 0.0006207325) < TOLERANCE)
+  expect_true(eval_results[["name"]] == "error")
+  expect_false(eval_results[["higher_better"]])
+})