chenpangpang / transformers · Commits

Commit 2cf3447e
Authored Nov 21, 2019 by Juha Kiili; committed by Aarni Koskela, Nov 21, 2019

Glue: log in Valohai-compatible JSON format too
Parent: 0cdfcca2
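Context for the commit message: Valohai collects execution metrics by parsing flat JSON objects that a job prints to stdout, one object per line, so this change mirrors the scalars already sent to TensorBoard into such a JSON line at every logging step. A hypothetical output line (metric names and values invented for illustration; the eval_* keys depend on the GLUE task's metrics, and eval values are strings because the code stores str(value)):

{"step": 500, "eval_acc": "0.8301", "learning_rate": 1.994e-05, "loss": 0.5241}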
1 changed file with 12 additions and 3 deletions: examples/run_glue.py (+12, -3)
examples/run_glue.py
@@ -22,6 +22,7 @@ import glob
 import logging
 import os
 import random
+import json
 
 import numpy as np
 import torch
@@ -171,13 +172,21 @@ def train(args, train_dataset, model, tokenizer):
             if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
                 # Log metrics
+                logs = {'step': global_step}
                 if args.local_rank == -1 and args.evaluate_during_training:  # Only evaluate when single GPU otherwise metrics may not average well
                     results = evaluate(args, model, tokenizer)
                     for key, value in results.items():
-                        tb_writer.add_scalar('eval_{}'.format(key), value, global_step)
-                tb_writer.add_scalar('lr', scheduler.get_lr()[0], global_step)
-                tb_writer.add_scalar('loss', (tr_loss - logging_loss)/args.logging_steps, global_step)
+                        eval_key = 'eval_{}'.format(key)
+                        tb_writer.add_scalar(eval_key, value, global_step)
+                        logs[eval_key] = str(value)
                 logging_loss = tr_loss
+                loss_scalar = (tr_loss - logging_loss) / args.logging_steps
+                learning_rate_scalar = scheduler.get_lr()[0]
+                tb_writer.add_scalar('lr', learning_rate_scalar, global_step)
+                tb_writer.add_scalar('loss', loss_scalar, global_step)
+                logs['learning_rate'] = learning_rate_scalar
+                logs['loss'] = loss_scalar
+                print(json.dumps(logs))
 
             if args.local_rank in [-1, 0] and args.save_steps > 0 and global_step % args.save_steps == 0:
                 # Save model checkpoint
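For reference, a minimal self-contained sketch of the same Valohai-compatible logging pattern outside run_glue.py (the function name and values are illustrative, not part of the commit):

import json

def print_json_metrics(step, **metrics):
    # Emit one flat JSON object per line on stdout; Valohai-compatible
    # tooling can pick such lines up as execution metadata, and they
    # remain readable in a plain console log.
    logs = {'step': step}
    logs.update(metrics)
    print(json.dumps(logs))

print_json_metrics(500, loss=0.5241, learning_rate=1.994e-05)
# prints: {"step": 500, "loss": 0.5241, "learning_rate": 1.994e-05}

One caveat visible in the hunk above: the unchanged line logging_loss = tr_loss executes before the new loss_scalar = (tr_loss - logging_loss) / args.logging_steps, so as committed, loss_scalar (and the logged 'loss') evaluates to 0.0 at every logging step.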