chenpangpang / transformers, commit 649e9774

Authored Nov 04, 2018 by VictorSanh
parent d55c3ae8

Fix bug: train_batch_size not an int.

Division makes args.train_batch_size become a float. cc @thomwolf
Showing 2 changed files with 2 additions and 2 deletions:

- run_classifier.py (+1, -1)
- run_squad.py (+1, -1)
run_classifier.py

```diff
@@ -426,7 +426,7 @@ def main():
         raise ValueError("Invalid accumulate_gradients parameter: {}, should be >= 1".format(
             args.accumulate_gradients))
 
-    args.train_batch_size = args.train_batch_size / args.accumulate_gradients
+    args.train_batch_size = int(args.train_batch_size / args.accumulate_gradients)
 
     random.seed(args.seed)
     np.random.seed(args.seed)
```
run_squad.py

```diff
@@ -756,7 +756,7 @@ def main():
         raise ValueError("Invalid accumulate_gradients parameter: {}, should be >= 1".format(
             args.accumulate_gradients))
 
-    args.train_batch_size = args.train_batch_size / args.accumulate_gradients
+    args.train_batch_size = int(args.train_batch_size / args.accumulate_gradients)
 
     random.seed(args.seed)
     np.random.seed(args.seed)
```
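For context, a minimal sketch of why the change is needed (this snippet is illustrative, not code from the repository, though the variable names mirror the diff): in Python 3 the `/` operator is true division and always returns a float, even when the operands divide evenly, so `args.train_batch_size` silently became a float after dividing by `args.accumulate_gradients`. Wrapping the division in `int()` restores the integer type that downstream consumers of a batch size expect.

```python
# Illustrative sketch (not from the repository) of the bug this commit fixes.
# In Python 3, "/" is true division and returns a float even for an exact
# quotient; code that later passes train_batch_size to something expecting
# an int (e.g. range() or a DataLoader's batch_size) then breaks.

train_batch_size = 32
accumulate_gradients = 4

buggy = train_batch_size / accumulate_gradients        # true division -> 8.0 (float)
fixed = int(train_batch_size / accumulate_gradients)   # the commit's fix -> 8 (int)

print(type(buggy).__name__, buggy)   # float 8.0
print(type(fixed).__name__, fixed)   # int 8
```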