OpenDAS / Megatron-LM · Commits

Commit 113c6362, authored May 18, 2021 by Jared Casper

Add temporary assert to finetuning until it can be fixed.

parent 42c1cf42
Showing 1 changed file (tasks/finetune_utils.py) with 3 additions and 1 deletion.
--- a/tasks/finetune_utils.py
+++ b/tasks/finetune_utils.py
@@ -19,7 +19,7 @@ from functools import partial
 import torch

-from megatron import get_args
+from megatron import get_args, get_num_microbatches
 from megatron import print_rank_0
 from megatron import get_timers
 from megatron import mpu
@@ -154,6 +154,8 @@ def _train(model, optimizer, lr_scheduler, forward_step,
     args = get_args()
     timers = get_timers()

+    assert get_num_microbatches() == 1, \
+        "finetuning with gradient accumulation doesn't currently work"
     # Turn on training mode which enables dropout.
     for m in model:
         m.train()
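For context, the added assert makes finetuning fail fast whenever gradient accumulation is configured (more than one microbatch), presumably because the finetuning loop in _train() was not written to accumulate gradients before each optimizer step. Below is a minimal sketch of the same guard pattern outside Megatron; the train_one_epoch function, its num_microbatches parameter, and the toy loop are illustrative stand-ins and not Megatron's actual _train() implementation, where get_num_microbatches() supplies the value instead.

import torch


def train_one_epoch(model, optimizer, data_loader, loss_fn, num_microbatches=1):
    """Toy finetuning loop illustrating the guard added in this commit.

    `num_microbatches` stands in for Megatron's get_num_microbatches().
    The loop below performs one optimizer step per batch, so it cannot
    handle gradient accumulation; like the assert in _train(), it refuses
    to run with more than one microbatch rather than training incorrectly.
    """
    assert num_microbatches == 1, \
        "finetuning with gradient accumulation doesn't currently work"

    model.train()  # enable dropout, as in the original loop
    for inputs, targets in data_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

Once accumulation is actually supported, a loop like this would typically scale the loss by 1 / num_microbatches and call optimizer.step() only every num_microbatches iterations, at which point the temporary assert could be removed.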