chenpangpang / transformers · Commit 68f7064a

Commit 68f7064a, authored Nov 04, 2019 by Lysandre
Parent: c8f27121

Add `model.train()` line to ReadMe training example

Co-Authored-By: Santosh-Gupta <San.Gupta.ML@gmail.com>
Showing 1 changed file with 1 addition and 0 deletions.

README.md (+1, -0)
@@ -538,6 +538,7 @@ optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce
 scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_total_steps)  # PyTorch scheduler
 ### and used like this:
 for batch in train_data:
+    model.train()
     loss = model(batch)
     loss.backward()
     torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
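The hunk above only shows the README fragment around the changed line. Below is a minimal, self-contained sketch of the training loop that fragment describes, with the added `model.train()` call in place. The toy model, dummy batches, and hyperparameter values are placeholders invented for illustration, and the import path assumes the library as of this commit (later releases replaced `WarmupLinearSchedule` with `get_linear_schedule_with_warmup`); only the optimizer/scheduler/loop structure is taken from the README.

```python
# Minimal sketch, assuming the transformers API of this commit's era
# (AdamW and WarmupLinearSchedule importable from the top-level package).
# ToyModel, train_data, and the hyperparameter values are placeholders,
# not part of the README.
import torch
from transformers import AdamW, WarmupLinearSchedule

class ToyModel(torch.nn.Module):
    """Stand-in for a pretrained model whose forward(batch) returns a scalar loss."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 1)

    def forward(self, batch):
        return self.linear(batch).pow(2).mean()

model = ToyModel()
train_data = [torch.randn(4, 10) for _ in range(8)]  # dummy batches

lr = 1e-5
num_warmup_steps, num_total_steps = 2, 8
max_grad_norm = 1.0

optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False)  # optimizer as in the README example
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_total_steps)

for batch in train_data:
    model.train()  # the line this commit adds: puts dropout and similar layers in training mode
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # clipping is no longer inside AdamW
    optimizer.step()       # assumed to follow in the part of the README outside this hunk
    scheduler.step()
    optimizer.zero_grad()
```

The point of the added line is to ensure dropout and similar layers are active during training, since a freshly loaded pretrained model may otherwise still be in evaluation mode.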