OpenDAS / apex · Commits

Commit ca35aa79, authored Jun 24, 2019 by Michael Carilli

Updating gradient accumulation guidance
parent f29b3f8d

Showing 1 changed file with 5 additions and 0 deletions:

docs/source/advanced.rst (+5, -0)
@@ -145,6 +145,11 @@ Gradient accumulation across iterations
 The following should "just work," and properly accommodate multiple models/optimizers/losses, as well as
 gradient clipping via the `instructions above`_::

+    # If your intent is to simulate a larger batch size using gradient accumulation,
+    # you can divide the loss by the number of accumulation iterations (so that gradients
+    # will be averaged over that many iterations):
+    loss = loss / iters_to_accumulate
+
     if iter % iters_to_accumulate == 0:
         # Every iters_to_accumulate iterations, unscale and step
         with amp.scale_loss(loss, optimizer) as scaled_loss:
...
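For context, here is roughly how the divided loss added in this commit fits into a full accumulation loop with apex's amp API. This is a minimal sketch, not part of the commit: the toy model, optimizer, synthetic data, and the window size ``iters_to_accumulate = 4`` are illustrative assumptions, and the ``delay_unscale=True`` branch follows the accumulation pattern from apex's amp documentation::

    import torch
    from apex import amp  # NVIDIA apex must be installed; opt_level "O1" needs a CUDA device

    # Toy setup, purely illustrative.
    model = torch.nn.Linear(10, 2).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    iters_to_accumulate = 4  # assumed accumulation window

    for iter in range(1, 101):  # start at 1 so each step lands after a full window
        input = torch.randn(8, 10).cuda()          # synthetic batch
        target = torch.randint(0, 2, (8,)).cuda()

        loss = criterion(model(input), target)
        # Divide by the window size so accumulated gradients are averaged,
        # simulating a batch 4x larger (the guidance this commit adds).
        loss = loss / iters_to_accumulate

        if iter % iters_to_accumulate == 0:
            # Every iters_to_accumulate iterations, unscale and step.
            with amp.scale_loss(loss, optimizer) as scaled_loss:
                scaled_loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        else:
            # Otherwise accumulate scaled gradients; don't unscale or step yet.
            with amp.scale_loss(loss, optimizer, delay_unscale=True) as scaled_loss:
                scaled_loss.backward()

The division keeps the effective gradient an average over the window rather than a sum, so the learning rate does not need to be retuned when the accumulation count changes.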