chenpangpang / transformers
"ppocr/git@developer.sourcefind.cn:wangsen/paddle_dbnet.git" did not exist on "21fca149b2be224337ff36f4df0611804d7ad42a"
Commit 9e95cd8c authored Nov 09, 2018 by thomwolf

clean up optimizer from unused functions

parent 34a1a010
Showing 1 changed file with 0 additions and 21 deletions.
optimization.py (+0, -21)

@@ -90,27 +90,6 @@ class BERTAdam(Optimizer):
...
                 lr.append(lr_scheduled)
         return lr

-    def to(self, device):
-        """ Move the optimizer state to a specified device"""
-        for state in self.state.values():
-            state['exp_avg'].to(device)
-            state['exp_avg_sq'].to(device)
-
-    def initialize_step(self, initial_step):
-        """Initialize state with a defined step (but we don't have stored averaged).
-        Arguments:
-            initial_step (int): Initial step number.
-        """
-        for group in self.param_groups:
-            for p in group['params']:
-                state = self.state[p]
-                # State initialization
-                state['step'] = initial_step
-                # Exponential moving average of gradient values
-                state['exp_avg'] = torch.zeros_like(p.data)
-                # Exponential moving average of squared gradient values
-                state['exp_avg_sq'] = torch.zeros_like(p.data)
-
     def step(self, closure=None):
         """Performs a single optimization step.
...
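Beyond being unused, the removed `to` helper also had a subtle flaw worth noting: `torch.Tensor.to(device)` is not in-place; it returns a new tensor, so calling `state['exp_avg'].to(device)` without reassigning the result leaves the stored tensor where it was. The sketch below demonstrates the pitfall without requiring PyTorch, using an illustrative `FakeTensor` stand-in (not part of this repository) that mimics the return-a-new-object semantics of `Tensor.to`:

```python
class FakeTensor:
    """Illustrative stand-in mimicking torch.Tensor.to(),
    which returns a NEW tensor rather than modifying in place."""

    def __init__(self, device):
        self.device = device

    def to(self, device):
        # Like torch.Tensor.to: not in-place, returns a new object.
        return FakeTensor(device)


state = {'exp_avg': FakeTensor('cpu')}

# Pattern from the removed helper: the returned tensor is discarded.
state['exp_avg'].to('cuda')
print(state['exp_avg'].device)  # still 'cpu'

# Correct pattern: reassign the returned tensor into the state dict.
state['exp_avg'] = state['exp_avg'].to('cuda')
print(state['exp_avg'].device)  # now 'cuda'
```

With real PyTorch optimizers, the usual way to get state onto the right device is to reassign each state tensor (`state[k] = state[k].to(device)`) or to reload a `state_dict` after moving the model.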