ModelZoo / ResNet50_tensorflow · Commits

Commit bac0cbdf, authored Aug 09, 2022 by Chen Qian, committed by A. Unique TensorFlower on Aug 09, 2022

Internal change

PiperOrigin-RevId: 466508813

parent 549d620f
Changes: 4

Showing 4 changed files with 4 additions and 4 deletions (+4 / -4)
official/modeling/optimization/ema_optimizer.py               +1 -1
official/modeling/optimization/lars_optimizer.py              +1 -1
official/modeling/optimization/legacy_adamw.py                +1 -1
official/vision/beta/projects/yolo/optimization/sgd_torch.py  +1 -1
official/modeling/optimization/ema_optimizer.py

...
@@ -21,7 +21,7 @@ import tensorflow as tf
 # pylint: disable=protected-access
-class ExponentialMovingAverage(tf.keras.optimizers.Optimizer):
+class ExponentialMovingAverage(tf.keras.optimizers.legacy.Optimizer):
   """Optimizer that computes an exponential moving average of the variables.
   Empirically it has been found that using the moving average of the trained
...
official/modeling/optimization/lars_optimizer.py

...
@@ -22,7 +22,7 @@ import tensorflow as tf
 # pylint: disable=protected-access
-class LARS(tf.keras.optimizers.Optimizer):
+class LARS(tf.keras.optimizers.legacy.Optimizer):
   """Layer-wise Adaptive Rate Scaling for large batch training.
   Introduced by "Large Batch Training of Convolutional Networks" by Y. You,
...
official/modeling/optimization/legacy_adamw.py

...
@@ -20,7 +20,7 @@ from absl import logging
 import tensorflow as tf
-class AdamWeightDecay(tf.keras.optimizers.Adam):
+class AdamWeightDecay(tf.keras.optimizers.legacy.Adam):
   """Adam enables L2 weight decay and clip_by_global_norm on gradients.
   [Warning!]: Keras optimizer supports gradient clipping and has an AdamW
...
official/vision/beta/projects/yolo/optimization/sgd_torch.py

...
@@ -43,7 +43,7 @@ def _var_key(var):
   return var._unique_id
-class SGDTorch(tf.keras.optimizers.Optimizer):
+class SGDTorch(tf.keras.optimizers.legacy.Optimizer):
   """Optimizer that simulates the SGD module used in pytorch.
...
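The edit is the same one-liner in all four files: each custom optimizer now inherits from the explicit tf.keras.optimizers.legacy namespace (legacy.Optimizer / legacy.Adam) instead of the top-level classes, which newer Keras releases re-point at a rewritten optimizer implementation with a different subclassing API. As a rough illustration only (not part of this commit), here is a minimal sketch of the subclassing pattern these files rely on, assuming a TensorFlow release that exposes tf.keras.optimizers.legacy; the class name ScaledSGD and its hyperparameters are hypothetical.

import tensorflow as tf


# Hypothetical custom optimizer, shown only to illustrate the base-class
# switch in this commit; it is not taken from the repository.
class ScaledSGD(tf.keras.optimizers.legacy.Optimizer):
  """Toy SGD built on the legacy (pre-rewrite) Optimizer hooks."""

  def __init__(self, learning_rate=0.01, name="ScaledSGD", **kwargs):
    super().__init__(name, **kwargs)
    self._set_hyper("learning_rate", learning_rate)

  def _resource_apply_dense(self, grad, var, apply_state=None):
    # Plain SGD update: var <- var - lr * grad.
    lr = self._get_hyper("learning_rate", var.dtype)
    return var.assign_sub(lr * grad)

  def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
    lr = self._get_hyper("learning_rate", var.dtype)
    return self._resource_scatter_add(var, indices, -lr * grad)

  def get_config(self):
    config = super().get_config()
    config.update(
        {"learning_rate": self._serialize_hyperparameter("learning_rate")})
    return config

The legacy base class keeps the old hook names used above (_resource_apply_dense, _set_hyper, and so on), which the rewritten tf.keras.optimizers.Optimizer no longer provides, so pinning these subclasses to the legacy namespace preserves their existing behavior; the commit message itself only says "Internal change".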