ModelZoo / ResNet50_tensorflow

Commit 7f0ee4cb
Authored May 08, 2021 by Yuexin Wu; committed by A. Unique TensorFlower, May 08, 2021

Correct description typos.

PiperOrigin-RevId: 372738818
Parent: afe4802e
Changes: 3 files, 6 additions and 6 deletions

  official/modeling/optimization/configs/learning_rate_config.py  (+2 -2)
  official/modeling/optimization/configs/optimization_config.py   (+2 -2)
  official/modeling/optimization/lr_schedule.py                   (+2 -2)
official/modeling/optimization/configs/learning_rate_config.py

@@ -154,11 +154,11 @@ class PowerAndLinearDecayLrConfig(base_config.Config):
   1) offset_step < 0, the actual learning rate equals initial_learning_rate.
   2) offset_step <= total_decay_steps * (1 - linear_decay_fraction), the
      actual learning rate equals lr * offset_step^power.
-  3) total_decay_steps * (1 - linear_decay_fraction) < offset_step <
+  3) total_decay_steps * (1 - linear_decay_fraction) <= offset_step <
      total_decay_steps, the actual learning rate equals lr * offset_step^power *
      (total_decay_steps - offset_step) / (total_decay_steps *
      linear_decay_fraction).
-  4) offset_step > total_decay_steps, the actual learning rate equals zero.
+  4) offset_step >= total_decay_steps, the actual learning rate equals zero.
   Attributes:
     name: The name of the learning rate schedule. Defaults to
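The inequality fixes are more than cosmetic: the old wording left offset_step == total_decay_steps covered by no case. For reference, here is a minimal plain-Python sketch of the piecewise rule the corrected docstring describes (the function name and scalar signature are hypothetical; the real schedule in lr_schedule.py operates on TF tensors):

def power_and_linear_decay_lr(offset_step, initial_learning_rate,
                              total_decay_steps, power,
                              linear_decay_fraction):
  """Scalar sketch of the four cases in the corrected docstring."""
  lr = initial_learning_rate
  linear_start = total_decay_steps * (1.0 - linear_decay_fraction)
  if offset_step < 0:                   # case 1: schedule not started yet
    return lr
  if offset_step <= linear_start:       # case 2: pure power decay
    return lr * offset_step**power
  if offset_step < total_decay_steps:   # case 3: power decay times linear ramp
    return (lr * offset_step**power *
            (total_decay_steps - offset_step) /
            (total_decay_steps * linear_decay_fraction))
  return 0.0                            # case 4: fully decayed to zero

Note that at offset_step == total_decay_steps * (1 - linear_decay_fraction) the case-2 and case-3 formulas give the same value, so the overlapping "<=" bounds introduced by this commit are mutually consistent.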
official/modeling/optimization/configs/optimization_config.py

@@ -57,7 +57,7 @@ class LrConfig(oneof.OneOfConfig):
   """Configuration for lr schedule.
   Attributes:
-    type: 'str', type of lr schedule to be used, on the of fields below.
+    type: 'str', type of lr schedule to be used, one of the fields below.
     constant: constant learning rate config.
     stepwise: stepwise learning rate config.
     exponential: exponential learning rate config.

@@ -86,7 +86,7 @@ class WarmupConfig(oneof.OneOfConfig):
   """Configuration for lr schedule.
   Attributes:
-    type: 'str', type of warmup schedule to be used, on the of fields below.
+    type: 'str', type of warmup schedule to be used, one of the fields below.
     linear: linear warmup config.
     polynomial: polynomial warmup config.
   """
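Both classes extend OneOfConfig, where the string 'type' field names which single sibling field is active and all others are ignored. A hedged illustration of what a matching config might look like as a Python dict (the inner field names such as boundaries, values, and warmup_steps are assumptions about the neighboring configs, not taken from this diff):

# Hypothetical config; 'type' selects which sibling field is read.
optimization = {
    'learning_rate': {
        'type': 'stepwise',                       # one of the fields below
        'stepwise': {'boundaries': [10000, 20000],
                     'values': [0.1, 0.01, 0.001]},
    },
    'warmup': {
        'type': 'linear',                         # one of the fields below
        'linear': {'warmup_steps': 500},
    },
}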
official/modeling/optimization/lr_schedule.py

@@ -205,11 +205,11 @@ class PowerAndLinearDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
   1) offset_step < 0, the actual learning rate equals initial_learning_rate.
   2) offset_step <= total_decay_steps * (1 - linear_decay_fraction), the
      actual learning rate equals lr * offset_step^power.
-  3) total_decay_steps * (1 - linear_decay_fraction) < offset_step <
+  3) total_decay_steps * (1 - linear_decay_fraction) <= offset_step <
      total_decay_steps, the actual learning rate equals lr * offset_step^power *
      (total_decay_steps - offset_step) / (total_decay_steps *
      linear_decay_fraction).
-  4) offset_step > total_decay_steps, the actual learning rate equals zero.
+  4) offset_step >= total_decay_steps, the actual learning rate equals zero.
   """

   def __init__(self,
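PowerAndLinearDecay is a tf.keras LearningRateSchedule, so once constructed it can be called with a step count or handed directly to an optimizer. A usage sketch, assuming the constructor takes the attribute names listed in the docstring (treat the exact signature as an assumption and verify it in lr_schedule.py):

import tensorflow as tf
from official.modeling.optimization import lr_schedule

# Assumed constructor arguments; names mirror the docstring attributes.
schedule = lr_schedule.PowerAndLinearDecay(
    initial_learning_rate=1.0,
    total_decay_steps=10000,
    power=-0.5,
    linear_decay_fraction=0.1)

print(float(schedule(5000)))   # learning rate at a given step
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)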