Commit e6cda015 authored by Reed Wanderman-Milne, committed by A. Unique TensorFlower

Use nonexperimental mixed precision API for official models.

For every modified call to set_mixed_precision_policy(), the loss_scale argument was removed, since it cannot be passed when the non-experimental API is used. In all of these callers, loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.

Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior, and all isinstance checks within the official models already check for the non-experimental version.
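Since the non-experimental policy no longer carries a loss scale, dynamic loss scaling lives entirely in the optimizer wrapper. As a rough illustration of the behavior the commit relies on, here is a minimal plain-Python sketch of the dynamic loss-scaling loop; this is an assumed simplification for clarity, not the actual Keras LossScaleOptimizer implementation, and all names in it are hypothetical:

```python
import math


class DynamicLossScaler:
    """Sketch of dynamic loss scaling (illustrative, not the Keras class).

    The loss is multiplied by `scale` before backprop; if any resulting
    gradient is non-finite, the step is skipped and the scale is halved.
    After `growth_interval` consecutive finite steps, the scale doubles.
    """

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, scaled_grads):
        """Return (unscaled_grads or None, step_was_applied)."""
        if any(not math.isfinite(g) for g in scaled_grads):
            self.scale /= 2.0          # overflow: halve the scale, skip step
            self._good_steps = 0
            return None, False
        unscaled = [g / self.scale for g in scaled_grads]
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0          # stable: try a larger scale
            self._good_steps = 0
        return unscaled, True
```

This is why dropping loss_scale from the policy is safe: the scaling state is owned by the optimizer wrapper that the callers construct explicitly.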

PiperOrigin-RevId: 368101975
parent b71df5b1
@@ -91,9 +91,7 @@ def run(flags_obj):
   dtype = flags_core.get_tf_dtype(flags_obj)
   performance.set_mixed_precision_policy(
-      flags_core.get_tf_dtype(flags_obj),
-      flags_core.get_loss_scale(flags_obj, default_for_fp16=128),
-      use_experimental_api=True)
+      flags_core.get_tf_dtype(flags_obj))
   data_format = flags_obj.data_format
   if data_format is None:
@@ -196,16 +194,11 @@ def run(flags_obj):
           decay_rate=flags_obj.lr_decay_factor,
           staircase=True),
       momentum=0.9)
-  if flags_obj.fp16_implementation == 'graph_rewrite':
-    # Note: when flags_obj.fp16_implementation == "graph_rewrite", dtype as
-    # determined by flags_core.get_tf_dtype(flags_obj) would be 'float32'
-    # which will ensure tf.compat.v2.keras.mixed_precision and
-    # tf.train.experimental.enable_mixed_precision_graph_rewrite do not double
-    # up.
-    optimizer = (
-        tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite(
-            optimizer))
+  optimizer = performance.configure_optimizer(
+      optimizer,
+      use_float16=flags_core.get_tf_dtype(flags_obj) == tf.float16,
+      use_graph_rewrite=flags_obj.fp16_implementation == 'graph_rewrite',
+      loss_scale=flags_core.get_loss_scale(flags_obj, default_for_fp16=128))
   # TODO(hongkuny): Remove trivial model usage and move it to benchmark.
   if flags_obj.use_trivial_model:
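The hunk above funnels the loss-scale decision into the official models' performance.configure_optimizer helper. A hedged, plain-Python sketch of what such a helper plausibly does with the arguments shown in the diff (the stub class and the body are hypothetical; only the function name and keyword arguments come from the diff):

```python
class LossScaleOptimizerStub:
    """Stand-in for tf.keras.mixed_precision.LossScaleOptimizer (illustrative only)."""

    def __init__(self, inner, dynamic=True, initial_scale=None):
        self.inner = inner
        self.dynamic = dynamic
        self.initial_scale = initial_scale


def configure_optimizer(optimizer, use_float16=False, loss_scale=None):
    """Hypothetical sketch: since the policy no longer accepts a loss_scale,
    float16 runs get an explicit loss-scale wrapper around the optimizer."""
    if use_float16:
        if loss_scale in (None, 'dynamic'):
            # No fixed scale requested: use dynamic loss scaling.
            return LossScaleOptimizerStub(optimizer, dynamic=True)
        # Fixed loss scale (e.g. 128): disable dynamic adjustment.
        return LossScaleOptimizerStub(
            optimizer, dynamic=False, initial_scale=loss_scale)
    # Non-float16 runs need no wrapping.
    return optimizer
```

Because the isinstance checks elsewhere in the official models target the non-experimental LossScaleOptimizer, a wrapper created this way continues to satisfy them.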