Unverified Commit 31d38900 authored by josh11b, committed by GitHub

Merge pull request #5598 from tensorflow/josh11b-patch-1

AllReduceCrossTowerOps -> AllReduceCrossDeviceOps
parents 48a4b443 c5dbd487
@@ -27,8 +27,9 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   Args:
     num_gpus: Number of GPUs to run this model.
     all_reduce_alg: Specify which algorithm to use when performing all-reduce.
-      See tf.contrib.distribute.AllReduceCrossTowerOps for available algorithms.
-      If None, DistributionStrategy will choose based on device topology.
+      See tf.contrib.distribute.AllReduceCrossDeviceOps for available
+      algorithms. If None, DistributionStrategy will choose based on device
+      topology.
 
   Returns:
     tf.contrib.distribute.DistibutionStrategy object.
@@ -41,7 +42,7 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   if all_reduce_alg:
     return tf.contrib.distribute.MirroredStrategy(
         num_gpus=num_gpus,
-        cross_tower_ops=tf.contrib.distribute.AllReduceCrossTowerOps(
+        cross_tower_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
            all_reduce_alg, num_packs=2))
   else:
     return tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
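For reference, the renamed class is constructed exactly as before. Below is a minimal usage sketch in Python, assuming a TF 1.x environment where tf.contrib.distribute is available; the "nccl" algorithm string and num_gpus=2 are illustrative choices, not part of this change:

import tensorflow as tf

# Mirror the updated call above: build a MirroredStrategy whose all-reduce is
# handled by the renamed AllReduceCrossDeviceOps class.
# "nccl" is an illustrative algorithm name; num_packs=2 matches the diff above.
strategy = tf.contrib.distribute.MirroredStrategy(
    num_gpus=2,
    cross_tower_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
        "nccl", num_packs=2))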