ModelZoo / ResNet50_tensorflow · Commits

Unverified commit 31d38900, authored Oct 25, 2018 by josh11b, committed by GitHub on Oct 25, 2018.

Merge pull request #5598 from tensorflow/josh11b-patch-1

AllReduceCrossTowerOps -> AllReduceCrossDeviceOps

Parents: 48a4b443, c5dbd487
Showing 1 changed file with 4 additions and 3 deletions.
official/utils/misc/distribution_utils.py @ 31d38900 (+4, −3)

@@ -27,8 +27,9 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   Args:
     num_gpus: Number of GPUs to run this model.
     all_reduce_alg: Specify which algorithm to use when performing all-reduce.
-      See tf.contrib.distribute.AllReduceCrossTowerOps for available algorithms.
-      If None, DistributionStrategy will choose based on device topology.
+      See tf.contrib.distribute.AllReduceCrossDeviceOps for available
+      algorithms. If None, DistributionStrategy will choose based on device
+      topology.
 
   Returns:
     tf.contrib.distribute.DistibutionStrategy object.
@@ -41,7 +42,7 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   if all_reduce_alg:
     return tf.contrib.distribute.MirroredStrategy(
         num_gpus=num_gpus,
-        cross_tower_ops=tf.contrib.distribute.AllReduceCrossTowerOps(
+        cross_tower_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
             all_reduce_alg, num_packs=2))
   else:
     return tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
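For context, a minimal usage sketch of the patched helper; this is illustrative and not part of the commit. It assumes TensorFlow 1.x (where tf.contrib.distribute exists), that "nccl" is one of the algorithm names AllReduceCrossDeviceOps accepts (check the class docstring for your release), and that the strategy is consumed through tf.estimator.RunConfig's train_distribute argument, as the Estimator-based official models did at the time.

import tensorflow as tf  # TF 1.x; tf.contrib was removed in TF 2.x

from official.utils.misc import distribution_utils

# Ask the helper for a MirroredStrategy with an explicit all-reduce
# implementation. Passing all_reduce_alg routes through the renamed
# tf.contrib.distribute.AllReduceCrossDeviceOps with num_packs=2;
# passing None lets DistributionStrategy choose based on device topology.
strategy = distribution_utils.get_distribution_strategy(
    num_gpus=2, all_reduce_alg="nccl")

# Hand the strategy to an Estimator via RunConfig; train_distribute is
# the TF 1.x hook for distributed training with Estimators.
run_config = tf.estimator.RunConfig(train_distribute=strategy)

Only the class name changes in this commit; the constructor arguments (the algorithm string and num_packs=2) are passed through exactly as before.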